NETBLT: a high throughput transport protocol
1987, ACM SIGCOMM Computer Communication Review
Related papers
Elsevier eBooks, 2009
In this paper we propose a sender-side modification to TCP to accommodate small network buffers. We exploit the fact that the manner in which network buffers are provisioned is intimately related to the manner in which TCP operates. However, rather than designing buffers to accommodate the TCP AIMD algorithm, as is the traditional approach in network design, we suggest simple modifications to the AIMD algorithm to accommodate buffers of any size in the network. We demonstrate that networks with small buffers can be designed to transport TCP traffic efficiently while retaining fairness and friendliness with standard TCP traffic.
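As a rough illustration of the design space this abstract describes, the sketch below shows a generic AIMD congestion-window update in which the multiplicative backoff factor is a tunable parameter; a milder backoff than Reno's 0.5 is one way a sender can stay efficient when router buffers are much smaller than the bandwidth-delay product. The parameter values are illustrative assumptions, not the specific rule proposed in the paper.

```python
# Hedged sketch: generic AIMD congestion-window update with a tunable
# backoff factor. In standard TCP Reno, ALPHA = 1 segment per RTT and
# BETA = 0.5; the idea sketched here (not the paper's exact rule) is that
# a BETA closer to 1 keeps the link busy after a loss even when network
# buffers are small.

def aimd_update(cwnd: float, loss_detected: bool,
                alpha: float = 1.0, beta: float = 0.5) -> float:
    """Return the new congestion window (in segments) after one RTT."""
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase


# Example: with a small router buffer, a milder backoff (beta = 0.8)
# leaves more packets in flight after a loss event.
cwnd = 10.0
for rtt, loss in enumerate([False, False, True, False, False]):
    cwnd = aimd_update(cwnd, loss, beta=0.8)
    print(f"RTT {rtt}: cwnd = {cwnd:.1f}")
```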
IEEE/ACM Transactions on Networking, 2000
Many emerging scientific and industrial applications require transferring multiple Tbytes of data on a daily basis. Examples include pushing scientific data from particle accelerators/colliders to laboratories around the world, synchronizing data-centers across continents, and replicating collections of high definition videos from events taking place in different time-zones. A key property of all the above applications is their ability to tolerate delivery delays ranging from a few hours to a few days. Such Delay Tolerant Bulk (DTB) data are currently being serviced mostly by the postal system using hard drives and DVDs, or by expensive dedicated networks. In this work we propose transmitting such data through commercial ISPs by taking advantage of already-paid-for off-peak bandwidth resulting from diurnal traffic patterns and percentile pricing. We show that between sender-receiver pairs with small time-zone difference, simple source scheduling policies are able to take advantage of most of the existing off-peak capacity. When the time-zone difference increases, taking advantage of the full capacity requires performing store-and-forward through intermediate storage nodes. We present an extensive evaluation of the two options based on traffic data from 200+ links of a large transit provider with PoPs on three continents. Our results indicate that there exists huge potential for performing multi-Tbyte transfers on a daily basis at little or no additional cost.
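The source-scheduling idea can be pictured as a policy that only releases bulk data during the sender's local off-peak window, so the traffic rides on capacity that percentile billing has already paid for. The window bounds and spare rate below are assumptions for illustration, not values taken from the paper's measurements.

```python
# Hedged sketch of a simple source-scheduling policy for delay-tolerant
# bulk data: transmit only during a local off-peak window. All constants
# are illustrative assumptions.

from datetime import datetime

OFF_PEAK_START_HOUR = 1    # 01:00 local time (assumed)
OFF_PEAK_END_HOUR = 7      # 07:00 local time (assumed)
OFF_PEAK_RATE_BPS = 5e9    # spare capacity available off-peak (assumed)

def in_off_peak(now: datetime) -> bool:
    return OFF_PEAK_START_HOUR <= now.hour < OFF_PEAK_END_HOUR

def bytes_sendable(now: datetime, interval_s: float) -> float:
    """Bytes the source may inject during the next interval."""
    return OFF_PEAK_RATE_BPS / 8 * interval_s if in_off_peak(now) else 0.0

# When sender and receiver sit in distant time zones their off-peak
# windows barely overlap; store-and-forward through an intermediate
# storage node lets each hop use its own local off-peak window.
```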
2001
Internet services like the World Wide Web and multimedia applications like News- and Video-on-Demand have become very popular over the last few years. Since a high and rapidly increasing number of users retrieve multimedia data at high data rates, the data servers can represent a severe bottleneck. Traditional time- and resource-consuming operations, like memory copy operations, limit the number of concurrent streams that can be transmitted from the server, for two reasons: (1) memory space is wasted holding identical data copies in different address spaces; and (2) a lot of CPU resources are used on copy operations. To avoid this bottleneck and make memory and CPU resources available for other tasks, i.e., more concurrent clients, we have implemented a zero-copy data path through the communication protocols to support high-speed network communication, based on UVM. In this paper, we describe the implementation and evaluation of the zero-copy protocol mechanism, and we show the potential for substantial performance improvement when moving data through the communication system without any copy operations.
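A minimal sketch of the general zero-copy idea follows, using the kernel sendfile path that Python exposes through socket.sendfile(); this is a generic illustration of avoiding user-space copies, not the UVM-based data path implemented in the paper. The file path and port are placeholders.

```python
# Hedged sketch: serving a file to a client without copying it through
# user space. The kernel moves pages from the page cache straight to the
# socket, so the data never crosses into this process's buffers.

import socket

def serve_file_zero_copy(path: str, port: int = 9000) -> None:
    with socket.create_server(("", port)) as srv:
        conn, _addr = srv.accept()
        with conn, open(path, "rb") as f:
            sent = conn.sendfile(f)   # uses sendfile(2) where available
            print(f"sent {sent} bytes without a user-space copy")
```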
1989
The emergence of performance-intensive distributed applications is making new demands on computer networks. Distributed applications access computing resources and information available at multiple computers located across a network. Realizing such distributed applications over geographically distant areas requires access to predictable and high performance communication. Circuit switching and packet switching are two possible techniques for providing high performance communication. Circuit switched networks preallocate resources to individual sources of traffic before any traffic is sent, whereas packet switched networks allocate resources dynamically as the traffic travels through the network. The advantage of circuit switching lies in guaranteed performance due to reserved capacity, but network capacity is wasted when circuits are idle. Packet switched networks have been preferred in data networks due to their lower cost and efficient utilization of network resources. However, the major limitation of current packet switched networks lies in their inability to provide predictable performance. This dissertation proposes a new architecture for providing predictable high performance in high speed packet switched networks. The architecture combines the advantages of circuit switching and packet switching by providing two services: datagrams and flows. The datagram service supports best-effort delivery of traffic. The main liability of a datagram service lies in congestion. To avoid congestion, the architecture uses a novel, rate-based congestion control scheme. To support [CY88]
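The rate-based control mentioned here can be illustrated with a token bucket that paces packet injection at a negotiated rate rather than reacting to packet loss. The rate and bucket depth below are illustrative assumptions, not the dissertation's concrete scheme.

```python
# Hedged sketch of rate-based control: a token bucket that admits at most
# `rate_pps` packets per second with bursts bounded by `depth`.

import time

class TokenBucket:
    def __init__(self, rate_pps: float, depth: int):
        self.rate = rate_pps          # tokens added per second
        self.depth = depth            # maximum burst size in packets
        self.tokens = float(depth)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise the caller must wait."""
        now = time.monotonic()
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```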
arXiv (Cornell University), 2021
The Internet Protocol, past, some current limitations and a glimpse of a possible future
The mismatch between the services offered by the two standard transport protocols in the Internet, TCP and UDP, and the services required by distributed multimedia applications has led to the development of a large number of partially reliable transport protocols. That is, protocols which in terms of reliability places themselves between TCP and UDP. This paper presents a taxonomy for retransmission based, partially reliable transport protocols, i.e., the subclass of partially reliable transport protocols that performs error recovery through retransmissions. The taxonomy comprises two classification schemes: one that classifies retransmission based, partially reliable transport protocols with respect to the reliability service they offer and one that classifies them with respect to their error control scheme. The objective of our taxonomy is fourfold: to introduce a unified terminology; to provide a framework in which retransmission based, partially reliable transport protocols can be examined, compared, and contrasted; to make explicit the error control schemes used by these protocols; and, finally, to gain new insights into these protocols and thereby suggest avenues for future research. Based on our taxonomy, a survey was made of existing retransmission based, partially reliable transport protocols. The survey shows how protocols are categorized according to our taxonomy, and exemplifies the majority of reliability services and error control schemes detailed in our taxonomy.
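One concrete point in the error-control design space such a taxonomy covers is a bounded-retransmission policy: a segment is retransmitted at most k times before the sender gives up on it, placing the service between UDP (k = 0) and fully reliable TCP (k unbounded). The timeout value, retransmission limit, and helper names below are illustrative assumptions, not a protocol from the survey.

```python
# Hedged sketch of a k-retransmission (partially reliable) policy.
# MAX_RETX and RTO_SECONDS are assumed values; `send` is an assumed
# callable that puts a segment on the wire.

import time
from dataclasses import dataclass, field

MAX_RETX = 2          # give up after this many retransmissions (assumed)
RTO_SECONDS = 0.2     # fixed retransmission timeout (assumed)

@dataclass
class PendingSegment:
    seq: int
    payload: bytes
    retx: int = 0
    deadline: float = field(
        default_factory=lambda: time.monotonic() + RTO_SECONDS)

def on_timeout(seg: PendingSegment, send) -> bool:
    """Retransmit or abandon an unacknowledged segment.

    Returns True while the segment is still being recovered, False once
    the sender declares it expendable (partial reliability).
    """
    if seg.retx < MAX_RETX:
        seg.retx += 1
        seg.deadline = time.monotonic() + RTO_SECONDS
        send(seg.seq, seg.payload)
        return True
    return False   # deliver the stream with this segment missing
```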
A formal set of rules and conventions governing the format and relative time of message exchange among two or more communication terminals.
Proceedings of the Third International Workshop on Network-Aware Data Management - NDM '13, 2013
Due to a number of recent technology developments, now is the right time to reexamine the use of TCP for very large data transfers. These developments include the deployment of 100 Gigabit per second (Gbps) network backbones, hosts that can easily manage data transfers of 40 Gbps and higher, the Science DMZ model, the availability of virtual circuit technology, and wide-area Remote Direct Memory Access (RDMA) protocols. In this paper we show that RDMA works well over wide-area virtual circuits and uses much less CPU than TCP or UDP. We also characterize the limitations of RDMA in the presence of other traffic, including competing RDMA flows. We conclude that RDMA for Science DMZ to Science DMZ transfers of massive data is a viable and desirable option for high-performance data transfer.
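RDMA itself needs verbs-capable hardware, so it is not sketched here; the snippet below only illustrates the kind of baseline measurement such a CPU comparison rests on, namely the CPU seconds a plain TCP bulk send consumes. Host, port, and transfer size are placeholders.

```python
# Hedged sketch: CPU cost (user + system seconds) of a plain TCP bulk
# send, the baseline against which kernel-bypass transfers like RDMA are
# typically compared. Unix-only because of the resource module.

import resource
import socket

def tcp_send_cpu_seconds(host: str, port: int, total_bytes: int) -> float:
    chunk = b"\0" * (1 << 20)                       # 1 MiB send buffer
    before = resource.getrusage(resource.RUSAGE_SELF)
    with socket.create_connection((host, port)) as s:
        remaining = total_bytes
        while remaining > 0:
            n = min(len(chunk), remaining)
            s.sendall(chunk[:n])
            remaining -= n
    after = resource.getrusage(resource.RUSAGE_SELF)
    return ((after.ru_utime - before.ru_utime)
            + (after.ru_stime - before.ru_stime))
```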
2018
In computer networks, loss of data packets is inevitable, in particular because of buffer memory overflow at one or more of the nodes on the path from the source to the receiver, including the receiver itself. Such overflow-related losses are referred to here as congestion of network nodes. There are many ways to prevent and eliminate congestion, most of them based on the management of data flows; servicing packets according to their priorities occupies a special place among these methods. The article considers a number of original technical solutions that improve the quality of control and reduce the amount of buffer memory required at network nodes. The ideas behind these solutions are simple enough to implement in the corresponding software and hardware of telecommunication devices.
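A priority-aware bounded buffer of the general kind discussed here can be sketched as follows: when the buffer overflows, the lowest-priority queued packet is evicted first, so higher-priority traffic sees fewer congestion losses. The capacity and eviction rule are illustrative assumptions, not the article's specific design.

```python
# Hedged sketch: a bounded node buffer that, on overflow, drops the
# lowest-priority packet (either a queued one or the arriving one).

import heapq
import itertools

class PriorityBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []                    # (priority, arrival, packet); min = lowest priority
        self._arrivals = itertools.count()

    def enqueue(self, packet: bytes, priority: int) -> bool:
        """Queue a packet; return False if a packet had to be dropped instead."""
        entry = (priority, next(self._arrivals), packet)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
            return True
        lowest_priority = self._heap[0][0]
        if priority <= lowest_priority:
            return False                   # arriving packet is the victim
        heapq.heapreplace(self._heap, entry)   # evict lowest-priority packet
        return True
```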
References (7)
- D. Bertsekas and R. Gallager. Flow Control Schemes Based on Input Rate Adjustment. In Data Networks, Prentice-Hall, Inc., 1987, Chapter 6.4.
- David Clark, Mark Lambert, and Lixia Zhang. NETBLT: A Bulk Data Transfer Protocol. Network Information Center RFC-998, SRI International, March 1987.
- Karen Sollins. The TFTP Protocol. Network Information Center RFC-783, SRI International, June 1981.
- Dan Theriault. BLAST, an Experimental File Transfer Protocol. MIT-LCS Computer Systems Research Group RFC-217, March 1982.
- R. Watson and S. Mamrak. Gaining Efficiency in Transport Services by Appropriate Design and Implementation Choices. ACM Transactions on Computer Systems 5(2):97-120, May 1987.
- Lixia Zhang. Why TCP Timers Don't Work Well. In Proceedings of the Symposium on Communication Architectures and Protocols, ACM SIGCOMM, 1986.
- A common misconception is that NETBLT and Blast look similar to each other. Actually, the two are very different, except that both protocols use separate flow control and error recovery mechanisms.
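For readers unfamiliar with that separation, the sketch below illustrates the general idea shared by NETBLT and Blast: packets of a buffer are paced at a negotiated rate (flow control), and only after a whole pass does the receiver report which packets are missing and need to be resent (error recovery). The rate, the report format, and the function names are illustrative assumptions, not NETBLT's actual packet formats.

```python
# Hedged sketch: rate-paced transmission of one buffer with error
# recovery handled separately via a per-pass missing-packet report.
# `send` and `recv_missing_report` are assumed callables.

import time

def send_buffer(packets, send, recv_missing_report, rate_pps=1000.0):
    """Transmit one buffer, then repair it from the receiver's report."""
    interval = 1.0 / rate_pps
    outstanding = set(range(len(packets)))
    while outstanding:
        for seq in sorted(outstanding):          # paced (re)transmission
            send(seq, packets[seq])
            time.sleep(interval)
        # Error recovery is decoupled from pacing: after each full pass,
        # ask which packets still need to be resent.
        outstanding = set(recv_missing_report())
```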