Abstract: This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits on TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained either to retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered.
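The advantage of SACK described above can be illustrated concretely: the SACK option tells the sender exactly which out-of-order ranges the receiver holds, so the sender can retransmit several distinct holes within one round-trip time without resending data already delivered. The sketch below is illustrative only (the function name and the one-sequence-number-per-segment convention are our own, not from the paper or RFC 2018):

```python
def missing_segments(cum_ack, sack_blocks, highest_sent):
    """Identify segments a SACK sender knows the receiver is missing.

    cum_ack: everything below this sequence number is cumulatively acked
    sack_blocks: list of (start, end) ranges received out of order
    highest_sent: highest sequence number transmitted so far
    (Illustrative unit: one sequence number per segment.)
    """
    received = set()
    for start, end in sack_blocks:
        received.update(range(start, end))
    # Holes are unacked segments not covered by any SACK block.
    return [seq for seq in range(cum_ack, highest_sent)
            if seq not in received]

# Segments 0-2 acked cumulatively; 4-5 and 7 SACKed; 3 and 6 are the holes.
print(missing_segments(3, [(4, 6), (7, 8)], 8))  # → [3, 6]
```

Without SACK, the sender learns of at most one hole per round-trip (the one at the cumulative ACK point), which is exactly the constraint the abstract describes.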
Abstract: Recent network traffic studies argue that network arrival processes are much more faithfully modeled using statistically "self-similar" processes instead of traditional Poisson processes [LTWW94,PF94]. One difficulty in dealing with self-similar models is how to efficiently synthesize traces (sample paths) corresponding to self-similar traffic. We present a fast Fourier transform method for synthesizing approximate self-similar sample paths and assess its performance and validity. We find that the method is as fast as or faster than existing methods and appears to generate a closer approximation to true self-similar sample paths than the other known fast method (Random Midpoint Displacement). We then discuss issues in using such synthesized sample paths for simulating network traffic, and how an approximation used by our method can dramatically speed up evaluation of Whittle's estimator for H, the Hurst parameter giving the strength of long-range dependence present in a self-similar time series.
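The flavor of such an FFT-based synthesis can be sketched as follows: shape randomized spectral coefficients by a power-law spectral density and take an inverse FFT. This is a simplified sketch, not the paper's method; in particular it uses only the asymptotic power-law form f^(1-2H) of the fractional Gaussian noise spectrum, and the function name is our own:

```python
import numpy as np

def synth_self_similar(n, H, seed=0):
    """Approximate self-similar (fractional-Gaussian-noise-like) sample path
    via an FFT method: draw random spectral coefficients whose expected power
    follows f^(1 - 2H), then invert.  Sketch only; the exact FGN spectral
    density is replaced here by its power-law approximation."""
    rng = np.random.default_rng(seed)
    freqs = np.arange(1, n // 2 + 1) / n           # positive Fourier frequencies
    power = freqs ** (1.0 - 2.0 * H)               # power-law spectral density
    # Randomize each coefficient's magnitude (exponential power) and phase.
    mags = np.sqrt(power * rng.exponential(size=freqs.size))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    half = mags * np.exp(1j * phases)
    # Assemble a Hermitian-symmetric spectrum so the inverse FFT is real.
    spectrum = np.zeros(n, dtype=complex)
    spectrum[1:n // 2 + 1] = half
    spectrum[n // 2 + 1:] = np.conj(half[:n - n // 2 - 1][::-1])
    return np.fft.ifft(spectrum).real

x = synth_self_similar(1024, H=0.8)
```

For H in (0.5, 1) the low frequencies dominate, producing the long-range dependence characteristic of self-similar traffic; the cost is dominated by a single O(n log n) FFT, which is what makes this family of methods fast.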
Abstract: We analyze 3 million TCP connections captured in 15 wide-area traffic traces. The traces were gathered at five "stub" networks and two internetwork gateways, providing a diverse look at wide-area traffic. We derive analytic models describing the random variables associated with TELNET, NNTP, SMTP, and FTP connections. To assess these models we present a quantitative methodology for comparing their effectiveness with that of empirical models such as Tcplib [Danzig91]. Our methodology also allows us to determine which random variables show significant variation from site to site, over time, or between stub networks and internetwork gateways. Overall we find that the analytic models provide good descriptions, and generally model the various distributions as well as empirical models.
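One simple way such a comparison between an analytic model and data might be quantified is to fit the model's parameters and measure the worst-case gap between the fitted CDF and the empirical CDF. The sketch below does this for a log-normal model on synthetic "connection size" data; it is our illustration, not the paper's discrepancy methodology, and all names and parameter values are hypothetical:

```python
import math
import random

def lognormal_fit(samples):
    """Fit a log-normal by matching the mean and variance of log(sample)."""
    logs = [math.log(s) for s in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, math.sqrt(var)

def ks_distance(samples, mu, sigma):
    """Max gap between the empirical CDF and the fitted log-normal CDF."""
    xs = sorted(samples)
    n = len(xs)
    worst = 0.0
    for i, x in enumerate(xs):
        model = 0.5 * (1.0 + math.erf((math.log(x) - mu)
                                      / (sigma * math.sqrt(2.0))))
        worst = max(worst, abs((i + 1) / n - model), abs(i / n - model))
    return worst

random.seed(1)
# Hypothetical "bytes transferred" sample drawn from a log-normal.
sizes = [random.lognormvariate(8.0, 2.0) for _ in range(2000)]
mu, sigma = lognormal_fit(sizes)
d = ks_distance(sizes, mu, sigma)
```

A small distance d says the analytic model describes the data about as well as the empirical distribution itself; comparing d across sites and time periods is one way to see which random variables are stable and which vary.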
Abstract: We analyze the growth of a large research laboratory's wide-area TCP connections over a period of three years. Our data consisted of eight month-long traces of all TCP connections made between the site and the rest of the world. We find that many TCP protocols exhibited exponential growth in the number of connections made and bytes transferred, even though the number of hosts at the site only grew linearly. While the exponential growth of some of the protocols began tapering off with the final datasets, relatively new information-retrieval protocols such as Gopher and World-Wide Web exhibited explosive growth during the same time. Our study also found that individual users greatly affected the site's traffic profile by the inadvertent or casual initiation of multiple, periodic wide-area connections; that exponential growth is fed in part by more users "discovering" the Internet and in part by existing users increasingly incorporating use of the Internet into their work patterns; and that wide-area traffic geography is diverse and dynamic.
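Exponential growth of the kind described above is commonly characterized by fitting a line to the logarithm of the counts and quoting the implied doubling time. A minimal sketch (the function name and the sample data are hypothetical, not taken from the paper's traces):

```python
import math

def doubling_time(months, counts):
    """Least-squares fit of log(count) vs. time, assuming exponential growth
    count ~ exp(a + b*t); returns the implied doubling time ln(2)/b."""
    ys = [math.log(c) for c in counts]
    n = len(months)
    mx = sum(months) / n
    my = sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(months, ys))
             / sum((t - mx) ** 2 for t in months))
    return math.log(2.0) / slope

# Hypothetical connection counts doubling every 12 months:
ts = [0, 6, 12, 18, 24]
cs = [1000 * 2 ** (t / 12) for t in ts]
print(round(doubling_time(ts, cs), 1))  # → 12.0
```

Plotting log(count) against time makes the abstract's contrast visible at a glance: an exponentially growing protocol appears as a straight line, while the linearly growing host count flattens out.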
Abstract: Glish is a software system for building high-level control applications out of modular, event-oriented programs. Glish provides these applications with a high degree of flexibility, so they can adapt quickly to changing requirements. We describe the strengths of the "software bus" approach, how Glish can direct and modify interprocess communication within a distributed application, and how the system is currently used for orbit-correction at the Advanced Light Source at LBL.
Abstract: We describe "Glish", an interpreted language for building distributed systems from modular, event-oriented programs. These programs are written in conventional languages such as C, C++, or FORTRAN. Glish scripts can create local and remote processes and control their communication. Glish also provides a full, array-oriented programming language for manipulating binary data sent between the processes. In general Glish uses a centralized communication model where interprocess communication passes through the Glish interpreter, allowing dynamic modification and rerouting of data values, but Glish also supports point-to-point links between processes when necessary for high performance. Glish is available via anonymous ftp.
Abstract: The degree to which hardware and operating systems support debugging strongly influences the caliber of service that a debugger can provide. We survey the different forms in which such support is available. We limit our survey to lower-level debugger design issues such as accessing the debugged program's state and controlling its execution. The study concentrates on those types of support that make overall debugger performance efficient and that support debugger features for ferreting out hard-to-find bugs. We conclude with an overview of state-of-the-art debuggers and a proposal for a new debugger design.
Return to [the Network Research Group].
Maintained by www8888@ee.lbl.gov