Credit Net networks are operational at both CMU and Harvard. The CMU testbed has links running at OC3 and OC12 rates and currently includes about 10 nodes, which will grow to about 25 high-end personal computers over the next few months. The research topics on our agenda, and results to date, are described below.
The performance of the adapters on a 90 MHz Pentium platform running NetBSD is shown below. The sustained bandwidth is close to the maximum OC3 rate for packets as small as 5 KByte, while TCP and UDP throughput (measured using ttcp) is limited by the platform.
On a 133 MHz platform, using a newer PCI bridge chip (Triton instead of Neptune), we can sustain full-duplex OC3 throughput (an aggregate throughput of 268 Mb/s), achieving over 90% of the bandwidth for packets as small as 1000 bytes. We are in the process of collecting TCP and UDP results.
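For reference, 268 Mb/s is close to the theoretical ceiling: an OC3 link runs at 155.52 Mb/s, of which 149.76 Mb/s is available as SONET payload, and since each 53-byte ATM cell carries 48 bytes of payload, that leaves 149.76 x 48/53, or about 135.6 Mb/s, of cell payload per direction. Full duplex, that is roughly 271 Mb/s, so the measured aggregate is about 99% of what the links can carry.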
The adapter is described in more detail in: Host and Adapter Buffer Management in the Credit Net ATM Host Interface, Corey Kosak, David Eckhardt, Todd Mummert, Peter Steenkiste, and Allan Fisher, Proceedings of the 20th Annual Conference on Local Computer Networks, IEEE Computer Society, Minneapolis, September 1995. A method of using the adapter to support remote memory deposits is described in Fine Grain Parallel Communication on General Purpose LANs, Todd Mummert, Corey Kosak, Peter Steenkiste, and Allan Fisher, International Conference on Supercomputing, ACM, Philadelphia, May 1996.
The basic result is that without flow control, cells are lost, resulting in very poor performance as measured using ttcp. This is illustrated by the traces on the left below: packets are lost, resulting in TCP timeouts and loss of throughput. As the traces on the right show, with credit-based flow control we achieve good throughput (the sum of the throughputs of the links equals the OC3 link bandwidth) and fair sharing of the bandwidth. Without flow control, if TCP competes with a traffic stream that does not back off, e.g. a video stream, TCP throughput drops to close to zero.
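To make the mechanism concrete, here is a minimal sketch in C of per-virtual-circuit credit accounting (names and structures are hypothetical; the Credit Net adapter implements this in hardware, and the credit-cell format is not shown). A sender may transmit a cell only while it holds credit, and the receiver returns credit as it frees buffers, which is why the downstream buffers can never overflow and no cells are dropped:

```c
#include <stdint.h>
#include <stdbool.h>

#define ATM_CELL_SIZE 53   /* bytes per ATM cell */

/* Hypothetical per-VC credit state on the sender. Each credit
 * corresponds to one free cell buffer at the receiver. */
struct vc_credit {
    uint32_t credits;
};

/* Stub standing in for the adapter's transmit path. */
static void transmit_cell(const uint8_t cell[ATM_CELL_SIZE]) { (void)cell; }

/* Sender: transmit only while credit is available. With no credit the
 * cell is held back, so the receiver's buffers cannot overflow and no
 * cells are lost (hence no TCP timeouts). */
static bool try_send_cell(struct vc_credit *vc,
                          const uint8_t cell[ATM_CELL_SIZE])
{
    if (vc->credits == 0)
        return false;        /* wait for the receiver to return credit */
    vc->credits--;
    transmit_cell(cell);
    return true;
}

/* Sender: a credit cell from the downstream receiver replenishes
 * credit as buffers there are freed. */
static void on_credit_cell(struct vc_credit *vc, uint32_t freed_buffers)
{
    vc->credits += freed_buffers;
}
```

Because a cell is never launched without a reserved buffer waiting for it, competing streams drain at the rate credit is returned, which is what produces the fair sharing seen in the traces.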
Preliminary results on a more systematic performance comparison between rate- and credit-based flow control schemes can be found in Experimental Evaluation of ATM Congestion Control Mechanisms, Prashant Chandra, Allan Fisher, Corey Kosak, and Peter Steenkiste, IEEE ATM Workshop 1996, San Francisco, August 1996.
More recently, we implemented all-software credit flow control on the host: the host interprets incoming credit cells and schedules packets based on the availability of credit, so no hardware support on the adapter is needed. No flow control is used for the incoming data stream, under the assumption that the host has enough buffer space to store incoming data. The results are summarized below; the nodes used in the test are 90 MHz Pentium PCs.
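A minimal sketch of what this host-side scheduling might look like in C (structure and function names are hypothetical; the actual implementation lives in the NetBSD driver): the receive path recognizes credit cells and raises the VC's credit, and the transmit path releases a queued packet only when the VC holds enough credit to cover all of the packet's cells:

```c
#include <stdint.h>
#include <stddef.h>

#define CELL_PAYLOAD 48    /* payload bytes per ATM cell (AAL5 framing ignored) */

struct packet {
    struct packet *next;
    size_t len;            /* payload length in bytes */
};

/* Hypothetical per-VC transmit state kept entirely in host memory. */
struct vc {
    uint32_t credits;      /* cells the downstream buffer will accept */
    struct packet *queue;  /* head of the per-VC packet queue */
};

/* Receive path: the host, not the adapter, recognizes incoming credit
 * cells and raises the VC's credit (an incremental update is assumed). */
void host_credit_cell(struct vc *vc, uint32_t freed_cells)
{
    vc->credits += freed_cells;
}

/* Transmit path: dequeue the head packet only if the VC holds enough
 * credit for every cell of the packet; otherwise leave it queued until
 * more credit arrives. */
struct packet *host_schedule(struct vc *vc)
{
    struct packet *p = vc->queue;
    if (p == NULL)
        return NULL;
    uint32_t cells = (uint32_t)((p->len + CELL_PAYLOAD - 1) / CELL_PAYLOAD);
    if (vc->credits < cells)
        return NULL;                /* blocked: wait for a credit cell */
    vc->credits -= cells;
    vc->queue = p->next;
    return p;                       /* caller hands the packet to the adapter */
}
```

Checking credit at packet granularity on the host is what removes the need for adapter support: the adapter only ever sees packets that are already cleared to go out.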
A more complete description and evaluation of both rate- and credit-based flow control implementations on the end-points can be found in Implementation of ATM Endpoint Congestion Control Protocols, Prashant R. Chandra, Allan L. Fisher, Corey Kosak, and Peter A. Steenkiste, Hot Interconnects, Stanford, IEEE, August 1996.