The Credit Net ATM Project
This is the World Wide Web home page of the
ARPA-funded Credit Net (VC Nectar) project in the
School of Computer Science at
Carnegie Mellon University.
Project Overview
The Credit Net project has developed a 622 Mbit/second flow-controlled ATM
network. The network consists of switches built by Bell Northern
Research and PCI host adapters built by Intel Architecture Labs, designed
in cooperation with Harvard and CMU, respectively. Its features include:
- High speed: at 622 megabits per second, a Credit Net link is four
times as fast as commercial ATM LAN products.
- Flow control: the Credit Net switches and adapters incorporate
sophisticated flow control algorithms that prevent congestion collapse of
the network under any possible load.
- Multicast: the Credit Net switches include hardware support for
efficient multicast.
- Standards compliance: Credit Net uses standard SONET links at OC-3 and
OC-12 speeds, and complies with national and international data formats and
protocols.
As in its predecessor project,
Gigabit Nectar,
the research is focused on building a system that
supports applications effectively.
Credit Net networks are currently operational at both CMU and Harvard.
The CMU testbed currently has links running at OC-3 and OC-12 rates and includes about 10 nodes, which will grow to about
25 high-end personal computers in the next few months.
Research topics on our agenda include:
- ATM flow control
- Low overhead protocols
- Multimedia applications
- Support for compiled distributed programs
- Heterogeneous quality of service
- Programming interfaces for network-aware applications
This page has more information on the Credit Net
adapter and some results of early credit-based flow
control experiments. You can find more detailed information in our
Credit Net
papers.
The Credit Net Adapter
Intel and CMU have developed a 622 megabit-per-second host interface for
the PCI bus. The main component on the board is an ASIC that supports
standard ATM cell processing and management of host buffers. The ASIC,
combined with memory to store per-VC state and a physical layer interface,
can form a highly integrated OC-3 or OC-12 ATM adapter (below on the
left). Alternatively, an optional i960 microcontroller can be added to
support more experimental features of the network, such as flow control
(below on the right).
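To give a concrete picture of what "per-VC state" involves, here is a
minimal sketch, in C, of the kind of state a host driver for such an
adapter might keep for each virtual circuit. The structure and field names
are illustrative assumptions, not the actual Credit Net driver or ASIC
data layout.

    /*
     * Illustrative sketch only: per-VC state a host driver for an
     * adapter of this kind might maintain.  Names and fields are
     * assumptions, not the actual Credit Net driver structures.
     */
    struct pkt;                       /* host packet buffer (opaque here)  */

    struct vc_state {
        unsigned int  vci;            /* virtual circuit identifier        */
        int           tx_credits;     /* cells we may still send on this VC */
        struct pkt   *tx_queue;       /* packets queued for transmission   */
        struct pkt   *rx_buffers;     /* host receive buffers for this VC  */
        unsigned long cells_sent;     /* per-VC statistics                 */
        unsigned long cells_received;
    };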
The performance of the adapters, using a 90 MHz Pentium platform running
NetBSD, is shown below. The sustained bandwidth is close to the maximum
OC-3 rate for packets as small as 5 KByte, while for TCP and UDP (measured
using ttcp) the throughput is limited by the host platform.
On a 133 MHz platform with a newer PCI bridge chip (Triton
instead of Neptune), we can sustain full-duplex OC-3
throughput (an aggregate throughput of 268 Mbit/s), achieving over 90% of
the bandwidth for packets as small
as 1000 bytes. We are in the process of collecting TCP and UDP results.
The adapter is described in more detail in:
Host and Adapter Buffer Management in the Credit Net ATM Host Interface,
Corey Kosak, David Eckhardt, Todd Mummert, Peter Steenkiste, and Allan
Fisher, Proceedings of the 20th Annual Conference on Local Computer Networks,
IEEE Computer Society, Minneapolis, September 1995.
A method of using the adapter to support remote memory deposits is described in
Fine Grain Parallel Communication on General Purpose LANs,
Todd Mummert, Corey Kosak, Peter Steenkiste, and Allan Fisher,
International Conference on Supercomputing, ACM, Philadelphia, May 1996.
Credit-Based Flow Control
Credit-based flow control has been implemented in both the Credit Net
switches and hosts.
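The core idea is that a sender may transmit a cell on a VC only if it holds
credit for that VC, and the downstream node returns credit as its buffers
drain, so cells are never sent into a full buffer. The C sketch below is a
simplified illustration of that rule; the routines, constants, and credit
update policy are hypothetical placeholders, not the actual switch or
adapter implementation.

    /*
     * Simplified sketch of per-VC credit-based flow control.
     * transmit_cell(), send_credit_cell() and the update constant
     * are hypothetical, not the real Credit Net interfaces.
     */
    #define CELLS_PER_CREDIT_UPDATE 16  /* cells drained before credit is returned */

    struct credit_vc {
        int          credits;           /* cells the upstream sender may still send */
        unsigned int cells_drained;     /* cells drained since the last credit cell */
    };

    extern void transmit_cell(const void *cell);                        /* hypothetical */
    extern void send_credit_cell(struct credit_vc *vc, unsigned int n); /* hypothetical */

    /* Sender side: transmit only when credit is available, so a congested
     * downstream buffer makes the sender hold cells rather than lose them. */
    int try_send_cell(struct credit_vc *vc, const void *cell)
    {
        if (vc->credits <= 0)
            return 0;                   /* out of credit: wait, do not drop */
        transmit_cell(cell);
        vc->credits--;
        return 1;
    }

    /* Receiver side: once enough buffered cells have been forwarded and
     * their buffers freed, return a credit cell to the sender. */
    void cell_drained(struct credit_vc *vc)
    {
        if (++vc->cells_drained >= CELLS_PER_CREDIT_UPDATE) {
            send_credit_cell(vc, vc->cells_drained);
            vc->cells_drained = 0;
        }
    }

    /* Sender side: a credit cell from downstream replenishes the balance. */
    void credit_cell_received(struct credit_vc *vc, int new_credits)
    {
        vc->credits += new_credits;
    }

Because a sender's credit can never exceed the buffer space reserved
downstream, buffers cannot overflow, which is what eliminates the cell
loss described below.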
In February 1995 we demonstrated that credit-based flow control can
eliminate cell loss, and the resulting drop in performance, on congested
links inside an ATM network. The results are summarized below, and more
details can be found here.
The basic result is that without flow control, cells are lost, resulting
in very poor performance, as measured using ttcp. This is illustrated by the
traces on the left below: packets are lost, resulting in
TCP timeouts and loss of throughput. As the traces on the
right show, with credit-based flow control we achieve good throughput
(the sum of the throughputs of the links equals the OC-3 link bandwidth) and
fair sharing of the bandwidth. Without flow control, in contrast, if TCP
competes with a traffic stream that does not back off, e.g. a video stream,
its throughput drops to close to zero.
Preliminary results on a more systematic performance comparison between rate-
and credit-based flow control schemes can be found in
Experimental Evaluation of ATM Congestion Control Mechanisms,
Prashant Chandra, Allan Fisher, Corey Kosak, and Peter Steenkiste,
IEEE ATM Workshop 1996, San Francisco, August 1996.
More recently we built an all-software credit implementation on the host.
The host interprets incoming credit cells and schedules packets based on the
availability of credit; no hardware support on the adapter is needed. No flow
control is used for the incoming data stream, under the assumption that the host
has enough buffer space to store incoming data. The results are
summarized below. The nodes used in the test are 90 MHz Pentium PCs.
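As a rough illustration of what "schedules packets based on the
availability of credit" means in software, the sketch below shows a
host-side output loop that only sends on VCs holding credit and that
replenishes credit as credit cells arrive. The round-robin policy, array
size, and routine names are illustrative assumptions, not the actual
NetBSD implementation.

    /*
     * Illustrative host-side, all-software credit scheduling.
     * send_one_cell() and the round-robin policy are assumptions,
     * not the actual NetBSD driver code.
     */
    #define NUM_VCS 32

    struct host_vc {
        int credits;           /* cells this VC is allowed to send      */
        int queued_cells;      /* cells of queued packets awaiting send */
    };

    static struct host_vc vcs[NUM_VCS];

    extern void send_one_cell(int vci);   /* hypothetical driver routine */

    /* Interpret a credit cell arriving from the network for VC vci. */
    void handle_credit_cell(int vci, int new_credits)
    {
        vcs[vci].credits += new_credits;
    }

    /* Output scheduling: round-robin over VCs, sending at most one cell
     * per eligible VC per pass.  A VC is eligible only if it has both a
     * queued cell and credit, so the host never overruns the network. */
    void schedule_output(void)
    {
        int vci;

        for (vci = 0; vci < NUM_VCS; vci++) {
            if (vcs[vci].queued_cells > 0 && vcs[vci].credits > 0) {
                send_one_cell(vci);
                vcs[vci].queued_cells--;
                vcs[vci].credits--;
            }
        }
    }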
A more complete description and evaluation of both rate- and credit-based flow control
implementations on the end-points can be found in
Implementation of ATM Endpoint Congestion Control Protocols,
Prashant R. Chandra, Allan L. Fisher, Corey Kosak and Peter A. Steenkiste,
Hot Interconnects, Stanford, IEEE, August 1996.
Related Projects
The Credit Net group works closely with several other research projects at CMU,
including the iWarp project, the Fx parallel FORTRAN compiler project, Dome,
Scotch parallel storage, the Environmental Modeling NSF grand challenge
application, and the multicomputer project.
People
Allan Fisher and Peter Steenkiste lead the project at CMU.
David Eckhardt, Corey Kosak, Todd Mummert, and Prashant Chandra
form the rest of the inner circle.
H. T. Kung leads the Harvard team's switch building project.
prs@cs.cmu.edu