We evaluated our distributed version of Quake 2 under two types of emulated deployment scenarios: 1) a purely peer-to-peer deployment where each server is colocated with a client, and 2) an ``edge-server'' style deployment where the game is deployed on a number of different servers across the Internet (similar to a Content Distribution Network or a federation of player-deployed servers). In the latter case, we examine only the inter-server communication costs and delays, under the assumption that clients connect to the closest server.
For both scenarios we emulated the network environment on the Emulab network testbed [28]. The environment did not constrain link capacity (we used a 100Mbps switched LAN as the underlying physical topology), but did emulate end-to-end latencies by delaying packets according to pairwise latencies sampled from the King P2P dataset collected at MIT [6]. The median round-trip latency was approximately 80ms for the peer-to-peer experiments and 90ms for the edge-server experiments. In the peer-to-peer experiments, we ran 3 virtual servers on each physical Emulab node, while in the edge-server experiments, we ran a single instance of a server per node.
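The latency-emulation setup above can be illustrated with a minimal sketch. The code below is hypothetical and not the authors' Emulab configuration: it assumes a measured RTT dataset represented as a dict keyed by node pairs (standing in for the King measurements), derives the per-packet one-way delay the emulator would apply as half the pairwise RTT, and computes the median RTT reported for an experiment.

```python
import statistics

def sample_pairwise_rtts(latency_matrix, node_ids):
    """Collect pairwise RTTs (ms) for the emulated topology from a
    measured dataset, here a dict keyed by ordered (i, j) pairs."""
    rtts = {}
    for a in node_ids:
        for b in node_ids:
            if a < b:
                rtts[(a, b)] = latency_matrix[(a, b)]
    return rtts

def one_way_delay(rtts, a, b):
    """Per-packet delay applied on the emulated link: half the RTT."""
    key = (a, b) if a < b else (b, a)
    return rtts[key] / 2.0

# Toy three-node dataset standing in for the King trace.
matrix = {(0, 1): 80.0, (0, 2): 100.0, (1, 2): 60.0}
rtts = sample_pairwise_rtts(matrix, [0, 1, 2])
median_rtt = statistics.median(rtts.values())  # summary statistic reported per experiment
```

In the real experiments the delays would be installed as link parameters in the Emulab topology rather than computed at packet time; the sketch only shows where each number comes from.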
In each experiment, Mercury utilizes route caching with a cache size of $O(\log n)$, where $n$ is the total number of servers in the system. Although it is possible to maintain a larger cache when there are fewer than a hundred servers in the system, we want to demonstrate that the cache need not scale linearly with the number of servers and that, in the presence of server failures, it is only necessary to periodically check the staleness of a small number of cache entries. Li et al. [26] describe a method for dynamically optimizing the number of entries as the rate of node churn changes, which can readily be applied to Mercury.
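The caching policy described above can be sketched as follows. This is an illustrative model, not Mercury's implementation: the capacity rule (`log2` of the server count), the FIFO eviction, and the `probe_fraction` parameter are all assumptions chosen to show the two properties the text claims, namely sublinear cache size and a bounded per-round staleness check.

```python
import math
import random

class RouteCache:
    """Route cache sized logarithmically in the number of servers;
    each round only a small random subset of entries is re-validated."""

    def __init__(self, n_servers, probe_fraction=0.25):
        # Capacity grows as log(n), not linearly with n (assumed rule).
        self.capacity = max(1, int(math.log2(n_servers)))
        self.entries = {}  # routing key -> cached server address
        self.probe_fraction = probe_fraction

    def insert(self, key, server):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Evict the oldest entry (simple FIFO stand-in for a real policy).
            self.entries.pop(next(iter(self.entries)))
        self.entries[key] = server

    def staleness_probe_set(self, rng=random):
        """Entries to re-check this round: a fixed small fraction, so the
        probing cost stays small even as more routes are cached."""
        k = max(1, int(len(self.entries) * self.probe_fraction))
        return rng.sample(list(self.entries), k)
```

With 64 servers the cache holds only 6 entries, and each staleness round touches one of them, which is the scaling behavior the experiments are designed to exercise.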