In this section, we evaluate the running time of DR, DR-PI, and DR-CI. We measure the running time as CPU time on a 1 GHz Pentium III with 512 MB RAM, using Matlab 6 running on Linux. In all our experiments, we approximate the ``busy period'' distribution by a two-phase PH distribution.
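As a concrete illustration of this step, the sketch below fits a two-phase Coxian distribution (a special case of a two-phase PH distribution) to a given mean and squared coefficient of variation using a standard two-moment matching. This is only a sketch, assuming that matching two moments suffices and that the squared coefficient of variation is at least $1/2$; it is not necessarily the exact matching procedure used in our experiments, and the function and variable names are illustrative.
\begin{verbatim}
# Illustrative two-moment fit of a two-phase Coxian distribution
# (a special case of a two-phase PH distribution).
# Assumption: scv >= 0.5, so that a two-phase Coxian can match (mean, scv).

def fit_coxian2(mean, scv):
    """Return (mu1, mu2, p): the rates of the two exponential phases and
    the probability of continuing from phase 1 to phase 2."""
    if mean <= 0 or scv < 0.5:
        raise ValueError("fit requires mean > 0 and scv >= 0.5")
    mu1 = 2.0 / mean          # rate of the first phase
    p = 1.0 / (2.0 * scv)     # probability of entering the second phase
    mu2 = 1.0 / (mean * scv)  # rate of the second phase
    return mu1, mu2, p

# Example: a busy period with mean 2.0 and squared coefficient of variation 3.0.
mu1, mu2, p = fit_coxian2(2.0, 3.0)
m1 = 1.0 / mu1 + p / mu2                                      # fitted mean
m2 = 2.0 / mu1**2 + 2.0 * p / (mu1 * mu2) + 2.0 * p / mu2**2  # second moment
print(m1, (m2 - m1**2) / m1**2)                               # 2.0 and 3.0
\end{verbatim}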
We first discuss the running time of DR when it is applied to analyze the preemptive priority queue, specifically an M/M/$k$ queue with $m$ priority classes. Figure 3.35 shows the running time (a) as a function of the number of servers, $k$, and (b) as a function of the number of classes, $m$. Here, we assume that all the classes have the same service demand distribution (an exponential distribution with rate $\mu$) and the same arrival rate, $\lambda$. We choose $\lambda$ and $\mu$ such that the total load is $\rho = m\lambda/(k\mu) = 0.5$. Although not shown, the running time becomes longer when the load is higher or when the error bound is set smaller.
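As a small sketch of this setup, with hypothetical variable names, the per-class arrival rate can be chosen from the target total load via the relation $\rho = m\lambda/(k\mu)$ above:
\begin{verbatim}
# Sketch of the experimental setup: choose the per-class arrival rate so
# that an M/M/k queue with m identical classes has a given total load.
# Assumption: total load rho = m * lam / (k * mu); names are illustrative.

def per_class_arrival_rate(rho_total, k, mu, m):
    """Arrival rate of each of the m classes for total load rho_total."""
    return rho_total * k * mu / m

k, m, mu = 4, 2, 1.0              # servers, classes, per-class service rate
lam = per_class_arrival_rate(0.5, k, mu, m)
print(lam, m * lam / (k * mu))    # the second value is the total load, 0.5
\end{verbatim}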
The solid line in Figure 3.35(a) illustrates the computational efficiency of DR when it is applied to an FB process (a 2D Markov chain), showing the running time of DR for two priority classes ($m=2$) as a function of the number of servers, $k$. Recall the DR analysis of the preemptive priority queue with two classes ($m=2$), illustrated in Figure 3.3. Most of the running time is devoted to computing the stationary probabilities in the 1D Markov chain (Figure 3.3(d)) that is reduced from the FB process. As the number of servers, $k$, grows, so does the number of states in each level of this 1D Markov chain, and the running time increases with the number of states per level. As shown in the figure, the running time increases quite slowly with the number of servers, and in fact it stays within 30 seconds even for the largest number of servers shown in Figure 3.35(a).
Overall, DR is computationally efficient when it is applied to 2D Markov chains, i.e. RFB processes with $m=2$ (FB processes) and GFB processes. For most of the analyses of 2D Markov chains in Chapters 5-7, the running time of DR is within 0.1 seconds.
The solid line in Figure 3.35(b) shows the running time of DR as a function of the number of classes, $m$, for a fixed number of servers. This is the running time of DR when it is applied to $m$D Markov chains. Again, the running time increases gradually as $m$ increases, and it stays within 40 seconds for the largest number of classes shown in Figure 3.35(b). In fact, as we will see later, the running time is bounded by a polynomial in the number of classes, $m$.
However, the rest of Figure 3.35 suggests that the running time of DR increases quickly when both $k$ and $m$ increase. In general, the running time of DR is dominated by the time to compute the stationary probabilities in the 1D Markov chain that tracks the exact number of lowest priority jobs (class 1 jobs), i.e. the first (foreground) process. This 1D Markov chain distinguishes many different types of busy periods of higher priority jobs (jobs of classes 2 to $m$), and hence has a large number of states (phases) in each level.
Note that, in the analysis of the preemptive priority queue, the number of different types of busy periods of higher priority jobs in the foreground process does not depend on the number of phases in the PH distributions that are used in the analysis of higher priority jobs, i.e. in the analysis of the background processes. However, this is not in general true in the analysis of the RFB process. Specifically, in the analysis of SBCS-ID, the number of phases in the PH distributions that are used in the analysis of the background processes affects the number of different types of busy periods in the foreground processes. As a result, the running time of DR increases more quickly as the number of classes, $m$, increases in the analysis of SBCS-ID. This is exactly a situation where an approximation of DR is needed.
Figure 3.36 shows the running time of DR, DR-PI, and DR-CI as a function of the number of classes, $m$, when they are applied to the analysis of SBCS-ID. In all the plots, we assume that the load made up of each class is fixed at 0.8 (i.e. $\rho_i = 0.8$ for each class $i$), and the service rates are chosen such that either class 1 jobs are the shortest and class $m$ jobs are the longest (``stealing idle cycles of a server for longer jobs''), or class 1 jobs are the longest and class $m$ jobs are the shortest (``stealing idle cycles of a server for shorter jobs''). Although not shown, the running times of DR, DR-PI, and DR-CI tend to increase when the load is higher or when the error bound is set smaller.
In both cases, the evaluation of DR becomes prohibitive beyond a moderate number of classes. The running time of DR-PI also grows quickly, and its evaluation becomes prohibitive at a somewhat larger number of classes in both cases. The running time of DR-CI grows far more slowly than that of DR and DR-PI; we are able to evaluate DR-CI for up to 21 servers in less than a minute.
The running times of DR, DR-PI, and DR-CI can be related to the number of states in each level (phases) of the 1D Markov chain that tracks the exact number of class 1 jobs (the foreground process). In DR, the number of phases grows doubly exponentially with the number of classes, $m$; in DR-PI, the number of phases grows exponentially in $m$; and in DR-CI, the number of phases grows only linearly in $m$.
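To make the difference among these growth rates concrete, the sketch below tabulates representative doubly exponential, exponential, and linear functions of the number of classes; the particular functional forms and constants are assumptions chosen only for illustration, not the exact phase-count formulas of DR, DR-PI, and DR-CI.
\begin{verbatim}
# Illustrative comparison of doubly exponential, exponential, and linear
# growth in the number of classes m, mirroring the qualitative scaling of
# the per-level phase counts under DR, DR-PI, and DR-CI, respectively.
# The functional forms below are assumed for illustration only.

def doubly_exponential(m, base=2):
    return base ** (2 ** m)

def exponential(m, base=2):
    return base ** m

def linear(m, c=4):
    return c * m

for m in range(1, 7):
    print(f"m={m}  2^(2^m)={doubly_exponential(m)}  "
          f"2^m={exponential(m)}  4m={linear(m)}")
\end{verbatim}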