Concluding remarks
In this chapter, a new analytical approach, dimensionality reduction
(DR), is proposed for analyzing multidimensional Markov chains
(Markov chains on a multidimensionally infinite state space). The
key idea in DR is to approximate an infinite portion of a Markov
chain, which often corresponds to the ``busy period'' of a system, by
a collection of PH distributions that match the first three moments
of the busy-period duration, conditioned on how the Markov chain
enters and exits the busy period. Since DR involves approximations,
we have validated its accuracy against simulation. Overall, we
find that the error in DR is quite small: specifically, it is
within 3% (often within 1%) for first-order metrics (such
as mean delay and mean queue length) and within
7% for second-order metrics (such as the second moment of queue
length) over a range of parameters.
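As a reminder of the machinery DR relies on, the moments it matches are moments of PH distributions: a PH distribution with representation (alpha, T) has k-th moment E[X^k] = k! alpha (-T)^(-k) 1. The following is a minimal sketch of this formula, checked on a hypothetical two-phase example (an Erlang-2 distribution, which is not taken from the chapter):

```python
import math
import numpy as np

def ph_moment(alpha, T, k):
    """k-th moment of a PH distribution with representation (alpha, T):
    E[X^k] = k! * alpha * (-T)^{-k} * 1."""
    inv = np.linalg.inv(-np.asarray(T, dtype=float))
    Mk = np.linalg.matrix_power(inv, k)
    return math.factorial(k) * np.asarray(alpha, dtype=float) @ Mk @ np.ones(len(alpha))

# Hypothetical example: Erlang-2 with rate 1 per phase,
# i.e. start in phase 1, move to phase 2, then absorb.
alpha = [1.0, 0.0]
T = [[-1.0, 1.0],
     [0.0, -1.0]]
m1 = ph_moment(alpha, T, 1)  # -> 2.0
m2 = ph_moment(alpha, T, 2)  # -> 6.0
m3 = ph_moment(alpha, T, 3)  # -> 24.0
```

Matching a busy period then amounts to choosing (alpha, T) so that these three quantities agree with the first three conditional moments of the busy-period duration.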
To reduce the computational complexity of DR, we have introduced two
approximations of DR: DR-PI and DR-CI. Although these approximations
slightly degrade accuracy, they significantly reduce the
running time.
DR allows us to analyze the performance of a multiserver system whose
behavior is modeled as a multidimensional Markov chain. Since it is a
nontrivial task to determine whether, and how, DR can be applied to a
particular multiserver system, we formally define a class of
multidimensional Markov chains called RFB/GFB processes (recursive
foreground-background processes and generalized foreground-background
processes) that can be analyzed via DR. The definition of RFB/GFB
processes makes it easier to determine whether a given multiserver system can
be analyzed via DR, and our analysis of RFB/GFB processes enables
one to analyze such a system simply by modeling it as an RFB/GFB
process. In fact, many multiserver systems with resource sharing or
job prioritization can be modeled as RFB/GFB processes,
and we analyze the performance of several of these multiserver systems
in Part II:
- multiserver systems with multiple priority classes (see Chapter 4),
- size-based task assignment with cycle stealing under immediate dispatching (SBCS-ID) for server farms (see Chapter 5),
- size-based task assignment with cycle stealing under central queue (SBCS-CQ) for server farms (see Chapter 5),
- threshold-based policies for reducing switching costs in cycle stealing (see Chapter 6), and
- various threshold-based policies for the Beneficiary-Donor model (see Chapter 7).
Beyond the analysis of multiserver systems, DR and the ideas behind it
have broad applicability in computer science, engineering, operations
research, and other fields where Markov chains on large state spaces
arise. The ideas in DR can be used to reduce the size of a state
space from infinite to finite, or from large to small. For example,
many optimization problems in stochastic environments can be formulated
as Markov decision problems [159], but determining the
optimal solution (optimal policy) is often prohibitively expensive due to the
large state space. The ideas in DR may be used to reduce the size of
the state space of a Markov decision process,
thereby reducing the computational complexity of solving the Markov decision problem.
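To make the state-space-reduction point concrete, the sketch below solves a simple admission-control Markov decision problem by truncating its infinite state space {0, 1, 2, ...} at a finite level N and running discounted value iteration. The model, its rates, and its costs are all hypothetical illustrations, not constructions from this chapter:

```python
import numpy as np

# Hypothetical M/M/1 admission-control MDP, truncated at buffer size N
# so that standard finite-state value iteration becomes feasible.
N = 50                 # truncation level: finite stand-in for the infinite space
lam, mu = 0.4, 0.5     # uniformized arrival/service probabilities (lam + mu <= 1)
beta = 0.95            # discount factor
hold = 1.0             # holding cost per job per period
reject = 5.0           # one-time cost for turning an arrival away

V = np.zeros(N + 1)
for _ in range(2000):
    V_new = np.empty_like(V)
    for n in range(N + 1):
        up, down = min(n + 1, N), max(n - 1, 0)
        # admit: an arrival joins the queue; a service completes with prob. mu
        v_admit = hold * n + beta * (lam * V[up] + mu * V[down]
                                     + (1 - lam - mu) * V[n])
        # reject: the arrival is turned away at a cost; the queue can only drain
        v_reject = hold * n + lam * reject + beta * (mu * V[down]
                                                     + (1 - mu) * V[n])
        V_new[n] = min(v_admit, v_reject)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new
```

Here the crude truncation plays the role that DR's PH-based reduction plays far more carefully: it replaces an infinite portion of the state space with a finite surrogate so that standard finite-state algorithms apply.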
In response to increasing demand, DR, as well as its approximations
DR-PI and DR-CI, as described in this chapter has largely been
implemented, and the latest implementation is made available at an
online code repository:
Takayuki Osogami
2005-07-19