Next, we derive the response time of the low priority (class L) jobs,
again by conditioning on the state that an arrival sees via PASTA.
Recall the Markov chain in Figure 3.28(b).
Let $F_{(i,\ell)}(x)$
be
the distribution function of
the response time of a class L job that sees
state $(i,\ell)$, i.e. phase $i$
of level $\ell$, when it arrives, and let $m^{(k)}_{(i,\ell)}$
be its
$k$-th moment.
The distribution function $F(x)$ of
the response time of the class L jobs
and its
$k$-th moment $m^{(k)}$ are then given by
\[
F(x) = \sum_{\ell}\sum_{i} \pi_{(i,\ell)}\, F_{(i,\ell)}(x)
\qquad\mbox{and}\qquad
m^{(k)} = \sum_{\ell}\sum_{i} \pi_{(i,\ell)}\, m^{(k)}_{(i,\ell)},
\]
where $\pi_{(i,\ell)}$ denotes the stationary probability of state $(i,\ell)$, which by PASTA is also the probability that an arrival sees state $(i,\ell)$.
Observe that a tagged (class L) arrival that sees state $(i,\ell)$
will cause the system state to change to $(i,\ell+1)$
at that moment.
We remove all the $\lambda_L$
arcs from the Markov chain in
Figure 3.28(b), so that there are no more class L arrivals.
(Note that the behavior of the tagged arrival is not affected by the
class L jobs that will arrive after the tagged arrival, since class L
jobs are served in FCFS order and we assume that the ``youngest'' class L
job is preempted by a high priority job.) This enables us to view the
response time for the tagged arrival as the first passage time of this
modified Markov chain from state $(i,\ell+1)$
to the state where the
tagged arrival departs. The only complexity is in figuring out
exactly in which state the tagged arrival departs.
If the tagged arrival sees state $(i,0)$ for some phase
$i$, its
response time is the passage time from state $(i,1)$
to (any
state in) level 0
in the modified Markov chain, since the
tagged arrival is the only class L job in the modified system.
If the tagged arrival sees a state in level $\ell \geq 1$
(and the
system state is changed to level $\ell+1$
), the tagged arrival may
depart when the modified Markov chain hits level 1 or level 0,
depending on whether the tagged arrival is the last class L job to be
completed or not. We will compute the response time by conditioning
on the first state in level 1
that we hit in the modified
Markov chain. Note that the first state in level 1
is either $(0,1)$
or $(1,1)$
, since there are no backward transitions
outside the bottom two rows (phases 0 and 1) in the (modified) Markov chain.
If $(1,1)$ is the first state in level 1
, we must have
gotten there from $(1,2)$
. This implies that the remaining class L
job is the tagged arrival, since class L jobs are
served in FCFS order and we assume that the ``youngest'' class L job is
preempted by a high priority job. Thus, in this case the response
time of the tagged arrival that sees state $(i,\ell)$
upon its arrival is
the passage time from $(i,\ell+1)$
to $(1,1)$
plus the passage
time from $(1,1)$
to level 0.
If $(0,1)$ is the first state in level 1
, we must have
gotten there from state $(0,2)$
. This implies that the
remaining class L job is equally likely to be the tagged arrival or
not, by the memoryless property of exponential distributions. Thus, in
this case the response time of the tagged arrival that sees state $(i,\ell)$
upon its arrival is the passage time from $(i,\ell+1)$
to $(0,1)$
with probability $\frac{1}{2}$
, and the passage time from $(i,\ell+1)$
to $(0,1)$
plus the passage time from $(0,1)$
to level 0
with probability $\frac{1}{2}$.
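The conditioning above can be summarized in a single expression. Writing $T_{A \to B}$ for the passage time from state $A$ to state (or level) $B$ in the modified Markov chain, and $(i,\ell)$ for phase $i$ of level $\ell$ as before (this shorthand is ours, for illustration only), the response time $T$ of a tagged arrival that sees a state in level $\ell \geq 1$ satisfies
\[
T \;\stackrel{d}{=}\;
\begin{cases}
T_{(i,\ell+1)\to(1,1)} + T_{(1,1)\to\,\mathrm{level}\,0}, & \mbox{if level 1 is first entered in phase 1},\\[4pt]
T_{(i,\ell+1)\to(0,1)}, & \mbox{w.p. } \tfrac{1}{2}, \mbox{ if level 1 is first entered in phase 0},\\[4pt]
T_{(i,\ell+1)\to(0,1)} + T_{(0,1)\to\,\mathrm{level}\,0}, & \mbox{w.p. } \tfrac{1}{2}, \mbox{ if level 1 is first entered in phase 0},
\end{cases}
\]
where the summed passage times are independent by the Markov property.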
Figure 3.29(b) summarizes the above argument. The
response time of a job that sees state $(i,\ell)$ upon its arrival is
the passage time from state $(i,\ell+1)$
to state 0 in the Markov
chain in Figure 3.29(b). Observe that
Figure 3.29(b) is obtained from
Figure 3.28(b) by removing all the transitions with $\lambda_L$
's and splitting the transition (with rate $2\mu_L$
) from
state $(0,2)$
to state $(0,1)$
into two transitions (each
with rate $\mu_L$
), one to state $(0,1)$
and the other to state 0.
The passage time from any state to state 0 in the Markov
chain in Figure 3.29(b) is a PH distribution (as
defined in Section 2.2), where the given state
is
the initial state and state 0 is the absorbing state.
Note that the distribution function and moments of the PH distribution
can be computed via Proposition 4.
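As an illustration of how such moments are obtained, recall the standard fact that the $k$-th moment of a PH distribution with initial probability vector $\boldsymbol{\alpha}$ and subgenerator $\mathbf{T}$ (the transition rates among the transient states, excluding the absorbing state 0) is $k!\,\boldsymbol{\alpha}(-\mathbf{T})^{-k}\mathbf{1}$. A minimal sketch of this computation (not the thesis's code; function and variable names are ours):

```python
import math
import numpy as np

def ph_moment(alpha, T, k):
    """k-th moment of a PH(alpha, T) distribution.

    alpha: initial probabilities over the transient states
    T:     subgenerator (rates among transient states; each row sums
           to minus that state's absorption rate into state 0)
    Uses the standard formula k! * alpha (-T)^{-k} 1.
    """
    alpha = np.asarray(alpha, dtype=float)
    T = np.asarray(T, dtype=float)
    M = np.linalg.inv(-T)   # (-T)^{-1}
    v = np.ones(len(alpha))
    for _ in range(k):      # apply (-T)^{-1} k times to the all-ones vector
        v = M @ v
    return math.factorial(k) * float(alpha @ v)

# Sanity check on a simple PH distribution: an Erlang-2 with rate 1
# has mean 2 and second moment 6.
alpha = [1.0, 0.0]
T = [[-1.0, 1.0],
     [0.0, -1.0]]
print(ph_moment(alpha, T, 1))  # 2.0
print(ph_moment(alpha, T, 2))  # 6.0
```

In the setting of Figure 3.29(b), $\boldsymbol{\alpha}$ would place mass on the initial state and $\mathbf{T}$ would be the (truncated) generator restricted to the states other than the absorbing state 0.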
In fact, by (3.11),
the distribution of the response time of the low priority jobs
can be represented as a single PH distribution (a mixture of
PH distributions is a PH distribution by Proposition 3),
where the initial state is $(i,\ell+1)$
with probability $\pi_{(i,\ell)}$
for all $i$
and $\ell$,
and state 0 is the absorbing state.
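To illustrate the role Proposition 3 plays here, a mixture of PH distributions can always be assembled into one PH representation by concatenating the probability-weighted initial vectors and taking the block diagonal of the subgenerators. A sketch under our own notation (not the thesis's code):

```python
import numpy as np

def mix_ph(probs, alphas, Ts):
    """Combine PH(alpha_j, T_j) components, selected with probability p_j,
    into a single PH representation with a block-diagonal subgenerator."""
    alpha = np.concatenate([p * np.asarray(a, dtype=float)
                            for p, a in zip(probs, alphas)])
    sizes = [len(a) for a in alphas]
    T = np.zeros((sum(sizes), sum(sizes)))
    pos = 0
    for Tj, sz in zip(Ts, sizes):
        T[pos:pos + sz, pos:pos + sz] = Tj  # components never communicate
        pos += sz
    return alpha, T

# Mix an exponential(1) and an Erlang-2 with rate 1, each chosen w.p. 1/2.
alpha, T = mix_ph([0.5, 0.5],
                  [[1.0], [1.0, 0.0]],
                  [[[-1.0]], [[-1.0, 1.0], [0.0, -1.0]]])
# Mean of the mixture: 0.5 * 1 + 0.5 * 2 = 1.5, recovered from the
# single PH representation as alpha (-T)^{-1} 1.
mean = float(alpha @ np.linalg.inv(-T) @ np.ones(len(alpha)))
print(mean)  # 1.5
```

The single PH representation of the low-priority response time is the same construction applied to the (truncated) chain of Figure 3.29(b), with the weights $\pi_{(i,\ell)}$ playing the role of the mixing probabilities.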
To compute the distribution function of this single PH distribution and its moments
via Proposition 4,
the state space of the Markov chain in Figure 3.29(b) needs
to be truncated.
One can simply truncate it at some large level,
or one can use DR to reduce the 1D Markov chain to a Markov chain on a finite state space,
as we reduced Figure 3.4(b) to Figure 3.5.
Alternatively, the moments of the passage time (from level $\ell+1$ to level 0) can be computed using
the method provided in Section 3.7, which can be more
computationally efficient than the approach via
Proposition 4. Note that there are no forward
transitions in the modified Markov chain, so the expressions in
Section 3.7 simplify significantly: specifically, the
submatrices of forward (level-up) transition rates are zero matrices in the
modified Markov chain at every level.
In general, the mean response time in a system modeled as an RFB/GFB process can be derived via Little's law, as long as the number of jobs is well defined for each state. However, the response time may or may not be representable as a passage time in a modified Markov chain, so its distribution and moments may or may not be amenable to the approach introduced in this section. Similarly, the mean number of jobs in the queue (jobs that are waiting and not in service) can be derived via Little's law, but its distribution and moments may or may not be derivable via an analysis of passage times. Note again that the error in the higher moments may be larger, since we match only the first three moments of the ``busy period'' in DR.
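As a reminder of how Little's law yields the mean response time, here is a check on the simplest special case (an M/M/1 queue, not the priority system analyzed above), where $E[N] = \rho/(1-\rho)$ and Little's law $E[T] = E[N]/\lambda$ recovers the familiar $1/(\mu-\lambda)$; the numeric rates are illustrative assumptions:

```python
lam, mu = 0.5, 1.0           # arrival and service rates (assumed values)
rho = lam / mu               # utilization
mean_jobs = rho / (1 - rho)  # E[N] for an M/M/1 queue
mean_resp = mean_jobs / lam  # Little's law: E[T] = E[N] / lambda
print(mean_resp)             # 2.0, matching 1 / (mu - lam)
```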