
Proof of Lemma 3

We prove the lemma by induction on $n$.

Base case ($n=1$): Any one-phase PH distribution is a mixture of $O$ (the point mass at zero) and an exponential distribution, and its $r$-value is always $\frac{3}{2}$.
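For concreteness, the base case can be verified directly. The following sketch assumes, consistently with the formulas used later in the proof, that $r = m_3/m_2 = \mu_1\mu_3/\mu_2^2$ and that $O$ is the point mass at zero:

```latex
% Mixing with O scales every raw moment by (1-p), so r is unchanged;
% for X ~ Exp(lambda), the raw moments then give r = 3/2 directly.
\begin{eqnarray*}
\mu_i^{Z} & = & (1-p)\,\mu_i^{X} \quad (i=1,2,3), \qquad
  Z(\cdot) = pO(\cdot) + (1-p)X(\cdot),\\
r^{Z} & = & \frac{\mu_1^{Z}\,\mu_3^{Z}}{(\mu_2^{Z})^2}
  \;=\; \frac{\mu_1^{X}\,\mu_3^{X}}{(\mu_2^{X})^2}
  \;=\; \frac{(1/\lambda)(6/\lambda^{3})}{(2/\lambda^{2})^{2}}
  \;=\; \frac{3}{2}.
\end{eqnarray*}
```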

Inductive case: Suppose that the lemma holds for $n\leq k$. We show that the lemma holds for $n=k+1$ as well.

Consider any $(k+1)$-phase acyclic PH distribution, $G$, which is not an Erlang distribution. We first show that there exists a PH distribution, $F_1$, with $r^{F_1}\leq r^G$ such that $F_1$ is the convolution of an exponential distribution, $X$, and a $k$-phase PH distribution, $Y$. The key idea is to view any PH distribution as a mixture of PH distributions whose initial probability vectors, $\vec{\tau}$, are base vectors. For example, the three-phase PH distribution, $G$, in Figure 2.1 can be seen as a mixture of $O$ and three 3-phase PH distributions, $G_i$ ($i=1,2,3$), whose parameters are $\vec{\tau}^{G_1} = (1, 0, 0)$, $\vec{\tau}^{G_2} = (0, 1, 0)$, $\vec{\tau}^{G_3} = (0, 0, 1)$, and $\mathbf{T^{G_1}} = \mathbf{T^{G_2}} = \mathbf{T^{G_3}} = \mathbf{T^{G}}$. Proposition 5 and Lemma 2 imply that there exists $i\in \{1,2,3\}$ such that $r^{G_i}\leq r^G$. Without loss of generality, let $r^{G_1}\leq r^G$, and let $F_1=G_1$; thus $r^{F_1}\leq r^G$. Note that $F_1$ is the convolution of an exponential distribution, $X$, and a $k$-phase PH distribution, $Y$.
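The mixture decomposition above is easy to check numerically. The sketch below is not from the thesis: it uses the standard PH moment formula $\mu_n = n!\,\vec{\tau}(-\mathbf{T})^{-n}\mathbf{1}$, and the 3-phase acyclic subgenerator is a hypothetical example (not the distribution of Figure 2.1).

```python
import numpy as np
from math import factorial

def ph_moment(tau, T, n):
    """n-th raw moment of the PH distribution (tau, T): mu_n = n! * tau (-T)^{-n} 1."""
    M = np.linalg.inv(-T)
    return factorial(n) * tau @ np.linalg.matrix_power(M, n) @ np.ones(len(tau))

def r_value(tau, T):
    """r = m_3 / m_2 = mu_1 * mu_3 / mu_2^2."""
    mu1, mu2, mu3 = (ph_moment(tau, T, n) for n in (1, 2, 3))
    return mu1 * mu3 / mu2 ** 2

# Hypothetical 3-phase acyclic subgenerator (rates chosen arbitrarily).
T = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5],
              [ 0.0,  0.0, -3.0]])
tau = np.array([0.5, 0.3, 0.2])     # initial probability vector of the mixture G

r_G = r_value(tau, T)
# Components G_i start deterministically in phase i (base initial vectors).
r_components = [r_value(np.eye(3)[i], T) for i in range(3)]
# Lemma 2: some component G_i satisfies r^{G_i} <= r^G.
print(r_G, r_components)
```

In this example, $G_3$ is simply an exponential distribution, so `r_components[2]` is $\frac{3}{2}$.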

Next we show that if $F_1$ is not an Erlang distribution, then there exists a PH distribution, $F_2$, with no greater $r$-value (i.e., $r^{F_2}\leq r^{F_1}$). Let $Z$ be a mixture of $O$ and an Erlang-$k$ distribution, $E_k$ (i.e., $Z(\cdot) = pO(\cdot) + (1-p)E_k(\cdot)$), where $p$ is chosen such that $\mu_1^{Z}=\mu_1^{Y}$ and $m_2^{Z}=m_2^{Y}$. Such a $Z$ always exists, since the Erlang-$k$ distribution has the least $m_2$ among all $k$-phase PH distributions (in particular, $m_2^{E_k} \leq m_2^{Y}$) and $m_2^{Z}$ is an increasing function of $p$ ($m_2^{Z} = m_2^{E_k} / (1-p)$). Also observe that, by Proposition 5 and the inductive hypothesis, $r^{Z}\leq r^{Y}$. Let $F_2$ be the convolution of $X$ and $Z$, i.e., $F_2(\cdot) = X(\cdot) \ast Z(\cdot)$. We prove that $r^{F_2}\leq r^{F_1}$. Let $y = \mu_1^{Y} / \mu_1^{X}$. Then,

\begin{eqnarray*}
r^{F_1} & = & \frac{\left(r^{X}(m_2^{X})^2+3m_2^{X}y+3m_2^{Y}y^2+r^{Y}(m_2^{Y})^2y^3\right)(1+y)}{\left(m_2^{X}+2y+m_2^{Y}y^2\right)^2} \\
& \geq & \frac{\left(r^{X}(m_2^{X})^2+3m_2^{X}y+3m_2^{Z}y^2+r^{Z}(m_2^{Z})^2y^3\right)(1+y)}{\left(m_2^{X}+2y+m_2^{Z}y^2\right)^2}
\;=\; r^{F_2},
\end{eqnarray*}

where the inequality follows from $\mu_1^{Z}=\mu_1^{Y}$, $m_2^{Z}=m_2^{Y}$, and $r^{Z}\leq r^{Y}$.
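The closed form for the $r$-value of a convolution can be checked numerically against moments computed directly from the convolution. A sketch with hypothetical inputs ($X$ exponential with mean 2, $Y$ an Erlang-3 with mean 3; these specific values are not from the proof):

```python
# Check the closed form for the r-value of a convolution F = X * Y
# against moments computed directly from the convolution.
def conv_moments(a, b):
    """First three raw moments of X+Y from those of independent X and Y."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a1 + b1,
            a2 + 2 * a1 * b1 + b2,
            a3 + 3 * a2 * b1 + 3 * a1 * b2 + b3)

def r_of(m):
    mu1, mu2, mu3 = m
    return mu1 * mu3 / mu2 ** 2

X = (2.0, 8.0, 48.0)     # Exp with mean 2: mu_n = n! * 2^n
Y = (3.0, 12.0, 60.0)    # Erlang-3 at rate 1: mu_1=3, mu_2=12, mu_3=60

y = Y[0] / X[0]                              # y = mu_1^Y / mu_1^X
m2X, m2Y = X[1] / X[0] ** 2, Y[1] / Y[0] ** 2
rX, rY = r_of(X), r_of(Y)
closed = ((rX * m2X ** 2 + 3 * m2X * y + 3 * m2Y * y ** 2
           + rY * m2Y ** 2 * y ** 3) * (1 + y)
          / (m2X + 2 * y + m2Y * y ** 2) ** 2)
direct = r_of(conv_moments(X, Y))
print(closed, direct)    # both equal 1.23046875
```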

Finally, we show that the Erlang distribution has the least $r$-value. $F_2$ is the convolution of $X$ and $Z$, and it can also be seen as a mixture of $X$ and a distribution $F_3$, where $F_3(\cdot) = X(\cdot) \ast E_k(\cdot)$ (since $Z = pO + (1-p)E_k$ implies $F_2 = pX + (1-p)F_3$). Thus, by Lemma 2, at least one of $r^{X}\leq r^{F_2}$ and $r^{F_3}\leq r^{F_2}$ holds. When $r^{X}\leq r^{F_2}$, the $r$-value of the Erlang-$(k+1)$ distribution, $r^{E_{k+1}}$, is smaller than $r^{F_2}$, since $r^{E_{k+1}}<r^{X} \leq r^{F_2}$. When $r^{X}> r^{F_2}$ (and hence $r^{F_3}\leq r^{F_2}$), $r^{E_{k+1}}\leq r^{F_3}\leq r^{F_2}$ can be proved by showing that $r^{F_3}$ is minimized when $\mu_1^{X} = \mu_1^{E_k} / k$, at which point $F_3$ is exactly the Erlang-$(k+1)$ distribution. Let $y = \mu_1^{X} / \mu_1^{E_k}$. Then,

\begin{displaymath}
r^{F_3} = \frac{\left(r^{E_k}(m_2^{E_k})^2 + 3m_2^{E_k}y + 6y^2 + 6y^3\right)(1+y)}{\left(m_2^{E_k}+2y+2y^2\right)^2},
\end{displaymath}

where $r^{E_k}=(k+2) / (k+1)$ and $m_2^{E_k}=(k+1) / k$. Therefore,

\begin{displaymath}
\frac{\partial r^{F_3}}{\partial y} = \frac{2k(k+1)\left(6ky^2+6ky+k-1\right)}{k^3\left(\frac{k+1}{k}+2y+2y^2\right)^3}\left(y-\frac{1}{k}\right).
\end{displaymath}

Since $k\geq 1$, the factor $6ky^2+6ky+k-1$ is positive for all $y>0$; hence $r^{F_3}$ is decreasing for $y<1/k$ and increasing for $y>1/k$, and is therefore minimized at $y = 1/k$. $\Box$
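As a numerical sanity check (not part of the proof), one can evaluate $r^{F_3}$ on a grid and confirm that the minimum is attained at $y=1/k$, where $r^{F_3}$ equals $r^{E_{k+1}}=(k+3)/(k+2)$. The choice $k=4$ below is arbitrary.

```python
# Grid check that r^{F_3}(y) is minimized at y = 1/k, where its value
# matches r^{E_{k+1}} = (k+3)/(k+2).
def r_F3(k, y):
    m2 = (k + 1) / k          # m_2 of the Erlang-k distribution
    r = (k + 2) / (k + 1)     # r-value of the Erlang-k distribution
    num = (r * m2 ** 2 + 3 * m2 * y + 6 * y ** 2 + 6 * y ** 3) * (1 + y)
    return num / (m2 + 2 * y + 2 * y ** 2) ** 2

k = 4
ys = [i / 1000 for i in range(1, 3001)]      # grid over (0, 3]
y_min = min(ys, key=lambda y: r_F3(k, y))
print(y_min, r_F3(k, y_min), (k + 3) / (k + 2))
```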


Takayuki Osogami 2005-07-19