
  
5.3 Hierarchical controls of dynamic flowshops

We are given a stochastic process $\boldsymbol{m}(\varepsilon,t)=(m_1(\varepsilon,t),\ldots,m_N(\varepsilon,t))$ on the standard probability space $(\Omega,{\cal F},P)$, where $m_k(\varepsilon,t)$, $k=1,\ldots,N$, is the capacity of the $k$th machine at time $t$, and $\varepsilon$ is a small parameter to be specified later. We use $u_k^{\varepsilon}(t)$ to denote the input rate to the $k$th machine, $k=1,\ldots,N$, and $x_k^{\varepsilon}(t)$ to denote the number of parts in the buffer between the $k$th and $(k+1)$th machines, $k=1,\ldots,N-1$. We assume a constant demand rate $z$. The difference between cumulative production and cumulative demand, called the surplus, is denoted by $x_N^{\varepsilon}(t)$. If $x_N^{\varepsilon}(t)>0$, we have finished goods inventory, and if $x_N^{\varepsilon}(t)<0$, we have a backlog.

The dynamics of the system can then be written as follows:

\begin{displaymath}
\dot x_k^{\varepsilon}(t) = -a_k x_k^{\varepsilon}(t) + u_k^{\varepsilon}(t) - u_{k+1}^{\varepsilon}(t), \quad x_k^{\varepsilon}(0) = x_k^0, \quad k=1,\ldots,N, \qquad (5.21)
\end{displaymath}
where $u_{N+1}^{\varepsilon}(t)=z$ and the $a_k>0$ are constants. The attrition rate $a_k$ represents the deterioration rate of the inventory of part type $k$ when $x_k^{\varepsilon}(t)>0$ ($k=1,\ldots,N-1$), and it represents the rate of cancellation of backlogged orders for finished goods when $x_N^{\varepsilon}(t)<0$. We assume symmetric deterioration and cancellation rates for the finished good $N$ only for convenience in exposition. It would be easy to extend our results to the case in which $a_N^+>0$ denotes the deterioration rate and $a_N^->0$ denotes the order cancellation rate.
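To fix ideas, the following sketch integrates the dynamics (5.21) by forward Euler under a given control path. It is a minimal illustration, not part of the model: the policy u, the horizon T, and the step size dt are placeholders.

\begin{verbatim}
import numpy as np

# Forward-Euler simulation of the flowshop dynamics (5.21).
# u is a callable t -> (u_1(t),...,u_N(t)); the demand z plays the
# role of u_{N+1}(t).
def simulate_flowshop(x0, a, u, z, T, dt=1e-3):
    x = np.asarray(x0, dtype=float)   # buffers x_1..x_{N-1}, surplus x_N
    a = np.asarray(a, dtype=float)    # attrition rates a_1..a_N
    path = [x.copy()]
    for step in range(int(T / dt)):
        rates = np.append(u(step * dt), z)        # append u_{N+1}(t) = z
        # dx_k/dt = -a_k x_k + u_k - u_{k+1}
        x = x + dt * (-a * x + rates[:-1] - rates[1:])
        path.append(x.copy())
    return np.array(path)

# Example: two machines at full capacity, constant demand z = 0.5.
# path = simulate_flowshop([1.0, 0.0], [0.05, 0.05],
#                          lambda t: np.array([1.0, 0.8]), 0.5, T=10.0)
\end{verbatim}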

Equation (5.21) can be written in the following vector form:

\begin{displaymath}
\dot{\boldsymbol{x}}^{\varepsilon}(t) = -\mbox{diag}(\boldsymbol{a})\boldsymbol{x}^{\varepsilon}(t) + A\boldsymbol{u}^{\varepsilon}(t) + Bz, \quad \boldsymbol{x}^{\varepsilon}(0) = \boldsymbol{x}^0, \qquad (5.22)
\end{displaymath}
where $A$ and $B$ are given in Section 2.2, and $\boldsymbol{a}=(a_1,\ldots,a_N)$. Since the number of parts in the internal buffers cannot be negative, we impose the state constraints $x_k^{\varepsilon}(t)\geq 0$, $k=1,\ldots,N-1$. To formulate the problem precisely, let $S=[0,\infty)^{N-1}\times(-\infty,\infty)\subseteq R^N$ denote the state constraint domain. For $\boldsymbol{m}=(m_1,\ldots,m_N)$, $m_k\geq 0$, $k=1,\ldots,N$, let
\begin{displaymath}
U(\boldsymbol{m}) = \{\boldsymbol{u}=(u_1,\ldots,u_N): 0 \leq u_k \leq m_k,\ k=1,\ldots,N\}, \qquad (5.23)
\end{displaymath}
and for $\boldsymbol{x}\in S$ let
\begin{displaymath}
U(\boldsymbol{x},\boldsymbol{m}) = \{\boldsymbol{u}: \boldsymbol{u}\in U(\boldsymbol{m}) \ \mbox{and} \ x_k=0 \Rightarrow u_k-u_{k+1}\geq 0,\ k=1,\ldots,N-1\}. \qquad (5.24)
\end{displaymath}
Let the sigma algebra ${\cal F}_t^{\varepsilon}=\sigma\{\boldsymbol{m}(\varepsilon,s): 0\leq s\leq t\}$. We now define the concept of admissible controls.

Definition 5.5   We say that a control $\boldsymbol{u}^{\varepsilon}(\cdot)=(u_1^{\varepsilon}(\cdot),\ldots,u_N^{\varepsilon}(\cdot))$ is admissible with respect to the initial state vector $\boldsymbol{x}^0=(x_1^0,\ldots,x_N^0)\in S$ if

(i)
$\boldsymbol{u}^{\varepsilon}(\cdot)$ is an ${\cal F}_t^{\varepsilon}$-adapted measurable process;
(ii)
$\boldsymbol{u}^{\varepsilon}(t) \in U(\boldsymbol{m}(\varepsilon,t))$ for all $t\geq 0$;
(iii)
the corresponding state process $\boldsymbol{x}^{\varepsilon}(t)=(x_1^{\varepsilon}(t),\ldots,x_N^{\varepsilon}(t))\in S$ for all $t\geq 0$.
The problem is to find an admissible control $\boldsymbol{u}^{\varepsilon}(\cdot)$ that minimizes the cost function
\begin{displaymath}
J^{\varepsilon}(\boldsymbol{x}^0, \boldsymbol{m}^0, \boldsymbol{u}^{\varepsilon}(\cdot)) = \limsup_{T \rightarrow \infty} \frac{1}{T}\, E \int_0^T \big[ h(\boldsymbol{x}^{\varepsilon}(t)) + c(\boldsymbol{u}^{\varepsilon}(t)) \big]\,dt, \qquad (5.25)
\end{displaymath}
where $h(\cdot)$ defines the cost of inventory/shortage, $c(\cdot)$ is the production cost, and $\boldsymbol{m}^0$ is the initial value of $\boldsymbol{m}(\varepsilon,t)$. We impose the following assumptions on the random process $\boldsymbol{m}(\varepsilon,t)=(m_1(\varepsilon,t),\ldots,m_N(\varepsilon,t))$ and on the cost functions $h(\cdot)$ and $c(\cdot)$ throughout this section.

Assumption 5.2   Let ${\cal M}=\{\boldsymbol{m}^1,\ldots,\boldsymbol{m}^p\}$ for some given integer $p\geq 1$, where $\boldsymbol{m}^j=(m_1^j,\ldots,m_N^j)$, with $m_k^j$, $k=1,\ldots,N$, denoting the capacity of the $k$th machine, $j=1,\ldots,p$. The capacity process $\boldsymbol{m}(\varepsilon,t)\in{\cal M}$ is a finite state Markov chain with the infinitesimal generator $Q=Q^{(1)}+\varepsilon^{-1}Q^{(2)}$, where $Q^{(1)}=(q_{ij}^{(1)})$ and $Q^{(2)}=(q_{ij}^{(2)})$ are matrices such that $q_{ij}^{(r)}\geq 0$ if $j\neq i$, and $q_{ii}^{(r)}=-\sum_{j\neq i}q_{ij}^{(r)}$ for $r=1,2$. Moreover, $Q^{(2)}$ is irreducible and, without any loss of generality, it is taken to satisfy

\begin{displaymath}
\min_{i,j}\{\vert q_{ij}^{(2)}\vert : q_{ij}^{(2)}\neq 0\} = 1.
\end{displaymath}
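The two-time-scale structure of Assumption 5.2 can be sampled directly. The sketch below simulates one path of the capacity process by the standard jump-chain construction for a finite-state Markov chain with generator $Q^{(1)}+\varepsilon^{-1}Q^{(2)}$; the matrices Q1 and Q2 and all other names are illustrative placeholders, assumed to be valid generators (nonnegative off-diagonal entries, rows summing to zero).

\begin{verbatim}
import numpy as np

def sample_capacity_path(Q1, Q2, eps, i0, T, seed=0):
    # Jump-chain simulation of a Markov chain with generator
    # Q = Q1 + (1/eps) * Q2; as eps shrinks, jumps become frequent.
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q1, float) + np.asarray(Q2, float) / eps
    t, i, jumps = 0.0, i0, [(0.0, i0)]
    while True:
        rate = -Q[i, i]                 # total jump intensity in state i
        if rate <= 0:                   # absorbing state (excluded when
            break                       # Q2 is irreducible)
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        p = np.clip(Q[i], 0.0, None)    # off-diagonal jump rates
        p[i] = 0.0
        i = int(rng.choice(len(p), p=p / p.sum()))
        jumps.append((t, i))            # (jump time, new state index)
    return jumps
\end{verbatim}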

Assumption 5.3   Assume that $Q^{(2)}$ is weakly irreducible. Let $\gamma=(\gamma_1,\ldots,\gamma_p)$ denote the equilibrium distribution of $Q^{(2)}$. That is, $\gamma$ is the only nonnegative solution to the equation

\begin{displaymath}
\gamma Q^{(2)} = 0 \quad \mbox{and} \quad \sum_{i=1}^p \gamma_i = 1. \qquad (5.26)
\end{displaymath}
Furthermore, we assume that
\begin{displaymath}
\min_{1\leq k\leq N} \left\{ \sum_{j=1}^p \gamma_j m_k^j \right\} > z. \qquad (5.27)
\end{displaymath}
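Computationally, $\gamma$ is obtained from the linear system (5.26), and condition (5.27) is then a one-line check. The sketch below stacks the normalization onto the transposed system; the capacity matrix m (rows indexed by $j$, columns by $k$) and the demand z are illustrative inputs.

\begin{verbatim}
import numpy as np

def equilibrium_distribution(Q2):
    # Solve gamma Q2 = 0 together with sum(gamma) = 1, as in (5.26).
    p = Q2.shape[0]
    A = np.vstack([np.asarray(Q2, float).T, np.ones(p)])
    b = np.append(np.zeros(p), 1.0)
    gamma, *_ = np.linalg.lstsq(A, b, rcond=None)
    return gamma

def average_capacity_exceeds_demand(gamma, m, z):
    # (5.27): min over machines k of sum_j gamma_j m_k^j must exceed z.
    return float((gamma @ np.asarray(m, float)).min()) > z
\end{verbatim}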
Assumption 5.4   $h(\cdot)$ and $c(\cdot)$ are non-negative convex functions. For all $\boldsymbol{x},\boldsymbol{x}'\in S$ and $\boldsymbol{u},\boldsymbol{u}'\in U(\boldsymbol{m}^j)$, $j=1,\ldots,p$, there exist constants $C$ and $\kappa_{51}\geq 1$ such that
\begin{displaymath}
\vert h(\boldsymbol{x})-h(\boldsymbol{x}')\vert \leq C\big(1+\vert\boldsymbol{x}\vert^{\kappa_{51}}+\vert\boldsymbol{x}'\vert^{\kappa_{51}}\big)\,\vert\boldsymbol{x}-\boldsymbol{x}'\vert
\end{displaymath}
and
\begin{displaymath}
\vert c(\boldsymbol{u})-c(\boldsymbol{u}')\vert \leq C\,\vert\boldsymbol{u}-\boldsymbol{u}'\vert.
\end{displaymath}
We use ${\cal A}^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0)$ to denote the set of all admissible controls with respect to $\boldsymbol{x}^0\in S$ and $\boldsymbol{m}(\varepsilon,0)=\boldsymbol{m}^0$. Let $\lambda^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0)$ denote the minimal expected cost, i.e.,
\begin{displaymath}
\lambda^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0) = \inf_{\boldsymbol{u}^{\varepsilon}(\cdot)\in{\cal A}^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0)} J^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0,\boldsymbol{u}^{\varepsilon}(\cdot)). \qquad (5.28)
\end{displaymath}
We know, by Theorem 2.4 in [], that under Assumption 5.3, $\lambda^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0)$ is independent of the initial condition $(\boldsymbol{x}^0,\boldsymbol{m}^0)$. Thus we will write $\lambda^{\varepsilon}$ instead of $\lambda^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0)$. We use ${\cal P}^{\varepsilon}$ to denote our control problem, i.e.,
\begin{displaymath}
{\cal P}^{\varepsilon}:\left\{
\begin{array}{ll}
\mbox{minimize} & J^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0,\boldsymbol{u}^{\varepsilon}(\cdot)), \\
\mbox{subject to} & \dot{\boldsymbol{x}}^{\varepsilon}(t) = -\mbox{diag}(\boldsymbol{a})\boldsymbol{x}^{\varepsilon}(t) + A\boldsymbol{u}^{\varepsilon}(t) + Bz, \ \boldsymbol{x}^{\varepsilon}(0)=\boldsymbol{x}^0, \\
 & \boldsymbol{u}^{\varepsilon}(\cdot)\in{\cal A}^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0), \\
\mbox{minimum cost} & \lambda^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0) = \inf_{\boldsymbol{u}^{\varepsilon}(\cdot)\in{\cal A}^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0)} J^{\varepsilon}(\boldsymbol{x}^0,\boldsymbol{m}^0,\boldsymbol{u}^{\varepsilon}(\cdot)).
\end{array}
\right. \qquad (5.29)
\end{displaymath}
As in Fleming and Zhang (1998), the positive attrition rate $\boldsymbol{a}$ implies a uniform bound for $\boldsymbol{x}^{\varepsilon}(t)$. Next we examine elementary properties of the relative cost, also known as the potential function, and obtain the limiting control problem as $\varepsilon\rightarrow 0$. The Hamilton-Jacobi-Bellman equation in terms of directional derivatives (HJBDD) associated with the average-cost optimal control problem ${\cal P}^{\varepsilon}$, as shown in Sethi, Zhang, and Zhang (1998), takes the form
\begin{displaymath}
\lambda^{\varepsilon} = \inf_{\boldsymbol{u}\in U(\boldsymbol{x},\boldsymbol{m}^j)} \left\{ \frac{\partial w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m}^j)}{\partial(-\mbox{diag}(\boldsymbol{a})\boldsymbol{x}+A\boldsymbol{u}+Bz)} + c(\boldsymbol{u}) \right\} + h(\boldsymbol{x}) + \left( Q^{(1)} + \frac{1}{\varepsilon} Q^{(2)} \right) w^{\varepsilon}(\boldsymbol{x},\cdot)(\boldsymbol{m}^j), \qquad (5.30)
\end{displaymath}
where $w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m}^j)$ is the potential function of the problem ${\cal P}^{\varepsilon}$, $\frac{\partial w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m}^j)}{\partial(-\mbox{diag}(\boldsymbol{a})\boldsymbol{x}+A\boldsymbol{u}+Bz)}$ denotes the directional derivative of $w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m}^j)$ along the direction $-\mbox{diag}(\boldsymbol{a})\boldsymbol{x}+A\boldsymbol{u}+Bz$, and $Qf(\cdot)(\boldsymbol{m}^j):=\sum_{i\neq j}q_{ji}(f(\boldsymbol{m}^i)-f(\boldsymbol{m}^j))$ for any function $f(\cdot)$ on ${\cal M}$. Moreover, following Presman, Sethi, and Zhang (1999b), we can show that there exists a potential function $w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m})$ such that the pair $(\lambda^{\varepsilon}, w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m}))$ is a solution of (5.30), where $\lambda^{\varepsilon}$ is the minimum average expected cost for ${\cal P}^{\varepsilon}$.
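The generator notation in (5.30) is just a weighted sum of differences of $f$ across capacity states; a short sketch, with Q and f_values as placeholders for the generator and the values $f(\boldsymbol{m}^1),\ldots,f(\boldsymbol{m}^p)$:

\begin{verbatim}
def generator_apply(Q, f_values, j):
    # Qf(.)(m^j) = sum over i != j of q_{ji} * (f(m^i) - f(m^j))
    return sum(Q[j][i] * (f_values[i] - f_values[j])
               for i in range(len(f_values)) if i != j)
\end{verbatim}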

The analysis of the problem begins with the boundedness of $\lambda^{\varepsilon}$, proved in Sethi, Zhang, and Zhang (1999a).

Theorem 5.7   The minimum average expected cost $\lambda^{\varepsilon}$ of ${\cal P}^{\varepsilon}$ is bounded in $\varepsilon$, i.e., there exists a constant $M_1>0$ such that

\begin{displaymath}
0 \leq \lambda^{\varepsilon} \leq M_1 \ \ \mbox{for all} \ \varepsilon > 0.
\end{displaymath}

Next we derive the limiting control problem as $\varepsilon\rightarrow 0$. Intuitively, as the rates of machine breakdown and repair approach infinity, the problem ${\cal P}^{\varepsilon}$, termed the original problem, can be approximated by a simpler problem called the limiting problem, in which the stochastic machine capacity process $\boldsymbol{m}(\varepsilon,t)$ is replaced by its average with respect to the equilibrium distribution $\gamma$. The limiting problem, first introduced in Sethi, Zhang, and Zhou (1994), is formulated as follows.

As in Sethi and Zhang (1994c), we consider the enlarged control space

\begin{displaymath}
U(\cdot) = (\boldsymbol{u}^1(\cdot),\ldots,\boldsymbol{u}^p(\cdot)) = ((u_1^1(\cdot),\ldots,u_N^1(\cdot)),\ldots,(u_1^p(\cdot),\ldots,u_N^p(\cdot)))
\end{displaymath}


such that $0\leq u_k^j(t)\leq m_k^j$ for all $t\geq 0$, $j=1,\ldots,p$, and $k=1,\ldots,N$, and the corresponding solution of the system

\begin{eqnarray*}
\left\{
\begin{array}{ll}
\dot x_k(t) = -a_k x_k(t) + \sum_{j=1}^p \gamma_j u_k^j(t) - \sum_{j=1}^p \gamma_j u_{k+1}^j(t), & x_k(0)=x_k^0, \ k=1,\ldots,N-1, \\[2pt]
\dot x_N(t) = -a_N x_N(t) + \big( \sum_{j=1}^p \gamma_j u_N^j(t) - z \big), & x_N(0)=x_N^0,
\end{array}
\right.
\end{eqnarray*}
satisfies $\boldsymbol{x}(t)\in S$ for all $t\geq 0$. Let ${\cal A}^0(\boldsymbol{x})$ represent the set of all these controls with $\boldsymbol{x}(0)=\boldsymbol{x}$. The objective is to choose a control $U(\cdot)\in{\cal A}^0(\boldsymbol{x})$ that minimizes
\begin{displaymath}
J(U(\cdot)) = \limsup_{T\rightarrow\infty} \frac{1}{T} \int_0^T \Big[ h(\boldsymbol{x}(s)) + \sum_{j=1}^p \gamma_j c(\boldsymbol{u}^j(s)) \Big]\,ds.
\end{displaymath}


We use ${\cal P}^0$ to denote this limiting problem:

\begin{displaymath}
{\cal P}^0:\left\{
\begin{array}{ll}
\mbox{minimize} & J(U(\cdot)) = \limsup_{T\rightarrow\infty}\frac{1}{T}\int_0^T \big[ h(\boldsymbol{x}(s)) + \sum_{j=1}^p \gamma_j c(\boldsymbol{u}^j(s)) \big]\,ds, \\
\mbox{subject to} & U(\cdot)\in{\cal A}^0(\boldsymbol{x}), \\
\mbox{minimum cost} & \lambda = \inf_{U(\cdot)\in{\cal A}^0(\boldsymbol{x})} J(U(\cdot)).
\end{array}
\right.
\end{displaymath}


The average cost optimality equation associated with the limiting control problem ${\cal P}^0$ is

\begin{displaymath}
\lambda = \inf_{U=(\boldsymbol{u}^1,\ldots,\boldsymbol{u}^p),\ \boldsymbol{u}^j\in U(\boldsymbol{x},\boldsymbol{m}^j)} \left\{ \frac{\partial w(\boldsymbol{x})}{\partial(-\mbox{diag}(\boldsymbol{a})\boldsymbol{x}+AU^0+Bz)} + \sum_{j=1}^p \gamma_j c(\boldsymbol{u}^j) \right\} + h(\boldsymbol{x}), \qquad (5.31)
\end{displaymath}
where $w(\boldsymbol{x})$ is a potential function for ${\cal P}^0$ and $\frac{\partial w(\boldsymbol{x})}{\partial(-\mbox{diag}(\boldsymbol{a})\boldsymbol{x}+AU^0+Bz)}$ is the directional derivative of $w(\boldsymbol{x})$ along the direction $-\mbox{diag}(\boldsymbol{a})\boldsymbol{x}+AU^0+Bz$ with $U^0=\sum_{j=1}^p\gamma_j\boldsymbol{u}^j$. From Presman, Sethi, and Zhang (1999a), we know that there exist $\lambda$ and $w(\boldsymbol{x})$ such that (5.31) holds. Moreover, $w(\boldsymbol{x})$ is the limit of $w^{\varepsilon}(\boldsymbol{x},\boldsymbol{m})$ as $\varepsilon\rightarrow 0$. Hierarchical controls are based on the convergence of the minimum average expected cost $\lambda^{\varepsilon}$ as $\varepsilon$ goes to zero. Thus we will consider the convergence, as well as its rate. To do this, we first give without proof the following lemma, similar to Lemma C.3 of Sethi and Zhang (1994a).

Lemma 5.4   Let

\begin{displaymath}
\Phi(t) = \Phi(\boldsymbol{m}(\varepsilon,t)) = \big( I_{\{\boldsymbol{m}(\varepsilon,t)=\boldsymbol{m}^1\}},\ldots, I_{\{\boldsymbol{m}(\varepsilon,t)=\boldsymbol{m}^p\}} \big)^T.
\end{displaymath}


Then, for any bounded deterministic measurable process $\beta(\cdot)$, any $\delta\in[0,\frac{1}{2})$, and any Markov time $\tau$ with respect to $\boldsymbol{m}(\varepsilon,\cdot)$, there exist positive constants $C$ and $\kappa_{52}$ such that

\begin{eqnarray*}
P\left( \left\vert \int_{\tau}^{\tau+t} (\Phi(s)-\gamma^T)\beta(s)\,ds \right\vert \geq \varepsilon^{\delta}(1+t) \right) \leq C\, e^{-\kappa_{52}\,\varepsilon^{-(1-2\delta)}(1+t)^{-3}},
\end{eqnarray*}
for all $t\geq 0$ and sufficiently small $\varepsilon$.
 

To obtain the required convergence result, we need the following auxiliary lemma, which is the key step in the argument.

Lemma 5.5   For $\delta\in[0,\frac{1}{2})$ and any sufficiently small $\varepsilon>0$, there exist $C_{59}>0$, $\boldsymbol{x}=(x_1,\ldots,x_N)\in S$, and

\begin{eqnarray*}
U(\varepsilon,\cdot) = (\boldsymbol{u}^1(\varepsilon,\cdot),\ldots,\boldsymbol{u}^p(\varepsilon,\cdot)) = ((u_1^1(\varepsilon,\cdot),\ldots,u_N^1(\varepsilon,\cdot)),\ldots,(u_1^p(\varepsilon,\cdot),\ldots,u_N^p(\varepsilon,\cdot))) \in {\cal A}^0(\boldsymbol{x})
\end{eqnarray*}
such that for each $k=1,\ldots,N$,
\begin{displaymath}
\varepsilon^{\delta} \leq \sum_{i=1}^p \gamma_i u_k^i(\varepsilon,\cdot) \leq \sum_{i=1}^p \gamma_i m_k^i - \varepsilon^{\delta}, \qquad (5.32)
\end{displaymath}
and
\begin{displaymath}
\lambda + C_{59}\,\varepsilon^{\delta} > \limsup_{T\rightarrow\infty} \frac{1}{T} \int_0^T \Big[ h(\boldsymbol{x}(\varepsilon,t)) + \sum_{i=1}^p \gamma_i c(\boldsymbol{u}^i(\varepsilon,t)) \Big]\,dt, \qquad (5.33)
\end{displaymath}
where $\boldsymbol{x}(\varepsilon,t)$ is the trajectory under $U(\varepsilon,t)$.
 

For the proof, see Sethi, Zhang, and Zhang (1999a). With the help of Lemma 5.5, the same paper gives the following lemma.

Lemma 5.6   For $\delta\in[0,\frac{1}{2})$, there exist $\hat C_{51}>0$, $\hat C_{52}>0$, $\boldsymbol{x}=(x_1,\ldots,x_N)\in S$, and

\begin{eqnarray*}
U(\varepsilon,\cdot) = (\boldsymbol{u}^1(\varepsilon,\cdot),\ldots,\boldsymbol{u}^p(\varepsilon,\cdot)) \in {\cal A}^0(\boldsymbol{x}),
\end{eqnarray*}
such that
\begin{displaymath}
\min_{1\leq k\leq N-1} \inf_{0\leq t<\infty} x_k(\varepsilon,t) \geq \hat C_{51}\,\varepsilon^{\delta}, \qquad (5.34)
\end{displaymath}
and
\begin{displaymath}
\lambda + \hat C_{52}\,\varepsilon^{\delta} > \limsup_{T\rightarrow\infty} \frac{1}{T} \int_0^T \Big[ h(\boldsymbol{x}(\varepsilon,t)) + \sum_{i=1}^p \gamma_i c(\boldsymbol{u}^i(\varepsilon,t)) \Big]\,dt, \qquad (5.35)
\end{displaymath}
where $\boldsymbol{x}(\varepsilon,t)=(x_1(\varepsilon,t),\ldots,x_N(\varepsilon,t))$ is the state trajectory under the control $U(\varepsilon,t)$.
 

With Lemmas 5.4, 5.5, and 5.6, we can state the main result of this section, proved in Sethi, Zhang, and Zhang (1999a).

Theorem 5.8   For any $\delta\in[0,\frac{1}{2})$, there exists a constant $\hat C_{53}>0$ such that for all sufficiently small $\varepsilon>0$,

\begin{displaymath}
\vert \lambda^{\varepsilon} - \lambda \vert \leq \hat C_{53}\,\varepsilon^{\delta}. \qquad (5.36)
\end{displaymath}

This implies in particular that $\lim_{\varepsilon\rightarrow 0}\lambda^{\varepsilon}=\lambda$.
 

Finally, we give a procedure for constructing an asymptotically optimal control.

Construction of an Asymptotically Optimal Control

Step I: Pick an $\varepsilon$-optimal control $U(\cdot)=(\boldsymbol{u}^1(\cdot),\ldots,\boldsymbol{u}^p(\cdot))\in{\cal A}^0$ for ${\cal P}^0$, i.e.,

\begin{eqnarray*}
\limsup_{T\rightarrow\infty}\frac{1}{T}\int^T_0 \Big[ h(\boldsymbol{x}(t)) + \sum_{j=1}^p \gamma_j c(\boldsymbol{u}^j(t)) \Big]\,dt < \lambda + \varepsilon.
\end{eqnarray*}
Let
\begin{displaymath}{\cal L}(k)=\{\ell: m_k^{\ell}\neq 0\}, \ \ k=1,...,N.\end{displaymath}


Furthermore, let

\begin{eqnarray*}M=\max_{1 \leq k \leq N}\left\{\sum_{j=1}^p\gamma_j m^j_k\right\}\end{eqnarray*}
and
\begin{displaymath}\bar M=\frac{M}{M-2\varepsilon^{\delta}}+1.\end{displaymath}


Define

\begin{eqnarray*}
&&\bar u_1^j(t)=\left\{
\begin{array}{ll}
u_1^j(t) \vee (\bar M \varepsilon^{\delta}), & \mbox{if} \ j\in{\cal L}(1), \\
0, & \mbox{otherwise},
\end{array}
\right. \\
&&\bar u_k^j(t)=u_k^j(t), \ \ \ j=1,\ldots,p, \ k=2,\ldots,N.
\end{eqnarray*}
Then we get the control
\begin{displaymath}
\bar U(t) = (\bar{\boldsymbol{u}}^1(t),\ldots,\bar{\boldsymbol{u}}^p(t)) \in {\cal A}^0.
\end{displaymath}


This step can be called partial pathwise lifting.
 

Step II: Define

\begin{displaymath}
\hat u_k^j(t) = \frac{1}{1+2\varepsilon^{\delta}/(M-2\varepsilon^{\delta})}\,\bar u_k^j(t), \ \ j=1,\ldots,p, \ k=1,\ldots,N.
\end{displaymath}


Then we get the control

\begin{displaymath}
\hat U(t) = (\hat{\boldsymbol{u}}^1(t),\ldots,\hat{\boldsymbol{u}}^p(t)) \in {\cal A}^0.
\end{displaymath}


This step can be called pathwise shrinking.
 

Step III: We choose

\begin{displaymath}
\tilde U(t) = (\tilde{\boldsymbol{u}}^1(t),\ldots,\tilde{\boldsymbol{u}}^p(t))
\end{displaymath}


such that

\begin{displaymath}\sum_{j=1}^p \gamma_j \tilde u^j_k(t)=\sum_{j=1}^p\gamma_j\hat u^j_k(t)+\frac{\varepsilon^{\delta}}{k}, \ \ k=1,...,N.\end{displaymath}


This step can be called entire pathwise lifting.
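At a fixed time $t$, Steps II and III reduce to elementwise operations on the $p\times N$ array of rates. The sketch below records them; since $\sum_{j=1}^p\gamma_j=1$, adding the same increment $\varepsilon^{\delta}/k$ to every $\hat u_k^j$ realizes the displayed condition of Step III, though this uniform choice is only one admissible possibility, with all names illustrative.

\begin{verbatim}
import numpy as np

def pathwise_shrink(u_bar, M, eps, delta):
    # Step II: divide every rate by 1 + 2 eps^delta / (M - 2 eps^delta).
    return u_bar / (1.0 + 2.0 * eps**delta / (M - 2.0 * eps**delta))

def entire_pathwise_lift(u_hat, eps, delta):
    # Step III: raise the gamma-averaged rate of machine k by eps^delta/k;
    # here the same increment is added in every capacity state j.
    p, N = u_hat.shape
    return u_hat + (eps**delta / np.arange(1, N + 1))[None, :]
\end{verbatim}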
 

Step IV: Set

\begin{eqnarray*}
\boldsymbol{v}^{\varepsilon}(t) = (v_1^{\varepsilon}(t),\ldots,v_N^{\varepsilon}(t)) = \sum_{j=1}^p I_{\{\boldsymbol{m}(\varepsilon,t)=\boldsymbol{m}^j\}}\,\tilde{\boldsymbol{u}}^j(t)
\end{eqnarray*}
and
\begin{eqnarray*}
\left\{
\begin{array}{ll}
y_k^{\varepsilon}(t) = (x_k+\varepsilon^{\delta})e^{-a_k t} + e^{-a_k t}\int^t_0 e^{a_k s}\big(v_k^{\varepsilon}(s)-v_{k+1}^{\varepsilon}(s)\big)\,ds, & k=1,\ldots,N-1, \\[2pt]
y_N^{\varepsilon}(t) = x_N e^{-a_N t} + e^{-a_N t}\int^t_0 e^{a_N s}\big(v_N^{\varepsilon}(s)-z\big)\,ds.
\end{array}
\right.
\end{eqnarray*}
Set
\begin{displaymath}u^{\varepsilon}_1(t)=v^{\varepsilon}_1(t).\end{displaymath}


Sub-step $n$ ($n=2,\ldots,N$): Set

\begin{displaymath}
\bar u_n^{\varepsilon}(t) = \big( u_{n-1}^{\varepsilon}(t) - v_{n-1}^{\varepsilon}(t) + v_n^{\varepsilon}(t) \big)^+,
\end{displaymath}
\begin{displaymath}
z_{n-1}^{\varepsilon}(t) = y_{n-1}^{\varepsilon}(t) - \inf_{0\leq s\leq t} y_{n-1}^{\varepsilon}(s),
\end{displaymath}
\begin{displaymath}
B_{n-1}^{\varepsilon} = \{ t : z_{n-1}^{\varepsilon}(t)=0 \},
\end{displaymath}
\begin{eqnarray*}
u_n^{\varepsilon}(t) = \left\{
\begin{array}{ll}
\bar u_n^{\varepsilon}(t), & \mbox{if} \ t\in B_{n-1}^{\varepsilon}, \\
v_n^{\varepsilon}(t), & \mbox{if} \ t\not\in B_{n-1}^{\varepsilon}.
\end{array}
\right.
\end{eqnarray*}
Then we get the control $\boldsymbol{u}^{\varepsilon}(t)=(u_1^{\varepsilon}(t),\ldots,u_N^{\varepsilon}(t))$.
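On a sampled time grid, each sub-step amounts to a running-minimum reflection followed by a pointwise switch. The sketch below assumes the reflected form of $z_{n-1}^{\varepsilon}$ displayed above; the array names and the tolerance tol are illustrative.

\begin{verbatim}
import numpy as np

def substep_control(y_prev, u_prev, v_prev, v_n, tol=1e-12):
    # y_prev, u_prev, v_prev, v_n: paths sampled on a common time grid.
    z = y_prev - np.minimum.accumulate(y_prev)   # z_{n-1} = y_{n-1} - running inf
    u_bar = np.maximum(u_prev - v_prev + v_n, 0.0)
    # Use the truncated rate on {z_{n-1} = 0} (buffer n-1 empty), else v_n.
    return np.where(z <= tol, u_bar, v_n)
\end{verbatim}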