4.3 Optimal control of dynamic jobshops
We consider the dynamic jobshop given by (2.13)-(2.15) in Section 3.3, but the Markov process modeling the machine capacities is now denoted by $k(\cdot) = (k_1(\cdot),\dots,k_m(\cdot))$, without the parameter $\varepsilon$. Correspondingly, the concept of an admissible control is modified as follows.
Definition 4.1 We say that a control $u(\cdot)$ is admissible with respect to the initial state vector $x$ and the initial capacity state $k$, if:

(i) $u(\cdot)$ is an $\{\mathcal{F}_t\}$-adapted process with $u(t) \ge 0$ for all $t \ge 0$;

(ii) $u(t)$ satisfies the machine capacity constraints determined by $k(t)$ for all $t \ge 0$;

(iii) the corresponding state process $x(\cdot)$ remains in the state constraint domain $\mathcal{S}$ (the internal buffer levels stay non-negative) for all $t \ge 0$, where

$$\dot{x}_j(t) \;=\; \sum_{i:\,(i,j)} u_{ij}(t) \;-\; \sum_{i:\,(j,i)} u_{ji}(t) \;-\; z_j, \qquad\qquad (4.24)$$

with $z_j = 0$ for the internal buffers, and with the initial condition $x(0) = x$.
Let $\mathcal{A}(x,k)$ denote the set of all admissible controls with the initial conditions $x(0) = x$ and $k(0) = k$.
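The admissibility conditions above can be made concrete with a small simulation. The sketch below is a hypothetical one-machine, one-product surrogate of the jobshop dynamics (surplus equation $\dot{x} = u - z$, a two-state capacity process, and a zero-inventory hedging policy); the function name and all numerical values are illustrative assumptions, not part of the model in the text.

```python
import random

def simulate_jobshop_surrogate(n_steps=20000, dt=0.01, z=0.5,
                               q_up=2.0, q_down=1.0, x0=2.0, seed=1):
    """One-machine stand-in for the jobshop dynamics (4.24):
    xdot(t) = u(t) - z, with a two-state Markov capacity k(t) in {0, 1}
    (rate q_up for 0 -> 1, rate q_down for 1 -> 0) and a hedging-point
    feedback control.  Illustrative only."""
    rng = random.Random(seed)
    k, x = 1, x0
    traj = []
    for _ in range(n_steps):
        # capacity process: Euler approximation of the Markov switching
        rate = q_down if k == 1 else q_up
        if rng.random() < rate * dt:
            k = 1 - k
        # admissible control: 0 <= u(t) <= k(t), adapted to the history
        u = float(k) if x < 0.0 else 0.0   # hedging point at x = 0
        x += (u - z) * dt                  # surplus dynamics
        traj.append(x)
    return traj

traj = simulate_jobshop_surrogate()
```

Condition (iii) of Definition 4.1 would additionally require the internal buffer components of the state to stay non-negative; this scalar surrogate has no internal buffers, so only conditions (i) and (ii) are visible here.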
Our problem here is to find an admissible control $u(\cdot)$ that minimizes the long-run average cost function

$$J(x,k,u(\cdot)) \;=\; \limsup_{T\to\infty}\,\frac{1}{T}\,E\int_0^T G(x(t),u(t))\,dt, \qquad\qquad (4.25)$$

where $G(x,u)$ defines the cost of surplus and production, and $x$ is the initial value of $x(\cdot)$.
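A finite-horizon estimate of (4.25) simply averages the running cost along a simulated trajectory. The sketch below assumes a hypothetical piecewise-linear surplus cost $G(x,u) = c^{+}\max(x,0) + c^{-}\max(-x,0)$; the constants and the independence of $G$ from $u$ are assumptions made for illustration only.

```python
def average_cost(traj, dt, c_plus=1.0, c_minus=2.0):
    """Finite-T estimate of the long-run average cost (4.25):
    (1/T) * integral of G(x(t), u(t)) dt over [0, T], with the
    illustrative cost G(x, u) = c_plus*max(x, 0) + c_minus*max(-x, 0)."""
    run = sum(c_plus * max(x, 0.0) + c_minus * max(-x, 0.0) for x in traj)
    # the dt factors in the integral and in 1/T = 1/(len(traj)*dt) cancel
    return run / len(traj)

# deterministic sanity check: constant surplus x = 1 costs c_plus per unit time
cost_pos = average_cost([1.0] * 100, dt=0.01)
```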
We impose the following assumptions on the process $k(\cdot)$ and the cost function $G(x,u)$ throughout this section.
Assumption 4.5 Let $\mathcal{M} = \{k^1,\dots,k^p\}$ for some integer $p \ge 1$, where $k^j = (k_1^j,\dots,k_m^j)$. The machine capacity process $k(t) = (k_1(t),\dots,k_m(t))$ is a finite state Markov chain with the following infinitesimal generator $Q$:

$$Q\varphi(k) \;=\; \sum_{k'\ne k} q_{kk'}\bigl[\varphi(k') - \varphi(k)\bigr]$$

for some $q_{kk'} \ge 0$ and any function $\varphi(\cdot)$ on $\mathcal{M}$. Moreover, the Markov process is strongly irreducible and has the stationary distribution $\nu_{k^j}$, $j = 1,\dots,p$.
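Strong irreducibility in Assumption 4.5 guarantees a unique stationary distribution $\nu$, obtained by solving $\nu Q = 0$ together with $\sum_k \nu_k = 1$. A minimal pure-Python sketch, where the two-state generator used in the example is invented for illustration:

```python
def stationary_distribution(Q):
    """Stationary distribution nu of a strongly irreducible generator Q
    (rows of Q sum to zero): solve nu Q = 0 with sum(nu) = 1, replacing
    one redundant equation by the normalization.  Gaussian elimination
    with partial pivoting; Q is a list of row lists."""
    n = len(Q)
    # transpose Q (we solve Q^T nu = 0) and replace the last equation
    # by the normalization sum(nu) = 1
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    nu = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * nu[c] for c in range(r + 1, n))
        nu[r] = s / A[r][r]
    return nu

# invented two-state example: rate 1 for up->down... no: rate 1 for 0->1, rate 2 for 1->0
nu = stationary_distribution([[-1.0, 1.0], [2.0, -2.0]])
```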
Assumption 4.6 Let $p_n = \sum_{j=1}^{p} \nu_{k^j}\, k_n^j$ for $n = 1,\dots,m$. Here $p_n$ represents the average capacity of the machine $n$, and $n(i,j)$ is the index of the machine placed on the arc $(i,j)$. Assume that there exist constants $c_{ij} > 0$ such that the capacity-demand conditions (4.26), (4.27), and (4.28) hold.
Assumption 4.7 $G(x,u)$ is a non-negative, jointly convex function that is strictly convex in either $x$ or $u$ or both. For all $x$, $\hat{x}$ and $u$, there exist constants $C$ and $\beta \ge 1$ such that

$$|G(x,u) - G(\hat{x},u)| \;\le\; C\bigl(1 + |x|^{\beta-1} + |\hat{x}|^{\beta-1}\bigr)\,|x - \hat{x}|.$$
Let $\lambda^*(x,k)$ denote the minimal expected cost, i.e.,

$$\lambda^*(x,k) \;=\; \inf_{u(\cdot)\in\mathcal{A}(x,k)} J(x,k,u(\cdot)). \qquad\qquad (4.29)$$
In order to obtain the Hamilton-Jacobi-Bellman equation for our problem, similar to Section 4.2, we introduce some notation. Let $\mathcal{G}$ denote the family of real-valued functions $\varphi(\cdot,\cdot)$ defined on $\mathcal{S}\times\mathcal{M}$ (with $\mathcal{S}$ the state constraint domain) such that

(i) $\varphi(\cdot,k)$ is convex for any $k \in \mathcal{M}$;

(ii) there exists a function $C(\cdot)$ such that for any $x,\hat{x}\in\mathcal{S}$ and any $k\in\mathcal{M}$,

$$|\varphi(x,k) - \varphi(\hat{x},k)| \;\le\; C(|x| + |\hat{x}|)\,|x - \hat{x}|.$$
Write (4.24) in the vector form

$$\dot{x}(t) \;=\; Au(t) + Bz,$$

with $x(0) = x$. Consider now the following equation:

$$\lambda \;=\; \inf_{u}\bigl\{\partial_{Au+Bz}\,\varphi(x,k) + G(x,u)\bigr\} \;+\; Q\varphi(x,\cdot)(k), \qquad\qquad (4.30)$$

where $\partial_{v}\varphi(x,k)$ denotes the directional derivative of $\varphi(\cdot,k)$ at $x$ in the direction $v$, $\lambda$ is a constant, $\varphi(\cdot,\cdot)\in\mathcal{G}$, and the infimum is taken over the controls $u$ admissible at $(x,k)$.
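For orientation, in the single machine, single product special case ($\dot{x}(t) = u(t) - z$, $0 \le u(t) \le k(t)$), an average-cost HJB equation of the form (4.30) reduces to the following familiar shape (a sketch in the standard notation of this literature, not the jobshop form used in the text):

```latex
\lambda \;=\; \min_{0 \le u \le k}\Bigl[(u - z)\,\frac{\partial W(x,k)}{\partial x}
      + G(x,u)\Bigr] \;+\; QW(x,\cdot)(k),
\qquad
QW(x,\cdot)(k) \;=\; \sum_{k' \ne k} q_{kk'}\bigl[W(x,k') - W(x,k)\bigr].
```

Here $\lambda$ is the candidate minimal average cost and $W$ is the associated potential (relative value) function.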
We have the following verification theorem due to Presman, Sethi, and Zhang (1999b).

Theorem 4.11 Assume that (i) the pair $(\lambda, W(\cdot,\cdot))$ with $W\in\mathcal{G}$ satisfies (4.30); and (ii) there exists a feedback control $u^*(\cdot,\cdot)$ for which the infimum in (4.30) is attained, i.e., (4.31) holds, and the closed-loop equation $\dot{x}(t) = Au^*(x(t),k(t)) + Bz$ has, for any initial condition $(x,k)$, a solution $x^*(\cdot)$ such that (4.32) holds. Then $u^*(t) = u^*(x^*(t),k(t))$ is an optimal control. Furthermore, $\lambda$ does not depend on $x$ and $k$, and it coincides with the minimal cost $\lambda^*(x,k)$. Moreover, for any $T > 0$, the estimate (4.33) holds.
Next we try to construct a pair $(\lambda, W(\cdot,\cdot))$ which satisfies (4.30). To get this pair, we use the vanishing discount approach. Consider a corresponding control problem with the cost discounted at the rate $\rho > 0$. For $u(\cdot)\in\mathcal{A}(x,k)$, we define the expected discounted cost as

$$J^{\rho}(x,k,u(\cdot)) \;=\; E\int_0^{\infty} e^{-\rho t}\, G(x(t),u(t))\,dt.$$

Define the value function of the discounted cost problem as

$$V^{\rho}(x,k) \;=\; \inf_{u(\cdot)\in\mathcal{A}(x,k)} J^{\rho}(x,k,u(\cdot)).$$
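The vanishing discount limit can be watched numerically on a discrete-time stand-in. The sketch below uses an invented two-state Markov reward process (transition matrix `P`, one-step costs `r`, discount factor $\gamma = 1/(1+\rho)$); as $\rho \downarrow 0$, the normalized values $(1-\gamma)V^{\gamma}(i)$ approach the same constant for every state, the discrete analogue of $\lambda = \lim_j \rho_j V^{\rho_j}$. All data are illustrative assumptions.

```python
def discounted_value_2x2(P, r, gamma):
    """Exact discounted value of a 2-state Markov reward process:
    solve (I - gamma * P) V = r by Cramer's rule."""
    a = 1.0 - gamma * P[0][0]
    b = -gamma * P[0][1]
    c = -gamma * P[1][0]
    d = 1.0 - gamma * P[1][1]
    det = a * d - b * c
    return [(d * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det]

P = [[0.9, 0.1], [0.2, 0.8]]   # invented transition matrix
r = [1.0, 3.0]                 # invented one-step costs
# stationary distribution of P is (2/3, 1/3), so the average cost is 5/3
estimates = []
for rho in (1e-2, 1e-4, 1e-6):
    gamma = 1.0 / (1.0 + rho)
    V = discounted_value_2x2(P, r, gamma)
    # (1 - gamma) * V(i) converges to the same constant for both states
    estimates.append(((1.0 - gamma) * V[0], (1.0 - gamma) * V[1]))
```

The state-dependent part of $V^{\gamma}$ stays bounded as $\gamma \uparrow 1$, which is why the differences $V^{\rho}(x,k) - V^{\rho}(0,k)$ in Theorem 4.12 below have a limit.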
Theorem 4.12 There exists a sequence $\{\rho_j\}$ with $\rho_j \to 0$ as $j \to \infty$ such that, for every $(x,k)$, the limits

$$\lambda \;:=\; \lim_{j\to\infty} \rho_j V^{\rho_j}(x,k)
\qquad\text{and}\qquad
W(x,k) \;:=\; \lim_{j\to\infty}\bigl[V^{\rho_j}(x,k) - V^{\rho_j}(0,k)\bigr]$$

exist.
Theorem 4.13 (i) In our problem, the minimal cost $\lambda^*(x,k)$ does not depend on $(x,k)$, and $\lambda^* = \lambda$; (ii) the pair $(\lambda, W(\cdot,\cdot))$ defined in Theorem 4.12 is a solution to (4.30).
Remark 4.8 Assumption 4.6 is not needed in the discounted case. It is clear that this condition is necessary for the finiteness of the long-run average cost when $G(x,u)$ tends to $\infty$ as $|x| \to \infty$. Theorem 4.13 states, in particular, that this condition is also sufficient.

The proof of Theorems 4.12 and 4.13 is based on the following lemma, obtained in Presman, Sethi, and Zhang (1999b).
Lemma 4.3 For any pair of initial states $x$ and $\hat{x}$ and any $k$, there exists a control policy $u(\cdot)$ such that the estimate (4.34) holds, where $x(\cdot)$ is the surplus process corresponding to the control policy $u(\cdot)$ and the initial condition $x$.