
1. Introduction

Most manufacturing firms are large, complex systems characterized by several decision subsystems, such as finance, personnel, marketing, and operations. They may have a number of plants and warehouses and produce a large number of different products using a wide variety of machines and equipment. Moreover, these systems are subject to discrete events such as construction of new facilities, purchase of new equipment and scrapping of old, machine setups, failures and repairs, and new product introductions. These events may be deterministic or stochastic. Management must recognize and react to them. Because of the large size of these systems and the presence of such events, obtaining exact optimal policies for running them is nearly impossible, both theoretically and computationally.

One way to cope with these complexities is to develop methods of hierarchical decision making for such systems. The idea is to reduce the overall complex problem to manageable approximate problems or subproblems, to solve these, and to construct a solution to the original problem from the solutions of the simpler problems.

There are several different (and not mutually exclusive) ways by which to reduce the complexity. These include decomposing the problem into problems of smaller subsystems with a proper coordinating mechanism; aggregating products and subsequently disaggregating them; replacing random processes with their averages and possibly other moments; modeling uncertainties in the production planning problem via diffusion processes; and so on. Development of such approaches for large, complex systems was identified as a particularly fruitful research area by the Committee on the Next Decade in Operations Research (1988), as well as by the Panel on Future Directions in Control Theory; see Fleming (1988). A great deal of research has been conducted in the areas of Operations Research, Operations Management, Systems Theory, and Control Theory. For the practical importance of these approaches, see the surveys of the literature by Libosvar (1988), Rogers, Evans, Plante, and Wong (1991), and Bitran and Tirupati (1993), a bibliography compiled by Bukh (1992), and books by Stadtler (1988) and Switalski (1989). Some other references on hierarchical systems are Simon (1962), Mesarovic, Macko, and Takahara (1970), Smith and Sage (1973), Singh (1982), and Auger (1989). It should be noted, however, that most of these works concern deterministic systems.

Each approach mentioned above is suited to certain types of models and assumptions. The approach we shall first discuss is that of modeling uncertainties in the production planning problem via diffusion processes. The idea was initiated by Sethi and Thompson (1981a), Sethi and Thompson (1981b), and Bensoussan, Sethi, Vickson, and Derzko (1984). Because controlled diffusion problems can often be solved (see Ghosh (1993), Ghosh, Arapostathis, and Marcus (1997), Harrison and Taksar (1983), and Harrison, Sellke, and Taylor (1983)), one uses controlled diffusion models to approximate stochastic manufacturing systems. Kushner and Ramachandran (1989) begin with a sequence of systems whose limit is a controlled diffusion process; the traffic intensities of the systems in the sequence converge to the critical intensity of one. They show that the sequence of value functions associated with the given sequence converges to the value function of the limiting problem. This enables them to construct a sequence of asymptotically optimal policies, defined to be those for which the difference between the associated cost and the value function converges to zero as the traffic intensity approaches its critical value. The most important application of this approach concerns the scheduling of networks of queues. If a network of queues is operating under heavy traffic, that is, when the rate of customers entering some of the stations in the network is very close to the rate of service at those stations, the problem of scheduling the network can be approximated by a dynamic control problem involving diffusion processes. The optimal policies obtained for the dynamic control problem involving the diffusion approximation are then interpreted in terms of the original problem. A justification of this procedure based on simulation is provided in Harrison and Wein (1989), Harrison and Wein (1990), and Kumar and Kumar (1994), for example. See also the survey on fluid models and strong approximations by Chen and Mandelbaum (1994) for related research. Furthermore, Krichagina, Lou, Sethi, and Taksar (1993) and Krichagina, Lou, and Taksar (1994) apply this approach to the problem of controlling the production rate of a single product on a single unreliable machine in order to minimize the total discounted inventory/backlog costs. They embed the given system into a sequence of systems in heavy traffic. Their purpose is to obtain asymptotically optimal policies for the sequence of systems that can be expressed solely in terms of the parameters of the original system.
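
To make the heavy-traffic idea concrete, the following is a minimal numerical sketch in Python (not taken from any of the cited papers; the rates, horizon, and step size are illustrative assumptions): the inventory of a single-product system with production rate mu and demand rate lam, where the traffic intensity lam/mu is close to one, is compared with a Brownian motion having drift mu - lam and variance rate mu + lam.

    import numpy as np

    # A minimal sketch of the heavy-traffic diffusion approximation
    # (all parameters are illustrative assumptions): the inventory of a
    # single-product system with production rate mu and demand rate lam,
    # rho = lam/mu close to one, is approximately a Brownian motion with
    # drift (mu - lam) and variance rate (mu + lam).

    rng = np.random.default_rng(0)

    mu, lam = 1.0, 0.95      # production and demand rates (rho = 0.95)
    T, dt = 200.0, 0.01
    n, reps = int(T / dt), 200

    terminal = np.empty(reps)
    for r in range(reps):
        # Bernoulli small-step approximation of Poisson production/demand.
        produced = rng.random(n) < mu * dt
        demanded = rng.random(n) < lam * dt
        terminal[r] = np.sum(produced) - np.sum(demanded)

    print("simulated mean, std of X(T):", terminal.mean(), terminal.std())
    print("diffusion approximation    :", (mu - lam) * T,
          np.sqrt((mu + lam) * T))

The comparison is distributional, not pathwise: the simulated terminal inventories and the Brownian approximation share (approximately) the same mean and standard deviation.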

Before concluding our discussion of this approach, we should emphasize that it does not, so far, provide an estimate of how far the policies constructed for the original system deviate from the optimal solution, especially when the optimal solution is not known, which is most often the case. As we shall see later, the hierarchical approach under consideration in this survey enables one to provide just such an estimate in many cases.

The next approach we shall discuss is that of aggregation-disaggregation. Bitran, Hass, and Matsuo (1986) formulate a model of a manufacturing system in which uncertainties arise from demand estimates and forecast revisions. They first consider a two-level product hierarchical structure, which is characterized by families and items. The production planning decisions then consist of determining the sequence of the product families and the production lot sizes for items within each family, with the objective of minimizing the total cost. They also consider demand forecasts and forecast revisions during the planning horizon. The authors assume that the mean demand for each family is invariant and that the planners can estimate the improvement in the accuracy of forecasts, measured by the standard deviation of forecast errors. Bitran, Hass, and Matsuo (1986) view the problem as a two-stage hierarchical production planning problem. The aggregate problem is formulated as a deterministic mixed integer program that provides a lower bound on the optimal solution. Its solution determines the set of product families to be produced in each period. The second-level problem is interpreted as the disaggregate stage, where lot sizes are determined for the individual products to be scheduled in each period. Only a heuristic justification has been provided for this approach. Some other references in the area are Bitran and Hax (1977), Hax and Candea (1984), Gelders and Van Wassenhove (1981), Ari and Axsåter (1988), and Nagi (1991).
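
As a toy illustration of the two-level structure (a proportional-allocation sketch only, not the mixed integer program of Bitran, Hass, and Matsuo; the families, items, capacity, and demand figures below are invented for illustration), an aggregate stage allocates capacity across product families, and a disaggregation stage splits each family's quantity into item lot sizes:

    # Toy aggregate/disaggregate sketch (illustrative data, single period).
    family_demand = {"A": 120.0, "B": 80.0}      # aggregate forecasts
    item_demand = {                              # detailed forecasts
        "A": {"A1": 70.0, "A2": 50.0},
        "B": {"B1": 30.0, "B2": 50.0},
    }
    capacity = 150.0

    # Aggregate stage: allocate capacity to families in proportion to demand.
    total = sum(family_demand.values())
    family_plan = {f: capacity * d / total for f, d in family_demand.items()}

    # Disaggregate stage: split each family's plan into item lot sizes.
    item_plan = {
        f: {i: family_plan[f] * d / sum(items.values())
            for i, d in items.items()}
        for f, items in item_demand.items()
    }

    print(family_plan)
    print(item_plan)
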

Lasserre and Mercé (1990) assume that the aggregate demand forecast is deterministic, while the detailed level forecast is nondeterministic within known bounds. Their aim is to obtain an aggregate plan for which there exists a feasible dynamic disaggregation policy. Such an aggregate plan is called a robust plan, and they obtain necessary and sufficient conditions for robustness; see also Gfrerer and Zäpfel (1994).

Finally, we consider the approach of replacing random processes with their averages and possibly other moments; see Sethi and Zhang (1994a). The idea is to derive a limiting control problem that is simpler to solve than the original problem. The limiting problem is obtained by replacing the stochastic machine capacity process with the average total capacity of the machines and by appropriately modifying the objective function. The solution of this problem provides the longer-term decisions. Given these decisions, there are a number of ways to construct short-term production decisions. By combining the two, we obtain an approximate solution of the original, more complex problem.
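
A minimal simulation sketch of this construction (the two-state machine, the rates, and the demand below are illustrative assumptions, not a model from the cited works): the stochastic capacity process is replaced by its stationary mean to obtain a trivially solvable limiting problem, and the resulting long-term production rate is then tracked by the stochastic system whenever the machine is up.

    import numpy as np

    # Averaging sketch (illustrative parameters): a single machine with a
    # two-state capacity process k(t) in {0, m}.  The limiting problem
    # replaces k(t) by its stationary mean; its solution ("produce at the
    # demand rate") is then tracked by the stochastic system.

    rng = np.random.default_rng(1)

    m = 2.0                   # capacity when the machine is up
    lam_b, lam_r = 0.5, 2.0   # breakdown and repair rates
    d = 1.0                   # constant demand rate
    T, dt = 100.0, 0.01
    n = int(T / dt)

    p_up = lam_r / (lam_r + lam_b)   # stationary probability machine is up
    avg_capacity = m * p_up          # mean capacity used in limiting problem

    # Limiting (deterministic) problem: demand is feasible on average,
    # so the long-term decision is to produce at rate d.
    u_limit = min(d, avg_capacity)

    # Short-term construction: when up, produce at the limiting rate scaled
    # by 1/p_up (capped by m), so average production matches u_limit.
    u_up = min(m, u_limit / p_up)

    up, x = True, 0.0                # x = surplus (inventory minus backlog)
    for _ in range(n):
        if up and rng.random() < lam_b * dt:
            up = False
        elif not up and rng.random() < lam_r * dt:
            up = True
        x += ((u_up if up else 0.0) - d) * dt

    print("mean capacity:", avg_capacity, " terminal surplus:", round(x, 3))
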

The specific points to be addressed in this review are results on the asymptotic optimality of the constructed solution and the extent to which its cost deviates from the optimal cost of the original problem. The significance of these results for the decision-making hierarchy is that management at the highest level can ignore the day-to-day fluctuations in machine capacities, or more generally, the details of shop floor events, when making long-term planning decisions. The lower, operational level of management can then derive approximate optimal policies for running the actual (stochastic) manufacturing system.

While the approach could be extended to applications in other areas, the purpose here is to review models of a variety of representative manufacturing systems in which some of the exogenous processes, deterministic or stochastic, change much faster than the remaining ones, and to apply the methodology of hierarchical decision making to them. We define a fast-changing process as one that changes so rapidly that, from any initial condition, it reaches its stationary distribution within a time period during which the other processes exhibit few, if any, fluctuations.
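
The following toy computation (the rates, step size, and window are assumptions chosen only to illustrate the time-scale separation) shows a two-state Markov chain with transition rates of order 1/eps whose occupation fraction, measured over a window that is short on the slow time scale, is already close to the stationary distribution.

    import numpy as np

    # Toy illustration of a "fast changing" process (all parameters are
    # assumptions): a two-state Markov chain with transition rates of
    # order 1/eps.  Over a window that is short on the slow time scale,
    # its occupation fraction is already close to the stationary value.

    rng = np.random.default_rng(2)

    eps = 0.001
    a, b = 1.0 / eps, 2.0 / eps   # fast rates: state 0 -> 1 and 1 -> 0
    pi1 = a / (a + b)             # stationary probability of state 1

    dt, window = 1e-5, 0.05       # 0.05 time units: short for a slow process
    n = int(window / dt)

    state, time_in_1 = 0, 0.0
    for _ in range(n):
        if state == 0 and rng.random() < a * dt:
            state = 1
        elif state == 1 and rng.random() < b * dt:
            state = 0
        time_in_1 += state * dt

    print("stationary pi(1):    ", round(pi1, 3))
    print("fraction over window:", round(time_in_1 / window, 3))
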

In what follows we review the applications of the approach to Markov decision problems and stochastic manufacturing problems, where the objective is to minimize a total discounted cost, a long-run average cost, or a risk-sensitive criterion. Also reviewed is the research on dynamic programming equations, the existence of their solutions, and verification theorems of optimality for single machine systems, flowshops, and jobshops producing multiple products. The review is divided into three parts: Part I, the discounted cost model; Part II, the average cost model; and Part III, risk-sensitive controls.

Part I consists of Section 2 and Section 3. In Section 2, we review the existence of solutions to the dynamic programming equations associated with stochastic manufacturing systems with the discounted cost criterion. The verification theorems of optimality and the characterization of optimal controls are also given. Section 3 is devoted to reviewing the results on open-loop and/or feedback hierarchical controls that have been developed and shown to be asymptotically optimal for the systems. The computational issues are also included in this section. Part II consists of Section 4 and Section 5. In Section 4, we review the existence of solutions to the ergodic equations corresponding to stochastic manufacturing systems with the long-run average cost criterion. Also reviewed are the verification theorems of optimality and the characterization of optimal controls. Section 5 surveys hierarchical controls for single machine systems, flowshops, and jobshops under the long-run average cost criterion. Markov decision processes with weak and strong interactions are also included. Part III contains Section 6, which is concerned with risk-sensitive controls. In particular, we focus on the applications of the idea of hierarchical control in the risk-sensitive framework. Finally, we conclude the review in Section 7.
 


