Operations Research

Operations research combines optimization, probability, and statistics to solve problems in domains including business, energy and utilities, health services, financial services, and logistics. To cope with the complexity of today's systems, operations research often works at the intersection of these disciplines, for example in the use of optimization to estimate large-scale statistical models, the optimal collection of information, and stochastic optimization.

Systems engineers know how to develop and use mathematical and statistical models to help solve such decision problems. Like other engineers, they are problem formulators and solvers. Their work requires the formulation of a mathematical model of a system and the analysis and prediction of the consequences of alternative modes of operating the system.

Approximation Algorithms for Perishable Inventory Systems
X.T. Gong
Perishable products such as meat, fruit, dairy products, frozen foods, and pharmaceuticals are ubiquitous and play an indispensable role in our society. However, the control and optimization of perishable inventory systems is very hard due to the finite lifetime of the products. Indeed, the optimal control policy for perishable inventory systems is very complicated, and computing it by dynamic programming suffers from the well-known "curse of dimensionality": it is intractable even for a relatively short product lifetime. In this project, we aim to develop easy-to-compute and near-optimal approximation algorithms with worst-case performance guarantees for periodic-review perishable inventory systems with general demand processes. If successful, our research will not only contribute significantly to the research literature on perishable inventory management, but also have a broad impact on researchers and practitioners in perishable product industries and organizations.
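To see where the intractability comes from, note that with a product lifetime of $m$ periods the dynamic program must track the on-hand inventory of every age class (the notation here is our own, for illustration):

\[
x_t = (x_{t,1}, x_{t,2}, \dots, x_{t,m-1}),
\]

where $x_{t,i}$ is the stock with $i$ periods of remaining life at the start of period $t$. If each coordinate can take $U$ values, the state space already contains $U^{m-1}$ states, so it grows exponentially in the lifetime even before demand uncertainty is layered on.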

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk
Z.Y. Long
We study the discrete optimization problem under the distributionally robust framework. We optimize the Entropic Value-at-Risk, a coherent risk measure also known as the Bernstein approximation of the chance constraint. We propose an efficient approximation algorithm that solves the problem via a sequence of nominal problems. Computational results show that the number of nominal problems that must be solved is small under various distributional information sets.
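For reference, the Entropic Value-at-Risk of a random loss $X$ at confidence level $1-\alpha$ can be written as (following Ahmadi-Javid's standard definition; the notation is ours):

\[
\mathrm{EVaR}_{1-\alpha}(X) \;=\; \inf_{z > 0} \; z \log\!\left( \frac{\mathbb{E}\!\left[ e^{X/z} \right]}{\alpha} \right),
\]

which is the tightest bound obtainable from the Chernoff inequality and underlies the Bernstein approximation of the chance constraint $\mathbb{P}\{X > 0\} \le \alpha$.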

Fast Algorithms for Big Data Analytics
A. M.-C. So
The ubiquity of big datasets and the desire to extract information and knowledge from them have motivated the development of a wide array of data analytics tools in recent years. Many of these tools aim at identifying the most informative features in a dataset according to some criteria. As such, they often require the algorithmic solution of certain (intractable) optimization problems. In this project, we will develop efficient algorithmic implementations of various optimization-based data analytics tools and rigorously establish their performance guarantees (such as convergence rate, approximation quality and statistical properties). This will contribute to both the theory and practice of big data optimization. We will also test our results on various applications, such as recommender systems and systems biology.
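As one concrete instance of the kind of tool in question, consider sparse PCA, where the goal is to find a sparse leading eigenvector of a sample covariance matrix. The sketch below, in the spirit of the truncated power method of Yuan and Zhang (2013), illustrates the structure of such algorithms; the function name and defaults are our own, not the project's code:

```python
import numpy as np

def truncated_power_method(A, k, iters=200, seed=0):
    """Sketch: sparse leading eigenvector of a symmetric PSD matrix A.
    Each step is a power iteration followed by hard truncation to the
    k largest-magnitude entries, then renormalization."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v                          # power step
        small = np.argsort(np.abs(w))[:-k] # all but the k largest magnitudes
        w[small] = 0.0                     # enforce k-sparsity
        v = w / np.linalg.norm(w)
    return v
```

Each iteration costs one matrix-vector product plus a partial sort; identifying conditions under which such iterations provably recover the true sparse component is exactly the kind of performance guarantee this project targets.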

Financial Systemic Risk
N. Chen
Financial institutions knit a complex network. They interconnect with each other directly through active borrowing-and-lending activities and through holding significant amounts of marketable securities against each other. In normal times, this network helps the institutions diversify their idiosyncratic risks to achieve an efficient allocation of economic resources. Under crisis conditions, however, it can easily turn into a conduit that propagates failures at one or several institutions to the entire system. Further complicating the matter is a second layer of interconnectedness, indirect and via the market: the asset fire sale by a distressed firm creates a significant negative externality for the rest of the system. As the recent financial crisis revealed, these two mutually reinforcing channels, direct and indirect, play an important role in the development of systemic risk. My research aims to develop mathematical tools for modeling and analyzing systemic risk, in particular for studying how defaults spread through the entire financial system.
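The direct, balance-sheet channel is often formalized via the Eisenberg–Noe clearing model, in which interbank payments are the fixed point of a "pay in full or pay all you have" map. The sketch below is a minimal illustration of that fixed-point computation (the matrix names and the simple iteration are our own choices):

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=1000):
    """Eisenberg-Noe clearing payments by fixed-point iteration.
    L[i, j]: nominal liability of bank i to bank j.
    e[i]:    external (outside) assets of bank i."""
    p_bar = L.sum(axis=1)  # total nominal obligations of each bank
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # relative liabilities
    p = p_bar.copy()       # start from full payment
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)  # pay in full, or pay what you have
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p
```

Because the map is monotone and bounded, iterating from the full-payment vector converges to the greatest clearing vector; banks with p[i] < p_bar[i] at the fixed point are in default.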

Hidden Convexity
D. Li
The research goal is to develop sufficient conditions to identify hidden convex minimization problems. A nonconvex minimization problem is called a hidden convex minimization problem if there exists an equivalent transformation such that the transformed minimization problem is convex. Sufficient conditions that are independent of transformations can be derived for identifying this class of seemingly nonconvex minimization problems that are equivalent to convex ones. Global optimality can thus be achieved for this class of hidden convex optimization problems using local search methods.
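A textbook illustration (not drawn from the project itself) is geometric programming: the posynomial problem

\[
\min_{x_1, x_2 > 0} \;\; x_1^{-1} x_2^{1/2} + x_1 x_2
\]

is nonconvex as stated, but under the transformation $x_i = e^{y_i}$ it becomes

\[
\min_{y_1, y_2} \;\; e^{-y_1 + y_2/2} + e^{y_1 + y_2},
\]

a sum of exponentials of affine functions and hence convex. Sufficient conditions of the kind sought here would certify such hidden convexity without having to exhibit the transformation explicitly.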

Low-Rank Tensor Recovery and Tensor PCA
S.Q. Ma
Stimulated by the need for big data analytics, and motivated by the success of compressed sensing and low-rank matrix optimization, it is important and timely to study methods for analyzing massive tensor data. Traditional matrix-based data analysis is inherently two-dimensional, which limits its ability to extract information from a multi-dimensional perspective. Tensor-based multi-dimensional data analysis has shown that tensor models can take full advantage of the multi-dimensional structure of the data and generate more useful information. A common observation in huge-scale data analysis is that the data exhibit a low-dimensional property, which leads to the study of low-rank tensor optimization problems. The primary goals of our research under this project are: (i) to develop new matricization schemes for tensors and to analyze the relation between a tensor's CP rank and the rank of its matrix counterpart; (ii) to apply the matricization schemes to low-rank tensor recovery problems and their variants, and to develop efficient first-order optimization algorithms for solving these problems; (iii) to develop efficient algorithms for solving sparse PCA for tensors; (iv) to apply these models and algorithms to tensor optimization problems arising in real applications such as statistics, signal processing, machine learning and bioinformatics.
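Matricization (unfolding) rearranges a tensor's mode-$n$ fibers into the columns of a matrix; relations such as "the rank of every unfolding is at most the CP rank" are what goal (i) seeks to sharpen. A minimal numpy sketch of one common unfolding convention (the function name and convention are our own):

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization: returns a matrix whose columns are
    the mode-`mode` fibers of the tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Sanity check: a rank-1 tensor a (x) b (x) c has rank-1 unfoldings in every mode.
a, b, c = np.random.randn(3), np.random.randn(4), np.random.randn(5)
T = np.einsum('i,j,k->ijk', a, b, c)
print([np.linalg.matrix_rank(unfold(T, m)) for m in range(3)])  # [1, 1, 1]
```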

Managing Underperformance Risk in Project Portfolio Selection
Z.Y. Long
We consider a project selection problem where each project has an uncertain return with a partially characterized probability distribution. The decision maker selects a feasible subset of projects so that the risk of the portfolio return not meeting a specified target is minimized. Our work extends the riskiness index of Aumann and Serrano (2008) by incorporating the target and distributional ambiguity. We minimize the underperformance risk of the project portfolio, which we define as the reciprocal of the absolute risk aversion (ARA) of an ambiguity-averse individual with constant ARA who is indifferent between the target return with certainty and the uncertain portfolio return. Our model captures correlation and interaction effects such as synergies. We solve the model using binary search, obtaining solutions of the subproblems with Benders decomposition techniques. A computational study shows that, in achieving the target, project portfolios generated by minimizing the underperformance risk are more than competitive with those found by benchmark approaches, including maximization of expected return, minimization of underperformance probability, mean-variance analysis, and maximization of Roy's (1952) safety-first ratio. As a simpler alternative, we describe a greedy heuristic, which routinely provides project portfolios with near-optimal underperformance risk.
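In the ambiguity-free case, one way to read this definition is as follows (our paraphrase): with CARA utility $u(w) = -e^{-w/\rho}$, the underperformance risk $\rho$ of portfolio return $\tilde{v}$ against target $\tau$ solves the indifference condition

\[
\mathbb{E}\!\left[ e^{-(\tilde{v} - \tau)/\rho} \right] = 1,
\]

which reduces to the Aumann–Serrano riskiness index when $\tau = 0$; distributional ambiguity is then handled by taking the worst case over the ambiguity set.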

New Models in Capacitated Lot Sizing Decisions
C.H. Cheng
Existing research on the capacitated lot sizing problem (CLSP) often assumes setup time to be negligible. Although there have been efforts to reduce setup time, in many industries it cannot be reduced to near zero. Hence, excluding setup time from CLSP models is not practical, especially when setup time is significant and production capacity is tightly constrained. This research will address that limitation. General models considering alternate production options will be developed; in particular, the inclusion of setup time will be investigated. Further, the structure of the problem will be studied and solution algorithms based on the underlying structure will be developed. An extensive computational study will be carried out.
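For concreteness, a standard single-level CLSP formulation with explicit setup times reads (the notation is ours; the project's models generalize this with alternate production options):

\[
\begin{aligned}
\min \;& \sum_{i}\sum_{t} \left( s_i\, y_{it} + h_i\, I_{it} \right) \\
\text{s.t.} \;& I_{i,t-1} + x_{it} - I_{it} = d_{it} && \forall i, t, \\
& \sum_{i} \left( a_i\, x_{it} + f_i\, y_{it} \right) \le C_t && \forall t, \\
& x_{it} \le M\, y_{it}, \quad y_{it} \in \{0,1\}, \quad x_{it},\, I_{it} \ge 0,
\end{aligned}
\]

where $f_i$ is the setup time of item $i$, $a_i$ its unit processing time, and $C_t$ the capacity of period $t$; it is precisely the setup-time term $f_i y_{it}$ that much of the literature drops.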

New Scheduling Models with Applications to Berth Allocation
X. Cai and C.Y. Lee
The study focuses on modelling, analysis, and algorithms for a new class of scheduling problems in which a big job must occupy a full machine, while a small job may share a machine with other jobs at the same time. Applications to berth allocation in container terminals are also investigated.

Nonconvex Approaches to Rank-Constrained Semidefinite Programs
A. M.-C. So
Many intractable problems in engineering can be formulated as a semidefinite program (SDP) with a rank constraint. Currently, a standard approach to tackle these problems is semidefinite relaxation. The idea is to drop the rank constraint to get an efficiently solvable SDP. However, standard SDP solvers typically yield high-rank solutions. In this project, we investigate the use of nonconvex regularization terms to promote low-rank solutions. The focus will be on both the computational complexity of such approaches and their practical implementations.
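Schematically, the problems in question take the form

\[
\min_{X} \;\; \langle C, X \rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b, \;\; X \succeq 0, \;\; \operatorname{rank}(X) \le r,
\]

and semidefinite relaxation simply deletes the rank constraint. One example of a nonconvex surrogate (ours, for illustration) is to penalize $\operatorname{tr}(X) - \sum_{i=1}^{r} \sigma_i(X)$, the gap between the trace and the Ky Fan $r$-norm, which vanishes for a positive semidefinite $X$ exactly when $\operatorname{rank}(X) \le r$.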

Nonconvex Optimization and Global Optimization
D. Li and C. K. Ng
The research goal is to develop equivalent transformations for generating a saddle point for nonconvex optimization problems. A saddle point condition is a sufficient condition for optimality. A saddle point can be generated in an equivalent representation space for nonconvex optimization problems that do not possess one in their original setting. Certain equivalent transformations may convexify the perturbation function, and a zero duality gap can thus be achieved. This investigation should lead to efficient dual search algorithms that ensure global optimality for a class of nonconvex optimization problems.
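Concretely, for the problem $\min\{f(x) : g(x) \le 0,\; x \in X\}$ with Lagrangian $L(x,\lambda) = f(x) + \lambda^{\top} g(x)$, a pair $(x^*, \lambda^*)$ with $\lambda^* \ge 0$ is a saddle point if

\[
L(x^*, \lambda) \;\le\; L(x^*, \lambda^*) \;\le\; L(x, \lambda^*) \qquad \forall\, x \in X,\; \lambda \ge 0,
\]

in which case $x^*$ is globally optimal and the duality gap is zero; the transformations studied here aim to create such a pair in an equivalent representation of the problem.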

Nonlinear Integer Programming
D. Li and C. K. Ng
The research goal is to establish a convergent duality theory and to develop efficient solution algorithms for large-scale nonlinear integer programming problems. The fundamental target underlying our theoretical development is to eliminate the duality gap in the classical Lagrangian dual formulation. We have developed a nonlinear Lagrangian theory that has yielded several new dual formulations with asymptotically zero duality gap. The key concept is the construction of a nonlinear support for a nonconvex piecewise-constant perturbation function. Our numerical implementation of a duality-gap reduction process relies on some novel cutting procedures. Performing an objective-level cut, objective contour cut or domain cut reshapes the perturbation function, eventually exposing an optimal solution on the convex hull of a revised perturbation function and guaranteeing a zero duality gap for a convergent Lagrangian method. Applications include nonlinear knapsack problems, constrained redundancy optimization in reliability networks, and optimal control problems with integer constraints.
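The geometric picture behind this (in our shorthand) involves the perturbation function $w(y) = \min\{ f(x) : g(x) \le y,\; x \in X \}$, which is piecewise constant when $X$ is a finite integer set. The classical Lagrangian dual attains only the value of the convex envelope $\breve{w}$ of $w$ at the origin,

\[
\max_{\lambda \ge 0} \; \min_{x \in X} \; \left\{ f(x) + \lambda^{\top} g(x) \right\} \;=\; \breve{w}(0) \;\le\; w(0),
\]

so the duality gap is $w(0) - \breve{w}(0)$; the cutting procedures reshape $w$ until an optimal solution lies on the convex envelope of the revised perturbation function, driving this gap to zero.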

On Dynamic Decision Making to Meet Consumption Targets
Z.Y. Long
We investigate a dynamic decision model that facilitates a target-oriented decision maker in regulating her risky consumption based on her desired target consumption level in every period of a finite planning horizon. We focus on dynamic operational decision problems of a firm in which risky cash flows are resolved over time. The firm can finance consumption by borrowing or saving to attain prescribed consumption targets over time. To evaluate the ability of consumption to meet the respective targets, we propose the Consumption Shortfall Risk (CSR) criterion, which has the salient properties of attainment content, starvation aversion, subadditivity and positive homogeneity. We show that if borrowing and saving are unrestricted and their interest rates coincide, the optimal policy that minimizes the CSR criterion is to finance consumption at the target level for all periods except the last. For general convex dynamic decision problems, the optimal policies correspond to those that maximize an additive expected utility in which the underlying utility functions are concave and increasing. Despite these interesting properties, the approach violates the principles of normative utility theory, and we discuss the limitations of our target-oriented decision model.

Scheduling with Negotiable Third-Party Machines
X. Cai, C.Y. Lee and G. Vairaktarakis
Suppose a manufacturer has received a number of orders (jobs) from his customers, which should be completed by their respective due dates. Most of the facilities needed to process the jobs are available in the manufacturer's own factory. However, for various reasons, certain parts of the jobs must be outsourced to a third party who possesses the machines needed to process these parts. The availability of the third-party machines is negotiable, depending on the price. Consequently, the manufacturer has to (1) negotiate an agreement to secure time on the third-party machines, and (2) generate a schedule to process the jobs, so as to minimize the total cost, including the cost of using the third-party machines and the cost incurred if the due dates of the jobs cannot be met. Considering third-party machines in machine scheduling relaxes a common assumption of traditional scheduling studies. The main objective of this project is to explore models and algorithms for this new class of scheduling problems. Nash bargaining theory will be applied.
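For reference, the Nash bargaining solution between the manufacturer and the third party selects, among the feasible utility pairs $\mathcal{F}$, the point maximizing the product of gains over the disagreement outcome $(d_1, d_2)$:

\[
\max_{(u_1, u_2) \in \mathcal{F}} \;\; (u_1 - d_1)(u_2 - d_2), \qquad u_1 \ge d_1, \; u_2 \ge d_2;
\]

in this setting the negotiated machine-time agreement and its price would play the role of the utility allocation (this reading is our own gloss, not a statement of the project's model).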

Scheduling of Perishable Jobs under Uncertain Deadlines
X. Cai and X. Zhou
We study a new class of scheduling problems involving perishable jobs that deteriorate after completion, where each finished product is picked up by a transporter whose arrival time is uncertain. The processing time of a job, as well as its fresh time, are random variables. If a job is finished too early, it may decay and thus incur a decay cost; on the other hand, if it misses the pickup, it suffers a loss, for example from having to be sold on a local market at a discounted price. The problem is to determine an optimal policy to process and handle all the jobs so as to minimize the total expected loss. The objective of this project is an in-depth study of scheduling problems with these features. Topics to be addressed include modelling, characterization of optimal policies, and algorithms.

Sparse Optimization for High-Dimensional Data Analysis
S.Q. Ma
In modern high-dimensional data analysis, we face large-scale and completely dense data that are hard to analyze with traditional approaches. In many problems, however, such data have special structure, such as sparsity or low rank in their solutions. Sparse optimization is a recently developed tool for analyzing the data and extracting the sparse information hidden in them. The primary goals of our research under this project are: (i) to develop a suite of efficient algorithms for solving large-scale sparse optimization problems; (ii) to provide theoretical foundations for these algorithms, such as proofs of their global convergence, rate of convergence and iteration complexity for epsilon-optimality; (iii) to apply these algorithms to very large-scale and challenging problems from compressed sensing, machine learning and statistics, such as robust PCA, sparse PCA, sparse inverse covariance selection, latent variable graphical model selection, stable principal component pursuit and compressive principal component pursuit; (iv) to implement a variety of software packages for these applications that can be used in different areas.
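A prototypical instance of goals (i)–(iii) is $\ell_1$-regularized least squares from compressed sensing, $\min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1$, for which proximal gradient (ISTA) is the textbook first-order method. A minimal sketch (the function name and defaults are ours):

```python
import numpy as np

def ista(A, b, lam, step=None, iters=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1.
    Each step: gradient descent on the smooth part, then soft-thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of the smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```

With step size $1/L$, $L = \|A\|_2^2$, the iterates converge at rate $O(1/k)$ in objective value; accelerated variants such as FISTA improve this to $O(1/k^2)$, which is exactly the kind of iteration-complexity statement goal (ii) concerns.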

Solutions to Diophantine Equations
D. Li
Efficient methods for finding solutions to Diophantine equations are not only of theoretical importance but also have far-reaching impact on many long-standing challenges in operations research, for example the knapsack problem. It is well known that linear Diophantine equations are polynomially solvable, while linear Diophantine equations over a bounded integer set are NP-complete. In this research we develop novel schemes for finding solutions to Diophantine equations by disaggregation and variable fixing.
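For a single linear equation $ax + by = c$, the polynomial-time solvability comes from the extended Euclidean algorithm: an integer solution exists if and only if $\gcd(a, b)$ divides $c$. A minimal sketch (the function names are ours):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(a, b, c):
    """One integer solution of a*x + b*y = c, or None if none exists."""
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None          # gcd(a, b) must divide c
    k = c // g
    return (x * k, y * k)    # all solutions: (x*k + t*(b//g), y*k - t*(a//g))
```

The hardness of the bounded case comes from restricting the solutions to a box, which is where disaggregation and variable-fixing schemes come into play.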

Strong Approximations in Multiclass Queueing Networks
X.F. Gao
Multiclass queueing networks have been used to model manufacturing and communication systems. For multiclass networks with a static priority service discipline, the diffusion approximation for the queue length of a higher-priority group is identically zero. This approximation is not satisfactory, particularly when the traffic contributed by the higher-priority class customers at each station is not negligible. The goal of this project is to propose better approximation methods for analyzing multiclass networks, especially those with feedback structure.
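In diffusion scaling, one studies $\hat{Q}^n(t) = Q(nt)/\sqrt{n}$, and for the high-priority class this limit is identically zero under heavy traffic, which is why the approximation is uninformative there. Strong approximation takes a different route: under suitable moment conditions it couples the queue-length process with a Brownian-motion-based process $\tilde{Q}$ on the same probability space so that, almost surely,

\[
\sup_{0 \le s \le t} \big| Q(s) - \tilde{Q}(s) \big| = O(\log t),
\]

retaining first-order (fluid) information that the diffusion limit discards (this summary follows the standard strong-approximation literature in the Komlós–Major–Tusnády tradition).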

Theory and Applications of Chance Constrained Optimization
A. M.-C. So
In the formulation of optimization models, the data defining the objective functions and/or constraints are often collected via estimation or sampling, and hence are only approximations of the nominal values. One approach to incorporating data uncertainty in optimization models is chance constrained programming, in which one only needs to satisfy the constraints for most, but not all, realizations of the data. Unfortunately, this approach often leads to computationally difficult optimization problems. Our aim in this project is twofold: (i) to develop tractable reformulations or approximations of chance constrained optimization problems in which the data satisfy certain stochastic properties, and (ii) to apply our methodologies to practical problems, such as those arising in signal processing, wireless communications, control and finance.
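The best-known tractable case illustrates both aims: for a single linear constraint with Gaussian data $\tilde{a} \sim \mathcal{N}(\bar{a}, \Sigma)$ and $\epsilon \le 1/2$,

\[
\mathbb{P}\{ \tilde{a}^{\top} x \le b \} \ge 1 - \epsilon \quad \Longleftrightarrow \quad \bar{a}^{\top} x + \Phi^{-1}(1 - \epsilon)\, \|\Sigma^{1/2} x\|_2 \le b,
\]

a second-order cone constraint, where $\Phi$ is the standard normal cdf. Beyond the Gaussian case such exact reformulations are rare, which is what motivates the tractable approximations sought in aim (i).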