Two-Stage Stochastic Programming in Adaptive Interdisciplinary Pain Management
Chronic pain is a global health problem. About 100 million adult Americans suffer from chronic pain, at an estimated cost of $560-$635 billion per year. The Eugene McDermott Center for Pain Management at the University of Texas Southwestern Medical Center at Dallas conducted a two-stage pain management program for patients who suffer from chronic pain. The McDermott Center has collected data such as patient characteristics, past treatments, and pain outcomes at different evaluation points. The University of Texas at Arlington is collaborating with the McDermott Center to find optimal treatment strategies for individual patients.

A treatment strategy in the two-stage pain management program consists of the decisions made at the two stages. This two-stage decision problem is formulated as a two-stage stochastic programming (2SP) model. The 2SP model incorporates outcome and state-transition (system prediction) models as part of its constraints. These system prediction models are non-convex, mixed-integer, and nonlinear. This research proposes a linear approximation method that approximates the non-convex nonlinear constraints by piecewise linear functions. By discretizing the continuous random variables in the linearly approximated 2SP model, an equivalent MILP model is obtained, which a mature MILP solver then solves very quickly by branch and bound. An alternative approach is to solve the deterministic equivalent of the original 2SP model directly with the spatial branch-and-bound algorithm in COUENNE, a non-convex MINLP solver. However, this approach is computationally intensive: within the 6-minute elapsed-time limit that the McDermott Center considers a reasonable wait for patients to receive recommendations from the decision support system, COUENNE cannot find a solution to the MINLP problem.
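The piecewise linear approximation idea can be illustrated with a small sketch. The actual outcome and state-transition models are not reproduced here, so a stand-in non-convex function is used; all function names and the segment count below are hypothetical choices for illustration only.

```python
# Sketch of piecewise linear approximation of a non-convex function.
# f(x) = x * sin(x) is a stand-in (assumption); the study's real outcome
# and state-transition models are not shown in the abstract.
import math

def f(x):
    """Stand-in non-convex nonlinear function (hypothetical)."""
    return x * math.sin(x)

def build_breakpoints(lo, hi, n_segments):
    """Evenly spaced breakpoints defining the piecewise linear pieces."""
    step = (hi - lo) / n_segments
    xs = [lo + i * step for i in range(n_segments + 1)]
    return xs, [f(x) for x in xs]

def piecewise_linear(x, xs, ys):
    """Evaluate the piecewise linear interpolant at x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    raise ValueError("x outside breakpoint range")

xs, ys = build_breakpoints(0.0, 10.0, 50)
# Maximum approximation error on a fine grid; it shrinks as the number
# of segments grows, at the price of extra binary/SOS2 variables in an MILP.
err = max(abs(f(0.01 * k) - piecewise_linear(0.01 * k, xs, ys))
          for k in range(1001))
print(f"max error with 50 segments: {err:.4f}")
```

In an MILP formulation each such piecewise function would typically be encoded with SOS2 or binary selection variables, which is what makes the approximated model solvable by standard branch and bound.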
The policies generated by the MILP models (MILP policies) and the policies generated by the MINLP models (MINLP policies) are cross-evaluated by two different evaluators, an MILP evaluator and an MINLP evaluator. The MILP evaluator shows that the MILP policies achieve the objectives better. The MINLP evaluator, however, could not evaluate the policies fairly because (1) its number of scenarios is insufficient to simulate the "real/true" situation and (2) it cannot find optimal solutions for most patients within an elapsed-time limit of 15 minutes.
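The first limitation, too few scenarios, can be sketched as follows. Assuming, purely for illustration, that a continuous random variable is discretized into equally likely quantile scenarios (the study's actual discretization scheme is not detailed here), a small scenario count visibly understates the variable's true variability:

```python
# Sketch of why scenario count matters (assumption: equally weighted
# quantile-midpoint scenarios for a standard normal random variable).
import statistics
from statistics import NormalDist

def quantile_scenarios(dist, n):
    """Approximate a continuous distribution by n equally likely scenarios
    placed at the midpoints of its quantile intervals."""
    return [dist.inv_cdf((i + 0.5) / n) for i in range(n)]

true_dist = NormalDist(mu=0.0, sigma=1.0)  # true variance is 1.0
for n in (5, 50, 500):
    scen = quantile_scenarios(true_dist, n)
    # With few scenarios, the discretized variance understates the true one,
    # so an evaluator built on few scenarios misjudges the policies.
    print(n, round(statistics.pvariance(scen), 4))
```

The same effect applies to any evaluator built on a scenario approximation: unless enough scenarios are drawn, the simulated distribution of outcomes is systematically narrower than the real one.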