A reinforcement learning approach to the stochastic cutting stock problem

Abstract

We propose a formulation of the stochastic cutting stock problem as a discounted infinite-horizon Markov decision process. At each decision epoch, given the current inventory of items, an agent chooses which cutting patterns to apply to the objects in stock in anticipation of the unknown demand. An optimal solution corresponds to a policy that associates each state with a decision and minimizes the expected total cost. Since exact algorithms scale exponentially with the dimension of the state space, we develop a heuristic solution approach based on reinforcement learning. We propose an approximate policy iteration algorithm in which a linear model approximates the action-value function of a policy. Policy evaluation is performed by solving the projected Bellman equation from a sample of state transitions, decisions, and costs obtained by simulation. Owing to the large decision space, policy improvement is carried out via the cross-entropy method. Computational experiments on realistic data illustrate the application of the algorithm. Heuristic policies obtained with polynomial and Fourier basis functions are compared with myopic and random policies. Results indicate that it is possible to obtain policies that adequately control inventories, with an average cost up to 80% lower than that of a myopic policy.
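
For concreteness, the sketch below illustrates the two steps the abstract describes: LSTD-style policy evaluation, which solves the projected Bellman equation in least squares from simulated transitions, and a cross-entropy search over decisions for policy improvement. All names (`poly_features`, `lstd_evaluate`, `cem_improve`), the discount factor, and the encoding of states and decisions as generic real vectors are illustrative assumptions, not the paper's exact setup; in the paper's setting the state would encode item inventories and a decision the frequencies of the cutting patterns applied.

```python
# A minimal sketch, assuming a linear model Q(s, a) ~ phi(s, a)^T w with a
# polynomial basis; all names and parameters here are illustrative placeholders.
import numpy as np

GAMMA = 0.95  # assumed discount factor


def poly_features(state, action, degree=2):
    """Hypothetical polynomial basis over the concatenated state-action vector."""
    x = np.concatenate([state, action])
    feats = [np.ones(1)]
    for d in range(1, degree + 1):
        feats.append(x ** d)
    return np.concatenate(feats)


def lstd_evaluate(transitions, phi):
    """Policy evaluation: solve the projected Bellman equation A w = b
    from sampled transitions (s, a, cost, s_next, a_next) generated by
    simulating the policy being evaluated."""
    k = phi(*transitions[0][:2]).size
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, cost, s_next, a_next in transitions:
        f = phi(s, a)
        f_next = phi(s_next, a_next)
        A += np.outer(f, f - GAMMA * f_next)
        b += cost * f
    # Least-squares solve; A may be ill-conditioned with few samples.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w


def cem_improve(state, w, phi, dim=5, pop=200, elite_frac=0.1, iters=20):
    """Policy improvement: cross-entropy search for a low-cost decision at
    `state`, scoring candidates by the approximate action-value phi(s, a)^T w.
    Decisions are sampled from a Gaussian refit to the elite samples."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = np.random.normal(mu, sigma, size=(pop, dim))
        scores = np.array([phi(state, a) @ w for a in samples])
        elite = samples[np.argsort(scores)[:n_elite]]  # lowest approximate Q
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu
```

In an approximate policy iteration loop, one would alternate the two steps: simulate the current policy to collect transitions, call `lstd_evaluate` to fit the action-value weights, and then define the improved policy as the decision returned by `cem_improve` at each visited state.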

Publication
EURO Journal on Computational Optimization
Anselmo R. Pitombeira Neto
Department of Production Engineering / UFC

Professor of Operations Research and leader of the OPL group. His research interests include the application of stochastic modeling and simulation, mathematical optimization, machine learning, and Bayesian methods to problems in production and transportation systems.
