optimizing with stochastic data input
Posted: Fri Mar 18, 2005 3:13 pm
Hi
I am currently working on a model whose input data are highly stochastic.
I generate these data with random normal and random binomial functions.
If I try to find an optimal solution, I get odd results, driven by extreme sets of stochastically generated values.
To avoid this, I run multiple parallel simulations in the same model by adding a supplementary subscript to most of the model's data, and I optimize the average of my objective value across the parallel simulations. I can also add a supplementary payoff term, the standard deviation of that value around the average, if I want a less risky policy.
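As a rough illustration outside Vensim, here is how that averaged, risk-penalized payoff could be sketched in Python with NumPy. The toy model, the numbers, and all the names (demand distribution, costs, RISK_WEIGHT, etc.) are made up for the example; only the structure (mean payoff minus a standard-deviation penalty over parallel replications) reflects the approach described above:

```python
import numpy as np

N_SIMS = 100        # number of parallel replications, as in the post
HORIZON = 52        # hypothetical number of simulation steps
RISK_WEIGHT = 0.5   # hypothetical weight on the std-dev penalty

def simulate_payoff(policy, rng, n_sims=N_SIMS):
    """Run n_sims parallel replications of a toy stochastic model
    and return the payoff of each replication.

    Stand-in model: demand is random-normal, deliveries are
    random-binomial; payoff is revenue minus cost of unsold stock.
    """
    demand = rng.normal(100.0, 25.0, size=(n_sims, HORIZON)).clip(min=0)
    arrivals = rng.binomial(n=1, p=0.9, size=(n_sims, HORIZON))
    supply = policy * arrivals                      # units ordered that actually arrive
    sales = np.minimum(demand, supply)
    payoff = 10.0 * sales - 2.0 * (supply - sales)  # revenue minus holding cost
    return payoff.sum(axis=1)                       # one payoff per replication

def objective(policy):
    """Mean payoff across replications, penalized by its std dev.

    Re-seeding per evaluation gives every candidate policy the same
    random draws (common random numbers), which keeps the comparison fair.
    """
    payoffs = simulate_payoff(policy, np.random.default_rng(0))
    return payoffs.mean() - RISK_WEIGHT * payoffs.std()

# Crude grid search over a single policy parameter (order quantity).
candidates = np.arange(60.0, 160.0, 5.0)
best = max(candidates, key=objective)
```

The same idea carries over to a real optimizer: the thing being maximized is the seed-stabilized, risk-adjusted mean, not any single replication.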
I have tested how many simulations I need, since changing the seed of the random functions can produce substantial differences in results.
I currently need about 100 parallel simulations to get results that are sufficiently stable, independent of the seed values.
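That seed-stability test can be sketched the same way. The payoff distribution below is a made-up stand-in (a normal with an arbitrary mean and spread), but it shows the mechanism: the spread of the estimated mean across different seeds shrinks roughly like 1/sqrt(number of replications), which is why something on the order of 100 replications is needed before the estimate stops depending on the seed:

```python
import numpy as np

def mean_payoff(seed, n_sims):
    """Mean of a toy stochastic payoff over n_sims replications.
    Stand-in for one batch of parallel runs with a given seed."""
    rng = np.random.default_rng(seed)
    payoffs = rng.normal(1000.0, 300.0, size=n_sims)  # hypothetical payoff distribution
    return payoffs.mean()

def seed_spread(n_sims, seeds=range(20)):
    """Spread (max - min) of the estimated mean across different seeds."""
    estimates = [mean_payoff(s, n_sims) for s in seeds]
    return max(estimates) - min(estimates)

# The spread should shrink roughly like 1/sqrt(n_sims),
# so 100 replications is about 3x tighter than 10.
for n in (10, 100, 1000):
    print(n, seed_spread(n))
```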
The problem is that as the model grows in complexity, it takes more and more time to optimize.
I could turn the model into a deterministic one, but it would then be very far from reality, where most of the problems come from the stochastic character of the demand and the need to optimize counterbalancing policies.
Another solution is to increase the time step, so as to reduce the number of simulation steps, or to simplify the model.
Yet another way is to look for an optimum using sensitivity analysis and intuition.
This is only possible when the number of parameters is small, but it is an option too.
Any other ideas?
Regards.
J.J. Laublé