Optimisation in SD

Bob L Eberlein
Member
Posts: 35
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in SD

Post by Bob L Eberlein »

Hi Everyone - here are some additional questions Fareen posed that I
thought might be of general interest. They relate to policy optimization.

> 1 - How different is the optimisation approach relative to the
> conventional approach?

Policy optimization is just another step in the process. The first and most
important thing is to develop a good-quality system dynamics model that
captures the important dynamics. In developing the model, optimization should
not be on your mind at all. The irritating but pervasive existence of
compensating feedback will be captured in a good system dynamics model; by
focusing instead on building a model that can be optimized, you might miss that
feedback and get completely erroneous results.

> 2 - Does it yield more insights or more detailed insights?

The fundamental question is: given the amount of time available to do the project,
what is the most effective way to spend it? If you can learn more about the problem
you are working on from a day of optimization than from a day of manual experimentation,
it makes sense to do the optimization. One of the nice things about optimization is
that it can be done on nights and weekends, thus leveraging our own daytime activities.
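
To make that concrete, here is a rough sketch of what such an automated experiment
can look like: a toy stock-adjustment model with a single policy parameter, and a
brute-force sweep for the parameter value that minimizes a cost measure. The model,
parameter names, and cost function are made up purely for illustration; any real
study would use its own model and payoff.

# Illustrative only: a toy stock-adjustment model with one policy parameter
# (the adjustment time) and a brute-force search for the value that
# minimizes a simple cost measure.

def simulate(adjustment_time, dt=0.25, horizon=50.0,
             target=100.0, initial_stock=40.0):
    """Euler-integrate a first-order stock adjustment and return its cost."""
    stock, cost, t = initial_stock, 0.0, 0.0
    while t < horizon:
        inflow = (target - stock) / adjustment_time   # the policy rule
        stock += inflow * dt                          # stock accumulation
        # penalize both distance from target and large correction rates
        cost += ((target - stock) ** 2 + 10.0 * inflow ** 2) * dt
        t += dt
    return cost

# "A day of optimization" compressed into a one-line parameter sweep:
candidates = [0.5 + 0.25 * i for i in range(40)]        # candidate adjustment times
best = min(candidates, key=simulate)
print(f"best adjustment time ~ {best:.2f}, cost = {simulate(best):.1f}")

The same kind of sweep, left running unattended across many parameters and
scenarios, is what makes overnight and weekend optimization runs attractive.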

> 3 - Has there been any study which compares a model, modelled with the
> conventional and optimised approaches?

Not to my knowledge. It would actually be interesting to do a study where different
tools were systematically taken away to understand their usefulness. For example,
without a computer, how would the results change? This may sound comical, but
I am not really sure what the answer is.

> 4 - What is the definition of insights?

System dynamics modeling is problem based. Results are the effective solution of
the problem. Insights, in this context, are any increases in understanding that
help solve the problem.


> 5 - If insights are supposed to be transferable, how detailed should they
> be?

It all depends on the problem in question. There are genuinely unique problems
from which you can learn nothing about other problems. However, there is normally
some overlap. Having a more detailed insight might decrease this overlap, but it may
also increase the value of the insight for the problems that do overlap.

Questions by,
Fareen Ali
f.s.ali@lse.ac.uk

Responses from,
Bob Eberlein
Bob@vensim.com
"George Backus"
Member
Posts: 33
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in SD

Post by "George Backus" »

To Fareen's question:
"3 - Has there been any study which compares a model, modeled with the
conventional and optimized approaches?"
Bob Eberlein wrote:
"Not to my knowledge. It would actually be interesting to do a study where different
tools were systematically taken away to understand their usefulness. For example
without a computer how would the results change? This may sound comical but
I am not really sure what the answer is."

The closest published efforts of this type that I know of come from the Energy
Modeling Forum II in the late 70s and from Andy Ford's workshops on
utility models while he was at Los Alamos. Andy summarized the earlier
efforts and his own in a publicly available report. (Andy, can you fill in
the blanks here?) The models were all different but were asked to look at
the same scenarios. For each study, one was an SD model and the rest were
optimization, accounting (no feedback at all), or econometric-engineering
models. The SD models showed why and how things could happen, while the
optimization models just showed what people would like to have happen. Note
that a full comparison, for the purpose here, is not valid because there
was no comparison of feedback models with and without optimization.

Our own work, which puts the optimization around the system, shows two
points. First, clairvoyant optimization always leads to the wrong conclusion
for policy, because it assumes exact knowledge of a future that is truly
unknown. The real policy must deal with the uncertainty of the future. This
leads to the second point. When uncertainty is added, THE optimal solution
is unobtainable under real-world conditions. A sub-optimal solution will be
the "risk-adjusted" optimal policy, but that optimality still implicitly
assumes perfect (deterministic) information for each of the uncertainty
runs made. When one uses the uncertainty analysis itself to find the run
that minimizes or maximizes some objective (letting the policy be an
uncertainty), a rerun of the analysis that holds the candidate policies
fixed then finds the policy that is most robust, for example, the one that
maximizes the minimum rate of return. Given a real-world COLLECTION of policy
variables, the actual solution is a portfolio of policies that often mitigate
one another but that do ensure an objective can be achieved with a
pre-specified level of confidence (or risk). Metaphorically, this can be
looked upon as a multi-input, multi-output control system approach, or as a
complex gene set that makes sure you survive most "diseases" long enough to
meet your goal (biologically, procreation) but equally ensures a finite
lifetime and other (biological) limitations.
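
To put that second point in more concrete terms, the logic might be sketched as
follows: generate a set of uncertain futures, evaluate each candidate policy
against every future, and keep the policy whose worst outcome is best (a maximin
rate of return), rather than the policy that wins under any single, clairvoyant
future. The policies, futures, and payoff function below are invented solely for
illustration.

import random

random.seed(1)

# Hypothetical candidate policies: the fraction of a budget put into a hedge.
policies = [0.0, 0.25, 0.5, 0.75, 1.0]

# A set of uncertain futures, e.g. demand growth drawn from a wide range.
futures = [random.uniform(-0.05, 0.10) for _ in range(500)]

def rate_of_return(policy, growth):
    """Toy payoff: the hedge costs a little but protects against bad futures."""
    unhedged = 0.08 + 2.0 * growth   # high payoff only if growth is strong
    hedged = 0.05                    # modest but certain payoff
    return (1 - policy) * unhedged + policy * hedged

# Clairvoyant optimum: the best policy for each future, known after the fact.
clairvoyant = [max(policies, key=lambda p: rate_of_return(p, g)) for g in futures]

# Robust (maximin) choice: the policy whose worst outcome across futures is best.
robust = max(policies, key=lambda p: min(rate_of_return(p, g) for g in futures))

print("robust (maximin) hedge fraction:", robust)
print("clairvoyant choices vary by future:", sorted(set(clairvoyant)))

With a realistic collection of policy levers, the same idea generalizes to
searching for a portfolio of policies that meets the objective with a
pre-specified level of confidence.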

(As a side note mentioned by another contributor, some systems, such as gas and
electricity, really do use optimization models for making actual
operating decisions. Including that optimization in the SD model
provides a much more realistic representation from the
stakeholders' position, and often shows that the specific use of the
optimization's solution set is a big part of the problem.)
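
For illustration only, here is a minimal sketch of what embedding such an
operational optimization inside a feedback model can look like: a tiny
merit-order dispatch is solved at every simulation step, and its price signal
feeds back into a capacity-investment stock. The plant data, demand path, and
investment rule are all made up for the example; no real system is being modeled.

# Hypothetical example: a tiny merit-order dispatch embedded inside each step
# of a simple capacity-investment loop.

plants = [  # (name, marginal cost $/MWh, capacity MW)
    ("hydro", 5.0, 300.0),
    ("gas", 40.0, 400.0),
    ("peaker", 90.0, 200.0),
]

def dispatch(demand, new_capacity):
    """Operational optimization: meet demand at least cost (merit order)."""
    units = plants + [("new-build", 55.0, new_capacity)]
    served, price = 0.0, 0.0
    for _, cost, cap in sorted(units, key=lambda u: u[1]):
        if served >= demand:
            break
        take = min(cap, demand - served)
        if take > 0:
            served += take
            price = cost          # the marginal unit sets the price
    return price

capacity = 0.0                    # stock: cumulative new capacity (MW)
demand = 800.0
for year in range(10):
    price = dispatch(demand, capacity)           # short-term optimization
    investment = max(0.0, (price - 50.0) * 2.0)  # feedback: price drives builds
    capacity += investment                       # stock accumulation
    demand *= 1.03                               # exogenous demand growth
    print(f"year {year}: price = {price:5.1f}  cumulative new capacity = {capacity:6.1f}")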


George Backus
From: "George Backus" <
gbackus@boulder.earthnet.net (George Backus)>
Policy Assessment Corporation
14604 West 62nd Place
Arvada, Colorado USA 80004-3621
Bus: +1-303-467-3566
Fax: +1-303-467-3576