Optimisation in System Dynamics

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
Locked
Dseville@aol.com
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by Dseville@aol.com »

Jim -

I like the way your brain works! Not only does fitting to the data not add
much to the actual learning from the model, it can also be incredibly
boring and frustrating!

don seville
dseville@aol.com
jm62004@Jetson.UH.EDU
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by jm62004@Jetson.UH.EDU »

In response to Jim Hines, here are two places where
>"adjusting the last bits of some parameter values
>just to get it right! (while feeling confident in the model structure)"?
may be important.

First: you (intuitively or otherwise) know that the system behavior is very
sensitive to 2 or 3 critical parameters (let us say the system is highly
nonlinear and chaotic). Let us assume A is between 0 and 1, B is between 1
and 2, and C is between 2 and 3; then for certain values of A, B, and C you
will have unacceptable system behavior, which you want to avoid (in
reality). Now the intent is to find the sets of {A, B, C} which are
unacceptable, and let us say the system responds to changes in A, B, C as
low as 1E-6. We can keep A=0, keep B=1, and run the model with changes in C
(ONE MILLION times: C changes from 2 to 3 in steps of 1E-6) and
observe/analyze the model behavior. Then keep A=0, set B=(1+1E-6), repeat,
and so on, ad nauseam... That is *too much work*, and automation would help
here.
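The nested sweep described above is exactly the kind of loop that is
painful by hand and trivial to automate. A minimal sketch in Python, with a
coarse step and a made-up `simulate` function standing in for the real
model (both purely illustrative):

```python
import itertools

def simulate(a, b, c):
    # Hypothetical stand-in for one model run: returns a single
    # scalar behaviour measure (a real SD model is far richer).
    return a * b - c ** 2

def sweep(step=0.1, threshold=-4.0):
    """Scan the box A in [0,1], B in [1,2], C in [2,3] and collect
    the parameter sets whose behaviour is "unacceptable"."""
    def grid(lo, hi):
        n = int(round((hi - lo) / step))
        return [lo + i * step for i in range(n + 1)]
    bad = []
    for a, b, c in itertools.product(grid(0, 1), grid(1, 2), grid(2, 3)):
        if simulate(a, b, c) < threshold:   # the "unacceptable" criterion
            bad.append((a, b, c))
    return bad

bad_sets = sweep()
```

Shrinking `step` towards 1E-6 recovers the million-run sweep; the point is
that the computer, not the modeller, does the repetition.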

Second, and *much more* important: what if your controls are (nonconstant)
functions themselves, and not (constant) parameter values? Let us say you
get ideal system output (maximum profits, minimum risk etc.) when A =
exp(-time). We have no way of knowing a priori if A is exp(-time) or
exp(-time/2) or "time" or a constant 0.6, etc. My point: looking for
optimal controls is best done by automation; the current approach of
simulation-based optimization is useful only for understanding the
model/system, but not for controlling it. With larger systems, the problem
is much worse.
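Jaideep's second case, searching over control *functions* rather than
constants, can at least be mechanised over a family of candidate forms. A
sketch under invented assumptions (the `profit` objective is a toy payoff,
not any particular model's):

```python
import math

def profit(control, horizon=5.0, dt=0.01):
    """Score a control trajectory A(t) by integrating a toy payoff
    (which happens to peak at A = 0.5) over the horizon."""
    t, total = 0.0, 0.0
    while t < horizon:
        a = control(t)
        total += (a - a * a) * dt
        t += dt
    return total

# We cannot know a priori whether A should be exp(-t), exp(-t/2),
# t, or a constant -- so score each candidate form.
candidates = {
    "exp(-t)":   lambda t: math.exp(-t),
    "exp(-t/2)": lambda t: math.exp(-t / 2),
    "t":         lambda t: t,
    "const 0.6": lambda t: 0.6,
}
best = max(candidates, key=lambda name: profit(candidates[name]))
```

For this toy payoff the constant control wins; under a different objective
a decaying exponential might. A real study would also optimise *within*
each family (e.g. the decay rate), which is where automation earns its keep.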

I am not sure I follow when you say:

> A good use of time is to explain to people why fitting a model to data
>is not a good use of time.

Why not???? I thought that was one very good way to "validate" system
dynamics models, otherwise why would anyone believe in them? Please explain
- I may be missing your point.

Regards

Jaideep
From: jm62004@Jetson.UH.EDU
****************************************

Jaideep Mukherjee, Ph. D.
Research Associate
Department of Industrial Engineering
University of Houston
4800 Calhoun Road
Houston, TX 77204-4812

Phone: 713 743 4181; Fax: 713 743 4190
****************************************
Alexandre J G P Rodrigues
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by Alexandre J G P Rodrigues »

In reply to Jim Hines posting:

"What is the value of "adjusting the last bits of some parameter values
just to get it right! (while feeling confident in the model structure)"?
If, as Alexandre says, it's "wasting time" doing it by hand at this
stage, it's also wasting time to do it automatically."

I would argue that if I have a model that is nearly well calibrated, and
can take a coffee break while the computer completes the job, that would be
a tasty
coffee!!... unless I found that the optimisation algorithm had moved the
calibration in an unexpected direction!!... What was wrong? The algorithm?
The model structure? My mental model? There we are, learning again... But
there are more important things.

It may also depend on the value the modeller and user give to the precision
of a good calibration. While this may depend on the "a priori" thoroughness
of the modeller, more importantly it depends on the specific scenario of
model application.

A modeller who is also the user, and who just expects insights on general
policies from the model, may well find "adjusting the last bits" a waste of
time. But consider a modeller who has to persuade the user (perhaps a
sceptical non-System Dynamicist, quite uninterested in studying the
conceptual limitations of predictability) that the model is "valid", that
it gives answers beyond alternative models and tools, that it is capable of
producing accurate numerical outputs in various scenarios (where the margin
for error is tight), that these outputs should be taken as "true", and that
when used in practice they will have a strong impact on the outcome of real
business situations. That user might find adjusting the "last bits"
somewhat useful.

Adding to this, we all know how various combinations of parameter values
can generate very similar results (we talk about fairly complex models),
even when the variations in the parameter values are small. But a parameter
that worked well in a specific scenario within a certain small range of
values might turn out to induce very different results when the model is
calibrated to another scenario. An SD model provides a structural theory
for the behaviour of the system being modelled. Whilst some parameters may
just reflect the characteristics of a particular working scenario for the
system, others represent *intrinsic* properties of the system, and hence
their values are part of that theory. Calibrating a model to reproduce a
series of past real scenarios is a way to extract these theory-values. Here
accurate calibration is essential, in particular if the model/system is
sensitive to these theory-values in some sub-domain of its possible
working scenarios.

Finally, optimisation can also be very useful to improve policies. It can
be used to explore the generation of new policies based on elementary
policies, by switching the weight given to the "value" of certain control
information. It is easy to introduce into the model structure a small set
of control policies, and the number of combinations quickly grows large.
Policies differ in the way they *use* information and in the way they
induce changes in the "physical system" being controlled. As in reality,
different managers give different value to different information, based on
their often defective mental models. Using the model to explore new
policies based on elementary policies, where new values are given to
information, may become very time-consuming. Optimisation helps to reduce
the effort, eventually provides more accurate results, and has the great
secondary effect of encouraging the modeller to explore many more
alternatives, hence being more creative and learning more.
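Alexandre's idea of generating new policies by re-weighting the value given
to different pieces of information can be sketched as a small search over
policy weights. Everything below is invented for illustration: a toy
stock-management model, and a crude random search standing in for a real
optimiser.

```python
import random

def run_model(w_gap, w_trend, steps=200):
    """Toy model: the ordering rule blends two elementary policies,
    one reacting to the inventory gap and one to the demand signal.
    Returns total squared deviation from the target (the cost)."""
    inv, target, cost = 80.0, 100.0, 0.0      # start below target
    for t in range(steps):
        demand = 10.0 + 3.0 * (1 if (t // 25) % 2 else -1)  # shifting demand
        order = demand + w_gap * (target - inv) + w_trend * (demand - 10.0)
        inv += max(order, 0.0) - demand
        cost += (inv - target) ** 2
    return cost

# Random-restart search over the two policy weights.
random.seed(1)
best_w, best_cost = (0.0, 0.0), run_model(0.0, 0.0)
for _ in range(300):
    w = (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0))
    c = run_model(*w)
    if c < best_cost:
        best_w, best_cost = w, c
```

The modeller still interprets *why* the winning weights work; the search
merely removes the drudgery of trying combinations by hand.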


"A good use of time is to explain to people why fitting a model to data
is not a good use of time."

*Fixing* the model to data is certainly a bad use of time. But improving
the model with data is certainly a good one. The difference is in the
*thinking* behind the exercise, the ultimate determinant of usefulness of
any tool.

Sometimes I have learned and discovered important things about the feedback
theory of a system by spending some time playing with minor changes while
trying to get it right.

Regards,

Alexandre Rodrigues
From: Alexandre J G P Rodrigues <
nop08219@mail.telepac.pt>
Dept. Management Science
The University of Strathclyde
Scotland, UK.
jm62004@Jetson.UH.EDU
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by jm62004@Jetson.UH.EDU »

Thanks for your detailed reply Alexandre. I am pretty much in agreement
with you, as you may see from my post on the list. I think accurate data,
accurate relationships, and accurate structure are all important - we
cannot religiously stick to only the structure argument and thus arouse the
wrath of the economists and their ilk. Depending on the specific problem at
a specific time, one or the other may assume more importance.

You mentioned the issue of "general insights" from the model, and data
requirements not necessarily being strict in that case. In nonlinear
systems (which "real" system dynamics models tend to be), if chaos lurks
beneath, we may have to be careful. If I remember correctly, the Lorenz 3-d
system shows chaos only for very specific ranges of parameters, and if we
looked for only a "general insight", we might miss out on the really
interesting stuff. If general insights were the only goal, chaos might
never have been discovered (the story, I think, is in James Gleick's book
Chaos).
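Jaideep's Lorenz point is easy to reproduce: with a crude forward-Euler
integration, a tiny perturbation of the parameter rho ends the run in a
noticeably different state. The step size and horizon here are arbitrary
choices for illustration.

```python
def trajectory(rho, steps=20000, dt=0.001):
    """Integrate the Lorenz system with forward Euler (crude, but
    enough to show sensitivity) and return the final state."""
    sigma, beta = 10.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x, y, z

a = trajectory(28.0)           # the classic chaotic regime
b = trajectory(28.0 + 1e-6)    # a microscopically different rho
gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

Both runs stay bounded on the attractor, but the microscopic parameter
difference is typically amplified into a macroscopic gap after 20 time
units -- which is why "general insight" sweeps with coarse parameter grids
can miss chaotic windows entirely.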

I think it is an incremental process of refining and developing models - a
structurally sound model may be criticised as lacking data, and a
data-sound model may be structurally lacking. Both *can add value*, though,
until a better version appears. I don't like a fanatical adherence to
structure (insistence that there must be delays, feedback, etc. for good
"insights" to arise from an SD model) when a simpler econometric model will
do the job; or, for that matter, a fanatical adherence to a statistically
clean econometric result which does not lead to any understanding of how
the system works. My point: the solution depends on the problem. Example: a
large LP model can add valuable insights from sensitivity analyses, even
though structurally it is very "boring" and understandable.

Regards,

Jaideep
From: jm62004@Jetson.UH.EDU

****************************************

Jaideep Mukherjee, Ph. D.
Research Associate
Department of Industrial Engineering
University of Houston
4800 Calhoun Road
Houston, TX 77204-4812

Phone: 713 743 4181; Fax: 713 743 4190
****************************************
Alexandre J G P Rodrigues
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by Alexandre J G P Rodrigues »

I totally agree with Jaideep's remark about the importance of both the data
and the structural strength of any model, and of course in SD modelling. As
I mentioned before, if systems have theory-parameters these must be
calibrated with care. The same applies to the problem of chaos, as noted
by Jaideep.

I also agree with George's comments on how "imperfect" the real world of
managers is, and hence the practical need for models to be accurately
fitted to real output so that managers "buy" them. However true and
pragmatic this is, I would not sell the usefulness of optimisation under
this line of argument.

There have already been some comments about the various benefits that
optimisation might offer.

However, there can be confusion between:
(i) automated calibration
(ii) optimisation
(iii) calibration for accurate output data fitting

These are highly interrelated issues, but they are not necessarily the
same. For example, optimising a model does not necessarily mean making the
model reproduce past behaviour accurately. Likewise, whilst automated
calibration can be used to support an optimisation procedure, using it does
not mean you are always optimising.

The normal use of automated calibration is to adjust the model parameters
so that the output produced reaches a desired "value" (a single value, set
of values, pattern, or set of patterns) -- that is, to maximise/minimise an
objective function. Calibration of parameters is not free of conditions,
and ranges of "validity" for each parameter value should be considered.
This immediately restricts the possibilities for the automated algorithm to
produce a solution that ensures accurate data fitting. Furthermore, the
output of the optimal solution is often unknown a priori. The ranges of
values considered for the parameters should also ensure that the solution
produced is most likely to be implementable -- it would be nonsense to have
the optimisation algorithm try "unimplementable" values.
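The bounded calibration Alexandre describes can be sketched as a search
restricted to each parameter's range of validity. `model_output` is a
made-up response, and a real tool would use a proper optimiser rather than
a grid, but the shape of the exercise is the same:

```python
def model_output(p, q):
    """Hypothetical model response for parameters p and q."""
    return [p * t + q * t * t for t in range(10)]

def calibrate(target, p_bounds=(0.0, 2.0), q_bounds=(0.0, 1.0), n=50):
    """Minimise squared error to `target`, searching only inside the
    validity ranges supplied for each parameter."""
    def error(p, q):
        return sum((a - b) ** 2 for a, b in zip(model_output(p, q), target))
    best = None
    for i in range(n + 1):
        p = p_bounds[0] + (p_bounds[1] - p_bounds[0]) * i / n
        for j in range(n + 1):
            q = q_bounds[0] + (q_bounds[1] - q_bounds[0]) * j / n
            e = error(p, q)
            if best is None or e < best[0]:
                best = (e, p, q)
    return best   # (error, p, q)

# Data generated with p = 1.2, q = 0.4 (inside both ranges) is recovered.
err, p_hat, q_hat = calibrate(model_output(1.2, 0.4))
```

Restricting the search to the validity ranges is what keeps the automated
algorithm from "fitting" its way to an unimplementable solution.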

Fitting a model to reproduce data can be useful to extract the values of
theory-parameters, to diagnose past behaviour, or to explore possible
chaos. But going back to Jim Hines' comments, it can be a waste of time if
it is just done with no purpose. In particular, one should never sacrifice
a good theory-value, capable of ensuring the model is "robust" in
replicating several scenarios, just to get a better calibration for a
specific scenario -- in which case, back to George's post, I would be ready
to direct my efforts to making the manager understand this: which is
better, a watch that is *always* one minute behind the real time, or one
which is stopped and gives the right time once a day?

Bottom line: optimisation can be very useful, and in my opinion is an
important development; automated calibration can be a very useful tool: it
can support optimisation and data-fitting; accurate data-fitting can be
very important in certain situations; and spending time on detailed
calibration can be important: it helps us to understand some important
dynamics, in particular to identify transition points possibly leading to
chaos.

All this should be applied in the right way, at the right time, and in the
right circumstances: the benefits must outweigh the costs. But that is not
really an SD issue, is it? I think it applies to everything.

Regards,

Alexandre Rodrigues
From: Alexandre J G P Rodrigues <
nop08219@mail.telepac.pt>
jm62004@Jetson.UH.EDU
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by jm62004@Jetson.UH.EDU »

This is a very interesting and useful discussion. I have a few
points/clarifications to add:

1. I agree the structure of the model is very important - but if its output
is not checked against real data, then this structure is as limited as the
mental model that produced it. I don't know if any research has been
conducted on this issue, but it seems to me that, given exactly the same
problem, different system dynamics modelers could come up with different
structures to explain it. There is no way to guarantee/predict what will
happen in the future, but we can have more confidence if the model explains
a) relevant historical behavior (so data-fitting/automated calibration is
important here); and b) the relationships used in the model are good (this
is where economists criticize SD; hence again data-fitting/automated
calibration is important here). If there are different SD models
(structures) using similar historical data but different structures, with
similar outputs, then I'd choose the model structure (as
physicists/mathematicians choose amongst competing theories) that is
simplest and most elegant.

Creating a model with fuzzy, untested relationships and/or fuzzy data is
okay too, at least to start with, because simulations can point us to the
need for better data/relationships/structure.

2. Data is very important too, as I mentioned in my post - the story I
remember is that Lorenz was working on the weather model, represented by
the now-famous three differential equations (the Lorenz system). Late
night, graduate student life - he makes a small change in one of the
parameters, goes out for a coffee, and comes back expecting nothing
earth-shattering. But there on the screen the whole pattern represented by
the state variables was completely changed and was totally unexpected
(emergence of chaos!!) - and the rest is history. So accurate data can lead
to new insights too. By the way, this was a serendipitous discovery, and
the parameter choice was based neither on the model nor on real data. As our
knowledge of nonlinear mathematics grows, hopefully we will have better
bounds on the parameters to test for chaotic behavior.

Bottom line repeat: both are important, sometimes structure gives more
insights, sometimes data does.

3. Thanks for your clarifications Alexandre on differentiating between
calibration and optimization. Benny mentioned that the structure in reality
is not fixed, so "optimal control" is not really optimal, and automation is
not the best way to get to it. I agree that the structures/relationships
change in reality, but I still maintain that search for optimality, for a
given fixed structure, is best done by automation. Here is how I think of
this:

What is the best control if structure is S1, data is D1
What is the best control if structure is S2, data is D1
What is the best control if structure is S3, data is D1
....

If data is fuzzy, D1 could be replaced by D2, D3, etc.. This "what is the
best if" is in contrast to the usual "what-if" that is done in SD. Search
for the "best control", for each of the above, is best done by automation.
The knowledge that results from repeated optimal search would be an order
of magnitude more (practically) useful than that obtained by
simulation-based policy analysis alone (call the above optimlations or
simulzations;-)), and they will get more useful as the
data/relationships/structure defining the models improve. Why do we need
them? Because we are dealing with large and complex systems, and just as
mental models are improved by SD, our search for better controls is
enhanced and bounded by (repeated) optimizations. [Stochastic optimal
control/dynamic games may be another, direct but more difficult, way to
deal with uncertain and changing realities]. Of course, as I have mentioned
before on the list, if the models are not dependable, optimization is
meaningless.
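The "what is the best if" table above can be mechanised directly. The
structures below are hypothetical one-line payoff functions standing in for
full models, and the 1-D grid search stands in for a real optimiser:

```python
# Candidate structures: each maps (control u, data d) to a payoff.
structures = {
    "S1": lambda u, d: -(u - d) ** 2,        # payoff peaks at u = d
    "S2": lambda u, d: -(u - 2 * d) ** 2,    # peaks at u = 2d
    "S3": lambda u, d: -abs(u - d / 2),      # peaks at u = d/2
}
datasets = {"D1": 1.0, "D2": 3.0}            # fuzzy data: try both

def best_control(payoff, d, lo=0.0, hi=10.0, n=1000):
    """Crude 1-D search for the control value with the best payoff."""
    return max((lo + (hi - lo) * i / n for i in range(n + 1)),
               key=lambda u: payoff(u, d))

# One search per (structure, data) pair.
table = {(s, name): best_control(f, d)
         for s, f in structures.items()
         for name, d in datasets.items()}
```

Comparing how the best control shifts across structures and data sets is
where the repeated optimal search adds knowledge beyond single what-if runs.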

I just think this is the current trend anyway - as computers and software
become more powerful, optimization will be the way to get better
solutions/productivity/profits etc. (By the way, optimal control may not be
the best way to do this optimization either - genetic algorithms/AI
techniques may be used, but I don't know much about them, though I recently
saw an interesting application from Brigham Young University, applied to
infrastructure systems.)

Thank you very much for your time

Jaideep
From: jm62004@Jetson.UH.EDU
****************************************

Jaideep Mukherjee, Ph. D.
Research Associate
Department of Industrial Engineering
University of Houston
4800 Calhoun Road
Houston, TX 77204-4812

Phone: 713 743 4181; Fax: 713 743 4190
****************************************
Eric Wolstenholme
Junior Member
Posts: 3
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by Eric Wolstenholme »

Bob, and many others who have responded to the optimisation debate.

I have personally used optimisation for policy design and compared it
with conventional policy design on the same model and would comment on
this experience as follows.

History.

There was extensive optimisation research carried out at the Bradford
Management Centre System Dynamics group in the mid 1980s, using the ideas
of Raimo Keloharju ( the grandfather of SD optimisation). Raimo was at
the Helsinki School of Economics and developed an optimisation procedure
which linked hill climbing routines to DYSMAP, the Bradford SD software
package.

Purpose.

Much of the recent debate has been on how to use optimisation in SD.
There are two main ways. Historical data fitting and policy design, both
parameter and structural.

Historical data fitting is a simple use, and the arguments for and
against are not new.

A much more powerful use of optimisation is in policy design, and
particularly in STRUCTURAL policy design. This can be achieved by building
up policy equations in models which contain the sum of all alternative
policies, multiplying each by a parameter. It is these parameters which are
given freedom to be optimised, relative to a range of objective functions.
Other parameters can also be optimised, including the coordinates of table
functions. Just as in conventional policy analysis, the idea is to
develop understanding as much as solutions.
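A minimal sketch of that structural policy design idea, with an invented
toy model: the decision rule is the sum of elementary policies, each
multiplied by a switch parameter. Here the 0/1 switch combinations are
simply enumerated; a real study would let an optimiser drive the parameters
against a range of objective functions.

```python
import itertools

def run(switches, steps=100):
    """Toy stock model whose ordering rule is a sum of elementary
    policies, each multiplied by a 0/1 structural switch parameter."""
    elementary = [
        lambda gap, trend: 0.4 * gap,     # correct the stock gap
        lambda gap, trend: 1.5 * trend,   # anticipate the demand trend
        lambda gap, trend: 0.5,           # constant top-up
    ]
    inv, cost = 90.0, 0.0                 # start 10 units under target
    for t in range(steps):
        demand = 10.0 + 0.05 * t          # slowly growing demand
        gap, trend = 100.0 - inv, 0.05
        order = demand + sum(s * p(gap, trend)
                             for s, p in zip(switches, elementary))
        inv += order - demand
        cost += (inv - 100.0) ** 2
    return cost

# Enumerate every 0/1 combination of the three switch parameters.
results = {s: run(s) for s in itertools.product((0, 1), repeat=3)}
best = min(results, key=results.get)
```

In this toy case the winning combination keeps the gap-correcting policy
switched on; the understanding comes from seeing which elementary policies
survive under different objectives.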

Application.

In my book System Enquiry (WILEY), I present the basics of SD
optimisation (Chapter 9) and then apply it to a defence model. This model
is analysed using intuitive policy design (Chapter 8) and then optimised
policy design (Chapter 10).

Optimised policy design is very powerful, but requires careful
application. My approach was to take two forces (red and blue) in a
conflict situation and to optimise the policies of each force under two
sets of objective functions. One set for red and one set for blue. The
understanding came from seeing how one force would react by changing
policy in response to changes in the objectives of the other. I also use
a whole range of performance measures in the output.

It is all in the book.

The conventional policy design used 12 runs of the model. The optimised
policy design used 8400 runs! I had 12 runs for each of seven objective
functions, and each optimisation run consisted of 1000 iterations, or
individual simulations of the model.

The skill and effort needed for optimised policy analysis are high, but
the gain in understanding can be significant.

The step from conventional analysis to optimised analysis is a little
like the step from qualitative to quantitative modelling.

I hope this is helpful to those of you considering embarking on this
strand of System Dynamics.

From: Eric Wolstenholme <
eric@cognitus.co.uk>
"F.S.Ali"
Junior Member
Posts: 2
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by "F.S.Ali" »

I am currently looking into the role of optimisation in system dynamics.
Is optimisation:

1 - a contribution to system dynamics, sharing its fundamentals, or
2 - a regression in ideas, where it is incommensurable with the system
dynamics methodology.

I welcome any feedback on the topic.

Fareen Ali
f.s.ali@lse.ac.uk

jm62004@Jetson.UH.EDU
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by jm62004@Jetson.UH.EDU »

In response to Alexandre Rodrigues:

When the systems are large and complex, as real systems tend to be, a
single optimization may not help - the arguments of why we employ
simulations in the first place would apply to optimizations too.
Traditional arguments are: you cannot control a real system based on
"traditional" mental models alone, hence you should understand it better,
hence you should do simulations, hence you should use SD, a great
simulation technique, and improve mental models and *then* be better able
to control. My response - if the real system is so large and complex, then
even simulations can only go so far in helping us with policy prescriptions
and optimization will be necessary. To get a real feel ("feel" because in
human systems, you can never be sure of "true optimality" - we are too
d...n chaotic) of "what is the best response that we can make, given what
we know", we may need repeated optimizations with different parameter
values and/or different structures (call them "optimlations" or
"simulzations" if you will). I am not talking here of sensitivity analysis
(changing parameter values either manually or machine-controlled), which is
often used as a proxy for true optimization (understandably so, since true
optimization, ie, optimal control and its extensions, for real systems is
not easy with current software/hardware - mathematical development has been
there for quite a while). It can be done - I am one living proof of that;
it is not easy - my graying hair is proof of this:-). My dissertation has
details, if anyone is interested.

To understand, build and extend models of real systems (with elements of
uncertainty, limits, delays, inefficiency, equity, etc.), simulations are
necessary. To control them, once you have done so *and* have a reasonable
set of models, optimization would be the way to go. Starting with
optimization on a shaky model is equivalent to GIGO (garbage in...)

Your idea of "wasting a few hours adjusting the last bits of some parameter
values just
to get it right! (while feeling confident in the model structure)" is the
reason why optimization may be so useful. Optimization would be the next
wave, once people get tired of simulation-based tinkering and realize that
optimal results (optimal control/dynamic games) could be vastly
better/different from simulation results. Simulation cannot die because
that is the way we learn and understand things. (By the way, optimization
is not new in the engineering sciences - it is only recent in the
socio-economic sciences, where the mathematical models can be highly
nonlinear, making optimization dangerous for one's health.)

Regards

Jaideep
****************************************
From: jm62004@Jetson.UH.EDU
Jaideep Mukherjee, Ph. D.
Research Associate
Department of Industrial Engineering
University of Houston
4800 Calhoun Road
Houston, TX 77204-4812

Phone: 713 743 4181; Fax: 713 743 4190
****************************************
"William Steinhurst"
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by "William Steinhurst" »

<snip>
> 3) In the formulation of model equations where there are issues
> requiring allocation or another type of choice. In this settings Linear
> Programs will often be used to run an optimization at each time a

I'm not quite sure whether this is a fourth application of
optimisation or a restatement of Bob's #3, but here goes:

4) Sometimes an actor in the system being modeled carries out an
optimization step, so that the system must contain an optimiser within it.
Consumer choice is one example; a firm's output and pricing decisions are
another. Biological and psychological systems will act to optimize certain
results. Of course, it may be that
these system optimizing components are not doing a very good job, but
you can picture an infinite regression of models within models for
each decision maker. Example: the electric utility sector of certain
energy models must reflect decisions by the electric utility managers
as to how much new generation capacity to place in the order
pipeline; if those managers are assumed to minimize the amount of
power they have to buy from others (for example), the system model
must include the computation those managers perform. A distribution
chain model (like the Beer Game but with the players being modeled)
would need to reflect whatever algorithm the retailer, distributor,
jobber, and brewery use to project their demands.
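That last example can be sketched directly: the modelled retailer carries
its own small optimisation step inside the decision rule. The cost figures
and demand numbers below are invented for illustration:

```python
def retailer_order(inventory, backlog, demand_history):
    """Embedded optimiser: the modelled retailer picks the order that
    minimises its own one-step cost estimate (holding vs stock-out),
    given a moving-average projection of demand."""
    projected = sum(demand_history[-4:]) / min(len(demand_history), 4)
    def cost(order):
        position = inventory + order - projected - backlog
        holding = 0.5 * max(position, 0.0)    # cost of excess stock
        stockout = 2.0 * max(-position, 0.0)  # cost of unmet demand
        return holding + stockout
    return min(range(0, 41), key=cost)        # search feasible orders 0..40

order = retailer_order(inventory=8, backlog=2, demand_history=[10, 11, 12, 13])
```

The system model must contain this computation because the real decision
maker performs it; whether the embedded optimiser is any *good* is, as
noted above, a separate question.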

As to Bob's first item--model parameter fitting--the maximum
likelihood estimation used in the Energy 2020 models from Systematic
Solutions and Policy Assessment Corp. is documented in the model
manuals.

Bill Steinhurst
From: "William Steinhurst" <
wsteinhu@psd.state.vt.us>
ularoch@ibm.net
Junior Member
Posts: 16
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by ularoch@ibm.net »

hi bob, hi everybody,

from the humble workplace of practical business simulation we have a
slightly different view on models and optimisation.
first, the models represent real process-chains.
second, the problem to be solved is how to control (steer) for best margin
and deliverability, or other target values involved.
as a result we find that about 50% or more of improvement is due to good
control, but the practical models are usually highly nonlinear, so up to
now it is more the business know-how of client and expert that gives
(measurable) results than any clever algorithm.

// yours sincerely ulrich la roche
From: ularoch@ibm.net
fast focus consulting
heilighuesli 18, CH-8053 Zuerich,
switzerland
fax +411 382 1349
"George Backus"
Member
Posts: 33
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by "George Backus" »

Jim Hines noted: "A good use of time is to explain to people why fitting a
model to data is not a good use of time."

From a "pure" system dynamics perspective, I agree. But most executives do
not respond well to preachiness that they need a religious conversion to
the SD "vision." They have dominating political needs. Our experience is
that the value of attempting to fit the model to historical data usually
lies in uncovering unique features of a company or its markets that would
have killed the project should someone else have noticed them and then
pointed out how obviously insidious the model must therefore be. The
"discovery" usually wins big credibility for the development staff because
it provides a concrete example rather than a "soft-fuzzy" insight. (Karl
Popper and the scientific method of inquiry still hold up well in tight
spots.) The exact match to history is tedious (expensive and
time-consuming), dangerous (you do not want to change critical
parameters/structures just to catch a transient mechanism that was and
should be left out of the model), and, other than for the above-noted
validity check, it adds nothing to the conclusions. It does, however, cause
a dramatic buy-in by the other departments in the company -- for the wrong
reason, but this is war, dollars and politics, not academia or religion.

With cross-departmental buy-in, the model is internalized and becomes a
permanent fixture of the company infrastructure. It is the reference from
which all other analyses are judged - for this purpose, right and wrong
results no longer even have meaning as long as results can be rationalized.
We have models that are going on twenty years of use (with annual updates)
that still are a primary part of company/government decision making - but
never the only part. (Roger Naill's and my work with FOSSIL2/IDEAS is but
one SMALL, publicly-known example of this process.)

We have often been told by senior executives after "battles," that if the
model had not shown the exact number for history, its credibility would
have dropped to zero and the cause would have been lost - with great
embarrassment and "real-estate" damage to the defending executive. This
phenomenon is clearly due to ignorance on the part of the
alternative-agenda adversaries, but turf wars and the inability to "force"
minds open are real. To lightly argue from an SD perspective that we can
prove that our
clients are idiots, does nothing to further the actual goal of imbedding SD
thinking within an organization and then letting it infect the population
as the opportunities arise (or as the resistance to the "SD affliction"
weakens).

If I want to "prove" that I understand the future, I must prove I
understand history. I cannot prove either one, but I can convince them of
my "truth" if I show them their "truth" (history) and my "truth" (the
future dynamics) are the consequence (child?) of one another. Obviously,
this is the bias of my work. If you want to change how the world thinks,
the simplest of SD concepts is a gigantic leap of wisdom for most
individuals and, as such, the detailed history maintains its
insignificance. But if you want to change what the world does today (I am
talking down-and-dirty, in-your-face, myopic here), past experience
(history) is everything to a human. Check out your own lives; that which
does not match your experience is rejected (or minimally viewed with great
skepticism). Therefore, the "seductive power" of matching history is, to
my distorted past experience, more critical to the SD impact on
decision-making than any Nobel-Prize-worthy feedback loop that gives the
ultimate insight.

George Backus
From: "George Backus" <
gbackus@boulder.earthnet.net (George Backus)>
Policy Assessment Corporation
14604 West 62nd Place
Arvada, Colorado USA 80004-3621
Bus: +1-303-467-3566
Fax: +1-303-467-3576
"Carlos Kirjner Neto"
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by "Carlos Kirjner Neto" »

There are potential applications of optimization and optimal
control methods to dynamical systems that go a bit beyond
adjusting model parameters and computing optimal inputs. Three
potential, "non-traditional" applications of optimization and optimal
control methods to dynamical systems in general, and business
dynamics models in particular, include:

1) Establishing limits on performance. It is quite common that the
optimal solution obtained off-line via computational methods
cannot be implemented in real life. However, a computed
optimal solution will provide a limit on the performance of the system
against which proposed, implementable strategies can be measured.
It helps one answer the questions:

a) when will the search effort be disproportionately larger than the
potential upside?

b) assuming that the current model is right, is X achievable in the
very best case scenario?

2) Sensitivities. Multipliers and adjoint states can often be used to
obtain an indication of which parameters in the system really affect the
aspect of system behaviour you are most interested in. Looking at them
carefully can help one answer the question: which parameters should I
tune really carefully, and which parameters won't really affect the
bottom line?
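Setting the adjoint machinery aside, the flavour of (2) can be shown with crude finite-difference sensitivities on a toy first-order stock-adjustment model. Everything here (the model, the parameter names, and the values) is invented purely for illustration:

```python
# Toy first-order stock-adjustment model (hypothetical; illustration only).

def simulate(adjustment_time, target, dt=0.25, horizon=20.0):
    """Euler-integrate a stock that closes the gap to `target`."""
    stock, t = 0.0, 0.0
    while t < horizon:
        stock += dt * (target - stock) / adjustment_time
        t += dt
    return stock

def sensitivity(adjustment_time, eps=1e-4):
    """d(final stock)/d(adjustment_time), by central differences."""
    hi = simulate(adjustment_time + eps, target=100.0)
    lo = simulate(adjustment_time - eps, target=100.0)
    return (hi - lo) / (2 * eps)

# A large-magnitude sensitivity flags a parameter worth tuning carefully.
for at in (2.0, 5.0, 10.0):
    print(at, simulate(at, 100.0), sensitivity(at))
```

For a real model one would use adjoints (one extra integration per objective, rather than two full runs per parameter), but the finite-difference version makes the idea concrete.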

3) Modelling mistakes/inadequacies. When trying to compute optimal
solutions of considerably difficult problems, one will often find that
the computed optimal solution is correct in the sense that the algorithm
worked, but obviously incorrect in the sense that it does not correlate
with the behaviour expected and/or observed from the system modelled.
Often the reason for this is a bad model (structure, values of
parameters, constraints, etc.). Looking at the computed optimal solution
is a way to diagnose what aspect of the system behaviour the model
failed to capture.

Although optimization, differential equations, and systems theory (linear
systems, control systems, etc.) have very little to do with the art of
modeling, if used appropriately they can be valuable tools for those
trying to understand models and extract from them the maximum they can
tell us about the underlying systems and phenomena.

Carlos
From: "Carlos Kirjner Neto" <Carlos_Kirjner_Neto@MCKINSEY.COM>
Benny Budiman
Junior Member
Posts: 3
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by Benny Budiman »

Greetings:

Deja vu! We've had this type of discussion before, two years ago if I
recall correctly.

Regarding Jim Hines's comment:
>What is the value of "adjusting the last bits of some parameter values
>just to get it right! (while feeling confident in the model structure)"?
>If, as Alexandre says, it's "wasting time" doing it by hand at this
>stage, it's also wasting time to do it automatically.
Right on the money!!! Computers are really good at producing streams of
useless data: garbage in, garbage out!!! So it's more important to improve
the structure of the model than it is to "massage" the equations or
parameters!!!

One can play around with parameter values as much and as long as one
wants, but such an effort will be fruitless: the model lacks robustness
and, worse yet, is only applicable to the specific conditions that held
in the past! No learning about why things happened will occur! When
there are anomalies in the model's behavior compared to data, check the
structure first and see whether one's assumptions are reasonable and
whether the structure one used is a representation of "reality". This is
where learning occurs, as one can refine one's mental model of why
things happened! A much better use of everybody's time and money!

JM wrote:
>First: you (intuitively or otherwise) know that the system behavior is very
>sensitive to 2 or 3 critical parameters - (let us say the system is highly
>nonlinear and chaotic). Let us assume: if A is between 0 and 1, B is
Model-wise, or based on physical observation? Many times, the "erratic"
system (model) behavior can also be the result of a modeling assumption.
For those who are technical: you can observe such behavior when modeling
a distributed-parameter (continuous) system as a lumped-parameter
(discrete) system, e.g. heat conduction and convection with a travelling
heat source. So one can better deal with the odd behavior of the model
by reformulating the model itself.

JM wrote:
>Second, and *much more* important: what if your controls are (nonconstant)
>functions themselves, and not (constant) parameter values? Let us say you
>get ideal system output (maximum profits, minimum risk etc.) when A =
>exp(-time). We have no way of knowing a priori if A is exp(-time) or
>exp(-time/2) or "time" or a constant 0.6, etc. My point: looking for
>optimal controls is best done by automation; the current approach of
Not true! Optimal control is based on the premise that the "structure"
governing the dynamics is fixed! This premise is not always true in a
business environment, in which government regulations, business
practices, and technology, for example, can and will change. Even the
classic optimal control problem, minimum time and fuel (cost resources),
assumes that the plant is fixed! The objective is then to search for the
input (policy/decision) that attains the objective function. My point
here is that if any of the conditions of the plant change, then the
optimum solution will no longer yield the desired results.
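This caveat can be made concrete with a toy example (plant, numbers, and names all invented for illustration): a policy brute-force-tuned against one plant loses its optimality the moment the plant changes.

```python
# Hypothetical one-step "plant": output = gain * policy, quadratic cost.

def cost(policy, plant_gain, target=10.0):
    """Tracking error plus a small penalty on control effort."""
    output = plant_gain * policy
    return (output - target) ** 2 + 0.1 * policy ** 2

def best_policy(plant_gain, candidates):
    """Brute-force search for the cheapest policy on a fixed plant."""
    return min(candidates, key=lambda p: cost(p, plant_gain))

grid = [i * 0.1 for i in range(201)]   # candidate policies 0.0 .. 20.0
tuned = best_policy(1.0, grid)         # optimised assuming gain = 1.0

# A "regulatory change" halves the gain; the once-optimal policy is now
# beaten by re-optimising against the changed plant:
print(cost(tuned, 1.0), cost(tuned, 0.5), cost(best_policy(0.5, grid), 0.5))
```

The tuned policy is exactly optimal only while the assumed gain holds; once the plant drifts, the computed optimum becomes just another candidate to re-evaluate.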

JM wrote:
>I am not sure when you [Jim Hines] say:
>
>> A good use of time is to explain to people why fitting a model to data
>>is not a good use of time.
>
>Why not???? I thought that was one very good way to "validate" system
>dynamics models, otherwise why would anyone believe in them? Please explain
>- I may be missing your point.
So far, by a few miles! Validation is a part of the process. See
"Tests for Building Confidence in System Dynamics Models" by Forrester
and Senge in Modelling for Management II (Richardson, ed.), pp. 413-432.
I assume you haven't read it, no?


---------------------------------
Benny Budiman
benbu@powersim.com
703-707-6421, 703-481-1271 (fax)
Powersim Corporation
The Business Simulation Company
www.powersim.com
---------------------------------
Robert J Walker
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Optimisation in System Dynamics

Post by Robert J Walker »

The discussion thread on optimization (understood as primarily the
"goodness of fit" in re-creating observed system performance) has been
fascinating.

What I can contribute is certainly no conceptual breakthrough, but
rather the lessons of a difficult five-year hill climb to gain the
acceptance of System Dynamics methods in Bell Canada. In retrospect it
is all too apparent that the chain of fidelity -> confidence ->
support -> influence decisions has been THE dominant mechanism all along.

Our experience has been deja-vu with respect to a recent contribution by
George Backus:
>...if you want to change what the world does today (I am talking
>down-and-dirty, in-your-face, myopic here), past experience (history) is
>everything to a human.

In addition, George has it right on the money that...
>Executives don't respond well to preachiness that they
>need a religious conversion to the SD "Vision".

In fact, they soon don't return your calls at all.

The SD Community seems to have spawned a profound dichotomy in approach
which generally divides the academics/consultants from the commercial
users. I believe this is most eloquently described by a favorite quote
of mine:
-------------------------------------------------
Prophets vs. Leaders

History bears witness to the vital part that "prophets" have played in
human progress - which is evidence of the ultimate practical value of
expressing unreservedly the truth as one sees it. Yet it also becomes
clear that the acceptance and spreading of their vision has always
depended on another class of men - "leaders" who had to be philosophical
strategists, striking a compromise between truth and men's receptivity
to it. Their effect has often depended as much on their own limitations
in perceiving the truth as on their practical wisdom in proclaiming it.

The prophets must be stoned; that is their lot, and the test of their
self-fulfillment. But a leader who is stoned may merely prove that he
has failed in his function through a deficiency of wisdom, or through
confusing his function with that of a prophet. Time alone can tell
whether the effect of such a sacrifice redeems the apparent failure as a
leader that does honour to him as a man. At the least, he avoids the
more common fault of leaders - that of sacrificing the truth to
expediency without ultimate advantage to the cause. For whoever
habitually suppresses the truth in the interests of tact will produce a
deformity from the womb of his thought.

B.H. Liddell-Hart
Strategy - The Indirect Approach
--------------------------------------------------

Five years ago I began the task (as a prophet) of preaching The Fifth
Discipline (as well as I could, being a newcomer). A few small
successes, little diffusion and no widespread acceptance, certainly NONE
at high levels.

One year ago, just after ISD96 in Cambridge, I began to sell (as a
leader) the idea of a comprehensive, issue-based, industry-wide SD model
of Bell Canada's business environment. It was to anticipate the impacts
of a huge discontinuity in our form of regulation and the onset of full
competition in Local services. After seven months of selling and five
months of doing we have exceeded our wildest expectations.

With the terrific help of the folks at Pugh-Roberts we now have a fully
dynamic model of selected aspects of our business and its key leverage
points. Confidence in (not validity of) this model is derived in LARGE
part by its ability to re-create 9 years of prior history with high
fidelity. To do this we expended huge amounts of energy "sweating the
details", including dozens of one-time-event exogenous inputs which add
little to dynamics but legitimize the look and feel of the results.

As a consequence we have achieved the ultimate in buy-in. The COO is now
the model "owner" and all major initiatives have to be pre-tested using
it. I now lead a permanent organization to "do SD full-time" at a
funding level that is double our first year investment. Several more
traditional modeling efforts have been stopped or "downsized out" since
we started.

More importantly, the interest level in System Dynamics among key
managers is growing exponentially as measured by our voice and e-mail
volume. This is a pull effect resulting from the senior level buy-in. I
expect that the diffusion process may now take off, and generate tons of
important new learning as it does. Ultimately, I still believe this
learning will constitute the sustainable value of what we've started.

Hope this adds some value to the current debate. Feedback welcome.

Bob Walker
Director - Performance
Bell Canada
rjwalker@on.bell.ca
Locked