
Can SD models be validated without being run?

Posted: Fri Jan 16, 2004 9:24 am
by "Paul M. Konnersman"
George Backus wrote:

> But of course, the sun never rises. Never has. Never will.

I found this claim rather shocking. It seems to me that the sun does
indeed rise every morning. I have confidence that it does even when I
can't or don't personally witness it. Of course, like everyone else who
believes as I do, I am assuming a relative position of observation on
the surface of the earth and an implicit, "with respect to the
horizon." This is a mental model that is utilized quite frequently by
even the most scientifically sophisticated people. There is nothing
wrong with the model. It does not seem to me to be false or invalid in
any way.

Even if the statement is reconfigured into a Ptolemaic v. Copernican
issue, it seems to me unwarranted to say or think that an earth-centric
model is false or invalid. It may not be as /useful/ for many purposes
to think in terms of an earth-centric solar system, but it is just as
legitimate.

Paul Konnersman
From: "Paul M. Konnersman" <konnersman@comcast.net>

Can SD models be validated without being run?

Posted: Fri Jan 16, 2004 9:50 am
by Jean-Jacques Laublé
Hi John (Gunkler)



About the few variables that supposedly explain the apparently chaotic
behaviour of the influence of satisfaction on productivity: they are not so few.

There are family, educational, character, financial, professional,
environmental and personal variables (age, for instance), etc. So you end up
with at least 50 factors of influence that may represent reality better.

Not to mention the kind of satisfaction (satisfaction with the level of
salary, the kind of job, the level of expectation for the future of one's job
in the company, one's colleagues, one's boss, the way one is judged, etc.),
and there can be external factors (wife, health, etc.)



Second point: you do not need a model to be fooled.

I have had excellent ideas coming from reading, from other people, or from
myself, or from anything else that did not come from modelling, and that
turned out to be very bad ideas later on.



About the prevention of being fooled by a model, I think it is a matter of
experience. Lots of things on earth can be damaging or useful. You can have
an accident with a car.

If you have had good experiences using SD, then you will be more confident
in its usefulness. Of course these first experiences can be bad and may be
misleading.



When I read Business Dynamics I found it very interesting, and thought
about looking for a consultant. But being in an average company, I was
afraid of finding a novice. Experienced consultants have no time to spare
for a new middle-sized company: you see the expert at the beginning of the
consultancy, and after a while the job is done by a junior. On top of that I
would have had to go to Paris, 500 kilometres away. So I preferred to study
it myself.



The real problem comes from the time it takes from the beginning of a model
until you can see the results and can then weigh its eventual benefit.

For me the only validation is when you can weigh the costs and effort of
the modelling process against its outcome.



You can follow a two-day course to learn to use a spreadsheet and get the
benefit from it right after the course. The problem with SD is that, instead
of days, you have years.



I do not yet know the level of utility of SD, which I started studying two
years ago.

It will take me several years to be able to have a judgement that comes from
experience, but my reason for carrying on with the experiment is my belief in
the principle of causality. As long as causality governs our world, any
system that strictly respects this principle, if it is used intelligently
enough, should improve our understanding of it.

Regards.



J.J. Laublé
From: Jean-Jacques Laublé <JEAN-JACQUES.LAUBLE@WANADOO.FR>

Allocar, Rent a car company

Can SD models be validated without being run?

Posted: Sat Jan 17, 2004 10:09 am
by "George Backus"
Paul Konnersman writes: "Even if the statement is reconfigured into a
Ptolemaic v. Copernican issue, it seems to me unwarranted to say or think
that an earth-centric model is false or invalid. It may not be as /useful/
for many purposes to think in terms of an earth-centric solar system, but
it is just as legitimate."

I think this goes directly back to what John Sterman is trying to tell us: A
model is to be useful. A "legitimate view" perspective just gets caught in
paradigm mind games. If the model (problem) we are trying to address is
relatively insensitive to the values we assign to the "sun" parameters,
or in other words, the "sun" dynamics are peripheral to our model, then
there is no need to waste time on ensuring we understand the sun. If the
model (problem), however, is of the solar dynamics or the earth-sun
dynamics, then the geocentric view is void of causality and dynamic
understanding. It only deceives and misguides. It is useless. It is not
SD. SD is to enlighten, not obscure ... and we can only be enlightened
by running our model and comparing its results to the data. I still contend
that the model-data-revision loop is the most important loop in SD.

George Backus
George_Backus@energy2020.com

Can SD models be validated without being run?

Posted: Sun Jan 18, 2004 6:50 pm
by "Kim Warren"
A big thank you to all those who have responded to this enquiry - I
have learned a huge amount from the discussion, not just the direct
answers to the specific question, but also concerning a whole range of
important related issues.

Kim
From: "Kim Warren" <Kim@strategydynamics.com>

Can SD models be validated without being run?

Posted: Tue Jan 20, 2004 7:59 pm
by Bill Harris
"Paul M. Konnersman" <konnersman@comcast.net> writes:
> I found this claim rather shocking. It seems to me that the sun does
> indeed rise every morning.

You must have never lived in the Pacific Northwest! :-) It seems to
rise here about 30 days out of the year; the rest of the time I presume
it's lolling on the beach somewhere in warmer climes.

Bill
--
From: Bill Harris <bill_harris@facilitatedsystems.com>

Bill Harris 3217 102nd Place SE
Facilitated Systems Everett, WA 98208 USA
http://facilitatedsystems.com/ phone: +1 425 337-5541

Can SD models be validated without being run?

Posted: Wed Jan 21, 2004 7:40 pm
by "Jim Hines"
George Backus says: "Don't we need to test all ideas? In SD, we RUN models
to do that testing. We have no choice but to compare our runs to the real
world (i.e. data)."

First, testing is good, but so is going to a movie every once in a while
or deciding to move on to another idea. We do have a choice as to
whether (and how seriously) we test an idea. Having the choice is
important because no one has time to seriously validate more than a tiny
fraction of the ideas she holds. Life is short and testing is long.

Second, RUNNING models is only one way to test SD ideas. For example,
if you have the idea that giving in to your spouse will end an
"escalation" type argument, its probably better to just try that policy,
rather than building and validating a simulation model.

Third, you can run models for reasons other than validating ideas. For
example, you can use simulation models to come up with ideas in the
first place -- like the idea to give in to your spouse.

(Incidentally, the giving-in policy works -- if you not only pick up your
sox but also do the dishes when she (or he) gets on your case, you'll be
amazed at the ensuing domestic harmony. Unfortunately, continuing
implementation of the policy can be elusive.)

Jim Hines
jhines@mit.edu

Can SD models be validated without being run?

Posted: Thu Jan 22, 2004 10:10 am
by "Thompson, James. P (Jim) A142"
I do system dynamics for a living and have some thoughts on Kim's question.


In Industrial Dynamics, Jay Forrester observed that, for the most part,
behaviour of a complex system is difficult to understand and, therefore, it
is difficult to predict how activities work to change conditions and how
conditions influence those activities. He suggests that by following a
particular method we can improve our understanding, make better predictions,
and then make improvements to a complex system. A crucial part of his
method was to write a simulation model to see if its behaviour is similar to
the problematic behaviour. I havent seen a change over the past 45 years
to persuade me that Forresters observation about capabilities of the
unaided mind was incorrect or that his method, when followed faithfully,
produces less than what he initially suggested. In other words, to validate
our understanding of a complex system (simulation model), I think we still
have to simulate.

All system dynamics simulation models are used to make predictions. Those
built to explain some observed phenomena help us to predict that the
observed causes and conditions will generate the same results should they or
some in a similar range recur. So matching model output to measured
observations is a very important step toward developing confidence in one's
model (thinking).

We use different types of models in system dynamics. One of the most
powerful is a written verbal description (model) of activities that cause
conditions to change and how conditions change systemic activity. When
written plainly, the description helps people with different backgrounds to
match their experiences (and measured data) to simulation output.

Most of my colleagues understand verbal descriptions more quickly and better
than equations and maps. So a verbal description (model) that contains some
familiar measured and simulated data helps to socialize logic and reasoning
that may be different from theirs.

Jim Thompson
Director, Economic & Operations Research
Cigna HealthCare
900 Cottage Grove Rd. A142
Hartford, CT 06152
jim.thompson@cigna.com
Tel. 860.226.8607
Fax 860.226.7898

Can SD models be validated without being run?

Posted: Mon Jan 26, 2004 11:56 am
by "Jim Hines"
I appreciated Raymond Joseph's response to my question whether
statistical estimates have any rational claim on our attention other
than widespread use and anecdotal evidence of usefulness.

I interpret his answer as being "no", which may seem odd (to him
especially) because he wrote "yes". But, Ray argues for the usefulness
of statistics by sharing his own experiences -- that is, he provides
anecdotes.

My question wasn't whether people have good stories (anecdotes)
concerning statistics. My question was whether there's any other reason
to use them **other than** these anecdotes. I like anecdotes -- tell me
a good story and I'm yours. I'm just wondering whether there is an
unbroken logical argument from first principles to the use of
statistical methods in real applications. I'm guessing the path always
has a logical gap -- at the transfer to practice -- and so using
statistics always involves a true leap of faith.

But, I don't know this for sure, and so I'm asking people on the list.
I'm asking people like Raymond Joseph, Bob Eberlein, David Peterson,
George Backus -- what about it, guys? Is there any real reason -- beyond
widespread practice and anecdotal evidence -- to run experience through
statistical methods as opposed to running it through, say, a good story
telling machine?

Jim
From: "Jim Hines" <jhines@mit.edu>

Can SD models be validated without being run?

Posted: Tue Jan 27, 2004 8:40 am
by Michael McDevitt
As in modeling, what is the <purpose> of the model or
in this case the purpose of the statistic?

Usually, a statistic is used to disprove a hypothesis or
prove that a model is wrong. Since we know that <all models
are wrong>, the question now becomes one of utility or
<usefulness>. When the proof that the model is wrong is
not very compelling, i.e., when the model's results correspond
closely to the <real world>, the model may be useful. In fact,
the model may still be useful in any case. So what?

How close must the model be to reality? How do you measure it?
What is your confidence that the model's results are not the
result of random chance -- that the model just happens to
correspond because you got lucky this time?

Let's return to the <purpose> for that answer. How close is
good enough? That may depend on the customer. A statistic
can be comforting to a customer, depending on his/her scientific
paradigm. Because, after all, isn't your model the best theory
of how your system works? The patterns of behavior in the model
and the real world should correspond in some fashion - the
statistic only tells us how closely they correspond, and with
what level of confidence we can say the correspondence is not
likely to be based on chance.
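
As an illustration only -- a minimal sketch in Python, with invented
numbers rather than output from any real model -- such a statistic
might look like this:

    import numpy as np
    from scipy import stats

    # Invented behaviour patterns: simulated and observed values of the
    # same variable over ten periods.
    simulated = np.array([5.0, 6.2, 7.9, 9.8, 11.4, 12.1, 11.8, 10.9, 9.7, 8.8])
    observed  = np.array([4.7, 6.5, 7.4, 10.2, 11.0, 12.6, 11.5, 11.3, 9.2, 9.1])

    # How closely do the patterns correspond, and how likely is a
    # correspondence this close to be pure luck?
    r, p = stats.pearsonr(simulated, observed)
    print(f"correlation r = {r:.2f}, chance-agreement probability p = {p:.4f}")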

All the Best,

Mike McDevitt
CACI Dynamic Systems Inc.
San Diego, CA
From: Michael McDevitt <mmcdevitt@caci.com>

Can SD models be validated without being run?

Posted: Tue Jan 27, 2004 10:06 am
by "Dan Goldner"
Jim asks, "Is there any real reason -- beyond widespread practice and
anecdotal evidence -- to run experience through statistical methods as
opposed to running it through, say, a good story telling machine?"

Why choose? Why not do both? As this thread has described, we build
confidence in our understanding, and increase our understanding, by
comparing model behavior to what we already know, searching for
discrepancies, and resolving them. "What we already know" is stored in
stories, and in numerical data. Model behavior that is inconsistent with
either is a topic for further investigation.

For both stories and data, however, one faces the question, how big does
a discrepancy have to be before you reject the current version of your
model? In the case of numerical data, statistical checks provide an
answer. I do believe that even these checks require a leap of faith:
they are all based on the idea that if you somehow could "repeat the
experiment" a large number of times, you would observe different values
for the data, that these values would be related to one another in a
distribution, etc. You have to buy that to buy statistical checks, and
that premise is more hypothetical in some cases than in others.
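
For instance -- a minimal sketch in Python, with made-up numbers, just to
illustrate the kind of check I mean -- under the repeat-the-experiment
premise the residuals between model and data should look like zero-mean
noise, and a standard test quantifies how big a discrepancy is too big:

    import numpy as np
    from scipy import stats

    # Invented model output and observed data for the same 12 periods.
    model = np.array([10.0, 11.5, 13.2, 15.1, 17.2, 19.4,
                      21.8, 24.2, 26.7, 29.2, 31.6, 33.9])
    data  = np.array([ 9.6, 12.1, 12.8, 15.9, 16.8, 20.1,
                      21.2, 25.0, 26.1, 29.9, 31.1, 34.5])
    residuals = data - model

    # Premise: if the experiment could be repeated, residuals would be
    # draws from a zero-mean distribution.  Test whether the observed
    # residuals are consistent with that premise.
    t_stat, p_value = stats.ttest_1samp(residuals, popmean=0.0)
    print(f"mean residual {residuals.mean():+.2f}, p = {p_value:.2f}")
    # A small p flags a systematic discrepancy worth investigating; a
    # large p means these data give no grounds to reject the current model.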

Now, to apply them, one must make the leap of faith described above, and
a host of assumptions along the way. But in the same way that writing a
model makes mental models explicit, and therefore easier to share and
easier for others to challenge productively, so statistics makes the
confidence-building exercise explicit, easier to share, and easier to
challenge productively.

In short, stories and data are both imperfect records of experience.
Building confidence in a model means testing it for consistency with
that record. Statistics helps organize and document some of that
process.

Dan Goldner
From: "Dan Goldner" <dan@vensim.com>
702.735.4310
ventanasystems.com

Can SD models be validated without being run?

Posted: Tue Jan 27, 2004 2:11 pm
by George A Simpson
In response to Jim Hines' question "Is there any real reason -- beyond
widespread practice and anecdotal evidence -- to run experience through
statistical methods as opposed to running it through, say, a good story
telling machine?" I can offer this compelling argument, phrased as a
story:

My company faces swingeing penalties if we do not deliver the performance
set out in our service level agreement.
Statistical tools (and a good deal of modelling) allow me to perform
cost-benefit analyses of risk vs cost of risk mitigation.
Without the statistical tools, I would be exposing us to a higher than
necessary risk of penalties.
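
For instance, here is a minimal Monte Carlo sketch in Python of the kind
of cost-benefit comparison I mean; the penalty, threshold, and downtime
figures are invented for illustration, not our actual SLA terms:

    import random

    # Invented figures: a penalty is paid in any month where downtime
    # exceeds the agreed threshold.
    PENALTY = 50_000.0           # cost per monthly breach
    THRESHOLD = 4.0              # allowed downtime, hours per month
    MITIGATION_COST = 30_000.0   # annual cost of the mitigation measure

    def expected_annual_penalty(mean_downtime, trials=20_000):
        """Monte Carlo estimate of the expected yearly penalty cost."""
        total = 0.0
        for _ in range(trials):
            for _month in range(12):
                downtime = random.expovariate(1.0 / mean_downtime)
                if downtime > THRESHOLD:
                    total += PENALTY
        return total / trials

    random.seed(1)
    base = expected_annual_penalty(mean_downtime=2.0)     # no mitigation
    better = expected_annual_penalty(mean_downtime=1.2)   # with mitigation
    print(f"expected penalties, no mitigation:   {base:10,.0f} / year")
    print(f"expected penalties, with mitigation: {better:10,.0f} / year")
    print(f"net benefit of mitigating: {base - better - MITIGATION_COST:10,.0f} / year")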

Perhaps this is the argument in a nutshell for SD - with (good) SD models,
you can lower the frequency of undesirable outcomes.

From: George A Simpson <gsimpso4@csc.com>

Can SD models be validated without being run?

Posted: Tue Jan 27, 2004 2:40 pm
by Alan Graham
Jim Hines asks whether there's a rigorous argument that statistical
analysis can or can't be used in lieu of simulation to validate a model.

This discussion chain seems to have been proceeding on the assumption
that statistical methods can be applied independently of simulation.
Except for some extraordinary cases, that's just not true for SD models.

Statistics without simulation can work only if theres a fairly ideal
set of circumstances and data. If there is perfect data for all drivers
of all equations (and you know a priori the functional form), and the
drivers to every equation are uncorrelated, sure, regress away.

Depart from ideal conditions, and regression gets messy very quickly.
Part of the art of classic statistical analysis is choosing a problem
area for which the data are good and known analytical band-aids
compensate for the flaws.

In the applied system dynamics world, the model purpose is fixed, the
data are almost always incomplete and flawed, and feedback makes it
highly likely that drivers are correlated. The rigorous and practical
statistical method in this situation is full-information maximum-
likelihood estimation through optimal filtering (FIMLOF). See David
Peterson's article in Jorgen Randers (ed.), Elements of the System
Dynamics Method (1980) (and, I expect, the Vensim manual).

Now here's the punch-line: FIMLOF requires simulation of a particular
sort anyway. (I've appended a two-paragraph description of FIMLOF.)
So I'd conclude that, except for unbelievably simple cases, one needs
to simulate in some fashion to validate against data.

(Some readers might find interesting the Appendix of my conference
paper in the 2002 (Palermo) conference proceedings, which argues that
the customary iteration of the experiment-and-diagnose approach to model
validation (and parameter adjustment) in fact provides a heuristic
equivalent to FIMLOF estimation and validation.)

cheers,

alan

Alan K. Graham, Ph.D.

Decision Science Practice
PA Consulting Group
Alan.Graham@PAConsulting.com
One Memorial Drive, Cambridge, Mass. 02142 USA
Direct phone (US) 617 - 252 - 0384
Main number (US) 617 - 225 - 2700
***NEW*** Mobile (US) 617 - 803 - 6757
Fax (US) 617 - 225 - 2631
***NEW*** Home office (US) 617 - 489 - 0842
Home fax (US) 978 - 263 - 6861

Brief description of FIMLOF (from Graham, On Positioning System Dynamics as an
Applied Science of Strategy, 2002 International System Dynamics Conference, Palermo, Italy)

The heart of the FIMLOF algorithm is the "predict-correct" cycle: starting
from estimated values of the level variables, use the model equations to
simulate forward to the next time for which real data are available. The
"predict" part is then using the model equations to predict what the observed
data "should" be a priori for that time, i.e. the estimated observation given
the estimated levels from the previous time. Of course, the real data will
differ from the estimate, because of random noise driving the dynamics and
random noise corrupting the data. Those differences are called the residuals.
Standard Bayesian estimation can use the model equations and the residuals
for that time to calculate an a posteriori estimate of the level variables
for that point in time. Those estimates are the starting point for the next
predict-correct cycle. So the algorithm described thus far takes a stream of
observations, and produces a stream of estimated level variables and a stream
of residuals. This is the Optimal Filtering (OF) part of FIMLOF.

Full-information maximum likelihood estimation backs into parameter values by
calculating, for a given sample set of real observations and their residuals,
the parameters that maximize the probability density of that aggregate sample.
It turns out that the logarithm of the probability density, the likelihood,
is a function of a quadratic weighting of the residuals. Therefore, the
Full-Information Maximum Likelihood (FIML) estimate of the parameter values
is a particular Weighted Least Squares (WLS) parameter estimate, which can be
found by standard search methods.
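
A minimal sketch of this predict-correct cycle for a one-level model, in
Python. The level equation, noise variances, and data below are invented
for illustration; this is not code from any published FIMLOF implementation:

    import numpy as np

    # Hypothetical one-level model: the level L adjusts toward a goal
    # with time constant tau; process noise drives the dynamics and
    # measurement noise corrupts the observed data.
    dt, tau, goal = 1.0, 5.0, 100.0
    q, r = 4.0, 9.0          # assumed process / measurement noise variances
    rng = np.random.default_rng(0)

    # Generate synthetic "real" data from the same structure.
    true_L, data = 20.0, []
    for _ in range(30):
        true_L += dt * (goal - true_L) / tau + rng.normal(0.0, np.sqrt(q))
        data.append(true_L + rng.normal(0.0, np.sqrt(r)))

    # Predict-correct cycle (the Optimal Filtering part).
    L_hat, P = 20.0, 10.0    # level estimate and its variance
    log_lik = 0.0
    for y in data:
        # Predict: simulate the model equations forward one step.
        L_pred = L_hat + dt * (goal - L_hat) / tau
        a = 1.0 - dt / tau   # linearized state transition
        P_pred = a * P * a + q
        # Residual: real datum minus the a priori predicted observation.
        resid = y - L_pred
        S = P_pred + r       # residual variance
        log_lik -= 0.5 * (np.log(2.0 * np.pi * S) + resid * resid / S)
        # Correct: a posteriori (Bayesian) update of the level estimate.
        K = P_pred / S       # filter gain
        L_hat = L_pred + K * resid
        P = (1.0 - K) * P_pred

    print(f"final level estimate: {L_hat:.1f}   log-likelihood: {log_lik:.1f}")

Wrapping the predict-correct loop in a standard search that adjusts the
parameters (tau here) to maximize the accumulated log-likelihood would give
the FIML estimate of those parameters.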

Can SD models be validated without being run?

Posted: Wed Jan 28, 2004 9:47 am
by John Sterman
My good friend Jim Hines "question[s] whether statistical estimates
have any rational claim on our attention other than widespread use
and anecdotal evidence of usefulness." Jim wants rigorous
demonstrations of the value of statistical reasoning and estimation
methods. There are so many I don't know where to begin.
Epidemiology, insurance, quality control, particle physics, medicine
and pharmaceuticals (evaluating safety and efficacy of new drugs and
procedures), and on and on. Quite frankly, I can't understand the
attitude that statistical methods have no value. We live in a noisy
world with many confounding and uncontrolled sources of variation.
Without the tools of statistics the scientific method would be
crippled. Anecdotes and superstition would carry even more weight
than they already do (which is far too much). Far more lives have been
saved through the proper application of statistical reasoning than by
any other modeling method (beginning perhaps with John Snow's famous
demonstration in 1854 that the Broad Street well in London was the
source of a cholera epidemic).

Of course, I'm not suggesting statistics is a panacea, without
problems, or the only tool needed to learn about, design, and
implement effective policies to improve human welfare. We might have
a productive conversation on this list about the limitations and
proper use of different tools to address important challenges, how
such tools complement one another, how they are abused, and how we
can improve our discipline and practice. After all, all models are
wrong. But let's avoid blanket claims and unsubstantiated opinion
about entire branches of knowledge.

John Sterman
From: John Sterman <jsterman@MIT.EDU>

Can SD models be validated without being run?

Posted: Sun Feb 01, 2004 12:07 pm
by Ray
Jim asks:
<snip>
. . . the flip side of Kim's question: Is there any
rational basis for having a high regard for statistical estimates beyond
the fact that these techniques are widely used and the anecdotal
evidence that they are useful?

It looks like utility would be a rational basis. If it can get us something
we want, then yes.

When I have a system with noisy measurements, there are not many
alternatives to using statistics. Statistics can be used in analyzing a
proposed model, designing augmentation, and actually operating the
underlying system.

If we have perfect measurements, then there is no need for statistics.
Except for these perfect systems, our only choice is to implement
statistical methods. If we choose the statistical method:
"My current measurement is the exact value",
then our results may suffer. If we employ more appropriate statistical
methods, we may be more successful.

Raymond T. Joseph, PE
RTJoseph@ev1.net

Aarden Control Engineering and Science