System Dynamics: Art, Science or Profession

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
"Jim Hines"
Senior Member
Posts: 88
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by "Jim Hines" »

Bruce Campbell asks:
>How does one know which structure is valid, other than to compare
>output behaviour with observed behaviour?

Can't you also compare model structure to observed structure?

Regards,
Jim Hines
MIT
From: "Jim Hines" <jhines@MIT.EDU>
Tom Fiddaman
Senior Member
Posts: 55
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by Tom Fiddaman »

Jay Forrest wrote:

>It seems to me that Bruce is disconnecting logic or structure from output.
>I must propose that the accuracy of the output need have no relationship to
>the accuracy of the underlying structure and that the insights gained from
>a poorly designed but brilliantly calibrated (or correlated) model could be
>very unsatisfactory.

When discussing accuracy, we need to be clear whether we mean historical or
predictive accuracy. Historical accuracy is interesting but not very
useful. What we want is predictive accuracy - whether it's predicting the
response to a policy or just predicting the future. It's easy to create a
model that replicates history to any desired level of accuracy using lots
of irrelevant inputs that are spuriously correlated to the output. However,
it's impossible to do this predictively, because you can't fit the model to
data that doesn't exist yet. If a model generates accurate predictive
output in nontrivial circumstances, you can be fairly sure it's because the
structure is relevant. Better yet, you can use the data to help decide when
a good fit is not merely luck.
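
A minimal numerical sketch of this point (Python with numpy; all data
hypothetical, so only the logic matters): a model fitted to 40 periods of
history using 40 irrelevant inputs replicates that history perfectly, then
fails outright on the next 20 periods.

# Fitting history with irrelevant inputs, then trying to predict with them.
import numpy as np

rng = np.random.default_rng(0)
history = np.cumsum(rng.normal(size=60))   # 60 periods of "observed" behaviour
inputs = rng.normal(size=(60, 40))         # 40 candidate inputs, all pure noise

beta, *_ = np.linalg.lstsq(inputs[:40], history[:40], rcond=None)
fit = inputs[:40] @ beta                   # replicates the first 40 periods
forecast = inputs[40:] @ beta              # "predicts" the remaining 20

def r2(actual, modelled):
    residual = np.sum((actual - modelled) ** 2)
    total = np.sum((actual - actual.mean()) ** 2)
    return 1 - residual / total

print("historical fit R^2:", round(r2(history[:40], fit), 3))       # essentially 1.0
print("predictive fit R^2:", round(r2(history[40:], forecast), 3))  # typically far below 0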

Tom


****************************************************
Thomas Fiddaman, Ph.D.
Ventana Systems http://www.vensim.com
8105 SE Nelson Road Tel (253) 851-0124
Olalla, WA 98359 Fax (253) 851-0125
Tom@Vensim.com http://home.earthlink.net/~tomfid
****************************************************
"Jim Hines"
Senior Member
Posts: 88
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by "Jim Hines" »

At the end of an instructive posting, my friend Jack Homer writes:
> Indeed, SD practitioners should actively discourage and, at the
>least, avoid being a party to hasty action based on incomplete modeling and
>analysis.

But all models and analyses are incomplete, right? Seems to me that the
question of when to stop modeling and start acting will always be a judgment
call based on
* Resources
* Subjective confidence in understanding gained to date, and
* Subjective assessment of how much additional value will be gotten from
additional effort

I know you are nodding your head and saying "Yeah, yeah, yeah. Motherhood
and apple pie." But I do think (and maybe this is Jack's point) that
some people misjudge the value they will get from additional effort.
People most often underestimate the additional value they will get
from further analysis. In contrast, people (particularly those new to the
field) tend to **overestimate** the value they will get from further
modeling (and data fitting). From time to time I see students who decide to
do more modeling because of a hollow feeling that they haven't gotten much
out of the process so far. Unfortunately, their hollow feeling doesn't come
from having too small a model, but rather from not having understood enough.
Additional time on analyzing and understanding almost always yields a high
return.

Regards,
Jim
From: "Jim Hines" <jhines@MIT.EDU>
"Corey Lofdahl"
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by "Corey Lofdahl" »

"Whats art? Nature concentrated." --- Honore de Balzac

Edward Tufte -- author of The Visual Display of Quantitative Information,
Envisioning Information, and Visual Explanations -- gave a lecture in Boston
earlier this month. The audience was sizable, and most were there for the
same reason --- they were looking for ways to improve their technical
presentations. Tufte's excellent books provide countless examples of
intuitive graphics that border on art.

The lecture began with an observation: that reducing the inherent
multidimensionality of nature (i.e., the real world) into two dimensions is
a standard problem, whether the medium is paper or canvas, HTML or
PowerPoint. What's more, this being a hard problem with no general
solution, Tufte's books can only provide heuristics through examples of what
and what not to do. As an example of what to do, Tufte offers a map by
Charles Joseph Minard that packs six dimensions of data into a graphic of
Napoleon's adventure in Russia. These six dimensions are 1) longitude, 2)
latitude, 3) time, 4) direction, 5) army size, and 6) temperature. The map
shows in disturbing detail how the army's size shrank from 422,000 to 10,000
men during its journey from the Polish border to Moscow and back. Only 1 in 42
men made it out alive.

It seems to me that system dynamicists face the same problem, but SD brings
a different set of tools and techniques to bear. Instead of representing
nature in two-dimensional graphics, SD represents it in n-dimensional models,
with n being the number of stocks. After all, a reference mode is developed
to help the modeler select those n dimensions from a much larger set.

Now is it possible to prove this process correct? Borrowing liberally from
the theory of computation, I don't think so, at least not as Russell and
Whitehead used the term "prove". That is to say that the intuitions and
heuristics that obtain from an SD model can be useful and clarifying, just
as the intuitions and heuristics that obtain from a well-designed graphic
can be useful, but closed-form solutions are not generally available for
systems complex enough to be interesting. This is my way of saying that SD
will always have a significant component of art-ness associated with it.

Corey Lofdahl phone: 781.221.7610
SAIC/TRG/SITO fax: 781.270.0063
20 Mall Road, Suite 130 pager: 800.983.3143
Burlington, Massachusetts 01803
coreypager@bos.saic.com
Yaman Barlas
Member
Posts: 44
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by Yaman Barlas »

Thanks for the nice recommendation of my '96 paper, Richard. I will take
this opportunity to list a few "principles" that I think are crucial in SD
model validity:
1- Structure validity is essential in SD. Not because we prefer it that
way, but because the very purpose of a typical system dynamics study
dictates it: The purpose is to understand how and why the model/system
behaves the way it does and then try to find ways to improve the behavior.
This can only be done if the model contains a relevant structure w.r.t. the
real problem.
2- Behavior validity must also be tested, but ONLY AFTER there is
sufficient confidence in structure validity. The reason why we test
behavior validity is that a model with poor behavior resemblance (to real
systems) would not be useful, EVEN IF it has a high-quality structure.
The issue here is one of parameter and input estimation: a model with high
structure validity can still yield poor behavior validity if some of the
critical parameters and inputs have been poorly estimated.
3- Structure validity CANNOT be inferred from behavior validity. It can be
shown that infinitely many structures can yield the same behavior (Turing).
Structure validity must therefore be tested separately. (Although
"future-predictive" validity "could" say a bit more about structure
validity, it is not philosophically true that "you can be fairly sure that
it's because the structure is relevant" - Tom Fiddaman. The
behavior-to-structure validity link is still very weak, philosophically. A
good structure could very well yield poor predictive validity, or a model
with poor structure could well yield high predictive ability - by chance
or otherwise.)
4- Structure can be tested in two ways: direct structure testing and
indirect structure testing. Direct structure testing refers to comparing
the model structures to the real relationships ("Can't you also compare
model structure to observed structure?" - Jim Hines). Direct structure tests
do not involve any simulation. They are highly QUALITATIVE and judgemental
in nature. They involve comparing each equation, sub-structure and
structure against their counterparts in the real problem. Indirect
structure tests (or structure-oriented behavior tests), on the other hand, do
involve simulation. Carefully designed (usually extreme) simulation runs
are carried out in order to "stress" the model and reveal any potential
structure flaws.
5- The bottom line is that we must spend more effort in developing and using
INDIRECT structure tests (e.g., Reality Check in Vensim). These tests are
most promising in that they are more quantitative and communicable than
DIRECT structure tests and have the advantage of saying concrete things
about the structure of the model. I sometimes call the indirect structure
tests (or structure-oriented behavior tests) "dynamic" tests and the
standard behavior tests "static" tests, in that the former involve testing
the behavior of the model in varying conditions, but the latter test the
behavior of the model against the real behavior obtained in the "base"
condition only.
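
To make point 5 concrete, here is a minimal sketch of an indirect structure
test (plain Python; a hypothetical one-stock population model, not the BTS
software and not Vensim's Reality Check): an extreme-condition run forces
the birth rate to zero and asserts that the population can then never rise.

# An indirect structure test: an extreme-condition run on a one-stock model.
def simulate(birth_rate, death_rate, population=100.0, dt=0.25, steps=400):
    path = [population]
    for _ in range(steps):
        births = birth_rate * population   # flow in
        deaths = death_rate * population   # flow out
        population += dt * (births - deaths)
        path.append(population)
    return path

# Extreme condition: with births forced to zero, the population must never
# rise; any increase reveals a structure flaw, whatever the base run looks like.
run = simulate(birth_rate=0.0, death_rate=0.05)
assert all(later <= earlier for earlier, later in zip(run, run[1:])), \
    "structure flaw: population rose with zero births"
print("extreme-condition test passed; final population:", round(run[-1], 1))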

regards to all

Yaman Barlas

Note: We have software for multi-step behavior validity testing (BTS). It
can be downloaded from our website at Bogazici University. It is fully
functional and semi-user-friendly, but not professional.
We also just finished working on an algorithm and software for indirect
structure testing. It is a pattern-recognition-based algorithm that tries
to generalize the Reality Check of Vensim to include hypothesizing and
testing "patterns of behavior" under pre-specified (extreme) conditions.

From: yaman barlas <ybarlas@boun.edu.tr>
Kim Warren
Junior Member
Posts: 5
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by Kim Warren »

Whilst it may be an impossible aspiration to be completely objective, there
is a worry if our model-building efforts are as unavoidably subjective as
this discussion suggests ...
It would be good for SD to be seen as, if not exactly a science, then at
least something of a profession. One characteristic of professions is that
when two skilled practitioners attack the same problem, they come up with
very similar structures for the diagnosis, and rather similar answers. For
simple problems, even apprentices are able to arrive at similar, somewhat
standard solutions. It would not go down well if two lawyers, accountants or
doctors came up with approaches to our legal defence, our tax return or
our knee pain that appeared quite different and at odds with each other.
The worry is, if our method really is so subjective, then the chances of
reaching such replicable outcomes between professionals in our field seem
pretty slim.
Meanwhile, whilst we can't avoid seeing the world through somewhat personal
lenses, surely there are a few absolutes. Subjectivity is not going to
alter the fact that rabbits breed at a rate reflecting the number of
healthy adults, that lakes fill when the rain falls, that debt rises as
interest charges accumulate, or that staff resign when work pressure is too
great (or for other reasons that they can usually articulate). Surely we
should hope that two SD professionals trying to understand rabbit
populations, water levels, growth of debt and staff attrition in specific
circumstances would arrive at solutions recognisably the same?
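
For instance, the debt absolute reduces to one stock and one flow that any
two modellers should write down identically. A minimal sketch in Python,
with assumed figures:

# Debt as one stock fed by one flow of interest charges (assumed figures).
debt, interest_rate, dt = 1000.0, 0.08, 1.0   # currency units, 1/year, years
for year in range(1, 6):
    interest_charge = interest_rate * debt    # the flow: charges accrue on the stock
    debt += dt * interest_charge              # the stock rises as charges accumulate
    print(year, round(debt, 2))               # 1080.0, 1166.4, 1259.71, ...
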
Kim Warren - London Business School

From: Kim Warren <kim@farthing.globalnet.co.uk>
Bruce Campbell
Junior Member
Posts: 5
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by Bruce Campbell »

This is something I have also been wrestling with, although within the
context of wanting to be reasonably confident that the models I build are valid.

If you work through the Road Maps series published by MIT, there are a
number of models that deal specifically with rabbit populations - to
take one of Kim's examples. These have been built as examples of
different structures. Although all model rabbit populations, the
behaviours they exhibit are totally different - S-shaped growth, and
overshoot and collapse. It is just as easy to model the same "problem"
using a structure that produces undamped oscillation. All models appear,
on the surface, to be reasonable and valid. This is very confusing for
beginners.
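
Two such structures can be sketched in a few lines (Python; all parameters
invented for illustration). The same "rabbit population" yields S-shaped
growth when limited only by a fixed capacity, but overshoot and collapse
once the food supply is modelled as a second stock that the rabbits deplete:

# Two structures for the "same" rabbit population (illustrative parameters).
def logistic(rabbits=10.0, capacity=1000.0, growth=0.1, dt=1.0, steps=120):
    path = []
    for _ in range(steps):
        rabbits += dt * growth * rabbits * (1 - rabbits / capacity)  # S-shaped growth
        path.append(rabbits)
    return path

def overshoot(rabbits=10.0, grass=1000.0, dt=1.0, steps=120):
    path = []
    for _ in range(steps):
        births = 0.10 * rabbits * (grass / 1000.0)      # fertility falls with the food supply
        deaths = 0.05 * rabbits * (1 - grass / 1000.0)  # starvation rises as grass depletes
        grass += dt * (1.0 - 0.02 * rabbits)            # second stock: grass regrows, rabbits graze
        grass = min(max(grass, 0.0), 1000.0)
        rabbits += dt * (births - deaths)
        path.append(rabbits)
    return path

print(round(max(logistic())), round(logistic()[-1]))    # rises smoothly toward 1000
print(round(max(overshoot())), round(overshoot()[-1]))  # peaks far higher, then collapses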

My resolution is that the "correct" model is the one that most closely
mimics the actual behaviour observed. It is for this reason that it is
so important to have a prior reference mode of behaviour against which
model output can be compared. It also highlights the need to define the
problem rather than just modelling a system. As no two problems are
exactly the same, no two models will be exactly the same.

I have found that one of the most difficult aspects of SD is getting new
practitioners to spend the time to analyse, and understand, the problem
BEFORE they start modelling. I'm also guilty of this. If the prior
investigation is done diligently, most people tend to come up with
similar model structures.

Having said all that, it would be really nice if there were some
methodical way of moving from a problem description to a model
structure. But I guess that would remove the fun of the challenge!

Bruce Campbell


--
Bruce Campbell
Joint Research Centre for Advanced Systems Engineering
Macquarie University 2109
Australia

E-mail: Bruce.Campbell@mq.edu.au
Ph: +61 2 9850 9107
Fax: +61 2 9850 9102
"geoff coyle"
Senior Member
Posts: 94
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by "geoff coyle" »

Kim Warren and Bruce Campbell make some interesting points in this
discussion of the two-analyst problem - two equivalent people should come
up with something like the same model given the same information. Ideally
that should be the case, though we need to remember that even eminent
surgeons don't practise medicine identically and might disagree on their
diagnoses, though one would hope (perhaps vainly) for some convergence
through their ability to share their discipline. Similarly, opposing lawyers
will differ in their interpretation of the law, and two engineers might
propose different designs for a given bridge. The two-analyst problem has
thus not been completely solved even in disciplines supported by empirical
research (medicine and engineering) or centuries of profound scholarship on
established principles of justice.

Oddly, SD might be in a stronger position - I'll try to explain why I think
that and see if I get torn to shreds.

Classical SD suggests that one starts with a reference mode of historical
behaviour to be improved on or of future behaviour to be achieved. One then
selects some suitably important level variables (why do we now call these
stocks?) and models the causal forces which those levels produce and which,
in turn, cause their dynamics. Clearly if I select one set of levels and you
select another, we are likely to have two radically different models, and the
process will be subjective. There is also a risk of confusing two meanings
of importance. Does it mean that the variable is important in producing
the observed behaviour, or does it mean that the behaviour is important to me?

Bruce suggests that behaviour mimicking the real system is an important
determinant of validity, but that can be achieved to arbitrary accuracy by a
suitable collection of Fourier terms which has no causal content and
hence no policy implications. I phrase it as: does the model do the same
things as the real world, AND FOR THE SAME REASONS, within the limits of
the simplifications I have made? The simplifications are an important
consideration as there is otherwise a tendency to make the model more and
more and more detailed in a vain search for an illusion of accuracy.
Unhappily, I have seen many models which are so large that they have ceased
to be models and have become large computer programs. Since error is
inevitable in any very large program, even to the extent of typing * when /
was needed, the outputs from a large program can be so unreliable that the
policy conclusions may be misleading or downright wrong.
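
The point about causally empty curve-fitting is easy to demonstrate. A
minimal sketch (Python with numpy; the "history" is hypothetical data) fits
sinusoids plus a trend to a 64-point record. Replication is essentially
exact, yet the "model" offers no reasons and no policy levers.

# A causally empty "model": sinusoids fitted to a 64-point hypothetical record.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(64)
history = np.cumsum(rng.normal(size=64))   # any observed behaviour would do

# Design matrix: constant, trend, and 31 sine/cosine pairs -- 64 terms for 64 points.
terms = [np.ones(64), t]
terms += [np.sin(2 * np.pi * f * t / 64) for f in range(1, 32)]
terms += [np.cos(2 * np.pi * f * t / 64) for f in range(1, 32)]
X = np.column_stack(terms)

beta, *_ = np.linalg.lstsq(X, history, rcond=None)
print("max replication error:", np.abs(X @ beta - history).max())  # effectively zero
# Perfect mimicry of history -- and not one causal link or policy lever in sight.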

There are, in my experience, four guidelines for avoiding the subjectivity
trap.

FIRST, try to keep models reasonably small so that they can be discussed
and checked. The documentor facility in DYNAMO (and in DYSMAP and COSMIC)
was truly excellent for that. It made it easy to study the equations of a
model, something which it is now fairly rare to see people doing - though I
do see people staring at a stock/flow diagram (or the fragment which can be
seen on the screen) in the mistaken belief that it is the model.

SECOND, the emphasis on physical flow is vital and is the theoretical basis
of SD. We all know that, of course, except that (and you may not believe
this) I have met people who have never heard of that idea and yet claim to
practise system dynamics! Depressing, isn't it?

THIRD, and following from that, is the necessity for conservation of mass
and hence the cardinal importance of dimensional validity. I have seen
numerous models which are, at a glance, wildly dimensionally invalid and
hence are quite untrustworthy. Why is that? Simply because I know
practitioners and academics who have never heard of dimensional validity.
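
A dimensional check need not be elaborate. The sketch below (plain Python;
the units and the flow equation are chosen purely for illustration) verifies
that a flow equation carries the units of its stock per unit of time:

# A hand-rolled dimensional check: units as dicts of {dimension: exponent}.
def mul(u, v):
    combined = {d: u.get(d, 0) + v.get(d, 0) for d in set(u) | set(v)}
    return {d: e for d, e in combined.items() if e != 0}

def div(u, v):
    return mul(u, {d: -e for d, e in v.items()})

WIDGETS, MONTHS = {"widgets": 1}, {"months": 1}

stock_units = WIDGETS                 # inventory is measured in widgets
flow_units = div(WIDGETS, MONTHS)     # so every flow must be widgets/month
adjustment_time_units = MONTHS

# Check the flow equation: production = (desired inventory - inventory) / adjustment time
rhs_units = div(stock_units, adjustment_time_units)
assert rhs_units == flow_units, "dimensionally invalid: the equation fails at a glance"
print("dimensionally consistent:", rhs_units)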

FOURTH, then, is the need for practitioners, whether novices or virtuosi,
actually to read the texts on SD and not to rely on software manuals.
Excellent though those are, their purpose is to explain the use of a
computer program; teaching the precepts and practices of a discipline is
something different. Even after 30+ years in SD I still dip into Jay's
Industrial Dynamics from time to time and always to my profit. (Hands up all
those who have never read it, and both hands up all those who have never
heard of it. I wish we could see the response to that - it might be very
depressing.)

By the way, and if I dare mention it, my book actually refers to the
two-analyst problem (page 33) and states that it is one of the attractive
features of list extension that two analysts tackling the same problem
should come up with recognisably similar solutions (i.e. models) even if they
started with different ideas as to what were the key variables. (Hands up
those who have never heard of that.)

What an interesting discussion. It makes a welcome change from people asking
if there is a model they can use for such-and-such a problem. Models are
built to answer questions (see Richardson and Pugh's book - hands up all
those who have never read it) and someone else's questions are not likely to
be the same as yours.

What Kim and Bruce have put their expert fingers on is probably the need
actually to learn something about SD before rushing off and doing it.

Regards,

Geoff

geoff.coyle@btinternet.com
Professor Geoff Coyle
Consultant in System Dynamics and Strategic Analysis
Tel: (44) 01793 782817 Fax: 01793 783188
Bruce Campbell
Junior Member
Posts: 5
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by Bruce Campbell »

Jay Forrest wrote:
> It seems to me that Bruce is disconnecting logic or structure from output.
> I must propose that the accuracy of the output need have no relationship to
> the accuracy of the underlying structure and that the insights gained from
> a poorly designed but brilliantly calibrated (or correlated) model could be
> very unsatisfactory.

However, I believe that the behaviour of the output must have a
relationship to the structure. The examples I gave, various models of
rabbit populations in the SD Road Maps series, exhibit different
behaviours due to different structures. How does one know which
structure is valid, other than to compare output behaviour with observed
behaviour? How do we know if our mental models of the problem have not
been limited in some way, leading to an incorrect structure?

The problem I've always had with SD is, how can I be reasonably
confident that the STRUCTURE of my model is valid? I'm not particularly
concerned about accuracy of output. More often than not, input is only
loosely based on actual data, so output cannot be accurate. However, I've
always been concerned about structure and behaviour. Group modelling
helps enormously, as does incremental model construction and
investigation, but does not resolve the problem.

Any comments are most welcome!

Bruce Campbell


--
Bruce Campbell
Joint Research Centre for Advanced Systems Engineering
Macquarie University 2109
Australia

E-mail: Bruce.Campbell@mq.edu.au
Ph: +61 2 9850 9107
Fax: +61 2 9850 9102
fabiansz@consultant.com
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by fabiansz@consultant.com »

Dear Colleagues,

I recall one academic paper listing about 17 different validation
tests for SD models; some of them were structural validation tests and
others were behavior validation tests. Validation therefore applies not
only to an SD model's structure but also to its behavior.

Validation is building the right model for the intended purpose. If a
model's purpose is solely to generate insight, the tolerance for behavior
discrepancy will usually be greater than if the model's purpose is more
decision-support oriented.

On the other hand, trying to pass verification tests means building the
SD model in the right way: the best equations, the best parameters, etc.
A person could still build a perfectly bad model: one that passes the
verification tests but doesn't pass the validation tests.

Be well...

Fabian Szulanski
From: fabiansz@consultant.com
JHomer609@cs.com
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by JHomer609@cs.com »

Geoff Coyle writes:
>The simplifications are an important
>consideration as there is otherwise a tendency to make the model more and
>more and more detailed in a vain search for an illusion of accuracy.

Yes, a basic tenet of modeling is to make the model only as detailed as
necessary. The key phrase here is "as necessary", and it's on that phrase
that the main debate in our field turns. Everyone agrees that a model should
be built to suit its purpose, and this typically includes such requirements
as realism, robustness, and the ability to evaluate alternative policies and
scenarios and to provide general insights. In brief, we all want our models
to provide credible insights and results. Where we do not all agree, it
seems, is on the question of how much work and detail is required to make a
model credible.

We can distinguish between formative or sketchy models and more elaborated
ones that have gone thoroughly through the fire of confidence testing. On
one side of the debate are folks who would say that a formative model is
usually better than none at all, because it generates insights that the
client would not have otherwise. On the other side are folks like myself who
seek insights as much as the next person, but are concerned about the
validity of those insights. A formative model provides only formative--and
quite possibly incorrect or misleading--insights. It is something I have
seen in my own modeling practice time and time again: the formative version
of a model suggests things that quite often end up paling in comparison or,
at worst, are simply not borne out later when the model has been fully
elaborated in the light of all available information.

As I have stated elsewhere (see "Why we iterate: scientific modeling in
theory and practice", SD Review 12:1, 1996), there is a place in SD practice
for exploratory modeling...and even for untestable qualitative models. But
because they are based on incomplete information and incomplete testing,
formative models should be considered to be only the first step in properly
analyzing an issue. They should not be considered a direct basis for client
action. Indeed, SD practitioners should actively discourage and, at the
least, avoid being a party to hasty action based on incomplete modeling and
analysis. This is not a matter of elitism or a misplaced desire for
"purity", but is central to the very mission of system dynamics: not to do
what is convenient but rather what is good for the long term.

Jack Homer
Voorhees, NJ
From: JHomer609@cs.com
Kim Warren
Junior Member
Posts: 5
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by Kim Warren »

Jim Hines wrote:
>Can't you also compare model structure to observed structure?


Could we also get properly grounded on such issues? In language that
works in the Strategy field ...
- resources (i.e. stocks in SD) determine performance in a
well-established manner ... customers, staff, capacity ... this is exactly
what the P&L account does, and our big challenges in Strategy are usually
about driving future profit streams,
- the only way a stock can change is if something happens to its flows, so
the only way management (or exogenous forces) can alter performance is to
alter the flow rates.
It seems that managers are perfectly capable of understanding this and,
with a little practice, doing it quite well - laying out the grounded
structure (what is actually happening here?) with stocks, flows, numbers
and time-charts. They are challenged, but not defeated, by questions like:
"If your reputation, from market research, is 7/10, and you have 50 sales
people, 2,000 potential customers and 500 actual customers, at what rate do
you think you will win customers, given what your rivals are up to?"
The result of pushing this process is, I guess, what modelers might
describe as structural validity, and what ordinary folk might call
"what's really going on". If we don't have this foundation, then I
hesitate to suggest that any attempt to simulate is worse than a waste of
time - it's positively dangerous. (I had better quickly dive back into the
trenches at this point before I get mown down in the cross-fire!)
[This is what I had in mind when asking whether two professionals should be
able to create very similar structure, starting from the same problem.]
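
That grounding question translates directly into stocks and flows. A minimal
sketch (Python; the win-rate formula and the productivity figures are
assumptions for illustration, not market research):

# Kim's grounding question as one small stock-and-flow calculation.
potential, customers = 2000.0, 500.0        # the two stocks
sales_people, reputation = 50, 0.7          # reputation 7/10, from (assumed) market research
calls_per_month, conversion = 20.0, 0.05    # assumed productivity and base conversion rate

for month in range(1, 13):
    contacts = min(sales_people * calls_per_month, potential)  # cannot contact more than exist
    wins = contacts * conversion * reputation                  # the flow: customers won per month
    potential -= wins                                          # stocks change only via their flows
    customers += wins
    print(month, round(customers))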

Kim Warren
From: Kim Warren <kim@farthing.globalnet.co.uk>
JHomer609@cs.com
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

System Dynamics: Art, Science or Profession

Post by JHomer609@cs.com »

Jim Hines is absolutely right to suggest that people too often add detail to
models without good cause. It is certainly bad practice to pull new
structure out of thin air in the misguided belief that improved curve fitting
is its own justification. But we should recognize that improved
understanding of an issue arises from the back-and-forth interplay of
information gathering and modeling, and that this interplay (if allowed to
take its course) typically leads to significant modifications and the
addition of important details to the model. I am not against small models
per se, but rather against the tendency to stop the iterative modeling process
prematurely at the first sign of an apparent insight or when the client seems
satisfied and is itching for closure. I believe the concept of due diligence
has an important place in modeling, meaning that we are obligated to strive
for realistic depth in a model and not just its transitory feel-good
value for the client.

Jack Homer
From: JHomer609@cs.com
Voorhees, NJ, USA