QUERY Do marginal models marginalize modeling?

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
Locked
<richard.dudley@attglobal.net
Member
Posts: 26
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by <richard.dudley@attglobal.net »

Posted by <Richard.Dudley@attglobal.net>

There has been some discussion within the System Dynamics Society
concerning the quality of modeling and how that might be improved.
One of the issues is the "dumbing down" of system dynamics modeling
to the point where it is no longer a valuable tool. Ironically, the
wide availability of easy to use software, which has contributed
so much to the field, is also a partial cause of this problem.


I think we all agree that the ability to use a word processor does
not make one a novelist. The ability to use statistical software
does not make one a statistician. But for some reason a one- or two-
day, or week-long, course seems to convince some people that they are
modelers. My own feeling is that many people in this
category are genuinely interested in doing valuable work, but they
neglect to realize their own shortcomings -- I certainly did.

If the models produced remained on their authors' computers, it would make
no difference.

Unfortunately, that is not the case. The results are published in
journals and can hurt the image of modeling in general and system
dynamics modeling in particular. When applied to issues concerning
public policy and science, a model can create an aura of science where
none exists. (There is certainly nothing unique about this with regard
to modeling. There have been several recent papers discussing the wide
dissemination of bad science!)

While some suggest that "academic" modeling is not of great interest,
there are hundreds of models published in academic journals and these
certainly influence the image of system dynamics (and other forms of
modeling). It has been suggested that members of the system dynamics
society should do more to counter this (possible) trend toward poor
modeling, but many shy away from being controversial -- sometimes with
good reason.

The advent of online publishing makes such commentary a bit easier.
Some online journals permit easy commentary by readers and these
comments are published after review by the editors. In some cases
articles which are based largely on models have links to the models
that were used. This provides a convenient and valuable setting in
which constructive comments on modeling approaches can be made within
the journal, and in some cases the model can actually be examined.
The only cost is, of course, time. Reviewing a paper takes time and
reviewing a model requires even more -- sometimes much more.

I became interested in this issue after looking at the following:

Paper: http://www.ecologyandsociety.org/vol12/iss2/art37/
Model: http://www.cifor.cgiar.org/conservation ... ch.2.5.htm

Richard
Posted by Richard.Dudley@attglobal.net
posting date Wed, 6 Feb 2008 12:06:49 -0800
_______________________________________________
Jack Harich <register@thwink.
Member
Posts: 39
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Jack Harich <register@thwink. »

Posted by Jack Harich <register@thwink.org>

Richard raises a timely question. As society's need to solve complex social
system problems increases, the results from the tools used are moving in the
opposite direction, at least in the area of the "dumbing down" of system dynamics
modeling.

I've been pondering this problem for a while. My hypothesis for why SD has not
made the discoveries one would expect for such a potent tool is that the
process driving the use of the tool is immature. The popular process can be
roughly described as:

1. Define the boundary of the system/problem.
2. Duplicate the system behavior so that the problem symptoms appear.
3. By examination of the model, hypothesize solutions.
4. Use the model to test these solutions.
5. Evolve the model as necessary to select the best solution.
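
Step 2 of this popular process, reproducing the reference behavior, is where simulation enters. A minimal sketch of what that step amounts to, using a hypothetical first-order goal-seeking structure with invented numbers (not any particular published model):

```python
# Minimal stock-and-flow sketch of "duplicate the system behavior".
# One stock, one balancing loop; all names and values are illustrative.

def simulate(stock=0.0, goal=100.0, adjustment_time=5.0, dt=0.25, horizon=40.0):
    """Euler-integrate a stock whose inflow closes the gap to a goal."""
    trajectory = [stock]
    for _ in range(int(horizon / dt)):
        inflow = (goal - stock) / adjustment_time  # balancing (goal-seeking) loop
        stock += inflow * dt
        trajectory.append(stock)
    return trajectory

traj = simulate()
# The behavior mode to compare against the problem symptoms: a smooth,
# decelerating approach to the goal.
print(round(traj[-1], 2))
```

The point of the step is not the code but the comparison: the simulated trajectory must reproduce the problem symptoms before the model is trusted for anything else.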

Even Sterman's "Steps of the modeling process" on page 86 of "Business Dynamics"
does not go much further than the above process. He lists these main steps:

1. Problem articulation (boundary selection)
2. Formulation of dynamic hypothesis
3. Formulation of a simulation model
4. Testing
5. Policy design and evaluation

Even with the inclusion of the further detail in Sterman, and the papers in the
SD Review, I've not been able to apply SD reliably. The process
depends too much on the brilliance of the modeler. This is great if you have a
Forrester or an MIT PhD on your team, but that is usually not the case.

To compensate, what I did was evolve the following process, called the System
Improvement Process (SIP):

1. Identify the problem.
2. Analyze the problem (system) until key cause and effect relationships are
understood.
3. Use that knowledge and experimentation to converge on a solution.
4. Implement the solution.

Step 2, analysis, aka System Understanding, has the following substeps:

A. Find the feedback loops that are currently dominant.
B. Find the root cause of why they are dominant.
C. Find the low leverage points and symptomatic solutions.
D. Find the feedback loops that should be dominant.
E. Find the high leverage points to make them go dominant.
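
Substeps A and D hinge on the notion of loop dominance. As a toy illustration (logistic growth with invented parameters, claiming nothing about any real model), a reinforcing growth loop dominates early and a balancing crowding loop dominates late; one simple way to read the hand-off is from the curvature of the simulated trajectory:

```python
# Toy illustration of shifting loop dominance (substeps A and D).
# Logistic growth has a reinforcing growth loop (r*S) and a balancing
# crowding loop (r*S**2/K). All parameters are invented for illustration.

def simulate_logistic(stock=10.0, r=0.1, capacity=1000.0, dt=1.0, steps=200):
    trajectory = [stock]
    for _ in range(steps):
        net_flow = r * stock * (1.0 - stock / capacity)
        stock += net_flow * dt
        trajectory.append(stock)
    return trajectory

def dominance_labels(trajectory):
    """Accelerating growth => reinforcing loop dominant; decelerating => balancing."""
    labels = []
    for i in range(1, len(trajectory) - 1):
        curvature = trajectory[i + 1] - 2 * trajectory[i] + trajectory[i - 1]
        labels.append("reinforcing" if curvature > 0 else "balancing")
    return labels

labels = dominance_labels(simulate_logistic())
# Early labels are "reinforcing", late labels are "balancing": the hand-off
# shows up at the curve's inflection point.
print(labels[0], labels[-1])
```

Real dominance analysis is subtler than a curvature test, but even this toy makes substep A concrete: ask which loop is currently producing the observed behavior, then (substep B) ask why.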

Step B eliminates the common pitfall of stopping when the first plausible or satisfying
"cause" of the symptoms is found. By framing the modeling challenge as finding why the
loops in step A are dominant, we stop thinking in terms of intuitive dynamic hypotheses
and start thinking more in terms of Kaizen, with an SD touch. Kaizen is all about asking
WHY until you arrive at the so-called root cause.

The root cause is usually a combination of things, whose emergent property is the problem
symptoms. Very often the root cause is related to one or more dominant loops that should
not be dominant. For example, in the global environmental sustainability problem, popular
SD models show the IPAT loops to be the culprit. Popular solutions are then based on
reducing the P or the A or the T directly, such as with population control, appeals to
consume less, more efficient technology, or cap and trade programs.

But by using steps A and B, we arrive at unconventional, very different root causes:
RC1. A dominant race to the bottom among politicians explains why change resistance is so high.
RC2. A dominant consumption growth loop which is fed by a dominant corporate dominance loop,
and the presence of incompatible goals between corporations and people, explains why
the human system is improperly coupled to the environment.

If you would like to see a detailed analysis explaining this, just ask.

I hope this is making sense and you can follow it.

Step C increases your system understanding greatly, by forcing you to determine why past
solutions have failed. This is true for all difficult problems. Step C also gets you started
in thinking in terms of leverage points. Solutions fail when they push on low leverage
points (LLPs).

A high leverage point (HLP) is not a place in a system where a small change causes a big
difference, as is the popular definition. By this definition, the Kyoto Protocol treaty
would be a HLP, because signing the treaty is such a small change and the effect of
implementing it would cause a large behavior change. But what about the effort it takes to
get the treaty signed and implemented? So my definition of a HLP is a place in a system
where a small amount of change force (the effort required to prepare and make a change)
causes a large amount of predictable, favorable response. This is the familiar ratio of
input to output.
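
This ratio definition can be written down directly. A minimal sketch, with all numbers invented purely for illustration of the comparison, not as an assessment of any real policy:

```python
# Jack's definition of leverage as a ratio of output to input.
# All figures below are invented, in arbitrary but comparable units.

def leverage(change_force, response):
    """Leverage = predictable, favorable response per unit of change force."""
    return response / change_force

# Hypothetical interventions: (effort to prepare and make the change,
# resulting favorable behavior change).
treaty = leverage(change_force=100.0, response=50.0)     # large effort to enact
process_fix = leverage(change_force=2.0, response=10.0)  # cheap, reliable payoff

print(treaty < process_fix)  # prints True
```

Under this definition a change can be "small" in its visible effect and still be a high leverage point, provided the change force required is smaller still.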

LLPs are attractive, even to modelers. Richard, I believe that inability to tell a LLP
from a HLP, and lack of step C in the modeling process, is one reason for the "dumbing
down" trend. Of course, I'm also arguing that lack of steps A, B, C, D, and E, followed
by "Use that knowledge and experimentation to converge on a solution," is the greater reason.

Moving on, step D takes the modeler in a whole new direction that is missing from the
popular process. The location of the LLPs and root causes in your model will offer strong
clues as to what loops should be dominant to solve the problem. The mental leap it takes
to create the hypothesis of step D is MUCH smaller than the mental leap required in the
popular process. In fact, the leap in the popular process discussed at the beginning of
this message is so large that it goes a long way toward explaining the "dumbing down" phenomenon.

Once you have found the loops that need to go dominant, step E says find the HLPs to make
them go dominant. Again, this is a small mental leap.

What SIP does is take the modeling process and chop it into little bitty steps. Each step
is so small it can be done more reliably. The end result is a more reliable process.

That last sentence sums up my take on why "dumbing down" is occurring. It's lack of a
reliable process. As the popularity of modeling spreads, the ratio of brilliant modelers
to average ones falls. This causes the quality of the average model to fall as well,
because there is no simple, reliable process the average modeler can apply.

Richard, you said "My own feeling is that many people in this category are genuinely
interested in doing valuable work, but they neglect to realize their own shortcomings --
I certainly did."

Me too. That's why I developed SIP before my first major model.

I sincerely hope this helps,

Jack Harich
Systems Engineer
Posted by Jack Harich <register@thwink.org>
posting date Wed, 06 Feb 2008 18:35:35 -0500
_______________________________________________
Ralf Lippold <ralf_lippold@we
Member
Posts: 30
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Ralf Lippold <ralf_lippold@we »

Posted by Ralf Lippold <ralf_lippold@web.de>

Jack's comment has just triggered new thinking about the modeling process.

It seems especially difficult, particularly for SD newcomers like myself, to dive
into the finer points of modeling. As some of you have already pointed out
in several other discussions, modeling derives from a genuine
learning process. This learning doesn't come out of nowhere.

There has to be a real, and really disturbing, problem that one wants to be
done with!

I rather stick to John Sterman's "Steps of the modeling process", as I like
the sparseness and clarity of the steps. At first sight it looks too simple
to be a guideline for getting to a working model. What is more important is the
path one has to take to "the model" in question.

Since I deal a great deal with improvement initiatives, which can always be traced
back to change initiatives, system dynamics opens a whole new set of tools for
explaining why things don't work as they should. Unearthing the hidden mental
models through SD methods (such as causal loop diagrams) is a first step toward making
the problem understood by more people (besides yourself!).

From there on, one can go deeper into reasoning about
the root causes (by the way, one of the techniques that put Toyota in the lead
of all the other automobile companies around the world). Personally, I have been in this
phase for quite a while now, discussing a common topic with an engineer on
the other side of the world (in New Zealand, to be exact).

Through constantly challenging each other's assumptions over the last couple
of months, we are coming close to a common understanding of the possible
reasons affecting the actors in the system under view. The learning is, in my
eyes, the essential benefit of this phase, and even an experienced modeler would
have to ask the "client" (process owner, actors in the "game") what goes on in
their heads and what their mental models seem to be, in order to arrive at a solution
(a model, in this case) that everybody buys into.

The interesting thing during our discussions is that I come from an
economics background (having studied economics and business administration)
and often see the problem at hand from a broader perspective, whereas he, as a
mechanical engineer, focuses on more of the details in the problem discussion. This,
on the other hand, propels us down interesting new paths to the final solution (or
"mountain", as Geoff Coyle describes it in one of his articles on model
building).

> If you would like to see a detailed analysis explaining this, just ask.

@Jack, I would be highly interested in a detailed analysis explaining your thoughts.

> I hope this is making sense and you can follow it.
>
> Step C increases your system understanding greatly, by forcing you to determine why past
> solutions have failed. This is true for all difficult problems. Step C also gets you started
> in thinking in terms of leverage points. Solutions fail when they push on low leverage
> points (LLPs).

Coming from a purely practical background for the last 10 years, working at a railroad
service provider and an automobile producer, I have wondered for quite some time how to
communicate my thinking about the problems at hand and their change over time (I only
came to know of system dynamics about two years ago) to management and peers. To get
deeper into modeling, and into how it can be transferred to "non-system-dynamicists",
one really has to have the urge to solve a "personal" problem whose solution has a direct
impact on oneself. Tackling the problem from two different sides results in diverse
learning paths, and through collaboration (during our regular discussions via Skype) we
accumulate SD wisdom which will, supposedly, end up in a working model. And if not in the
next months, there will be further spiral learning ;-).

For the modeler to be successful with the model and with the appropriate advice to the
client, what Ed Schein calls "process consultation" comes into play, in order to make the
changes sustainable. The "client" has to learn how to solve this very problem with "new"
techniques such as systems thinking (before moving on to actual modeling), rather than
being told the answer by an experienced modeler.

> A high leverage point (HLP) is not a place in a system where a small change causes a big
> difference, as is the popular definition. By this definition, the Kyoto Protocol treaty
> would be a HLP, because signing the treaty is such a small change and the effect of
> implementing it would cause a large behavior change. But what about the effort it takes to
> get the treaty signed and implemented? So my definition of a HLP is a place in a system
> where a small amount of change force (the effort required to prepare and make a change)
> causes a large amount of predictable, favorable response. This is the familiar ratio of
> input to output.
>
> LLPs are attractive, even to modelers. Richard, I believe that inability to tell a LLP
> from a HLP, and lack of step C in the modeling process, is one reason for the "dumbing
> down" trend. Of course, I'm also arguing that lack of steps A, B, C, D, and E, followed
> by "Use that knowledge and experimentation to converge on a solution," is the greater reason.

The LLP is what Toyota exploits through its KAIZEN strategy, where small steps of improvement
lead to the "big" change over time. The benefit of LLPs is that they don't disturb the system
as a whole as much (or at all) as would a HLP such as the mentioned Kyoto Protocol. Through
sustained improvements (that's the difficult part of the equation, as this doesn't come
naturally: the systems view is not very common among workforces that work in separate
processes, much like silos), the small improvements will accumulate and finally, after some
time, end in better results (measured in produced widgets, customer satisfaction,
productivity, decrease of defects, and the like).

Looking forward to seeing others' ideas on the modeling process (especially from other
people who sidestepped into System Dynamics like myself).

Cheers

Ralf
Posted by Ralf Lippold <ralf_lippold@web.de>
posting date Thu, 7 Feb 2008 15:01:40 +0100
_______________________________________________
Jack Harich <register@thwink.
Member
Posts: 39
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Jack Harich <register@thwink. »

Posted by Jack Harich <register@thwink.org>

SDMAIL Ralf Lippold wrote:
> @Jack, I would be highly interested in a detailed analysis explaining your thoughts.

See this unpublished paper:

http://www.thwink.org/sustain/articles/ ... _Paper.htm

Please note it does not emphasize the process, but the results of applying the process.


> The LLP is what Toyota exploits through its KAIZEN strategy, where small steps of
> improvement lead to the "big" change over time. The benefit of LLPs is that they don't
> disturb the system as a whole as much (or at all) as would a HLP such as the
> mentioned Kyoto Protocol. Through sustained improvements (that's the difficult
> part of the equation, as this doesn't come naturally: the systems view is not
> very common among workforces that work in separate processes, much like silos), the
> small improvements will accumulate and finally, after some time, end in better
> results (measured in produced widgets, customer satisfaction, productivity, decrease
> of defects, and the like).

I've studied "The Elegant Solution: Toyota's Formula for Mastering Innovation" by Matthew
May. Among other things it covers the Toyota production system and process. It's a great book
and I've quoted from it extensively.

You need to read concepts like the definition of a leverage point more closely. A small
step of improvement is NOT a LLP. It is part of a continuous improvement process. When
Toyota or millions of other companies improve their process, they are taking small steps
that, for a low amount of change force, get a big payoff. The fact that a change is small
rather than large does not make it a LLP rather than a HLP. In manufacturing, it is the
ratio of the cost required to make a change to the benefits gained (ratio of input to
output) that determines whether a change is a LLP or a HLP. If the ratio is too high
(cost/benefits) then the change is not made.

Good luck,

Jack
Posted by Jack Harich <register@thwink.org>
posting date Fri, 08 Feb 2008 19:05:30 -0500
_______________________________________________
"Bernard Liebowitz" <bernie
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by "Bernard Liebowitz" <bernie »

Posted by "Bernard Liebowitz" <bernieliebowitz@comcast.net>

I am new to the world of System Dynamics. The recent topic of marginal
models has raised for me the question of the criteria by which one
determines how marginal, good, or comprehensive a model is. Let’s
assume you indeed follow John Sterman’s steps: you have interviewed all
the stakeholders in the problem, you have drawn loops (positive and
negative) based on these discussions, flow rates have been established,
you have done root analyses, etc. What, then, determines the accuracy,
goodness of fit, adequacy, representativeness, etc. of the resulting
model? And, maybe, the various criteria-related words I have used (e.g.,
comprehensive, accurate, etc.) themselves have different meanings with
respect to a model? I would appreciate any comments.

Bernie

Bernard Liebowitz, PhD, CMC
Liebowitz & Associates, PC
980 No. Michigan Ave., Ste. 1400
Chicago, IL 60611
Posted by "Bernard Liebowitz" <bernieliebowitz@comcast.net>
posting date Fri, 8 Feb 2008 14:08:00 -0600
_______________________________________________
<richard.dudley@attglobal.net
Member
Posts: 26
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by <richard.dudley@attglobal.net »

Posted by <richard.dudley@attglobal.net>

This is a follow-up to my original post.

I see that the topic has switched from my question about questioning
poor models to a discussion of how we proceed with a good modeling
process. Well, that's OK, but it does not really answer my question.
There are many sources of guidelines for good modeling practice, and,
yes, we should try to follow them.

But... my question was:

How do we encourage more widespread use of these already acknowledged
good modeling practices?

My recent post asks if we should be commenting more on articles which
claim to use SD modeling but may be using questionable or incomplete
approaches. Such commentary may be worthwhile, but needs to be done
carefully, and in a constructive way. It is not easy.

My point is that marginal, widely distributed SD models can hurt the SD
profession. I believe that SD practitioners should be cautiously
proactive in working to question the use of such models. I am working
on such a comment now, and would like to have a reviewer look at it for
me... any volunteers?

A few months ago I asked a similar question in a narrower sense: do
organizations ensure that the models their employees produce meet certain
quality standards, and what review processes, if any, are used? There were
not many responses.

Richard
Posted by <richard.dudley@attglobal.net>
posting date Sat, 9 Feb 2008 05:47:00 -0800
_______________________________________________
"Jim Thompson" <james.thomp
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by "Jim Thompson" <james.thomp »

Posted by "Jim Thompson" <james.thompson@strath.ac.uk>

The literature of management science provides potentially useful insight
from experience in the development and use of simulation models for
learning and for helping to resolve complex dynamic problems. Two
articles germane to this topic are:

Phillips, L.D. (1984) Requisite Decision Models. Acta Psychologica,
56, 29-48.

Simon, H.A. (1989) Prediction and Prescription in Systems Modeling.
Operations Research, 38, 7-14.

The chapter on Models (Ch. 8) in George Mitchell’s The Practice of
Operational Research (1993, Chichester, John Wiley & Sons Ltd.)
discusses a simulation model as a "device for aiding decision-making"
and as a "collection of beliefs". These two very different perspectives
might help the inquiring mind to establish purpose before asking about
requisiteness. In other words, one person’s marginal model may be
sufficient for the needs of another.

Jim Thompson
Posted by "Jim Thompson" <james.thompson@strath.ac.uk>
posting date Sat, 9 Feb 2008 10:07:12 -0500
_______________________________________________
Monte Kietpawpan <kietpawpan@
Junior Member
Posts: 12
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Monte Kietpawpan <kietpawpan@ »

Posted by Monte Kietpawpan <kietpawpan@yahoo.com>

The lack of formal SD education has been deemed a reason why
there are too many models that do not meet SD experts' minimum
standard of quality. Here, there is a tendency to restrict the application
of SD to "SD problems": undesired patterns of change that can be
plotted against time. Applied to other problems, SD is said to yield
"poor-quality models."

In fact, even without formal SD education, a novice modeler with only
limited knowledge of SD gained from Road Maps, some introductory
SD textbooks, and some papers in the Review can develop a useful
model.

An example of a useful SD model developed by a novice modeler is
available at http://dx.doi.org/10.1007/s11069-007-9183-5

The model may be judged as a poor-quality model if we rest on the
following two criteria:
1) SD must be applied to SD problems, and
2) SD is not about point forecasting.

Nevertheless, the modeling process provides some new insights,
leads to the modification of a classic law and theory, points out
serious errors in some widely used data sets, provides quite accurate
point predictions, and addresses one of the most important issues in a
coastal system.

Monte Kietpawpan
Faculty of Environmental Management,
Prince of Songkla University
Posted by Monte Kietpawpan <kietpawpan@yahoo.com>
posting date Sat, 9 Feb 2008 06:55:31 -0800 (PST)
_______________________________________________
Jack Harich <register@thwink.
Member
Posts: 39
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Jack Harich <register@thwink. »

Posted by Jack Harich <register@thwink.org>

Richard Dudley wrote:
>
> I see that the topic has switched from my question about questioning poor
> models, to a discussion of how we proceed with a good modeling process.
> Well, that's OK, but it does not really answer my question.

My point is that "good modeling practices," good enough to reliably solve
difficult problems, do not yet exist. Solving tough problems with SD is still
far too dependent on the brilliance of the modeler. But bright modelers are
in short supply.

The field is unable to consistently solve the great social problems of our
time, as Forrester alludes to in "System dynamics: The next fifty years" when
he asks: "Why is there so little impact of system dynamics in the most important
social questions?" My answer is because the foundation every field needs for
success is not yet mature. In particular, SD's foundation lacks a repeatable
process. Therefore "more widespread use" of existing modeling practices will
not lead to SD success.

Hypothesis: Without a repeatable process, most modelers cannot solve difficult
social problems.

Corollary: Without a repeatable process, most difficult problem models will
be low quality.

This hypothesis can be tested. One way would be to write up a case on a problem
with a difficulty level similar to the urban decay problem (Forrester 1969). The
case has all the clues and data you need to build a rough model to solve the
problem. But the case also has a lot of chaff or noise: irrelevant and attractive
data that leads to symptomatic solutions that will of course fail. It should not
be at all obvious what the backbone of the model should be.

Then you run an experiment with the case. The control group spends 30 minutes (more?
a lecture Q&A? a course?) reading an irrelevant short article. The treatment group
spends that time reading an article on how to execute a repeatable process, such as
the System Improvement Process. Then both groups receive the case and try to solve
the problem with SD. The treatment group is allowed to refer to the written process
as they go. This of course may require separate rooms for the two groups.

Such an attack would allow the process to be iteratively improved until the
experimental results were stunning, and the hypothesis was proven to satisfaction.
The output would be (1) a formal process, a best practice that we could spread, and
(2) a series of cases that can be used to teach/test this best practice.
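
Scoring such an experiment could use any standard comparison of the two groups. A minimal sketch with invented model-quality scores and a plain permutation test (no claim about what real results would show):

```python
import random

# Hypothetical quality scores for the two groups, invented for illustration.
# A one-sided permutation test asks how often a chance relabeling of the
# pooled scores produces a treatment-minus-control gap at least as large
# as the one observed.

control = [3, 4, 2, 5, 3, 4]      # read an irrelevant article
treatment = [6, 7, 5, 8, 6, 7]    # read the written process first

def mean(xs):
    return sum(xs) / len(xs)

def permutation_p_value(a, b, iterations=10000, seed=0):
    rng = random.Random(seed)           # seeded for reproducibility
    observed = mean(b) - mean(a)
    pooled = a + b
    hits = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        diff = mean(pooled[len(a):]) - mean(pooled[:len(a)])
        if diff >= observed:
            hits += 1
    return hits / iterations

p = permutation_p_value(control, treatment)
print(p < 0.05)
```

With scores like these the gap is unlikely to arise by chance; iterating the process until the gap is both large and reliable is exactly the "stunning results" criterion described above.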

Democracy succeeds because it relies on the rule of law, not men. SD too can succeed
if it comes to rely on the use of process, not men.


> My recent post asks if we should be commenting more on articles which claim to use
> SD modeling but may be using questionable or incomplete approaches. Such commentary
> may be worthwhile, but needs to be done carefully, and in a constructive way. It is
> not easy.

A sort of self-policing by peers. This will be difficult to implement and maintain.
How about the alternative of self-policing by the modelers themselves?

Perhaps this could be done if modelers had a standard test to apply to a model. The
test would objectively determine if the model had met its objectives. A standard
objective is that it be a high quality model.

Better yet would be if the model were produced with the process that the above experimental
program produced. Then we are not trying to test defects out, which is poor practice.
We are trying to prevent defects at the source.


> I am working on such a comment now, and would like to have a reviewer look at it for me...
> any volunteers?

I'd be glad to help. But I'm not an academic. I'm an engineer/consultant. So I have some
limitations.

> A few months ago I asked a similar question in a narrower sense: do organizations
> ensure that the models their employees produce meet certain quality standards, and what
> review processes, if any, are used? There were not many responses.

Hmmm, a nice question. Perhaps I've addressed it above.

Jack
Posted by Jack Harich <register@thwink.org>
posting date Sun, 10 Feb 2008 09:54:31 -0500
_______________________________________________
Bill Harris <bill_harris@faci
Senior Member
Posts: 51
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Bill Harris <bill_harris@faci »

Posted by Bill Harris <bill_harris@facilitatedsystems.com>

"SDMAIL" <richard.dudley@attglobal.net> writes:
> How do we encourage more widespread use of these, already acknowledged,
> good modeling practices.
>
> My recent post asks if we should be commenting more on articles which
> claim to use SD modeling but may be using questionable or incomplete
> approaches. Such commentary may be worthwhile, but needs to be done
> carefully, and in a constructive way. It is not easy.

Richard,

Thinking back on my professional experience, one good way seems to be to
publish your own research and to let it compete in the world of ideas.
What about, instead of critiquing someone's paper on a model to explain
X, doing the research and publishing your own paper describing your
model-based efforts to explain X? In the process, it seems fair to
point out places in which your contribution improves on the work of
others. Then others can pick and choose based on the evidence.

Bill
--
Bill Harris
Facilitated Systems
Posted by Bill Harris <bill_harris@facilitatedsystems.com>
posting date Sun, 10 Feb 2008 11:19:49 -0800
_______________________________________________
Jean-Jacques Laublé <jean-jac
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Jean-Jacques Laublé <jean-jac »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi everybody.

A few remarks concerning this thread.

First, about how a reader interprets an SD article in a paper.
Two things can change the way the paper will be considered by a reader.
The first is the reader's knowledge of the problem being studied.
The second is the reader's personal position relative to the problem.

For example, a model about a specific business problem that I know well, or
at least that I know from my personal businessman's point of view, may be judged poor
by me while being judged good from a researcher's or academic's point of view.

This makes the model good for the academic community with its standard point
of view, and not good for other people with different points of view.

Of course the same can be true of an article that convinces me but is
found poor from an academic point of view.
It will always be difficult to convince everybody with the same article.
So there is no intrinsic quality in a model, only a relative one.

A second remark, concerning Jack's message.
I too have struggled for years (more than five) trying to apply SD to real
problems, after having studied Business Dynamics and followed various web distance courses.

I long thought that I merely lacked the magical SD touch that would
transform my models into usable tools, what Jack calls brilliance. I personally think that it
has more to do with experience, with having studied under somebody experienced,
or with using a sound method, than with brilliance.

I decided to stop modelling last year, but before doing so I tried to
understand the reasons for my past failures, studying my numerous past
models in the hope that it was possible to identify the errors.

I first considered the SD method.
I did not find anything wrong in the method, nor in the software.
After that I studied the way I was using it.

I discovered several problems common to all my past models.
The intensity of these problems varied, of course, depending on the
model.

1.. A lack of sufficiently close knowledge of the problem. I was relying
too much on other people who, knowing that I did not know the problem
closely enough, could manipulate me, intentionally or not.

2.. A lack of knowledge about the real added value of SD compared to
simpler techniques. In fact I had not really tried to think about the
problem using simple methods first. If one starts with simpler methods, one
gains a better understanding of the problem, and it becomes easier to see
the real added value of SD. When faced with a difficult problem that
requires a lot of work whatever the method, it is too easy to jump to more
complex methods, thinking that their sophistication will compensate for the
lack of knowledge about the problem, or for the lack of time spent working
on it.

3.. A lack of preparation, by which I mean making the risk of failure
minimal. I think that if one wants to avoid failure, one must consider
Murphy's law: everything that can go wrong, will go wrong. This careful
preparation is highly costly in time, and in my opinion is therefore not
often respected.

4.. The lack of a sound method for building and analysing models, once one
has decided to use SD. Contrary to many people, I found no help in building
useful models in Business Dynamics, nor in distance web courses dealing with
real-world problems. I have since found a book that sets out a full method
and trains the reader to use it: Geoff Coyle's 'System Dynamics Modelling:
A Practical Approach'. Why do I find it useful? It may be related to the
fact that I build the models that I use, which means that a bad model may
cost me a lot of money. But it is not logical to have spent more than five
years studying the software documentation hard, studying several books
deeply, following distance web courses, and reading many papers from the SD
Review and the SD annual conference, only to finally find a book that at
last sets out a complete method for building and analysing models of serious
problems, and trains the reader to use it!

I must remark that these five years were so misleading that they cost me
and my company an amount of money that I prefer not to speak of.

Having a better knowledge, at least from my own experience, of the
conditions of a good modelling process, I have stopped making models since
last September, and will start again only once I have satisfied the above
conditions.

I will have to work for some time on the last condition, finishing the
study of Coyle's book and perfecting my knowledge of SD by whatever means
possible (probably a year or two).

I have a personal explanation for the lack of discussion of the added
value of SD that would justify its use.

People engaged in modelling prefer not to face the added necessity of
proving the utility of their work. Agreeing to examine the added value of
SD might have a negative effect in the first years; but once solutions to
that problem were found, it could have a very positive impact afterwards.
This myopic behaviour does not help the field.

Regards.
Jean-Jacques Laublé Eurli Allocar
Strasbourg France.

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Mon, 11 Feb 2008 15:40:58 +0100
_______________________________________________
Jay Forrester <jforestr@MIT.E
Junior Member
Posts: 12
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Jay Forrester <jforestr@MIT.E »

Posted by Jay Forrester <jforestr@MIT.EDU>

> Posted by Jack Harich <register@thwink.org>
>
> My point is that "good modeling practices," good enough to reliably solve
> difficult problems, do not yet exist. Solving tough problems with SD is still
> way too dependent on the brilliance of the modeler. But bright modelers are
> in short supply.

I see here a wish for something that has likewise not been achieved in any other
profession. A model is a theory of behavior that explains the problem at hand.
There are no rules for obtaining a theory leading to a Nobel Prize in physics that
are not "dependent on the brilliance of the modeler." In a more everyday realm,
there is no way to teach engineers to be the ones who design the most successful
airplane. Success in engineering design depends on understanding the underlying
concepts. Beyond the basics, a successful engineer builds on a basic education and
grows from repeated apprenticeships, from trial and error, from learning from past
failures, and from being winnowed out of the large number of engineering graduates
who are not willing, or able, or competent, to go through the entire learning
process.

In this discussion thread, there have been comments about how most are not able to
devote time to getting a Ph.D. in system dynamics, as if that were enough to
guarantee success. A full scale academic program in engineering or in medicine
is only the beginning of the process of becoming a true expert.

The emphasis in these discussions needs to be less on how to find a quick path to
successful modeling and much more on how to establish educational and apprenticeship
programs to train experts.

> The field is unable to consistently solve the great social problems of our
> time, as Forrester alludes to in "System dynamics: The next fifty years" when
> he asks: "Why is there so little impact of system dynamics in the most important
> social questions?" My answer is because the foundation every field needs for
> success is not yet mature. In particular, SD's foundation lacks a repeatable
> process. Therefore "more widespread use" of existing modeling practices will
> not lead to SD success.

The foundation is never "mature" in the sense that it will not be improving;
however, I see in this discussion a failure to recognize and discuss the
foundations that already exist.

I hope we will have here a discussion of the presently identified foundations,
because it appears that many people who consider themselves to be working in
system dynamics are not aware of the foundations, or, at least, have not
internalized them.

As discussed above, it is extremely over-optimistic to expect that there will
ever be a repeatable process that leads everyone to create fully effective
models. We should be looking to the other professions for guides to creating
successful practitioners--look at how a heart surgeon has been trained, look
at how an engineer who designs a space ship or a chemical refinery or even a
successful automobile has been moved from novice to expert.

> Hypothesis: Without a repeatable process, most modelers cannot solve difficult
> social problems.

Not true, if one means a step-by-step process that anyone can follow. The
difficult social problems require a person with a broad and deep understanding
of the social situation coupled with a successful past career of building up
through progressively more difficult modeling situations.

>
> Corollary: Without a repeatable process, most difficult problem models will
> be low quality.

Without a solid foundation in underlying fundamentals and an extensive
background in apprenticeships and practice, "most difficult problem models
will be low quality."

>
> This hypothesis can be tested. One way would be to writeup a case, on a problem
> with difficulty level similar to the urban decay problem (Forrester 1969). The
> case has all the clues and data you need to build a rough model to solve the
> problem. But the case also has a lot of chaff or noise: irrelevant and attractive
> data that leads to symptomatic solutions that will of course fail. It should not
> be at all obvious what the backbone of the model should be.

This might be a very good kind of education. One might well spend two or three
years dealing with a sequence of such exercises.

>
> Then you run an experiment with the case. The control group spends 30 minutes (more?
> a lecture Q&A? a course?) reading an irrelevant short article. The treatment group
> spends that time reading an article on how to execute a repeatable process, such as
> the System Improvement Process. Then both groups receive the case and try to solve
> the problem with SD. The treatment group is allowed to refer to the written process
> as they go. This of course may require separate rooms for the two groups.

Does this not suggest that the process is far simpler than it really is? My Urban
Dynamics model had two ingredients--my 40 years dealing with dynamics in engineering
and management, and six weeks of discussions, a long half day every week, with a group
who had been fighting the battle of decaying cities, while I was spending about 30
hours per week trying to identify a useful model in what they were saying.

Certainly, the 40 years of background will not be required. Much of that experience
was without the visible goal of modeling social systems. Most of it was dealing with
dynamics through the mathematics of differential equations rather than the more
realistic and easier to understand medium of integrations. Most of the time was
without the clarity of underlying basic principles that are now available and that
are now teachable. So, we should expect that the 40 years can be shrunk to something
more like 10 years. At least an intensive year should be devoted to truly
understanding and appreciating the underlying principles of dynamics that are now
available.
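
The contrast between differential equations and the medium of integrations can be made concrete with a minimal stock-and-flow sketch, here in Python (a hypothetical illustration only; the variable names and numbers are arbitrary, not taken from any actual model):

```python
# A first-order goal-seeking structure: the stock accumulates its net
# flow step by step (Euler integration), rather than being described
# by a differential equation to be solved analytically.
def simulate(stock=100.0, goal=0.0, adjustment_time=5.0, dt=0.25, steps=80):
    history = [stock]
    for _ in range(steps):
        flow = (goal - stock) / adjustment_time  # net inflow to the stock
        stock += flow * dt                       # stock integrates its flow
        history.append(stock)
    return history

trajectory = simulate()  # decays smoothly from 100 toward the goal of 0
```

Reading the model as "the stock accumulates its flows" is the easier-to-understand medium referred to above; the equivalent differential equation dS/dt = (goal - S)/AT says the same thing less accessibly.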

>
> Such an attack would allow the process to be iteratively improved until the
> experimental results were stunning, and the hypothesis was proven to satisfaction.
> The output would be (1) a formal process, a best practice that we could spread, and
> (2) a series of cases that can be used to teach/test this best practice.
>
> Democracy succeeds because it relies on the rule of law, not men. SD too can succeed
> if it comes to rely on the use of process, not men.

But law needs courts to interpret laws and processes to enforce them. In
addition to truly understanding the available underlying principles, there
are processes that are not receiving much discussion. The following were
offered in my conference talk on the next 50 years:

"" How often do you see a paper that shows all of the following characteristics?

1. The paper starts with a clear description of the system shortcoming to be improved.
2. It displays a compact model that shows how the difficulty is being caused.
3. It is based on a model that is completely endogenous with no external time series to drive it.
4. It argues for the model being generic and descriptive of other members of a class of systems to which the system at hand belongs.
5. It shows how the model behavior fits other members of the class as policies followed by those other members are tested.
6. It arrives at recommended policies that the author is willing to defend.
7. It discusses how the recommended policies differ from past practice.
8. It examines why the proposed policies will be resisted.
9. It recognizes how to overcome antagonism and resistance to the proposed policies.""
Granted, my own models and publications do not meet all of these tests.

Jay W. Forrester
Professor of Management
Sloan School, MIT
Posted by Jay Forrester <jforestr@MIT.EDU>
posting date Mon, 11 Feb 2008 14:42:50 -0500
_______________________________________________
Jean-Jacques Laublé <jean-jac
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by Jean-Jacques Laublé <jean-jac »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi Monte.

I am answering your post rather late.
You write:
> Here, there is a tendency to restrict the application
> of SD to "SD problems"--undesired patterns of change that can be
> plotted against time. To other problems, the application is said to yield
> "poor-quality models."

There are, in my opinion, two kinds of classical application of SD.
The more classical is the one you describe: undesired patterns of change,
often undesirable oscillations that have been experienced in the past.
The second, less classical, arises when one is confronted with a new
situation where past experience is of little use.
For you, then, applications that are not of the first type are considered
poor SD.
In my opinion, good SD is any application that uses the SD paradigm and is
useful for something, even if up to now the main applications found in books
(Sterman) are of the first category. Warren's examples are of both types,
and even more of the second type, in my opinion.

You also write that SD is not about point prediction.
In my opinion, that is another way of saying that SD uses aggregate data
that do not give precise results.
When one reads the article published by Springer about the tsunami,
one understands that the objective is to calculate its speed with
sufficient precision, which is effectively a point prediction. It is also
not a classical type 1 problem. That kind of problem looks more like
numerical analysis than SD, and one does not see what kind of loops are
involved (possibly very short-term loops).

It would then be interesting to look at the model, as the transparency
thread suggests.
It is interesting to see how SD can give point predictions, which it
does not usually do, and how it can compete with traditional methods such
as numerical analysis.
The model is not available at the web site that you mention.
Is it possible to get it somewhere?
Regards.
Jean-Jacques Laublé Eurli Allocar
Strasbourg France
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Sat, 16 Feb 2008 16:14:37 +0100
_______________________________________________
""Jack Harich"" <register@thw
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

QUERY Do marginal models marginalize modeling?

Post by "Jack Harich" <register@thw »

Posted by "Jack Harich" <register@thwink.org>

> Posted by Jay Forrester <jforestr@MIT.EDU>
> I see here a wishing for something that has likewise not been achieved
> in any other profession.

Hmmm, there seem to be many fields that reliably solve difficult
problems. Physics, chemistry, biology, electronics, etc.


> A model is a theory of behavior that explains the problem at hand.
> There are no rules for obtaining a theory leading to a Nobel Prize in
> physics that is not ""dependent on the brilliance of the modeler."" In
> a more everyday realm, there is no way to teach engineers to be the
> one to design the most successful airplane. Success in an engineering
> design depends on understanding the underlying concepts. Beyond the
> basics, a successful engineer builds on a basic education and grows
> from repeated apprenticeships, from trial and error, from learning
> from past failures, and being winnowed down from the large number of
> engineering graduates who are not willing, or able, or competent, to
> go through the entire learning process.

""There are no rules for..."" - Agreed. However I did not say ""dependent
on the brilliance of the modeler."" I said ""too dependent on the
brilliance of the modeler."" That statement does not infer that there is
a way ""to teach engineers to be the one to design the most successful
airplane,"" etc.

It seems that what's happening in this discussion, and in related ones
over the past years, is our field has moved from the Model Crisis phase
of the Kuhn Cycle to the Paradigm Revolution phase. Debate concerns what
techniques are best for allowing SD to achieve its potential. This
debate rages back and forth, with various propositions rising and
falling in popularity. The focus sharpens, as in the 50th anniversary
issue of the Review. And then the focus fades, as people realize that
what they were focusing on does not have the answers, or even the right
questions.

Richard's query on minimum acceptable modeling standards is an example
of an attempt to reach agreement on at least one factor that would
become part of the new paradigm. Once we can start to agree on factors
like this, we are nearing the end of the Paradigm Revolution phase. The
next phase is Paradigm Change, which will begin once a comprehensive
paradigm emerges that has the clear potential to take the field of
system dynamics forward, to its full potential.


> In this discussion thread, there have been comments about how most are
> not able to devote time to getting a Ph.D. in system dynamics, as if
> that were enough to guarantee success. A full scale academic program
> in engineering or in medicine is only the beginning of the process of
> becoming a true expert.
> The emphasis in these discussions needs to be less on how to find a
> quick path to successful modeling and much more on how to establish
> educational and apprenticeship programs to train experts.

That proposition is an example of a factor that should be part of the
new paradigm. But is it possible that we are putting the cart before the
horse? Isn't the presence of lots of educational programs evidence that
they are in demand? And isn't demand a result of successful application of
SD? So it seems that the absence of educational programs is a coincident
symptom of a deeper cause. What that deeper cause might be is what this
debate should try to discover. What is the root cause of SD not having
achieved its potential?

>> mature. In particular, SD's foundation lacks a repeatable process.
>> Therefore "more widespread use" of existing modeling practices will
>> not lead to SD success.
>>
>
> The foundation is never "mature" in the sense that it will not be
> improving,

Agreed. I meant mature enough to reliably solve the problems the field faces.


> however, I see in this discussion a failure to recognize and discuss
> the foundations that already exist.

This could be useful. It would help us to catalog the factors that
contribute to the field's success. Which ones are missing? Which are
immature? Which are not as critical as they appear? Out of this analysis
would emerge a candidate new paradigm.

> I hope we will have here a discussion of the presently identified
> foundations, because it appears that many people who consider
> themselves working in system dynamics are no aware of the foundations,
> or, at least, have not internalized them.

I'm probably one of them, since I'm self taught. :-)

> As discussed above, it is extremely over-optimistic to expect that
> there will ever be a repeatable process that leads everyone to create
> fully effective models. We should be looking to the other professions
> for guides to creating successful practitioners--look at how a heart
> surgeon has been trained, look at how an engineer who designs a space
> ship or a chemical refinery or even a successful automobile has been
> moved from novice to expert.

Regarding "leads everyone" - Sorry if I didn't express myself clearly.
It's more like we need a repeatable process that leads the field, or the
average modeler who has the basic smarts to be a good modeler, to create
effective models.

""We should be looking to the other professions for guides to creating
successful practitioners"" - Yes. But there seems to be the assumption
that the most important factor is how the practitioners have been
trained. I don't think that's a root cause. It's what we are teaching
them. My hypothesis is we don't yet have what we need to teach
successful practice with SD. If you don't have that, then it doesn't
matter how people are trained. Ploughing more money into more training
will not make a difference.

But we can look to other professions for the strategies they have used
to achieve success. SD is an engineering tool. It's used to solve
problems. How a complex tool is applied, in a repeatable manner, is what
makes it consistently work well or not. I wonder how other professions
have addressed that challenge?

> > Hypothesis: Without a repeatable process, most modelers cannot solve
> > difficult social problems.
> >
> Not true, if one means a step-by-step process that anyone can follow.
> The difficult social problems require a person with a broad and deep
> understanding of the social situation coupled with a successful past
> career of building up through progressively more difficult modeling
> situations.

Problems that were difficult 100 years ago in fields like physics and
chemistry are routinely solved by today's students, as part of their
education. This is not because the problems are described more clearly,
or the students are smarter, or they have successful past related
careers. It's because the tools they use to solve problems are more mature.

Again, you misinterpret what I said. I said "most modelers." You are
saying "anyone." But then again, we are all speed reading these messages.

But perhaps the hypothesis is weak. Would it be better as: Without a
repeatable process, the field of system dynamics cannot solve difficult
social problems? This is what we're really trying to accomplish. Thanks
for helping me to see this.

> > Corollary: Without a repeatable process, most difficult problem models
> > will be low quality.
>
>Without a solid foundation in underlying fundamentals and an extensive
>background in apprenticeships and practice, "most difficult problem
>models will be low quality."

Now we're getting somewhere. There is a large difference between a
repeatable process and ""a solid foundation in underlying fundamentals
and an extensive background in apprenticeships and practice."" The first
is repeatable by average people. The second is not, because users of a
solid foundation can apply that knowledge any way they want. Each
insight requires a large intuitive leap, which varies greatly from
person to person, and depends on brilliance to be right most of the time.

This is a crucial point. Over the 20th century, industry came to see
the importance of formal process. For example, PERT and CPM emerged as
formal processes for managing large projects. PERT was first used on the
US Navy's Polaris missile program, and CPM was developed at DuPont. People
grew tired of too many large project failures caused by project managers
having merely a solid foundation. Something more was needed: a process
that fits the problem. Today, PERT and CPM are required on large US
government projects. For more examples, look at the rise of Six Sigma,
Kaizen, and the countless processes in software engineering.
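
As a concrete illustration of what such a formal process mechanizes, the core CPM computation is small enough to sketch in a few lines of Python (the task names and durations here are hypothetical, invented for the example):

```python
from functools import lru_cache

# A tiny task graph: durations and prerequisites (hypothetical project).
durations = {"spec": 3, "build": 5, "test": 2, "docs": 1}
depends_on = {"spec": [], "build": ["spec"], "docs": ["spec"],
              "test": ["build"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # Earliest finish = own duration + latest finish among prerequisites.
    return durations[task] + max(
        (earliest_finish(d) for d in depends_on[task]), default=0)

project_length = max(earliest_finish(t) for t in durations)
# spec(3) -> build(5) -> test(2) is the critical path: project_length == 10
```

The point is not the code but that the scheduling judgment is captured in a repeatable procedure, rather than left to the planner's intuition.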

A formal process takes the best practices found in a solid foundation,
and makes their use less arbitrary and more predictable.

I got the process bug long ago, after reading books like Andrea Gabor's
"The Man Who Invented Quality: How W. Edwards Deming Brought the Quality
Revolution to America." Page 7 says:

[Deming] advocates a process-obsessed management culture that is
capable of harnessing the know-how and natural initiative of its
employees and fine-tuning the entire organization to higher and higher
standards of excellence and innovation.

What might happen if the field of SD became process-obsessed, in the
sense that Deming meant?


>
> Does this not suggest that the process is far simpler than it really
> is? My Urban Dynamics model had two ingredients--my 40 years dealing
> with dynamics in engineering and management, and six weeks of
> discussions, a long half day every week, with a group who had been
> fighting the battle of decaying cities, while I was spending about 30
> hours per week trying to identify a useful model in what they were
> saying.

An insightful point. If you don't have a repeatable process, then you
are forced to rely on the very few people who have 40 years of deep
experience.

A highly productive process can be surprisingly simple. My favorite
example is the Scientific Method, which has these steps:

1. Observe a phenomenon that has no good explanation.
2. Formulate a hypothesis.
3. Design an experiment(s) to test the hypothesis.
4. Perform the experiment(s).
5. Accept, reject, or modify the hypothesis.
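
Schematically, these five steps form an iterative loop. A sketch in Python (the callables here are hypothetical stand-ins that an investigator would supply):

```python
# Generic skeleton of the five-step loop above. "evaluate" returns a
# verdict ("accept", "reject", or "modify") plus the current hypothesis,
# possibly revised; modification sends us back through steps 3-5.
def scientific_method(observe, formulate, design, perform, evaluate):
    hypothesis = formulate(observe())          # steps 1-2
    while True:
        experiment = design(hypothesis)        # step 3
        result = perform(experiment)           # step 4
        verdict, hypothesis = evaluate(hypothesis, result)  # step 5
        if verdict in ("accept", "reject"):
            return verdict, hypothesis
        # verdict == "modify": loop and test the revised hypothesis
```

Simple as it is, the skeleton shows what a repeatable process buys: the loop structure is fixed and teachable, even though the creative content of each step is not.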


> Certainly, the 40 years of background will not be required. Much of
> that experience was without the visible goal of modeling social
> systems. Most of it was dealing with dynamics through the mathematics
> of differential equations rather than the more realistic and easier to
> understand medium of integrations. Most of the time was without the
> clarity of underlying basic principles that are now available and that
> are now teachable. So, we should expect that the 40 years can be
> shrunk to something more like 10 years. At least an intensive year
> should be devoted to truly understanding and appreciating the
> underlying principles of dynamics that are now available.

Thanks for explaining. But how much further could the 10 years of
training in SD be reduced if we added a process to the foundation?

Plus, currently even 10 years of training is not enough. If it were, the
field would be solving the great social problems of our time, and it
would be gaining popularity in the business world much faster.



> Such an attack would allow the process to be iteratively improved
> until the experimental results were stunning, and the hypothesis was
> proven to satisfaction. The output would be (1) a formal process, a
> best practice that we could spread, and (2) a series of cases that can
> be used to teach/test this best practice.
>
> Democracy succeeds because it relies on the rule of law, not men. SD
> too can succeed if it comes to rely on the use of process, not men.
>
> But law needs courts to interpret laws and processes to enforce them.
> In addition to truly understanding the available underlying
> principles, there are processes that are not receiving much
> discussion. The following were offered in my conference talk on the
> next 50 years:

It's good to hear you use the P word at last. :-)


> > "How often do you see a paper that shows all of the following
> > characteristics?
> >
> > 1. The paper starts with a clear description of the system shortcoming
> > to be improved.
> > 2. It displays a compact model that shows how the difficulty is being
> > caused.
> > 3. It is based on a model that is completely endogenous with no
> > external time series to drive it.
> > 4. It argues for the model being generic and descriptive of other
> > members of a class of systems to which the system at hand belongs.
> > 5. It shows how the model behavior fits other members of the class as
> > policies followed by those other members are tested.
> > 6. It arrives at recommended policies that the author is willing to
> > defend.
> > 7. It discusses how the recommended policies differ from past practice.
> > 8. It examines why the proposed policies will be resisted.
> > 9. It recognizes how to overcome antagonism and resistance to the
> > proposed policies."
> > Granted, my own models and publications do not meet all of these
> > tests.

Wonderful. In fact, valuable! It's a mother lode of rules worth
following. This is similar to Richard's "provisional minimal list" of
model requirements. Standards like these are the first steps toward
establishing a repeatable process. Martin even begins to take those
first few steps when he writes:

"I find your 'minimal list' interesting. If such a list of
'deliverables' could exist, it would help students, and in some courses
we might use a kind of form that guides them through the phases."

And he didn't even use the P word!



Gratefully and warmly yours,

Jack
Posted by "Jack Harich" <register@thwink.org>
posting date Sat, 16 Feb 2008 07:54:21 -0400 (EDT)
_______________________________________________
Locked