Evaluating expected modeling benefits

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
Jean-Jacques Laublé jean-jacques
Senior Member
Posts: 68
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Jean-Jacques Laublé jean-jacques »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Hi everybody.



Due to a reorganisation in my business I have been obliged to reconsider the time I spend on my various activities, and in particular on SD.

I have noticed 8 drawbacks with SD and have managed to reduce seven of them, but I still have to work on the one I consider the principal one: the difficulty of evaluating the level of results to expect from a modelling effort.

Although this problem is not specific to SD, I think it is a principal concern for it. Are there books or articles on that subject, or any comments?

I have browsed all the SD list archives using different search terms, and did not find much about the subject.

Thanks in advance for any reply.

Regards.

J.J. Laublé. Allocar

Strasbourg. France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Sun, 10 Apr 2005 19:35:29 +0200
Thompson James. P (Jim) A142 Jim
Junior Member
Posts: 8
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Thompson James. P (Jim) A142 Jim »

Posted by ""Thompson, James. P (Jim) A142"" <Jim.Thompson@CIGNA.COM>
J.J. Laublé asks how to evaluate the level of results expected
by a system dynamics modelling effort.

System dynamics applications are investments with unknowable paybacks.
However, as John Sterman writes, "The complexity of our mental models
vastly exceeds our capacity to understand their implications," and
concludes, "Simulation is the only practical way to test these
[mental] models." (Business Dynamics, p. 37.)

This is how system dynamics got started in the first place. Jay
Forrester's observation in 1958 - that the unaided mind cannot reliably
apprehend the performance of complex systems - set the study of system
dynamics into motion. In a 1975 essay, he noted that "Orderly
processes are at work in the creation of human judgment and intuition,
which frequently lead people to wrong decisions when faced with complex
and highly interacting systems," and concluded that constructing, testing
and analyzing feedback simulation models would lead to better decision
making.

Herbert Simon observed in 1989 that "We construct and run [simulation]
models because we want to understand the consequences of taking one
decision or another."

At the very least, applying system dynamics methods indicates a serious
interest in investigating problems and answering questions. From a
review of the literature we see that in case after case, the result
is a greatly improved understanding of the issues and likely causes
of problems and better informed decisions. (See for example, Etiënne
A. J. A. Rouwette, Jac A. M. Vennix, Theo van Mullekom, "Group model
building effectiveness: a review of assessment studies" in System
Dynamics Review, Volume 18, Issue 1, Spring 2002, Pages 5-45.)

So you would want to ask whether the consequences of making an ill-
informed decision are likely to be important to you. That is, should
you rely on your "judgment and intuition, which frequently lead people
to wrong decisions," or should you invest in something more rigorous?

Jim Thompson
Director, Economic & Operations Research
Cigna HealthCare
900 Cottage Grove Rd. A142
Hartford, CT 06152
Posted by ""Thompson, James. P (Jim) A142"" <Jim.Thompson@CIGNA.COM>
posting date Mon, 11 Apr 2005 12:02:11 -0400
Bill Harris bill_harris facilita
Junior Member
Posts: 19
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Bill Harris bill_harris facilita »

Posted by Bill Harris <bill_harris@facilitatedsystems.com>
""Jean-Jacques Laublé jean-jacques.lauble wanadoo.fr"" <system-dynamics@VENSIM.COM> writes:


>> I noticed 8 drawbacks with SD, and managed to reduce seven of them but
>> I have still to work on the one I consider as the principal: the
>> difficulty to evaluate the level of results expected by the modelling
>> effort.


My curiosity is getting the best of me; what are the 7 you solved? :-)

Bill
--
Bill Harris http://facilitatedsystems.com/weblog/
Facilitated Systems Everett, WA 98208 USA
http://facilitatedsystems.com/
Posted by Bill Harris <bill_harris@facilitatedsystems.com>
posting date Mon, 11 Apr 2005 09:33:50 -0700
Fabian Fabian f_fabian yahoo.com
Junior Member
Posts: 3
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Fabian Fabian f_fabian yahoo.com »

Posted by Fabian Fabian <f_fabian@yahoo.com>
Hi JJ,

About the ROI of SD Modeling:

If the modeling is performed as a learning effort (Driver: SD Modeling,
KPI: Learning, in Balanced Scorecard notation), you might like to
google "intangible assets valuation", e.g. Lev, Sveiby, Skandia.

If it is performed as a decision support aid related to a highly
quantitative issue, it should be quite simple to quantify the ROI,
but not so easy to isolate the influence of SD on that ROI. As usual,
this more quantitative valuation should be agreed upon by all
stakeholders, in this case, you and the rest of your organization.

Be well...

Fabian Szulanski
Director, System Dynamics Research Center
Professor, System Dynamics
ITBA
Posted by Fabian Fabian <f_fabian@yahoo.com>
posting date Mon, 11 Apr 2005 18:21:32 -0700 (PDT)
Alan Graham Alan.Graham paconsul
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Alan Graham Alan.Graham paconsul »

Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
Hi,

On the question of evaluating benefits of modeling:

Over the last several years, it's become standard practice for every
PA dynamic modeling study to quantify benefit--schedule and cost for
large projects, net present value for businesses, and rather more
diverse measures for government and regulatory models.

Our rule of thumb is that, in comparison to policies (or strategies
or major decisions) arrived at by management judgement, policies from
a dynamic modeling exercise typically improve performance by around
20-30% for corporate strategy and large projects. (You can find several
references in the Graham and Ariza article, SDR 19(1), 2003, p. 40.)
Perhaps not coincidentally, I find the same difference in results in
problems of complex engineering optimization--does this arise from limits
on our basic mental capability to handle interactions of multiple factors?
Our experience in projects aimed at mitigating the cost and schedule impact
of uncontrollable events is this: typically one can find changes that
mitigate half to 2/3 of the adverse impact.

As the saying goes, "of course, your results may differ". System Dynamics
is a process for discovering new things, and you don't know for sure what
you'll discover until after you discover it.

Of course, if a modeler is using a phased approach, early simpler models
should be showing at least the potential stakes involved in one policy
versus another, and thus the benefit of choosing a better versus worse
policy. (Of course, if the modeler has neglected to quantify results
in terms most immediately compelling to the user, the estimate of value
will likewise not be compelling. I'm always surprised at the number of
published modeling studies that don't seem to bother with quantifying benefit.)

Another component of the value of modeling is reduction of risk in getting
to good policies. It's not infrequent that one describes a modeling
study and its outcome to someone outside the organization, and they
say "I wouldn't have needed a whole model to see that". That's easy to
say afterward, but beforehand, many people often will have
reached many different conclusions about what should be done, and which
one wins out (whether good or harmful) is a pretty random and risky
process. Just look at the success rate for mergers or large projects
meeting their stated goals. So even if a modeling effort gives results
that were intuitively obvious to somebody, there's an orderly process
for increasing the odds that a good and constructive policy is the one
enacted. I don't have any good quantification of this "risk impact" of
modeling, but it's there and significant.

Cheers,

Alan

Alan K. Graham, Ph.D.
Decision Science Practice
PA Consulting Group
Alan.Graham@PAConsulting.com
One Memorial Drive, Cambridge, Mass. 02142 USA
Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
posting date Mon, 11 Apr 2005 11:30:33 -0400
John Gunkler jgunkler sprintmail
Member
Posts: 30
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by John Gunkler jgunkler sprintmail »

Posted by ""John Gunkler"" <jgunkler@sprintmail.com>
I run into a similar dilemma in my ""day job"" - where I am a management
consultant helping organizations solve all kinds of problems using a ""tool
kit"" of competencies, models, methods, processes, etc., I have developed
over the years.

But isn't your question a lot like asking a carpenter, "How expensive a
building can you create with that hammer?" The answer doesn't lie in the
hammer. Just so, the expected value of modeling doesn't inhere in the
method we use to create models. The only question you can usefully ask of a
modeling method is something like, "Will it generate the insight to help
someone solve their problem?" Assuming that you can say "Yes" to that, the
"value" of the modeling effort becomes external to the modeling method.

So, I think the answer to your question is simple to state, if somewhat more
difficult to work out in detail, and it requires the client to answer it
themselves -- something along this line:

1. What are you trying to accomplish? To change? To improve?
2. Why is it important to your organization?
3. How important is it (in net present value)?
   a. How much is it costing you right now (if it's a problem)?
   b. How much would it be worth if you could do it (if it's an
      opportunity)?

Clients can come up with numbers to answer these questions -- and if, for
some reason, they cannot, then I suggest to them that they need to better
define what they want to accomplish (or do more cost/benefit analysis)
before they begin spending money on a "project."
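
As a minimal sketch of what I mean by question 3 (purely hypothetical numbers, not from any client engagement), the arithmetic is no more than discounting what the problem costs against what the modeling project would cost:

# Minimal sketch (hypothetical numbers): translating "what is the problem
# costing us each year?" into a net present value, to size a modeling effort.

def npv(cash_flows, discount_rate):
    """Discount a list of yearly cash flows (year 1, 2, ...) to present value."""
    return sum(cf / (1.0 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Assumed figures, for illustration only.
annual_cost_of_problem = 150_000   # what the problem costs per year if unaddressed
years_considered = 5
discount_rate = 0.10

value_of_solving = npv([annual_cost_of_problem] * years_considered, discount_rate)
modeling_budget = 40_000           # proposed spend on the modeling project

print(f"NPV of removing the problem: {value_of_solving:,.0f}")
print(f"Ratio of potential value to modeling budget: {value_of_solving / modeling_budget:.1f}")

If the client cannot fill in even rough numbers like these, that is usually the signal that the problem itself needs better definition first.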

When I use the methods of Lean Six Sigma to help companies improve their
work processes, I can fall back on the long history of other firms who have
had success, and come up with an answer such as "Historically, you can
expect to see $230,000 bottom-line impact from the average Lean Six Sigma
Black Belt project." Both the client and I know that this is utter nonsense
but, for some reason, it seems reassuring to them. Go figure.

That Lean Six Sigma number comes from a database of more than 10,000
projects. I don't believe we have such a database within the SD community.
And even if we did, it would contain an even wider variety of projects than
the Lean Six Sigma database does and, so, be even less useful. But we
certainly can, and do, cite examples of successful modeling efforts as a way
to reassure our clients.
Posted by ""John Gunkler"" <jgunkler@sprintmail.com>
posting date Tue, 12 Apr 2005 08:30:00 -0500
Jean-Jacques Laublé jean-jacques
Senior Member
Posts: 68
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Jean-Jacques Laublé jean-jacques »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Jim Thompson writes:

< System dynamics applications are investments with unknowable paybacks.

And later on

< That is, should
< you rely on your "judgment and intuition, which frequently lead people
< to wrong decisions," or should you invest in something more rigorous?



Thanks for sharing your judgment Jim.

It seems to me that somebody who invests in projects with unknowable paybacks is in fact relying on judgment and intuition and is not acting rigorously.

Using a rigorous tool does not prove that one acts rigorously.

Secondly, it seems hard to believe that, if one tries to evaluate the payback of a study, it is not possible to get at least a rough estimate of that payback, even a very wide one.

I have started studying the payback of my modelling effort and have already
discovered new ideas that I had not yet imagined.

Sterman in his book writes about defining very precisely the object of the
modelling effort.

But it seems to me that to define the effort precisely you need first to define the possible paybacks and, at the same time, the corresponding amount of effort needed to obtain them.

This way you can adjust the payback to your own means, which are always limited.

If this is not possible, I can understand that most organisations hesitate to use SD.

They are not foolish enough to invest in an effort with no knowable payback, even if they do not always act rigorously.

In fact this is the first error I made when I started SD, three years ago.

Why did I do this? Because I acted on judgment and intuition with no rational basis, did not make a rigorous investment, and thought that a complex problem necessarily needed a complex method and could not be studied simply at first. In my view the academics might study this fundamental problem, although their natural inclination is not always towards practical utility, especially if the subject is not complex and subtle enough to justify their interest.

Another point: I never said that I would evaluate the paybacks using intuition rather than rigorous methods. SD is not the only rigorous way to analyse problems. It is sometimes better to get a rough evaluation using a rigorous but simple method in a decent time, before deciding whether it is worth going a step further.


<""Group model building effectiveness: a review of assessment studies""

Thank you for the reference I will consult it at the member service of SD
organisation.



Bill Harris writes


My curiosity is getting the best of me; what are the 7 you solved?



I am afraid, Bill, that your curiosity might not get what it expected. First, I never wrote "solve" but "reduce". This leaves me a wide latitude of reduction, which can be small or big. The amount of reduction will be determined by experience.

Secondly, the seven drawbacks are more or less peculiar to my situation and are not necessarily worthy of attention.

Two other members of the list asked me exactly the same question, but off-list rather than through the mailing list.

I do not know whether it was because they were anxious to get a quick answer or because they were ashamed to show the SD community their interest in such a question.



The 8 drawbacks, in order of descending peculiarity, are:



1. Evaluation of the expected benefits. This is the one I raised in my query and for which I hope to get some help. I do not expect help for the seven others, but anybody may have suggestions.

2. Difficulty of evaluating the time needed to get the job done. This difficulty is due to my lack of experience. I now try to evaluate the minimum and maximum time to devote to the job by slicing it into smaller sub-jobs and evaluating the time to get from one step to the next.

3. This drawback applies to somebody who has to model a subject that he also has to deal with concretely, with ordinary means. It is therefore rather peculiar to my case and must occur relatively seldom.

Working with SD has the effect of putting too much distance between me and the reality I currently have to deal with.

Formulated another way, SD is an intellectual exercise that does not involve doing things practically, whereas I have to be very practical and close to the field of action. While I am building nice equations, an employee may sadly be robbing the company, or other very concrete negative things may happen.

There are other intellectual tasks that are not so far from reality. I do not experience this with ordinary programming, budgeting, negotiating commercial objectives, or visiting customers, which is less intellectual.

The solution is to restrict the time I work on modelling. But if I restrict that time, I have to accept that it will take longer to get the job done and that I must try to be more productive. Before doing something I must also consider the utility of doing it.

4. I am not sure whether this point is specifically due to my lack of
experience or whether it is common in SD. I suspect it is a bit of both.
In my current modelling project I constantly have to try out different
ways of developing the model at every step. This makes the time needed
to get the job done difficult to evaluate and is sometimes very
discouraging. It is particularly true for my problem, where there is no
inventory and no production as described in the ordinary textbooks. I have
not yet found an example of an SD problem that looks even a bit like mine.
When you consult papers or web sites dealing with revenue management (the
main object of my present models) you will never find any reference to SD.
Here again there is no miracle: I need more experience, and before trying
some new way, I must consider the utility of doing so. Good planning of the
time devoted to SD is also a good solution, because it forces you to make
better use of the time you have.

5. SD is a difficult subject that needs considerable intellectual effort.
My intellectual energy is limited and I cannot devote it exclusively to SD.
Other tasks are much less intellectually exhausting and are generally
useful more quickly. Here again I must closely manage the time devoted
to SD.

6. It is very difficult for me to jump from an SD task to something else.
I can do it relatively easily with ordinary programming, although with the
risk of coding errors. But with SD the problem is not the risk of coding
errors, but losing the general idea of the problem; SD coding is relatively
easy. It is difficult to go in and out of an SD task. The solution is to
manage closely the periods of time I devote to SD, so that I will not be
disturbed.

7. There are many bad surprises during the modelling process. This is due
to my lack of experience. The solution is to learn to live with this
drawback and to accept the bad surprises when they occur.

8. When I worked on SD during the week, the employees knew that I was very
concentrated on a subject and did not dare to disturb me. This is very
dangerous for my position, as I have to stay close to my employees'
preoccupations. I have resolved this by organizing and tracking the dialogue
between me and the employees I am in charge of, and by working on SD mainly
on weekends.

Normally I should not do the SD job myself but go to a specialist. It is
not my job and I have other things to do. There are unfortunately no SD
practitioners in my town, and I would have to go to Paris, or possibly
Germany, to find one. This is not very practical. And I am sure that I will
not convince a good SD consultant to devote his limited time to a small
company like ours in a foreign town. I have had occasion to work with
consultants on all sorts of subjects; the experiences were generally bad.
SD being very difficult, I have even less chance of getting the quality
expert I need.

This is a summary of my current problems. Only time will tell how the story continues.

Regards.

J.J. Laublé Allocar

Strasbourg France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Wed, 13 Apr 2005 09:47:25 +0200
Erling Moxnes Erling.Moxnes ifi.
Junior Member
Posts: 5
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Erling Moxnes Erling.Moxnes ifi. »

Posted by Erling Moxnes <Erling.Moxnes@ifi.uib.no>
Two comments regarding expected modelling benefits:

1. SD can be used to find for instance a production policy that works to
stabilise production, sales, inventories etc. Whether this is a good policy
or not for a firm depends on the benefits and the costs of the production
policy. To convince decision makers, benefits and costs need to be discussed
and even better - quantified.

2. Does this mean that SD should be viewed as a branch of optimisation?
I learned in my introductory SD class that "maximising something" is not a
proper SD problem statement. I never forgot this; however, I have often
wondered - why not? My answer to myself is that optimisation ideally
requires that one considers and makes tradeoffs between all possible policy
variables. The boundary will be very wide, and focus on the particularly
important complex, dynamic issues may be lost. For instance, in SD one may
develop a dynamic inventory policy without spending a lot of effort to
establish the "desired" inventory level (involving numerous cost estimates).
A policy sensitivity test will reveal whether or not the policy is sensitive
to the exact value of the "desired" level.
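
Here is a minimal sketch of such a policy sensitivity test, using a toy stock-adjustment inventory policy with assumed parameter values (illustrative only, in Python):

# Toy policy sensitivity test: does the behaviour of a simple
# stock-adjustment ordering policy depend strongly on the assumed
# "desired" inventory level? (Illustrative values only.)

def simulate(desired_inventory, weeks=52, dt=1.0):
    inventory = 100.0
    adjustment_time = 4.0        # weeks to close the inventory gap
    demand = 20.0                # constant demand, units/week
    peak_order_rate = 0.0
    for _ in range(int(weeks / dt)):
        orders = demand + (desired_inventory - inventory) / adjustment_time
        orders = max(orders, 0.0)            # no negative ordering
        inventory += (orders - demand) * dt
        peak_order_rate = max(peak_order_rate, orders)
    return inventory, peak_order_rate

for desired in (80, 100, 120, 150):
    final_inv, peak = simulate(desired)
    print(f"desired={desired:>4}  final inventory={final_inv:7.1f}  peak order rate={peak:6.1f}")

If the behaviour of interest hardly changes across the swept values, the costly work of pinning down the "desired" level can safely be skipped.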

Still I think that the basic framework for optimisation (objective and
restrictions) could serve as a useful guideline in SD: have a clear
objective in mind and see how you can tackle the restrictions imposed by the
dynamics of the system. If the dynamic problem is of great importance for
the total net benefits, it should be acceptable (and possibly an advantage)
that a costly (and limited) overall optimisation has not been attempted.

I am eager to hear if some of you can contribute other reasons not to
"maximise something" as the problem statement in SD?


My best regards,

Erling Moxnes



-- The System Dynamics Group University of Bergen, Norway http://www.ifi.uib.no/sd/
Posted by Erling Moxnes <Erling.Moxnes@ifi.uib.no>
posting date Thu, 14 Apr 2005 10:46:36 +0200
Jean-Jacques Laublé jean-jacques
Senior Member
Posts: 68
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Jean-Jacques Laublé jean-jacques »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Alan Graham writes

< Of course, if a modeller is using a phased approach, early simpler models
< should be showing at least the potential stakes involved in one policy
< versus another, and thus the benefit of choosing a better versus worse
< policy.



Thanks for answering my question, Alan.

This method unfortunately does not work in my case, and I think that there
must be many other problems where it does not work.

Of course I use a phased approach, in very small steps.

But when you start with an influence diagram that has about 22 feedback loops, where all the factors are interdependent and more or less important - that is, where almost every loop depends on many others - and where you have no objective method to track the order of dominance of each loop, then whatever you start from, the result will represent only a fraction of the full result and can be completely biased.


In fact you suppose that, where the general equation of the model is Y(t) = F(X1, ..., Xj, t), with j the number of policies, the equation in fact separates as Y(t) = G(X1, ..., Xj-1, t) + H(Xj, t).

In that case, if you start by solving the H part, you will have found the good policy for the parameter Xj. Of course if you have only one policy to optimize (j = 1), it works. But if you have simultaneous policies to solve, with strong and probably non-linear interactions, I do not see what to do.

I tried to transform the influence diagram into a quantitative model, but due to the imprecision of the influences the overall result could not have had any meaning, not to mention the difficulty of analysing it. In fact I never fully constructed the quantitative model.

I would have needed a tool that, starting from the structure of the model, could show how loop dominance varies as one varies the strength of the influences and possibly their shape. Stated otherwise: what is the influence of the structure of the model alone on the result, and what is the influence of the structure plus the strength of the influences on the result?

I could then have had interesting insights about different policies. Just knowing which loop is in general dominant is already an important insight. So is knowing what can be quantified and what cannot (or only with a lot of imprecision), so as to be able to go SMOOTHLY from the qualitative diagram to a certain level of quantification.
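
To make the wish concrete, here is a minimal sketch of the kind of tool I have in mind, on a toy two-loop model with assumed gains (purely illustrative, nothing like my real 22-loop model):

# Toy loop-dominance sweep: a stock X fed by a reinforcing loop (gain g_r)
# and drained by a balancing loop (gain g_b toward a goal). For each pair
# of assumed gains, report which loop contributes more to the net rate
# over the run. Purely illustrative.

def dominance(g_r, g_b, goal=100.0, x0=10.0, steps=100, dt=0.25):
    x = x0
    reinforcing_total = balancing_total = 0.0
    for _ in range(steps):
        reinforcing = g_r * x                 # growth proportional to the stock
        balancing = g_b * (goal - x)          # adjustment toward the goal
        reinforcing_total += abs(reinforcing) * dt
        balancing_total += abs(balancing) * dt
        x += (reinforcing + balancing) * dt
    return "reinforcing" if reinforcing_total > balancing_total else "balancing"

for g_r in (0.02, 0.05, 0.10):
    for g_b in (0.05, 0.10, 0.20):
        print(f"g_r={g_r:.2f} g_b={g_b:.2f} -> dominant loop: {dominance(g_r, g_b)}")

A real tool would of course have to do this over many interdependent loops at once, which is exactly where it becomes hard.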

You are fortunate to have experience with a lot of cases, and you must know from experience what is possible and how to do it.

My problem is that I have dynamics with many interdependent policies that have to be combined, and not enough experience to find the correct question: one that will have sufficient utility while being sufficiently easy to model.

Regards.



Fabian Szulanski writes

About the ROI of SD Modeling:

<If the modeling is performed as a learning effort (Driver: SD Modeling,
<KPI: Learning, in Balanced Scorecard notation), you might like to
<google "intangible assets valuation", e.g. Lev, Sveiby, Skandia.
Thanks for the reference!

<If it is performed as a decision support aid related to a highly
<quantitative issue, it should be quite simple to quantify the ROI,
<but not so easy to isolate the influence of SD on that ROI.
You are right; it is true that there are two parts to my question.

I wrote too that the question was not strictly related to SD.

The first question is: what are the potential benefits of changing the policies? This is in fact not SD-related.

And the second question is: what will be the contribution of SD to the first question?

I have both problems, and the first one is the most important because it governs the second one. But contrary to what you write, finding the ROI is not simple at all.

Here again it is because the result will be obtained by a happy combination of many policies, and evaluating in advance the range of these results is difficult, not to mention deciding which policies to concentrate on and which have the most influence.

Once I am sure of the range of the ROI, I can start trying to find a solution.

Here again, whether it needs SD is a new problem. It is worth studying too.

My problem is a combination of static optimization and dynamics, and one has to separate the purely static problems from the dynamic ones.

There are for instance dynamic loops that can be expressed statically without large bias, or you may have the choice and find it difficult to decide what to do.

Simply stated, my problem is a resource-allocation optimization.

The question is whether one can allocate the resources all at once, independently of the time at which each allocation decision is made, or whether it is necessary to track the moment when each resource is allocated and whether that moment influences the allocation decisions that come later.

Both options work, of course with varying degrees of approximation.

I should explain this with a very simple model. I will try to construct such a simple model.
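
In the meantime, as a rough, purely illustrative stand-in (invented numbers, not my real fleet data), one can compare allocating a fixed number of vehicles all at once with allocating them period by period as demand is revealed:

# Toy comparison: allocate a fixed fleet to two customer segments
# all at once (static) versus period by period as demand is revealed
# (sequential). Assumed demands and margins, for illustration only.

demand = {            # vehicles requested per quarter, per segment
    "corporate": [30, 10, 40, 20],
    "leisure":   [10, 35, 15, 30],
}
margin = {"corporate": 120.0, "leisure": 90.0}   # profit per vehicle rented
fleet_budget = 50                                # vehicles available per quarter

def profit(allocation, quarter):
    """Profit in one quarter given an allocation {segment: vehicles}."""
    return sum(margin[s] * min(allocation[s], demand[s][quarter])
               for s in allocation)

# Static: one allocation, chosen up front and reused every quarter.
static_alloc = {"corporate": 30, "leisure": 20}
static_profit = sum(profit(static_alloc, q) for q in range(4))

# Sequential: each quarter, serve the higher-margin segment first,
# then give the remaining vehicles to the other segment.
sequential_profit = 0.0
for q in range(4):
    corp = min(demand["corporate"][q], fleet_budget)
    leis = min(demand["leisure"][q], fleet_budget - corp)
    sequential_profit += profit({"corporate": corp, "leisure": leis}, q)

print(f"static allocation profit:     {static_profit:,.0f}")
print(f"sequential allocation profit: {sequential_profit:,.0f}")

The gap between the two numbers is one way of seeing whether the timing of the allocation decision matters for a given demand pattern.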

Thanks for expressing your ideas and best regards.



John Gunkler writes:



<But isn't your question a lot like asking a carpenter, "How expensive a
<building can you create with that hammer?" The answer doesn't lie in the
<hammer. Just so, the expected value of modeling doesn't inhere in the
<method we use to create models. The only question you can usefully ask of a
<modeling method is something like, "Will it generate the insight to help
<someone solve their problem?" Assuming that you can say "Yes" to that, the
<"value" of the modeling effort becomes external to the modeling method.



Thank you John for sharing your ideas.

Here again there are two questions: the question of the potential benefit of modifying a policy, and the aptitude of a method to find the right policy.

If a modelling method generates the insight to help someone solve his problem, it automatically has a value, which is of course dependent on the potential benefit!

So the expected value of modelling does partly inhere in the method we use to create the model. It seems to me that there is some contradiction here.

<So, I think the answer to your question is simple to state, if somewhat more
<difficult to work out in detail, and it requires the client to answer it
<themselves -- something along this line:
<1. What are you trying to accomplish? To change? To improve?
<2. Why is it important to your organization?
<3. How important is it (in net present value)?
< a. How much is it costing you right now (if it's a problem)?
< b. How much would it be worth if you could do it (if it's an
<opportunity)?

<Clients can come up with numbers to answer these questions -- and if, for
<some reason, they cannot, then I suggest to them that they need to better
<define what they want to accomplish (or do more cost/benefit analysis)
<before they begin spending money on a "project."
I want to improve my price-setting method and my investments in order to improve my cash flow.

The question is simple, and what I am looking for is: what range of increase in cash flow can I expect, in general, by modifying my price-setting policy and my investment and disinvestment policy (buying and selling back cars)?

This question is simple to state but difficult to solve, because the result will depend on the method. We currently use a semi-empirical method that is better than the older methods we used before. So the question could be: can another, more rigorous method substantially increase the present results, preferably at a cost lower than the increase in results?

So the cost/benefit analysis is not independent of the method you will use to solve the problem, and asking your customer to define the cost/benefit when they do not know SD - or even if they do - is not simple, and has to do with the method, whether you use SD or not.

This is unfortunately a very theoretical discussion and does not solve my practical problem.

It seems to me that there is no method to evaluate both questions.

I understand that the first question is very dependent on the kind of problem to solve and that there is no general method.

But the second question is far from being solved, although there could be some method for it. For example, something that helps reduce the gap between an influence diagram and the complete quantitative model, and that already gives some results derived from the diagram, would already help a bit.

Another idea is that the two questions are not so independent from one
another.

The way you solve the first one may have an influence on the choice of the
method to solve the second one, or help the way you use the method.



Regards to everybody.

J.J. Laublé Allocar

Strasbourg France
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Thu, 14 Apr 2005 10:24:38 +0200
Alan Graham Alan.Graham paconsul
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Alan Graham Alan.Graham paconsul »

Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
All,

Erling Moxnes asks for 1. discussion on propriety of optimization in SD,
and 2. whether SD then would be a branch of optimization.

1. There's a moderately extensive discussion of SD uses of optimization in
Graham and Ariza 2003, "Dynamic, hard and strategic questions: Using optimization
to answer a marketing resource allocation question", System Dynamics Review 19(1),
27-46. The bottom line on why optimization has historically been considered
inappropriate: Explicitly, Forrester believed that optimization (the closed-form
analytical optimizations current when Industrial Dynamics came out) would bias
modelers toward over-simplified models and poor modeling practices. Implicitly,
the problems being addressed then dealt much more frequently with stability and
growth rate than with explicit tradeoffs within (mostly) steady-state growth, as
now seems to be the case, at least in commercial strategy applications. For
stability questions, optimization is both trickier and could be expected to add
less value. (And optimization software is easier now, and managements are
generally much more quantitatively oriented.)


2. Even if optimization were heavily used in SD, SD would no more be a branch of
optimization than medicine would be a branch of biochemistry. SD is an applied
science (aimed at doing things better in the real world) rather than a "pure"
science (aimed at creating more true knowledge within a topic area). SD draws
on mathematics, just as it draws on economics, marketing science, management
psychology and other fields, but isn't "part" of them either theoretically
or in practice.

(B.t.w. this view implies that the metaphysical discussions about the "true" nature
of, e.g., levels and rates, are a bit beside the point: these concepts are part
of the field because they're useful, not because they're "true" in any absolute
sense.)

cheers,

Alan

Alan K. Graham, Ph.D.
Decision Science Practice
PA Consulting Group
Alan.Graham@PAConsulting.com
One Memorial Drive, Cambridge, Mass. 02142 USA
Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
posting date Thu, 14 Apr 2005 11:11:56 -0400
Alan Graham Alan.Graham paconsul
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Alan Graham Alan.Graham paconsul »

Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
A clarification on valuing impact of modeling in a ""phased approach"", and
what can be done with a purely causal-diagram (non-quantitative) model:

In recommending that a model could approximately value the strategic impact
benefits of a later-phase model, I was referring to a THREE-phased ""phased
approach"", where there is a qualitative model (diagram), initial simulation
model, and a more inclusive and complex simulation model. (Described in Jim
Lyneis' ""System Dynamics for business strategy: A phased approach"", System
Dynamics Review 15(1) 37-70, 1999. The initial simulation model (phase II)
can be used to quantify (at least approximately) the difference in value
between different strategies / policies, leading to a very well-informed
decision about the next-phase modeling (phase III).

Jean-Jacques Lauble rightly points out the difficulties of trying to draw
valuation directly from a causal diagram (phase I in Jim Lyneis' scheme).
I agree, and was decidedly not recommending this.

There is one nuance worth noting, however. In parallel to the traditional
SD discussion on point prediction versus direction of behavior change, we
can have the same discussion about using causal diagrams to "predict"
direction of value impact versus a specific numerical prediction of value
impact.

One can evaluate a strategy choice--whether to do A or B, working from causal
diagrams. If this is done:

a) with a rigorous and explicit process, with
b) a good team (good modelers and people well-experienced with the real system), and with
c) some prior quantitative modeling experience analogous to the dynamics in question,

qualitative analysis can produce reasonably reliable results, for a restricted
set of questions. (Right now, however, I'm hard-pressed to articulate any general
guidelines for what questions are and are not addressable this way--it's still a
case-by-case question.) (A quantitative scoring of strategies based on a qualitative
model--a diagram--is briefly described in Mayo, Donna, Martin Callaghan and William J.
Dalton, "Aiming for Restructuring Success at London Underground", System Dynamics
Review 17(3), 262-289, 2001. In this study, they had the opportunity to compare
recommendations from qualitative analysis to recommendations from later simulation
modeling--the first recommendations were reasonably accurate. They have applied
similar methodology many other times, with measurable success after the fact, but
the results are as yet unpublished.)

That said, however, attempting to quantify benefit before the fact within a qualitative
framework would seem counterproductive. One would be attaching a number to a policy or
strategy impact without the means to even start to evaluate its accuracy. In parallel
to the traditional SD discussion of point prediction, it's perhaps feasible to be
reasonably certain that strategy A gives better value than strategy B, even on the
basis of only a causal diagram (and all of the real-system knowledge it summarizes).
But it's much more difficult (if not impossible) to predict, with any rigor or accuracy,
the quantitative value difference created by A versus B, let alone to characterize
the uncertainty of that valuation.

If I were a customer, I'd much rather hear "we usually get 20-30% improvement in value
over strategies chosen by expert judgement" than "we've used a highly questionable
methodology, even without other quantitative modeling, to determine that you'll
do $3.225B better."

Cheers,

Alan

Alan K. Graham, Ph.D.
Decision Science Practice
PA Consulting Group
Alan.Graham@PAConsulting.com
One Memorial Drive, Cambridge, Mass. 02142 USA
Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
posting date Thu, 14 Apr 2005 12:05:08 -0400
kstave ccmail.nevada.edu
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by kstave ccmail.nevada.edu »

Posted by kstave@ccmail.nevada.edu


..
>I am eager to hear if some of you can contribute other reasons not to
>"maximise something" as the problem statement in SD?


I work on group model building projects for environmental management where
the group usually consists of stakeholders with different views on the
management problem and goal. In the beginning of the project, the task
focuses on how to bring the stakeholders to a common understanding of the
system generating the problematic behavior(s). That means we have to
start by getting the group to agree on what the problematic behavior(s)
are. Sometimes that is straightforward and sometimes it's not. In any
case, the hard part in the beginning is getting the skeptics to "buy into"
the concept and structure of the model. At some point, however, the group
"buys" it, and then someone invariably asks "Ok, so now that we have the
model built, can't we just get it to tell us the best answer, the optimal
solution?"

To answer that question I usually bring up the output screen, which
generally contains more than one graph (related to the problematic
behavior(s)). In the most recent case, the outputs were "average time
spent in traffic per day", "Air Quality (tons of CO produced per day)",
"Cost of Policy", "Population", and a couple of others. I explained that
in order to optimize the system, they would have to specify not only
objectives or restrictions for each variable, but also weights on the
variables. Some people in the group think the best solution includes an
increasing population; some people want population to decrease. Some
people think reducing time in traffic is the most important outcome; others
think minimizing the amount of CO produced is most important. Some think
cost is critical and others think air quality at any cost is more
important. The point is that everyone who looks at the output graphs is
making their own tradeoffs and assigning different weights. It might be an
interesting exercise to have the group try to achieve consensus on an
objective function and restrictions, but I haven't gone that far. I think
the greater value of using system dynamics in this context is to help
stakeholders understand the dynamic complexity of the system and to give
them a more objective basis for discussing the resulting consequences and
tradeoffs of different strategies.
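
To make the point concrete, here is a minimal sketch (invented outputs and weights, not the actual project's numbers) of how the same two policy runs get ranked differently once each stakeholder's weights are made explicit:

# Toy illustration: the same two policy runs, scored with different
# stakeholder weights over the model outputs. Numbers are invented.

outputs = {   # simulated end-of-run outputs; lower is better for each measure
    "policy_A": {"traffic_hours": 1.2, "co_tons": 300.0, "cost_m": 40.0},
    "policy_B": {"traffic_hours": 0.9, "co_tons": 380.0, "cost_m": 65.0},
}

stakeholders = {   # each stakeholder's weights on the (normalized) outputs
    "commuter":        {"traffic_hours": 0.7, "co_tons": 0.1, "cost_m": 0.2},
    "air_quality_ngo": {"traffic_hours": 0.1, "co_tons": 0.8, "cost_m": 0.1},
    "budget_office":   {"traffic_hours": 0.1, "co_tons": 0.1, "cost_m": 0.8},
}

def score(policy, weights):
    """Weighted sum of outputs, each normalized by the worst value across policies."""
    total = 0.0
    for key, w in weights.items():
        worst = max(p[key] for p in outputs.values())
        total += w * outputs[policy][key] / worst   # lower score = better
    return total

for name, weights in stakeholders.items():
    best = min(outputs, key=lambda p: score(p, weights))
    print(f"{name:16s} prefers {best}")

Running it shows different stakeholders preferring different policies from the very same output graphs, which is exactly why I stop short of handing the group a single "optimal" answer.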

-- Krys
Posted by kstave@ccmail.nevada.edu
posting date Thu, 14 Apr 2005 15:35:14 -0700
Jean-Jacques Laublé jean-jacques
Senior Member
Posts: 68
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Jean-Jacques Laublé jean-jacques »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Hi Erling



<I am eager to hear if some of you can contribute other reasons not to
< "maximise something" as the problem statement in SD?





In my problems, pure static optimization is mixed with dynamics.

I think that whether to consider any optimization at all depends on the problem.

Trying to optimize something can sometimes give useful insights with sufficiently simple problems.

But it also has severe drawbacks.

First, to optimize it is necessary to decide on something to optimize. This can be artificial and arbitrary. It may limit the range of solutions. It is necessary to have already a good understanding of the problem to choose the payback. Furthermore it limits your problem to an optimization, that is, to finding the optimum value of something (although in some packages you can optimize a mix of different values), which is restrictive.


Secondly, it orders numerically the ways to resolve your problem and supposes that the best-ordered one is the one that suits you. There is no proof that this simplistic ordering scheme is the only way to analyze the problem. There may be many policies that are equivalent and not comparable by any ordering method.


Thirdly it supposes that this optimum exists.


Then it forces one to model reality as closely as possible to catch that optimum. It pushes toward a perfect model of reality, which can be costly and difficult to analyze.


It treats social problems as if they were mechanical systems, which they are not.


Looking for an optimum is maybe asking a question that has no answer.

In life it is better to formulate questions that can have an answer.

It saves a lot of time. This is called realism.



Regards.

J.J. Laublé, Allocar

Strasbourg France
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Thu, 14 Apr 2005 17:00:24 +0200
geoff coyle geoff.coyle btintern
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by geoff coyle geoff.coyle btintern »

Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
As Alan points out, optimisation has been used in SD for a very long time. I
think that the first uses go back to the 1970s.

I no longer get the System Dynamics Review, so I haven't seen his paper, but I
guess that his references will show what I mean. There's also an extended
discussion of the theory and technique of optimisation in my 1996 book
'System Dynamics Modelling: A Practical Approach'. As well as formulating
objective functions for SD optimisation, it addresses the vital issue that
the structure of the model can also be an optimisation parameter.

The real subtlety in SD optimisation is in the technique. On the face of it,
the model is simulated many times as the optimisation proceeds, so it looks
like optimisation by repeated simulation. However, that's just another way
of experimenting with a model and, as always, the experimental results need
to be studied closely. That study will show new insights, leading to fresh
ideas for optimisation, such as different parameter sets (including
structural ideas), or changes to the objective function. In that way one is
doing simulation by repeated optimisation, which is a much more powerful
idea than simulation alone.
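
As a minimal sketch of that outer experimental loop (a toy one-parameter model, with a crude grid search standing in for a real optimiser, purely for illustration):

# Toy "simulation by repeated optimisation": for each variant of the
# objective function, search over a policy parameter, then study how the
# recommended parameter shifts. Grid search stands in for a real optimiser.

def simulate(aggressiveness, periods=40):
    """Tiny capacity-adjustment model; returns (total profit, peak backlog)."""
    capacity, backlog, profit = 50.0, 0.0, 0.0
    peak_backlog = 0.0
    for t in range(periods):
        demand = 60.0 + (10.0 if t > 20 else 0.0)       # step in demand
        backlog = max(backlog + demand - capacity, 0.0)
        capacity += aggressiveness * backlog             # policy under study
        profit += min(capacity, demand) * 2.0 - capacity * 1.0
        peak_backlog = max(peak_backlog, backlog)
    return profit, peak_backlog

objectives = {
    "profit only":            lambda p, b: p,
    "profit minus 5*backlog": lambda p, b: p - 5.0 * b,
}

candidates = [i / 100.0 for i in range(0, 21)]            # 0.00 .. 0.20
for name, objective in objectives.items():
    best = max(candidates, key=lambda a: objective(*simulate(a)))
    print(f"{name:24s} -> recommended aggressiveness = {best:.2f}")

It is the comparison across those repeated runs, not any single "best" number, that produces the insight.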

The thing that one must absolutely not do is run an optimisation and say
"ah, that's the optimal solution!".

It's not at all clear why Alan says that optimisation could be expected to
add less value for stability questions. That's not been my experience, at
any rate.

Regards,

Geoff

Visiting Professor of Strategic Analysis,
University of Bath
Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
posting date Tue, 26 Apr 2005 10:07:58 +0100
j-d jaideep optimlator.com
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by j-d jaideep optimlator.com »

Posted by j-d <jaideep@optimlator.com>
I have been itching to respond to some of the recent
threads but have been holding back for one reason or
the other. A recent post by Dr. Coyle (specifically
his comments on "simulation by repeated optimisation,
which is a much more powerful idea than simulation
alone", and "The thing that one must absolutely not
do is run an optimisation and say 'ah, that's the
optimal solution!'") has finally prompted me to chime
in.

Here are some thoughts:

1. As part of my Ph.D. work, published in
1996, I did multiple-player dynamic game optimization based
on the LTG (Limits to Growth) model. Basically what this means is that
multiple players (two in my case, North and South
blocks of countries) choose policies based on the
different information available to them, in order to
optimize their respective objective functions. The
ideas are dynamic extensions of static game theory
(Prisoner's Dilemma, etc.) and run the gamut of
deterministic and stochastic cooperative and
noncooperative Nash and Stackelberg games.

Assuming that the regional structures of the North and South
blocks are similar to the LTG model, I had started with
the extremely ambitious plan of solving these games on
a Macintosh computer - my gray hair turned truly gray
during that experience. The plan involved converting
all LTG graphical functions back to equivalent
equations to get multi-player differential game
solutions. Details are in the dissertation, but here
is a very short summary of lessons:

1. Non-linear optimization is very hard, especially
for models of the scale of LTG. Hence I simplified the
model to a great degree for optimization - simulations
are much easier.

2. Results of optimization can be as surprising as
those from simulation, from the ""insights"" point of
view.

3. Data are very important, or else you have GIGO (garbage
in, garbage out).

4. A big insight for me was: because solutions of
nonlinear systems are unpredictable, both in straight
simulation and in multiple-player dynamic games
(players adjusting strategy over time), one
optimization result is not very useful (as Dr. Coyle
said), and repeated optimizations must be done (after
changing structure/parameters) to gain a deeper
understanding of what policies to apply. I had called
this idea "optimlation" - doing multiple
optimizations in the way we now do simulations, for
true (or closer to true) optimal results (more on this
at www.optimlator.com).
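
For flavour, a minimal sketch of the game idea in miniature (a made-up two-player resource game, nothing like the full LTG-based games): each player repeatedly re-optimises its own policy against the other's current policy, the simplest iterated-best-response route toward a Nash-like outcome.

# Toy two-player game solved by iterated best response: each player picks an
# extraction rate for a shared resource; a higher own rate raises own payoff
# but degrades the shared resource. Purely illustrative.

def payoff(own_rate, other_rate):
    resource_quality = max(1.0 - 0.8 * (own_rate + other_rate), 0.0)
    return own_rate * resource_quality           # harvest * quality

candidates = [i / 100.0 for i in range(0, 101)]  # rates between 0 and 1

def best_response(other_rate):
    return max(candidates, key=lambda r: payoff(r, other_rate))

north, south = 0.5, 0.5                          # arbitrary starting policies
for round_ in range(20):                         # iterate until policies settle
    north = best_response(south)
    south = best_response(north)

print(f"north extraction rate: {north:.2f}")
print(f"south extraction rate: {south:.2f}")
print(f"north payoff: {payoff(north, south):.3f}")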

This work was done 9 years ago and I am not doing
SD/optimization any more. I am finding my current
insights in programming and database work :) One great
example of "optimlation" I find in traditional martial
arts practice (aikido for me). There the emphasis is
really on optimization (else one's life is on the line),
but the learning happens through repeated simulations,
with the objective function always in mind (zanshin -
be aware of your surroundings/enemy all the time, and
so on). All this is closer to Stackelberg stochastic
dynamic games. I am truly glad I don't have to think
of all this while actually practicing the art.

2. On hydrological models:

A one-year stint of working at the University of Houston with
Houston wastewater officials taught me that the value
of SD really lies not in replicating the engineering
water-flow models but, if possible, in integrating the
more socio-political issues that are always present
(for example, why is more money going to rich
neighborhoods, how much flooding occurs as a result of
poverty leading to unimproved sewer pipelines, and so
on). It is not productive (though doable and
instructive) to build SD hydrological models, because
engineers won't care much for them. Our AHA moments
always came in the integration part. Average high-level
data from hydrological models can be used as
inputs to SD models and simulations done by varying
the socio-economic parameters, because that is where
the city officials lack understanding - engineering is
generally not their biggest problem.

I apologize for the long email - I had been mulling on
these threads for a long time but Dr. Coyle's
insightful comments finally made me write them down.

Thanks

Jaideep

Jaideep Mukherjee, Ph. D.
jaideep@optimlator.com
http://www.optimlator.com/
Posted by j-d <jaideep@optimlator.com>
posting date Wed, 27 Apr 2005 13:54:14 -0700 (PDT)
Alan Graham Alan.Graham paconsul
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

Evaluating expected modeling benefits

Post by Alan Graham Alan.Graham paconsul »

Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
I must emphatically underscore Geoff's observations.

Optimization itself is far from the end of analysis.

Sometimes the results turn out to be completely implausible, because hill-
climbing optimization often climbs its way into extreme conditions that
weren't anticipated. (I often include optimization in our laundry list
of available validation tools--in practice, it's pretty efficient at
uncovering some kinds of flaws that really matter to outcomes.) Just as
one should distrust simple constraints on level variables (and reformulate
to represent constraints correctly), so should one distrust simple limits
on parameter values in optimization (and reformulate such that optimization
realistically never tries to go to the implausible extreme value).
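
A minimal sketch of that reformulation idea (an invented marketing-spend example, not from any client model): instead of handing the optimiser a hard upper bound and letting the result sit at the bound, build the limiting pressure into the model so that extreme values become unattractive to the search itself.

# Toy illustration of reformulating a constraint rather than relying on a
# hard parameter bound. Invented example: choosing a marketing spend whose
# search range is capped at 500.

def profit_naive(spend):
    # Naive formulation: each unit of spend returns 1.5 in revenue and costs
    # 1.0, so the optimiser simply runs to whatever upper bound we impose.
    return 1.5 * spend - 1.0 * spend

def profit_reformulated(spend):
    # Reformulated: overhead grows faster than linearly at high spend,
    # representing the real-world strain that made the bound feel necessary,
    # so extreme values become unattractive to the search itself.
    return 1.5 * spend - (1.0 * spend + 0.005 * spend ** 2)

candidates = range(0, 501, 5)               # hard bound at 500
best_naive = max(candidates, key=profit_naive)
best_reform = max(candidates, key=profit_reformulated)
print(f"naive formulation picks spend  = {best_naive}  (pinned to the imposed bound)")
print(f"reformulated model picks spend = {best_reform}")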

Sometimes the optimization results aren't unique and represent a managerially
unacceptable solution (typically a "harvest" solution of going out of
business with a lot of cash).

Sometimes it happens that the optimization results turn out to be highly sensitive
to assumptions, which must be dealt with.

When things are finally straightened out, ideally optimization is the
penultimate step before articulating a recommendation or outcome that simply
puts together known facts in a new way such that the recommendation becomes
obvious.

What do people think of this general description of when optimization is useful:

Just as you simulate when the known pieces of the system interact in complex
and hard-to-fathom ways, you use optimization when the various policy (/strategy/
whatever) levers combine or interact in complex ways (including combinatorial
explosion that prevents easy understanding of how they interact). Neither a
single simulation nor a single optimization is a trustworthy basis for action.

Thoughts?

cheers,

Alan

PS: On Geoff's observation about applicability to stability problems: I've always had
good results in understanding stability through hypothesis-testing
experimentation. That said, my peculiar circumstances never called for
prescriptive understanding ("do this to stabilize the system"), as opposed to
understanding and explaining known oscillatory behavior. And my stability work
(economic cycles) involved investigating very similar systems for years at a time.
Geoff may have encountered very different situations, with a newly-modeled
system and real-world time constraints. In such cases, it's plausible that
optimization may be the shortest path to useful results. I, perhaps unlike
Geoff, just haven't encountered stability as a consulting client problem.
It's amazing how much the appropriate approach changes for different
situations. akg

Alan K. Graham, Ph.D.
Decision Science Practice
PA Consulting Group
Alan.Graham@PAConsulting.com
One Memorial Drive, Cambridge, Mass. 02142 USA
Posted by ""Alan Graham"" <Alan.Graham@paconsulting.com>
posting date Thu, 28 Apr 2005 14:15:33 -0400