QUERY The Minimum Acceptable Model Standard

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
Locked
Richard Dudley <richard.dudley@attglobal.net>
Junior Member
Posts: 2
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Richard Dudley <richard.dudley@attglobal.net> »

Posted by Richard Dudley <richard.dudley@attglobal.net>

I think there is some BASIC model quality that we all expect when we read
a paper or report based on a modeling exercise. It's good to be open-
minded but there are limits. :-)

Even overview models and causal loop diagrams without any equations have
minimum standards. Presumably this is some of what is taught in courses on
system dynamics modeling.

I think that we all expect some sort of standard, although that standard may
be different depending on the type of model being used.

I agree that the way of looking at a problem (or system) might vary from one
individual or group to another, and this may generate a lot of discussion and
disagreement. In fact this is the very reason a clearly laid out model is
important -- to encourage discussion about the model structure and the
assumptions made.

Before discussion can take place, the model
must be reasonably understandable to interested parties who would normally
have (or be able to gain) sufficient knowledge to understand the concepts
embedded in the model.

Thus, in addition to the longer list of what makes an ideal model, we need to
consider the -minimum- qualities a model must have for discussion to start.

Here I am not thinking so much about the structural questions, which might be
open to discussion at a later point, but about the minimum requirements for
understanding the model itself in its present form.

In other words, what do we need to see in a model in order to comment
intelligently on that model, even if, or especially if, we do not agree with
its content or outcomes?

Here is a provisional minimal list of things that I think should be required
(most will note that these have been listed elsewhere):

1. Clearly presented model structure (not just an overview map)
2. Clearly stated model equations which have no obvious errors
3. Units for all model components
4. The model is available for examination, and such examination should allow
running/testing the model
5. A short description of each model component
6. The model should withstand basic validity tests -- such as reasonable
extreme conditions tests (see the sketch after this list)
7. Output should be reasonable compared to the real world
8. Output should be consistent with what the authors claim is happening in the
model (even if we may not agree)
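
To make item 6 concrete, here is a minimal sketch of what an automated
extreme-conditions test might look like -- plain Python around a hypothetical
one-stock inventory model (my invention for illustration, not any model
discussed here):

def simulate_inventory(demand, production_capacity, dt=0.25, steps=400):
    # Euler integration of a toy inventory stock; returns its trajectory.
    inventory = 100.0  # units
    trajectory = []
    for _ in range(steps):
        shipments = min(demand, inventory / dt)        # cannot ship stock we do not have
        production = min(production_capacity, demand)  # simple ordering rule
        inventory += dt * (production - shipments)
        trajectory.append(inventory)
    return trajectory

# Extreme-conditions checks: the stock must stay non-negative even at
# absurd inputs (zero demand, enormous demand, zero capacity).
for demand, capacity in [(0.0, 50.0), (1e9, 50.0), (10.0, 0.0)]:
    for level in simulate_inventory(demand, capacity):
        assert level >= 0.0, f"negative inventory: demand={demand}, capacity={capacity}"

The point is not the particular model but the habit: run the model at inputs
far outside the intended range and check that the stocks still behave
physically.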

I would expect a second tier of requirements related to model quality, such
as more careful examination of model structure, or whether relationships are
correctly formulated, etc.

I don't think the first tier of standards needs to be particularly harsh, but
models which are used to promote policies (for example) should pass minimum
standards to allow discussion. (Recall that my original post was related to
marginal models widely distributed in the literature... which might degrade
the image of SD modeling.)



Richard
Posted by Richard Dudley <richard.dudley@attglobal.net>
posting date Wed, 13 Feb 2008 01:45:57 -0800
_______________________________________________
""Friedman, Sheldon"" <sfried
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by ""Friedman, Sheldon"" <sfried »

Posted by ""Friedman, Sheldon"" <sfriedman@sjc.edu>

I have been reading the posts on the issue of models, etc. What has always
been on my mind, as Rich Dudley implies, is the lack of model availability.
I have been to conferences where models are discussed, read articles where
outputs are shown, and written to authors expressing a desire to see the
model, but in the end have not seen any.

When I first started my education, one of the things that impressed me most
about system dynamics was the idea of transparency. Yes, the texts have the
classical models, but for the most part it seems that transparency has
disappeared. Maybe, if novices could see examples of excellent models, it
might help the problem of knowing what a good model should be.

Shelly Friedman
Posted by ""Friedman, Sheldon"" <sfriedman@sjc.edu>
posting date Wed, 13 Feb 2008 09:32:06 -0500
_______________________________________________
Bob Eberlein <bob@vensim.com>
Member
Posts: 26
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Bob Eberlein <bob@vensim.com> »

Posted by Bob Eberlein <bob@vensim.com>

Richard's query puts standards in a very interesting light. Both on the list
and off there have been lots of discussions of model quality and standards.
However, what Richard is suggesting is a very basic set of guidelines that
are meant to ensure the ability to inspect, not guarantee results.

That seems like a very fruitful way to approach the issue. I think that
Richard has set the bar too high though, and rather than adding to his list
my inclination would be to remove from it, and also to distinguish
quantitative and qualitative models. Focusing only on the quantitative, my
list would be:

0. A clear description of the issue or issues the model addresses.
1. A clear description of model structure, including identification of key
stocks, flows and feedback (all of them for small models).
2. An available working model that allows anyone to simulate the model and
inspect equations with the meaning and units of measure for each variable
clear (either from description or simplicity of presentation).
3. A clear statement of why the model is applicable to the issues.
4. A clear and correct recipe for reproducing any model results
presented.

All of these can be accomplished by writing an understandable paper, and making
the model or models available. The burden is actually somewhat lower than that
though, since a few paragraphs can cover 0, 1, 3 and 4.

I would also note that I have said "clear" a number of times. Only the recipe
needs to be correct. Everything else may be clearly wrong, as long as it is
clear.

Bob Eberlein
Posted by Bob Eberlein <bob@vensim.com>
posting date Thu, 14 Feb 2008 10:41:31 -0500
_______________________________________________
<martin@utalca.cl>
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by <martin@utalca.cl> »

Posted by <martin@utalca.cl>

Hi Richard,

I find your "minimal list" interesting. If such a list of "deliverables"
existed, it would help students, and in some courses we might use a kind of
form that guides them through the phases.

In the ""peer review dialog"" session at the conferences, we've discussed the
existence of such a list since the first time the session was held.

I'd agree to points 1 - 5 without discussion.

Points 6 and 7 refer to validity tests (beyond structural correspondence,
already covered by your earlier points). I'd like to ask whether the results
of validity tests should not also be documented (just like the equations);
this could be a short section in the text and/or an appendix. Wouldn't this
help readers make up their minds about the model (without having to test it
themselves)? Maybe this is what you meant, and I just misunderstood.

Since it has always been stressed that we should model a problem, not a system,
maybe the model documentation should start with a section that defines the
problem and the purpose of the modeling?

Best,

Martin Schaffernicht
Posted by martin@utalca.cl
posting date Thu, 14 Feb 2008 12:13:49 -0300
_______________________________________________
Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr> »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi everybody

As Sheldon Friedman noticed, there is a lack of transparency in SD.
But to what point is it possible to avoid it?
As the tools become more and more powerful, the models become more and more
obscure, and people probably rely too much on the tools' power rather than on
their own common sense and thinking, which may be easier to communicate.

He also considers that model availability is scarce.

I have tried to verify this availability by studying the papers published at
the last Boston conference that have models attached.

Starting from the beginning of the conference's online proceedings, I have
been looking for papers that include a model, and evaluated the possibility
of examining them, taking into consideration the complexity of the model (the
more complex it is, the less I will be interested), the similarity of the
subject to my own problems, and the software (I know Vensim well, Powersim a
very little, having evaluated it in the past, and iThink not at all).

I think that I cannot be very far from the average person interested in the
field who hopes to get useful insights from studying models built by others,
even if my criteria may be very personal.

Models are generally attached to the supporting paper when one is mentioned.
I will then have an idea of the models that I can study, sorting them into
three categories -- high, middle and low interest -- taking into
consideration all the factors (interest of the subject, complexity, type of
software and quality of the documentation).

As Martin mentioned, a clear description of the problem and the purpose of
the modelling is mandatory. Otherwise, there is no way to relate the model to
the real-life situation.

The first candidate is: Akkermans, Henk, Towards Effective Quality
Management Policies for Production Ramp-ups in Supply Chains.
The problem and the purpose are well explained and, for me, relatively
interesting (though I have no supply chain to care about), but the software
is iThink and the model relatively complex.
The model therefore goes into the low-interest basket.

The second candidate is Arndt, Holger, Using System Dynamics-based Learning
Environments to Enhance System Thinking.
The subject is not of considerable interest to me, but that can be
compensated for if the purpose is clear and the model is interesting to
study. Unfortunately, I have not been able to open any of the 5 relatively
small attached models with the Powersim reader, because of special formats!
With no model available, it is clearly of no interest at all.

Third candidate: BenDor, Todd, Modeling the Wetland Mitigation Process: A
New Dynamic Vision of No Net Loss Policy.
The attached file could not be opened; it is in a .stm format that Windows
does not recognize. I am not very interested in ecological models, mainly
because I am not a specialist in the field. I put the paper into the
no-interest basket.

4th candidate: Braun, Bill, The Dynamics of the Eroding Goals Archetype.
The problem is not a practical one, but it is intellectually interesting, may
also have indirect practical applications, and can be easily understood,
which makes it a good candidate if the other conditions are met.
I tried to open the attached model with the Studio 7 player and it says
'cannot open it in presentation mode'. No-interest basket. Too bad.

5th candidate: Bueno, Newton, A macroeconomic systemic model for the
Brazilian economy.
I am lucky: it is in Vensim and relatively small.
It is an economic model, which is not my speciality, but at least I can
understand it, and may learn from the way the model is built and analyzed.
I can of course open the model, but having read the documentation, I realize
that I do not have enough background knowledge, and particularly no
experience of the subject, which makes the model not interesting for me.

6th candidate: Chichakly, Karim, Modeling Agile Development: When is it
Effective?
The subject is project modelling, a subject I am not interested in; the model
is of medium complexity, the documentation is long, and the model is in
iThink. I could at least open the model with a no-save release of iThink.
No-interest basket.

7th model: Chichakly, Karim, SMILE and XMILE: A Common Language and
Interchange Format for System Dynamics.
The subject is certainly interesting, especially in these circumstances:
having to deal only with Vensim models would highly interest me.
It is however a particular case, dealing with translation into an XML format,
and has nothing to do directly with model building. No-interest basket,
although it is an interesting question.

8th model: Chiong Meza, Catherine, with G.P.J. Dijkema and Cornelia van
Daalen, Scenario Analysis using System Dynamics Modelling: The case of
Production Portfolio Change in the Dutch Paper and Board Industry.
The subject is rather macroeconomic, which is not my speciality, and
relatively complex. The supporting paper, in PDF, shows the structure of the
model and its equations, so one would have to rewrite the model completely,
which is impossible with the Studio 7 reader.
No-interest basket.

9th model: Chomiakow, Daniel, A Generic Pattern for Modeling Manufacturing
Companies.
The subject is understandable to me; the model is in Vensim and relatively
small.
The only drawback is that it does not rely on a real case, and is therefore
more an academic study -- rather theoretical, to my mind. But it can at least
be studied, which makes it a middle-interest case.

10th model: Comaschi, Carlo, with Vincenzo Di Giulio and Eleonora Sormani,
Natural gas demand and supply in Italy.
The problem is macroeconomic, and the model, in iThink format, is rather
complex; that puts it in the no-interest basket.

11th model: Cronrath, Eva-Maria, with Alexander Zock, Forecasting the
Diffusion of Innovations by Analogies: Examples of the Mobile
Telecommunication Market.
The subject seems interesting, although not based on a concrete example; the
model is relatively simple and in Vensim format. One can then put it in the
middle-interest basket.

12th model: Dattee, Brice, with David FitzPatrick, Henry Weil and Steffen
Bayer, The Dynamics of Technological Substitutions.
The model is very big, is in Powersim, and I could not open it with the
Studio 7 player.
The model is probably very theoretical too. No-interest basket.

13th model: Deegan, Michael, Exploring U.S. Flood Mitigation Policies: A
Feedback View of System Behavior.
The subject is outside my field and relatively complex; the model is in
Vensim, but it is not attached. There are only PDF files that describe the
structure. No-interest basket.

14th model: Du, Yong, Incorporating System Dynamics Modeling into
Goal-oriented Adaptive Requirements Engineering.
A very theoretical paper, with no model attached. No-interest basket.

15th model: Dudley, Richard, The Equity Supply Chain: Is it the Cause of So
Few Women in Management and Leadership Positions?
The subject is of middle interest (for me); it is in Vensim, but the
supporting paper is a PowerPoint file and there is no model attached. One
would then have to recreate the model, which is possible only with complete
information on the equations, the structure of the model being supplied.
That is of no real interest if one wants to make a full, deep study of the
question. And I think that a model needs to be studied fully, down to its
last details, if one wants to get some return from studying it. No-interest
basket. It is too bad that a model that is well presented, especially with a
good PowerPoint presentation, is not attached.

16th model: Duggan, Jim, A Simulator for Continuous Agent-Based Modelling.
The paper is about the relation between system dynamics and agent-based
modelling. There are some Vensim models in the paper, but they are not
attached, and there is an attached Excel file whose utility I could not
understand. The paper is a bit outside the scope of this e-mail, and while I
put it in the no-interest basket from the point of view taken here, it could
be interesting for me to study if I take up agent-based modelling.

17th model: Duran Encalada, Jorge, with Alberto Paucar-Caceres,
Sustainability Model for the Valsequillo Lake in Puebla, Mexico: Combining
System Dynamics and Sustainable Urban Development.
The subject is outside my field; the model is in iThink and the supporting
paper in Word format, with no model attached. No-interest basket.

18th model: Dutt, Varun, with Cleotilde Gonzalez, Slope of Inflow Impacts
Dynamic Decision Making.
This is a more theoretical psychological study. The model is simple, in
Vensim, and attached.
I cannot say the subject interests me very much, so I will put the model in
the low-interest basket.

I have reviewed a quarter of the papers presented, and there are two models
of low interest, two of middle interest, and none of high interest.

It would be interesting to carry on the study, to see if I can find one of
high interest.

Of course I may still be interested in some papers, but without the
possibility of studying a complete model, the study will be rather
superficial.

What insight can one get from such a review?
What are the reasons for no interest?

The first is overly academic subjects. One often does not see any client,
nor any stake, nor any measurement of that stake.

Second, when one does see a client, it is often the subject that is not well
enough known.

The third is the unavailability of models.

The fourth is the overall complexity of the models.

The documentation is generally correct.

What can be improved?
The first two reasons can hardly be suppressed.

One can attach models, and of course if they could be written in a standard
format like SMILE or XMILE, or in the three most-used formats, it would help
a lot.

Another thing has to do with the complexity of the models.
I think that modellers should make an effort to present very simplified
problems and models.
I am sure that many more people would read them, and for people showing
further interest, it would always be possible to send a more elaborate work.
Unfortunately, making models simpler is hard.

To sum up, I do not know if I can find a model in the last Boston conference
that is really worth studying deeply, but I will carry on my study when I
have some time.

So Sheldon is approximately right when he says that there are no models
available for people interested in learning how to build good models.
Regards to everybody.
Jean-Jacques Laublé Eurli Allocar
Strasbourg France
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Fri, 15 Feb 2008 19:22:37 +0100
_______________________________________________
Jack Harich <register@thwink.org>
Member
Posts: 39
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Jack Harich <register@thwink.org> »

Posted by Jack Harich <register@thwink.org>

Jean-Jacques,

Your list of models is a worthy sample of what's available to help the
aspiring modeler. It shows how a lack of standards has hindered the field's
development. Such differences between models and their supporting materials
create a Tower of Babel. This reduces the efficiency and effectiveness of
communication among modelers, and makes learning the craft more difficult
than it need be.

I'd like to humbly add my own models to this list. They are not in a
peer-reviewed journal, but are in the form of a 390-page, as yet unpublished
book at:

http://www.thwink.org/sustain/manuscrip ... tivism.htm

The web page includes a link to download the models described in the
"Analytical Activism" book. There are 8 different models. I'm not one for
building one giant model to solve a problem. Smaller models are so much
easier to understand and use to support a complex argument. The models are
in Vensim and the manuscript is in PDF or DOC.

Those trying to improve their modeling skills or learn more about ways to
model social systems may be especially interested in the 4 Dueling Loops
models. These build up a medium-sized model in 4 easy-to-follow (I hope)
steps.

A meme is a mental belief a person has learned from others. See the last
chapter of Dawkins' "The Selfish Gene" (1976), or this page:
http://www.thwink.org/sustain/glossary/Meme.htm

Memetic infection occurs when a meme passes from a transfer mechanism, such as conversation or a
book, to a mind the meme was not in before.
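
As a concrete (if simplified) picture of the idea, here is a minimal sketch
of memetic infection as stocks and flows -- plain Python with invented
parameters, not one of the Analytical Activism models themselves. The
structure mirrors a simple epidemic model: infection depends on contacts
between infected and uninfected minds.

dt = 0.25           # time step (months)
steps = 480
uninfected = 990.0  # minds that do not yet hold the meme
infected = 10.0     # minds that hold the meme
infectivity = 0.3   # infections per contact (invented value)
contact_rate = 2.0  # contacts per person per month (invented value)
forgetting = 0.02   # fraction of infected minds dropping the meme per month

for _ in range(steps):
    total = uninfected + infected
    infection_flow = contact_rate * infectivity * infected * (uninfected / total)
    forgetting_flow = forgetting * infected
    uninfected += dt * (forgetting_flow - infection_flow)
    infected += dt * (infection_flow - forgetting_flow)

print(f"infected minds after {steps * dt:.0f} months: {infected:.0f}")

The characteristic S-shaped adoption curve falls out of this structure; the
hard empirical problem, as discussed below, is measuring infectivity.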

Most of these models use memetic infection to express social forces. Our
field needs something like this so that we can more accurately and correctly
model the social problems we face. In fact, just yesterday I was reading
Dement's "The Promise of Sleep." Page 30 touches on how the field of sleep
disorders was able to turn itself from an art into a science. The key was
the discovery of a technique to record and graph brain waves with EEG
machines. Before then, the field depended on intuition to determine what the
brain was doing during wakefulness and sleep. Once the use of EEG began, the
field could measure its central phenomenon.

Applying SD to social problems is still an art. It needs to become a science.

""Science is largely quantification"" says the sleep book on page 38. So what is it in our field that
we need to quantify to turn it from an art into a science? I suspect it is social forces. I've
attempted to take a few crude steps here by modeling memes as they travel from one mind to another.
But I've not experimentally measured meme strength. To do that requires a breakthrough, such as the
one the sleep book describes on page 58. With the Multiple Sleep Latency Test, sleep researchers were
now able to measure the size of sleep debt, or the strength of the need for sleep. If we can develop
a way to measure the strength of particular memes, and the potency of memetic infectivity, then we
too may be able to take a gigantic leap forward.

For example, what if Forrester's urban decay model had used memes to model
relative attractiveness in a more fine-grained manner? The model could have
shown the exact route certain memes traveled to cause different levels of
attractiveness. This might open up additional insights into poor and good
solutions.

As another example, what if the CDC diabetes epidemic model that Jack Homer
and others worked on used memes to model the effects of cultural pressure,
advertising pressure, educational campaigns, etc., on caloric intake and
exercise? That would open up a vast frontier of new insights into problem
causes and solutions.

Returning to the models in the download: one model in particular, The
Memetic Evolution of Solutions to Difficult Problems, uses memes heavily to
model exactly what our field is attempting to do at the strategic level. The
stocks are Hypotheses to Test, Experiments Completed, Hypotheses Accepted,
Unsound Selections, Unsound Solution Components, Sound Selections, and Sound
Solution Components. The book and model show how two different
problem-solving processes, Classic and Analytical Activism, result in two
very different solution success outcomes, and why.

Hope this helps,

Jack
Posted by Jack Harich <register@thwink.org>
posting date Sat, 16 Feb 2008 10:04:50 -0500
_______________________________________________
Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr> »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi everybody.

The last two threads have dealt with model quality.
I do not think that model quality is the priority for SD.
I will take the first example I mentioned in my previous post to illustrate
the point.
I hope the author will pardon me, but it is neither my fault nor his that
his name starts with an A.

The subject was: Towards Effective Quality Management Policies for
Production Ramp-ups in Supply Chains.

The subject is how to manage the quality of products whose demand grows very
fast and can fall just as quickly.

The problem is easy to understand.
But imagine that you are the manager of the factory that produces the
product, or the software, or anything else. A consultant comes to see you,
shows you the model, and proposes to apply it to your factory.

First, the model is rather complicated and there is no high-level diagram
that the manager can easily understand. But suppose that you can produce
such a diagram.

I studied the model very quickly and did not see any reference to how the
model could increase the bottom line, whether in the current year or in the
following years.

Yet that is the first concern of the manager, even if people in his factory
may find the model highly interesting. If I were the manager of the factory
and had no special knowledge of SD, I would not even study the proposition
further, not seeing clearly what end utility the model would bring -- and the
end utility is the bottom line. The model might be of high quality; that
would not change my decision.

In most SD examples, whether at conferences, in journals or in textbooks,
the bottom line is almost never considered.
Regards.
Jean-Jacques Laublé Eurli Allocar
Strasbourg France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Sun, 17 Feb 2008 18:48:56 +0100
_______________________________________________
Tom Forest <tforest@prometheal.com>
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Tom Forest <tforest@prometheal.com> »

Posted by Tom Forest <tforest@prometheal.com>

Where are our policy levers for this problem? Practically implementing any
of the recommendations throughout the field would be impossible outside of
the SD conference and journal. The only current policy lever is the
conference review process. The current "submission" text at

http://www.systemdynamics.org/conferenc ... ubmissions

is undemanding.

There has been other recent discussion about "stages" of papers, which seems
to be more of a help to the audience than to the presenter. I would rather
see the submission process oriented to the modelers. For me, the conditions
sine qua non of system dynamics are reference modes and dynamic hypotheses.
Without them there is no model, or at best a model of a system. Of all the
papers I've reviewed over the years, only a slim minority meet these minimal
requirements, including some that ended up in plenary sessions. Conference
submissions are not so numerous as to bear excluding papers that do not meet
them, or even relegating them to poster sessions. Given the conflicting
pressures to maximize submissions and conference attendance, the best I could
hope for would be adding a statement to the submission guidelines saying that
all plenary sessions and journal submissions must include reference modes and
dynamic hypotheses. I am not on the policy council, but if I were, I would so
move. If that succeeded, then as a second stage I might require the inclusion
of policy levers and policy analysis in plenary papers. Let's walk before we
run.

Tom
Posted by Tom Forest <tforest@prometheal.com>
posting date Mon, 18 Feb 2008 09:49:01 -0800
_______________________________________________
Bill Braun <bbraun@hlthsys.com>
Member
Posts: 43
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Bill Braun <bbraun@hlthsys.com> »

Posted by Bill Braun <bbraun@hlthsys.com>

Would an open source authorship approach to this be feasible? It offers the
possibility of moving beyond the limitations that Tom Forest cites. I am not
knowledgeable about the technology used for such an initiative. If there is
energy around the idea, I'll take on the preliminary research.

Bill Braun
Posted by Bill Braun <bbraun@hlthsys.com>
posting date Tue, 19 Feb 2008 07:59:45 -0500
_______________________________________________
Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr> »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi Tom

I agree with you that it is easy to criticize and difficult to find
solutions. Particularly in the context of the SD conference, where the
objective of the papers seems to be more to be published than to be read.

Among your minimum recommendations are reference modes and dynamic
hypotheses. For me, the reference modes and dynamic hypotheses do not
represent the problem completely, and the dynamic hypotheses are already
part of the solution.

I would simply prefer that the problem be completely expressed in plain
English as stated by the client. (Most of the time there is no identifiable
client, which is understandable in research models.) Of course a client is
needed, and if there is none, it is more difficult to have an appreciation
of the model, especially from the client.

The advantage of having an English description is that you can compare it to
the different models, whether qualitative or quantitative, built during the
modelling process, so as to verify that the client's will is respected.

I like to understand fully what is modelled and what the expected profit is,
and to know whether, a sufficient time after the model was completed, the
policies were implemented and the objective reached, as reported by the
client.

If these first recommendations were respected I would be personally very happy.

Regards.

Jean-Jacques Laublé Eurli, Allocar

Strasbourg France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Tue, 19 Feb 2008 15:32:39 +0100
_______________________________________________
""Jack Homer"" <jhomer@comcas
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by ""Jack Homer"" <jhomer@comcas »

Posted by ""Jack Homer"" <jhomer@comcast.net>

Tom Forest's idea for minimum requirements for conference papers is interesting
and could offer some needed guidance for reviewers. But, I think we'd have to
exempt purely methodological (non-applied) papers from those requirements. For
the applied papers, there are two types: policy analysis, and theory development.
Policy analysis certainly should include a discussion of policy levers, but
theory development need not. Aside from the inclusion of policy levers in policy
analysis, I think the most we could require for the applied papers is, as Tom
suggests (with slight word-smithing on my part), reference mode (behavior-over-
time graph) + causal diagram (CLD or stock-flow or hybrid). I would not include
a requirement for presenting simulation model output, because that would exclude
qualitative analyses, and I don't think we're prepared to do that. Whatever rules
we adopt should apply equally to posters, parallel, and plenary papers; there
should be no distinction, if we want something practical.

- Jack Homer
Posted by ""Jack Homer"" <jhomer@comcast.net>
posting date Tue, 19 Feb 2008 09:24:16 -0500
_______________________________________________
<richard.dudley@attglobal.net>
Member
Posts: 26
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by <richard.dudley@attglobal.net> »

Posted by <richard.dudley@attglobal.net>

I think that my ""minimum modeling standards"" are less rigorous than most
people imagine. What I'm thinking of here is the minimum needed to make a
model understandable to others.

This does not mean the model is good.
It does not mean the model will necessarily contain correct logic.

What it would mean is that the model should be understandable to any
interested party with a reasonable SD background and a reasonable
understanding of the problem at hand. This is required for the model to be
used as a basis of discussion among interested parties, which would allow it
to be reviewed by others, and improved.

This idea has a parallel embedded within the best practices for development
of good models. At some point in the ideal model-building process a working
model is created in order to allow feedback from domain experts and
stakeholders. What are the requirements for this first draft model?

(e.g., these might fall within the framework of the "best practices for
model formulation" as reported by Martinez and Richardson 2003... An Expert
View of the System Dynamics Modeling Process: Concurrences and Divergences
Searching for Best Practices in System Dynamics Modeling. System Dynamics
Society, Palermo).

These interim models would probably need to meet the same minimum model
quality that I am talking about. All models, including refinements of the
same model, should still at least meet the same minimum standards, plus
others.

To me this seems perfectly logical. At what point do you hand back the model
your student or colleague has given you and say, "Sorry, but this isn't even
good enough for me to look at it! Please make sure it includes the following
things if you want me to look at it again."

This is the starting point for the model building aspect of developing a
good model.

Why am I interested in this? Because I see models (in "academic" journals)
that do not meet what I believe to be basic standards. The model, and
findings based on it, cannot be judged. The model is either not available,
or too obscure to be understood. These MAY be good models in other senses,
and the outputs and their implications may be reasonable, but the reader
just can't tell.

Unfortunately a fair proportion of these models are built with SD software
and appear to be SD models... Thus the possible concern about marginal
models marginalizing modeling.

It would be nice to be able to say to a journal editor: "The authors should
note that Smith et al. (2008) say that even a first draft model must meet
certain requirements to allow proper review. The model the authors present
does not meet those standards. I suggest that the authors consider those
minimum standards and resubmit the paper."

OK. Enough for now!


Thanks for all the comments on my question.

Richard
Posted by <richard.dudley@attglobal.net>
posting date Wed, 20 Feb 2008 10:54:03 +0700
_______________________________________________
Bob Eberlein <bob@vensim.com>
Member
Posts: 26
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Bob Eberlein <bob@vensim.com> »

Posted by Bob Eberlein <bob@vensim.com>

Hi Everyone,

Tom Forest wrote:
> Where are our policy levers for this problem?

I am not sure I see this as a problem, so much as an opportunity. And where I
think it directly applies is in the creation of a library, if I may use the
term loosely, of models addressing different problems.

This is something that I, and a number of others, have been interested in
putting together for some time. One of the stumbling blocks has been a
framework for acceptability. One possibility was a thorough review by people
experienced in the field; the other was simply to let everything in.

What a minimal framework provides is a fairly mechanical set of criteria for
determining whether a model should be included, and no judgment at all about
whether it is worth spending time on. The latter is clearly important, but
can come from a more distributed review process. Such an approach is hardly
perfect, but it seems to me it might be practical, and generally work pretty
well.

When we get to the separate issue of what should be accepted for
presentation and publication at conferences and in journals, I take a more
inclusive view than the one Tom expressed. We need to judge work on merit,
not methodology. If someone is looking at an important problem that has
been, or could be, addressed using System Dynamics, then the work is
relevant to the field. If such a work exists in isolation, and does not make
use of any System Dynamics methodology, then perhaps it is best directed
elsewhere. But if it parallels, or better still challenges, work done using
a System Dynamics approach, that seems important to me.

That is not to say that Tom Forest's point should be ignored. There is lots
of work that would be a lot better if people just applied some of (a bit of,
or even a gesture toward) the canonical System Dynamics approach. But I
think we should encourage people to do that, not exclude them for failing to
do so (as long as they have interesting results).

Bob Eberlein
Posted by Bob Eberlein <bob@vensim.com>
posting date Tue, 19 Feb 2008 08:03:21 -0500
_______________________________________________
Bill Harris <bill_harris@facilitatedsystems.com>
Senior Member
Posts: 51
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Bill Harris <bill_harris@facilitatedsystems.com> »

Posted by Bill Harris <bill_harris@facilitatedsystems.com>

""SDMAIL Bill Braun"" <bbraun@hlthsys.com> writes:

> > Would an open source authorship approach to this be feasible? It offers the
> > possibility of moving beyond the limitations that Tom Forest cites. I am not
> > knowledgeable about the technology used for such an initiative. If there is
> > energy around the idea, I'll take on the preliminary research.

Do you mean open source with respect to the modeling software, the
model, or the paper? (I'm not clear on the antecedent of "this.") All
seem quite doable /if/ the author so desires. Creative Commons
(http://creativecommons.org/) is one popular approach; the Free Software
Foundation (http://www.fsf.org/) has another, much older, and perhaps
more rigorous approach.

Anyone can license what they produce under one of their licenses or
under any of a number of others. http://www.fsf.org/licensing/licenses/
has a rather comprehensive list of such licenses and comments about the
extent of the freedom each offers. The OSS simulator I use is licensed
under the GNU GPL 2.0 or later. I could imagine licensing a model under
the GPL (or not). I could imagine licensing a paper under the GNU FDL
or perhaps one of the Creative Commons licenses (again, or not).

There are challenges. The journal in which you publish may want the use of a
copyright. Your client or organization (or perhaps you) may have proprietary
ownership or control concerns. If you use a GPL'd, compiling simulator, then
the compiled model is automatically GPL'd, and so you can't distribute a
compiled version of a model without also making the entire source available
and granting those same rights to anyone who gets a copy, which can be a bit
of a barrier. At a minimum, and painted in broad brush strokes, I can't
legally distribute a compiled model I build in my OSS tool without also
shipping out the source to the model, the simulator, and the GNU Scientific
Library (or otherwise complying with the requirement to make the source
available), which inflates the package size quite dramatically (and there
may be more GPL'd source linked in that I'd have to identify). I can
distribute the source code of a GPL'd model with no problems; you'd just
have to build and install the software in order to try it out.

""SDMAIL Bob Eberlein"" <bob@vensim.com> writes:
> > I am not sure I see this as a problem, so much as an opportunity. And
> > where I think it directly applies is in the creation of a library, if
> > I may use the term loosely, of models addressing different problems.
> >
> > This is something that I, and a number of others, have been interested
> > in putting together for some time. One of the stumbling blocks has
> > been a framework for acceptability. One possibility was a thorough
> > review by people experienced in the field, the other was simply to let
> > everything in.

Bob,

There's something attractive about that. If I may, let me drift a bit
with the idea.

When I was a practicing engineer, I read the standard (typically IEEE)
journals. I also read the trade press -- EDN, Electronic Design, and
others. Those didn't provide peer-reviewed articles, but they did often
provide newer, shorter, and more practical advice. Some of it, to be
sure, was funded by vendors eager to sell the latest components or
instruments. Some were high-quality articles on design techniques (and
some were of lesser quality). Most trade magazines also had short
"design ideas" sections, typically full of partial-page articles
showing what someone saw as a novel yet simple circuit design to address
some goal. I think the author would typically get paid $25 to $50 if
their article were published, and perhaps another $100 if it were
selected as best of the issue or best of the year.

Even without peer review and with only editorial vetting, you eventually
got to know who was likely to publish better ideas because you often
vetted the articles yourself by trying out the design ideas: those that
didn't work well or where the math didn't support the ideas tended to
fall by the wayside.

EDN and the like were supported solely by advertising, so engineers got
them for free. Advertisers were willing to sign up, for they could
reasonably expect to see engineers design good components into circuits
and systems, and that would hopefully bring an ongoing revenue stream
that more than made up for the cost of the advertising. Since our
models don't get implemented in hardware which gets replicated for a
large number of industrial or consumer customers, it's unlikely we'll
find as many advertisers willing to support such an active trade press.

I think it could be helpful to have such a forum for system dynamicists.
To a degree, this mailing list does that and does it well. There have
been a number of discussions here about some of the finer points of SD
practice, discussions well worth saving.

Interestingly, Electronics and Electronic Design used to publish
compendiums of their design ideas from time to time. Those sound a bit
like your idea for a model library.

There are a couple of things that attract me about the design ideas
approach:

- It evolves. It doesn't start life as the set of things we see as
important today; it starts as a free-flowing stream of things that
various people find useful.

- It's not dependent upon one format or vendor. People would mostly
submit schematics, but there were some mathematical equations and some
short program listings.

- It didn't feel terribly controlled. You submitted an article, and the
editors wrote back to say "Thank you, you're accepted," or "Thank you,
but you're not accepted." If you were rejected, you could always
submit it to another magazine, for there were quite a few.

- It didn't feel as if you were giving up something proprietary. Things
that were that short were rarely descriptions of entire systems. It
was more likely that they'd describe a nifty, cheap, one-IC SW
receiver or a small amplifier -- something you could definitely
consider reusing as part of something else, but nothing you'd likely
use on its own, at least not for profit.

- It was eclectic. What one editorial panel didn't like, another might
like, so you'd get a variety of ideas that didn't all match. That was
perhaps more educational than only seeing great designs (although the
process usually did seem to generate decent designs).

- It does serve as a useful reference. I would clip interesting
articles from magazines, and I may still have a couple of the
compendiums stored in a box somewhere.

I could see us sharing more ideas like that here (and I see lots of
challenges). If we did, I don't think it matters that we use a variety
of different tools. Most tools seem to have text output that looks
enough like Vensim or like Dynamo so that people can understand the
equations. I'm perhaps the oddball when I'm using my OSS simulator, but
I bet most of you could follow that code with no problem (you can find
samples on my Web site and in my blog).

What I find hard is figuring out the incentives that would cause a
significant number of us to share ideas in such a fashion. In the
engineering case, it was a bit of fame and a bit of cash for a bunch of
engineers who likely didn't know each other. We're tending to get to
know each other here, so we can get fame by pressing "Send." In our
case, I'm not sure what the added incentive would be.

The field was also more diverse: one engineer might have insights into
using one type of device, and another might have insights into using
another. We only use stocks and flows :-) -- we don't specialize very
much (at that level). Are there enough interesting and useful ways to
put 1-4 stocks together (at some size, it turns into a regular paper)
that haven't been covered in the Molecules, in BD or any of the other
standard texts, or in Barry's Intro to Systems Thinking to keep this
flowing? Perhaps there are. I guess an experiment is one way to find
out.

Thoughts? Did I miss the boat entirely on your idea, Bob?

""SDMAIL Bob Eberlein"" <bob@vensim.com> writes:

> > That is not to say that Tom Forest's point should be ignored. There is
> > lots of work that would be a lot better if people just applied some of
> > (a bit of or even a gesture toward) the canonical System Dynamics
> > approach. But I think we should encourage people to do that, not
> > exclude them for failing to do so (as long as the have interesting
> > results).

Hear! Hear!

One of the things I sometimes hear when people express concerns about
the quality of work done in a field (SD or something else) is a desire
to exclude poor work, to prune it out of the standard places, to
regulate or certify it out of existence.

For some reason, I've never thought such an approach to be as useful as
one that encourages better work rather than excises poorer work.
Excluding poor work seems to involve notions of the imposition of power
and of who defines what is good. It might blow up, as people rebel
against that imposition. It might succeed at first and then fail later,
as new and useful developments may get quelled because they don't fit
the standard rubric.



Bill
--
Bill Harris
Facilitated Systems
Posted by Bill Harris <bill_harris@facilitatedsystems.com>
posting date Wed, 20 Feb 2008 19:41:09 -0800
_______________________________________________
Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr> »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi everybody

Concerning the possibility of reading models from the conference: even when
it is possible to run a model with the Studio reader, the reader from
Powersim does not allow one to look at the structure of the model, nor at
the equations. It just runs the model, and one is obliged to accept the
results without any possibility of understanding them.
iThink is better in that respect: the downloadable no-save version permits
one to look at the structure and the equations.
I do not know if the Vensim reader permits looking at the structure and the
equations, as I use a full Vensim version.
This means that people who publish a model in Powersim should publish the
structure of the model and a list of the equations with the units, to allow
a full understanding of the model.

About the minimum acceptable conditions for a model: I have again browsed
the papers, and verified whether all the models available had units and
could pass units checking.
I did not verify all the papers, not having the time.

To my surprise, many of them have some minor errors (why not correct a minor
error?), some have more errors, and some have no units at all!!!
Only about 50% passed units checking without errors.

Before learning to run, one should first learn to creep.
Dimensional consistency is one of the basic checks accepted by all authors;
it is mandatory for verifying conceptual consistency, and it can already be
applied at the qualitative CLD step of the modelling process (without, of
course, the help of the software).

Dimensional consistency also helps to avoid bugs in equations, and helps a
lot the external reader who wants to understand how the model works.
Some models also have warnings about the use of non-dimensionless units in
lookup tables.
That is not strictly a unit error, although lookup tables should be fed
ratios that produce a dimensionless variable, as advised by many authors.
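
To show what units checking amounts to outside any SD package, here is a
minimal sketch using the Python pint library (my choice of tool for
illustration; tools like Vensim do the same job internally):

import pint

ureg = pint.UnitRegistry()
ureg.define("widget = [widget]")    # a custom unit for the product
ureg.define("dollar = [currency]")

inventory = 100 * ureg.widget               # a stock: widgets
production = 50 * ureg.widget / ureg.month  # a flow: widgets/month
dt = 0.25 * ureg.month

inventory = inventory + production * dt     # dimensionally consistent

price = 3 * ureg.dollar / ureg.widget
try:
    nonsense = inventory + price            # widgets + dollars/widget
except pint.DimensionalityError as err:
    print("units error caught:", err)

Exactly this kind of mechanical check is what about half of the attached
models fail.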

I did not try to make any mass-conservation verification, but it would be
interesting if somebody could find the time to do it.

There are a lot of students, often doing not very useful work.
They could perfectly well verify the quality of the papers published at the
conference.

Just a question: what do the people who decide to accept a paper for
publication actually verify?

For example, for all my models, I verify in a separate view that all the
masses that can be verified (physical or not) are balanced.
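
In code form, such a check amounts to verifying at every step that what is
in the stocks plus what has flowed out equals what was there initially plus
what has flowed in. A minimal sketch, on an invented two-stock chain rather
than any model from the conference:

dt = 0.25
raw, finished = 200.0, 0.0       # two stocks in a material chain
total_in, total_out = 0.0, 0.0   # cumulative boundary flows
initial_mass = raw + finished

for _ in range(200):
    inflow = 10.0             # material entering the system
    processing = 0.2 * raw    # internal transfer; must cancel out
    outflow = 0.1 * finished  # material leaving the system
    raw += dt * (inflow - processing)
    finished += dt * (processing - outflow)
    total_in += dt * inflow
    total_out += dt * outflow
    # conservation: stocks + cumulative out == initial + cumulative in
    balance = (raw + finished + total_out) - (initial_mass + total_in)
    assert abs(balance) < 1e-6, f"mass leak of {balance}"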

Of course I do not speak of extreme-conditions tests, which are already
sophisticated compared to these basic verifications.
Regards.
Jean-Jacques Laublé Eurli, Allocar
Strasbourg France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Wed, 20 Feb 2008 16:32:10 +0100
_______________________________________________
Tom Forest <tforest@prometheal.com>
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Tom Forest <tforest@prometheal.com> »

Posted by Tom Forest <tforest@prometheal.com>

Conceptualizing a model of improving modeling standards in the SD journal
or at the conference, my thoughts turn to indicated, desired, and expected
quality standards. Where is anybody's motivation to build higher quality
models? Where are the society's motivations, if any, to improve them, and
where are the policy levers -- where are the modelers' motivations? If the
system as a whole does not promote them, which I contend it does not, why
would they ever improve?

One thing I'm not seeing in this discussion is incentives: what are the
modeler's incentives to meet minimum standards? I propose excluding
substandard papers from plenary sessions, in a clear, straightforward,
transparent way. It could be viewed as a stick, I suppose, but I see it as a
carrot: the opportunity to present a plenary paper. No papers would be
excluded from the conference, however modestly formulated, but better
formulated models could enter the "plenary session lottery."

Conceding Bob Eberlein's point that some valuable methodological papers
might not contain any actual models, it is still a false dichotomy for him
to say "we need to judge work on merit, not methodology." This thread is
about the latter. I believe we as a society need and ought to do both, and
that methodological scrutiny is sadly lacking. If a paper is about a model,
then it ought to have a dynamic hypothesis, reference modes, and policy
analysis or recommendations. If it takes me writing a check for $20 to
everyone who does so, I'll do it. For last year's conference it would have
cost me less than $1,000.

Tom
Posted by Tom Forest <tforest@prometheal.com>
posting date Thu, 21 Feb 2008 10:58:25 -0800
_______________________________________________
Bob Eberlein <bob@vensim.com>
Member
Posts: 26
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Bob Eberlein <bob@vensim.com> »

Posted by Bob Eberlein <bob@vensim.com>

Hi Tom,

Let me clarify my points a little, since they were not well articulated.

First, people are talking about at least two different things in this
thread. The intent of the original post was to ask whether we can find
criteria that will allow us to actually look at quantitative models
developed by others with some intelligence. I am here generalizing from
the original post, which was focused on system dynamics models. I do
not believe such a restriction is necessary. I think, instead, that we
should look at the issue for all models that are numerical in nature.
That includes anything from physics, weather forecasting,
economics, system dynamics, or anywhere that uses numbers to compute --
whether done in SAS, Excel, Matlab, Vensim, Java, or even with a pencil.

My claim was that to review such models they need to have:

0. A clear description of the problem or issues the model addresses.
1. A clear description of model structure.
2. An available working model.
3. A clear statement of why the model should be considered to be
applicable to the issues.
4. A clear and correct recipe for reproducing any model results
presented.

I have shortened a couple of these.

Your list would look something like:

0. A clear description of the problem or issues the model addresses.
1. One or more reference modes.
2. An articulated dynamic hypothesis.
3. A clear description of model structure.
4. An available working model.
5. A clear statement of why the model should be considered to be
applicable to the issues.
6. Specific policy analysis or recommendations.
7. A clear and correct recipe for reproducing any model results
presented.

Does that look about right?


Bob Eberlein
Posted by Bob Eberlein <bob@vensim.com>
posting date Fri, 22 Feb 2008 08:26:24 -0500
_______________________________________________
Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
Senior Member
Posts: 61
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr> »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi everybody.

Tom Forest writes:
<methodological
<scrutiny is sadly lacking. If a paper is about a model, then it ought to have
<a dynamic hypothesis, reference modes, and policy analysis or
<recommendations.

I agree with the first part of the sentence.
But why is it lacking?
To be able to scrutinize papers, there should be a clear, practical SD
model-building method, proven to be practically useful, clearly exposed in
theory, with plenty of examples, and with the possibility for the learner to
practise the method by way of corrected examples.

There are varying methods, more or less well explained.
For instance, I long believed that a dynamic hypothesis was mandatory.
This is how I was taught, but I have never been able to use this method
practically.
Since then I have found another method that does not use dynamic hypotheses.
This new method suits me much better.

As for reference modes, many research models have none, and there are
plenty of concrete cases where reference modes are not available: lack of
data, or completely new situations where the study of the past is not useful
and building reference modes of the future is highly speculative. I do not
pretend that my method will suit everybody, but as long as there is no
scientific demonstration that one method is better than the others in most
cases, I think one must be careful not to be too dogmatic, and should
restrict scrutiny to basic common-sense principles: dimensional consistency,
model availability with equations, a clear exposition of the problem,
purpose and stake, a minimum of policy analysis (as Tom suggests, this being
the main objective of the model), and simplified models if the original is
too complex.

This would already be great progress.
Richard Dudley summarized it very well when he wrote:
<the model should be understandable to any
<interested party with a reasonable SD background and a reasonable
<understanding of the problem at hand
Regards.
Jean-Jacques Laublé Eurli Allocar
Strasbourg France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Fri, 22 Feb 2008 15:42:22 +0100
_______________________________________________
Tom Forest <tforest@prometheal.com>
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

QUERY The Minimum Acceptable Model Standard

Post by Tom Forest <tforest@prometheal.com> »

Posted by Tom Forest <tforest@prometheal.com>

Bob and I are in noisy agreement. Not wanting to reinvent the wheel, I
refer to John Sterman's "Steps of the modeling process," Table 3-1 on
page 86 of Business Dynamics. If we included just this in the
"Submissions" text on the SDS site, which I suggest we do, it
would be a significant step forward. Here is an abridged version of
his five steps:

1. Problem Articulation (including theme selection and reference modes)
2. Formulation of Dynamic Hypotheses
3. Formulation of a Simulation Model
4. Testing
5. Policy Design and Evaluation

On the next page, John says "[A]ll successful modelers follow a
disciplined approach that includes the following activities: (1)
articulating the problem to be addressed, (2) formulating a dynamic
hypothesis or theory about the cause of the problem, (3) formulating
a simulation model to test the dynamic hypothesis, (4) testing the
model until you are satisfied it is suitable for your purpose, (5)
designing and evaluating policies for improvement."

It may also be that we as a society would benefit from strengthening
those capabilities within the community, perhaps with workshops at the
conference focused on one or more of these steps. I recommend adding
these five points as criteria for reviewers, on a 1-5 scale, for papers
that have models, and making them part of the feedback to authors.

In non-business systems, sometimes "Policy Design" is an exercise in
counterfactual understanding, because not all systems of interest have
rate equations that can be changed by people. That said, John's steps
apply to all SD models, whether business or not.

Tom
Posted by Tom Forest <tforest@prometheal.com>
posting date Sat, 23 Feb 2008 09:55:05 -0800
_______________________________________________
Locked