I'm no SD modeler, amateur or otherwise, but two comments seem in order.
First, I found the Wakeland-Sterman exchange interesting and informative. It
is precisely the kind of exchange that I would like to see posted more often.
No pun intended, but it models the kind of behavior I consider professional
and, more important, from which I might learn something.
Second, regarding the substance of the exchange, which dealt with data and
models and their use of data and the goodness of the data and hence the
goodness of the models, and speaking as a user or customer of models and
modeling, it is the process of constructing models that I find useful. That
process, I believe, improves the mental models that govern decisions and, in
so doing, improves those decisions. Would I consider surrendering decision-
making to a model? Perhaps, for some decisions. But, on a large scale, and
for really important decisions? Not yet. But I would make and am making use
of them to better inform decisions that must be made and to educate people
regarding matters they think they already understand and do, but in a less
than robust way. "Robust understanding" is for me the goal and aim of the
modeling efforts in which I am currently involved.
Fred Nickols
Executive Director
Educational Testing Service
MailStop 10-P
Princeton, NJ 08541
(609) 734-5077 Tel
(609) 734-5115 Fax
fnickols@ets.org
Dialog between Wakeland & Sterman
I liked Fred Nickols's comments very much and think that they shed some
light on the source of the current debate on data and models.
People who recommend a course of action and rely in large measure on
model output to support the argument often pay a lot of attention to
data. Fitting a model to data is like putting a character witness on
the stand: "The model never lied to me in the past, so you (and I)
can believe its picture of the future".
In contrast, people who recommend a course of action based on an
understanding of the model's dynamics are in a very different position
-- a position that is much more like the position that Fred takes.
It is likely that their audience will not even see either the model or
model output, but will instead hear a logical argument. What is
on the stand is the validity of the logical argument (not the model);
and logical argument succeeds or fails by whether it is in fact
logical and whether it is supported by the "data" and experience of
the audience. Although a computer model may have provided the
clarity to construct the argument, the argument, once created, stands
or falls on its own merits. The reliability or validity of the model
is beside the point, and so whether the model closely fits the data
is beside the point as well.
Jim Hines
jimhines@interserv.com
Dialog between Wakeland & Sterman
from Jim Hines:
>What is
>on the stand is the validity of the logical argument (not the model);
>and logical argument succeeds or fails by whether it is in fact
>logical and whether it is supported by the "data" and experience of
>the audience. Although, a computer model may have provided the
>clarity to construct the argument; the argument, once created, stands
>or falls on its own merits.
This is a GREAT perspective that I can use in dealing with biomedical
researchers!
It will change my approach from trying to sell the model (and modeling)
to describing model output graphs and leading a logical discussion
*based on the experience of the audience*!!
I can show the model, and give a thumbnail sketch of the modeling process,
but not emphasize it too much. (I do have a strong desire to keep putting
simple stock and flow diagrams in front of people to gradually raise their
consciousness, but this can be minimized.)
If SD model building actually does lead to clarity (and there is no doubt
in my mind), then my arguments will (or should) be more likely to make
sense than if I had not used modeling. But I guess I don't have to
bludgeon my audience with this.
To the extent that the results are useful and insightful, those who are
interested (or at least some of them) will seek out the model, and the
model-building process.
EJ Gallaher, Ph.D.
gallaher@teleport.com
Assoc Prof Physiology/Pharmacology and Behavioral Neuroscience
Oregon Health Sciences University
Portland, OR
Dialog between Wakeland & Sterman
Responding to two comments by Jim Hines.
First:
>>What is on the stand is the validity of the logical argument (not the model);
and logical argument succeeds or fails by whether it is in fact
logical and whether it is supported by the "data" and experience of
the audience.<<
Like data fitting itself, logical argument is necessary but not sufficient to
establish the credibility of a model. Your audience walks into the room already
armed with not only experiences, but also their own logical arguments that
attempt to explain these experiences. They are not looking to you for simply
another logical argument, but one that (1) takes into account all, not just
some, of the available data and experiences, and (2) is demonstrably superior to
other plausible logical arguments. Insights are a dime a dozen, and so are
feedback loops. What is harder to come by are dynamic insights that stand up to
rigorous examination, including the ability to reproduce the full range of
relevant data and experience.
Second:
>>It is telling that the scientific model for the destruction of ozone was
developed before time series data from Antarctica was available. This is one
prominent instance where model-based insight did not depend on a close fit with
time series data.<<
Clearly, many insights come from looking at how a model behaves projected into a
period for which no data exist (generally, though not necessarily the future),
and indeed this is precisely what we hope to achieve through scenario testing.
What is less commonly understood is that valuable insights (for both modeler and
client) also arise through the process of model validation, including fitting
whatever relevant data do exist. Moreover, if you don't first establish
credibility through a process of model validation, the most wonderful
scenario-based insights in the world will (and should) be dismissed as
unsubstantiated, though perhaps interesting, speculation.
For more on this subject of model rigor and validity, I'd like to refer folks
again to my article in the latest System Dynamics Review, issue 12(1).
Jack Homer
70312.2217@CompuServe.COM
Dialog between Wakeland & Sterman
Jack Homer writes: "Like data fitting itself, logical argument is necessary but
not sufficient to establish the credibility of a model"
This inverts the means and the goal. The logical argument (i.e., understanding)
is the goal; the model is just a means. If I had the full argument in all its
richness and subtlety to begin with, I wouldn't need the model at all. All I
want to do is help my clients lay out the logic (as well as the evidence) of
what they must do and why.
(I should add that I do love modeling, and I would probably do it whether or
not it yielded real value. I'm just lucky that it does yield real value in the
form of remarkable insight).
Jim Hines
jimhines@interserv.com
Dialog between Wakeland & Sterman
In reply to: gallaher@teleport.com (Ed Gallaher), Sat, Apr 20, 1996 2:21 PM EST
Your comment:
"It will change my approach from trying to sell the model (and modeling),
instead to a description of model output graphs, and a logical discussion
*based on the experience of the audience* !!"
Seems to connect with the great sales truth!
"Everyone loves to buy, but no one likes to be sold!"
Gene Bellinger
CrbnBlu@aol.com