The Quality of Models
In response to the question, "How do you know when you have quality models?":
For me, all models are used for prediction -- whether the model is used to
predict a point in the future or to suggest a familiar pattern of system
performance. I find models more useful when they are used to explain how
things fit together to perform rather than predict points. That distinction
yields two broad categories: explanatory models and empirical models.
Not to put too fine a point on it, how do I know when I have a good
explanatory model? When the model tells me something about the real world
that I did not previously know. That is, if I can see enough familiar
structure (concreteness) in a model and that model produces performance
patterns that are familiar (timing, phases and amplitude of oscillations, say),
I gain confidence in the model. And again, if the model produces some
surprising behavior that I can relate to experience and data from the past, my
confidence in the model quality increases.
Mr. Moreno talked a bit about the process of, and assumptions used in, model
building. The process should include enough of the physics or concreteness
from the system so that I can compare its output to some historical, recorded
data. Data about populations or inventories or even attitudes from attitude
surveys can be compared to model outcomes. A match doesn't signal quality,
but it can help.
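[Editor's note: that kind of comparison can be made concrete. Below is a
minimal sketch -- an illustration, not a procedure anyone in this thread
endorses, and the series names and numbers are hypothetical -- using the
Theil inequality statistics often cited in the SD literature: they split
the mean-square error between a simulated and a recorded series into bias,
unequal-variance, and unequal-covariation fractions. Error concentrated in
the covariation fraction (a phase shift, say) is often tolerable when the
concern is pattern rather than points; large bias or variance fractions
hint at systematic error.]

    import numpy as np

    def theil_decomposition(simulated, actual):
        """Fractions of mean-square error due to bias, unequal variance,
        and unequal covariation; the three fractions sum to 1."""
        s = np.asarray(simulated, dtype=float)
        a = np.asarray(actual, dtype=float)
        mse = np.mean((s - a) ** 2)
        if mse == 0.0:
            return 0.0, 0.0, 0.0  # perfect match, nothing to decompose
        bias = (s.mean() - a.mean()) ** 2 / mse
        variance = (s.std() - a.std()) ** 2 / mse
        r = np.corrcoef(s, a)[0, 1]
        covariation = 2.0 * (1.0 - r) * s.std() * a.std() / mse
        return bias, variance, covariation

    # Hypothetical numbers: simulated vs. recorded inventory levels.
    sim = [100, 112, 125, 131, 128, 120]
    data = [100, 110, 122, 135, 130, 118]
    um, us, uc = theil_decomposition(sim, data)
    print(f"bias={um:.2f}  variance={us:.2f}  covariation={uc:.2f}")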
On the other hand, if the model contains a lot of soft or judgment parameters
(constants, table functions, or ethereal formulations), I can get a little edgy
about the output. So, I like terms that are unambiguous wherever possible in
the model. (Here I am using the notion of ambiguity as it connotes multiple
interpretations for the same data.) This need for clarity is one of the
benefits of model building. To the extent that the model reduces ambiguity --
about structure or parameter values -- it has already done me a service.
I'm afraid I haven't been much help here. It may be that judging quality in
models is similar to judging quality in art. It takes some talent, some skill,
and some practice. And even with all that, sometimes an expert is fooled.
jt55@delphi.com
Jim Thompson
The Quality of Models
[Host's note: Mr. Moreno is launching what I hope will be an
interesting thread of discussion. It is important
to note that vocabulary is critical to this
discussion since the same words mean different
things to different people within our
community.]
Hi,
The question to pose would be, "How do you know when you have quality
models?"
This is related to the purpose of using all of this SD modeling
stuff (systems thinking, software, board games).
What is it doing for us?
Not too long ago, I thought that the purpose was PREDICTION. But
I was advised that because of chaos theory, that is inappropriate.
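[Editor's note: the chaos objection is easy to see in a toy example; a
minimal sketch follows, using the standard logistic map rather than any
SD model, with purely illustrative numbers.]

    # Two runs whose starting values differ by one part in a million:
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)   # logistic map at r = 4, a standard chaotic system

    a, b = 0.400000, 0.400001
    for step in range(50):
        a, b = logistic(a), logistic(b)
    # The gap is now comparable to the variable's whole range: point
    # prediction has failed, though the pattern of behavior is unchanged.
    print(abs(a - b))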
My present view is that SD tools are useful for uncovering ASSUMPTIONS.
For that use, the value of the tool is not inherent in the model itself
but in the modeling PROCESS. The higher the quality of ASSUMPTIONS
uncovered, the higher the value placed on the modeling EFFORT.
For this reason, I think that placement of ithink models or other software
models to be used by consultants contradicts the purpose of using the
tools. It is very difficult to learn the assumptions behind a particular
model just by PLAYING the simulation game. The model needs to be BUILT.
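[Editor's note: a minimal sketch of why building exposes assumptions while
playing hides them. In even a toy stock-and-flow model, every behavioral
assumption is a named line that the builder must write and defend; a player
of the packaged game sees only the resulting trajectory. All names and
numbers below are hypothetical.]

    DT = 0.25              # assumption: integration step (years)
    HIRE_DELAY = 1.5       # assumption: average hiring delay (years)
    QUIT_FRACTION = 0.10   # assumption: fraction of staff quitting per year
    DESIRED_STAFF = 120.0  # assumption: management's target

    staff = 100.0          # the stock
    for _ in range(int(10.0 / DT)):                    # simulate 10 years
        hiring = (DESIRED_STAFF - staff) / HIRE_DELAY  # assumption: linear goal-seeking
        quits = QUIT_FRACTION * staff
        staff += DT * (hiring - quits)                 # Euler integration of the stock
    print(f"staff after 10 years: {staff:.1f}")        # settles near 104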
My opinion is that, for systems thinking to be more widely accepted,
the LANGUAGE must be modified to allow greater acceptance by various
industries. Finer distinctions should be made as to the COMPONENTS of
systems. Reality is not built from stocks and flows; it is built
around PEOPLE.
Anyways, there's my two bits.
Andrew Moreno
5612 Dove Place
Ladner, B.C.
V4K-3R3
amoreno@broken.ranch.org
The Quality of Models
Interesting questions.
I would say the quality of the model is established, as in other scientific
endeavors, through peer review.
The issue of the quality of the modeling process is not that simple, but I have a
rather considered response to this question, documented in a paper I discussed
recently at MIT. I believe copies are available from Nan Lux. The title is:
Saeed, K. The Learning Organization in System Dynamics Practice.
Khalid Saeed
saeed@ait.ac.th
The Quality of Models
Here are a few reactions to Mr. Moreno's opening of the quality discussion.
The reality of problem solving is, indeed, built around people. People (analysts
and/or clients) determine what the purpose or intended use of a model - or of a
modeling process - is, in a given context. I view the primary purpose of modeling
as helping solve real-world problems. So, the final quality verdict is whether the
problem has been solved, and whether it can be traced back to uses of models or
learning through modeling.
We know there is a variety of functions of models in problem solving, different
paradigms that take different attitudes, etc. Hence, the key to the quality question
lies in what we want to achieve through modeling. Is it learning? Then we should
figure out how much participants (or modelers) learned (even though the model
may be technically poor, learning might be significant). Is it group consensus on
what the problem is, or on what should be done about it? Is it prediction of some
sort? Is the model intended to impress or convince people? Is it used to build a
consistent and convincing argument? Etc.
Each of these intentions may be used to derive operational quality criteria. Of
course, there will be relations: if a model is technically unsound, the convincing
power of the arguments based on the modeling exercise will be weak, etc.
I am aware that I am not giving any answers, but putting the question of quality
right is the first step towards an answer.
For brevity, I refer to a book by H.G. Miser and E.S. Quade, entitled Handbook of
Systems Analysis: Craft Issues and Procedural Choices (Wiley, 1988), in which
notably Chapters 14 and 15 deal with evaluating success and quality control. The
focus is mainly on the content of analysis, less on the process, but there is a lot
there to think about!
Wil Thissen
--------------------------------------------------------------------------------------------------------
Wil Thissen
Faculty of Systems Engineering, Policy Analysis, and Management
Delft University of Technology
P.O. Box 5015
2600 GA DELFT
Netherlands
Phone: (31)-15786607, fax (31)-15783422
E-mail: thissen@sepa.tudelft.nl
The Quality of Models
Two points about model quality, both of which arise from work that I am
currently doing on SD model validation for the Tokyo conference.
1) Model quality (= validity) used to have a definition strongly associated
with representativeness, or realism. Models had to mimic the real world in
order to provide a laboratory in which meaningful experiments could be
conducted. This idea has not been lost but has been joined increasingly by
the concept of usefulness, which concerns the ability of a model or
modelling process to bring about the change that was intended. This idea is
behind Saul Gass's work on the assessment of public-policy models, and a
look at the literature reveals that SD has always had a strong usefulness
component to its self-image.
2) Various authors (Sagasti & Mitroff, Checkland, Eden, Rosenhead, Dery et
al. and Balci) talk about the importance of adequately conceptualising a
problem situation before embarking on formal modelling. They deem this to
be essential if a good (high quality) model is to result. This then leads
to an interest in conceptual (rather than formal) models and the whole
field of soft OR is motivated by the need to think carefully about the
conceptualisation phase and to provide problem structuring methods to
support such qualitative but still rational analysis.
This is a big field(!). If you are interested in finding out more, could
I suggest:
Rosenhead, J. 1989. Rational Analysis for a Problematic World.
Wiley:Chichester.
or
Lane, D.C. & Rosenhead, J. 1994. Only Connect! An Annotated Selection from the
Literature on the Problem Structuring Methods of Soft Operational
Research. Procs. of the 1994 Int. S.D. Conf., Stirling, Scotland.
or
Lane, D.C. 1994. With A Little Help From Our Friends: How system dynamics
and soft OR can learn from each other. System Dynamics Review.
10(2-3):101-134.
I shall be at the Power of System Thinking Conference next month talking
about this stuff if anyone wants more. And of course, there will be my
paper on validation, but I had better get back to finishing it now . . .
Dr. David C. Lane
Operational Research
London School of Economics
D.C.Lane@lse.ac.uk
The Quality of Models
If anyone out there is still interested in the issue of model quality / model validity / creating confidence in models then dare I coax you into having a look at:
Lane, D.C. 1995. "The Folding Star: A comparative reframing and extension of validity concepts in system dynamics". In System Dynamics '95: Volume I, Plenary Program (T. Shimada & K. Saeed, Eds), pp. 111-130. Gakushuin University/ISDS, Tokyo.
Copies from me if you are having retrieval problems.
Dr. David C. Lane
Operational Research
London School of Economics
D.C.Lane@lse.ac.uk