Reliable models: Part MMMCXXVII

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.


Post by gbackus@boulder.earthnet.net »

This discussion has now entered the stage of ruffled feathers, with the feel of
the Inquisition and the common ground of the Scopes trial on evolution versus
creationism. John Sterman and George Richardson are holding up well, but it
would seem their agnostic scientific approach badly confronts the free-wheeling
sensibilities of many. While religion has its virtues, even it plays second to
rational thought. The near impossibility of separating our prejudices and hopes
from a limited and often harsh reality does not change the distinction between
belief and falsifiable fact. Using models without data does help us clarify, and
even make consistent, our beliefs. The purpose of using a model to solve a
problem clearly implies a strictly human value to the solution of that problem.
Nonetheless, without data, the model and its embodied theories represent idle
conjectures that pose great dangers should the conclusions drawn be treated as
the "truth" for critical policy implementation. System dynamics has, as its
greatest strength, a maxim of causality: a causality that lends understanding
and the clear ability to falsify results by comparing the model with
reality, that is, with data. This is not to say the data is valid. But as George
Richardson succinctly states, the model and data provide a means to more validly
delineate the probable from the improbable. In other words, the model and data
provide the only means to derive the most valid understanding of the system (not
necessarily the correct understanding, as John Sterman helps clarify in his "The
Growth of Knowledge" paper).

Because a scientific model based on the best understanding of the available data
could indicate the ozone problem (as determined in the then newly recognized
data), its credibility is greatly enhanced, just as Einstein's prediction of the
bending of light by gravity added credibility to the theory of relativity. We
do not have to use all the data to make the model. We use the data to verify the
model, or to indicate data that could help verify (strictly speaking, just add
confidence to) the model. This scientific method is iterative in the feedback
sense of data affecting hypotheses and hypotheses determining data issues.

Further, we can legitimately extract information from data without negating its
information content, such as removing a growth trend to accentuate the
oscillatory basis/theory behind the data. If logic indicates a mechanism for
which direct data does not exist, then the model must still pose a relationship
describing measurable quantities (data). The variable that cannot be directly
measured can then be inferred from the model results in combination with the
existing data. Much of the development of atomic theory and quantum mechanics
faced these same constraints. The problem here has little to do with unique
socio-economic complexity.

If I were the first person to ever notice a comet, I could argue that it is so
unique that any historical data I have is useless and irrelevant. Or I could
first take the "best" existing theory (paradigm) and check the features of
the comet against it. All celestial objects seem to conform to gravity. Does the
comet have an elliptical orbit? Check the trajectory data. This would indirectly
determine its mass. All celestial objects, so far, conform to known chemical
laws. Get a spectrometer and check its constituents. Eventually I determine its
mass and that it is a dirty snowball orbiting the sun. (Forming a "valid"
hypothesis/model is seldom easy.)

In this example, the comet is unique, but well within our historical
understanding of the physical heavens. Nonetheless, the comet's trajectory
indicates that it came from far beyond the known planets. Here now we have the
anomaly that will lead to a revolution (of sorts) that establishes a new (and
hopefully more valid) understanding of the solar system and the universe. We
now search for more data and make more hypotheses, to be strengthened or
falsified by more data. As long as the hypothesis is the most useful in
understanding cause and effect (rationality), we use it. As the data causes
overwhelming anomalies, we develop a theory that is more consistent with the
data -- maybe still wrong, but it is the best, and why use less? How do we
know that it is the best? Because the data and our understanding of it give
the greatest confidence to that theory. Despite this being a "science" thought
experiment, it applies to all "socio-economic-political" (etc.) thought
experiments that I can imagine.

Building a model without supporting data is equivalent to the first step of
the scientific method: forming a testable hypothesis. Stopping at that point
precludes any scientific validity for the model. If we choose to build
"religious" models, then discussion and the uses associated with them had best
stay in the realm of that religious world. If it is a model we intend to inflict
upon the physical world, then the model had better conform to the "scientific
rules" of that world. The best we can "know" of that world comes from the data
we measure. As professionals, we cannot guarantee that we are correct or valid,
but we can develop models that are the best possible.

Typically, "reliable" applies to the application of a tool to a physical world
situation. We determine reliability by comparing the model behaviors (output
data) to the world behavior (raw data). Thus, in this context I "strongly" side
with George and John. If we are simply coming to grips with our own biases and
exploring possible hypotheses, then it is rational to limit the impact of (yet)
misunderstood data on our brainstorming creativity. But once a specific
hypothesis is selected, no data means no validity.
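One common way to make that model-output-versus-raw-data comparison concrete
(my own illustration, not anything proposed in this thread) is Theil's
inequality decomposition, which splits the mean squared error between simulated
and actual series into bias, unequal-variation, and unequal-covariation shares.
The numbers below are invented:

```python
# Sketch of Theil inequality statistics for model/data comparison.
# The three shares sum to 1; a large bias share signals systematic offset.
import numpy as np

def theil_decomposition(simulated, actual):
    """Split MSE between two series into bias, variation, covariation shares."""
    s, a = np.asarray(simulated, float), np.asarray(actual, float)
    mse = np.mean((s - a) ** 2)
    if mse == 0.0:
        return {"mse": 0.0, "bias": 0.0, "variation": 0.0, "covariation": 0.0}
    r = np.corrcoef(s, a)[0, 1]
    return {
        "mse": mse,
        "bias": (s.mean() - a.mean()) ** 2 / mse,
        "variation": (s.std() - a.std()) ** 2 / mse,
        "covariation": 2.0 * (1.0 - r) * s.std() * a.std() / mse,
    }

# Hypothetical case: the model tracks the pattern but runs uniformly high,
# so essentially all of the error shows up in the bias share.
actual = np.array([10.0, 12.0, 15.0, 19.0, 24.0, 30.0])
simulated = actual + 2.0
stats = theil_decomposition(simulated, actual)
print(stats["bias"])   # close to 1.0: the error is a systematic offset
```

Which share dominates tells you where to look: bias points at a level or
parameter problem, variation at amplitude, covariation at phasing or structure.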

I hate religious arguments. Can we move on now? Anybody know some rules for
putting a Dempster-Shafer "belief" value on an anomaly? Is the anomaly bad
data, a one-time aberration we can neglect, or the death knell of my ancient
"tried and true" generic structure? How can I "validly" approach the problem?
Hello? Calling Earth.
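(For anyone unfamiliar with the machinery behind that question, here is a
minimal sketch of Dempster's rule of combination, the core operation for
fusing "belief" masses from independent sources. The frame of discernment and
the mass assignments are hypothetical; this is not a rule for choosing them.)

```python
# Dempster's rule of combination over focal elements represented as frozensets.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (b, w1), (c, w2) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Hypothetical frame: the three explanations for the anomaly in the post.
BAD, ABERRATION, STRUCTURAL = "bad data", "aberration", "structural flaw"
frame = frozenset({BAD, ABERRATION, STRUCTURAL})

# Source 1: a data audit mostly blames measurement error.
m1 = {frozenset({BAD}): 0.6, frame: 0.4}
# Source 2: a domain expert leans toward a real structural problem.
m2 = {frozenset({STRUCTURAL}): 0.5, frozenset({BAD, ABERRATION}): 0.2, frame: 0.3}

m = combine(m1, m2)
# Belief in "bad data" = total mass on subsets of {bad data}.
belief_bad = sum(w for a, w in m.items() if a <= frozenset({BAD}))
print(belief_bad)
```

The hard part the question actually asks about -- where the masses come from --
is left open here, as it is in the literature.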



George Backus Email: gbackus@boulder.earthnet.net
Policy Assessment Corporation phone: (303) 467-3566; fax: (303) 467-3576
14604 West 62nd Place Denver, Colorado 80004, USA