A causes B

This forum contains all archives from the SD Mailing List (go to http://www.systemdynamics.org/forum/ for more information). It is a read-only resource; please post any SD-related questions to the SD Discussion forum.
daniel.jarosch@au.pwcglobal.com
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by daniel.jarosch@au.pwcglobal.com »

Dear SDers,
thank you for all the replies. Although I was hoping, though not really
expecting, to find a scientific method to prove causal relationships, the
answers gave me more confidence in the methods that I am currently
aware of.

For the process of uncovering (gathering evidence for) causal relationships
qualitatively, I have come across some references on how to conduct expert
interviews/workshops. Unfortunately they are all in German. When I find
good English sources, I will send them around. Please let me know if
anybody has a good recommendation.
Best regards
daniel

From: daniel.jarosch@au.pwcglobal.com


Flick, Uwe (1995): Qualitative Forschung. Theorie, Methoden, Anwendung in Psychologie und Sozialwissenschaften. Reinbek bei Hamburg: Rowohlt.

Lamnek, Siegfried (1995): Qualitative Sozialforschung. Bd. II: Methoden und Techniken. 3rd rev. ed., Weinheim: Beltz.

Meuser, Michael/ Ulrike Nagel (1991): Experteninterviews - vielfach erprobt, wenig bedacht. Ein Beitrag zur qualitativen Methodendiskussion. In: Garz, Detlef/ Klaus Kraimer (eds.), Qualitativ-empirische Sozialforschung. Konzepte, Methoden, Analysen. Opladen: Westdeutscher Verlag, pp. 441-471.

Meuser, Michael/ Ulrike Nagel (1997): Das Experteninterview - Wissenssoziologische Voraussetzungen und methodische Durchführung. In: Friebertshäuser, Barbara/ Annedore Prengel (eds.), Handbuch qualitative Forschungsmethoden in der Erziehungswissenschaft. Weinheim/ München: Juventa, pp. 481-491.
daniel.jarosch@au.pwcglobal.com
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by daniel.jarosch@au.pwcglobal.com »

Dear SDers,

I hope my issue is not too basic, but I do have a question that has been
spinning in my mind for a while.
Basically it is: how can we qualitatively uncover causation?
Assume that you cannot perceive causation itself, only the events. (Say
fire causes heat. You can perceive the fire and expect it to cause heat,
because it always has in the past.) What is the best process for me as a
modeller to uncover the causal link between the events? (How can I identify
the bias of an interviewee?)
Further, as quantitative data is often not available, one has to take a
qualitative approach. In doing this, the quality of the questions asked,
their order, and the interpretation of the answers become very important. I
have not yet succeeded in finding any guidance in the literature. Any thoughts?

Best regards from a stormy Sydney

dan
From: daniel.jarosch@au.pwcglobal.com
André Reichel
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by André Reichel »

The question of how we can uncover causation is a tricky one. New and
uncommon problems are not likely to reveal the true causation of things to
us.

We can use, as Mark suggested, the scientific method to develop a
hypothesis about what is going on and for what reasons, and test it. How do you
develop a working hypothesis? I would say: take a wild guess. It matters
little where you start your approach to reality -- as long as
your testing methods are as critical and cruel as you can imagine. We don't
have to be nice to our theories -- or, as Karl Popper put it, we can let them
die in our place.

A helpful start is always some empirical data. The more you have, the closer
one's first hypothesis might be to reality. But with new problems,
hard data are often not available, or only at costly, time-consuming
effort. And the more data you have, the more likely you are to start building
correlations just to keep track of all the numbers. So no data set
will release a model builder from thinking creatively. Creativity includes
your own knowledge and experience of the problem addressed and of
model building. Your knowledge and experience flow into every wild guess
you make in the beginning. So there is no truly wild guess, and in unclear
and complex situations a guess is just as good as late-night work over
spreadsheets, news reports, statistics and so on.

This sounds fuzzy, I have to admit, but this is a topic with no
to-do list you can follow. The best approach to revealing causation is to take
that guess, develop a hypothesis, and take it for real. Then ask
yourself: if this is real, what does it mean? How do certain entities
behave? What, in the light of one's own understanding, would it mean
in terms of feedback and causal loops? Where are the stocks and flows?
Suddenly you start writing down reference modes and structural diagrams.
That is the time you can really test your guess, your mental model of the
problem. The iterative process of testing - improvement - testing will shed
light on the question of causation, and through discussion and re-thinking
your own assumptions you might catch a glimpse of reality.

This is the only way I know how to do it. Comments, as always, welcome. For
reading I would recommend anything by Karl Popper, e.g. Objective
Knowledge or The Logic of Scientific Discovery. Another good overview is
Bell and Bell's "System Dynamics and Scientific Method" (in Randers 1980).

André Reichel
Universität Stuttgart, Germany
A.Reichel@epost.de
Niall Palfreyman
Senior Member
Posts: 56
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by Niall Palfreyman »

John Gunkler wrote:
> An undergraduate
> sociology professor of mine once sat in front of our class and read a
> newspaper article reporting that the more educated a person was (in the
> U.S.), the longer they waited to get married....

What a lovely exercise! That must surely be useful in any number of
disciplines - thanks for the idea, John.

> Finally, let me say (since I haven't seen anyone else state this obvious
> point yet) that the scientific method has not been able to find, in any
> field, a way to "prove" causal relationships -- only to provide more or less
> "evidence" that such a relationship exists.

I teach maths, physics and computational modelling to undergraduates,
and I tell them that MATHS makes no definitive statements about the
world, but rather confines itself to statements of the form "If ... then
..."; PHYSICS on the other hand makes statements of the form A(x), where
A is a predicate and x is some aspect of the world. Another way of
thinking about this is that maths concerns itself with _proof_, while
physics concerns itself with _truth_. Where life gets interesting is
when the two get together, and we link the proofs of maths with the
"truths" of physics to build a whole series of models of the world.

Then, to sober things up a bit, in relation to computation I talk about
the work of Goedel, Turing and Chaitin, which shows that there are
infinitely many more unprovable truths than provable ones in the
world. This then leads into the subject of model-building and the
constructivist idea that what is ultimately important is whether a model
works or not - that all our theories and models cannot tell us anything
about truth, but only, at best, about falsity.

Niall Palfreyman.
From: Niall Palfreyman <
niall.palfreyman@fh-weihenstephan.de>
Bill Braun
Senior Member
Posts: 73
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by Bill Braun »

If A (and only A) does indeed cause B, then every time B occurs, A will be
present or will have occurred sometime prior. The longer the delay between
A and B, the more tenuous the causal link. There may also be other
contributing or non-obvious links. Fire [A] causes Heat [B]. Fire [A] also
heats Stones [C] that are nearby. When Fire [A] goes out, Stones [C] still
produce Heat [B]. Does fire cause heat (A -> B) or do stones cause heat (C
-> B)? And how does one distinguish between the two if the stones have been
transported into a tent to give off heat and the fire is not obvious to the
observer? Or if one warmed oneself at the fire and then entered the tent
warmed by stones, would one conclude that A or C causes B?

Longitudinal investigations are frequently useful for separating
correlation and causation. In the above simple example, longitudinally one
would discover that if stones are not near fire they do not absorb heat to
later give off heat. Hence the emphasis on dynamic views of systems over
static (snapshot) ones. If the "system" to be modeled is assumed to be the
tent and the stones (but no more) at a given (static) point in time, it
would be difficult to validate fire as a causal agent of heat.


Additional comment...

A longitudinal investigation will also better reveal the feedback loop(s),
whereby it may be discovered that sufficient Heat [B] may cause Fire [A]
(e.g., ambient heat igniting rags soaked with paint thinner).
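The longitudinal point can be sketched as a toy stock-and-flow simulation (a hypothetical Python sketch; the function name and all parameter values are invented for illustration, not taken from the post):

```python
# Toy model: stone temperature is a stock. While the fire burns, the
# stone warms toward the fire's temperature; after the fire goes out,
# it cools toward ambient, still giving off heat. All values invented.

def simulate(fire_duration=30, total_time=100, dt=1.0,
             absorb_rate=0.05, cool_rate=0.02,
             fire_temp=600.0, ambient=20.0):
    """Return a list of (time, stone_temperature) samples."""
    stone = ambient
    history = []
    t = 0.0
    while t <= total_time:
        history.append((t, stone))
        fire_on = t < fire_duration
        target = fire_temp if fire_on else ambient
        rate = absorb_rate if fire_on else cool_rate
        # Newtonian heating/cooling toward the current heat source
        stone += rate * (target - stone) * dt
        t += dt
    return history

history = simulate()
peak = max(temp for _, temp in history)
final_temp = history[-1][1]
# The stone is still warm long after the fire is out, so a static
# snapshot of the tent alone cannot reveal the fire as the cause.
```

A snapshot at the end of the run shows only a warm stone in a tent; only the full time series links the heat back to the fire.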

Bill Braun
From: Bill Braun <medprac@hlthsys.com>
jnoble1@mmm.com
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by jnoble1@mmm.com »

I, too, have been very interested in qualitative causation analysis in business
system contexts for some of the same reasons you describe. One body of
knowledge that provides an approach to this is Eli Goldratt's "Thinking
Processes" tools from the Theory of Constraints. One tool, the "Current Reality
Tree", is used to reason from a set of problem symptoms back to the "core
problem" in a given problem domain using what he calls "sufficiency logic".
Others are used to analyze the conflict creating the core problem and to reason
from the proposed solution to its predicted "symptoms". These tools
can be used to analyze causality in a linear, short-term time frame or to model
circular causality with the addition of feedback loops. Along with the tools is
a set of rules or criteria called the "Categories of Legitimate Reservation" for
evaluating the quality of the causal reasoning. These rules may be getting at
the second part of your query. As far as literature is concerned, "It's Not
Luck" by Eli Goldratt is the "Thinking Processes" inventor's attempt to describe
these tools in action, so to speak, in the form of a novel. For a more
rigorous, systematic presentation, "Goldratt's Theory of Constraints" by H.
William Dettmer would be a better option.

In addition to the "Thinking Processes" and "Causal Loop Diagrams", which have
been a part of SD/ST for some time, I am also interested in what tools/methods
have been developed in the qualitative causality area. Specifically, what other
work has been done on the use of propositional logic in causality, from
the domain of the classic logic sciences, as it applies to SD? John Warfield's
"Interpretive Structural Modeling" is another approach I'm aware of. How can
these methods be developed to better integrate qualitative
propositions about a problem domain explicitly into more quantitative causality models,
since propositions, not variables and equations, are what the average person
more naturally thinks and communicates in?
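The Current Reality Tree idea can be illustrated with a small sketch (hypothetical Python; the graph entries and names are invented examples, not Goldratt's notation): represent "A causes B" propositions as a directed graph and trace symptoms back to candidate core problems.

```python
# Each effect maps to its list of direct causes ("A causes B" read
# backwards). The entries below are invented for illustration.
causes = {
    "late deliveries": ["overloaded bottleneck"],
    "expediting": ["late deliveries"],
    "high WIP": ["overloaded bottleneck", "early release of orders"],
    "early release of orders": ["local efficiency policy"],
    "overloaded bottleneck": ["early release of orders"],
}

def root_causes(symptom, graph):
    """Trace back from a symptom; return causes with no recorded cause."""
    roots, stack, seen = set(), [symptom], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        parents = graph.get(node, [])
        if not parents and node != symptom:
            # Nothing in the tree causes it: a candidate core problem.
            roots.add(node)
        stack.extend(parents)
    return roots

# Different symptoms trace back to the same candidate core problem:
# root_causes("expediting", causes) == root_causes("high WIP", causes)
```

In a real Current Reality Tree each arrow would also be vetted against the Categories of Legitimate Reservation before being trusted.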

Jon Noble
3M Supply Chain
jnoble1@mmm.com
"George Backus"
Member
Posts: 23
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by "George Backus" »

The scientific method informs us that the only way to test a theory is
through data. The data do not have to represent accurate physical entities
but do need to reflect measurable (quantifiable) conditions. Perceptions
and emotions can be sampled and ranked, as long as a unique and reproducible
method is used for the ranking. In that situation, inferences and
conclusions can be validly drawn. The "unique" aspect implies that there is
a one-to-one correspondence between the measurement procedure/evaluation and
the eventual ranking quantification. Still, the more "fuzzy" the
quantification, the greater the data variation needed to produce confidence
that the inferred correlations (I'll get to causality shortly) are
statistically meaningful. Thus, "qualitative" is meaningless on its own
merits but can generally be transformed into quantitative information for
causal assessment.

In the strict philosophical sense, I know of no defensible approach to
"prove" causality. George Richardson's "Feedback Thought in Social Science
and Systems Theory" gives a good feel for this issue. Two groups claim
they can infer causality (as opposed to correlation) via statistics. See the
texts "Causality in Crisis" by McKim and Turner, and "Computation,
Causation and Discovery" by Glymour and Cooper. From a simulation
perspective (i.e., what is useful?), "Granger causality" relates past events
to current conditions and quantifies the strength of the presumed causal
relationships (see "Long-Run Economic Relationships" by Engle and Granger).

In general, one has to think of causality as an uncertainty. The question
then becomes: what level of confidence do we assign, and does it matter?
Dempster-Shafer theory (see, for example, "Managing Uncertainty" by Katzan)
allows us to take evidence and act upon it as an uncertainty. Bayesian
assessment allows us to bound (rule out some possibilities) and focus on
(test) a finite set of "credible" causal relationships (see, for example,
"Scientific Reasoning" by Howson and Urbach).
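The Bayesian bounding idea can be sketched in a few lines (a hypothetical Python sketch; the hypotheses, prior, and likelihood values are invented for illustration): start with a prior over rival causal hypotheses and update it with each piece of evidence via Bayes' rule.

```python
def bayes_update(prior, likelihoods):
    """Return the posterior over hypotheses given P(H) and P(evidence | H)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two rival explanations for the heat in the tent (the fire/stones example).
posterior = {"fire causes heat": 0.5, "stones cause heat": 0.5}

# Evidence: the heat persists after the fire is out -- much likelier
# if the stones are the proximate cause. Likelihood values are invented.
posterior = bayes_update(posterior,
                         {"fire causes heat": 0.1, "stones cause heat": 0.8})
# posterior["stones cause heat"] is now 0.8/0.9, roughly 0.89
```

Each new observation repeats the update, progressively ruling out hypotheses whose likelihoods keep coming up short.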

Thus, I think, for every simple question there is a complex answer starting
with "Not really, but..."



George Backus
Policy Assessment Corporation
14604 West 62nd Place
Arvada, CO 80004-3621
Email:
George_Backus@ENERGY2020.com
"Mark B. Wallace"
Junior Member
Posts: 10
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by "Mark B. Wallace" »

On 20 Nov 2001, at 19:10, daniel.jarosch@au.pwcglobal.c wrote:


>I hope my issue is not too basic, but I do have question that has been
> spinning in mind for a while.
> Basically it is: How can we qualitatively uncover causation?


Let me translate: "How should a scientist form hypotheses?" The subject
matter is induction (philosophical, as opposed to mathematical), which is
part of logic, which is part of epistemology.


>Assuming that you can not perceive causation, but only the events. (Say
> fire causes heat. You can perceive the fire and expect it to cause heat,
> because it always has in the past). What is the best process for me as
> modeller to uncover the causal link between the events?


Learn as much as you can about the world in general, and the subject
matter of the system you are studying in particular. If there are any
shortcuts here, they aren't obvious to me. Worse, even surrounding yourself
with (alleged) subject matter experts may not be sufficient unless you can
"come up to speed" relatively quickly (in the context of the project at hand).
Until you do, you won't be equipped to formulate the best questions, or to judge
the worth of their answers.


> (How can I identify the
> bias of an interviewee? ) Further, as quantitative data is often not available,
> one has to take a qualitative approach.


I would say "start with" instead of "take." The formation of hypotheses can
be done qualitatively, but the verification of hypotheses, beyond the most
obvious, will require explicit quantification. To me (an admitted SD novice),
one of the great values of SD is that it encourages us to subject our
qualitative understandings to quantitative tests.


>In doing this the quality of the
> questions asked, their order and the interpretations of the answers become very
> important. I did not succeed to find any guidance in the literature, yet. Any
> thoughts?


For a deep understanding of the process of philosophical induction, the best
reference (by a wide margin) is:


H.W.B. Joseph, "An Introduction to Logic", Second Edition, 1916, Oxford
University Press.


Recently reprinted by The Paper Tiger, Inc., and available from either their
Web site (http://www.papertig.com/) or Amazon.com.


Despite the title, the book is anything but introductory. And, of course, it
won't tell you about any particular scientific (or business) subject matter. It
will, however, clear up fundamental confusions, such as your apparent one
that causal links hold between events. No, they hold between actions of
(acting) entities and states of entities acted upon.


Mark
From: "Mark B. Wallace" <mark.wallace@verizon.net>
"John Gunkler"
Member
Posts: 31
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by "John Gunkler" »

A lot of the good thinking and writing on causality, at least when I learned
about it, was done primarily in the context of the physical sciences. Much
of it (Karl Popper's work in particular, which has been mentioned by
several people already) is very useful in any field. I also find that
William James's pragmatism -- not what you may think about pragmatism, but
what he actually wrote about it -- is an unusually sensible guide.

However, in the "soft" sciences we run into situations not adequately dealt
with by the philosophers of science who focused on the physical sciences.
But in my own experience I have found one mindset that is often very useful.
It is the idea of formulating, then systematically ruling out, "alternative
hypotheses." It turns out to be quite easy to come up with a plausible
"explanation" for almost anything in the social sciences. Apparent truth,
or plausibility, is an easily met criterion. So, in order to convince me,
you will need to show me more than a plausible explanation; you'll need to
show that your explanation uniquely works -- by showing how alternative
explanations don't.

One further criterion: you'll need to show that one cannot argue from the
same premises to an opposite conclusion. A quick example: an undergraduate
sociology professor of mine once sat in front of our class and read a
newspaper article reporting that the more educated a person was (in the
U.S.), the longer they waited to get married. He then asked us to explain
this "fact." We came up with a series of quite complete, compelling,
plausible causal reasons why this was happening. He congratulated us,
then passed around the actual newspaper clipping. We read that the "facts"
he had read to us were exactly the opposite of the research reported in the
article -- in fact, college-educated students were marrying earlier than
their less-educated cohorts! (This was many years ago; I don't know what
the data today are.)

Ever since, I have imposed the discipline on myself, and others, to show that
one cannot just as plausibly argue from the same set of facts to the
opposite conclusion, and to insist that alternative explanations be
described and ruled out.

Finally, let me say (since I haven't seen anyone else state this obvious
point yet) that the scientific method has not been able to find, in any
field, a way to "prove" causal relationships -- only to provide more or less
"evidence" that such a relationship exists. Theories are subject to change
as facts are unearthed, data provided, or ways of looking at the world
(paradigms, mental models) alter. This is where I turn to William James. A
causal explanation is "true" to the extent that it "works" -- it allows us
to do things (based on our understanding), to form hypotheses about future
events that then come to pass, etc.

John W. Gunkler
jgunkler@sprintmail.com
Tom Fiddaman
Senior Member
Posts: 55
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by Tom Fiddaman »

One other notion of causality you may bump into is "Granger causality". The
essential idea is to ask yourself: "If I knew about everything in the
universe except x, could I still predict y?" If x doesn't help predict y,
one concludes that x does not Granger-cause y.
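On synthetic data the idea can be demonstrated directly (a hypothetical Python sketch, not a rigorous econometric test; the data-generating process and coefficients are invented): fit y on its own past, then on its own past plus the past of x, and compare prediction errors.

```python
import numpy as np

# Synthetic data where x genuinely drives y with a one-step lag.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

Y = y[1:]

# Restricted model: predict y[t] from y[t-1] alone.
A_r = np.column_stack([np.ones(n - 1), y[:-1]])
res_r = Y - A_r @ np.linalg.lstsq(A_r, Y, rcond=None)[0]
sse_r = float(res_r @ res_r)

# Unrestricted model: add x[t-1] as a predictor.
A_u = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])
res_u = Y - A_u @ np.linalg.lstsq(A_u, Y, rcond=None)[0]
sse_u = float(res_u @ res_u)

# A large drop in squared error suggests x Granger-causes y; a proper
# test would turn this comparison into an F-statistic.
```

Note that this only detects predictive help, which is exactly why Granger causality can diverge from causality in the everyday sense.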

I sometimes find this a useful thought experiment. On the other hand, the
practical applications of this test I've seen range from suspect to ludicrous.

There's a good summary at:
http://eco-072399b.princeton.edu/yftp/TimesF99/GCP.pdf

Regards,

Tom

****************************************************
Thomas Fiddaman, Ph.D.
Ventana Systems http://www.vensim.com
8105 SE Nelson Road Tel (253) 851-0124
Olalla, WA 98359 Fax (253) 851-0125
Tom@Vensim.com http://home.earthlink.net/~tomfid
****************************************************
Alexander Leus
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by Alexander Leus »

I have been away from the area of physics for some time. But if my
memory has not failed me, once one enters the atomic world, the world of
particle physics, cause and effect possibly become an illusion, fuzzy.
I can't believe I said that, after being an engineer for so long. Does
"cause and effect" exist only at the macro level and not at the particle
physics level? And if so, why? It is interesting how our mental models
of reality vary depending on what we are looking at and the tools we
use to do the looking.

Hopefully I am not too vague in my comments and questions above.

Alex Leus
leusa@tds.net
Bill Braun
Senior Member
Posts: 73
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by Bill Braun »

I have no training in physics, just an avid interest, reading as much as I
can. My (untrained) understanding is that at the particle level there is a
tremendous amount of interdependent, interactive causal influence going on,
but none of it is "simple" cause and effect (of the A causes B variety).
The complex interactions are a madhouse of feedback loops, all of which
"conspire" to produce the phenomenon we observe.

Bill Braun
From: Bill Braun <medprac@hlthsys.com>
Jay Forrest
Junior Member
Posts: 12
Joined: Fri Mar 29, 2002 3:39 am

A causes B

Post by Jay Forrest »

Hi Alex!

Philosophers have long suggested that humans' perceptions of "cause and
effect" are effectively incomplete and not necessarily reflective of
reality. In quantum physics we see this phenomenon clearly, where
perspective, tools, and paradigms dictate the nature of the perceived reality.

In simple, reductionist macro observation we find locally causal
influences. And the memory and recognition logic of humans reinforces the idea
that observed relationships -- particularly those close in sequence in time and
space -- are "causal." We routinely observe singular events in which two or more
humans perceive the event and its causality very differently. Given that we act
on perceptions (and not on reality), it is clearly arguable that perception
is primary, not physical truth.

Judea Pearl begins his new book "Causality" by noting the probabilistic
nature of causality.

My personal position is that a combination of variance in experience and
perspective creates multiple perceptions of reality and thus causality. As
we share our perceptions the uncertainty of language further fuzzes the
perception such that the message quickly becomes rather muddled and
arbitrary. As a result I strongly prefer diagrams (causal and SD) to words
for describing situations. And I find simple causality misleading and
useless. A systems perspective will incorporate perceptions of causality
but will not focus on singular, specific causalities as primary. Behavioral
characteristics reside in a system and not in the fragments or pieces of
the system and are, as such, dependent on the boundaries of the system.

In a fully defined complex system, causality may become a certainty IF the
characteristics of the system are accurately portrayed. If the description
is partial (so back to the philosophers who say all descriptions are
partial) the best we can do is capture a piece of the system and its
behavior and the outcome -- the perception of causality -- must be less
than certain and thus probabilistic.

For more on systems and philosophy I strongly recommend Gerald Midgley's
new book "Systemic Intervention: Philosophy, Methodology, and Practice".

Jay Forrest

P.O. Box 701488
San Antonio, TX 78270
Tel: 210.355.0429
E-mail: jay@jayforrest.com or jforrest@futuresguild.com