QUERY Structural or Behavioral Theory

Posted by Monte Kietpawpan <kietpawpan@yahoo.com>

SD has been regarded as a useful framework for theory development. Why is a theory created under this paradigm often called a theory of behavior? Why is the same theory not called a theory of structure?

If scientific theories must explain why things happen, a theory of behavior would explain why structure S generates behavior B. In SD, we would present the underlying feedback structure, consisting of factors (a, b, c, ..., n) and causal relationships, that can exhibit B. For plausibility, we proceed to justify each causal link in each loop of the system, by explaining why, for example, a causes b; why b causes c, and so on. Eventually, each of the links is well supported by logical (deductive) or statistical (inductive) reasoning. Will this theory become a theory of structure?
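
To make "structure S generates behavior B" concrete, here is a minimal Python sketch; every name and number in it is invented for illustration only, not taken from any particular model. One negative feedback loop, written as stock-and-flow equations and integrated numerically, produces goal-seeking behavior.

    # Minimal sketch (all names and numbers invented): one negative feedback loop,
    # integrated by Euler's method, generating goal-seeking behavior B.
    GOAL = 100.0      # factor a
    ADJ_TIME = 5.0    # factor b: adjustment time for closing the gap
    DT = 0.25         # integration step
    stock = 10.0      # the state whose behavior B we want to explain

    for step in range(161):                 # 0 .. 40 time units
        t = step * DT
        gap = GOAL - stock                  # causal link: stock -> gap (negative)
        flow = gap / ADJ_TIME               # causal link: gap -> flow (positive)
        if step % 20 == 0:                  # print every 5 time units
            print(f"t={t:5.1f}  stock={stock:7.2f}")
        stock += flow * DT                  # flow accumulates into the stock, closing the loop

Each causal link in the code is one of the "a causes b" claims; the behavior (a smooth approach to the goal) is a consequence of the closed loop, which is the sense in which the structure serves as a theory of the behavior.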

MK
Posted by Monte Kietpawpan <kietpawpan@yahoo.com> posting date Mon, 5 Feb 2007 13:02:23 -0800 (PST) _______________________________________________
QUERY Structural or Behavioral Theory

Posted by George Richardson <gpr@albany.edu>

> SD has been regarded as a useful framework for theory development.
> Why is a theory created under this paradigm often called a theory of
> behavior? Why is the same theory not called a theory of structure?

I think we do tend to think of model structure as a "theory." But it might be more precisely termed a "hypothesis." That's the sense we mean when we posit an initial "dynamic hypothesis" -- a simple stock-and-flow/feedback structure put forward initially as the key structure underlying some pattern of dynamic behavior. (I think the original source for the idea of a "dynamic hypothesis" is Randers, J. (1980), "Guidelines for Model Conceptualization," in J. Randers (ed.), Elements of the System Dynamics Method, Cambridge MA: Productivity Press: 117-138, also reprinted in Modelling for Management.)

Simulation helps us test whether a structural theory (hypothesis) fits observed behavior. If the structure passes all sorts of plausibility tests that try to match the structure to what people think the key structure of the real system is, and if simulations show that the structure is capable of generating the range of behaviors people have observed or expect in real data, then we gain confidence in the structural theory as a plausible theory of behavior. If we like the test results, we might find ourselves saying "our theory of structure looks pretty good," that is, we can't seem to find plausible tests that lead us to want to reject the structural hypothesis.

The duality here -- theory of structure / theory of behavior -- thus pops up in our tests for building confidence in a model. We have tests of model structure and tests of model behavior. (See the lovely diagram in Saeed, K. (1992), "Slicing a complex problem for system dynamics modeling," System Dynamics Review 8(3): 251-262.)

Still, I think we are being our clearest when we say we "hypothesize" structure and test that hypothesis as a "theory of behavior."

George P. Richardson
Chair of Public Administration and Policy, Rockefeller College of Public Affairs and Policy, University at Albany - SUNY, Albany, NY 12222
Posted by George Richardson <gpr@albany.edu> posting date Tue, 6 Feb 2007 09:40:53 -0500 _______________________________________________
""Alan McLucas"" <a.mclucas@a
Junior Member
Posts: 10
Joined: Fri Mar 29, 2002 3:39 am

QUERY Structural or Behavioral Theory

Post by ""Alan McLucas"" <a.mclucas@a »

Posted by ""Alan McLucas"" <a.mclucas@adfa.edu.au>

George Richardson makes a very important point here. Thank you.
In the application of the scientific method, hypotheses must be rigorously tested before we might have sufficient confidence to declare that we have (the basis for) a theory. Rigorous testing, progressively through each stage of development of the model, is particularly important in SD because there are NO tests one might use to 'prove' that the model we proffer as a sufficient and necessary representation of a dynamic situation is 'true'. The best we can ever achieve is to develop models which enable us to investigate and subsequently defend our dynamic hypotheses. Jay Forrester (1961:115-129) reminds us that a model must:
Firstly, generate behaviour that does not differ significantly from that of the real system [when modelled, our dynamic hypothesis looks like the real world]; and
Secondly, explain real-world behaviour through structure and equations which reflect the real causal relationships in the real-world system.
The first is self-evident, whilst the second is not necessarily so. However, it is the second that is critically important.
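
As an illustration of what the first requirement can look like in practice, here is a rough Python sketch comparing a simulated series with an observed one; both series and the choice of error measure are invented placeholders, and passing such a check says nothing by itself about the second, structural requirement.

    # Behaviour-reproduction check (Forrester's first requirement), sketched with
    # invented numbers: does simulated behaviour differ much from the real system's?
    real      = [100, 96, 91, 85, 80, 76, 73]   # e.g. observed values by quarter
    simulated = [100, 95, 90, 86, 81, 77, 74]   # output of the candidate model

    # Mean absolute percentage error as one simple (and limited) fit measure.
    mape = sum(abs(s - r) / r for r, s in zip(real, simulated)) / len(real) * 100
    print(f"MAPE = {mape:.1f}%")

    # A small error supports the first requirement only; the second -- that the
    # structure and equations reflect the real causal relationships -- must be
    # defended with evidence about the system itself, not with this statistic.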
Regards,
Alan

Dr Alan McLucas
School of Information Technology and Electrical Engineering, UNSW@ADFA, Australian Defence Force Academy, Northcott Drive, CAMPBELL ACT 2600 AUSTRALIA
Posted by "Alan McLucas" <a.mclucas@adfa.edu.au> posting date Thu, 8 Feb 2007 07:33:16 +1000 _______________________________________________
""Kim Warren"" <Kim@strategyd
Member
Posts: 36
Joined: Fri Mar 29, 2002 3:39 am

QUERY Structural or Behavioral Theory

Post by ""Kim Warren"" <Kim@strategyd »

Posted by ""Kim Warren"" <Kim@strategydynamics.com>

Perhaps I don't understand important subtleties of the debates we keep returning to about models being 'right' or not and theories of structure and behaviour. But does this tendency reflect a lack of confidence in what we do and its value to people with serious decisions to make? Maybe this helps answer the other thread about why SD struggles to make progress.

Geoff Coyle gave me a tip that has proved to be useful over many years.
A model should 'do what the real world does ...' [to a useful approximation, according to the guidance in George Richardson's post] '... AND for the same reasons'.
An example that crops up in various forms in different situations goes something like this ...
'I am worried about why my sales are falling and what to do about it ..'

... so we explain that with a model [A] that shows sales per customer dropping - but when we look at the real-world data, this just isn't happening. Instead, customer numbers are falling.
... so we explain that with a model [B] that shows customer win-rate falling and steady customer losses - but when we look at the real-world data, that's not happening either. Instead, customer win-rates are OK, but losses are rising.
... so we explain that with a model [C] that shows customer win-rates, loss-rates and sales/customer matching the real-world data.
Is model [A] 'wrong'? - clearly.
Is model [B] 'wrong'? - for sure.
Is model [C] 'right'? - definitely. Well, that specific piece is, at any rate.
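
For concreteness, a Python sketch of something like model [C] follows; every number in it is invented purely to reproduce the qualitative pattern in the story (steady wins, rising losses, steady sales per customer) and is not drawn from any real client data.

    # Sketch of a model-[C]-style structure: a customer stock with a roughly steady
    # win rate, a slowly rising loss fraction, and steady sales per customer.
    customers = 1000.0
    win_rate = 40.0               # customers gained per month (steady)
    sales_per_customer = 50.0     # steady, as in the real-world data

    print(f"{'month':>5} {'customers':>10} {'losses':>8} {'sales':>10}")
    for month in range(25):
        loss_fraction = 0.03 + 0.002 * month        # losses creep upward
        losses = customers * loss_fraction
        sales = customers * sales_per_customer
        if month % 4 == 0:
            print(f"{month:5d} {customers:10.0f} {losses:8.0f} {sales:10.0f}")
        customers += win_rate - losses

    # Sales fall because customer numbers fall, and customer numbers fall because
    # losses rise while wins stay flat -- the same reasons visible in the data.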

In many years of doing this kind of thing, no user has ever, ever asked the abstract question about whether models are right or wrong. Their concern is simply 'Does this look like the real world I see, and does it help me make better decisions than I was making before?' - in this case, pay more attention to keeping our customers, and if we have limited resources, check that this switch will not damage our win-rate and sales/customer.

If we must have a debate about theory here, I guess our mini-hypotheses are that falling sales are caused by A, B, or C, but our single structure >> behaviour theory encompasses all three. [We never in fact build models A and B, because it's a waste of time testing those hypotheses without including C].

Frankly, I don't care a cent whether this reflects anyone's mental model or not [another question no-one has ever asked!] - the model does what the real world does, and it does so demonstrably for the same reasons that the user can see playing out in the real world. What is the problem with extending this principle right back through the system, and round all the feedback loops - e.g. 'Are we losing customers because they are not getting called on, or because product quality is bad?' Well, what does the data say? And if we don't have the data, what might it look like, and is it important enough to go find out?

Following this principle is tough when hard-to-see factors are involved, like reputation or motivation, but there is often evidence even about these things to support or refute what we think might be going on. I guess there may also be problems when models have to be aggregated or abstracted to a high level to make the model tractable, like using 'pollution levels' to combine everything from water pollution to household waste tips ... but then shouldn't we worry whether the model makes any sense to people when they can't see in it things they actually see in the world around them?

All of this makes me wonder whether folk are getting anxious about models being right because a hypothetical structure is developed without actually looking at the data as they go. Might they then be struggling to force a model whose structure is badly flawed to match only the 1-2 behaviour outcomes they originally set out to understand? If so, no wonder it proves hard to justify the model to the user, and no wonder SD has a dodgy reputation with management.

I recall one situation that seemed to fit this possibility - an insurance firm that didn't trust or use their SD profit model ... they had a perfectly plausible causal loop diagram on which the model had been built, but it strangely did not include policy-holders. Now I'm no insurance expert, but I can't imagine how a model of insurance company profits is ever going to work if it doesn't have policy-holders in it.
Have I somehow missed the point of this debate?

Kim Warren
Posted by ""Kim Warren"" <Kim@strategydynamics.com> posting date Fri, 9 Feb 2007 13:46:47 -0000 _______________________________________________
QUERY Structural or Behavioral Theory

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi Kim

What do you mean by policy-holders?

Do you mean the people themselves - their preoccupations, their mentality, their positions, etc. - or the alternative policies they may choose, which are directly related to the model in question?

You write about the difficulties of taking into account reality and its diverse manifestations.

But that is the very difficulty of the field!

After five years of practice, I am beginning to understand which aspects of reality and what level of aggregation I must consider (the highest possible at the beginning).
There is a huge gap between the power of SD software and the ability of people to make profitable use of it, and in my opinion this gap is getting bigger and bigger.

The power of the software makes people focus on the tool instead of on the problem.

People probably think that because powerful features are implemented they must necessarily be used, and this leads them towards disappointment and overly complicated models.

Regards.
Jean-Jacques Laublé
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr> posting date Sun, 11 Feb 2007 13:00:15 +0100 _______________________________________________
""John Gunkler"" <jgunkler@sp
Member
Posts: 20
Joined: Fri Mar 29, 2002 3:39 am

QUERY Structural or Behavioral Theory

Post by ""John Gunkler"" <jgunkler@sp »

Posted by ""John Gunkler"" <jgunkler@sprintmail.com>

Kim Warren's post (no, of course you didn't miss the point -- you nailed it directly to the wall as usual!) reminds me of a concern I had when I first saw an SD model. It stemmed from the following question:

Is there only one feedback structure that can give a good fit to real-life data (in a suitably complex situation)?
-- Because, if there can be more than one, how would we ever know which one to trust and use?

But I believe that in Kim Warren's example (falling sales and its dynamics) he has shown the answer. Whatever is structurally different between two models producing the same behavior mode, there should be real-world evidence (data) about it. So, as he explained in his example, if one model uses falling customer win rates and the other uses increasing customer loss rates to produce the behavior of interest, we can go look at (or collect) data to see which is actually occurring [or whether both, or neither, is happening].

No matter how much we then disaggregate our model, the same kind of thing will happen. First, we must create a structure that produces the behavior mode. Second, if we have more than one structure that does this, we look at real-world data to see which structure "does what the real world does for the same reasons." And, as Kim says, that structure is "right" at its level of detail/disaggregation.
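
A Python sketch of that discrimination step, with invented numbers: two different hypothesized structures produce a broadly similar decline in the customer total, but their component flows diverge, and it is data on those flows that tells them apart.

    # Two candidate structures, same aggregate behaviour mode, different flows.
    def simulate(win_rate_fn, loss_fraction_fn, customers=1000.0, months=24):
        wins, losses, totals = [], [], []
        for m in range(months):
            won = win_rate_fn(m)
            lost = customers * loss_fraction_fn(m)
            wins.append(won)
            losses.append(lost)
            totals.append(customers)
            customers += won - lost
        return wins, losses, totals

    # Structure 1: win rate falls, loss fraction stays flat.
    w1, l1, c1 = simulate(lambda m: 40.0 - 1.5 * m, lambda m: 0.04)
    # Structure 2: win rate stays flat, loss fraction rises.
    w2, l2, c2 = simulate(lambda m: 40.0, lambda m: 0.04 + 0.002 * m)

    print("customers, final month:", round(c1[-1]), "vs", round(c2[-1]))  # similar decline
    print("win rate,  final month:", round(w1[-1]), "vs", round(w2[-1]))  # very different
    print("loss rate, final month:", round(l1[-1]), "vs", round(l2[-1]))  # very different
    # Data on the wins and losses, not on the customer total alone, shows which
    # structure "does what the real world does for the same reasons."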

I cannot think of a possible exception to this. Anything we put into the structure of our models must have a real-world counterpart; that counterpart's actions must be measurable (even if some things may be difficult to measure); so Kim's method will work every time.

And to those, probably not in the SD field, who question the measurability of particular stocks and flows, I simply refer them to E.L. Thorndike's classic remark: "Whatever exists, exists in some quantity."

John
Posted by ""John Gunkler"" <jgunkler@sprintmail.com> posting date Sun, 11 Feb 2007 11:53:56 -0500 _______________________________________________