Can system dynamics models "learn"

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
"Ray on EV1"
Member
Posts: 29
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Ray on EV1" »

Marsha,

Thank you for the energy and thoughtfulness you put into the message. It
has given me a lot to consider and some to comment on.

First, I would like to ask you not to be too discouraged by your finding of
the rigid view "we do x when y occurs". I have run into this for decades.
It is not your fault. Stakeholders in the current system live up to their
name - they have a stake in the system. It may be self-esteem, as they were
an originator or a subsequent supporter, or their livelihood may depend upon
the current system. If you present opportunities that imply a change, the
change may mean they lose their income. It is very difficult to have a
brainstorming session with subject matter experts when they hold a stake in
the game - they may not really be looking to brainstorm so much as to
justify their existence. A key to working with this is to understand it
going in. Their frustration may be a knee-jerk response that only needs
some smoothing over, or it may be an underlying ploy to maintain the status
quo. I like to go into a situation, point out these types of pitfalls, and
ask the participants to suggest how to assure full, good-faith
participation. Not that I expect an answer; I just want everyone aware of
the situation so they may be self-policing. The worst case is where a
client asks a service provider to brainstorm with you but the service
provider has a vested interest in the status quo. You then need to assure
that the client brings in additional representatives to level the playing
field.

Second is the genetic algorithm (GA) approach. Please let me digress and
inform you of my prejudices about GA. I have mostly seen GA applied
inappropriately, due to a lack of knowledge of the underlying mathematical
structure of the system and of optimization processes in general. Different
types of systems are most appropriately addressed by different types of
optimization methods, but rather than researching the correct combination,
GA is often pulled in because it is popular and stable, if not robust.

But there is no magic to GA. One must still either understand the possible
node/link structures or resign oneself to a fixed structure to optimize
against. Both classical and GA methods can address either. As the
dimensions of the problem grow, random perturbations across multiple
combinations of dimensions make many problems unsolvable in the needed time
frame, whereas one can often finesse an approach more directly with
classical techniques.

And lastly(?)! Running multiple experiments in the "hill-climbing" example
should not be considered reinforcement. In a problem with one independent
variable and one dependent variable (two dimensions), where the dependent
has a parabolic relationship with the independent, only three measurement
points are needed to identify the optimum point. This is just the nice part
about math: three points determine a parabola. Reinforcement helps us
easily distracted individuals remember a relationship. By definition, an
algorithm doesn't need such help. Now, if the measurements of the points
are not accurate (noisy measurements), we are addressing a stochastic system
and may need multiple readings to assure some degree of confidence in the
parameters of the parabola, and thus in the optimum.
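
To make the stochastic case concrete, here is a minimal sketch in Python (my
own illustration; the parabola, the noise level, and the sample points are
all invented for the example) that estimates the optimum by taking repeated
noisy readings at a few points, least-squares fitting a quadratic, and
computing its vertex:

import numpy as np

rng = np.random.default_rng(0)

def response(x):
    # Hypothetical system: parabolic response with optimum at x = 2.0, plus noise.
    return -(x - 2.0) ** 2 + 5.0 + rng.normal(scale=0.1)

# Repeated readings at three design points to average out measurement noise.
xs = np.repeat([0.0, 1.0, 3.0], 5)
ys = np.array([response(x) for x in xs])

# Fit y = a*x^2 + b*x + c and take the vertex of the fitted parabola.
a, b, c = np.polyfit(xs, ys, deg=2)
x_opt = -b / (2.0 * a)
print(f"estimated optimum near x = {x_opt:.2f}")

No reinforcement is involved: the algorithm simply retains the fitted
coefficients, and the extra readings serve only to beat down the noise.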

As for nimble learning, I wish to consider it more - I am still wrestling
with the ghost in the machine. I really liked your example of being
confronted with thirst and a faucet but only knowing how to open a bottle.

Raymond T. Joseph, PE
281 343-1607
RTJoseph@ev1.net
George Richardson
Member
Posts: 23
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by George Richardson »

On Friday, April 25, 2003, at 05:05 AM, Finn Jackson wrote:

> For a model to truly "learn" it would have to be able to rewrite its
> own
> equations.

This is the prevailing wisdom. However, I do not think it is an
adequate answer to the question of whether models can learn.

For example, a LISP or PROLOG model can rewrite its equations as it
runs, so by this criterion it might qualify as a model that can learn.
But it would have to be careful not to rewrite the equations that tell
it how to rewrite equations or it could lose its crucial ability!
Thus, it would have to have more or less fixed structure for the
rewriting capacity. Furthermore, it could not rewrite randomly, or
with wide latitude, or the model would "learn" nonsense or behave in
absurd ways, regress, or stop behaving at all. In a manner quite
analogous to a continuous model, a LISP model would have to have limits
on what it can rewrite.

In a continuous nonlinear model like a system dynamics model or an
adaptive control structure from engineering, there could be one or more
portions of the model that "rewrite" the active or dominant structures
controlling behavior. There also could be portions that "rewrite" how
those shifts in dominant structure are selected. (In a somewhat similar
manner, I gather the body does not change its DNA but does alter its
RNA.) One could imagine a hierarchy within a system dynamics model
that "controls for" (in William Powers's terminology) ever deeper
(higher) goals of the system and adjusts the goals and structures of
lower system levels according to growing discrepancies. Beer's "viable
system" is an expression of such a hierarchy.

There's a literature here that we should know about if we seek to
contribute to this interesting research question. I mentioned five
references in my Bergen talk on this subject, but they only hint at
what's out there. We have to go beyond opinions here. I recommend
that those who are interested build models to see how far they can push
the possibility for a nonlinear continuous model to exhibit learning
behavior. One or two good examples of possibilities and
impossibilities would be worth a lot in this discussion.

...George

*George P. Richardson
*Rockefeller College of Public Affairs and Policy
*University at Albany - SUNY, Albany, NY 12222
*
gpr@albany.edu *518-442-3859 *http://www.albany.edu/~gpr
"Rainer"
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Rainer" »

> For a model to truly "learn" it would have to be able to rewrite its own
> equations.

(e-mail, Finn Jackson, 04-26-2002)


Hi everybody,

I have been thinking about this statement, and I have a slight doubt about defining learning as a process of rewriting equations.

When reflecting on the term "learning" I was tempted to define it as "a process of consciously reflecting about reality and changing behavior according to the results".

This is not quite true, insofar as learning often (mainly?) occurs without consciously reflecting on experience, but rather as an unconscious process of adapting to reality, adjusting after unsuccessful behavior.

If, in computer terms, learning is described as rewriting one's own equations, and if rewriting one's own equations requires consciousness (how can I rewrite an equation if I'm not aware of its existence?), then neither models nor humans learn. (Or better: models never, humans sometimes.)

Transferring the modeling terms to humans, both need an "external force" to do the rewriting.

To me, whether computer models or humans can learn is not a matter of rewriting their own equations.

The difference between models and people is that the latter have various options to alter their behavior in response to a single input, something a computer model does not.

Furthermore, the options a human finds are not only the result of a numerical input, and they are far from mathematical logic. Usually, our feelings play a much more important role in the learning process than the criterion of best numerical output.

Human learning does not mean "finding the best numerical strategy for the best numerical result". Human learning means doing something in a different way than before, using a different tool from a variety of possible options, from which one is consciously selected.

A computer model has only one option: reacting in a single way to a single input. No options, no process of selection.

Maybe this is the point.


Rainer Uwe Bode
From: "Rainer" <cdeh@hotlink.com.br>
cdeh@hotlink.com.br
"Ray on EV1"
Member
Posts: 29
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Ray on EV1" »

Sorry for the interruption, but I am so focused on this topic that I
couldn't let this opportunity lie. Specifically, "if a model that
learns/changes could lose its problem focus".

If we look at the hill-climbing problem, we could find that a given
hill-climbing routine, assigned to climb a hill which is not totally convex,
could end up oscillating around a point distant from the top of the hill.
The result is that the system does not reach its goal.

Did the system lose focus? Yes. We typically do not define success by
what was attempted but by what was accomplished. We sent the system out to
climb a hill and it ended up burning donuts on the hillside.
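
A minimal sketch in Python (the hill shape and step size are invented for
the illustration) shows the failure mode: started on the wrong slope of a
two-peaked hill, a simple climber settles at the lower local peak, far from
the true top:

import numpy as np

def hill(x):
    # Two peaks: a local one near x = 1 (height ~1) and the global one near x = 6 (height ~2).
    return np.exp(-(x - 1.0) ** 2) + 2.0 * np.exp(-(x - 6.0) ** 2)

x, step = 0.0, 0.1
for _ in range(500):
    if hill(x + step) > hill(x):
        x += step
    elif hill(x - step) > hill(x):
        x -= step
    # Otherwise the climber stays put: it has reached a local optimum.

print(f"stopped at x = {x:.1f}, height {hill(x):.2f}; the global peak is near x = 6")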

Raymond T. Joseph, PE
RTJoseph@ev1.net
"Brian Dangerfield"
Junior Member
Posts: 8
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Brian Dangerfield" »

From Brian Dangerfield:

I think maybe this thread has got slightly warped as it has
progressed.

My initial understanding of it was whether SD models could learn
on the fly, as it were. As someone recently pointed out, this can
happen only if the (rate) equations are changed. [Perhaps someone
with more knowledge on the subject than I have can say whether cellular
automata can learn on the fly in this fashion.]

Thus to have a learning SD model one has to anticipate all the
possibilities one can think of and have portions of rate equations
available to be switched in and out (by zero-one variables) as the
model learns (a minimal sketch of this idea follows below). Many years
ago I challenged Jay to explain how the National Model could -- with
fixed rate equations -- possibly, even qualitatively, reproduce known
economic behaviour when economic policy has undergone such radical
changes since, say, 1900; not necessarily here the result of learning,
since economic policy has had shifts in ideas/fashion. (But maybe the
National Model's purpose was not to qualitatively reproduce past
economic behaviour.)
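
Here is the promised sketch of the zero-one idea in Python (my own
illustration; the rate fragments and the switching rule are invented):
candidate fragments of a rate equation are pre-specified, and 0/1 weights
switch them in as the run proceeds:

def rate(stock, switches):
    fragments = [
        0.1 * stock,         # growth term
        -0.02 * stock ** 2,  # crowding term, initially switched out
        5.0,                 # constant inflow term, unused in this run
    ]
    return sum(s * f for s, f in zip(switches, fragments))

stock, dt = 10.0, 0.5
switches = [1, 0, 0]                  # begin with pure growth
for step in range(100):
    if stock > 40.0:
        switches = [1, 1, 0]          # the model "learns": crowding switched in
    stock += rate(stock, switches) * dt

print(f"final stock = {stock:.1f}, active switches = {switches}")

Note that every possibility still had to be anticipated and written down in
advance; the run only selects among them.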

So I think that for an SD model to learn on the fly it would be
necessary to align the SD package being used with some other
modelling technology which would organically alter the SD model
once something new became apparent on the fly. The run would then
continue with a better model for the new circumstances.

However... there is a danger of losing sight of the wood for the trees.
Would policy making be improved if we possessed an amended SD
modelling technology that enabled the creation of models that
learn? I would venture to assert that policy making in socio-
economic systems will *never* be replaced by machine, so is not
the initial query misguided anyway?

What we need to do is get policy-makers to learn better and we
have the tools already: straight SD models; repeat-running with hill-
climbing optimisation; microworlds etc. I know our community
loves the technical challenges in developing modelling technology
but the greatest difficulties come in guiding human behaviour on a
more visionary path.

Brian.
From: "Brian Dangerfield" <
B.C.Dangerfield@salford.ac.uk>




Prof Brian Dangerfield
Professor of Systems Modelling &
Executive Editor, System Dynamics Review
Centre for OR & Applied Statistics
Faculty of Business & Informatics
Maxwell Building
University of Salford
SALFORD M5 4WT
U.K.
Tel: 44 161 295 5315
Fax: 44 161 295 2130
"Martin F. G. Schaffernicht"
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Martin F. G. Schaffernicht" »

Hi,

Brian states that our goal is that policy-makers shall learn (and improve
their policy-making); in this, human learning will not be replaced by
machine learning, and "the greatest difficulties come in guiding human
behaviour on a more visionary path."

My original question has to do with this, in the following way. I suppose
we model in order to design and construct artifacts we later use. So
later on, our course of action will be guided by the action space provided
by these artifacts. We also know that most of us have a tendency to
"habituate" (I don't know if this word exists) and not to question too
many things as long as we act in routine. So if our artifacts do not
guide us towards questioning (breaking the transparency, if you will),
they will guide us into habituation.

It follows that we should wish to design (model) our artifacts in a way
such that they will guide us into asking questions. This does not mean
that the artifacts will ask questions; they may only indicate that the
time has come to do so. However, I feel that some forms of business
process (artifacts) are more likely than others to guide us into asking
questions (for example the double-loop learning model, but also other
similar approaches).

So I wonder if we may "model" a business process in a way that the
resulting process helps agents recognize early a need for questioning.
This certainly does not mean that policy-makers delegate reflection to
machines; but might it not be possible to enrich SD-modelling software
with some qualitative objects that may be connected to the usual objects,
in order to help users recall items that are important (but cannot be
calculated)? Example: once it has been decided how a particular flow
shall be regulated, would it not be useful to leave a sort of "comment"
in which you explain the rules or ideas used to decide that the flow
would be regulated this way, and the conditions under which this way of
deciding should be reviewed? True, in this case the model would not
learn (and anyway, the kinds of learning that result in modifications to
the model's structure would be hard to put into formulae; however, the
conditions under which such re-modelling would become important may be
figured out and modelled), but the resulting process would remind agents
that there is something to be learned (that's why I like to think of it
as a "fuse").
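
A minimal sketch of such a "fuse" object in Python (hypothetical; no
current SD package offers this, and the names and fields are my own
invention):

from dataclasses import dataclass

@dataclass
class Fuse:
    element: str       # the flow or policy the fuse annotates
    rationale: str     # the rules or ideas behind the chosen formulation
    review_when: str   # the condition under which the decision should be reviewed

fuses = [
    Fuse(
        element="hiring_rate",
        rationale="Assumes labour supply is not a constraint.",
        review_when="Vacancy duration exceeds 8 weeks for two consecutive quarters.",
    ),
]

for fuse in fuses:
    print(f"[{fuse.element}] review when: {fuse.review_when}")

The model itself calculates nothing from the fuse; it only reminds the
user that a formulation carries assumptions that are due for review.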

But maybe I am mistaken: my argument starts from (and depends on) saying
that we model in order to construct. Sometimes I have the impression that
in SD this is not always the case. And if one models only to understand
(but not to construct), then business processes (and the artifacts
embodying them) would not take the form of the SD model, and the rest of
the argument would not apply.

Well, am I mistaken?

"Saludos",

Martin Schaffernicht
Universidad de Talca
From: "Martin F. G. Schaffernicht" <
martin@utalca.cl>


--
Martin F. G. Schaffernicht
Facultad de Ciencias Empresariales
Universidad de Talca

http://dig.utalca.cl/carpeta/Martin_Schaffernicht
"Martens"
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Martens" »

This is an interesting discussion about models and learning, which also
seems to make almost everybody give their views - including me, it seems.
The main problem I have with the direction the discussion takes, and with a
number of the arguments used, is that models don't learn; humans learn. The
use of modelling tools, and the theories and paradigms that are tested, has
to go through the cognitive intelligence of human beings for us to be able
to talk about learning. To me, the learning in system dynamics modelling
mainly lies in establishing the model and thereafter testing the actual
outcome against expectations, i.e. identifying areas of discrepancy and
their causes. If there are no discrepancies, there is not much learning,
just a confirmation of something already known which has been well
modelled - though perhaps a trivial model is the reason.

Best Regards

Hans Martens
martens@calitas.com
"Jim Hines"
Senior Member
Posts: 88
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Jim Hines" »

I mentioned earlier that I didn't think incorporating learning in a model
would jeopardize a model's problem focus. That struck Bill Braun as odd
enough to wonder if I was actually referring to models that really learn
("and thus change").

I think I was referring to that kind of model, but it's tough to say
because we don't have any famous models that learn. I was actually
thinking of some work in the OrgEv project. In this project we're focussed
on the problem that managers continually create bad policies (in
the SD sense). Why doesn't social learning (which is fundamentally
similar to biological evolution) result in better and better policies?
And what needs to change in an organization so that better and better
policies do "evolve"?

To explore this problem we model "people" as agents who can learn and
then change the structure of the model of which they are a part. I
**think** this is what Bill and others mean by a model that can really
learn. If so, we've created models that learn... and each time our
problem focus has emerged unscathed. We still are concerned that
managers create bad policies.
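
A minimal sketch of the flavor of this (my own toy in Python, not the OrgEv
project's actual models; the payoff function and parameters are invented):
agents hold a policy parameter, imitate the best performer, and mutate a
little - selection and variation over policies, i.e. social learning:

import random

random.seed(1)

def performance(policy):
    # Hypothetical payoff: the best policy is 0.7, unknown to the agents.
    return -(policy - 0.7) ** 2

agents = [random.random() for _ in range(20)]
for generation in range(50):
    best = max(agents, key=performance)
    # Each agent imitates the best performer, with imitation error as mutation.
    agents = [best + random.gauss(0.0, 0.05) for _ in agents]

print(f"mean policy after learning: {sum(agents) / len(agents):.2f}")  # near 0.7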

I'm puzzled (and I think Bill and others are, too) by this sub-thread
about learning and problem focus. Maybe the issue isn't learning but
what we all mean by "problem focus". For me a problem focus is
something a human has -- not a model. For example, the fact that
managers create bad policies is a problem that humans have; it won't
change no matter what we put in our models. The problem focus won't
change until either we solve the real problem, or (more likely, if more
sadly) our attention wanders off to some other difficulty in the world.

Jim Hines
jhines@mit.edu
"Ray on EV1"
Member
Posts: 29
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Ray on EV1" »

I have to thank John Sterman for awakening me to the ghosts in my machines.
As such, I need a more objective concept of learning. It would be
instructive to address an agent subjected to a process which results in
learning a behavior. So there are three elements: an agent which is to
learn, a training process, and a behavioral change.

To be able to detect whether the agent has learned the desired behavior, we
should be able to measure the difference in behavior before and after the
training. I am sure that we can construct an experiment which will show
that the hill-climbing machine could be defined as a learning machine under
these considerations.

What is more difficult is the question of intelligent agents. We can
consider the Turing test:
"It is proposed that a machine may be deemed intelligent if it can act in
such a manner that a human cannot distinguish the machine from another human
merely by asking questions via a mechanical link."

Under this guide, a machine could easily be considered intelligent but
still not be able to learn. An agent which can display intelligence could
simply be programmed to perform the relevant actions. It is a significantly
different programming exercise to design an agent which could acquire or
arrange the relevant knowledge on its own. An FM radio is programmed to
decode a radio-frequency signal into a piece of music. It is a small step
to add the ability for a user to tune into a specific channel to hear a
desired program. With a little more mechanical savvy, the user can push a
button to tune the radio to a specific channel, say 101.1 MHz. Now, let's
say we want the radio to be able to tune into the channel I want at the push
of a button. This task asks for much more than all the previous
accomplishments. In fact, the requirements for the task of learning the
user's needs constitute a new technology different from all the previous
ones, yet it must still interface with all of them.
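
A minimal sketch of what that last step involves (a hypothetical radio, my
own illustration, not any real device's interface): the radio logs which
station the user chooses in each daypart and offers the most frequent
choice as the "learned" preset:

from collections import Counter

history = Counter()

def user_tunes(daypart, station_mhz):
    history[(daypart, station_mhz)] += 1

def learned_preset(daypart):
    # The learned behavior: the station most often chosen in this daypart.
    choices = [(count, mhz) for (dp, mhz), count in history.items() if dp == daypart]
    return max(choices)[1] if choices else None

user_tunes("morning", 101.1)
user_tunes("morning", 101.1)
user_tunes("evening", 95.5)
print(f"morning preset: {learned_preset('morning')} MHz")  # -> 101.1

The decoding and tuning are fixed technology; the new ingredient is the
acquired preference that no designer hard-coded.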


Raymond T. Joseph, PE
RTJoseph@ev1.net
"Bill Braun"
Junior Member
Posts: 3
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Bill Braun" »

From the pen of: Jim Hines
> I'm puzzled (and I think Bill and others are, too) by this sub-thread
> about learning and problem focus. Maybe the issue isn't learning but
> what we all mean by "problem focus". For me a problem focus is
> something a human has -- not a model. For example, the fact that
> managers create bad policies is a problem that humans have; it won't
> change no matter what we put in our models. The problem focus won't
> change until either we solve the real problem, or (more likely, if more
> sadly) our attention wanders off to some other difficulty in the world.

Largely as an exercise to ameliorate my own confusion, I offer a brief
definition of terms for discussion.

Models that Learn

Models which add, delete, or edit their own equations, and/or add or delete
objects (stocks, flows, constants, tables, etc.), based on the model's own
outputs (represented in levels, and which may be subject to exogenous
analysis by one or more analytical processes before being returned to and
acted upon by the model as inputs).

(Based on my understanding, and as Nat commented, this is not yet
achievable in iThink, Stella, Vensim or Powersim - and I am not advocating
here and now that it should be.)

The implication of this appears to be that Models that Learn would
autonomously determine their own optimal structure in response to some
input, either from the model's levels (with or without exogenous analysis)
and/or from changes made by the users of the model (through STEP- and
PULSE-type functions, or simple changes in constant variables).

It would seem possible and feasible that some equations and/or objects
could be declared "off limits" to autonomous learning, thereby retaining
certain elements of structure, and possibly the model's problem focus
(inclusive of Jim Hines's comment that perhaps we need a better definition
of that term as well).
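
A minimal sketch of this definition in Python (a hypothetical mechanism of
my own construction; as noted above, no current SD package does this): a
model whose equation table can be edited in response to its own output,
except for elements declared off limits:

equations = {
    "inflow": lambda s: 0.2 * s["stock"],
    "outflow": lambda s: 0.1 * s["stock"],
}
OFF_LIMITS = {"outflow"}  # structure the model may never rewrite

def propose_edit(name, new_fn):
    if name in OFF_LIMITS:
        return False          # autonomous learning refused for protected structure
    equations[name] = new_fn
    return True

state = {"stock": 100.0}
for step in range(20):
    state["stock"] += equations["inflow"](state) - equations["outflow"](state)
    if state["stock"] > 300.0:
        # The model reacts to its own output by rewriting its inflow equation.
        propose_edit("inflow", lambda s: 0.05 * s["stock"])

print(f"stock = {state['stock']:.0f}")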

Learning from Models

Models whose equations and objects (stocks, flows, constants, tables,
etc.) remain the same unless and until the modeler intentionally changes
them. Users could affect the model's outputs through the values given to
variables via input fields (which may change the influence that functions
such as STEP and PULSE have in the model).

The implication of this appears to be that learning is an emergent
property of model building and the research that precedes model building.
It also appears to imply that changes in management thinking (spoken to in
an earlier post), although not guaranteed, are another emergent property of
model building.

Both of these implications refer to learning that is exogenous to the
model itself. Repeated runs of the same model could lead to multiple
insights, although all would have been derived from the same model.

Also implied is that multiple models could be developed for the same
problem (assuming clarity on the problem focus), reflecting different
points of view of the problem, all of which, taken together, would form
the aggregate insight into the stated problem.

Problem Focus

Observable and measurable events that:
- have been observed over a period of time
- are generally recognized to have an adverse impact on the organization
or part(s) of it
- are important to the organization and have the attention of managers


Comments and critique are welcome.

Bill Braun
From: "Bill Braun" <
bbraun@hlthsys.com>
Natarajan R C
Junior Member
Posts: 5
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by Natarajan R C »

John Sterman
Senior Member
Posts: 117
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by John Sterman »

Requiring that a model be able to rewrite its own equations before we will
deem it able to learn seems overly restrictive:

>"Models that Learn [are] models which add, delete, or edit their own
>equations, and/or add or delete objects (stocks, flows, constants,
>tables, etc.), based on the model's own outputs (represented in levels,
>and which may be subject to exogenous analysis by one or more
>analytical processes before being returned to and acted upon by the
>model as inputs)."

Human beings and our brains do not consist of equations and we do
not learn by rewriting our equations. (We may perhaps choose to model
the dynamics of the brain using equations, but these are models of
the brain, not the brain itself). Hence by the definition above
humans cannot be deemed to learn. While some cynics may suggest that
is correct, I conclude that it shows a confusion between defining a
phenomenon (learning) and a particular mechanism that might be able
to generate the phenomenon.

I once again suggest that learning is to be defined not by how it is
accomplished but by what happens; that we define learning in such a
way that it does not presume a particular mechanism; and that it not
be defined in such a way as to apply only to a subset of systems or
organisms. Most people would (rightly, in my view) be uncomfortable
defining learning capability as something that only people, and not,
say, chimpanzees, can do, based on the presumption that only people
have a certain set of structures while "lower" organisms do not. I
once again suggest we define learning operationally, in such a way
that one can make a judgment as to whether a particular entity
learned or not in a particular situation without having to know in
advance whether that entity is a human, a pigeon, or a machine (model).

John Sterman
From: John Sterman <jsterman@MIT.EDU>
zenabraham@aol.com
Junior Member
Posts: 18
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by zenabraham@aol.com »

Greetings,

To me, the concept of "learning" implies an awareness of change in knowledge.
Thus, for a system dynamics model to "learn" means not only that it must be
aware of what new information it has gained, but of what information it does
not possess. Then, it must be able to determine what information it needs to
gather, and then how to collect it.

Someone once said: "He doesn't know everything I know, only what I tell
him." That's true for this thread and for Artificial Intelligence as well.
Our boundaries are what we understand our thought process to be at this
point in time, which is another way of stating that we're still learning.

Thus, we're only capable of producing systems that replicate what we
understand "thinking" and "learning" to be. Let me offer a bit more
controversy with a simple rule: if we understand how the system we created
works, we've just defined the boundaries of the system. The "magic mark" is
to develop a system that organically evolves to a point such that we don't
understand it.

In closing, I would say we're at that point in modern society. It has
become populated with systems so complex that they're beyond our ability to
completely understand them. And by this I mean social systems, not
mechanical or biological systems of a "simple" order.

Thus, the focus should be on the creation -- first in small steps -- of
constructs, or SD models, that are capable of unpredictable behavior.

Is that possible? I think so. Such models will have a great deal of noise,
of course.

Zennie
From: zenabraham@aol.com

____________________________________________________________
Zenophon Abraham
Chairman and CEO
Sports Business Simulations, Inc.
zennie@sportsbusinesssims.com
510-444-4037
www.sportsbusinesssims.com
"Jim Hines"
Senior Member
Posts: 88
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Jim Hines" »

Only one quibble with Bill Braun's suggestion of what we call what.

A problem focus is really just the problem that the modeler is focused
on. Observability, measurability, general recognition, importance to
the organization -- none of these things is necessary or sufficient. A
student could have a problem because he feels he is at risk of being
thrown out of school. He could be wrong that he is at risk; the risk
(and his feeling of it) could be unobservable and unmeasurable; no one
else may recognize either his feeling or the risk; and finally his
feeling and/or the risk may not be important to the school or even to
him (maybe he doesn't really care if he gets thrown out or not). Even
so, he has a problem and he (or someone else) could focus on it, taking
a system dynamics approach.

(Note: Personally I think an SD project should focus on a problem that
is fundamentally important to someone, and so I would have liked to have
left the "importance" idea in the definition. But I have to admit that
the academic literature contains plenty of "problems" that don't appear
to be fundamentally important to anyone.)
Jim
From: "Jim Hines" <jhines@MIT.EDU>
"ALLOCAR SRASBOURG"
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "ALLOCAR SRASBOURG" »

Hi everybody

I have a question to ask and an observation to share about the subject of learning.

The two are more or less connected.

The question first.

As a businessman I have many friends in business, and I also have a board with administrators and family stockholders. While being very interested in SD, I have no intention of talking to any of these people about SD.

Why?

Because I know their reaction: they will not believe me, even if they have made the effort to understand the basics of SD.
Now why would it be so difficult to make them believe that SD can have any usefulness?

Even if I say that modeling does not forecast the future at all and that its only usefulness is to give insights, they will have very convincing arguments against it:

Most of them, having run many experiments, are convinced that the future is globally not predictable, and they can give me examples of a tiny event that had a side effect which, through positive feedback, changed the future completely.

They will say that whatever your sketch of the actual reality, you will always forget the tiny event that will make any of your forecasting completely obsolete.

Worse, they will say that forgetting that tiny event can make you take bad decisions, and that it is better to do nothing than to do bad things.

So is it not the very characteristic of SD - studying side effects, delays and feedbacks (particularly the positive ones) - that can convince people of the uselessness of studying the future unless it is evident?

The only argument I have found is that running an SD model is like having any experience in life. It can be good or bad depending on the experience (or the model), the past experiences you have already had, and the overall context you are in.

The positive side of SD modeling is that you can have many experiences in a short time; the negative side is that it is not exactly like life, in that it can have bugs or miss this tiny event.

One can say, too, that the experiences made using SD have finally proven to be good ones more often than bad ones.
It will still be necessary to prove that the cost of experimenting is less than the benefits.

But without other arguments I prefer not to talk about SD with friends or people in my business.
People will think that I derive some intellectual profit from it, that's all.

Are there other arguments for SD?


This question leads to an observation.
The difference between model learning and human learning is that the actions of a model that has learned may be determined by its learning; a human's are not.

It is the old battle between the Jansenists and the Jesuits: for the Jansenists and Pascal, the human being was completely determined at birth, to the point that you could say whether he was going to hell or to heaven; the Jesuits felt that the human being was free.
In short, human beings are FREE to be undetermined and to do foolish things.

Learning has a deterministic effect on a model, not on a human.

That means that learning is not everything.
In the past I have made many bad decisions even when I had all the information that could have permitted me to make better ones.

Why did I make these bad decisions?

There is one main reason:
I underestimated the positive effects of the good decision and overestimated the positive effects of the bad decision I made. And these under- and over-estimations are often huge.

There are of course other reasons: the influence of other people, general inertia, not to mention necessity, which is often a good influence.

All this to suggest that learning is not everything; what one does with it is what counts.

J.J. Laublé


From: "ALLOCAR SRASBOURG" <
allocar-strasbourg@wanadoo.fr>
DGPacker@aol.com
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by DGPacker@aol.com »

The Jim Hines/Bill Braun notes caused me to recall a time long, long ago
(when computers were quite new) when I went to a lecture by the mathematician
Norbert Wiener at MIT. One thing he talked about was the enormous amount of
money being wasted on unsuccessful attempts to develop a computer program for
language translation. He noted that this was a classic example of what we
would call system boundaries - that if the perspective were expanded so that
the computer and humans could both be components of a language translation
system, then an elegant system of translation was quite possible. The
computer could do the routine work very rapidly, with humans handling the
subtleties and nuances. So today we already have systems that learn and
change their structure - with the model and the humans both essential
components.

Dave Packer
From: DGPacker@aol.com
Systems Thinking Collaborative
"Thompson, Jim A142"
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Thompson, Jim A142" »

" Forrester, Sterman, Richardson, Hines, Thompson, et al. who
have, with consistency, noted that the best models are those that are
problem focused." From Bill Braun <
bbraun@hlthsys.com>

One objective of the application of system dynamics methods is to better
understand cause and effect in systems with feedback mechanisms. We
identify interesting dynamics with reference mode behavior. So we set out
to create a system dynamics model that simulates reference mode behavior.
When the model we create does simulate reference mode behavior, we improve
the likelihood of achieving that objective.

Jim Thompson
Economic & Operations Research
CIGNA HealthCare
900 Cottage Grove Road A142
Hartford, CT 06152
Phone: 860.226.8607
Fax: 860.226.7898
email: jim.thompson@cigna.com
Yaman Barlas
Member
Posts: 44
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by Yaman Barlas »

Part of this discussion (on learning) has dealt with "models which add,
delete, or edit their own equations..." as a definition of learning (1). Most
comments on this topic went on to state that "such self-altering models are
not achievable in Stella, Vensim or Powersim" (2). I would argue that both of
the above statements are wrong/misleading:

1- Defining model learning as the "ability to add, delete, or edit its own
equations" is very restrictive and philosophically problematic. A small
exercise will quickly prove that one can find excellent instances of learning
that do not obey such a "software-oriented" definition of learning.
Furthermore, the very notion of "ability to add, delete or edit its own
equations" is quite ill-defined. It can mean very different things depending
on what software/method/tool one is dealing with. (See below.)

2- The second assumption is simply technically wrong. The mistake follows
from the narrow, software-oriented definition used above. As I briefly
implied in an earlier mail, most system dynamics models DO alter their own
equations, depending on the current state of the system. As a matter of fact,
a nonlinear formulation like flow = f1(states)*f2(states) states in a sense
that the very "form" of this flow equation changes, depending on the
functions f(.) and the states. (Assume the f's and variables are all vectors,
for generality.) Depending on the values of the states, this equation can
"become" a linear equation for a while, then behave like a quadratic
equation, then perhaps like a constant, etc. Nonlinear equations state in
one sense that the "forms of the equations" depend on the states (when
contrasted with linear equations). Now you can come up with very
sophisticated versions of this type of "equation altering" depending on your
needs. For instance, you can write your equations such that, depending on
the state of the system, some variables "drop" from equations (by having zero
coefficients) and new links and new variables are introduced by turning zero
coefficients into non-zero values, etc. So equation altering and the adding
and deleting of links and variables as a result of system feedback are all
quite possible, at different levels of sophistication, in SD models.
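
A minimal sketch of the flow = f1(states)*f2(states) point in Python (the
particular f's are invented for the illustration): a single fixed equation
whose effective form is nearly linear at small stocks and saturates - the
linear term effectively "dropping out" - at large ones:

def flow(stock):
    f1 = 0.3 * stock                          # linear part
    f2 = 1.0 / (1.0 + (stock / 50.0) ** 4)    # ~1 for small stocks, ~0 for large
    return f1 * f2

for s in [1.0, 10.0, 50.0, 100.0]:
    print(f"stock={s:>5.1f}  flow={flow(s):6.2f}  effective gain={flow(s) / s:.3f}")

The equation text never changes, yet the effective gain of the flow with
respect to the stock falls by an order of magnitude across the state space.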
Yaman Barlas
From: yaman barlas <ybarlas@boun.edu.tr>

---------------------------------------------------------------------------
Yaman Barlas, Ph.D.
Professor, Industrial Engineering Dept.
Bogazici University,
34342 Bebek, Istanbul, TURKEY
Fax. +90-212-265 1800. Tel. +90-212-358 1540; ext.2073
http://www.ie.boun.edu.tr/~barlas
SESDYN Group: http://www.ie.boun.edu.tr/labs/sesdyn/
-----------------------------------------------------------------------------
George Richardson
Member
Posts: 23
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by George Richardson »

I must admit to more than a little pique at this ongoing discussion.
Sorry about that; please forgive what follows if it bugs you.

It is probably far too self-centered and impolite for me to point out
that the only example (to my knowledge) of a system dynamics model put
forward to address the question at hand was one I built and talked
about at the Bergen conference. (Hines has done some nice things on
learning using genetic algorithms attached to system dynamics models,
and others have used neural nets and other techniques, but I think the
Bergen model is the only straight-up system dynamics model crafted to
imitate learning.) Yet the discussion has seen a wide variety of
assertions based neither on the Bergen model nor on any other system
dynamics modeling effort, and most of them have not been based on any
literature. We could get most of the discussion we've seen by
interviewing scholarly-looking people on a street corner.

I got into trouble before asking us to tie ourselves down to literature
in our online discussions -- people didn't want to be so tied down.
But golly gosh (there's that pique getting really upset), don't we have
to go beyond opinion to advance ourselves? Examples:

Alan Graham (I believe) pointed out that continuous adaptive control
machinery has elements of "learning," but we still hear people saying
you have to rewrite equations (even though you and I don't do that when
we learn, as far as I can tell).

We haven't heard anybody talking about what might get learned: it
occurred to me that the model I built looked like it was learning and
forgetting something, but that something was certainly not Jay's name,
or the year Napoleon was defeated at Waterloo. What kinds of things
could continuous aggregate models like system dynamics models learn?
(Could they learn Jay's name? Could they change their goals, or the
strategies they use to get to their goals?)

Nobody has picked up on the empirical observation I made in the course
of the Bergen work: that it was far harder to get a model to "perceive
itself" than it was to write equations that changed dominant structure.
We could spend quite a while talking about whether it's possible, and
under what circumstances, for a system dynamics model to perceive
aspects of its own behavior. Getting it to act on the perceptions
seems to be the easy part.
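
To make the "perceive itself" point concrete, here is a minimal sketch in
Python (my own illustration, not the Bergen model; the input and the
smoothing time are invented): a model forms a perception of its own output
by first-order smoothing and infers a trend from the gap - and the
perception necessarily lags the behavior it reports on:

dt, smooth_time = 0.1, 2.0
perceived = 0.0
for step in range(600):
    t = step * dt
    actual = 2.0 * t                              # the model's own output: a ramp
    perceived += (actual - perceived) / smooth_time * dt

trend = (actual - perceived) / smooth_time        # implied rate of change, -> 2.0
print(f"actual = {actual:.1f}, perceived = {perceived:.1f}, perceived trend = {trend:.2f}")

Acting on the perception (say, switching structure when the trend crosses a
threshold) is then a single IF away; forming a faithful perception is the
hard part.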

Nobody has gone to the continuous literature to tell us what smart folk
have been able to do, not even the tiny bibliography I cited in my
Bergen PowerPoint presentation:

> Self-learning policies in Urban Dynamics, Readings in Urban Dynamics
> II (1975).
> DeJong, Learning to plan in continuous domains, Artificial
> Intelligence 65 (1994).
> Ram & Santamaria, Continuous case-based reasoning, Artificial
> Intelligence 90 (1997).
> Richardson, Andersen, Maxwell & Stewart, Foundations of Mental Model
> Research (1994).
> Powers, Behavior: The Control of Perception (1973).

I promise not to send an email like this to the list very often. I do
want wide-ranging conversation, and I see it as a necessary, enjoyable
precursor to movement in the field. But once in a while, I (or, I hope,
someone else) must write in to remind us that we are scholars, who need
more than opinion to help us "rewrite our equations."

And one more thing: could we agree not to assert that something is
impossible, unless we're absolutely dead sure it really is impossible
and nothing of value would be learned if some poor, misguided soul
tried to do it anyway?

There. I feel much better.

...George

*George P. Richardson
*Rockefeller College of Public Affairs and Policy
*University at Albany - SUNY, Albany, NY 12222
*
gpr@albany.edu *518-442-3859 *http://www.albany.edu/~gpr
"Jim Hines"
Senior Member
Posts: 88
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by "Jim Hines" »

No one has ever gone wrong following George's advice, and many, many
people have become better people for having embraced George's views.

Unfortunately, it's easy to misinterpret George's recent injunction --
I'm writing because a respected friend did just that. Fortunately, I
have an almost mystical bond with GR, and am prepared to explain what
George really meant.

Very simply, George asked people to talk about what they know, to talk
about it in a way that lots of people can understand, and to talk in a
way that keeps the conversation moving.

He didn't mean to say that you shouldn't voice an opinion unless you can
back it up with citations from the literature. On the contrary, George
believes that the literature itself is just a bunch of opinions people
have written down. And, in writing our emails, we're also writing down
our opinions -- all to the good as far as GR is concerned.

George asks just two simple things. First, that we explain why we hold
our opinions. The academic literature does that, and George just wants
us to explain, too (though not necessarily in the same odd way that an
academic article would). Instead of only saying, "In order for a model
to make an impression on a client, the client has to participate in
creating the model," go on to tell what experience makes you think that
is so. (And remember, designing and administering a survey happens to
be the kind of experience that some academics seek out. That's not a
bad thing, though it may be odd and not the sort of experience most
people usually ... well, experience.)

Second, George would like us to respond to each other, instead of just
offering opinions willy-nilly. Articles do this, and George thinks we
should, too (again, not necessarily in the same way as academic articles,
which tend toward a style requiring more effort and yielding less
mileage than perhaps one would hope). If you think someone is full
of hooey, don't just offer another opinion -- say that the person is
full of hooey (in a nice way) and explain why. And if you think someone
is partly right, say that, too, and why. But most of all, make sure you
know what the other person is saying and why -- so you might first post
a question to the person and then explain why he's full of hooey.

Jim
jhines@mit.edu
DavidPKreutzer@aol.com
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

Can system dynamics models "learn"

Post by DavidPKreutzer@aol.com »

Dear Jim and George,

I have nothing but the highest admiration and respect for George, and I
really enjoyed this blast of setting things "right" in the good
old-fashioned lightning-bolt-from-the-gods-on-Mount-Olympus style that most
of us were brought up on.

But I also had to explain to some business colleagues and younger employees
that you probably didn't really mean to suggest that no one else in the
world could discuss this issue without having previously read and
"complied" with emails or comments you referenced from years ago.

But George, I am delighted to discover you have developed such a talent for
expressing curmudgeonly academic "pique" in the decade since I saw you
last. Well done! And I had thought I was the most dramatic fellow in the
field and maybe the fifth crankiest! (LOL) Now I realize that if the old
mild-mannered, ever diplomatic, Gandhi-like, and serene George can pop off
an email like this about an issue as "hot" and consequential as whether
systems can learn or not, I may no longer even be in the running.

Good to hear from you again. Please don't think from the laid-back mood of
this email that I have lost my ability to get passionate about these issues.
I am hoping to develop a couple of intense emails of my own, perhaps next
week, but it may take me some time to work up my mood, as I have been in
other arenas so long that I may be a little out of practice.

If I had known you guys were having this much fun here I would have come back
a long time ago!

Cheers,
David Kreutzer
From: DavidPKreutzer@aol.com