JJ (and all)...
To me, learning in an SD model is about reinforcing. That does represent a
key aspect of learning. But does it meet all “learning” needs?
Deterministic models require the modeler to specify all aspects of the
system. Therefore there are limits to the ability to learn in ways that
move outside specified structure, parameters and model boundaries. For
example, one can learn how to open bottles, i.e. open one bottle, then the
next, and so on, until one has the facility to easily open most bottles. This
kind of learning is what an SD model is good at doing. However what the
model cannot do is to move outside the model/system to similar but not same
situations and when confronted with a new problem—be able to translate the
bottle-opening skill into the ability to turn on a water faucet.
In an SD model there can be exponential growth of learning…the limit there
is the body of knowledge perceived as constituting expertise in opening
bottles…so learning slows as you have “learned” all the bottles…so in the
SD model there is nothing to allow you a flash of insight that goes beyond
the body of knowledge and takes you, and the body of knowledge, to the next
level. Exponential growth with a limit…the classic “learning
curve”…does not allow for going beyond the limit.
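That saturating pattern can be sketched as a tiny stock-and-flow simulation. This is a minimal illustration with hypothetical parameter values (the limit, rate, and step size are made up, not drawn from any particular model): knowledge reinforces itself at first, the balancing loop takes over as the limit nears, and nothing in the specified structure can ever carry the stock past the limit.

```python
def simulate_learning(limit=100.0, rate=0.5, initial=1.0, steps=20, dt=1.0):
    """Euler-integrate the classic learning curve dK/dt = rate * K * (1 - K/limit)."""
    knowledge = initial
    history = [knowledge]
    for _ in range(steps):
        # Reinforcing loop (rate * K) dominates early; the balancing
        # factor (1 - K/limit) dominates as K approaches the limit.
        flow = rate * knowledge * (1 - knowledge / limit)
        knowledge += flow * dt
        history.append(knowledge)
    return history

h = simulate_learning()
# Knowledge rises in an S-shape and approaches, but never exceeds, the limit.
```

However long the simulation runs, the stock only asymptotes toward the specified limit; the "flash of insight" that redefines the limit itself is simply not in the structure.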
This reminds me of a recent lecture I attended on contingency planning for a
terrorist attack on the electric supply grid where I found myself frustrated
by the rigidity of the modeling (not SD) and planning approach. The
contingency plans were of the “we do x when y occurs” character and
necessitated endless specifications of x and y possibilities and of their
individual probabilities of occurrence as defined by the modeler. And so
the first thing that occurred to me was that other “y”s not yet specified
or provided for could occur. I wondered what accommodation has
been made within the system for these unforeseen events. What I am trying
to say is that if this is the case, there have to be mechanisms in place to
deal not with a specific event per se, but with any deviation from the norm
that calls for extraordinary responses. In the face of an extraordinary
event, how does one build enough flexibility into a system to take the
pieces that have been prepared in advance and re-assemble them into
different patterns as needed to facilitate recovery? I
suppose that what I am trying to say here is that I see vulnerable systems
such as electric power grids or transportation supply chains (and by
extension, SD models, and for that matter the dynamics of learning
organizations) as being composed of nodes and links. The problem that I am
seeing is how to make these nodes and links mutable, nimble and adaptive.
“Hill-climbing” approaches to learning (e.g. by running many simulations)
enable the model/learner to use experimentation to approach some optimal
condition given the specified goals and means (the model structure and
parameters). This represents one classic form of learning, but won’t allow
for the unpredictable—such as a terrorist attack, or simply what to do when
confronted with thirst and a water faucet when all you know is how to
unscrew and open bottles—though you’ve optimized your ability to open
bottles.
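As a concrete sketch of what hill-climbing can and cannot do, here is a minimal climber in Python. The objective function, step size, and starting point are hypothetical; the point is that the learner only ever explores small moves within the parameter space the modeler specified, so it converges on the best bottle-opening it knows rather than discovering faucets.

```python
import random

def hill_climb(score, start, step=0.1, iterations=1000, seed=0):
    """Repeatedly try a small random perturbation; keep it only if it improves."""
    rng = random.Random(seed)
    best = start
    best_score = score(best)
    for _ in range(iterations):
        candidate = best + rng.uniform(-step, step)  # small local move only
        candidate_score = score(candidate)
        if candidate_score > best_score:             # accept improvements
            best, best_score = candidate, candidate_score
    return best

# Hypothetical goal with its peak at x = 1: the climber finds that peak,
# but anything outside the specified search structure stays invisible.
peak = hill_climb(lambda x: -(x - 1.0) ** 2, start=5.0)
```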
Agent-based simulation using genetic algorithms may prove to be one way to
move beyond deterministic model limits and to accommodate out-of-the-box
solution/learning in organizations.
This gets at the kind of “mutation” that enables moving to new levels of
evolutionary development (a la Stephen Jay Gould and like-minded others in
the field of evolutionary theory)—the LIFE to which J.J. Lauble’ refers.
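A toy genetic algorithm illustrates how mutation can generate candidates that were never present in the starting population. Everything here is a hypothetical sketch, not a reference implementation: the bit-string encoding, the "one-max" fitness (count the 1s), and the rates are all made up for illustration.

```python
import random

def evolve(fitness, pop_size=30, genes=8, generations=50, mutation_rate=0.1, seed=1):
    """Selection keeps the fitter half; mutation produces variant offspring."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]      # selection
        children = [
            # Each gene flips with small probability: the "mutation" that can
            # produce a response no member of the original population had.
            [1 - g if rng.random() < mutation_rate else g for g in parent]
            for parent in survivors
        ]
        population = survivors + children
    return max(population, key=fitness)

best = evolve(fitness=sum)  # maximize the number of 1-bits
```

Unlike the hill-climber, mutation lets the population stumble onto configurations outside anything it started with, which is the "out-of-the-box" mechanism the text is reaching for.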
There is an abundance of issues to be dealt with. For example, in the
electric power grid/terrorist attack situation, should the contingency
planning process be one of choosing the predictable and segmenting it so
that, along with the correct toolbox, it can be re-assembled as
needed—albeit perhaps in new and not previously thought-of ways? Is it
possible to meet the challenge of the unpredictable by using mutations? Is
"the plan" robust enough to meet that challenge—to deal with the
unanticipated event (9/11)—out-imagining the clearly nimble terrorist who
can think and do the “unimaginable”? How do we construct the stockpile of
segments? Can there be a storehouse of "spare (and perhaps new) parts" for
various (and perhaps new) types of re-assembly?
Can this kind of “nimble” (out-of-the-box) preparation for recovery of a
system be cost effective if it is not built in significant measure on
existing infrastructure or common practices within the system (i.e. company
or industry)? What economic and political costs are acceptable to the many
affected businesses, communities, and regulatory bodies?
I believe that the issues of mutation (new responses of system actors)
within shifting environments (new conditions in system climate) are
universal in methodological terms—how to build models and real-world systems
that “learn” in such a way as to be rapidly flexible and adaptive to the
unpredictable.
Marsha Price
Marsha Jane Price
Research Affiliate, System Dynamics
Center for Transportation and Logistics
Massachusetts Institute of Technology
Telephone: +1.617.742.3221
Mobile: +1.617.642.7900
Alternate e-mail: mjprice@MIT.edu