Unwarranted Criticism

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
Locked
André Reichel
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by André Reichel »

Hello all,

It is true that SD does not rely too heavily on "empirical evidence or
previous knowledge of the subject". SD models do not look for parameters,
time series, or statistical data as much as other methods do (especially
econometrics). The view of SD is a structural view. This means that an
SD model portrays reality in the light of certain problems and questions.
Often these are new problems and questions which have never been dealt with
in the past, so no adequate statistical or empirical data is available.

In these cases SD has a special strength, because it relies on structure
(stocks, flows and their interconnectedness), which can be determined more
easily than numerical ("hard") data. Structural knowledge is often widely
available throughout the portrayed real-life system and can be obtained
through interviews, news reports, etc. When dealing with complex systems,
parameter values are usually of little importance.

Take an organism, for example. Within certain boundaries, it will always
tend towards an equilibrium state, regardless of how parameters are set. This
is common to all complex systems, such as national economies, enterprises and
so on. In SD it usually does not matter whether a parameter value is 10.713 or
10.959 -- in econometric models this is a key issue, which involves a lot of
research and sometimes high-flying arguments.
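To make this concrete, here is a minimal sketch of that idea (my own illustration in Python, not part of any published SD model; the goal, initial stock and horizon are invented, and the two adjustment times are borrowed from the numbers above). Both runs settle at the same equilibrium; the parameter difference only shifts the timing:

# A simple first-order goal-seeking stock-and-flow structure reaches the same
# equilibrium regardless of a modest change in its parameter; only timing shifts.

def simulate(adjustment_time, goal=100.0, stock=10.0, dt=0.25, horizon=60.0):
    """Euler-integrate flow = (goal - stock) / adjustment_time."""
    trajectory = []
    for _ in range(int(horizon / dt)):
        flow = (goal - stock) / adjustment_time
        stock += flow * dt
        trajectory.append(stock)
    return trajectory

if __name__ == "__main__":
    run_a = simulate(adjustment_time=10.713)
    run_b = simulate(adjustment_time=10.959)
    # Both runs approach the same equilibrium (the goal); the parameter only
    # changes how quickly they get there.
    print(f"final value, adjustment_time=10.713: {run_a[-1]:.2f}")
    print(f"final value, adjustment_time=10.959: {run_b[-1]:.2f}")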

If you are dealing with a common problem, where a large chunk of empirical
and statistical data exists, econometrics might be a better method of
simulation. If you are dealing with new and complex problems and are mainly
interested in understanding a system, SD is the right choice.

For further reading I highly recommend an article by Dana Meadows, "The
Unavoidable A Priori", in Randers (1980). It deals with the differences
between SD and econometrics.

Best regards
André Reichel
Universität Stuttgart, Germany
A.Reichel@epost.de
"Keith Linard"
Junior Member
Posts: 9
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by "Keith Linard" »

John Swanson wrote:

> While still slightly stunned by this, I know I have met people with
> exactly this view. Why does this mis-representation of SD persist? Why
> is it there at all?

In part this mis-representation reflects a belief, perpetuated by
fundamentalists in both the econometrics and system dynamics camps, that
SD and econometrics are in opposition to each other. Both techniques have
their place, and they can be used to complement each other. John Sterman, in
Section 11.5 of his excellent text Business Dynamics, notes that
econometric analysis can assist the SD modeller in choosing parameter
values.

More seriously for the SD profession, this mis-representation is also
perpetuated by the way some system dynamics modellers misuse qualitative
variables, with uncritical multiple application of judgement converters
(Effect_of_X_on_Y * Effect_of_Y_on_Z * Effect_of_Z_on_A) and with uncritical
application of cognitive algebra (e.g., the automatic assumption that such
judgement converters operate in a multiplicative rather than additive,
factorial or averaging way). Indeed, one model presented at the 1999
ISD Conference in Wellington had no stocks at all ... it was simply
qualitative converter multiplied by qualitative converter! The
author presumed that, because it was built with SD software, it must be
an SD model. This is perhaps an extreme example, but extensive misuse of
qualitative variables is not uncommon. Such voodoo modelling gives SD a
bad name.
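As a rough illustration of the cognitive-algebra point (my own sketch, not taken from any of the models discussed, and with invented effect values), compare how differently the same three judgement converters combine under a multiplicative versus an averaging assumption:

# The same judgement converters give very different combined effects
# depending on the assumed "cognitive algebra".

def combined_multiplicative(effects):
    """Combine effect multipliers by multiplication (a common default)."""
    result = 1.0
    for e in effects:
        result *= e
    return result

def combined_averaging(effects):
    """Combine effect multipliers by simple averaging (an alternative assumption)."""
    return sum(effects) / len(effects)

if __name__ == "__main__":
    # Three hypothetical judgement converters, each a modest 30% effect.
    effects = [1.3, 1.3, 1.3]
    print(f"multiplicative: {combined_multiplicative(effects):.2f}")  # about 2.20
    print(f"averaging:      {combined_averaging(effects):.2f}")       # 1.30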

Sterman's book has some valuable advice regarding the development, use and
validation of judgement converters. Geoff Coyle's plenary address to the 1999
ISD conference also offers very good advice in this regard. However, I suggest
that Conrad Nuthman's paper in System Dynamics Review, Vol. 10, No. 1
(Spring 1994), "Using Human Judgment in System Dynamics Models of Social
Systems", addresses this much more comprehensively. This paper should be
mandatory reading for every aspiring system dynamics modeller. Better
understanding in this area will minimise the likelihood that our models are
"mere plausible conceptual nonsense", and remove the basis for the
mis-representation of SD.





Keith Linard
Director
UNSW Centre for Business Dynamics & Knowledge Management
University of New South Wales (ADFA Campus)
CAMPBELL ACT 2601 AUSTRALIA
Phone: -61 -(0)2 -6268-8347
Fax: -61 -(0)2 -6268-8337
Mobile: -61 -0412-376317
Email:
k-linard@adfa.edu.au
André Reichel
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by André Reichel »

Hello all,

Ted made some good points about some of my earlier comments on this topic.
One should always try to sketch the reference mode over time. This is an
essential first step and it clarifies your dynamic hypothesis about the
system's behavior. If you have historical data, then the reference mode
should, of course, fit it. The problem in this matter is often the lack
of precise data for certain variables of interest. As I mentioned, these are
the cases where SD has its greatest potential, because it does not need
overly precise data. The general behavior of a system is much more important
than the precise numerical values. Therefore, an SDist can build a first
learning model very quickly and gain some insights about which variables are
of importance to the model. But it must also be recognized that this
procedure gives way to the criticism stated in John Swanson's email, that SD
"make[s] little use of empirical evidence or previous knowledge of the
subject".

Furthermore, I made some comments about the structural view of SD. As Ted
said, SD relies on knowledge of a certain problem, and this is absolutely
right. This knowledge is structural and operative knowledge about how a
system is built, how its elements are connected and which decision rules are
in place. However, the attempt to determine the structure with the goal of
reproducing the reference mode can be counter-productive in terms of system
understanding. If the structure is portrayed correctly and the dynamics do
not match the reference mode, you have two choices left: first, change
parameters and/or structure until the behavior and reference mode match;
second, assume that the structure is correct but your reference mode is not.
Then you have to rethink your dynamic hypothesis and challenge your mental
models of the system. If we stick too closely to our hypothesis, perhaps
strengthened with hard data, we might miss the opportunity to gain new and
important insights. So, my suggestion would be to portray a system's
behavior as closely as possible and to make sure it matches historical data
(if available), but also don't become too entangled with numbers and
"obvious" truths about a system. Of course, this will not save us from
criticism from other modeling schools.

I want to underline Ted's last comment very strongly: "I believe that if
you have any problem that changes over time use SD." In these cases SD is
usually the right choice, but if some other methods are easier to
implement or more economical in use, we should not be too afraid to use
them. In some earlier postings on this list there have been intense
discussions about when to use SD and when not; maybe they are worth a look.

The topic raised by John Swanson, about unwarranted criticism of SD, is an
important one. Every SDist faces certain prejudices about his or her modeling
technique. There are numerous articles and papers dealing with
this. At some point I thought the question of why to use SD had already been
answered. Now it seems as if we have to prove the benefits of SD with every
model we build and every insight we gain.

Best regards
André Reichel
Universität Stuttgart, Germany
A.Reichel@epost.de
"Roberto Vacca"
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

unwarranted criticism

Post by "Roberto Vacca" »

Dear John Sterman:
Thank you for your SD3408 - very sharp, honest, informative. I used to be a
member of the Club of Rome (I resigned in 1981, objecting to a lack of
quality control over current reports) and I did analysis work on WD and on
the Mesarovic-Pestel model. Few people recognized that the latter at least
correctly forecast that, worldwide, the portions of primary energy supplied
by oil, coal, gas and all others would stay constant from 1975 for the next
quarter century. WD - so interesting and stimulating - lacked similar
success.
At ISIS (Institute for Systems Integration Studies) we've done work on
integrating different systems analysis approaches. In urban modeling
we used input/output matrices, SD, and logistic substitution
analysis (based on Volterra-Lotka equations, which have a good pedigree of
forecasting success) concurrently, with interesting results.
I agree that BAD SD deserves to be flogged, just like bad systems
engineering and bad anything. My objections are especially against:

1 - Use of soft variables which cannot be measured directly (see William of
Occam: "Entia non sunt multiplicanda praeter necessitatem");
2 - Reliance on empirical relationships (also on regression analysis) where
no cause-effect relation is apparent, while multiple feedback loops are
taken into account, often without a (certainly hard to get) clear
perception of the parameters involved. Volterra equations (apart
from their often good forecasting success) do mirror a well-understood
mechanism of a population filling a niche (valid for populations of
animals, artifacts, energy sources, greenhouse gases, etc.; a minimal
sketch of that mechanism follows this list);
3 - Causal loop diagrams and their facilitated successors: EUCLID (Easy
Understanding of Causal Loop Inherent Dynamics), which aims to make multiple
feedback loops explicit and easy to grasp for non-experts, even though those
loops are to a large extent intuitively postulated and not analyzed
quantitatively or proven by means of adequate calibration and validation
supported by actual outcomes.
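As a rough illustration of the niche-filling mechanism mentioned in point 2 (my own sketch in Python with arbitrary parameter values, not one of ISIS's calibrated models), a single-population logistic looks like this:

# Logistic growth: a population's growth rate falls as it fills its niche.

def logistic(initial, capacity, growth_rate, dt=0.1, horizon=30.0):
    """Euler-integrate dS/dt = r * S * (1 - S / K)."""
    s = initial
    path = []
    for _ in range(int(horizon / dt)):
        s += growth_rate * s * (1.0 - s / capacity) * dt
        path.append(s)
    return path

if __name__ == "__main__":
    path = logistic(initial=1.0, capacity=100.0, growth_rate=0.5)
    # The familiar S-curve: slow start, rapid middle, saturation near capacity.
    for step in (0, len(path) // 2, len(path) - 1):
        print(f"step {step}: {path[step]:.1f}")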

best

Roberto Vacca
Rome, Italy
From: "Roberto Vacca" <
mc4634@mclink.it>
"tl7886"
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by "tl7886" »

Hello,

I have been involved with SD for a little over a year and a half, so please
correct me if I am wrong. I tend to disagree with André's statement that
"it is true that SD does not rely too heavily on empirical evidence or
previous knowledge of the subject." I believe that the first step in an SD
study would be to start drawing graphs over time (reference modes) of the
historical data. At times the reference modes might be hypothesized, but
often they should come from experts with previous knowledge of the subject.
Perhaps I misunderstood André's statement and need clarification. André
furthermore emphasizes SD as a structural viewpoint and says that "Structural
knowledge is often widely available throughout the portrayed real-life
system and can be obtained by interviews, news reports etc." This is true
in the sense that the second step in an SD study would be to fit some sort
of basic structure to the reference modes. As SD modelers, however, we
should not attempt to determine the structure without the goal of having it
reproduce the reference modes (i.e., the historical data). This suggests to
me that SD relies very much on empirical evidence or previous knowledge of
the subject. As a final note I will comment on André's statement that "If
you are dealing with a common problem, where a large chunk of empirical and
statistical data exists, econometrics might be a better method of
simulation. If you are dealing with new and complex problems and are mainly
interested in understanding a system, SD is the right choice." I believe
that if you have any problem that changes over time, use SD. Any comments or
criticisms of my reply are welcome.

Cheers!
Ted Lawrence, MPP
Rockefeller College, University at Albany
From: "tl7886" <tl7886@msn.com>
Alexander Leus
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by Alexander Leus »

Hi Ted,
I am in agreement with you. I spent many years of my early engineering
days modeling engineering systems with linear and non-linear
differential equations etc., and worked my way into the really hard
stuff called business system dynamics. The approach of SD methodology
is, in my opinion, identical to that of understanding engineering systems.
There is one big difference: dynamic modeling of business systems is
more complex because you have people issues etc. that do not necessarily
follow the laws of physics (ha ha). In all seriousness, I agree with
you: "Garbage In, Garbage Out".

My other comment is that when you have a hammer, there is a tendency to look
for a nail. With my systems background I did have a tendency to always
take the SD approach, but there are other approaches depending on the
application and the audience you are working with.

Alex Leus
leusa@tds.net
"George Backus"
Member
Posts: 23
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by "George Backus" »

I am unsure whether deciding that econometrics is wrong, SD is right, and
paradigms are insurmountable buys the SD community anything. In our
collective, possibly biased view, we all believe SD (causality and feedback)
has much to offer the world -- many problems could be avoided, many a crisis
could be circumvented, if only the feedback dynamics were considered.

Having practiced almost exclusively SD-based consulting for the last 20+
years, I still do not have the courage to state that I use SD during client
meetings unless I know beforehand that the group is 100% sympathetic to SD.
I would go so far as to say that my organization's effectiveness in
providing successful solutions would drop to nil if my use of SD were widely
known in the client organization. SD has its circle of "friendly companies"
that show up often in the SD discussions and literature. There are the
fringe studies where the SD moniker did not matter, but as a mainstream
"useable" tool, SD has a bad reputation.

"System Thinking" fares a bit better because it is not in direct competition
with other methodologies. It is still usually considered "flaky" but no
more so than the other "role-playing" touchy-feely psychology" antics
internal HR departments and other consultants inflect upon organizations.
Statistics, optimization, and econometrics are deemed the solid path to the
"truth."

What separates "them" from "us"? Certainly, "they" have quacks and hacks
just like "we" do. However, for them, the "losers" are just consulting
groups or individuals to be ostracized. For us, the entire field gets
placed on the blacklist. What's the deal? I argue it is theory and
methods. Good statisticians do a good job. They know their limitations and
how to interpret their results. The same is true for good econometricians
and those who use optimization. Note that many "good" SD studies, such as
those we often see coming from John Sterman, PA Consulting, and many others,
do include statistics, econometrics and even optimization. But these
side-components are then just part of what we as SDers see as a more
relevant feedback whole. At a conceptual level, these other fields are
subsets of our (SD) field. They are tools we should use when appropriate,
but only as a part of our efforts. In other words, the paradigm issue is
more Newton versus Einstein than witchcraft versus science. Our
problem is that "we" look like witchcraft.

Other fields have a body of theory that all practitioners "should" know.
These are often biased, idealized and inadequate for "real" work, but they
do act as the basis for applications, much like pure versus applied science.
For "real" efforts, there is a body of written methods that describes what
pitfalls to consider and what limitations must be recognized when making
assumptions, using data, or drawing conclusions. Practitioners don't have
to follow or agree with all that is written, but at least it is there as a
reference, a convention, from which comparisons can be made to pass judgment
on validity and "best practice". The methods don't have to be "right"; they
just have to be justifiable and workable.

Over the decades, the SD community has produced a wide variety of solid,
rigorously developed literature that could be assembled into our basis for
theory and methods. (Certainly, John Sterman's Business Dynamics would be
one of those works.) There was some semblance of this approach in the
"certification thread" of a year past, but who beyond SD aficionados cares
about SD certification? We need something outsiders can review and
critique, where we define and rigorously state: "This is us."

Just as in economics, there are disciplines with different criteria; I think
the same is true in SD. The practice of influence-diagram workshops or
Beer Game sessions does not need the same quality control/assurance steps
that a tactical decision simulation for international-terrorism control
efforts would. A yet different subset of the greater body is probably
appropriate to Systems Thinking exercises. Nonetheless, all efforts should
conform to the basic concepts of causality, delays, feedback, etc. that
define SD. We can define a set of "practices." We can define a set of
minimum-level, professional-level, and advanced-level considerations for
each type of practice. These are simply statements that embody what the
community sees as "defining" SD and that act as a voluntary guideline. They
also act as a basis for others to judge (and respect) what we do.

It is perfectly acceptable to have "alternative" perspectives. The primary
"considerations" would be basic and deemed the working convention.
Non-mainstream views can be stated and treated as the alternative positions
they represent. They would not detract from the credibility of the "basic"
theory and methods. When did you ever see two economists agree? Yet they
can both talk about marginal and average considerations with a common
understanding that outsiders can appreciate and judge.

Other professional organizations have permanent working committees that
deal primarily with "standards" associated with their field. We don't.
Maybe we should. If we did, maybe someday I could sell my wares as
SD-based and still be deemed credible compared to my economics competitors.


George Backus
Policy Assessment Corporation
14604 West 62nd Place
Arvada, CO 80004-3621
Bus: 303-467-3566
Fax: 303-467-3576
Cell: 303-807-8579
Email:
George_Backus@ENERGY2020.com
John Sterman
Senior Member
Posts: 117
Joined: Fri Mar 29, 2002 3:39 am

unwarranted criticism

Post by John Sterman »

Here's a simple dynamic hypothesis as to why negative characterizations of
system dynamics, such as the one that started this thread, persist.

Before describing it, I should note that these characterizations might
persist because they are true. My view, which will surprise no one, is
that good SD work is as fully rigorous as good work in any other modeling
discipline. As Keith Linard pointed out, and as the work of folks like
George Backus, Andy Ford, PA, Jack Homer, Brian Dangerfield, Ed Anderson,
and many, many others shows, good SD does draw extensively on data,
including numerical data, uses statistical/econometric tools when
appropriate, draws on optimization and other OR methods when appropriate,
and owes no apology to anyone as to its rigor.

So here's the hypothesis: Many non-SD modelers, particularly economists,
reacted very negatively to Jay's early work, particularly Urban Dynamics
and World Dynamics. We can debate later whether these particular works
were flawed, whether they drew insufficiently on data, whether Jay should
have used regression to estimate relationships and parameters, and so on,
or whether these criticisms were the manifestation of deeper underlying
differences and conflicts, such as an outsider encroaching on others'
intellectual turf or aversion to the conclusions of UD and WD. For the
purpose of this discussion it doesn't matter.

So suppose you were a non-SD modeler and decided that UD and WD were "bad"
models, and, since these were the most prominent and widely known SD
models, that SD as a technique was flawed. You would then say, as I heard
myself from someone just last week, something like "Oh no, not system
dynamics again. Those Forrester models were discredited in the early
1970s."

There are many logical errors in this type of statement, not least of which
is conflation of a particular model (e.g., World Dynamics) with an entire
modeling method (system dynamics). Even if we were to stipulate that World
Dynamics was a flawed model and discredited (something I don't agree with),
the leap to "so all SD models are flawed in similar ways" is obviously a
fallacy. It is analogous to saying "Here is one particular differential
equation model that is flawed, so all differential equation models are
similarly flawed." Or more plainly, it is fallacious to argue that "there
exists one bad apple, bad person, or bad novel, so all apples, people, and
novels are bad."

Nevertheless, having made the judgment that SD models are worthless, you
then decide not to read any more literature in the field. Similarly, since
SD work is unfavorably refereed in the journals of these fields, it doesn't
get published, and (we are not blameless in this dynamic) SD authors stop
trying to publish in those journals, stop attending those conferences, and
stop giving seminars in those departments.

The field of system dynamics continues to evolve, develop, and improve, but
because you get no new information, your opinion is never updated; because
your opinion doesn't change, you have no reason to seek new information on
the current work in the field.

That is, there are positive feedbacks that create a self-reinforcing
equilibrium in which people exposed to SD early may have formed a negative
opinion about it that then prevents them from learning about new
developments, which, perhaps, they might view more favorably.

The signal that this dynamic is operating is when a critic of system
dynamics points to very old work such as Urban Dynamics or World Dynamics
as the "evidence" that SD is flawed. Such critics are almost always
unaware of any recent work in the field, much less of modern modeling
protocols for use of data, model testing, and so on.

Note that all agents in this framework, the economists and the SD people,
are behaving rationally from their local perspective: it is rational to
avoid reading a literature you don't think has any merit; it is rational to
avoid submitting articles to journals that won't accept them.

Note also that this structure is symmetrical and does not point a finger of
blame at economists or others outside of SD: if SD people get defensive and
reject the utility of econometrics or economic theory, they will also fail
to learn about new developments, fail to create channels of communication
across disciplinary boundaries, and become insular, out of date, and thus
unable to do the highest quality work; if this behavior continues, it soon
would justify negative assessments from these other fields.

This dynamic should be obvious to all SD folks; a formal model of it is
easily constructed, and makes a good student exercise for introductory
courses (and, interestingly, it is easy to build a simple equilibrium,
fully rational game-theoretic model that gives the same outcome). It
also operates in business contexts. Consider air travel. Suppose you have
a bad experience on a particular airline and decide never to fly with that
firm again. You do, however, generate unfavorable word of mouth about
them, telling your tale of lost bags, cancelled flights, surly employees,
and so on at dinner parties for years afterwards. Meanwhile, the airline
may have improved and could even have great service today, but since you
haven't personally experienced it, you don't revise your opinion and
continue to generate unfavorable word of mouth long after your opinion is
no longer accurate. Just this dynamic occurred with Continental Airlines:
it took years of high-quality service, awards, etc. to overcome the bad
reputation they developed in the 1980s, when they were truly awful.
Similarly, there are many people today who will not even consider buying an
American-made car because they had bad experiences in the 1970s, when the
quality of US cars was poor. US car quality has improved a lot (though
there is still plenty of room for improvement), but these people don't know
it firsthand, tend to discount what they may hear from others, and continue
to buy foreign cars.
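For readers who want to see how small such a formal model can be, here is a rough sketch (my own construction as an illustration, not a model from the literature, with arbitrary parameter values). Opinion of the field adjusts toward its actual quality only in proportion to exposure, and exposure itself depends on opinion, so an initial negative judgment shuts off the very information flow that could revise it:

# Reinforcing loop: low opinion -> low exposure -> no opinion updating.

def simulate(initial_opinion, actual_quality=0.9, dt=0.25, horizon=40.0,
             adjustment_time=2.0):
    """Opinion drifts toward actual quality at a rate scaled by exposure;
    exposure to new SD work is proportional to (non-negative) opinion."""
    opinion = initial_opinion
    for _ in range(int(horizon / dt)):
        exposure = max(opinion, 0.0)   # people who rate the field at or below zero stop reading it
        opinion += exposure * (actual_quality - opinion) / adjustment_time * dt
    return opinion

if __name__ == "__main__":
    print(f"neutral start (0.3):   final opinion = {simulate(0.3):.2f}")   # rises toward 0.9
    print(f"negative start (-0.2): final opinion = {simulate(-0.2):.2f}")  # stays locked at -0.2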

So, what then are the policy implications? First, let's stop pursuing the
low-leverage policies that take a lot of energy and don't accomplish
anything. There is low leverage in whining about how "they don't
understand us" or how "this is a paradigm clash." It's also not helpful --
and it's incorrect -- to defend bogus ideas such as "numerical data aren't
important" or "regression doesn't apply to complex nonlinear systems."
This last point is critically important. It may be true that simple linear
OLS regressions are inappropriate for complex nonlinear dynamic systems,
but econometrics is far more sophisticated than that and has been for
years. Could it be that some SD folks have an outdated and erroneous view
of the capability of these tools because they long ago decided they weren't
appropriate? Could it be that SD graduate students aren't required to take
enough econometrics, economics, OR, and other relevant courses because
their advisors have outdated opinions about the relevance and suitability
of these tools?

Second, where are the high-leverage policies? I'm sure I don't have the
answer to that, but one possibility is to make the effort to cross the
boundaries of the disciplines. Make sure students learn the state of the
art in OR and economic tools, not only SD tools. Learn to read other
literatures and be sure to cite them when relevant. Most importantly,
update the opinions of those who have not followed the field by doing great
work, and then making the effort to communicate it outside the boundaries
of the field. Bring your enemies into the conversation. Seek, and listen
to, their advice. You will need a thick skin and a lot of persistence, but
the effort will improve both the quality of your work and its impact.

John Sterman
From: John Sterman <jsterman@MIT.EDU>
"George Backus"
Member
Posts: 23
Joined: Fri Mar 29, 2002 3:39 am

unwarranted criticism

Post by "George Backus" »

John's points are well taken, but.... one does not see OR much outside of
the OR journals. If one talks about regression versus econometrics (as in
economics), one does not see econometrics much outside selected economics
journals. Discrete event simulation has a narrow journal audience, yet
its foundation is roundly accepted. There is always the problem of deciding
which methodology is the most efficient (or valid) to throw at a problem, but
interdisciplinary teams often have the backgrounds to make reasonable
judgments -- if they have some standard basis for evaluation and comparison.

I agree that SD studies (with an adequate flavor of "complementary methods")
should pursue a wider audience, as John notes, but I don't think that is the
"on the ground" solution to our credibility problem. We remain an undefined
entity. When an organization contracts for an optimization analysis or an
econometric study, it knows what to expect -- at least, what the ideal
should look like. There is the worry about whether the individual is up to
the task, but if the individual fails, it is deemed to be because he or she
did not meet the accepted standards of the profession/discipline -- and not
because the methodology is bogus. When an organization asks for a System
Dynamicist or Systems Thinker, who knows what to expect?


George

George Backus, President
Policy Assessment Corporation
14604 West 62nd Place
Arvada, CO 80004-3621
Bus: 303-467-3566
Fax: 303-467-3576
Cell: 303-807-8579
Email:
George_Backus@ENERGY2020.com
Bill Harris
Senior Member
Posts: 75
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by Bill Harris »

George,

You've written about Unwarranted Criticism and now Leveraging the
List-Serve. I like some of your points, but I have another
perspective on what makes SD credible in the marketplace. It's only
my perspective, not <<truth>>.

I came into this field from an electrical engineering background,
where SD originally seemed quite similar to old analog computer setups
from school (nope, I didn't graduate quite yesterday). There are
several possible factors that make an engineer credible. One is
certification, and some do get their PE (Professional Engineer)
certification. Except in certain fields such as civil engineering,
though, I've found the PE to be a relatively rare commodity, as there
are "industrial exemptions" in most (all?) states allowing engineers
in a company to practice without a PE, and I've never had a company
ask if I had a PE. (I've not practiced as an independent engineer,
either.)

Publications are another possible credibility factor. It's not bad to
have articles, patents, and books to one's credit (and on one's
resume). I sense, though, that, possibly outside of academia,
publishing is not widespread as the primary means to engineering
credibility. When I think of engineering publications, I think of
everything from IEEE Transactions (perhaps more oriented towards
academia) to trade journals such as EDN and Electronic Design and
perhaps more popular media such as Scientific American.

Base competence is, of course, fundamental, as John stated. As you
noted, varying levels of model precision and reliability are called
for, depending upon the problem. Getting caught with inappropriately
sloppy thinking is not good, but, as Jay reminded us this morning,
inappropriate structure is a key part of sloppy thinking.

The other factor that seems to get electrical engineers credibility is
the results they've provided businesses, organizations, and people. I
think people see the computer on their desk and the cell phone or PDA
in their hand and realize somebody did something useful that really
worked. When most of the computers or cell phones or PDAs they see
work, they begin to believe engineers (or whoever develops these
things -- I'm not sure they could always name the professions involved)
have credible, useful, and important (to them) skills. (If most cell
phones didn't work and didn't improve from new release to new release,
engineers might not enjoy much credibility.) People may not
understand the technology involved in designing a cell phone, but I
think they could be led to relating their phone to the skill of a
number of people good at RF and microwave design, for example.

SD is different. We don't create millions of copies of what we
produce; we're a bit more like architects, perhaps, or custom
systems engineers (a more apt analogy?).

Perhaps (my claim) the leading factor in credibility is repeatedly
providing successful solutions to "customers." I think it helps to
let it be known somehow that SD (or at least simulation) was involved,
but I suspect leading with SD is not destined for success (your point,
too, George).

There's another factor I think is important, and Geoffrey Moore
summarized it in Crossing the Chasm. It's the lifecycle phase we're
in. Innovators and visionaries care about the neat technology and the
potential strategic impact of a new approach. Maybe that's where
we've been selling this stuff to date. Those are the folks who tend
to care about the technical side, although they may not care much
about certifications.

Maybe it's time to start "crossing the chasm" and selling SD to
pragmatists, those who won't really care much about how we do what we
do but just whether we provide them reliable, easy-to-apply results
that are worth more than they cost. (Maybe that's what you've been
doing, George, based on your descriptions.)

If that be true, then I suggest the competency emphases you suggest
are perhaps appropriate internally, helping each of us to get better
at what we do, but that the external focus should be on solving
people's problems and returning gains to them as a result of what we
do, and then repeating this cycle. (There is more in Moore's book
that is probably pertinent, but he says it better than I would this
morning.)

And, if we follow the engineering model I've outlined above (or
perhaps even the Deming model of how to improve manufacturing), then
putting up certification barriers isn't as fruitful as focusing on
educational opportunities to help us all get better.

Comments?

Bill
From: Bill Harris <bill_harris@facilitatedsystems.com>
--
Bill Harris
Facilitated Systems
3217 102nd Place SE
Everett, WA 98208 USA
http://facilitatedsystems.com/ phone: +1 425 337-5541
"George Backus"
Member
Posts: 23
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by "George Backus" »

Bill Harris's perspectives are always insightful. His assessment appears
solid. I think it can be extended, however. The disciplines of engineering
are already respected. People have trust in those fields. The certification
allows a client to also have more trust in the person providing the
engineering services. In the case of SD, the field is not trusted. My
experience is that those who practice SD well are trusted, not for the SD
per se, but for successfully providing the services and results needed. It
may be wrong-headed, but it still seems to me that if we did have a public
"standard" describing what we do (to some extent written as an external
presentational piece, with a bit of marketing acumen included), then potential
clients and even current adversaries would acquire a better comfort level
with SD.

In a more jaded sense, the idea may be that nobody would really read our
"manifesto" except us. In my work, the models we use were always under
suspicion because they were not the optimization models conventionally used.
It became even more problematic whenever it was discovered that the models
were an SD effort. We wrote 7 large volumes of documentation to respond to
the concerns, using a mix of SD and "classical" methodology language and
arguments. The concerns have all disappeared. Nonetheless, no client (to my
knowledge) has done more than just hold the documents, look at the table of
contents, and read the intro paragraphs. Still, it gave them immediate
confidence that made using our model a legitimate choice (although this may
only be my perception). For us it gives a continued reference against which
to assess any future modifications or "enhancements" to the model, and it
gives anchor points to initiate the discussion when clients do have a hot
concern about some particular aspect of our work.

In summary, publishing, certification, and good work are necessary
components of our efforts to gain acceptance. I just continue to think
that we also need to draw a line in the sand and set our standard (the
political statement of our identity kind of standard) for all to see.

Via Bob Walker, I now understand that Jack Pugh and Gordon Kubanek are
already pursuing the development of a "Body of Knowledge" and "Standards of
Practice," respectively. They probably have much better insights than I on
how and whether to move forward along the lines we have been discussing.

George

George Backus, President
Policy Assessment Corporation
14604 West 62nd Place
Arvada, CO 80004-3621
Bus: 303-467-3566
Fax: 303-467-3576
Cell: 303-807-8579
Email:
George_Backus@ENERGY2020.com
"Paul M. Konnersman"
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

Unwarranted Criticism

Post by "Paul M. Konnersman" »

As a listener (I prefer that term to "lurker") on this list, I hesitate to
contribute because the quality of the dialog is so consistently high. I do so
in this case because it seems to be one in which my marginality may be an
asset.

It sounds as though many (but not all) of us believe that SD is misperceived or
misunderstood by important groups of people to the detriment of both SD
academics and SD practitioners (to say nothing of the organizations and society
that might benefit from the broader use of SD). John Sterman has addressed
intra-academic dynamics while George Backus and others have focused on the
executives who buy SD consulting services.

There is a much larger group, even restricting ourselves to the
college-educated population, that has absolutely no knowledge of SD. This is
a loss to society, because some of these individuals are business and
government decision makers who might sanction the use of SD, and many others
have significant influence on those decision makers. I know that Jay
Forrester, The Creative Learning Exchange and others are taking very
important steps to address this concern. My concern is that there would seem
to be a very long delay in that loop. It would be useful to undertake
additional efforts that, even though less potent, might yield earlier
benefits. Furthermore, the uninformed might be a softer initial target than
the misinformed, who have already made up their minds.

While SD has occasionally received some favorable exposure in the general
press, far more is needed. Wouldn't journalists be high-leverage targets for
generating both a better and a broader understanding of SD? As an example,
perhaps some curricular or extra-curricular programs for Nieman Fellows could
be provided during their residency in Cambridge. I'm thinking of something
along the lines of the one-week summer course at MIT, but perhaps spread over
several weeks to accommodate the schedules of both the Fellows and the
presenters.



Paul Konnersman
From: "Paul M. Konnersman" <
konnersman@mediaone.net>
Locked