SD Models from Written Text

This forum contains all archives from the SD Mailing List (go to http://www.systemdynamics.org/forum/ for more information). It is here as a read-only resource; please post any SD-related questions to the SD Discussion forum.
zenabraham@aol.com
Junior Member
Posts: 18
Joined: Fri Mar 29, 2002 3:39 am

SD Models from Written Text

Post by zenabraham@aol.com »

Hello,

I must register my disagreement with this point of view. While I'm not a fan
of the development of this technology, the more I think about it, the more my
point of view is changing.

The major problem in SD, to me, is that practitioners claim that building
representations of a system with SD software produces more meaningful
outcomes than building them with spreadsheets.

Yet spreadsheets are more widely disseminated than SD software. So there
are people who use spreadsheets and make mistakes, because there are
various levels of expertise at building models within that tool style, too.

Fine.

The "market" for consultants will ferret out the weak versus the strong. I'm
more concerned that SD harms its total potential market value by these
self-esteem-based attempts at exclusivity. I think the best way to ensure
that experts exist and are differentiated, as we expand the ranks of SD
practitioners, is to have a licensing system.

But I think inclusion for expansion should be the objective.

Zennie
From: zenabraham@aol.com
André Reichel
Junior Member
Posts: 14
Joined: Fri Mar 29, 2002 3:39 am

SD Models from Written Text

Post by André Reichel »

Hi all,

I was glad to read the comments by Lazaros about this topic after all these
very enthusiastic postings. A software tool for extracting an SD model from a
textual description would indeed benefit only the expert model builder (I
don't regard myself as one), just like iconographic SD software. After all,
the serious and careful model builder (hopefully I can count myself in that
crowd) knows that the most important step in modelling is not playing with
software, but conceptualising the model in your head and with pen and
paper. If this is done thoroughly, the rest of the model building is
(almost) a piece of cake. After this stage, an extraction tool would be
of interest just to relieve us from fiddling with icons and equations. Yet
even this work can give way to interesting insights, when things just cannot
be done the way you thought they should. So the question in this case is
not how such software can be written, but for what purpose and for
whom. To sum it up: my point is to educate novice model builders in the
craft of model building BEFORE they start to play with all kinds of software
tools.

Regards / Viele Grüße
André Reichel
A.Reichel@epost.de
Niall Palfreyman
Senior Member
Posts: 56
Joined: Fri Mar 29, 2002 3:39 am

SD Models from Written Text

Post by Niall Palfreyman »

John Gunkler wrote:

> ... the people who have been working for two decades on what
> was initially called "artificial intelligence" (especially, expert
> knowledge capture) have developed protocols for interviewing "expert"
> subjects...

Hm. An interesting juxtaposition: SD and Knowledge Engineering. I'd
never thought of them in combination before, but I can see a lot of
similarity. I think most knowledge engineers these days have moved more
towards requirements capture in software development, since the skills
are very similar. Two texts which immediately occur to me as possibly
useful are:
"A Practical Guide to Knowledge Acquisition", Scott, Clayton & Gibson,
Addison Wesley (1991)
This book is excellent, detailed and thorough, but it doesn't include
the important technique of Kelly Grids. For a concise look at that, the
following is good:
"Migrating to Object Technology", Graham, Addison Wesley (1994).

I'm not sure how well adapted these techniques would be to supporting the
modelling activities of SD modellers, but I think this direction of
thinking could be a very profitable one.

Another way in which this direction might be useful is in relation to
the discussion around whether it's at all a good idea to extract models
from text. In the knowledge engineering world the knowledge engineers
became an expensive bottleneck, which drove the search for methods of
automatic knowledge acquisition. This need was partly fulfilled by such
automatic learning mechanisms as neural networks, but these have the
disadvantage that the storage format of the system's knowledge is so
obscure that no-one can understand the knowledge in order to check its
validity. A separate thread of research in knowledge acquisition and in
software requirements analysis, and one which may be relevant to this
thread, is the idea of automatic tools to _support_ the knowledge
acquisition process. These tools don't _do_ the acquisition, but they do
provide the knowledge capturer with a supportive framework within which
s/he carries out the acquisition process.

Stella, iThink and Vensim capture the model, but they don't support and
guide the direction of model-elicitation. Would such a tool be a good
idea in SD? It would provide a half-way post between those who want to
derive models automatically and those who dont.

Niall Palfreyman.
From: Niall Palfreyman <niall.palfreyman@fh-weihenstephan.de>
carolus
Junior Member
Posts: 18
Joined: Wed Mar 31, 2004 5:14 pm

SD Models from Written Text

Post by carolus »

Hi All,

This thread on extracting models from written documents generates a
certain Aha-Erlebnis, as the Germans would probably say.

The rationale behind the initial request seems to be the availability of a
tool that does the hard work: extracting knowledge from certain sources.
If that were possible, it would leave more time for the real SD work.

John Gunkler stated:

>(...) And the people who have been working for two decades on what
>was initially called "artificial intelligence" (especially, expert
>knowledge capture) have developed protocols for interviewing "expert"
>subjects. Perhaps there is something quite useful that we could tap
>from their experience. It seems to me that capturing "mental models"
>and capturing "expert knowledge" of some circumscribed subject area are
>very similar exercises.
>
The suggestion made is that AI might have come up with certain results
the SD community could use.

As far as I know this is not the case. One might even question the
possibility of extracting knowledge from written sources.
The same question arose in the AI and Law community some 10 to 15 years
ago. Then, the research started with high expectations, aiming at the
development of knowledge systems which could accept as input a certain
story - text based - and generate as output a legally acceptable
reasoning and options for solutions.
This result has never been reached, and there are lots of reasons why it
didn't and probably never will.

One of the interesting observations which can be made with respect to
this research is that it also wanted to extract knowledge from experts:
either by using thinking-aloud protocols or by analysing written sources
(such as legal cases). The underlying idea: if we could detect how
experts handle a case - or a problem - one could deduce from this a
certain protocol for how (other) problems should or could be tackled.

Jac Vennix already reported on this list on his experiences with the
first type:

>Just a cautionary note on extracting models from written documents. I have
>done something along those lines for my PhD research. I extended the coding
>procedure as described in Axelrod's book. However, even after extensive training
>of six students in extracting CLDs from written pieces of text (guided by
>a coding procedure book), I was not able to get a sufficient inter coder
>reliability (i.e. > 0.80). Now this was over 10 years ago, but I do not
>have the impression that much has changed in the last decade wrt this issue.
>
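Vennix's 0.80 threshold is a conventional cut-off for inter-coder reliability. As a purely illustrative sketch (the coders, link labels and data below are invented, not taken from his study), chance-corrected agreement between two coders who label candidate causal links could be computed with Cohen's kappa:

```python
# Hypothetical illustration: Cohen's kappa for two coders who each label
# the same candidate causal links as '+', '-', or 'none'.
# All labels below are invented for the example.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed fraction of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

a = ['+', '+', '-', 'none', '+', '-', '+', 'none']
b = ['+', '-', '-', 'none', '+', '-', '+', '+']
print(round(cohens_kappa(a, b), 2))  # prints 0.6
```

A kappa of 1.0 would mean perfect agreement; values below 0.8, as in Vennix's experience, suggest the coding procedure still leaves too much to individual interpretation.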
The latter one - extracting knowledge from written texts (in terms of
general rules) - also failed. One of the main reasons is - in short -
that knowledge is not static: it develops and changes. Using any method
of extracting knowledge implies that the result is in principle outdated.
Particularly within the legal domain this is unacceptable. Also, the
result is very hard to validate, since almost any outcome in a legal
dispute is possible. It depends on the line of argumentation. And that
relates to the basic idea in SD: a different question or reference mode
of behaviour (RMoB) for a certain problem may generate different models.
Just as Sterman has put it: the actual validation of models is impossible.
The usage of (the strict meaning of) validation within SD is a
contradictio in terminis.

A second point is that any written text is in itself the result of an
extraction method, though most of the time performed by the expert
himself. Any text is therefore doubly biased: first, there is the
perception of this particular expert, and second, there is - again - the
subjective representation of his perception in words. And even a third
stage: the subjective perception of this text by the reader ...

A third point is that SD - just like the legal domain - depends on
argumentation or, as it is put by Forrester and Senge, on building
confidence. We do not just present a model, we argue about it while
creating it. One is not convicted just because the judge says so, but -
in ideal circumstances - because the judge is convinced that this
particular outcome is the most reasonable one AND he - the judge -
motivates this point of view. This implies that a black box is never
acceptable: we have to be convinced of the soundness of the proposed way
of handling a certain problem. Skipping this part, leaving it to a
sophisticated - if ever possible - knowledge-modelling tool, would
diminish one of the most important - though hard - parts of SD.


Greetings,

Carolus Grütters
Law & IT

University of Nijmegen
The Netherlands
From: Carolus Grütters <c.grutters@jur.kun.nl>
keith@linard.info
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

SD Models from Written Text

Post by keith@linard.info »

> Isn't there a doctoral dissertation in here somewhere?

Yes, but a word of warning re linguistic cultures.

10 years ago one of my doctoral students (Dr Rod Jewell), as an element of
his PhD research, developed a tool for analysing concepts contained in
corporate documents. An example of its application was the scanning of top
level corporate documents of the Australian Defence Organisation (Corporate
Plans, Annual Reports, Strategic Guidance Documents, Senate Budget
Estimates Submissions etc) for coherence with espoused doctrine and
(public) Government policy.

The tool was built with Lotus Agenda (a magnificent text analysis research
tool, unfortunately dropped by Lotus in the early 1990s). The concept
analysis engine involved the development of structured thesauruses
(thesauri?) which, inter alia, incorporated acronyms, jargon and those
critical internal trigger words which convey obvious meaning to the
cognoscenti when used in particular contexts. These thesauri were developed
in conjunction with senior military personnel.
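For readers curious what such a trigger-word engine might look like in modern terms, here is a minimal, hypothetical sketch: the thesaurus entries and sample paragraphs are invented, and Agenda's actual matching was far richer than this keyword scan.

```python
# Hypothetical sketch of a thesaurus-driven concept scan, loosely in the
# spirit of the Agenda-based engine described above. All entries invented.
import re

THESAURUS = {
    "objectives": {"goal", "objective", "mission"},
    "budget": {"appropriation", "estimates", "outlay"},
}

def tag_paragraphs(paragraphs, thesaurus):
    """For each paragraph, return the concepts whose trigger words
    (matched case-insensitively on whole words) appear in it."""
    tagged = []
    for para in paragraphs:
        words = set(re.findall(r"[a-z]+", para.lower()))
        concepts = {c for c, triggers in thesaurus.items() if words & triggers}
        tagged.append((para, concepts))
    return tagged

docs = [
    "The mission defines each objective clearly.",
    "Annual outlay figures appear in the Estimates.",
    "Morale among pilots remains high.",
]
for para, concepts in tag_paragraphs(docs, THESAURUS):
    print(sorted(concepts), "<-", para)
```

The cultural-dialect problem described above shows up immediately in such a scheme: a tribe that writes "end-state" instead of "goal" simply vanishes from the scan unless its dialect is encoded into the thesaurus.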

The first major application seemed to work well, stripping out from the
tens of thousands of words of tortuous text those paragraphs relating to
goals and objectives, and showing significant congruence of all areas of
defence with government goals & objectives ... except for the strategic
intelligence area. Closer analysis revealed that the problem lay in the
fact that the thesauri captured well the military & general bureaucratic
linguistics ... but the intelligence community were from a radically
different culture and wrote in a significantly different dialect more
akin to that of the diplomatic / "foreign affairs" tribe.

Our subsequent reflections on this identified a diversity of tribes in
the Defence Organisation, all with their particular dialects ... HR
bureaucrats, finance bureaucrats, OR techos, IT techos, engineers,
Service personnel (Army, Navy & Air Force, with their multiplicity of
sub-tribes ... F111 pilots have a significantly different culture from
Army Black Hawk pilots) ... and of course the intelligence community.

Three critical lessons I draw from this research include:
* language & culture (including organisation culture) are intimately
intertwined.
* automated textual analysis requires significant pre-programming of
cultural cues.
* even with this, if you assume cultural homogeneity within the texts,
you are likely to get it wrong.

Quite apart from this, corporate literature typically confuses causality
with hierarchical relationships (especially linkages in the chart of
accounts), and many key systemic problems are part of the unwritten
culture or structure. I see System Dynamics as being far more about mutual
learning between client and consultant, a mutual learning which
necessitates the consultant digging well below the deliberate omissions and
ignorance of official documents in a way impossible for any automated text
tool.

Still, if anyone wants to pursue that route, I suggest that a good place to
start would be to see if any copy of Lotus Agenda exists in your local
museum of software.

Keith Linard
Director
Centre for Business Dynamics & Knowledge Management
University of New South Wales
ADFA ACT 2601 AUSTRALIA
Email: keith@linard.info
Mobile: 0412-376-317 (country code +61)
FAX: (0)2 6257-6617 (country code +61)