Hi everybody.
When trying to understand what was meant by "learn",
I tried to load learning3.mdl from Mr Richardson,
but I could only load it as text, and
when I tried to convert it to a sketch, Vensim reported syntax errors.
I am new to SD and must have made an error.
I feel that without a formal definition of the word "learn", it is rather
impossible to answer the question.
What I mean by formal definition is:
Say that you have two SD models:
one that cannot learn and the other that can.
Both have a finite set of definitions, and both can be represented in
one of the current SD software packages.
If you open the "cannot" model, you can, with a finite number of
modifications, transform the "cannot" into the "can".
All these modifications can be broken down into very simple ones.
What is the simple modification that will transform the "cannot" into
a "can"?
Or maybe it is a set of modifications that will transform the "cannot"
into the "can", because the "can" may have the particularity of containing
a submodel that gives it its learning power.
Or: what is the minimum set of basic modifications that transforms the
"cannot" into a "can"?
Is it a binary problem, yes or no?
Or is it more a capacity for learning?
Can this capacity be measured?
Maybe not with any number, integer or real, even infinite.
That capacity would have to be equivalent to the total knowledge of the
universe, which is difficult to evaluate with any standard system of measures.
Looking at the different answers, it seems that a learning SD model is
a model that will, after a while, transform the way it reacts to a similar
external stimulus.
What will cause the transformation? Maybe experience, as summarized
by exogenous data, and the way the model reacts to these exogenous data.
It is a sort of educated learning, bounded by the assumptions the
maker of the model has made about the world outside the model, and
by the purpose of the model.
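As a minimal sketch of this idea (in plain Python rather than an SD package; the adjustment rule, the rate, and all names are illustrative assumptions, not taken from any model discussed in this thread):

```python
# A "model" whose reaction to the same external stimulus changes as
# experience (exogenous feedback about its past errors) accumulates.

def run(stimuli, actuals, gain=1.0, rate=0.3):
    """React to each stimulus, then adjust the reaction from the observed error."""
    for stimulus, actual in zip(stimuli, actuals):
        response = gain * stimulus        # how the model reacts today
        error = actual - response         # experience: exogenous feedback
        gain += rate * error / stimulus   # the reaction rule itself changes
        print(f"stimulus={stimulus} response={response:.1f} new gain={gain:.2f}")

# The same stimulus (10) provokes a different response after some experience.
run(stimuli=[10, 10, 10, 10], actuals=[15, 15, 15, 15])
```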
One can imagine that the model can change the way it experiences
things, but it will need a second model to do that. A third model can
modify the second one, etc.
Is that education helping the learning process or not?
The ideal solution would be to have a model that can experience without
preconceived ideas and is able to change without boundaries and without
any preconceived idea about the outside world.
But how can the model start its modification without a hint about how to react
at the beginning of its experience?
If education were bad for invention, then to be a good jazz improviser it
would not be necessary to know by heart every sort of scale in all keys,
which takes a long time. Knowing all the scales helps improvisation,
despite the preconceived way of playing them.
Learning the molecules helps creativity in SD modeling and does not restrict it.
One can also imagine a model that is well educated and that can, depending
on the situation, forget everything it has learned, so as to have the possibility
of adjusting to an external world different from the one expected.
While I find the subject intellectually interesting, I am presently not much concerned
by the question of the model learning, but more by the model maker learning.
Having started SD last year, I hoped at first that SD models would resolve the tricky problems
I have to solve in my business.
After the Hines-Eberlein distance SD course I rushed to construct SD models, and after
some time I found, happily for my mental health, that the problem for me
was not to make a certain model, but to increase my perception of reality
by constructing and "playing" with different models, which is a much softer way to see
things and relieves the pressure to construct the perfect model.
J.J. Laublé.
From: "ALLOCAR SRASBOURG" <allocar-strasbourg@wanadoo.fr>
can system dynamics model learn
Dear colleagues,
The discussion on model learning reminds me of a brief article titled
"Minds over Methods" that I wrote some seventeen years ago, relating how my
thinking was transformed by working with Jay Forrester's group. System
dynamics models were meant originally to help us learn, not to learn by
themselves. Perhaps we are better off adhering to that mandate.
Khalid
Reference
Saeed, K. 1986. Minds Over Methods. System Dynamics Review 2(2).
From: Khalid Saeed <saeed@WPI.EDU>
can system dynamics model learn
Hi everybody,
One question: what is the use of Mr Schaffernicht's question? Or, more positively, what is the real stake of this question? If the answer is yes or no, where will it help?
Why did Mr Schaffernicht ask this question?
Secondly, here is a model description; I would like to know whether it can be considered as learning or not.
I call that program the Hara Kiri model.
It can produce completely different reactions after having learned from the exogenous input.
It apparently contradicts the last words of Mr Bode about reacting in a single way to a single input (no options, no process selection), and it apparently confirms Mr Dangerfield's view, except that it does not use any techniques other than SD.
Suppose that we have a simple forecasting SD model driven by another program whose main action is to start the SD model, plus some minor features.
The SD model sometimes receives exogenous input that tells it whether the previous forecasts were right and helps it readjust its parameters through a common calibration process.
Say that at some point these new parameter values fall far outside the bounds dictated by reality.
The driving program can then erase all the equations of the SD model and replace the output with: "sorry, I do not feel fit to forecast anymore".
It will then radically change its behaviour from the original one.
The system formed by the driving program and the SD model will adjust to a change in the outside world.
Of course, it is a negative adjustment.
One can imagine that the designer of the model (the model will never be able to free itself completely from its designer)
has made a hundred other models addressing the same problem, each time with a completely new conception (as is easy to do in SD).
The driving program can take all hundred models, test them one after the other against all the old data, and choose the one with the best result. (I know this does not prove that the model is better.)
If the result is not better than with the first model, it has the option to change the parameters or to consider a more recent period of time, on the assumption that some models can be better than others depending on the period.
Nevertheless, the final result will be that the program changes its behaviour.
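To make the mechanics concrete, here is a minimal sketch of such a driving program in plain Python. The one-parameter toy model and the helpers (Model, calibrate, within_bounds, error, drive) are illustrative assumptions standing in for a real SD engine and its calibration facilities:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    params: dict = field(default_factory=lambda: {"gain": 1.0})

    def forecast(self, inputs):
        # Stand-in for simulating the SD model over the data period.
        return [self.params["gain"] * x for x in inputs]

def calibrate(model, inputs, actuals):
    """Toy calibration: rescale the gain so forecasts match actuals on average."""
    mean_fc = sum(model.forecast(inputs)) / len(inputs)
    mean_ac = sum(actuals) / len(actuals)
    if mean_fc != 0:
        model.params["gain"] *= mean_ac / mean_fc

def within_bounds(model, bounds):
    """The reality check: are the calibrated parameters still plausible?"""
    lo, hi = bounds
    return lo <= model.params["gain"] <= hi

def error(model, inputs, actuals):
    """Sum of squared forecast errors over the old data."""
    return sum((f - a) ** 2 for f, a in zip(model.forecast(inputs), actuals))

def drive(primary, alternatives, inputs, actuals, bounds):
    calibrate(primary, inputs, actuals)
    if within_bounds(primary, bounds):
        return primary                     # keep forecasting as before
    # Calibration pushed the parameters outside reality: test each of the
    # designer's alternative models against the old data.
    fit = []
    for m in alternatives:
        calibrate(m, inputs, actuals)
        if within_bounds(m, bounds):
            fit.append(m)
    if fit:
        return min(fit, key=lambda m: error(m, inputs, actuals))
    # No model fits: erase the equations and admit defeat.
    primary.params.clear()
    print("Sorry, I do not feel fit to forecast anymore.")
    return None

# Example: calibration pushes the gain far outside the plausible bounds and
# the identical toy alternatives fare no better, so the driver gives up.
primary = Model("original")
backups = [Model(f"alternative-{i}") for i in range(3)]
chosen = drive(primary, backups, inputs=[1, 2, 3], actuals=[10, 20, 30], bounds=(0.5, 2.0))
print("now using:", chosen.name if chosen else "nothing")
```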
Where is the learning?
I feel that it is at the moment when the program finds that it is not fit anymore.
There is no more learning afterwards.
Afterwards there is a programmed reaction to the learning.
The better output would be to acknowledge its learning: "I feel no more fit", and that's all.
Or to give the user the possibility to carry on with another model, which he can always do anyway.
As for the negative reaction of erasing the equations and saying it is no longer fit: in a sense it is very positive.
It is much better to recognize that one cannot do anything than to make bad forecasts.
Best regards to everybody!
J.J. Laublé.
From: "ALLOCAR SRASBOURG" <allocar-strasbourg@wanadoo.fr>
can system dynamics model learn
I like Richard Hamming's commentary on modeling, which points out that the
importance is not in the numbers the model produces but in the insight the
numbers bring.
Our goals are diverse - a good thing. If some wish to design models that
learn, we have the opportunity to study learning. If one of our goals is
better learning, modeling the learning process will have significant value.
Raymond T. Joseph, PE
281 343-1607
RTJoseph@ev1.net
can system dynamics model learn
Colleagues
I think Khalid Saeed's short note is the sanest contribution of all
to this conversation. Who, in the real world, could possibly want a
self-learning model? Models are useful only if they promote human
learning. To overlook this is to miss the entire point of SD.
Back in the 1970s I gave up a career in business modelling in despair
at the attitude of managers (and modellers) who regarded models not only
as a substitute for learning, but as a "political tool" to reinforce
existing prejudices. I only returned to SD in the early 1990s, having
been re-persuaded that new tools (especially STELLA and iThink) and new
thinking (à la Senge) could indeed promote learning in some managers.
These days I am less sanguine that SD makes a real difference, because
too many modellers and practitioners still fall into the trap of
believing that clever modelling techniques and complex models are a
substitute for the really hard work - that of changing management
thinking. This is a primary reason why SD and "systems thinking" have
again fallen into the backwaters of management consciousness.
And, to pre-empt the chorus of dissent from the SD community, I simply
say: 40 years on from the publication of "Industrial Dynamics", and 13
years after the publication of "The Fifth Discipline", how many really
important management decisions are being influenced by SD and systems
thinking? Yes, we win a small battle occasionally - I guess that's why
I stay in the field - but I can't really see anything like a
"self-learning model" changing anything important.
Richard Stevenson
From: Richard Stevenson <richard@cognitus.co.uk>
can system dynamics model learn
Hi again,
the "Hara Kiri" model is programmed to detect its own failure and to
say "hey, Im not fit anymore". This means that the modeller was able
to articulate the conditions that distinguish failure. Then he did
not know how to produce a new model in the face of failue,
and so he prepared all thinkable models for the situation at hand.
He was not able to say under which conditions which of the models
would have to be used, but at least he articulated a plan for
switching between models as long as the currently used one fails too
badly. All the necessary learning has been done by the modeller
before runing the models. So I would say there is no "new" knowlege
generated after saving the models, but the whole approach has made the
modeler think hard and learn about how to recognize failure and what to
do then.
And this is what my question is about: guidance for (early) detection of
crisis or failure, and for what to do then. It seems to me that this is
important: the goal is not to make models learn, but to learn while
making models, and some guidance is surely needed (it is easy to model
in order to show why your model is right, but in order to learn, "fuses"
are useful; and while designing fuses, the modeler will learn at a higher
level).
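As a minimal sketch of what such a "fuse" could look like (plain Python; the moving-average tolerance rule and all names are illustrative assumptions, not from any existing tool):

```python
# A "fuse" is a guard the modeler designs in advance, which trips when
# the model's recent forecast errors drift outside the range the
# modeler declared acceptable.

def make_fuse(tolerance, window=5):
    """Return a check(error) function that trips when the mean absolute
    error over the last `window` observations exceeds `tolerance`."""
    recent = []
    def check(error):
        recent.append(abs(error))
        del recent[:-window]               # keep only the last `window` errors
        return (len(recent) == window
                and sum(recent) / window > tolerance)
    return check

fuse = make_fuse(tolerance=10.0)
observations = [(100, 103), (100, 96), (100, 130), (100, 140), (100, 85)]
for forecast, actual in observations:
    if fuse(forecast - actual):
        print("Fuse tripped: the model no longer fits; time to rethink it.")
```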
Thanks for the discussion,
From: "Martin F. G. Schaffernicht" <martin@utalca.cl>
--
Martin F. G. Schaffernicht
Facultad de Ciencias Empresariales
Universidad de Talca
http://dig.utalca.cl/carpeta/Martin_Schaffernicht
the "Hara Kiri" model is programmed to detect its own failure and to
say "hey, Im not fit anymore". This means that the modeller was able
to articulate the conditions that distinguish failure. Then he did
not know how to produce a new model in the face of failue,
and so he prepared all thinkable models for the situation at hand.
He was not able to say under which conditions which of the models
would have to be used, but at least he articulated a plan for
switching between models as long as the currently used one fails too
badly. All the necessary learning has been done by the modeller
before runing the models. So I would say there is no "new" knowlege
generated after saving the models, but the whole approach has made the
modeler think hard and learn about how to recognize failure and what to
do then.
And this is what my question is about: guidance for (early) detection of
crisis or failure and for what to do then. It seems to me that this is
important: the goal is n ot to make models learn, but to learn while
making models, and some guidance is surely needed (it is easy to model
in order to show why your model is right, but in order to learn, "fuses"
are useful; and while designing fuses, the modeler will learn at a higher
level).
Thanks for the discussion,
From: "Martin F. G. Schaffernicht" <martin@utalca.cl>
--
Martin F. G. Schaffernicht
Facultad de Ciencias Empresariales
Universidad de Talca
http://dig.utalca.cl/carpeta/Martin_Schaffernicht