Balancing Activity in Investigating Dynamic Problems
Posted: Sun Feb 01, 2004 8:33 pm
(was Using Statistics in Dynamics Models, Do we need to Simulate to
Validate Models?)
The discussion of statistics and simulation has been very interesting
but, I fear, somewhat abstract and theoretical. A number
of the points that Jim Hines has raised are fundamentally more practical
and boil down to deciding how to spend the limited time we have to look at
a problem. We clearly will never have the time to do everything we want
to do, let alone everything we could do, to attack a problem. So how do
we decide what to do?
As I see it there are three types of things we need to do when going
after a problem (these may iterate):
1. Model the problem (formally or informally) and make sure the model
makes sense.
2. Analyze the model to understand why it does what it does and extend
that understanding to the real problem.
3. Present that understanding (or the answers) to others in a meaningful
and convincing way.
The discussion so far has been heavily focussed on the first of these -
making sure the model makes sense. Clearly if we only do number 1
nothing will come of our work. So the question becomes: when do we move
on to #2, and how much effort do we expend there? If we look at some of
the most influential models, both in our field and in others, people
have spent a great deal of time working on #2, and used the resulting
understanding as a basis for #3. I know, from working with Jim, that
this is exactly what he likes to do. I also know that I tend to spend
more time on #1 and have to speed through #2 and #3.
In part, I do things the way I do because I am never satisfied that my
models make sense. One reason for this is that the data are always
pointing out shortcomings. Another is that consulting engagements tend
to require overpromising of the scope of the problem to be addressed.
Still, I do not know if my allocation of time is really the best way to
attack the problems I work on with a given amount of effort. I suspect that
Jim would tell me that if I spent more time on #2 I would do a better
job even for overscoped problems with messy data.
From working with more mainstream consultants I know that many have a
tendency to focus almost entirely on #3 without really ever making sense
of anything. That always scares me. But the question remains - what are
the practical guidelines for working a problem? Is it just a personal
thing, or is there an emphasis that would make all of us more effective
at what we do?
Bob Eberlein
bob@vensim.com