Building your own models
Posted: Sat Apr 01, 2000 8:10 am
Very helpful clarification, Jaideep (and apologies for jumping in so hard
first time). Your reply is a useful focus for my concern ... your first two
Unstructured problem characteristics (first-cut, top-down) are fine. The
problem arises with the third characteristic - qualitative. Here's my
reasoning:
- A basic reality-check for any situation should, I believe, involve
gathering some top-level numbers (have we got an idea of ... how fast this
firm is losing customers / how quickly this population is catching a disease
/ how rapidly this natural resource is being depleted, and so on?).
Without these high-level numbers, I don't see how we can be sure that we are
addressing the client's issue, or even talking the same language. (It often
surprises me how, when asked to put numbers on something, people discover
they were using the same word to mean quite different things - which is why I
would never put an item on a board without knowing what its value might be.)
- Such headline items either are, or depend upon, stocks of some kind
(time-based problems always involve stocks), and stocks are accumulating or
depleting at some quantified rate. So a further reality check should again
involve getting some rough numbers. We may have whatever mental model we
like about what the structure of the system is, or what we think its
feedback structure looks like - but those stocks exist, can be detected and
measured, and are doing as they please, not as we might like to imagine.
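The arithmetic behind such a rough reality check is nothing more than repeated accumulation. A minimal sketch, using entirely made-up numbers for a firm losing customers faster than it gains them:

```python
# Hypothetical reality-check: a firm's customer stock with rough flow numbers.
# All figures are invented for illustration.
customers = 100_000        # current stock (assumed starting value)
gains_per_month = 1_500    # acquisition flow (assumed)
losses_per_month = 2_000   # loss flow (assumed)

# Next month's stock is this month's stock plus the net flow - pure accumulation.
for month in range(12):
    customers += gains_per_month - losses_per_month

print(customers)  # 94_000 after a year: the headline number the client cares about
```

Even this back-of-envelope version makes the conversation concrete: everyone can see which number is the stock, which are the flows, and how fast the situation is moving.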
- CLDs can't capture the arithmetic of accumulation and depletion, so (in my
view) we can't have confidence in any qualitative behavioural insight that
we might offer from such a top-down, first-cut, qualitative approach. (It
has long been known that linear flows can create complex dynamics with no
feedback whatever, and even simple reinforcing structures can create a wide
variety of behaviours beyond the basic exponential growth and decline that
is usually taken to be their hallmark).
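The point that feedback-free structures can still generate non-trivial behaviour is easy to demonstrate. Below is a hypothetical open-loop example: both flows are set purely by time (neither depends on the stock, so there is no feedback at all), yet the stock ramps up, plateaus, and ramps back down - nothing like the simple exponential shapes usually associated with feedback stories.

```python
# Open-loop stock: flows depend only on time, never on the stock itself,
# yet the trajectory is ramp / plateau / decline. Numbers are invented.
def simulate(steps=15, dt=1.0):
    stock = 0.0
    history = []
    for t in range(steps):
        inflow = 10.0 if t < 10 else 0.0   # e.g. recruitment, stopping at t=10
        outflow = 10.0 if t >= 5 else 0.0  # e.g. losses, starting at t=5
        stock += (inflow - outflow) * dt   # accumulation arithmetic only
        history.append(stock)
    return history

h = simulate()
print(h)  # rises to 50, holds, then falls back to 0 - with no feedback anywhere
```

Anyone who tried to infer this trajectory from a loop diagram alone would be stuck, because there are no loops; the behaviour lives entirely in the integration.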
All is not lost, though, because it seems that ordinary folk are quite
capable of sketching out the numbers over time on important stocks in any
real case. They need only a little SD help to get a clear understanding of
how changing flows are causing these numbers. From there it is a small step
to seeing, quantitatively, why their issue is following its particular
trajectory through time.
I think we would all acknowledge that, when any more than the most basic
feedback is added in, the wider explanation becomes complex to the point of
being non-intuitive. But this is no reason to abandon what can be achieved
with the fundamental numbers that are generally knowable, even in the
earliest phase of work.
I'd like to build on another important point Jaideep makes - the link to
statistical methods. In recent work on retail banking by some colleagues at
Darwin here in London, it proved critical (unsurprisingly) to know the rates
of customer acquisition and loss and, more importantly, the reasons why these
rates were running as they were. Customer research had been done for years,
so it was easy to do regression analysis. These flows could be forecast with
considerable precision when customers were asked how warmly they felt towards
the bank.
Correlation methods failed, though, the instant we crossed the flow-to-stock
boundary (so customer numbers, deposits, interest-income etc. did not
correlate well with anything) - which is exactly as expected, since
accumulation confounds linear causality and destroys any chance of finding
reliable correlations.
So maybe we shouldn't be at all afraid to embrace statistical methods for
understanding the directly causal relationships that do exist, so long as we
don't pursue them across stock boundaries. (I'm sure others must have made
this observation long ago, but I remember it coming as a flash of light to
me when I first saw the point.)
Kim
From: "Kim Warren" <kim@strategydynamics.com>