Delays in SD modeling!!
-
- Newbie
- Posts: 1
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Can anyone suggest a paper or book reference on how to model delays and
understand how they should work?
My problem is similar to many house-building problems, but with a
difference: I would like to include a randomness component in it.
I have seen many delays implemented with an average delay time, but I
would like to model the delay randomly, using a PDF (probability density
function) curve.
I would greatly appreciate anyone's help or suggestions.
A fellow rookie in System Dynamics,
Ram.
From: "Ramaswamy P." <rpasupatham@yahoo.com>
-
- Junior Member
- Posts: 2
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
In case you get any response of value, I would be interested to receive
it. One of my major concerns is the effect of the usual modelling of
delays.
For example, for hiring workforce with a delay of 3 months, this is usually
modelled so that 1/3 of the workforce still in the hiring pipeline becomes
available in each successive month. I suspect that this deviation from the
real world could have a significant impact on the simulation results, but at
the moment I cannot prove it.
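To get a feel for the size of that deviation, here is a minimal Python sketch (not part of the original message; the figures are illustrative) comparing a 3-month first-order hiring delay with a fixed 3-month pipeline delay, for a one-time decision to hire 30 people at t = 0:

import numpy as np

DT = 0.25        # months
T_FINAL = 12.0   # months
DELAY = 3.0      # months
HIRES = 30.0     # people ordered at t = 0

steps = int(T_FINAL / DT)
time = np.arange(steps) * DT

# First-order material delay: arrival rate = people still in the pipeline / DELAY
in_pipeline = HIRES
first_order_arrivals = np.zeros(steps)
for i in range(steps):
    first_order_arrivals[i] = in_pipeline / DELAY
    in_pipeline -= first_order_arrivals[i] * DT

# Pipeline (fixed) delay: everyone arrives exactly DELAY months after being ordered
pipeline_arrivals = np.zeros(steps)
pipeline_arrivals[int(DELAY / DT)] = HIRES / DT

# By month 3 the first-order delay has delivered only about two-thirds of the hires
print("first-order arrivals by t=3:", round(first_order_arrivals[time < DELAY].sum() * DT, 1))
print("pipeline arrivals by t=3:   ", round(pipeline_arrivals[time <= DELAY].sum() * DT, 1))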
Regards
Alfred
From: Alfred Bosshard <bosshard@active.ch>
-
- Newbie
- Posts: 1
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Most SD texts cover the question of delays: see, for example, Forrester's
1961 Industrial Dynamics, chapter 9, or Sterman's 2000 Business Dynamics,
chapter 11, for almost all you ever wanted to know about delays.
If you don't want to know quite that much, then take a look at the
"Road Maps" paper D-4614-F, Generic Structures: Exponential Material
Delays.
--
Richard G. Dudley
rdudley@indo.net.id
-
- Senior Member
- Posts: 94
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
This is, in fact, a VERY tricky problem. Simply citing textbooks (even mine,
which I am sure you must know about in any case) doesn't help you. (I HOPE
that you have at least read the books and are not just relying on a software
manual!)
As Bill Harris said, what's the problem you're trying to model?
Regards,
Geoff
Professor R G Coyle,
Consultant in System Dynamics and Strategic Modelling,
Telephone +44 (0) 1793 782817, Fax ... 783188
email geoff.coyle@btinternet.com
-
- Senior Member
- Posts: 94
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
This is an old and interesting problem. My 1977 book Management System
Dynamics (John Wiley; long out of print but it should be available in any
serious SD library) has four pages on measuring the order of a delay. The end
result is:
N = DEL^2 / Vn
where N is the required delay order, DEL is the delay magnitude (the mean
delay time, which is squared in the calculation) and Vn is the variance in
the delay.
For example, in a population in which most deaths occur between 50 and 80
years of age, the mean age at death is approximately 65. The dispersion in
deaths is 30 years (80-50) and the standard deviation can be estimated as
30/8 (the usual 4-standard-deviations rule), or 3.75. The required N is then
65^2/3.75^2, or about 300. This required a very small DT of approximately
0.0216. That was written before we realised that, with Euler integration, DT
should be the decimal equivalent of an exact binary fraction, so the correct
value of DT is 0.015625.
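A quick check of that arithmetic (a minimal Python sketch, not part of the original post):

DEL = 65.0            # delay magnitude: approximate mean age at death, years
spread = 80.0 - 50.0  # dispersion in ages at death, years
sd = spread / 8.0     # the "4 standard deviations either side of the mean" rule
Vn = sd ** 2          # variance in the delay

N = DEL ** 2 / Vn
print("standard deviation:", sd)              # 3.75
print("required delay order N:", round(N))    # about 300
print("smallest time constant DEL/N:", round(DEL / N, 3))
# about 0.22; DT must be well below this, hence a value around 0.02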
Of course, N = 300 is very cumbersome in many packages, so the simple thing is,
on the face of it, to use a pipeline delay. That, however, requires some
assumption such as that everyone dies at exactly 65 (or any other chosen) years
of age, which is manifestly incorrect.
Bob Eberlein correctly reminds us that the common practice of using DELAY3
may mislead and we should be more careful. I don't think that he is right in
saying that Vensim's pipeline delay has an infinite number of hidden levels.
It is more likely to be DEL/DT hidden levels. Maybe he'll correct me.
I hope this helps. Isn't it odd that it's been in the literature for more
than 30 years? Maybe we should read each other's books, or even read any
books on SD, before we start programming.
Regards,
Geoff
Professor R G Coyle,
Consultant in System Dynamics and Strategic Modelling,
Telephone +44 (0) 1793 782817, Fax ... 783188
email geoff.coyle@btinternet.com
-
- Junior Member
- Posts: 9
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Thanks to John Sterman for emphasising "If you do decide to use a built-in
function, you must be prepared to justify why it is appropriate and what
its limitations are". This goes for the use of built-in delay functions, the
time step, and the integration method.
In relation to the discussion of delays, my experience is that the extra
effort in constructing "... an explicit aging chain ... and (including) co
flows ... associated with each cohort" (John Sterman) is worthwhile in
creating understanding of structure and in identifying levers for management
intervention. With array structures, such aging chains can be implemented
with minimal increase in the (apparent) complexity of the visual
presentation of the model structure.
The pitfalls highlighted by Geoff Coyle in relation to using large
population cohorts and averaging delay functions again raise the question,
discussed some months ago, of why one would do this when a census-model aging
chain array is so easy to build and far more accurate.
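As an illustration of how compact an arrayed aging chain can be, here is a minimal Python sketch (not part of the original message; the cohort width, birth rate and mortality rates are invented for illustration):

import numpy as np

# Arrayed aging chain: twenty 5-year cohorts (ages 0-100), annual time step.
COHORT_WIDTH = 5.0                                  # years per cohort
N_COHORTS = 20
DT = 1.0                                            # years
death_rate = np.linspace(0.002, 0.15, N_COHORTS)    # illustrative per-year mortality
population = np.full(N_COHORTS, 1000.0)             # people per cohort
births = 250.0                                      # people per year

for year in range(100):
    aging_out = population / COHORT_WIDTH           # people maturing to the next cohort
    deaths = population * death_rate                # mortality out of every cohort
    aging_in = np.concatenate(([births], aging_out[:-1]))
    population += (aging_in - aging_out - deaths) * DT

print("total population:", round(population.sum(), 1))
print("oldest cohort:", round(population[-1], 1))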
Keith Linard
Director
Centre for Business Dynamics & Knowledge Management
University of New South Wales
Phone: -61-(0)2-6268-8347
Fax: -61-(0)2-6268-8337
Email: k-linard@adfa.edu.au
-
- Senior Member
- Posts: 94
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Jim Hines wrote:
> Geoff Coyle's formula N = DEL^2/Vn is very nice, and I for one agree that I
> should have read his book a long time ago.
>
That was very kind. Can I claim that this should now be known as Coyle's Law?
Apart from the echo phenomenon, DELAY3 is usually perfectly satisfactory
for the poor-quality data which are all that is usually available for many
models.
Regards,
Geoff
Professor R G Coyle,
Consultant in System Dynamics and Strategic Modelling,
Telephone +44 (0) 1793 782817, Fax ... 783188
email geoff.coyle@btinternet.com
-
- Senior Member
- Posts: 79
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
See, of course,
Sterman, John. 2000. Business Dynamics: Systems Thinking and Modeling for a
Complex World. McGraw-Hill,
for modeling delays and everything else involved with system dynamics
modeling.
Khalid
_____________________________________
Khalid Saeed
Professor and Department Head
Social Science and Policy Studies
W. P. I., 100 Institute Road
Worcester, MA 01609, USA
Ph: 508-831-5563; fax: 508-831-5896
email: saeed@wpi.edu
SSPS Dept: http://www.wpi.edu/+SSPS
-
- Senior Member
- Posts: 73
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
A pipeline delay (an infinite-order material delay) might be what you are
looking for if you are modeling specific, time-defined stages that people
go through (such as a training period before any productivity starts).
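A minimal Python sketch of such a pipeline delay (not part of the original message; the training example and numbers are illustrative):

from collections import deque

# Pipeline (fixed) delay: everything that flows in emerges exactly DELAY
# time units later, unchanged.
DT = 0.25
DELAY = 3.0                                  # e.g., a fixed 3-period training time
pipeline = deque([0.0] * int(DELAY / DT))    # one slot of material in transit per DT

def pipeline_delay(inflow):
    # Push this step's inflow in; return the flow that entered DELAY ago.
    pipeline.append(inflow)
    return pipeline.popleft()

# Example: a pulse of 10 trainees at t = 0 emerges as graduates at t = 3
for step in range(20):
    t = step * DT
    inflow = 10.0 / DT if step == 0 else 0.0
    outflow = pipeline_delay(inflow)
    if outflow > 0:
        print(f"t = {t}: graduates appear at rate {outflow}")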
Bill Braun
From: Bill Braun <medprac@hlthsys.com>
-
- Member
- Posts: 49
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Alfred Bosshard raised the interesting question of how much it matters
what form a delay takes. And someone else raised the question of
time-profiled delay distributions.
There are, of course, some rules of thumb. The higher the order of a
delay (see note below), the more phase shift, and the more phase shift, the
more likely instability will be. There are lots of exceptions to this,
but it tends to be a good common-sense guide.
There is also a relatively common belief that the use of first- and third-
order exponential delays is sufficient for capturing interesting
dynamics. I fall into that common-belief camp, but have always worried
about the wisdom of this. It is certainly true that in situations such
as demographic problems, if you go to anything less than a yearly age
cohort (effectively a 60+ order delay), you get very significant
dispersion effects (for example, a high birth rate increases the number of
60-year-olds only 20 years later).
None of this is very definitive, but I think there is a nice research
challenge out there around the nature of delays. Any delay can be
represented as a probability distribution of the time it will take an
atom entering a level to leave that level. As we go from first- to
infinite-order exponential delays, we go from a pure exponential dropping
from a high to a low level, to a unimodal but smooth curve, to a big bump
right at the delay time. This, of course, is a small subset of the
profiles that could exist (for example, suppose that everything stayed at
least 1 year but no more than 3 years). In the advanced versions of
Vensim 4.1 we have added a DELAY PROFILE function that could make
this type of research easier.
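That progression can be seen directly in the residence-time densities. A minimal Python sketch (not part of the original message), using the standard result that an nth-order exponential delay with mean D has an Erlang residence-time density:

import math

def erlang_pdf(t, n, mean_delay):
    # Residence-time density of an nth-order exponential delay with the given mean.
    rate = n / mean_delay          # outflow rate of each of the n internal levels
    return rate**n * t**(n - 1) * math.exp(-rate * t) / math.factorial(n - 1)

D = 10.0
for n in (1, 3, 10, 50):
    # density at half the mean delay, at the mean delay, and at twice the mean delay
    print(n, [round(erlang_pdf(t, n, D), 4) for t in (0.5 * D, D, 2.0 * D)])
# Order 1 is a pure exponential; higher orders concentrate ever more tightly
# around t = D, approaching the spike of a pipeline delay as n grows.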
NOTE: The order of a delay refers to the number of levels involved in the
delay. For example, a DELAY3 or SMOOTH3 delay has 3 levels, while a
SMOOTH has only 1. A pure delay (such as DELAY FIXED in Vensim) has an
infinite number of levels.
Bob Eberlein
bob@vensim.com
-
- Senior Member
- Posts: 88
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Concerning Ramaswamy's desire to put a probability density function into a
delay:
Some people may not be aware that the traditional delay structures (first-
and higher-order smooths and material delays) already have a probabilistic
interpretation. The interpretation is that the time constant (or the sum of
time constants in the case of higher-order delays) represents the average
residence time of an element in the stock. The actual residence times are
distributed around this average. In the case of a first-order smooth or
material delay, the distribution is exponential.
Of course, these traditional delays are not random in the sense that they
don't suffer from sampling "errors". So, for example, you always get a
perfect exponential distribution if you use a first-order material delay.
This doesn't detract from insights, except when sampling errors are key
to what you want to explore. In that case, a discrete-event approach might
be better, although you could certainly create a stock whose outflow depends
on draws from a probability distribution.
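A minimal Python sketch of that interpretation (not part of the original message; parameters are illustrative): drawing individual residence times from an exponential distribution reproduces, on average, the deterministic first-order material delay, and the sampling noise is exactly what a discrete-event treatment would add.

import numpy as np

rng = np.random.default_rng(1)
MEAN_RESIDENCE = 3.0     # average residence time (the delay's time constant)
N_ITEMS = 10_000         # items entering the stock at t = 0
DT = 0.25
T_FINAL = 15.0

# Stochastic view: each item leaves after an exponentially distributed time
exit_times = rng.exponential(MEAN_RESIDENCE, N_ITEMS)
bins = np.arange(0.0, T_FINAL + DT, DT)
stochastic_outflow = np.histogram(exit_times, bins)[0] / (N_ITEMS * DT)

# Deterministic view: first-order material delay, outflow = stock / tau
stock, deterministic_outflow = 1.0, []
for _ in range(len(bins) - 1):
    out = stock / MEAN_RESIDENCE
    deterministic_outflow.append(out)
    stock -= out * DT

for i in (0, 4, 12, 40):
    print(f"t={bins[i]:5.2f}  stochastic={stochastic_outflow[i]:.3f}  "
          f"first-order={deterministic_outflow[i]:.3f}")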
Regards,
Jim
From: "Jim Hines" <jhines@MIT.EDU>
Delays in SD modeling!!
Bob Eberlein wrote:
>
> None of this is very definitive, but I think there is a nice research
> challenge out there around the nature of delays. Any delay can be
> represented as a probability distribution of the time it will take an
> atom entering a level to leave that level. As we go from first- to
> infinite-order exponential delays, we go from a pure exponential dropping
> from a high to a low level, to a unimodal but smooth curve, to a big bump
> right at the delay time.
In fact, the outflow rate from an nth-order DELAY function subject to a PULSE
input rate lasting one DT has the shape of the corresponding probability
density function of the nth-order Erlang distribution whose expected value is
the Delay Time. Hence, the output rate is the expected value of the number
of units that leave the DELAY's final internal Level.
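A minimal numerical check of that statement (a Python sketch, not part of the original message): an explicit cascade of n stocks integrated with Euler, standing in for an nth-order DELAY, compared against the analytic Erlang density.

import math

def cascade_impulse_response(n, delay, dt, t_final):
    # Outflow of an n-stage first-order cascade after a unit PULSE lasting one dt.
    tau = delay / n                        # time constant of each internal level
    levels = [0.0] * n
    outflow = []
    for step in range(int(t_final / dt)):
        inflow = 1.0 / dt if step == 0 else 0.0     # unit pulse of material
        flows = [inflow] + [lvl / tau for lvl in levels]
        for i in range(n):
            levels[i] += (flows[i] - flows[i + 1]) * dt
        outflow.append(flows[-1])
    return outflow

def erlang_pdf(t, n, delay):
    rate = n / delay
    return rate**n * t**(n - 1) * math.exp(-rate * t) / math.factorial(n - 1)

N, DELAY, DT = 3, 10.0, 0.0625
sim = cascade_impulse_response(N, DELAY, DT, 30.0)
for t in (5.0, 10.0, 20.0):
    print(f"t={t}: simulated={sim[int(t / DT)]:.4f}  Erlang pdf={erlang_pdf(t, N, DELAY):.4f}")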
> This, of course, is a small subset of the
> profiles that could exist (for example suppose that everything stayed at
> least 1 year but no more than 3 years). In the advanced versions of
> Vensim 4.1 we have added in a DELAY PROFILE function that could make
> this type of research easier.
This could be a big help, because only the Erlang family of probability
densities has a SIMPLE rate-level structure to generate the density in
response to a PULSE (or delta-function) input.
The problem has a mathematical solution (essentially, develop a system of
differential equations whose particular solution, when the forcing function
is a delta-function, is the density). Needless to say, like most
mathematical solutions, this is less than helpful. I invite you to try it
out on the Normal distribution for starters.
A more "practical" result may be to develop an Erlang-function transform of
the density distribution of the delay times of interest, and to represent the
delay distribution by a more or less (but likely much more) complicated mess
of DELAY functions, probably including first-, second-, etc. order DELAYs with
time constants (average delay times) that are integer multiples of the expected
value of the delay time of the given distribution. The quotes in the first
sentence of this paragraph are a little bit of applied-math irony.
R. Joel Rahn
From: Joel Rahn <rjrahn@videotron.ca>
-
- Senior Member
- Posts: 117
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
The discussion about ways to represent delay distributions other than the
Erlang family (and its limiting distribution, the pipeline delay given by
output(t) = input(t - DelayTime)) has been helpful. Personally, I look
forward to using the arbitrary delay shape capability Bob Eberlein has
created for Vensim 4.1. Nevertheless, a word of caution is in order.
We use the various delay types such as first-order material delay,
sixth-order material delay and so on as chunks of structure because they
arise so often that we don't want to be bothered to represent their
internal structure explicitly every time. Hence ithink, Vensim, and
Powersim offer a number of these delays as built in functions (macros,
really). The caution comes because it becomes very easy to use these
built-ins without thinking through whether they are truly appropriate.
First, you have to decide whether the delay you seek to represent is a
material or information delay. It matters both conceptually and, under
certain conditions, quantitatively (if the delay time is not a constant,
the two types of delays behave differently). Second, you have to decide
whether the outflows (in a material delay) are capacitated - that is,
whether there are constraints that limit the outflow rate. The built-in
delays are linear operators, which means, among other things, that the
output depends only on the stock of material in transit, so that the mean
and distribution of deliveries is the same no matter how large a pulse is
entered into the delay. This is never literally true, though perhaps is
close enough in some situations. As a modeler, you have to decide whether
the constraints on the processing of items in the delay are important to
your purpose; if so, you must model the capacity constraint explicitly.
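A minimal Python sketch of the first distinction (not part of the original post; parameter values are illustrative): a first-order material delay conserves whatever flows through it, while a first-order information delay (exponential smoothing) conserves nothing, and the two respond differently when the delay time changes mid-run.

# First-order material delay vs. first-order information delay (SMOOTH),
# with the delay time halved partway through the run.
DT, T_FINAL = 0.125, 20.0
inflow = 10.0                        # constant input (units/time)

in_transit = 0.0                     # material delay: stock of units in transit
material_out_total = 0.0
smoothed = 0.0                       # information delay: perceived value of the input
for step in range(int(T_FINAL / DT)):
    t = step * DT
    delay_time = 4.0 if t < 10.0 else 2.0          # delay time shortens at t = 10
    material_outflow = in_transit / delay_time
    in_transit += (inflow - material_outflow) * DT
    material_out_total += material_outflow * DT
    smoothed += (inflow - smoothed) / delay_time * DT

print("material delay: total in =", inflow * T_FINAL,
      " total out + still in transit =", round(material_out_total + in_transit, 2))
print("information delay output at end:", round(smoothed, 2),
      "- it tracks the input's value and conserves nothing")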
Finally, if you had data that suggested the distribution of arrivals in a
delay was not unimodal, it is likely that the output is actually the result
of two or more separate delay processes, and you should probably model them
separately and explicitly. For example, there is some evidence that the
distributed lag for the response of energy demand to a change in prices is
bimodal: if energy prices rise suddenly and stay high, people will
relatively quickly cut the utilization of the most energy-intensive capital
stocks, and soon after may also retrofit existing capital stocks to be more
efficient. The utilization and retrofitting delay may have a mean response
time of several years. Over the longer term, however, investments in new
and more efficient capital will further reduce energy demand. The mean
response time for this effect is at least as long as the lifetime of the
capital stock (longer, since it takes time for capital producers to design
and build more efficient capital). In nearly all situations I can think of
it would be bad modeling practice, however, to capture this complex
response using a single aggregate delay function. The three channels of
energy demand response are quite different, with different underlying
physical processes, constraints, lifetimes, and economic inputs. They
should be modeled explicitly and separately; indeed, you should probably
construct an explicit aging chain for the stock of energy consuming capital
and include coflows for the energy requirements associated with each cohort
and vintage of capital. Each cohort could then include explicit retrofits
(chapter 12 of Business Dynamics discusses how and shows examples).
In short, the ability to use built-in functions to capture delays can be a
wonderful time saver, but before doing so, we modelers--both novice and
experienced--must carefully think through the underlying physics and
decision making processes we seek to capture. Often it is better--both for
the quality of the model and our ability to explain it to our audience--to
represent delay processes explicitly. If you do decide to use a built-in
function, you must be prepared to justify why it is appropriate and what
its limitations are.
John Sterman
From: John Sterman <jsterman@MIT.EDU>
-
- Senior Member
- Posts: 117
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Geoff Coyle points out the useful formula relating the order of an Erlang
delay to the mean and variance of the delay process:
Order = (Mean Delay)^2/Variance
As intuition suggests, the smaller the variance relative to the mean delay,
the tighter the distribution of the outflow around its mean, and thus the
higher the order of the delay. This is a useful rule of thumb, but caution
must be applied. The formula is exact only if the delay is a member of the
Erlang family. Unfortunately, the results are quite poor if the
distribution is not Erlang. Thus, for the formula to be useful, one must
have confidence that the delay distribution is quite close to an Erlang
process of some order. To know this, one would nearly always require access to
enough data to plot the distribution and compare it to the Erlang family,
in which case one can estimate the mean and order of the delay directly
from the distribution (the formula becomes unnecessary).
Geoff's example of mortality in the 50-80 year-old cohort of a population
provides an illustration. Geoff's data indicated a 300th-order delay,
which is essentially equivalent to a pipeline delay, implying that there
are no deaths among 50- to 79-year-olds. As Geoff points out, this is not
reasonable. The problem arises because mortality in the 50-80 cohort is
not well approximated by any member of the Erlang family. Indeed, there is
significant mortality out of each cohort; for many purposes modelers will
need to represent the population with an explicit aging chain with mortality
out of each cohort, with cohorts representing 5 or fewer years (one year is
customary in demographic models today).
The more fundamental question is how much effort modelers should put into
getting the order of their delays "right." As always, it depends on the
purpose of the study. If the data needed to plot the distribution of the
delay output are available, it is an easy matter to determine which Erlang
(or other) lag best captures it. Usually, however, the data are not
readily available. Before spending a great deal of time collecting
additional data, modelers should perform sensitivity analysis to see if
their policy or other conclusions are sensitive to assumptions about the
order of the delay. For example, one might vary the assumed delay order
from a low value (e.g., first-order) to infinite (pipeline); if your policy
recommendations do not change, your best judgment about the order of the
delay provides sufficient accuracy and it is not worthwhile to collect
better data. Of course, you should take care that you consider the joint
(multivariate) sensitivity of your results and not only the univariate
sensitivity, since there could be interactions between the order of the
delay and other parameters and structural assumptions in your model.
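A minimal Python sketch of such a sensitivity test (not part of the original post; the stock-adjustment structure and parameters are invented for illustration): sweep the order of the acquisition delay in a simple adjustment loop that ignores its supply line and check whether the qualitative conclusion changes.

def simulate(order, delay=6.0, adjust_time=4.0, dt=0.0625, t_final=80.0):
    # Peak of a stock adjusted toward a target through an Nth-order acquisition delay.
    target, stock = 100.0, 0.0
    in_transit = [0.0] * order                    # the delay's internal levels
    tau = delay / order
    peak = 0.0
    for _ in range(int(t_final / dt)):
        start_rate = max(0.0, (target - stock) / adjust_time)   # supply line ignored
        flows = [start_rate] + [level / tau for level in in_transit]
        for i in range(order):
            in_transit[i] += (flows[i] - flows[i + 1]) * dt
        stock += flows[-1] * dt
        peak = max(peak, stock)
    return peak

for order in (1, 3, 10, 60):
    peak = simulate(order)
    verdict = "overshoots" if peak > 105.0 else "no significant overshoot"
    print(f"delay order {order:2d}: peak stock = {peak:6.1f}  ({verdict})")
# Here the qualitative conclusion (ignoring the supply line causes overshoot)
# holds at every order, though the size of the overshoot grows with the order.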
Finally, what counts as an important change in conclusions depends on the
model purpose: if the model is to be used to support decision making, what
counts is "policy sensitivity" - that is, do the policy conclusions (the
suite of recommended policies) change over the plausible range of
alternative assumptions? Other purposes may demand other standards, and
may lead to different conclusions regarding the need to gather additional
data.
This statement is not to be construed as a justification for casual
empiricism, sloppiness, or sloth: in system dynamics as in all modeling
methods, it is essential that we modelers pay careful attention to data,
work hard to test our assumptions, and do the often difficult and time
consuming empirical work needed to discover and correct errors in our
models, both formal and mental.
John Sterman
From: John Sterman <jsterman@MIT.EDU>
-
- Senior Member
- Posts: 88
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
Geoff Coyle's formula N = DEL^2/Vn is very nice, and I for one agree that I
should have read his book a long time ago.
Two comments: I think it's probably good to bear in mind the many
injunctions (often from Geoff Coyle) that modeling decisions depend on the
problem. Often a modeler only needs the order of delay required for the
proper dynamics.
Often a third-order delay seems sufficient dynamically. Jim Lyneis, who
has done an awful lot of modeling, has mentioned to me a single exception:
oscillations from "echoes" - for example, the upturn in births when the baby
boomers started having their own children and the echo that will occur when
those children begin having babies of their own (oh, my), or the upturn that
occurs when a large number of durable products (all sold at roughly the same
time) begin to wear out.
A twelfth-order delay, at least, seems to be required for echoes. Does anyone
know of another situation where, DYNAMICALLY, a third-order delay does not
typically suffice?
Regards,
Jim Hines
jhines@mit.edu
-
- Senior Member
- Posts: 55
- Joined: Fri Mar 29, 2002 3:39 am
Delays in SD modeling!!
At 10:28 AM 11/12/2000 +0000, Geoff Coyle wrote:
>Bob Eberlein correctly reminds us that the common practice of using DELAY3
>may mislead and we should be more careful. I don't think that he is right in
>saying that Vensim's pipeline delay has an infinite number of hidden levels.
>It is more likely to be DEL/DT hidden levels. Maybe he'll correct me.
Geoff Coyle's count of the hidden levels in pipeline delays is correct. At
the same time, since the time step can be made (almost) arbitrarily small
(within the limits of modeler patience and the numerical precision of the
software), the number of levels might as well be infinite, so Bob's right
too. This illustrates a hidden cost of pipeline or infinite-order delays:
computation time. It's fairly easy to bog down a large model with careless
use of pipeline delays; they should be used where they're realistic, not
just easy.
As always, putting the structure to the test is an effective solution. It's
really easy to use a built-in Nth-order delay function and vary the order
parameter to see what happens. It's almost as easy to use arrays, varying
the array bounds to change the delay order. This can be a good way to
critique an existing model - just browse around for DELAY or SMOOTH-type
functions and change their order, or insert delays into the first-order
outflows of stocks.
You can demonstrate the N = Delay^2/StdDev^2 rule of thumb by testing with
an Nth order delay structure (a sample model follows at the end of this note):
DT       Order   Mean    Std Dev   Delay^2/SD^2
0.0625   1       9.995   9.946     1.01
0.0625   3       10      5.719     3.06
0.0625   5       10      4.401     5.16
0.0625   10      10      3.061     10.66
1        1       9.996   9.471     1.11
1        3       10      4.83      4.29
1        5       10      3.162     10.00
1        10      10      0         NA
3        1       9.998   8.36      1.43
5        1       9.999   7.069     2.00
10       1       10      0         NA
One thing to note here is that the rule of thumb works well as long as the
time step (DT) is much less than the delay time divided by the delay order. As
the time step gets large, the model is no longer a good approximation of a
continuous system. When delay time/delay order = DT, the model is effectively
discrete (with Euler integration). This is how most demographic models that
I've seen work; they have zero dispersion in the transit time of individuals
through the population cohorts, because the cohort duration (1 or 5 years)
is the same as the time step. The dispersion in life expectancy then comes
entirely from the fact that all cohorts are subject to some death rate,
initially small but larger for later cohorts.
One other complication introduced by higher-order delays is decision rule
formulation. In a classical SD supply chain model, you might find a 2nd-order
system - a stock of work-in-progress feeding a stock of finished
inventory. It's then easy to formulate a production start rule that has a
nice behavioral interpretation and behaves reasonably in steady-state
conditions, e.g. Production Starts = MAX(0, Expected Sales + a*(Desired
Inventory - Inventory) + b*(Desired WIP - WIP)). It's easy to make the
work-in-progress stock into a 10th-order delay, but then how do you write the
decision rule? Assume people manage the system as if it's first order? Find
out how the Nth-order algorithm in their MRP system works? You may quickly
find yourself headed toward a detailed simulation of the shop floor that's
not appropriate for the strategic questions at hand. This is not an argument
against higher-order delays, just an appreciation of the challenges involved.
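For concreteness, here is the 2nd-order case and that decision rule written out as a minimal Python sketch (not part of the original note; parameter values and the demand step are illustrative):

# A two-stock supply chain (WIP feeding finished inventory) managed by the
# production start rule quoted above.
DT, T_FINAL = 0.25, 60.0
a, b = 0.5, 0.5                     # inventory and WIP adjustment fractions (1/time)
wip_time = 3.0                      # manufacturing cycle time
coverage = 2.0                      # desired inventory coverage (periods of sales)

wip, inventory = 30.0, 20.0         # start in steady state for sales = 10
expected_sales = 10.0
for step in range(int(T_FINAL / DT)):
    t = step * DT
    sales = 10.0 if t < 10.0 else 13.0                       # step in demand at t = 10
    expected_sales += (sales - expected_sales) / 2.0 * DT    # first-order forecast
    desired_inventory = coverage * expected_sales
    desired_wip = wip_time * expected_sales
    production_starts = max(0.0, expected_sales
                            + a * (desired_inventory - inventory)
                            + b * (desired_wip - wip))
    completions = wip / wip_time                             # first-order WIP delay
    wip += (production_starts - completions) * DT
    shipments = min(sales, inventory / DT)                   # can't ship what you don't have
    inventory += (completions - shipments) * DT
    if step % 80 == 0:
        print(f"t={t:5.1f}  inventory={inventory:6.1f}  wip={wip:6.1f}")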
After all this, I'm not sure we've answered the original question; it
sounded to me like Ram was interested in modeling the stochastic effects
explicitly, rather than just understanding how they boil down to various
deterministic delay structures for large populations.
Tom
A simple delay test:
********************************
.Delay
********************************
Delay=10
"Delay^2/Var"=ZIDZ(Delay^2,Var Arrival Time)
Inflow=IF THEN ELSE(Time = 0, 1/TIME STEP, 0)
Mean Arrival Time= INTEG (Weighted Arrival Time, 0)
Order=1
Outflow= DELAY N(Inflow,Delay, 0, Order)
SD Arrival Time=Var Arrival Time^0.5
Sq Diff Arrival Time=Outflow*(Delay-Time)^2
Var Arrival Time= INTEG (Sq Diff Arrival Time, 0)
Weighted Arrival Time=Outflow*Time
********************************
.Control
********************************
Simulation Control Parameters
FINAL TIME = 100
INITIAL TIME = 0
SAVEPER = 1
TIME STEP = 1
****************************************************
Thomas Fiddaman, Ph.D.
Ventana Systems http://www.vensim.com
8105 SE Nelson Road Tel (253) 851-0124
Olalla, WA 98359 Fax (253) 851-0125
Tom@Vensim.com http://home.earthlink.net/~tomfid
****************************************************