Michael Evans wrote:
It IS tricky because you have to integrate a stochastic process (essentially
a sequence of delta functions with random arguments) and there are different
ways to do this, none of which are equivalent to the Euler or Runge-Kutta
methods implemented in most SD packages. Look for literature on stochastic
integration and Ito (or Itô) integration, for example. No doubt there are
more such methods available.
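To make the distinction concrete: for diffusion-type noise the simplest Itô-consistent scheme is Euler-Maruyama, in which the random increment scales with sqrt(dt) rather than dt, which is exactly why an ordinary Euler step does not carry over. A minimal Python sketch (the Ornstein-Uhlenbeck drift and all parameter values are purely illustrative):

```python
import random

def euler_maruyama(x0, mu, sigma, dt, steps, seed=0):
    """Integrate dX = mu(X) dt + sigma(X) dW with the Euler-Maruyama scheme.

    Unlike a deterministic Euler step, the noise increment is drawn with
    standard deviation sqrt(dt), which makes the scheme consistent with
    Ito calculus.
    """
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Wiener increment ~ N(0, dt)
        x = x + mu(x) * dt + sigma(x) * dw
        path.append(x)
    return path

# Illustrative mean-reverting (Ornstein-Uhlenbeck) process:
# dX = -0.5 X dt + 0.2 dW
path = euler_maruyama(x0=1.0, mu=lambda x: -0.5 * x,
                      sigma=lambda x: 0.2, dt=0.01, steps=1000)
```

Jump (Poisson-type) noise needs yet another treatment, but the sqrt(dt) scaling above is the basic reason standard SD integration methods are not equivalent.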
From: Joel Rahn <rjrahn@videotron.ca>
SD and Markov processes
-
- Newbie
- Posts: 1
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
In a recent piece of research we used stochastic processes within an SD model.
We were trying to explain how findings of gas depend on the amount of gas that
has already been found. These findings of gas were represented as a Poisson
Process (the parameter was estimated from available data), which could be
generalized as a density-dependent Poisson Process.
In this case, using averages for the size of the findings may be misleading, as averages don't account for the likelihood of finding a major well versus a small one: different behaviour modes are triggered depending on the size of the gas well that has been found.
We only used this approach for validation purposes, but I believe it may be interesting, and not difficult, to implement for other purposes.
Any standard SD software is appropriate.
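For readers who want to experiment outside an SD package, the idea can be sketched directly. The Python fragment below is only an illustration of a density-dependent Poisson process of findings with random well sizes; the rate law, the lognormal size distribution, and every parameter are hypothetical, not the estimates from our study:

```python
import random

def simulate_findings(base_rate, resource_total, horizon, seed=0):
    """Density-dependent Poisson process of gas findings.

    The arrival intensity falls as the undiscovered fraction shrinks; each
    finding draws a random size, so large and small wells occur explicitly
    instead of being averaged away. The intensity is held constant between
    events, a common simplification.
    """
    rng = random.Random(seed)
    t, found = 0.0, 0.0
    events = []  # (time, size) of each finding
    while found < resource_total:
        rate = base_rate * (1.0 - found / resource_total)
        if rate <= 0.0:
            break
        t += rng.expovariate(rate)  # exponential inter-arrival time
        if t >= horizon:
            break
        size = rng.lognormvariate(0.0, 1.0)       # skewed well sizes
        size = min(size, resource_total - found)  # cannot exceed remainder
        found += size
        events.append((t, size))
    return events, found

events, total_found = simulate_findings(base_rate=2.0, resource_total=50.0,
                                        horizon=20.0)
```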
Regards.
Isaac Dyner
Professor
Universidad Nacional de Colombia
From: idyner@perseus.unalmed.edu.co
-
- Senior Member
- Posts: 94
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
SD and Markov is, as has been said, a very tricky problem, but I don't see why one would want to do it.
My (no doubt limited) grasp of Markov is that there is no causation: states reflect, absorb, etc. at random according to their type. It is much closer to discrete-event simulation, whereas SD is fundamentally about the effects of causal processes on dynamic behaviour.
Surely a Markov process can be programmed in many ways that would be easier than SD.
I'll be fascinated to see the outcome of this. If someone does manage to represent this in an SD language, I hope that they will circulate it.
Regards,
Geoff
Professor R G Coyle,
Consultant in System Dynamics and Strategic Modelling,
Telephone +44 (0) 1793 782817, Fax ... 783188
email geoff.coyle@btinternet.com
-
- Senior Member
- Posts: 75
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
There was an article published that attempted to relate Markov processes
and SD. I seem to recall it from the 1977-1987 time frame,
perhaps in the IEEE Trans. Sys. Man Cybernetics or IEEE Trans. Control
Theory--it was at least in one of those types of journals.
Unfortunately, I don't seem to have a copy, so I can't give you a better
reference. It seemed like a nice curiosity at the time, but I never
came up with a need for it.
Bill
From: Bill Harris <bill_harris@facilitatedsystems.com>
-
- Newbie
- Posts: 1
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
Bill Harris (SD3184) wrote:
>There was an article published that attempted to relate Markov processes
>and SD, as I recall. I seem to recall it in the 1977-1987 time frame,
>........
Sahin K.E., Equivalence of Markov Models to a Class of
System Dynamics Models, _IEEE Trans. on Systems, Man, and
Cybernetics_, Vol. SMC-9, No. 7, July 1979, pp. 398-402.
Regards,
Nicola Bianchi
From: Nicola Bianchi <bianchi@ge.cnr.it>
National Research Council (CNR)
Institute on Intelligent Systems for Automation (IAN)
Genoa, Italy
-
- Junior Member
- Posts: 2
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
Markov modelling is used substantially in the field of Health Economics. The
connection between a subset of Markov models and a subset of SD models may
be seen as follows.
Suppose the problem to be modelled is the progression of some medical
condition (under one of a number of different management strategies).
Suppose that people in the relevant patient group can be classified into a
finite number of states. Then you can construct an SD model by having a
level for each possible state.
Now impose the limitation that the rate variable representing transfer from
state X to state Y is a fixed multiple of the number in state X, and that
this is true for all rate variables. Then consider a fixed cycle time T. For
each level X, run the SD model for time T with X starting at a non-zero
value and all other levels starting at a zero value. Expressing the final
value of level Y as a fraction of the initial value of level X gives the
transition probabilities for a Markov chain model with cycle time T. For any
starting population, the two models can be run and will give the same output
at all multiples of time T.
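The equivalence is easy to check numerically. The Python sketch below builds the one-cycle transition matrix for a hypothetical three-state model (Well, Sick, Dead) from Euler runs of the linear SD model, then confirms that the Markov chain and the SD model agree at time T; the state names and rate values are invented for illustration:

```python
# Hypothetical 3-state illness model: 0 = Well, 1 = Sick, 2 = Dead.
# RATES[(i, j)] is the fraction of level i flowing to level j per unit time.
RATES = {(0, 1): 0.10, (1, 0): 0.05, (1, 2): 0.02}
N = 3

def sd_step(levels, dt):
    """One Euler step: every outflow is a fixed multiple of its source
    level, as in the construction above."""
    change = [0.0] * N
    for (i, j), r in RATES.items():
        flow = r * levels[i] * dt
        change[i] -= flow
        change[j] += flow
    return [x + c for x, c in zip(levels, change)]

def sd_run(levels, duration, dt=0.001):
    for _ in range(round(duration / dt)):
        levels = sd_step(levels, dt)
    return levels

def transition_matrix(T):
    """Row i: run the SD model for time T from a unit mass in level i;
    the final levels are the one-cycle transition probabilities."""
    return [sd_run([1.0 if k == i else 0.0 for k in range(N)], T)
            for i in range(N)]

T = 1.0
P = transition_matrix(T)
pop = [100.0, 20.0, 0.0]
# One Markov cycle: new_j = sum_i pop_i * P[i][j] ...
markov = [sum(pop[i] * P[i][j] for i in range(N)) for j in range(N)]
# ... which matches the SD model run for the same duration, by linearity.
sd = sd_run(pop, T)
```

Because the model is linear in the levels, the agreement holds exactly (up to floating-point error), not just in the small-dt limit.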
Similarly a Markov chain such that for any state S there is a non-zero
probability of remaining in S in any cycle can be converted to an SD model
by letting the cycle time tend to zero, adjusting the transition
probabilities accordingly and converting them into rates.
If there is a state S with zero probability of remaining in S for a cycle,
that corresponds (roughly) to a pipeline delay in an SD model.
Clearly the SD model can be more general than the Markov model. However, the
limitation on rate variables described above is not unreasonable for
non-infectious diseases with no restriction on the availability of
resources.
Using a fixed time cycle does lead to potential inaccuracies in estimating
costs and effects; these are usually negligible compared to uncertainties in
the data used to build the model in the first place.
Both SD and Markov models have the limitation that each state is assumed to
represent a homogeneous group of patients. This limitation can be overcome
to some extent by increasing the number of states, but if the homogeneity
assumption is a serious problem, then it is preferable to move to a
stochastic (individual-based) modelling approach such as discrete event
simulation.
Pelham Barton, PhD
Lecturer in Mathematical Modelling
Health Economics Facility
University of Birmingham
0121-414 3170 (Office)
0121-414 7051 (Fax)
07773 125984 (Mobile)
p.m.barton@bham.ac.uk
-
- Junior Member
- Posts: 19
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
I'm writing to add a few thoughts to P. M. Barton's comments about the
similarities between SD and Markov process models for reflecting the
progression of chronic illnesses in populations. I agree with him that the
two are similar when used to model naturally occurring disease processes and
also the incremental effects of interventions, such as new technologies that
reduce mortality rates.
SD models can offer some real advantages when modeling the effects of
different care delivery strategies. In SD models, mortality rates and rates
of flow between disease states can be made variable and dependent on such
things as the relative workloads carried by different components of the
health care system and the care delivery patterns that result. The feedback
loops among illness states, workloads, and care delivery created by these
linkages can help to explain how certain care delivery strategies "lock in"
disease patterns.
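A toy sketch of such a loop (the states, the capacity figure, and the rate law are all invented for illustration): a recovery rate that falls as the sick population overloads care capacity closes a feedback loop that a fixed-rate Markov chain cannot represent.

```python
def care_rate(sick, capacity=50.0, base=0.20):
    """Per-capita recovery rate; falls as the sick load exceeds capacity.
    In a Markov chain this would have to be a fixed transition probability."""
    return base / (1.0 + sick / capacity)

def simulate(well=90.0, sick=10.0, dt=0.1, steps=500, infection=0.03):
    """Euler-integrate the two-stock loop: more sick -> slower recovery
    -> still more sick, until the loop finds its own equilibrium."""
    for _ in range(steps):
        get_sick = infection * well * dt
        recover = care_rate(sick) * sick * dt
        well += recover - get_sick
        sick += get_sick - recover
    return well, sick

well, sick = simulate()
```

With these invented numbers the sick population settles near 17 rather than the level a fixed recovery rate would give, which is the "lock-in" effect in miniature.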
In the US, for example, an emphasis on high-tech interventions for people who
are already quite sick leaves limited resources for more cost-effective
primary care. This pattern of care delivery produces both higher costs and
higher mortality rates than in other countries that ration high-tech
resources and emphasize primary care. Inclusion of these feedbacks can also
help health care planners and managers identify high leverage interventions
that reduce the burden of illness and make more resources available for
primary care, reducing the burden of illness, and so on...
These feedback structures were an important addition to a Microworld
developed by my colleagues and me to help health care providers understand
the impacts of different strategies for improving community health status.
Much of the underlying model is "Markov-like" in reflecting transitions among
disease states and age groups, but the feedback loops provided an ability to
differentiate among strategies that would not have been possible with a
Markov Process model alone.
There are additional feedback loops that shed light on the relationship
between care delivery and illness prevalence. One set of these involves
care-seeking behavior. In a set of dental manpower models, colleagues and I
were able to show how simply matching manpower to current needs would lock in
existing illness and care-seeking patterns (largely in response to symptoms),
while small surpluses created the potential for encouraging more
preventively oriented care-seeking behavior and reducing illness prevalence
over time.
In a model of heroin addiction, the community's definition of the problem (as
medical vs. criminal) was seen by us as a key determinant of rates of flow
between treatment programs and either a drug-free state or a return to
addiction. The community's definition of the problem is the result of its
experience with heroin addiction and related problems such as
addict crime. In the US at least, emphasis on a criminal view has helped to
lock in addiction as a serious problem and to get in the way of dealing with it
effectively.
These kinds of feedbacks seem important for understanding illness and care
delivery, and they are a real contribution of SD models.
Gary Hirsch
GBHirsch@aol.com
-
- Junior Member
- Posts: 6
- Joined: Fri Mar 29, 2002 3:39 am
SD and Markov processes
Hail SDers!
I wonder if anybody has done any work representing Markov processes
(stochastic processes in which the next state depends only on the current
state, not on the earlier history) using system dynamics? One would think
that this kind of closed recursive loop would be ideal for SD, but in fact I
find it fairly tricky. Has anyone tried this? How did it go?
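To make the question concrete, here is the kind of recursion I have in mind: a two-state chain iterated on probability mass, so that the "level" is the mass in each state and the "rates" move mass each cycle (the transition matrix is just an example):

```python
# An invented two-state transition matrix; P[i][j] = Pr(i -> j) per cycle.
P = [[0.9, 0.1],
     [0.3, 0.7]]

def step(mass):
    """One cycle as a conserved stock-and-flow update on probability mass."""
    return [sum(mass[i] * P[i][j] for i in range((2))) for j in range(2)]

mass = [1.0, 0.0]  # start with all probability in state 0
for _ in range(100):
    mass = step(mass)
# mass has converged to the stationary distribution [0.75, 0.25]
```

Iterated this way the chain is deterministic; the difficulty I run into is producing sampled (stochastic) realisations, rather than expected values, within an SD package.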
Regards,
Michael Evans
Ph.D. student
Centre for Ecological Economics and Water Policy Research
University of New England
NSW 2350 Australia
mevans@metz.une.edu.au
ph + 61 2 6773 3744
fax 2 6773 3237
From: Michael Evans <mevans@metz.une.edu.au>