SD and Markov processes

Joel Rahn
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

SD and Markov processes

Post by Joel Rahn »

Michael Evans wrote:

It IS tricky because you have to integrate a stochastic process (essentially
a sequence of delta functions with random arguments) and there are different
ways to do this, none of which are equivalent to the Euler or Runge-Kutta
methods implemented in most SD packages. Look for literature on stochastic
integration and Ito (or Itô) integration, for example. No doubt there are
more such methods available.
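To make the contrast concrete, here is a minimal sketch (my own, not from the thread) of what a fixed-step, Euler-style loop does with such a jump process: the stock accumulates a compound Poisson process, with the arrival rate and the jump-size distribution chosen purely for illustration.

```python
# Minimal sketch: a fixed-step (Euler-style) loop accumulating a compound
# Poisson process, i.e. a sequence of random-sized jumps at random times.
# The arrival rate LAMBDA and the exponential jump-size distribution are
# illustrative assumptions, not taken from the thread.
import numpy as np

rng = np.random.default_rng(seed=1)

DT = 0.25        # integration step, as an SD package would use
T_FINAL = 100.0
LAMBDA = 0.2     # mean number of jumps per unit time

stock = 0.0
for _ in range(int(T_FINAL / DT)):
    # Number of jumps arriving during this step is Poisson(LAMBDA * DT);
    # each jump's size is drawn from an exponential distribution.
    n_jumps = rng.poisson(LAMBDA * DT)
    stock += rng.exponential(scale=5.0, size=n_jumps).sum()

print(f"stock after {T_FINAL} time units: {stock:.1f}")
```

Note that this treats every jump as if it arrived at the end of its step; finer questions about how noise enters a differential equation are exactly where the stochastic-integration (Itô) literature comes in.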

From: Joel Rahn <rjrahn@videotron.ca>
idyner@perseus.unalmed.edu.co
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

SD and Markov processes

Post by idyner@perseus.unalmed.edu.co »

In a recent piece of research we used stochastic processes within an SD model. We were trying to explain how gas discoveries depend on the amount of gas that has already been found. These discoveries were represented as a Poisson process (with the parameter estimated from available data), which could be generalized to a density-dependent Poisson process.

In this case, using averages for the size of the finds may be misleading, as averages do not account for the likelihood of finding a major well versus a small one: different behavior modes are triggered depending on the size of the gas well that has been found.

We used this approach only for validation purposes, but I believe it may be interesting, and not difficult to implement, for other purposes. Any standard SD software is appropriate.
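As a rough illustration of the approach described above (my own sketch, not the authors' model): discoveries arrive as a Poisson process whose rate depends on the cumulative amount already found, and each find's size is sampled from a distribution rather than replaced by its average. The depletion-style rate function, the lognormal size distribution, and all parameter values are assumptions made for the example.

```python
# Rough sketch of a density-dependent Poisson discovery process.
# Functional forms and parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(seed=7)

DT = 1.0                 # one time unit per step
BASE_RATE = 0.5          # discovery rate before anything has been found
TOTAL_RESOURCE = 1000.0  # assumed ultimately recoverable gas
MEDIAN_FIND = 40.0       # median size of a single find

cumulative_found = 0.0
for _ in range(240):
    # Density dependence: here the discovery rate falls as the cumulative
    # amount found approaches the assumed total resource.
    rate = BASE_RATE * max(0.0, 1.0 - cumulative_found / TOTAL_RESOURCE)
    n_finds = rng.poisson(rate * DT)
    # Sampling sizes (lognormal here) preserves the chance of a major well,
    # which an average-sized find would smooth away.
    sizes = rng.lognormal(mean=np.log(MEDIAN_FIND), sigma=1.0, size=n_finds)
    cumulative_found += sizes.sum()

print(f"cumulative gas found after 240 steps: {cumulative_found:.0f}")
```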
Regards.
Isaac Dyner
Professor
Universidad Nacional de Colombia
From: idyner@perseus.unalmed.edu.co
"Barton, Pelham"
Junior Member
Posts: 2
Joined: Fri Mar 29, 2002 3:39 am

SD and Markov processes

Post by "Barton, Pelham" »

Markov modelling is used substantially in the field of Health Economics. The
connection between a subset of Markov models and a subset of SD models may
be seen as follows.

Suppose the problem to be modelled is the progression of some medical
condition (under one of a number of different management strategies).
Suppose that people in the relevant patient group can be classified into a
finite number of states. Then you can construct an SD model by having a
level for each possible state.

Now impose the limitation that the rate variable representing transfer from
state X to state Y is a fixed multiple of the number in state X, and that
this is true for all rate variables. Then consider a fixed cycle time T. For
each level X, run the SD model for time T with X starting at a non-zero
value and all other levels starting at a zero value. Expressing the final
value of level Y as a fraction of the initial value of level X gives the
transition probabilities for a Markov chain model with cycle time T. For any
starting population, the two models can be run and will give the same output
at all multiples of time T.
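A small numerical sketch of this correspondence (my own example, with made-up rates): for a linear SD model whose rates are fixed multiples of the source levels, running the model for time T from a unit start in each state amounts to taking the matrix exponential of the rate matrix, which yields the cycle-T transition probabilities directly.

```python
# Three-state example: converting a constant-rate SD model into a Markov
# chain with cycle time T. Rates are made up for illustration.
import numpy as np
from scipy.linalg import expm

# Continuous-time rate matrix Q (rows = from-state). Off-diagonal entries
# are per-unit-time transfer rates; diagonals make each row sum to zero.
Q = np.array([[-0.30,  0.25,  0.05],
              [ 0.00, -0.10,  0.10],
              [ 0.00,  0.00,  0.00]])   # last state absorbing (e.g. death)

T = 1.0  # Markov cycle length

# Running the SD model for time T from a unit start in each state is the
# matrix exponential, so P holds the cycle-T transition probabilities.
P = expm(Q * T)

# For any starting population the two models agree at every multiple of T.
pop0 = np.array([1000.0, 0.0, 0.0])
markov_5 = pop0 @ np.linalg.matrix_power(P, 5)   # five Markov cycles
sd_5 = pop0 @ expm(Q * 5 * T)                    # SD model run to t = 5T
print(np.allclose(markov_5, sd_5))               # True
```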

Similarly a Markov chain such that for any state S there is a non-zero
probability of remaining in S in any cycle can be converted to an SD model
by letting the cycle time tend to zero, adjusting the transition
probabilities accordingly and converting them into rates.
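The reverse direction can be sketched the same way (again my own illustration): given a cycle-T transition matrix whose diagonal entries are all non-zero, the off-diagonal rates are approximately P[x, y] / T for small T, and the matrix logarithm recovers an exact rate matrix when one exists.

```python
# Recovering SD rates from cycle-T Markov transition probabilities.
# The probability values below are illustrative only.
import numpy as np
from scipy.linalg import logm

P = np.array([[0.75, 0.20, 0.05],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])   # every state may remain where it is
T = 1.0

Q_exact = logm(P).real / T       # exact rate matrix, when the log exists
Q_approx = (P - np.eye(3)) / T   # crude approximation, good for small T

print(np.round(Q_exact, 3))
print(np.round(Q_approx, 3))
```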

If there is a state S with zero probability of remaining in S for a cycle,
that corresponds (roughly) to a pipeline delay in an SD model.

Clearly the SD model can be more general than the Markov model. However, the
limitation on rate variables described above is not unreasonable for
non-infectious diseases with no restriction on the availability of
resources.

Using a fixed time cycle does lead to potential inaccuracies in estimating
costs and effects; these are usually negligible compared to uncertainties in
the data used to build the model in the first place.

Both SD and Markov models have the limitation that each state is assumed to
represent a homogeneous group of patients. This limitation can be overcome
to some extent by increasing the number of states, but if the homogeneity
assumption is a serious problem, then it is preferable to move to a
stochastic (individual-based) modelling approach such as discrete event
simulation.

Pelham Barton, PhD
Lecturer in Mathematical Modelling
Health Economics Facility
University of Birmingham
0121-414 3170 (Office)
0121-414 7051 (Fax)
07773 125984 (Mobile)
p.m.barton@bham.ac.uk
GBHirsch@aol.com
Junior Member
Posts: 19
Joined: Fri Mar 29, 2002 3:39 am

SD and Markov processes

Post by GBHirsch@aol.com »

I'm writing to add a few thoughts to P. M. Barton's comments about the
similarities between SD and Markov process models for reflecting the
progression of chronic illnesses in populations. I agree with him that the
two are similar when used to model naturally occurring disease processes and
also the incremental effects of interventions, such as new technologies that
reduce mortality rates.

SD models can offer some real advantages when modeling the effects of
different care delivery strategies. In SD models, mortality rates and rates
of flow between disease states can be made variable and dependent on such
things as the relative workloads carried by different components of the
health care system and the care delivery patterns that result. The feedback
loops among illness states, workloads, and care delivery created by these
linkages can help to explain how certain care delivery strategies "lock in"
disease patterns.
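A toy sketch of that kind of linkage (my construction, not Hirsch's model, with purely illustrative parameters and functional forms): the recovery rate between two states degrades as the workload on a fixed care capacity grows, so the flows are no longer fixed multiples of the levels and the model stops being a plain Markov process.

```python
# Toy two-state model: recovery rate depends on workload, creating a
# feedback loop between prevalence and care delivery. All numbers and
# functional forms are illustrative assumptions.
DT = 0.25
CAPACITY = 400.0                 # assumed care capacity
healthy, sick = 10_000.0, 500.0

for _ in range(int(40 / DT)):
    workload = sick / CAPACITY                 # feedback signal
    recovery_rate = 0.30 / (1.0 + workload)    # care quality falls with load
    onset_rate = 0.01                          # fixed, for simplicity
    onset = onset_rate * healthy
    recovery = recovery_rate * sick
    healthy += (recovery - onset) * DT
    sick += (onset - recovery) * DT

print(f"sick after 40 time units: {sick:.0f}")
```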

In the US, for example, an emphasis on high-tech interventions for people who
are already quite sick leaves limited resources for more cost-effective
primary care. This pattern of care delivery produces both higher costs and
higher mortality rates than in other countries that ration high-tech
resources and emphasize primary care. Inclusion of these feedbacks can also
help health care planners and managers identify high-leverage interventions
that reduce the burden of illness and free up more resources for primary
care, which in turn further reduces the burden of illness, and so on.

These feedback structures were an important addition to a Microworld
developed by my colleagues and me to help health care providers understand
the impacts of different strategies for improving community health status.
Much of the underlying model is "Markov-like" in reflecting transitions among
disease states and age groups, but the feedback loops made it possible to
differentiate among strategies in a way that a Markov process model alone
would not have allowed.

There are additional feedback loops that shed light on the relationship
between care delivery and illness prevalence. One set of these involves
care-seeking behavior. In a set of dental manpower models, colleagues and I
were able to show how simply matching manpower to current needs would lock in
existing illness and care-seeking patterns (largely in response to symptoms),
while small surpluses created the potential to encourage more preventively
oriented care-seeking behavior and reduce illness prevalence over time.

In a model of heroin addiction, the community's definition of the problem (as
medical vs. criminal) was seen by us as a key determinant of the rates of flow
between treatment programs and either a drug-free state or a return to
addiction. The community's definition of the problem is the result of its
experience with heroin addiction and related problems such as addict crime.
In the US at least, emphasis on the criminal view has helped to lock in
addiction as a serious problem and has gotten in the way of dealing with it
effectively.

These kinds of feedbacks seem important for understanding illness and care
delivery, and capturing them is a real contribution of SD models.

Gary Hirsch
GBHirsch@aol.com