Discrete Event Probability

makmuh60
Newbie
Posts: 1
Joined: Mon May 07, 2018 6:58 pm
Vensim version: PLE

Discrete Event Probability

Post by makmuh60 »

Hello
There is one point that I have not been able to solve. In my simulation, the system should choose one discrete event to happen according to its probability.
For example, event A has probability 0.4, event B 0.35, and event C 0.25. At each "Time" step the simulation should choose randomly, and the system continues...

I hope I have explained my problem clearly.
Waiting for your replies.
Thanks
tomfid
Administrator
Posts: 3806
Joined: Wed May 24, 2006 4:54 am

Re: Discrete Event Probability

Post by tomfid »

I think you want something like this:

ProbA = 0.4
CondProbB = 0.5833 ~ the conditional probability of B given not A, equal to 0.35/(0.35+0.25)

Event = IF THEN ELSE( RANDOM UNIFORM(0,1,0)<ProbA, 1, IF THEN ELSE( RANDOM UNIFORM(0,1,0)<CondProbB, 2, 3)) ~ returns 1 for event A, 2 for B, 3 for C
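A quick way to check this two-draw formulation outside Vensim is a Monte Carlo sketch (Python, using the probabilities from the example; the function name is just for illustration):

```python
import random

def event(prob_a=0.4, cond_prob_b=0.35 / (0.35 + 0.25)):
    """Two-draw formulation: first test A, then test B conditional on not-A."""
    if random.random() < prob_a:
        return 1  # event A
    if random.random() < cond_prob_b:
        return 2  # event B, with conditional probability 0.35/(0.35+0.25)
    return 3      # event C

random.seed(0)
n = 100_000
draws = [event() for _ in range(n)]
freq = {k: draws.count(k) / n for k in (1, 2, 3)}
print(freq)  # frequencies close to 0.4, 0.35, 0.25
```

The second draw is only reached with probability 0.6, so event B occurs with probability 0.6 × 0.5833 ≈ 0.35, as intended.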
LAUJJL
Senior Member
Posts: 1421
Joined: Fri May 23, 2003 10:09 am
Vensim version: DSS

Re: Discrete Event Probability

Post by LAUJJL »

Hi

Another possible formulation is in the attached model.

Regards.

JJ
Attachments
probability.mdl
(3.66 KiB) Downloaded 231 times
tomfid
Administrator
Posts: 3806
Joined: Wed May 24, 2006 4:54 am

Re: Discrete Event Probability

Post by tomfid »

JJ's approach is better because it doesn't require 2 random number draws (they're a bit expensive).

You could simplify slightly by deleting the redundant test ( random number > prob a :AND: ) here:

event = if then else(random number <= prob a , 1 , if then else(random number > prob a :AND: random number <= prob a + prob b , 2 , 3) )

It's impossible to reach the second IF call if the random number <= Prob A, so it doesn't need to be tested again.
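The simplified single-draw version amounts to comparing one uniform draw against cumulative probabilities; a Python sketch (probabilities from the example, names hypothetical):

```python
import random

PROB_A, PROB_B = 0.4, 0.35  # prob C = 0.25 is implied

def event(u):
    """Single draw against cumulative thresholds. The second test needs no
    'u > PROB_A' clause: it is unreachable when u <= PROB_A."""
    if u <= PROB_A:
        return 1  # event A: interval (0, 0.4]
    if u <= PROB_A + PROB_B:
        return 2  # event B: interval (0.4, 0.75]
    return 3      # event C: interval (0.75, 1]

random.seed(1)
n = 100_000
draws = [event(random.random()) for _ in range(n)]
freq = {k: draws.count(k) / n for k in (1, 2, 3)}
print(freq)
```

One uniform draw per step suffices, which is the cheaper approach Tom describes.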
LAUJJL
Senior Member
Posts: 1421
Joined: Fri May 23, 2003 10:09 am
Vensim version: DSS

Re: Discrete Event Probability

Post by LAUJJL »

Hi Tom

The formulation I used is certainly redundant, but it shows more clearly that the interval (0, prob a), of length prob a, corresponds to the probability of a; the interval (prob a, prob a + prob b), of length prob b, corresponds to the probability of b; and the interval (prob a + prob b, prob a + prob b + prob c = 1), of length prob c, corresponds to the probability of c.

By the way, suppose the events a, b, c correspond to putting a machine into state a, b, or c, and that running the machine during a time step incurs a cost that depends on the state, plus an additional cost that one can choose and that determines the transition probabilities of going from one state to another after one time step. The probabilities of events a, b, c then depend on the previous event and on the chosen additional cost, so they are not independent.

What is, at every step, the best additional cost to choose so as to minimize the cost of running the machine over, say, a specified length of time?

Regards.

JJ
tomfid
Administrator
Posts: 3806
Joined: Wed May 24, 2006 4:54 am

Re: Discrete Event Probability

Post by tomfid »

When you say, "The probabilities of events a , b , c depend on the previous event," is that the same as, "the probabilities depend on the current state" (because the event determines the state)?

Also, is the probability of any event less than 1 in a given time step?

This sounds like a Markov Chain transition matrix problem, where you can pay to vary the transition probabilities to achieve a desired equilibrium. Seems like it should be solvable analytically.
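For a plain Markov chain with a fixed transition matrix, the equilibrium can be found by power iteration (repeatedly applying the matrix to a distribution); a small sketch with a hypothetical 3-state matrix:

```python
# Steady state of a 3-state Markov chain by power iteration.
# The transition matrix is hypothetical; each row sums to 1.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
]

def step(dist, P):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

dist = [1.0, 0.0, 0.0]  # start in state a
for _ in range(200):
    dist = step(dist, P)

print(dist)  # equilibrium distribution, independent of the starting state
```

At equilibrium the distribution is unchanged by a further transition, which is the analytically solvable structure Tom alludes to.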
LAUJJL
Senior Member
Posts: 1421
Joined: Fri May 23, 2003 10:09 am
Vensim version: DSS

Re: Discrete Event Probability

Post by LAUJJL »

Hi Tom
In fact the machine has three possible states, and the state at the end of one step depends on the state at the beginning of the step and on the additional cost, which determines the transition probability from one state to another.
Each probability is of course less than 1. I know that it works like a Markov chain, but that alone does not give me the solution to the problem.

I have one solution, but it requires dynamic programming methods: one starts solving the problem at the end of the total time and works backwards step by step, which is not so easy to program. I wonder whether there are methods in the SD methodology that avoid this.
Regards .
JJ
tomfid
Administrator
Posts: 3806
Joined: Wed May 24, 2006 4:54 am

Re: Discrete Event Probability

Post by tomfid »

Just thinking out loud here:

It sounds like this then:

State: a,b,c
NewState <-> state
OldState <-> state

where

Current[state] updates based on a transition matrix,

P Trans[OldState,NewState] = f( Addl Spending )

where f() might be something like:

P Trans[OldState,NewState] = Base P Trans[OldState,NewState] * EXP( - Additional Spending[OldState] / Avoidance Cost[OldState,NewState] )

though you'd have to do something additional to keep things summing to 1 - perhaps special treatment on the diagonal.
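One way to keep rows summing to 1 is indeed special treatment on the diagonal: apply the spending effect off-diagonal and let the diagonal absorb the slack. A sketch under that assumption (a simplification of the formula above with a single scalar avoidance cost; all numbers hypothetical):

```python
import math

BASE_P = [
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]
AVOIDANCE_COST = 10.0  # hypothetical scale for the spending effect

def p_trans(spending):
    """Off-diagonal probabilities shrink with spending in the origin state;
    the diagonal absorbs the remainder so each row still sums to 1."""
    P = []
    for i in range(3):
        row = [BASE_P[i][j] * math.exp(-spending[i] / AVOIDANCE_COST)
               for j in range(3)]
        row[i] = 0.0
        row[i] = 1.0 - sum(row)  # remainder goes on the diagonal
        P.append(row)
    return P

P = p_trans([5.0, 0.0, 2.0])
print([round(sum(row), 6) for row in P])  # each row sums to 1
```

Spending in a state thus makes the machine more likely to stay where it is, while the matrix remains a valid stochastic matrix.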

Presumably there's something like:

payoff = cost[state] + additional spending[state]

In any case, if your time horizon is long, then your spending policy for each state should be constant, and therefore P Trans is constant. That means the chain settles into a predictable steady state, but you might not be able to solve for the optimal spending analytically due to the nonlinear spending effect. However, it would be easy to optimize this in Vensim by simply minimizing the payoff over some horizon long enough to make the variance small.

If your time horizon is short, it's a little messier, because you might care about the initial state and your policy might have some transient behavior, e.g., at the end when discounting suggests that you might want to quit investing in preservation of a good state. This might require some time variation in the spending policy and a stochastic optimization rather than simply a long one, but in principle it seems doable in Vensim.
LAUJJL
Senior Member
Posts: 1421
Joined: Fri May 23, 2003 10:09 am
Vensim version: DSS

Re: Discrete Event Probability

Post by LAUJJL »

Hi Tom
Thank you for bothering with this problem.
Your solution works well with a small number of states.
Even with a small number of states, one still has to optimize, and it is not possible to use Synthesim and optimization at the same time, the way one can combine Synthesim with sensitivity analysis; that would be useful if it were possible in Vensim.
But the real problem comes when the number of states increases.
The payoff = cost[state] + additional spending[state] has to be optimized over the parameters
Additional spending[state].
In many cases the number of parameters is more than a hundred. I have a case where the number of states equals power(2,360), roughly power(1000,36), a number with more than 100 zeros.
Fortunately there are methods for solving this kind of problem: aggregating states into subgroups that are easier to manage, etc.
The standard way to avoid brute-force optimization is to first find the optimum at the end of the horizon, that is, the min over states of additional spending[state], since the end state has no effect on the cost once the horizon is reached.

The value of a state at the end of the horizon is then min over states of additional spending[state].
One then steps back to the period before the last and finds its optimal cost: the minimum, over the additional spending chosen in that period, of (the period cost of the state at the beginning of the period, plus additional spending[state], plus the expected value of the state at the end of the period, which depends on the additional spending and was calculated previously).
Going backwards, one can proceed all the way to the first step.
To understand the method fully, it is better to study Bertsekas's book 'Dynamic Programming and Optimal Control', volume 1, chapter 1. That chapter already gives a good idea of the method, with solutions for simple problems like the one described here.
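The backward recursion described above can be sketched for a small hypothetical instance (3 states, a few spending levels; the costs, probabilities, and spending effect are all invented for illustration):

```python
import math

STATES = range(3)
SPEND_LEVELS = [0.0, 2.0, 5.0]   # hypothetical choices of additional cost
RUN_COST = [1.0, 3.0, 6.0]       # hypothetical running cost per state
BASE_P = [
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]

def p_trans(i, spend):
    """Hypothetical spending effect: spending shifts transition
    probability toward the cheap state 0; rows still sum to 1."""
    w = math.exp(-spend / 5.0)
    row = [BASE_P[i][j] * w for j in STATES]
    row[0] += 1.0 - sum(row)  # remainder goes to state 0
    return row

def backward_induction(horizon):
    """Value iteration backwards from the final period, as described:
    terminal value first, then each earlier period in turn."""
    V = [0.0] * 3  # terminal value: no cost after the horizon
    policy = []
    for _ in range(horizon):
        newV, decisions = [], []
        for i in STATES:
            best = min(
                (RUN_COST[i] + s +
                 sum(p * V[j] for j, p in enumerate(p_trans(i, s))), s)
                for s in SPEND_LEVELS)
            newV.append(best[0])
            decisions.append(best[1])
        V = newV
        policy.insert(0, decisions)
    return V, policy

V, policy = backward_induction(10)
print(V)          # expected cost-to-go from each starting state
print(policy[0])  # optimal spending per state in the first period
```

Each backward pass needs only the value function from the period just solved, which is exactly why the method scales with horizon length rather than with the number of decision sequences.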
The question I was asking is whether the SD method offers other ways to solve sequential decisions in a non-deterministic world than pure open-loop optimization.
Best regards.
JJ
tomfid
Administrator
Posts: 3806
Joined: Wed May 24, 2006 4:54 am

Re: Discrete Event Probability

Post by tomfid »

Nothing special in SD, I'm afraid. Our tools are oriented towards brute-force hill climbing, because that's usually the only thing that works. But that limits applicability when the parameter vector is very large, unless the space can somehow be simplified by parameterizing meta-rules for the decisions. The DP backwards induction sounds reasonable, though maybe not fun. :)
tomfid
Administrator
Posts: 3806
Joined: Wed May 24, 2006 4:54 am

Re: Discrete Event Probability

Post by tomfid »

BTW, optimization in Synthesim would be cool, and I've been investigating how to do it. It's a bit trickier than other things (like sensitivity) because (a) it's potentially slow, and you have to make sure that the optimization terminates, and (b) on-screen sliders might have to be moved if they are optimization parameters. This can be done in Ventity and it's cool.
LAUJJL
Senior Member
Posts: 1421
Joined: Fri May 23, 2003 10:09 am
Vensim version: DSS

Re: Discrete Event Probability

Post by LAUJJL »

Hi Tom

I am presently studying dynamic programming in order to integrate it into my models. So far it looks like Vensim is well suited to this method, which is good news.

It is also a lot of fun, because it seems to considerably increase the adaptability of my models to my problems.

In the recent book 'Analytical Methods for Dynamic Modelers', 4 of the last 5 chapters, which are oriented towards decision making and optimization (the main purpose of SD), mention Bertsekas's books in their references.

Best regards.

JJ