MIN/MAX Functions

Bill Braun bbraun hlthsys.com
Member
Posts: 29
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by Bill Braun bbraun hlthsys.com »

Posted by Bill Braun <bbraun@hlthsys.com>
In Jim Hines' workshop at ISDC 2005 in Boston there was a brief discussion of the MIN and MAX functions and their use in preventing a stock from going negative (where such a thing could not happen in reality, such as potential customers). The point, as I understood it, was that the use of MIN and MAX is a substitute for a more complete understanding of policy/decision behavior.

Take the assumption that the initial value of the stock "potential_customers" is 100 and that five customers per time unit (month) will be won.

potential_customers = potential_customers - dt*customers_won
actual_customers = actual_customers + dt*customers_won
customers_won = customers_won_per_month
customers_won_per_month = 5
potential_customers = 100 [init]

Run over 60 months, potential_customers goes negative. This can be prevented by:

customers_won = MIN(customers_won_per_month,potential_customers)

This equation/policy seems to suggest that winning the last remaining potential customer requires the same effort as the first potential customer.

If potential_customers are finite, is the process of winning customers goal seeking? If so:

customers_won =
customers_won_per_month*(1-(actual_customers/INIT(potential_customers)))

generates such a curve and prevents potential_customers from going negative.
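
To make the comparison concrete, here is a minimal Python sketch (Euler integration) of the three formulations; it is illustrative only, and dt = 1 month plus the loop structure are assumptions rather than anything from the original model:

# Minimal Euler sketch (illustrative) of the three formulations above.
def simulate(policy, months=60, dt=1.0):
    potential, actual, init = 100.0, 0.0, 100.0
    t = 0.0
    while t < months:
        if policy == "unconstrained":
            won = 5.0                          # constant 5 people/month
        elif policy == "min":
            won = min(5.0, potential)          # the MIN fix
        else:
            won = 5.0 * (1.0 - actual / init)  # goal-seeking: effort grows as pool empties
        potential -= dt * won
        actual += dt * won
        t += dt
    return potential

for policy in ("unconstrained", "min", "goal_seeking"):
    print(f"{policy:14s} potential_customers at month 60: {simulate(policy):8.2f}")

Run as written, the unconstrained variant ends near -200, the MIN variant sits at exactly zero from month 20 on, and the goal-seeking variant approaches zero asymptotically.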

If you were at Jim's workshop and spoke of this, could you expand? Comments from anyone?

Bill Braun
Posted by Bill Braun <bbraun@hlthsys.com>
posting date Sun, 16 Oct 2005 07:43:56 -0400
Tom Fiddaman tom vensim.com
Junior Member
Posts: 9
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by Tom Fiddaman tom vensim.com »

Posted by Tom Fiddaman <tom@vensim.com>
> In Jim Hines' workshop at ISDC 2005 in Boston there was a brief discussion
> of the MIN and MAX functions and their use in preventing a stock from going
> negative (where such a thing could not happen in reality, such as potential
> customers). The point, as I understood it, was that the use of MIN and MAX
> is a substitute for a more complete understanding of policy/decision
> behavior.

Right. All outflows from physical stocks need first-order negative feedback control to ensure that the outflow doesn't drive the stock negative (no inventory -> no sales). Whether you use MIN logic (with abrupt transitions) or functions that provide smoother transitions really depends on the level of aggregation of the commodity under consideration, and the amount of time you want to devote to what might be a minor issue in your model.

Consider a slightly simpler example (a store selling soup):

Inventory = INTEG( - Sales Rate, initial inventory ) ~ cans
Sales Rate = Normal Sales Rate ~ cans/week
Normal Sales Rate = 5 ~ cans/week

In this model, sales are constant and will draw the stock below zero (since there's no replenishment here). Obviously that's physically impossible, unless perhaps you think rare quantum fluctuations can create antisoup. A fix is:

Sales Rate = MIN( Normal Sales Rate, Inventory/TIME STEP ) ~cans/week

In this case, Inventory/TIME STEP represents the maximum quantity that can flow out of the stock without causing it to go negative. However, while often convenient, this can be a crude representation of what's really going on. The behavior it generates is constant sales, abruptly dropping to 0 when the stock runs out. That might be OK if the store only has one kind of soup - when it's gone it's gone.
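
A minimal Python sketch of this behavior (the TIME STEP of 0.25 weeks and the initial inventory of 20 cans are assumed values, not part of the example):

TIME_STEP = 0.25          # weeks (assumed)
inventory = 20.0          # cans (assumed initial stock)
normal_sales_rate = 5.0   # cans/week

t = 0.0
while t < 8.0:
    # the cap Inventory/TIME STEP is the most that can leave without going negative
    sales_rate = min(normal_sales_rate, inventory / TIME_STEP)  # cans/week
    inventory -= TIME_STEP * sales_rate                         # Euler step
    t += TIME_STEP
    if abs(t - round(t)) < 1e-9:                                # report once per week
        print(f"week {t:3.0f}: sales {sales_rate:4.1f} cans/week, inventory {inventory:5.2f} cans")

Sales hold at 5 cans/week for four weeks, then drop to zero in a single step, which is exactly the abrupt transition described above.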

But in most situations we are talking about more aggregate representations of inventory. In that case, there might be 10 kinds of soup. When the chicken soup runs out, some shoppers will switch to clam chowder, while others will leave in a huff. In that case, the approach to zero inventory is gradual, as more and more flavors are depleted, until just a few cans of cream of asparagus remain for a long time. There are a variety of ways to represent this, including:

Sales Rate = MIN( Normal Sales Rate, Max Sales Rate ) ~ cans/week
Max Sales Rate = Inventory/Min Time to Sell ~ cans/week
Min Time to Sell = 4 ~ weeks (representing the time, longer than TIME STEP, that it takes to deplete stock at low levels)

Or, perhaps more appropriate to this situation, you often see:

Sales Rate = Normal Sales Rate * EFFECT OF INVENTORY ON SALES(Inventory/Normal Inventory) ~ cans/week

Here, effect of inventory on sales rate is a nonlinear function (table or lookup depending on your lingo) characterizing the dropoff in sales at low inventory, and passing through the points (0,0) and (1,1). It's a bit tricky because you also have to ensure that it approaches 0 quickly enough to prevent negative inventory due to integration error, which requires that Effect of Inventory on Sales remains below a line of slope Normal Inventory/(Normal Sales Rate*TIME STEP). Or, to put it another way, the slope of the lookup defines an implicit time constant analogous to Min Time to Sell above.

Because the implicit time constant embedded in the lookup above can be troublesome, it's better to reformulate as:

Sales Rate = Normal Sales Rate * EFFECT OF AVAILABILITY ON SALES(Max Sales Rate/Normal Sales Rate) ~ cans/week
Max Sales Rate = Inventory/Min Time to Sell ~ cans/week

The lookup still passes through (0,0) and remains below the diagonal, but otherwise is easier to interpret. Of course, you can always replace the table/lookup with an equivalent analytic function.
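
As a sketch of such an analytic replacement: 1 - exp(-x) passes through (0,0), stays below the diagonal (since 1 - exp(-x) <= x for x >= 0), and saturates toward 1. The parameter values below are assumptions for illustration:

import math

def effect_of_availability(x):
    # smooth lookup substitute: through (0,0), below y = x, saturating toward 1
    return 1.0 - math.exp(-x)

normal_sales_rate = 5.0   # cans/week
min_time_to_sell = 4.0    # weeks
dt = 0.25                 # weeks
inventory = 20.0          # cans (assumed)

for _ in range(int(12 / dt)):   # 12 weeks
    max_sales_rate = inventory / min_time_to_sell
    sales_rate = normal_sales_rate * effect_of_availability(max_sales_rate / normal_sales_rate)
    inventory -= dt * sales_rate            # sales taper smoothly as stock falls
print(f"inventory after 12 weeks: {inventory:.3f} cans (never negative)")

Because sales_rate never exceeds max_sales_rate, the outflow per step never exceeds the inventory, so the stock cannot go negative.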

Generally, SD models will lean toward the smoother formulations of the outflow limit. The simpler MIN(Desired Outflow , Stock/TIME STEP) is realistic only for highly disaggregate situations, like you might encounter modeling a production line. The difference can be important, as abrupt changes in the slope of a relationship cause abrupt changes in feedback loop dominance, which may be unrealistic. However, if the constraint is just a minor part of a model with other, more interesting things going on, it's often expedient and not problematic to use MIN (certainly better than doing nothing).

Note that part of the reason I chose a new example is that the customers example played fast & loose with units of measure, which are essential for realism as well as very helpful for understanding what's going on. For example:

customers_won = MIN(customers_won_per_month,potential_customers)

customers_won_per_month is a flow (people/month) and potential_customers is a stock (people), so these can't share the RHS of the equation unmodified. Implicitly, this says that the minimum time to deplete potential customers is one month, which could be different from both TIME STEP (DT) and whatever the real time constant is.

Incidentally most of this is in Forrester's Industrial Dynamics, see for example equation 15-5 and the discussion of variable order-handling delays following. I'm sure it's in Business Dynamics as well, better than I've explained it, though my copy is currently in a moving box. Business Dynamics also has epidemic and Bass diffusion models relevant to the customers question.

Tom

Tom Fiddaman
Ventana Systems, Inc.
Posted by Tom Fiddaman <tom@vensim.com>
posting date Tue, 18 Oct 2005 20:22:54 -0600
geoff coyle geoff.coyle btintern
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by geoff coyle geoff.coyle btintern »

Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
There is a very simple solution to this issue, but Bill's question also confounds that with some equations that I can't really follow. His equations might be re-written as:

potential_customers=potential_customers-dt*rate_of_winning_customers
potential_customers=100 (the initial value)
rate_of_winning_customers=5

but, of course, this will go negative. The solution is to do some dimensional analysis. Using [...] to denote 'the dimensions (units) of' we have [potential-customers]=[people] and [rate_of_winning_customers]=[people/month]. Finally we need [dt]=[month] but that does not mean that the numerical value of dt is 1 month. That confuses the unit in which time is measured, which is months in this case though it could be whatever is appropriate to the problem, with dt, which is a number that we invent in order to get simulations to run on a computer, and is otherwise meaningless.

In these dimensional terms Bill's last equation, which is customers_won = MIN(customers_won_per_month,potential_customers), is dimensionally invalid.

It should be (using my variable names)

rate_of_winning_customers=MIN(normal_rate_of_winning_customers,potential_customers/dt)
normal_rate_of_winning_customers=5

which is dimensionally valid and will not go negative if dt is properly chosen. This is one of the three exceptions to the law that dt must not be used on the RHS of rate (or flow) equations. (They are in my 1996 book). The number, 5, cannot be used within the MIN, as dimensional analysis software has no way of knowing what its dimensions are.
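
A quick numerical check of this construction in Python (the values follow Bill's example; the loop itself is just an illustration):

def run(dt, months=60):
    potential = 100.0     # people
    normal_rate = 5.0     # people/month
    for _ in range(int(months / dt)):
        # both arguments of min() are now in people/month
        rate = min(normal_rate, potential / dt)
        potential -= dt * rate
    return potential

for dt in (1.0, 0.5, 0.125):
    print(f"dt = {dt:5.3f}: potential_customers at month 60 = {run(dt):.6f}")

Whatever dt is chosen, the stock/dt term empties the stock exactly to zero on the final binding step and never lets it go negative.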

One could, of course, model normal_rate_of_winning_customers so as to relate to more complex considerations. I'd still use the MIN function, though, as belt-and-braces protection.

Dimensional validity (aka consistency or coherence) goes back as far as Forrester's initial book and is repeated in about 6 subsequent texts. It is something that SD modellers ought to treat with more respect as it links to the most basic foundation of SD, which is that all quantities (even soft variables such as morale) are treated as though they are physical entities and therefore need to be dimensionally consistent.

MIN and MAX have to be used with care. Had my first equation been
potential_customers=MAX(0,potential_customers-dt*rate_of_winning_customers)
it would have been using MAX to suppress incorrect dynamics. Weird behaviour, such as impossible negatives, has to be tracked down and corrected, not fiddled away.

The other difficulty in Bill's model is that he has two different names for the same variable; customers_won is the same thing as customers_won_per_month, but is called something different. I suspect that this may be due to the apparent confusion between the time unit and dt, but that seems to be increasingly common in SD.

If the host will allow a further comment, I'll just mention that there has in the past been a debate about the 'meaning' of dt. In fact, dt is meaningless - it is simply a small number that we need to get the model to run on the computer. It has to be small (dt < the_shortest_delay/(4*delay_order) is a standard rule) so as to avoid numerical instability (with Eulerian integration) in the delayed flows. I've never had the slightest difficulty in explaining that to a client, taking the trouble to find words with which the client is comfortable.
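
Read with explicit grouping, that rule is a one-liner; the grouping shortest_delay/(4*delay_order) below is one reading of the rule as stated:

def max_stable_dt(shortest_delay, delay_order):
    # dt must stay below shortest_delay / (4 * delay_order) for stable
    # Euler integration of the delayed flows, per the rule quoted above
    return shortest_delay / (4.0 * delay_order)

# e.g. a third-order delay of 2 months suggests dt below about 0.167 months
print(max_stable_dt(shortest_delay=2.0, delay_order=3))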

Of course, these comments are not to be taken as personal to Bill Braun, whom I have never met. Honest criticism can never be personal, but I hope it helps.

Regards,

Geoff

Visiting Professor of Strategic Analysis,
University of Bath
Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
posting date Tue, 18 Oct 2005 11:13:50 +0100
George Richardson gpr albany.edu
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by George Richardson gpr albany.edu »

Posted by George Richardson <gpr@albany.edu>
On Oct 19, 2005, at 6:18 AM, geoff coyle geoff.coyle btinternet.com wrote:


>> It should be (using my variable names)
>>
>> rate_of_winning_customers=MIN(normal_rate_of_winning_customers,potential_customers/dt)
>> normal_rate_of_winning_customers=5
>>
>> which is dimensionally valid and will not go negative if dt is properly
>> chosen. This is one of the three exceptions to the law that dt must not
>> be used on the RHS of rate (or flow) equations.

But since dt means nothing in the real system, this equation fails to capture what is going on operationally in the system, so I doubt it should ever be used.

..George
Posted by George Richardson <gpr@albany.edu>
posting date Thu, 20 Oct 2005 21:39:43 -0400
geoff coyle geoff.coyle btintern
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by geoff coyle geoff.coyle btintern »

Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
George makes an interesting point here.

He is right, in a sense, as it has always been the case, going right back to the start of SD, that dt should not be used on the RHS of rate and auxiliary equations, to use the original notation. Also in that notation, it is wrong to write RATE1.KL=f(RATE2.KL), as that means that RATE1, which has not yet happened in time, depends on RATE2, which has also not yet happened and is therefore unknown. (And call me old-fashioned, but it seems a pity that we have largely dropped the use of time postscripts. I found them invaluable in teaching students to think dynamically, but it's a matter of preference.)

Of course, in equation 15.4 of Industrial Dynamics, Forrester used /dt to prevent negative values. I take off my hat at least twice a day in recognition of Jay's achievement, but we should not divide by dt just because he did. We should only do it if it is valid to do so, and I would suggest that if we write:

the_flow_that_happens=MIN(the_flow_that_should_happen,stock/dt)

we are simply providing insurance against negative values. I'd also arrange for the model to flag up when the stock/dt constraint occurred, as that might point me to some inadequacy in the representation of operational factors in the equation for the_flow_that_should_happen. (The flag could be set by testing whether the_flow_that_should_happen*dt exceeds stock.)
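
A sketch of such a flag in Python (the numbers are illustrative; the flag simply records every step on which the stock/dt cap binds):

dt = 0.5        # illustrative
stock = 100.0
flagged_steps = []
for step in range(120):
    the_flow_that_should_happen = 5.0
    the_flow_that_happens = min(the_flow_that_should_happen, stock / dt)
    if the_flow_that_should_happen * dt > stock:   # the cap is binding this step
        flagged_steps.append(step)
    stock -= dt * the_flow_that_happens
print(f"constraint bound on {len(flagged_steps)} of 120 steps, first at step {flagged_steps[0]}")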

For additional error protection, I would also use a mass-balance equation to check that the model is not leaking stock, or acquiring it by erroneous means.
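
And a sketch of the mass-balance check: the initial stock must equal the current stock plus everything that has flowed out, within floating-point tolerance:

dt, stock, initial_stock = 0.5, 100.0, 100.0
cumulative_outflow = 0.0
for _ in range(120):
    outflow = min(5.0, stock / dt)
    stock -= dt * outflow
    cumulative_outflow += dt * outflow
# nothing may appear or vanish except through the modelled flow
assert abs(initial_stock - (stock + cumulative_outflow)) < 1e-9, "model is leaking stock"
print("mass balance OK:", stock + cumulative_outflow)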

Maybe I'm over-cautious in checking for errors but I have seen some truly magnificent mistakes in some of the models I've reviewed.

Regards,

Geoff

Visiting Professor of Strategic Analysis,
University of Bath
Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
posting date Sun, 23 Oct 2005 12:12:05 +0100
Nijland lukkenaer planet.nl
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by Nijland lukkenaer planet.nl »

Posted by Nijland <lukkenaer@planet.nl>

>> MIN(normal_rate_of_winning_customers , potential_customers/dt) (see Coyle)
>> MIN(Normal_Sales_Rate , Inventory/TIME STEP) (see Fiddaman)
These constructions are dimensionally valid, but have a “scaling error” (see also the comment of George Richardson). The first term is expressed as a number of people per unit of time, the second term as a number of people per (dt units) of time. Compare: MIN(5[ton_of_apples] , 20[kg_of_apples]), which is also problematic.

The “unit of time” should not be confused with:
1) the “solution interval” (“dt” or “time step”);
2) the “time constant” or “average_residing_time”.

So, shouldn’t one rather replace these equations by:

MIN(normal_rate_of_winning_customers , potential_customers / average_residing_time) ?

MIN(Normal_Sales_Rate , Inventory / TIME CONSTANT) ?



However, the MIN function often gives other problems of unrealistic discontinuities (see Bill Braun, Tom Fiddaman), and should be avoided as much as possible.

I have tried to collect some comments in this discussion up to now, and present here a model that incorporates them.

Conclusions up to now:

a) disaggregate equations into parts which have substantial meaning (ecological, psychological, sociological, economic, biological, chemical, etc.);
b) include negative feedback control between levels and their outflows (this has substantial meaning, as it is consistent with the first law of thermodynamics);
c) keep the dimensions consistent;
d) avoid mathematical tricks (MAX, MIN) to prevent variables from taking “unwanted” values (initially one might use them in the model-building process to get the model running; later on they should be replaced by more theoretically and empirically valid constructions; in the end, however, some of them may be reintroduced as a safeguard against “time constants” becoming too small under certain dynamic conditions);
e) avoid dt in all substantial equations (except in level equations).



The following model aims to incorporate all these conditions:



actual_customers = maximal_customers - potential_customers
ACTUAL NUMBER OF CUSTOMERS [PEOPLE]
Explanation: an auxiliary variable. Note, however, that this auxiliary has the character of a stock, as it is the complement of the level variable potential_customers (see the discussion about the order of models some time ago on this list).

potential_customers = potential_customers + dt * (-customers_won) (INITIAL value = 2)
CUSTOMERS NOT YET PERSUADED, BUT STILL TO BE PERSUADED [PEOPLE]
Explanation: this is a level equation.



customers_won = (potential_customers / tcnorm) * susceptibility
NUMBER OF NEW CUSTOMERS PER TIME UNIT [PEOPLE/MONTH]
Explanation: this is the rate equation. It closes a negative feedback loop (potential_customers -> customers_won -> potential_customers), giving the condition required by Fiddaman that all outflows from stocks need first-order negative feedback.



maximal_customers = 90
MAXIMUM NUMBER OF CUSTOMERS [PEOPLE]
Explanation: a constant (may be an exogenous variable from another submodel, of course).



tcnorm = 20
NORMAL TIME CONSTANT [MONTH]
Explanation: a constant; the reciprocal of the time constant gives the fraction of “potential_customers” which are persuaded per unit of time (under normal conditions).



susceptibility = { potential_customers / potential_customers(INIT) }**susc_exp
MULTIPLIER FOR EASE OF WINNING NEW CUSTOMERS [DIMENSIONLESS]
Explanation: an auxiliary. First remark: the first customer will be more susceptible than the last one; the multiplier susceptibility introduces the concept of increasing effort to win new customers as the stock of potential_customers gets depleted (concepts mentioned by Bill Braun and Tom Fiddaman).

We assume that the customer population has a (skewed) log-normal distribution over the feature susceptibility. (Such log-normal distributions are very common; an important one, which has close links with the winning of customers and the selling of commodities, is income distribution.) As a consequence the susceptibility curve will be a skewed sigmoid. This skewed sigmoid may be approximated by a power function of the form {potential_customers / potential_customers(INIT)}**susc_exp. Of course one may also replace this relation by a TABLE function.

Second remark: the quotient tcnorm / susceptibility may be regarded as the “time variable” of the customer-winning process, with dimension [MONTH]. The variability of the average_residing_time may imply that we also need a variable solution interval dt! (“Time constants” are sometimes “time variables”.)

Third remark: the susceptibility equation closes a second negative feedback loop (potential_customers -> susceptibility -> customers_won -> potential_customers). This feedback gives extra control on negative values of potential_customers. It is possible to aggregate both relations (potential_customers -> customers_won) and (potential_customers -> susceptibility -> customers_won) into one new equation:

customers_won = constant * {potential_customers**(1+susc_exp)}

Doing so makes the model mathematically more compact, but also conceptually less transparent.



susc_exp = 0.5 (0 < susc_exp <= 1)
SPECIFIC SUSCEPTIBILITY EXPONENT [DIMENSIONLESS]
Explanation: this exponent is a measure of the population’s distribution over susceptibility.

DT=0.1; SAVPER=1; LENGTH=60
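
For readers who want to run it, here is a Python sketch of the model using only the equations and values given above (the Euler loop is the translation, not part of the original post):

DT, LENGTH = 0.1, 60.0            # months
maximal_customers = 90.0          # [people]
tcnorm = 20.0                     # normal time constant [months]
susc_exp = 0.5                    # [dimensionless]
potential_customers = 2.0         # initial value as given above [people]
potential_init = potential_customers

for _ in range(round(LENGTH / DT)):
    susceptibility = (potential_customers / potential_init) ** susc_exp
    customers_won = (potential_customers / tcnorm) * susceptibility   # [people/month]
    potential_customers += DT * (-customers_won)                      # level equation
actual_customers = maximal_customers - potential_customers            # auxiliary

print(f"month {LENGTH:.0f}: potential = {potential_customers:.3f}, actual = {actual_customers:.3f}")

Because susceptibility shrinks as the pool empties, customers_won tapers off and potential_customers stays positive without any MIN/MAX safeguard.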



In my opinion this model is also an “archetype” for the selling of soup, or the selling of cars, as “one car sold” corresponds with “one new customer”. Of course, the real process may need a cascade of these archetypes, as the winning of customers and the selling of products may proceed “in phases”. And an extra positive loop may be needed, which reflects a possible self-reinforcement of the process. That would make the rate equation something like:

customers_won = actual_customers * (potential_customers / tcnorm) * susceptibility



Geert Nijland,

email: lukkenaer@planet.nl
Posted by Nijland <lukkenaer@planet.nl>
posting date Thu, 27 Oct 2005 12:43:09 +0200
geoff coyle geoff.coyle btintern
Member
Posts: 21
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by geoff coyle geoff.coyle btintern »

Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
I am afraid that Lukkenaer is confused about time in SD models (he is not alone).

Let us suppose that the nature of the system we are dealing with is such that it only makes sense to think in terms of weeks and that, in order to see the dynamic behaviour, we need to simulate 100 of those weeks. The dimensions of anything involving time will therefore be [week]. That will apply to a variable called TIME, which increases from 0 to 100 as time passes (note upper and lower case) and which therefore records how far the simulation has progressed. It will apply to a control called LENGTH (=100), which is the value of TIME at which the simulation will stop. It will apply to dt (<1), which is the artefact we create in order to be able to simulate.

All rates (flows) will be measured in, say, [people/week]. The number of people who will have flowed depends on how long the flow operates: this will be small during a dt and perhaps quite large during a week. So Lukkenaer's comment on people per (dt units) of time is meaningless.

He goes on to say that the unit of time should not be confused with dt or average_residing_time, but that seems to be exactly what he is doing. Both of these parameters are measured in units of time, and that's all there is to it.

I don't know that discontinuities are necessarily unrealistic. I can think of any number of instances in real systems.

In any case, division by dt may be necessary in cases of accelerating collapse.

Regards,

Geoff

Visiting Professor of Strategic Analysis,
University of Bath
Posted by ""geoff coyle"" <geoff.coyle@btinternet.com>
posting date Sat, 29 Oct 2005 10:57:25 +0100
Nijland lukkenaer planet.nl
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by Nijland lukkenaer planet.nl »

Posted by Nijland <lukkenaer@planet.nl>
Coyle wrote:


>> I am afraid that Lukkenaer is confused about time in SD models (he is
>> not alone).

(A) I think I’m not confused about the different time concepts. For the rest I agree, for the most part, with Coyle’s comments, and I enjoy the discussion.

We are discussing:

customers_won = MIN(normal_rate_of_winning_customers , potential_customers / dt)

I agree that dt is an artefact which is not a part of the substantial model. Also, level equations as such (containing dt) do not reflect substantial theory (the feedback structure of the model does, of course, reflect substantial theory).



(B) However, my intention was to say (in accordance with the comments of others) that the MIN function, here, may be better replaced by a construct corresponding with substantial theory (gradual saturation of the process):

customers_won = potential_customers * fraction_customers_won
fraction_customers_won = (potential_customers / potential_customers_init) ** exponent



Ratios between levels and rates (in some cases these might rather be called “time variables” than “time constants”) may get so small that extra safeguards seem to be required to prevent the model from getting impossible values. However, if one prevents a model from taking impossible values, one may mask the possibility that the dt value is too large, or one may overlook hidden substantial theory present in the model, or required substantial theory absent from the model. Revising the theory and/or reducing the value of dt may be better than adding MIN/MAX functions.

And some sub-processes may need smaller dt values than others (not all languages give this option).



(C)

>> So Lukkenaer's comment on people per (dt units) of time is meaningless.

Suppose we do not choose the abovementioned construction, but a MIN construction. Let me compare three variants:



customers_won = MIN(normal_rate_of_winning_customers , potential_customers)

This is Braun’s original solution. It prevents the stock from going negative, but it has a dimension error, and the minimum value (zero) of potential_customers is reached asymptotically.

customers_won = MIN(normal_rate_of_winning_customers , potential_customers / dt)

This is Coyle’s construction (without dimension error, but with a “scaling error”). It also prevents the stock from going negative, but only if dt is properly chosen. Potential_customers reaches its minimum value (zero) abruptly.

customers_won = MIN(normal_rate_of_winning_customers , potential_customers / time_constant)

This is my proposal (without dimension error and without scaling error). It also prevents the stock from going negative, is not dependent on dt, and potential_customers reaches its minimum value (zero) asymptotically.



In essence, in all three variants the second term in the MIN function has the following structure:

customers_won = potential_customers / RESIDING_TIME

In Braun’s solution, RESIDING_TIME equals (implicitly) the UNIT_OF_TIME (= 1), and then the dimension error has vanished.

In Coyle’s solution, RESIDING_TIME equals the SOLUTION_INTERVAL (= DT). However, residing times should be at least four times larger than solution intervals, so there seems to be a violation of this SD rule here.

In my model, RESIDING_TIME equals the AVERAGE_TIME_CONSTANT (= 20). This construction seems to correspond exactly with Fiddaman’s required first-order negative feedback between outflow and stock.
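
A small Python comparison of the three RESIDING_TIME choices (the initial value of 100 and dt = 0.25 are assumptions) shows the abrupt-versus-asymptotic difference directly:

def run(variant, dt=0.25, months=60.0):
    potential, normal_rate = 100.0, 5.0
    residing_time = {"unit_of_time": 1.0, "dt": dt, "time_constant": 20.0}[variant]
    hit_zero_at = None
    for step in range(int(months / dt)):
        customers_won = min(normal_rate, potential / residing_time)
        potential -= dt * customers_won
        if hit_zero_at is None and potential <= 0.0:
            hit_zero_at = (step + 1) * dt
    return potential, hit_zero_at

for variant in ("unit_of_time", "dt", "time_constant"):
    final, zero_at = run(variant)
    when = zero_at if zero_at is not None else "never (asymptotic)"
    print(f"RESIDING_TIME = {variant:13s}: final = {final:.4f}, reaches zero: {when}")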



However, we now observe clearly that (in the critical domain) the “MIN construction” implies substantial theory (gradual saturation of the process).



Much more conceptually explicit and transparent is the construction given in (B), above. The variables have names which correspond with the theoretical concepts. And if dt is properly chosen, no MIN safeguard will be needed.



(D) Discontinuities are not necessarily unrealistic, but they are less probable at the aggregation level of populations. Where they correspond with reality (e.g. real minimum quotas, sudden disasters, etc.), they should of course be used.



Geert Nijland,

email: lukkenaer@planet.nl
Posted by Nijland <lukkenaer@planet.nl>
posting date Mon, 31 Oct 2005 10:19:19 +0100
Bob Eberlein bob vensim.com
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

MIN/MAX Functions

Post by Bob Eberlein bob vensim.com »

Posted by Bob Eberlein <bob@vensim.com>
I wanted to make a brief comment on this issue. I do think that Tom Fiddaman's post pretty clearly summed up the issue for managing levels so that they don't go negative. And while I do agree with Geoff Coyle that there is a lot of confusion about different time concepts, I am afraid that he has left unstated one of the biggest sources of confusion, the distinction between differential and difference equation conceptualization.

In classic system dynamics, models are conceptualized as if time were continuous. Thus the differential equation representing a level would look something like

d/dt level = rate

or

limit as dt->0 (level(t) - level(t - dt))/dt = rate(t-dt)

Dropping the ""limit as dt->0"" we are left with an algebraic expression which, as good engineers (or students of the infintisimal calculus) we can happily manipulate as long as we remember the one rule which is that only things tending toward 0 as dt tends toward 0 can be divided by dt.

Thus using level/dt is absolutely a no-no if you have conceptualized the model as a continuous time model. On the other hand if you have conceptualized the model as a difference equation, with dt the defined time interval between steps, this formulation makes perfectly good sense.

In practice most of the models I develop are neither true differential equations (where dt is truly meaningless) nor purely difference equations (where dt defines the time interval of interest) but something of a mix - at least conceptually. It almost always turns out to be easiest to solve them as difference equations (using Euler integration), and this also makes it easier to handle discontinuities.

A good quick test of a model conceptualized as differential is to use Runge-Kutta integration and see if there is any behavior that surprises you - if there is, something is wrong. Note that your units of measure should always check, whether you have conceptualized the model as a difference or a differential system.
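
A cheaper analogue of that test (a simplification, not the Runge-Kutta check itself) is to rerun with smaller dt values: once there is replenishment, a formulation containing stock/dt settles to an equilibrium of inflow*dt, which moves with dt, while stock/tau settles to inflow*tau regardless. A sketch with assumed values:

def equilibrium_stock(dt, use_stock_over_dt, t_end=40.0):
    stock, inflow, desired, tau = 10.0, 1.0, 5.0, 4.0
    for _ in range(round(t_end / dt)):
        cap = stock / dt if use_stock_over_dt else stock / tau
        stock += dt * (inflow - min(desired, cap))
    return stock

for use_dt, name in ((True, "stock/dt cap "), (False, "stock/tau cap")):
    results = [round(equilibrium_stock(dt, use_dt), 4) for dt in (1.0, 0.5, 0.25)]
    print(name, "equilibrium stock for dt = 1, 0.5, 0.25:", results)

The stock/dt variant returns a different equilibrium for every dt, which is the signature of a difference-equation conceptualization leaking into a supposedly continuous model.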

The long and the short - if you believe that dt is meaningless, then it should never ever appear in any equation except the integral equation defining the level (if that is necessary for the software you are using).

Bob Eberlein
Posted by Bob Eberlein <bob@vensim.com>
posting date Mon, 31 Oct 2005 14:35:54 -0500