Resource allocation models

This forum contains all archives from the SD Mailing list (go to http://www.systemdynamics.org/forum/ for more information). This is here as a read-only resource, please post any SD related questions to the SD Discussion forum.
Bruce Campbell
Junior Member
Posts: 9
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by Bruce Campbell »

kevin.a.agatstein@us.arthurandersen.com wrote:

> One warning about this type of approach though. You need to be certain that
> your capacity allocation is not over-allocated. Depending on how you
> conceptualize the problem, you may go above 100% to account for overtime. If
> your assumption of a person's capacity includes "overtime" then you shouldn't go
> above 100%. A way to do this easily is to have one of the time allocations be
> 100% minus the sum of the other time allocations.

There's possibly another problem with this approach. I've seen some
anecdotal evidence which suggests that overall productivity does NOT
increase with overtime, especially with regular, unpaid overtime. In
this situation people spend much of the normal working day at extended
meetings, both formal and informal, and take long coffee breaks etc.
Then, when the "overtime" starts, they do their work. This would mean
that their "capacity" does not increase. I guess it's another twist on
the old adage that any job will fill the time available.

Unfortunately, I've not seen any hard evidence for this phenomenon. I'd
appreciate it if anyone can direct me to this evidence.

Thanks,

Bruce Campbell


--
Bruce Campbell
Joint Research Centre for Advanced Systems Engineering
Macquarie University 2109
Australia

E-mail: Bruce.Campbell@mq.edu.au
Ph: +61 2 9850 9107
Fax: +61 2 9850 9102
David W Peterson
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by David W Peterson »

The tone of George Backus's comments is brilliant, but the substance
contains a couple of errors which ought to be corrected.

First, George Richardson's equation is a step forward, not a redundancy.
The point is to avoid having the allocation exceed the request Tj, and
George R.'s equation does that. It may be an inadequate solution, however
-- see below.

Second, George B. seems to have misunderstood the Vensim function alloc_p.
The behavior of alloc_p is as he describes it only for one (extreme)
parameter setting; its actual range of behavior is much more flexible and
realistic, and generally superior to any algebraic weighting scheme,
including MNL, "derived" or not.

The common thread of all these discussions of allocation is that it is
tricky to come up with a formulation that does the right thing under all
circumstances. Bill Wood invented alloc_p to satisfy the following
requirements:

1. The allocations should sum to the total supply or the total demand,
whichever is less.

2. All allocations must be positive or zero.

3. No allocation should be more than the amount requested (this
constraint is the motive behind George Richardson's modification to the
standard weighting formulation).

4. Under conditions of adequate supply, each allocation should equal the
corresponding request, even if the priorities are widely separated (MNL
and other weighting schemes generally fail this test, even with George
Richardson's modification).

5. Under conditions of shortages, uniquely low-priority requesters should
receive very little or nothing; a uniquely high-priority request might
receive virtually the entire supply. But if priorities are relatively
close, both low and high priority requests will be partially filled, with
low-priority requests receiving lesser percentages of their requests, and
high-priority requests receiving greater percentages of their requests.

6. If all requesters have equal priorities, they receive equal
percentages of their requests.

Bill Wood's solution matches all these tests and more, and therefore
provides a very useful tool for modeling allocation and market-competition
situations of all stripes. Unfortunately, the Wood algorithm can't be
expressed as closed-form algebra. Its behavior is nevertheless
straightforward and easy to understand, but it is best described with pictures and
geometry, rather than words. For those without access to Vensim manuals,
a complete description with diagrams will be posted on the vensim.com
website within the next few days.

Alloc_p contains an input parameter called "width", which indicates how
much of a gap there must be between priorities for them to be considered
significantly different. If the width parameter is large compared with the
differences among the various priorities, then the behavior of alloc_p
approaches that suggested by no. 6 in the above list. The opposite
extreme, in which the priorities are different by amounts considerably
larger than the "width" parameter, yields a "first-come-first-serve"
allocation, which is the limiting case that George Backus mistakenly
equates with alloc_p in general.

In the case that one requester has a very high priority, and the others'
priorities are clustered relatively closely, one gets a realistic mixed
case of the high-priority request being filled completely, and the others
sharing the left-overs with shortfalls inversely proportional to their
relative priorities.
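
Purely for intuition, here is a minimal Python sketch of the cutoff-and-width
behavior described above. It is an illustration under assumed mechanics (a
linear ramp of width "width" and a bisection search for the cutoff), NOT
Wood's actual algorithm:

# Sketch only: not the actual Wood/alloc_p algorithm. Assumed mechanics:
# each request is filled in proportion to how far its priority sits above
# a common cutoff, ramping linearly over "width"; the cutoff is found by
# bisection so the allocations sum to min(supply, total demand).
def allocate(requests, priorities, supply, width):
    target = min(supply, sum(requests))
    def filled(cutoff):
        fracs = [min(1.0, max(0.0, (p - cutoff) / width + 0.5))
                 for p in priorities]
        return [f * r for f, r in zip(fracs, requests)]
    lo = min(priorities) - width    # cutoff low enough to fill every request
    hi = max(priorities) + width    # cutoff high enough to fill nothing
    for _ in range(100):            # bisect on the cutoff
        mid = (lo + hi) / 2.0
        if sum(filled(mid)) > target:
            lo = mid
        else:
            hi = mid
    return filled((lo + hi) / 2.0)

# Wide width: graded, near-equal percentages (approximately [6, 5, 4]).
print(allocate([10, 10, 10], [3, 2, 1], supply=15, width=10.0))
# Narrow width: first-come-first-serve by priority (approximately [10, 5, 0]).
print(allocate([10, 10, 10], [3, 2, 1], supply=15, width=0.1))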

Alloc_p isn't easy to explain quickly using words alone, but it is the
only allocation method I'm aware of that works realistically under a wide
range of requests and priorities. Furthermore, it is elegantly simple and
easy to understand, once you "see" how it works (once again, check out the
diagrams in the Vensim manuals or the vensim.com web site).

Hope this helps.

David

--
David W. Peterson
DavidPeterson@vensim.com
kevin.a.agatstein@us.arthurander
Junior Member
Posts: 6
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by kevin.a.agatstein@us.arthurander »

Often when building models which are used for strategic planning of professional
services firms, time is the primary constrained resource which needs to be
allocated. However, the level that I often use is Headcount (actually I use
effective headcount which weights people by experience and / or other factors).
This level of headcount gives me a "capacity," which, multiplied by a per
man-hour productivity, can do work. For example, in the last model of this type
I built, time can be spent training, selling work, or delivering work. The
model was designed to experiment with different allocations of time (sell early
and then deliver vs. train, get some experience, and then go market aggressively,
etc.). Thus, each man-hour spent on selling fills some stock of sold projects.

One warning about this type of approach though. You need to be certain that
your capacity allocation is not over-allocated. Depending on how you
conceptualize the problem, you may go above 100% to account for overtime. If
your assumption of a person's capacity includes "overtime" then you shouldn't go
above 100%. A way to do this easily is to have one of the time allocations be
100% minus the sum of the other time allocations.

In summary:

Capacity = Effective Headcount * Fractional Allocation of Time

where Capacity and Fractional Allocation of Time are vectors, and
SUM(Fractional Allocation of Time) <= Time Constraint
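
A minimal Python sketch of this formulation, with assumed headcounts,
experience weights, and fractions; the "delivering" fraction is computed as
the residual, per the warning above, so the fractions cannot sum past 100%:

# All numbers assumed, for illustration only.
headcount = {"junior": 10, "senior": 4}
experience_weight = {"junior": 0.7, "senior": 1.3}   # effectiveness weights

effective_headcount = sum(headcount[k] * experience_weight[k]
                          for k in headcount)

fraction = {"training": 0.15, "selling": 0.25}
fraction["delivering"] = 1.0 - sum(fraction.values())  # residual allocation

hours_per_period = 160.0   # person-hours per head per period
productivity = 1.0         # work units per effective person-hour

# Capacity and Fractional Allocation of Time as vectors over activities:
capacity = {k: effective_headcount * f * hours_per_period
            for k, f in fraction.items()}
work_done = {k: c * productivity for k, c in capacity.items()}
print(work_done)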


Hope this is useful.

Season's Greetings,
Kevin Agatstein
kevin.a.agatstein@us.arthurandersen.com
"Jaideep Mukherjee"
Junior Member
Posts: 15
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by "Jaideep Mukherjee" »

Dear Steve

[1] When I read "With only a given amount available, how can it (time)
best be managed?" I said Aha - here we are talking about a classical optimal
control problem. Actually your problem is quite a standard one in space
engineering - the "minimum time" problem - if you have a good SD model, then
finding the best time can be solved as an optimal control problem (you can
do it for a bad SD model too, but why do it then??). Depending on the size
of and nonlinearities in the model, it may or may not be computationally
difficult. I will put stuff explaining the details on my website
(optimlator.com) by this weekend.

[2] If you are looking at a traditional SD solution - well, doing a large
number of simulations is one answer [Vensim has a number of tools to let you
do it in a scientific way] - try different values of parameters until you
get a happy fit between going crazy and saying "Enough is enough, if I spend
any more time on this, it will be a sheer waste of time." I have never
modeled time as a level variable [don't even know if any standard SD package
will allow you to do it, unless you call time something else, but then
....???]. One thought is that you could fix terminal time, or the time
period of simulation [say, you cannot spend > 6 months on a project], and
try to control other variables to satisfy this terminal time constraint. It
is sort of like the primal-dual formulation in linear programs - different
ways of looking at the same problem, except that here you just can't optimize
over time by traditional SD.

[3] This third one is an out-of-the-world solution - it doesn't have much
relevance to your question. It will interest the star trek fans of the next
century. I was thinking about how time can be a level when time is moving
at an independent constant "speed" in the simulation. Voila! The idea came:
why not let the two times differ? How can we do that?
Well, let the system be moving at some speed u relative to where you are
measuring the simulation. In that case simulation time t and system time t*
are different and given naturally by Einstein's t* = t (1 - u^2 / c^2) ^
(1/2). Treat velocity u as a control variable so that you can get almost any
value of t* based on u. If you go at c, the speed of light, t* = 0 and you
have total conservation of time - Nirvana achieved!! Of course there could
be other constraints so that the above given equation may not be valid.

Problems with the above theory of "relativistic system dynamics" - why
should the two times be different - why should the observer's frame be
different from the system's frame? Can it happen in a realistic situation? I
don't know the answers. Sort of like we won't age at all if we move at the
speed of light - but then what??... I need theoretical physicists to answer
these questions.

Because of this Space Station show, I thought maybe if we could
realistically achieve speeds approaching the speed of light, for example, by
using a black hole's fantastic gravity to speed us up as a sort of slingshot
device (and praying that our calculations were correct!!! else there
will be no trace of our history left), THEN it will be interesting...

Sorry Steve this is what happens when you are reading your emails, watching
TV and trying to eat at the same time.. :->)

Jaideep
From: "Jaideep Mukherjee" <
jaideep@optimlator.com>
George Richardson
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by George Richardson »

Jim Hines notes that the simple familiar weighted average scheme for
allocating time among several competing activities won't give sensible
results when one has more time available than is needed by the sum of all
the activities. Since I seldom have enough time, that seldom occurs to me,
so...

Why not then simply allocate the time to the j-th activity as

MIN( Tj, Wj*Tj/sum(Wi*Ti) * Total time available )

where as before Ti = the desired time on the i-th activity, and Wi is the
weight (priority, importance) on the i-th activity?

The MIN seems appropriate here, giving us a total time allocated that would
not exceed either the time available or the time one wants to spend. This
simple weighted average scheme looks like it would resolve Jim's observation,
is simple to explain, is very adaptable, and could be made to correspond,
it would seem, to all sorts of prioritization schemes people in real
situations really act on (which we all recognize is the crucial thing to be
modeling here).
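
In concrete terms, here is a minimal Python sketch of the scheme, with
illustrative numbers (all values assumed):

# Tj = desired time on activity j, Wj = its weight (priority).
def allocate(T, W, total_time):
    denom = sum(w * t for w, t in zip(W, T))
    return [min(t, w * t / denom * total_time) for t, w in zip(T, W)]

T = [10.0, 20.0, 30.0]   # desired times (assumed)
W = [3.0, 2.0, 1.0]      # weights (assumed)

print(allocate(T, W, 30.0))  # shortage: weighted shares, each capped at Tj
print(allocate(T, W, 90.0))  # plenty: the MIN keeps every allocation <= Tj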

But there must be a good reason why this is not a good idea, and why one
would go to something like ALLOCP (which I haven't looked at, but which sounds
a good deal more complex). So what's wrong with this, Jim (or others)?

...George

-------------------------------------------------------------------------
George P. Richardson
G.P.Richardson@Albany.edu
Chair, Dept. of Public Administration and Policy 518-442-5258
Rockefeller College of Public Affairs and Policy 518-442-5298
University at Albany, Albany, NY 12222          http://www.albany.edu/~gpr
-------------------------------------------------------------------------
"George Backus"
Member
Posts: 33
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by "George Backus" »

George Richardson wanted to use:

MIN( Tj, Wj*Tj/sum(Wi*Ti) * Total time available )

to get rid of the "over or under" allocation problems. Note that:

Sum(Wj*Tj/sum(Wi*Ti)) is unity and, therefore, the minimum function is not
needed.

There could, however, be a problem with the simple weighting function
proposed. Time, in the example of the server, is to be allocated based on
the perceived utility of using that time. Therefore, it is better to start
with a utility function and maximize the overall utility of the allocations.
Because we are discussing humans, the utility may change from
moment-to-moment or situation-to-situation without regard for independent
variables. Economists are finally officially recognizing the sometimes
irrational and often random nature of human choices -- even in optimizing
firms (see The Economist, Dec. 18, 1999, pp. 63-65). Under "human
condition" assumptions, the optimal allocation is called Random Utility
Maximization (RUM), as pioneered by Daniel McFadden among others. The
determination of the utility function is far from a guess. Methods to both
hypothesize self-consistent utility functions and estimate them can be found
in many works. To help derive the utility function itself, see Keeney,
R. L. and Raiffa, H., Decisions with Multiple Objectives, John Wiley & Sons,
New York NY, 1976. To estimate the allocation parameters (to be briefly
illustrated below), see for example, Ben-Akiva, M., Discrete Choice
Analysis: Theory and Applications, MIT Press, Cambridge, MA, 1985, page
103.

An example of a utility function in this case could be:

Ui = Ai + B0*Xi

The Ai could be the fraction of a normal day that should be allocated to the
project - or the time it takes to do that task at some specified level of
quality. The B0 (typically no subscript) may be "responsiveness" (related
to uncertainty or elasticity depending on the specification) that changes
the utility. Thus X, in this simple one independent-term example, could be
the backlog, the cost or benefit of using more time, the difference between
the expected time to work on a task and the actual average time spent on the
task, the quality of the results compared to what is needed, etc. All of these
could be included as separate terms in the utility function, with other Bs
and other Xis.

If the uncertainty, or random nature, of the utility is Weibull-distributed,
then integrating over all options (distributions) to maximize the total
utility results in an allocation or market share (MS) of the total time
of:

MSi = EXP(Ui)/sum(EXP(Uj))

This is the multinomial logit (MNL). If the uncertainty is Gaussian, the MS
is numerically found using what is called the probit formulation. There
are other variations, but these are the two most common, and the MNL seems to
fit best the data series and systems that SD models address.

With the MNL, the time used per task (Ti) is then

Ti = MSi * total time available.
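
A minimal Python sketch of this MNL allocation, with an assumed one-term
utility and assumed parameter values:

import math

# Ui = Ai + B0*Xi ; MSi = EXP(Ui)/sum(EXP(Uj)) ; Ti = MSi * total time.
A  = [0.5, 0.3, 0.2]   # baseline utilities per task (assumed)
B0 = 0.8               # responsiveness (assumed)
X  = [2.0, 5.0, 1.0]   # e.g. backlog on each task (assumed)

U  = [a + B0 * x for a, x in zip(A, X)]
MS = [math.exp(u) / sum(math.exp(v) for v in U) for u in U]

total_time = 8.0       # hours available
T = [ms * total_time for ms in MS]
print([round(t, 2) for t in T])   # the Ti sum to total_time by construction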

Contributors to this server have often noted problems with this formulation.
I argue that those are misguided views. The allocation is often an
"adjusting" process, and the actual MS, or the result (Ti
in this instance), may look like a level -- given the physics of the actual
process. One must consider whether the allocation is on the margin or on
the average -- even if the time constants of the model would indicate little
difference between marginal and average values relative to the problem being
solved. The *causality* of the process (the topic whose absence most gets me
on the soap box :-) may require information to feed back between the
allocation and the results -- in this case, between the time
used and the time needed. Most of the "failures" of the RUM come from
simply not recognizing whether a marginal or average market share needs to
be used for a self-consistent representation of causality. So far, I have
seen no situation where the derived MNL (rather than an assumed MNL!) does
not self-consistently capture the desired mechanisms/behaviors -- both in
the operational and extreme situations.

The ALLOC_P function in VENSIM is actually the "Sorted-dispatch" function
used in industrial engineering and most commonly used for dispatching power
plants when there are no other constraints than cost minimization. This
process simply cycles through a sorted (by utility) set of resources until a
demand (constraint) is satisfied/met. The problem is that it is an
all-or-nothing proposition. The resource is selected or it is not. It is true
that only a portion of the LAST resource selected may be used in this
algorithm, but the real allocation may often require a bit of every resource
to be used. (E.g., work may be selected before family or eating, or
exercise, but a bad end comes to those who never select family, eating or
exercise....) Andy Ford (and my own work) has often used the "dispatch
approach" for electric industry simulation. Some of this work can be seen in
Andy's new book "Modeling the Environment", but the reference works he noted
on electric utility simulation contain a more complete discussion.

In summary, while the "ALLOC_P" function has its purpose in physical,
quasi-discrete allocation processes, most system problems associated with
human decisions have an allocation process best represented using RUM
methods.

The other George

George Backus, President
Policy Assessment Corporation
14604 West 62nd Place
Arvada, CO 80004-3621
Bus: 303-467-3566
Fax: 303-467-3576
Cell: 303-807-8579
Email:
George_Backus@ENERGY2020.com
Bruce Hannon
Junior Member
Posts: 17
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by Bruce Hannon »

Very nice as usual George.
I am reminded of the way in which the Japanese produce steel. Nippon and I
suppose another one or two of their largest makers would go to the
government during a time when sales fell below capacity and make the
following offer: We can run only our most efficient plants and meet the
demand at minimum cost, laying off the workers in our other plants, or we
can run all our plants at reduced output, collectively meeting the demand
and keeping everyone employed. The added cost for the second option is X
and X is less than the cost of governmental support of those we unemploy.
Pay us X and we will choose the second option. The government seems to have
always done so. To American steelmakers, this payment appeared to be a
subsidy, so they howled for subsidies and import tariffs, not admitting that
they already had support in the form of governmental unemployment benefits.
Consideration of the larger system showed that the Japanese chose the
better option. Or so it seems to me.


Bruce Hannon, Jubilee Professor
Liberal Arts and Sciences
Geog/NCSA
220 Davenport Hall, MC 150
University of Illinois
Urbana, IL 61801
reply to: b-hannon@uiuc.edu
Vita: http://www.staff.uiuc.edu/~j-domier/people/hannon.html
Example Models: http://blizzard.gis.uiuc.edu/
Modeling Books: http://www.springer-ny.com/biology/moddysys/
July 99 Five day Modeling Workshop: http://web.aces.uiuc.edu/aim/model/
217 333-0348 office
217 244 1785 fax
Andy Ford
Junior Member
Posts: 9
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by Andy Ford »

George Backus makes a good point when he encourages us to think of the market share equation

MSi = EXP(Ui)/sum(EXP(Uj))

in a system dynamics model of the allocation of a limited resource.
The MSi stands for the market share of alternative i.
The EXP is the exponential function.
And the U stands for the "utility" of each alternative.

As George explains, the utilities are normally represented by simple algebraic expressions whose parameters may be estimated from observed market shares (or market shares stated in preference surveys). The parameters are normally converters in a system dynamics model. But if you thought people's attitudes might change (with experience) over time, they could be treated as stock variables as well.

A concrete example of this formulation appears as exercise #2 in Chapter 20 of my book "Modeling the Environment." This chapter deals with air pollution from vehicles and the "feebate policy" of promoting the sale of cleaner vehicles. The market shares apply to five types of vehicles (conventional cars, electric cars, etc.), and the attributes of each car are price, fuel cost, range, horsepower, etc. Many of the algebraic expressions for utility are simple, linear expressions (as in George's example). Others include quadratic terms, so nonlinear effects are introduced. The parameters were estimated from "stated preference surveys" of hundreds of people in California.

At this point, you are probably wondering whether it makes sense to allocate a fixed amount of time using the same general approach that works for allocating market shares for new vehicle sales. Many of the comments on this topic have focused on the need to maintain a cushion of idle time or slack time. Maybe we could apply George's suggestion if we explicitly acknowledge that "idle time" or "rest time" is one of the alternatives competing for a share of the available time.
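
A minimal sketch of that idea, with assumed utilities: "idle" simply enters
the share computation as one more alternative, so the slack cushion emerges
from the shares themselves:

import math

utilities = {"selling": 1.2, "delivering": 1.8, "training": 0.6, "idle": 0.9}
denom = sum(math.exp(u) for u in utilities.values())
shares = {k: round(math.exp(u) / denom, 3) for k, u in utilities.items()}
print(shares)   # the "idle" share is the endogenous cushion of slack time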

Now, if we were to proceed with this approach, we would expect to statistically estimate the parameters in the expressions for the utility of each alternative. Do we have the data (either observed preferences or stated preferences) to proceed?

Andy Ford
Program in Environmental Science and Regional Planning
Washington State University
Pullman, WA 99164-4430
USA

Phone: 509 335 7846
Email: FordA@mail.wsu.edu
Website: http://www.wsu.edu/~forda
George Richardson
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by George Richardson »

Jack Homer and I have been corresponding about the resource allocation
problem off-line, and I'm discovering just how messy this can be.
There's a lot here to think about, ranging from how to get a system
dynamics model to allocate in some optimal way, which people might not be
capable of doing, to how to get a system dynamics model to mimic closely
the behavior of individuals and groups allocating something like time on
tasks. The formulations one can get into make it look amazing that each of
us manages to do this every day.

I thought Jack's final summary of our conversation ought to be shared with
the list, so here is what he said:

>The "sorted-dispatch" function ALLOC_P is the right approach for rational
>allocation of time (or other resources) when a single decision-maker is
>involved. For multiple decision-makers, as in the case of a population of
>consumers with diverse utility functions, the logit or probit functions are
>appropriate, as George Backus has described. But, in practice, I think the
>"simple familiar weighted average scheme" (George Richardsons nice phrase)
>is a good enough approximation and is easy to implement (unlike, perhaps,
>ALLOC_P) and explain (unlike, perhaps, logit and probit). Just a conjecture
>for now, but I'm guessing that as long as all tasks are formulated as
>backlogs, the differences between these approaches will tend to be minimized
>by the structures strong compensating loop:
>More backlog --> more time allocated --> less backlog.
>By the way, I generally write the "simple familiar" equation as:
> Aj = [Wj*Tj / sum(Wi*Ti)] * MIN(Total time available, sum(Ti))
>where Aj=time allocated, Tj=desired time, Wj=importance weight.
>This should solve the problem with Richardson's equation that Hines
>mentioned.
>-Jack Homer

...GPR

-------------------------------------------------------------------------
George P. Richardson
G.P.Richardson@Albany.edu
Chair, Dept. of Public Administration and Policy 518-442-5258
Rockefeller College of Public Affairs and Policy 518-442-5298
University at Albany, Albany, NY 12222          http://www.albany.edu/~gpr
-------------------------------------------------------------------------
"George Backus"
Member
Posts: 33
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by "George Backus" »

It may ultimately be shown that David Peterson is correct in his
perspectives on RUM allocation, but I do not believe the objections he
raised bear out.

I still contend that Alloc_p is an ALGORITHM to allocate resources. As
taken from the VENSIM C code and Wood's own work, its only departure from a
"sorted dispatch" is the addition of "width", which allows a unique (as
opposed to a mathematically arbitrary or multi-valued) allocation across
options with overlapping utility (priority) distributions. In this
situation, the alloc_p allocation is similar to RUM but has a simple uniform
distribution rather than the conventional Weibull or normal distributions.
Note that any distribution other than the Weibull, exponential, and uniform
does not have a closed-form solution for the RUM market share.

The Wood algorithm does indeed pass the six criteria that David notes. My
disagreement is with the appropriateness of those criteria in a causal SD
representation. They are valid but, I believe, are not primal. If the
algorithm validly describes how the system (a firm?) actually does the
allocation, then it should be used -- just as I would also claim that we
legitimately use an L-P model in our SD model to dispatch power plants in
the deregulated California market --- because that is what is actually used. From
strict orthodox SD "philosophy," it is not legitimate to include an
algorithm that simply reproduces the hoped for results of an allocation or
any other mechanism. As SDers, we want to simulate the actual mechanism,
as best we can understand it, to capture its dynamic (system) consequences.

Here is a revised set of annotated "rules":

1) The allocation should sum to total supply or demand, whichever is part of
the choice function.

Each independent choice should be separately simulated. Clearly, supply and
demand choices come from different entities. Strictly speaking, "orders"
are the "supply" of "demand" being allocated - thus there is only "supply"
from many entities.

2) All allocations should be positive or zero.

This is the same as David's original and is a quality of RUM (also called
qualitative choice theory, QCT, in its implementation).

3) The allocation should be based on the perceived utility of the choice
compared to the utility of all other choices in the choice set.

This item originally wanted the allocation to be no more than requested.
With a properly parameterized model, the distribution of the allocation can
be very tight such that the allocation comes out exactly as requested under
all conditions where the allocation can be provided. The reality is,
however, that other than for an idealized discrete allocation, the time or
resource required is not perfectly known. The value of using that resource
for that task remains in comparison to the other choices. If the task is
critical, one may want to allocate more time than "requested" -- even if the
net effect is slack time. Whether you want to pass inspection 40% of the
time (like some Intel chips once did) or be out "six-sigma" (like GE), the
time allocated -- even for mechanized manufacturing -- changes.

Most importantly, the statement that "No allocation should be more than the
amount requested" is a statement of preference (perceived utility). It is
an input and not the mechanism. That preference should be part of the
utility specification. This claim is not true for physically discrete
processes as will be discussed later.

4) Under reference operating conditions, the "requested" allocation of
resources is the allocation that occurs.

A properly specified QCT (RUM) will, by definition fulfill this property.
Its background theory captures how people make choices and how they use the
information to make choices. Having widely separated priorities would have
been compensated for in the utility specification -- by definition -- to
insure the correct (desired) allocation under "reference operating
conditions."

The original statement focuses on "If supply is adequate." This is only one
input decision term. Clearly, given market conditions, prices, costs, etc.,
a greater preference to minimize costs or risk should change the allocation.
Supply capability is again an input to the mechanism and not the mechanism -
except in physical discrete allocations.

5) The allocation should be based on the relative value of the utility of
each choice. Each option receives the proportional share as based on the
integrated distribution of weighted utility over the choice space.

This is identical in concept (and reality) to David's original and is a
quality of QCT.

6) If the utilities are equal, the allocations are equal.

This is identical in concept (and reality) to David's original and is a
quality of QCT.

7) Physically quantized units can only be allocated on the margin by QCT
but can be allocated on the average (the aggregate) by sorted-dispatch or
other optimization algorithms.

If the issue is orders or orders shipped, this is a discrete problem
(despite its being modeled in a continuous fashion -- please let us not
start the discrete-versus-continuous discussion again ... :-), and the
sorted dispatch is appropriate for times when supply is adequate. When the
supply is inadequate, the utility of each customer (in the eyes of the
supplier) becomes critical and QCT is most appropriate.

Note that if orders by a customer are represented by a level, then the
utility of the marginal shipments (the allocation) is a function of whether
the orders have been delivered. This approach of using the allocation on
the margin rather than the average does ensure that the shipments do not exceed
orders. The decision is now causal, and the results are not "assumed" as
they would be if an aggregate or average allocation algorithm were used.
(The use of an average allocation may still be appropriate for simulating
problems not focusing on the allocation process as part of the problem.)

This then brings up the supposed worst problem with QCT (not mentioned by
David): market shares can jump to 99+% for a market participant who can
serve <1%. This problem is well presented in the VENSIM Manual appendix E.

Most small consulting firms, Ventana Systems and mine included, spend
possibly 100% of their time having staff over or under-booked: marketing
efforts result in cycles of boom or bust. (Who says SDers are clever
enough to solve the dynamics of their own businesses?) The new orders come
from the perceived utility -- choice -- based on attractiveness, price,
preferences, or whatever, of the offered product. As new orders come in
during a "boom" period, the company-internal choice function and physical
mechanisms add backlog, change pricing, reject orders, bring on temporary
help, bring on partnered companies, redirect orders to competitors, and even
attempt mergers. If all the relevant choices are simulated on the margin,
then the real implications and responses to excess market share dynamics are
captured in a realistic manner.

I continue to assert that QCT appears to properly simulate all human-valued
decision processes. If the utility function is carefully specified, and
marginal choices are used when feedback exists between the primary choice
and the primary result/impact, then ALL of the objections to QCT vanish.
The sorted dispatch allocation (or even an L-P or some other optimization, if
it is ACTUALLY used in the real system) is eminently appropriate for physical
allocation of discrete (quantized) resources.

Now does everybody have that sick to their stomach feeling?..

George

George Backus, President
Policy Assessment Corporation
14604 West 62nd Place
Arvada, CO 80004-3621
Bus: 303-467-3566
Fax: 303-467-3576
Cell: 303-807-8579
Email:
George_Backus@ENERGY2020.com
Bryan.A.James@gd-is.com
Junior Member
Posts: 3
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by Bryan.A.James@gd-is.com »

>Just a conjecture
>for now, but I'm guessing that as long as all tasks are formulated as
>backlogs, the differences between these approaches will tend to be
>minimized
>by the structure's strong compensating loop:
>More backlog --> more time allocated --> less backlog.

Backlog ... couldn't phrase it any better. My To-Do list hasn't been empty
in years :-)

Seriously, that probably captures many people's approach to time resource
allocation. But I would make any activity's priority a function of time.
For example, I may know that I need to get to the store for food supplies,
but numerous other very high priority activities must be addressed first.
Eventually, however, once essential food supplies have run out and a meal
cannot be prepared, or only a very odd one can be, that trip to the store
has moved up to the top of the priority list. Reactionary rather than
proactive, to be sure, but my observation is that many people and
organizations operate this way.

Bryan James
General Dynamics Information Systems
Denver, CO
303-649-7548
bryan.a.james@gd-is.com
"Will Glass-Husain"
Junior Member
Posts: 2
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by "Will Glass-Husain" »

Hi,

Interesting discussion on the resource allocation problem. I frequently use
a variant of the ALLOCP formulation that allocates resources with a
prioritized "all-or-nothing" formula. The structure takes a list of
resource requests and a list of priorities. It then sorts the resource
requests by priority and allocates them in order.

The implementation is done using Powersim's array capability. (The same can
likely be done with the other SD programs that support arrays).

The first trick is to sort the requests into order by priority. I do this
by creating a 2D array to calculate relative positions, then rearranging the
requests into the right order.

Secondly, the resource allocation is determined by calculating a running
total and comparing with available supply. In other words, if resources A,
B, C, D request 4 each, the running total is [4, 8, 12, 16]. I determine
the allocation by comparing the supply to the running total for each
element. If I have a supply of 10 available, I should supply 4 for the first
two elements and two for the third.

Resource Request: [4, 4, 4, 4]
Running Total: [4, 8, 12, 16]
Actual Supply: [4, 4, 2, 0] sum = 10
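
A minimal sketch of the same sort-and-running-total logic in plain Python
(rather than Powersim arrays):

def allocate(requests, priorities, supply):
    order = sorted(range(len(requests)),           # indices sorted by
                   key=lambda i: -priorities[i])   # priority, highest first
    allocation = [0.0] * len(requests)
    remaining = supply
    for i in order:                    # all-or-nothing down the sorted list;
        allocation[i] = min(requests[i], remaining)   # only the last filled
        remaining -= allocation[i]                    # element is partial
    return allocation

print(allocate([4, 4, 4, 4], [4, 3, 2, 1], supply=10))   # -> [4, 4, 2, 0]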

I'd be glad to share an example model on request.

Incidentally, I also have an implementation of the AllocP function built
with Powersim arrays. This can be used to combine the prioritized "all or
nothing" approach with the relative attractiveness formula. However, it
is a good deal harder to explain, so I tend to use it less often.

Best regards,

WILL


------------------------
Will Glass-Husain <
wglass@powersim.com >
Powersim-
Simulator Solutions Group
667 Folsom St
San Francisco, CA 94107

Phone: (415) 977-1391 (direct)
Fax: (240) 218-6273
http://www.powersim.com/
------------------------
"Jaideep Mukherjee"
Junior Member
Posts: 15
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by "Jaideep Mukherjee" »

On resource allocation of time, after GPR's idea of allocation extending to
separate activities (unlike my thinking, where I first suggested using
optimal control to minimize TOTAL time - the so-called Free Horizon problem),
we have had interesting discussions. I haven't looked at alloc_p yet, so my
ideas below don't relate to the battle of the giants :-)

In the absence of clarification by Steve (the person who asked the original
question) as to what his specific problem is, I am letting my imagination
run wild - however, no star trek ideas this time. We consultants love this -
we WILL solve the problem we know best how to solve, irrespective of the
real problem (just kidding folks - good consultants dont do this). HENCE
THE LESSON TO GET REAL PROBLEMS SOLVED: talk, communicate, facilitate,
elicit, ...repeat....

TOTAL time for a project (as a resource available to it) can be minimized
using optimal control, specifically the minimum principle, for an appropriate SD
model. When you allocate parts of TOTAL time to separate activities, the
corresponding representation in the discrete (as opposed to continuous
systems described by SD models) world is in terms of multi-stage dynamic
programming problems (again, a subset of optimal control problems). In case
of continuous time problems, in the general case, the dynamic programming
solutions are expressed as nasty partial differential equations (so-called
Hamilton-Jacobi-Bellman equations). So my gut thinking would be that we
should look at each problem carefully to see which solution would be most
applicable. If we can abstract from the problem sufficiently to transform it
to a discrete multistage deterministic DP problem, then we are done with an
optimal allocation in hand (any operations research text would have examples
of this). But if nonlinear dynamics is very important (hence a "system
dynamics" model is necessitated), follow some of the approaches that Backus,
Peterson, GPR, Hines are talking about. If you still want optimal solutions,
then it will really depend on the problem whether we can even get any
solution or not - if it is too highly nonlinear and ill-behaved (stuff of
SD!!) then there is little hope - that is why I think a repeated regime of
approximate optimizations ("optimlations", using different simpler
scenarios) may give us a narrow band of policies to work in, and, with time,
with the developments in math, computing power, the out-of-the-box thinking
of starving grad students, these narrow bands for policies will transform
into sharp curves.
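
For the discrete multistage case, here is a minimal sketch of the textbook
DP allocation; the return tables are assumed purely for illustration:

# Allocate integer units of time across activities to maximize total return,
# one activity per DP stage. returns[j][u] = payoff of u units on activity j.
def dp_allocate(returns, units):
    best = [0.0] * (units + 1)     # best value over the activities so far
    choice = []                    # per-stage argmax tables for traceback
    for r in returns:
        stage = [max(range(u + 1), key=lambda k: best[u - k] + r[k])
                 for u in range(units + 1)]
        choice.append(stage)
        best = [best[u - stage[u]] + r[stage[u]] for u in range(units + 1)]
    alloc, u = [], units           # trace back the optimal decisions
    for stage in reversed(choice):
        alloc.append(stage[u])
        u -= stage[u]
    return list(reversed(alloc)), best[units]

returns = [[0, 5, 8, 9],           # diminishing returns (assumed numbers)
           [0, 4, 7, 9],
           [0, 6, 8, 9]]
print(dp_allocate(returns, units=3))   # -> ([1, 1, 1], 15)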

To me an even more intriguing thought is the extension of dynamic
programming problems with multiple players involved as in dynamic game
situations. We are talking here about joining three fields - system
dynamics, game theory, and optimal control (via dynamic programming, not the
minimum principle, which I explored in my PhD work). So that all the above is not
mushy-fluffy stuff, let me think of a real example where this may be
applicable (if this describes a real problem, then we should work more in
this area to solve it): think of Bell Atlantic getting a permit to sell
long-distance phone services recently (yesterday) in the US. It has resources of
people, intellectual capital, materials, time, etc. needing allocation to
different activities. The other players in the market are AT&T, many smaller
players, the Federal government, and so on. The "system dynamics" describing
conditions facing each company would be the superset of SD describing
conditions for each of these players, and the SD describing the common world
facing all of them. Now they have the problem of optimizing each of their
multiple resource allocations, satisfying all the regulatory constraints,
and coming out tops in the market too. How does each go about doing it? This
is the subject of further research, and even if not relevant now, it will
become so in the future... it is simply a matter of one company breaking off
and doing it; then it will be like the bandwagon effect we see in the internet
revolution, or so I hope in these wee hours of the morning...

Best regards

Jaideep

Jaideep Mukherjee, PhD
jaideep@optimlator.com
Bob Eberlein
Member
Posts: 49
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by Bob Eberlein »

Thanks to George and George for their postings on this.

The simple formulation suggested by Jack Homer via George,

> Aj = [Wj*Tj / sum(Wi*Ti)] * MIN(Total time available, sum(Ti))

still suffers from the problem that if there is sufficient time
available, some will get more than their Ti while others will get less
unless the Ws are all the same. If the Ws are the same, you get the
vastly simpler allocation technique

Aj = Tj/SUM(Ti) * MIN(Total time available, SUM(Ti))

which I do use quite a bit. If you want something that works in
times of plenty, and is continuous getting there, the only other proper
solution I have seen is the ALLOCP function.
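
A minimal sketch, with illustrative (assumed) numbers, of that problem and of
the unweighted special case:

def homer(T, W, total):          # Jack Homer's weighted formulation
    denom = sum(w * t for w, t in zip(W, T))
    pool = min(total, sum(T))
    return [w * t / denom * pool for t, w in zip(T, W)]

def unweighted(T, total):        # the equal-weights special case
    pool = min(total, sum(T))
    return [t / sum(T) * pool for t in T]

T = [10.0, 20.0]                 # desired times (assumed)
W = [3.0, 1.0]                   # unequal weights (assumed)

print(homer(T, W, total=30.0))   # plenty: [18, 12] -- the first gets more
                                 # than its Ti of 10, the second gets less
print(unweighted(T, total=30.0)) # plenty: [10, 20] -- exactly Ti apiece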

If you are only trying to carve up a pie into pieces that add up to 1,
as George Backus is, then there are a variety of ways to go. George is
clearly very taken with positing a utility function, and then using
utility maximization as the technique for performing the allocation. I
dont share his enthusiasm for that particular approach but the notion
of using a multistage formulation is a useful one.

The criteria that David Peterson laid out represent the way that he
wants an allocation process to work. They are a combination of both the
physical reality that needs to be respected - no one can supply more than
they have - and the requirement that the allocation be manageable by the
specification of priorities.

David is precisely correct that ALLOCP is the only thing around that
satisfies both types of criteria - since unprioritized
allocation does not allow different priorities. That said, it would not
be too hard to write a computer program that maximizes arbitrary utility
functions subject to availability constraints. This program would
satisfy David's stated objectives and George's at the same time.

To recap: the difficulty that started this discussion is that when there is a
limited amount of a resource you need to allocate it. This is a tough
problem and only two formulations presented are sensible enough to do
this and not give one agent (task or product) more than they ask for
while giving another less. Those are the simple unweighted allocation
above and the ALLOCP formulation.

Bob Eberlein
bob@vensim.com
John Sterman
Senior Member
Posts: 117
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by John Sterman »

The discussion on formulations for the allocation of resources to multiple
customers has been interesting; thanks to all for useful contributions.
The focus of the discussion has often been finding a parsimonious
formulation (defined as one you can explain successfully to your client)
that meets various requirements (nonnegativity of allocations, sum of
shares = 1, etc.).

I'd like to make a few observations that abstract from the merits of allocp
vs MNL.

A good modeler will first try to capture the way the processes relevant to
the client's purpose actually work, "warts and all." After the modeler and
client have confidence that the model captures the structure and decision
making behavior of the people in the system appropriately for their purpose,
they can consider new policies, including better resource allocation
schemes, and here optimization methods are useful. These two different
phases of modeling have been confused in the discussion. There has been
much concern in the discussion so far that various formulations may
generate a condition in which some customers receive more than the resource
they request. This is viewed by some as a flaw because it is suboptimal or
leads to anomalous model behavior.

Certainly, some formulations can give far too much to some customers under
some circumstances. However, in the real world resource allocation is
often imperfect, and customers often receive more than they request or is
optimal even while others are starved. In actual situations, extra
resources are sometimes detected (often imperfectly and with delays) and
then the allocations are re-adjusted (that is, a set of negative loops work
to correct imbalances). The most important contribution to the discussion,
I think, was Jack Homer's observation that these imbalances accumulate in
stocks (e.g. backlogs of work to be done) and that the model should
represent these stocks explicitly. These backlogs generate pressures that
adjust resource allocation, so that the errors created by the imperfect
forecasting and allocation routines managers actually use (e.g. to allocate
staff time to different activities in a project) are eventually detected
and corrected. This network of compensating negative loops sometimes leads
to "pretty close to optimal" allocations. If there are long delays in
detecting or responding to the imbalances, however, then there can be
extended periods of suboptimal allocation, instability, and other
dysfunctional dynamics, all of which are frequently observed in real
systems.
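
A minimal sketch (all parameters assumed) of that compensating structure:
even a crude proportional allocation rule is corrected by the backlogs it
feeds:

# More backlog -> more time allocated -> less backlog.
dt, horizon = 0.25, 40.0      # weeks
backlog = [30.0, 10.0]        # work waiting on two activities (hours)
inflow  = [3.0, 2.0]          # new work arriving (hours/week)
total_time = 5.0              # hours available per week (matches inflow)

t = 0.0
while t < horizon:
    # allocate all available time in proportion to backlog "pressure"
    alloc = [total_time * b / sum(backlog) for b in backlog]
    backlog = [b + (i - a) * dt for b, i, a in zip(backlog, inflow, alloc)]
    t += dt

print([round(b, 1) for b in backlog])   # settles near [24, 16], where each
                                        # activity's completion matches inflow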

Accurate modeling of resource allocation procedures in organizations or
markets may in fact require a formulation that allocates resources
suboptimally. Far from being a flaw, the inability of a resource
allocation model to give each customer exactly the resource they demand (or
their priority-weighted share when the resource is scarce) may be a
desirable feature. Of course, the model of resource allocation, like any
model, must be solidly grounded in first-hand observation and data on the
actual process in the real organization - these comments should not be
taken as a justification for sloppy modeling or formulation flaws.

Part of the problem comes from models in which people omit this important
stock and flow structure and instead attempt to treat the resource to be
allocated as a pure flow. For example, a modeler may treat the work to be
done in a project as a flow, and attempt to allocate time to different
activities so that the flow of work completed equals the desired flow in
every phase. By glossing over the network of negative feedbacks that balance
the omitted backlogs of work in the real system, the modeler must come up
with a formulation that prevents overallocation because the model lacks the
real world structure that corrects the imbalances created by the actual
allocation process. As is often the case, modelers get into trouble when
they omit the network of feedbacks that in the real system responds to
discrepancies between the desired and actual states of the system and the
stock and flow structures that create disequilibria and enable inflows to
processes to differ from outflows.

In the real world, the disequilibrium stock and flow structures and
negative loops that regulate the stocks compensate for errors in the mental
or formal models managers use to allocate resources. These same loops
similarly compensate for the uncertainty and errors in the model parameters
and formulations capturing the priorities (attractiveness functions or
utilities) associated with each customer and their demand for resources.
Any formulation that only "works" for a particular or very narrow range of
parameter values is suspect since it is unlikely the modeler can estimate
these parameters to the accuracy required. Clients should beware of models
that require a particular value of the "width" parameter in allocp or that
only work if the underlying utilities in the MNL model are Weibull and not
Gaussian.

(One exception to the principle that one should explicitly model the stock
and flow structure of systems and the negative loops that respond to
imbalances arises when the equilibrating loops operate on time scales much
shorter than those of other dynamics of interest. Such circumstances can
lead to the stiff system problem in the numerical solution of differential
equations. There are various numerical methods to handle such situations
(none of which are, I believe, implemented in any of the popular simulation
software packages). It is then sometimes appropriate to treat these fast
processes as always being in equilibrium. A good example is provided by
Erik Mosekilde's model of chaos in the rat nephron, in which pressure
equilibration in the nephron is very fast relative to the diffusion of
waste products and the neurotransmitters that regulate the porosity of the
capillary/nephron boundary. This nifty model is available on Tom
Fiddaman's web site <http://home.earthlink.net/~tomfid/>, then go to the
model library.)

A second comment on the specific case where worker time is the resource to
be allocated, for example in a project setting: Many problems arise
because time not spent working on the project is modeled as a residual, and
not as an activity with a priority that competes against the pressure to
work on various project activities. In the real world, as some pointed out
in the discussion, you build up a backlog of non-work related activities
when you spend more time on the project. The greater your workweek, the
smaller the time spent sleeping, being with friends and family, maintaining
your personal capital stocks (car, house, clothes, inventories of food and
supplies, bill paying and record keeping), and on personal health and
hobbies. As your sleep deficit builds, the complaints of your family and
friends grow, and the piles of dirty laundry spread, your ability to work
long hours on the project declines. Thus just as you should model the
backlogs of tasks to be done and the negative loops that adjust time
allocations among the various work-related activities, you should model the
stock of non-work tasks to be done. Such models give more realistic
behavior (you can work long hours for a little while, but then have to cut
back to work off the debt you built up to non-work activities). These
models can also easily be extended to include other important feedbacks:
if, despite the pressure, you continue to spend too much time at work, your
friendships fade, your non-work social skills and hobbies erode, and you
may find yourself divorced or without a meaningful relationship with your
children. These conditions feed back to productivity, absenteeism, and
employee turnover. Omitting these feedbacks in a human resource model may
be far more serious than the choice of allocp vs MNL.

The more general point is that all the uses for resources should be
explicitly modeled, without treating one as a residual. That is, the set
of resource demands in an allocation model should be mutually exclusive and
exhaustive. Most of the formulation problems people have pointed to in the
discussion of time allocation so far result from failing to include all the
demands on peoples time in the model.

John Sterman
From: John Sterman <jsterman@MIT.EDU>
Jack Ring
Junior Member
Posts: 2
Joined: Fri Mar 29, 2002 3:39 am

Resource Allocation Models

Post by Jack Ring »

It seems to me that one should model not only the "stock" of unsatisfied
resources and feed that forward into the next allocation operation but also
should model the Resource Request Error and feed that forward to the
management edification function.

But I am disturbed by the theme that one should model what people do and
how they make decisions. I thought the purpose of modeling was to discern
the key demands and resources so that the allocation process itself is
improved (including manager behaviors).


Jack Ring, 32712 N. 70th St., Scottsdale, AZ 85262-7143
480-488-4615, Cell) 602.369.4615
From: Jack Ring <jring@amug.org>
"Jay W. Forrester"
Senior Member
Posts: 63
Joined: Fri Mar 29, 2002 3:39 am

Resource Allocation Models

Post by "Jay W. Forrester" »

Jay Forrest
Junior Member
Posts: 7
Joined: Fri Mar 29, 2002 3:39 am

Resource Allocation Models

Post by Jay Forrest »

>But I am disturbed by the theme that one should model what people do and
>how they make decisions. I thought the purpose of modeling was to discern
>the key demands and resources so that the allocation process itself is
>improved (including manager behaviors).
>Jack Ring, 32712 N. 70th St., Scottsdale, AZ 85262-7143

As the bulk of my work lies in causal mapping of problems in team
environments, this observation is not 100 percent pertinent, but it does
hold some pertinence to the question posed.

By beginning with the existing situation one builds credibility and
understanding with the team. In my work we quantifiably identify leverage
points in causal loop diagrams. This provides quick insight into where
likely levers lie for changing the behavior of the system as it is. Once
the causal loop model is validated and accepted, it can serve as a point of
departure for identifying causal relationships which, if created, would
provide high leverage for moving the system or, if broken, would reduce
the impact of a powerful lever (one that is out of the control of the team),
thus benefiting the team.

A similar approach will work with system dynamics, though it is my
experience that the SD process can easily lose the team's attention
because results are too slow in being generated. SDSG and I have used our
approach for four years now and have found that the causal approach is much
faster on the front end (for us, at least) and provides a strong basis for
building an SD model in a much shorter time frame which the team will
quickly accept and understand.

One of my biggest concerns about the general practice of SD has long been
that too many practitioners have failed to build understanding in the
client and that, as a result, the client ultimately fails to benefit from
the SD modeling. The most common criticism I have heard from potential SD
clients has been related to "black box models."

I would agree that your goal is ultimately to get to what can be. However,
I hold serious concerns that skipping over the existing situation to the
optimum would create perceptual gaps that may limit the client's ability to
relate to, understand, and apply the work. Starting at the "end" seems to
me to invite "black box" answers!

In part, the answer may reside in the question of whether you consult in
expert mode or in facilitation mode. Clearly I try to build the clients
ability to deal with the issue. Perhaps you provide answers. Which is
better depends on which is successful for you.

Jay Forrest



11606 Highgrove Drive
Houston, Texas 77077
Tel: 713-503-4726
Fax: 281-558-3228
E-mail:
jay@jayforrest.com
"Phil Odence"
Junior Member
Posts: 15
Joined: Fri Mar 29, 2002 3:39 am

Resource allocation models

Post by "Phil Odence" »

Rod Brown is right, time is "a tricky issue." And, in my experience, it
definitely causes confusion for novice modelers. Maybe this will help
clarify.

I am pretty confident that it never makes sense to think about time as a
stock. It is the ultimate flow; it never accumulates. (Although as Rod
points out, there are some things which, for convenience, we quantify in
units of time. And they can be accumulated, e.g. hours of film footage or
months of to-be-completed work in one's backlog.)

The way we tend to define activities in time allocation models, a person can
only do one at a time...at a particular instant in time, one is either
selling work, or doing work, but not both. Within a sufficiently short
timeframe one's time is 100% allocated one way or the other. However, the
timeframes of SD models tend to be such that even within a DT it's possible
to be engaged in different activities. And so, it makes sense to use some of
the fractional allocation schemes that have been discussed.

Perhaps a less confusing way to think about allocation is that one is
allocating not time per se but rather a flow of person-hours (or weeks or
months). Note that those hours can't be stored up; if you don't use 'em they
go away. Even in a model of an individual, we allocate not hours, but
person-hours, in that case with just one person.

Rod's point about stickiness (a good one) should not lead one to model time
as a stock. If I understand that issue, it's that within the rhythm of a
particular model, it may be difficult for those whose behavior is being
modeled to "turn on a dime" and rapidly reallocate to different activities.
Without getting into a lot of detail, I'd suggest modeling this with a
scheme by which things adjust to a target (a smooth-type structure).
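
A minimal sketch of that adjust-to-target structure, with assumed numbers:

dt = 0.25                  # weeks
adjustment_time = 4.0      # time to shift one's habits (assumed)
fraction = 0.2             # current fraction of person-hours on selling
target = 0.6               # newly desired fraction

for _ in range(40):        # ten simulated weeks of first-order smoothing
    fraction += (target - fraction) / adjustment_time * dt

print(round(fraction, 3))  # ~0.57: well on the way to 0.6, but not instantly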

L. Philip Odence
High Performance Systems, Inc.
45 Lyme Road, Suite 300
Hanover, NH 03755
603.643.9636.x107, fx 9502
http://www.hps-inc.com