If you surf the web for "random utility maximization" "discrete choice"
"LOGIT" "generalized extreme value" "MCI market share" and similar terms
you'll find a lot of references to this. I'll save you the trouble by
listing three comprehensive online books:
http://elsa.berkeley.edu/books/choice2.html
http://164.67.167.7/MCI_Book/introduction.htm
http://oso.epfl.ch/mbi/papers/discretechoice/paper.html
These are all pretty industrial-strength. I suspect the first would be most
helpful; I haven't looked at it for a while, but the author does plenty of
work including noneconomic input variables. All of the above have a
statistical estimation flavor, and the restricted functional forms employed
should be used with caution, for robustness reasons I'll outline below.
There is a common practice in SD often called Us/(Us+Them), in which
share[us] = attractiveness[us]/SUM(attractiveness[all options])
attractiveness[us] = F(price[us],quality[us],service[us],...)
This is typically extended by a total market demand, so that
demand[us] = share[us]*total demand
total demand = G(SUM(attractiveness[all options]))
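The Us/(Us+Them) logic above can be sketched in a few lines. The attractiveness values and the saturating form for G below are made up for illustration; the source only says G is some function of total attractiveness:

```python
def shares(attractiveness):
    """Us/(Us+Them): each option's share of total attractiveness."""
    total = sum(attractiveness.values())
    return {k: a / total for k, a in attractiveness.items()}

def total_demand(total_attr, potential=1000.0, half_sat=3.0):
    # hypothetical saturating demand curve for G; any increasing form works
    return potential * total_attr / (half_sat + total_attr)

attr = {"us": 2.0, "them_a": 1.5, "them_b": 0.5}  # made-up attractiveness values
s = shares(attr)
demand_us = s["us"] * total_demand(sum(attr.values()))
```

Note that G here takes the total, not the average, per the caution below.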
Note that using the average instead of the total attractiveness to drive
total demand - a seemingly innocuous choice - can have perverse effects, so
you have to be a bit careful.
If people were consistent and measurements were perfect, you'd expect all
share to go to the single most attractive item. In reality of course people
are different and you can't measure everything, so share gets smeared out
over various items due to the influence of diversity in preferences for
unmeasured attributes. At one extreme, exchange rate spreads across markets
are extremely small because price is the only thing that matters,
information is pervasive, buyers are highly price sensitive, etc. At the
other extreme, market share of fat and lean remains 50-50 regardless of
price, as Jack Sprat can eat no fat, and his wife can eat no lean.
The big choice here is the form of the attractiveness function -
multiplicative weighting is common (yielding an MCI model):
attr[us] = price[us]^e * quality[us]^k * service[us]^r ...
Obviously all these inputs should be normalized so that this is
dimensionally consistent.
Exponential is also common (yielding logit or MNL):
attr[us] = EXP( e*price[us] + k*quality[us] + r*service[us] ... )
The nice thing about both of the above is that they can be linearized for
easy estimation.
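Side by side, the two forms look like this (the coefficients e, k, r are made-up illustration values). Taking logs shows why each is easy to estimate: log(attr) is linear in the logged inputs for MCI, and linear in the raw inputs for logit:

```python
import math

# made-up coefficients for illustration only
e, k, r = -2.0, 1.0, 0.5

def attr_mci(price, quality, service):
    # MCI: log(attr) = e*log(price) + k*log(quality) + r*log(service),
    # linear in the logged inputs
    return price**e * quality**k * service**r

def attr_logit(price, quality, service):
    # logit/MNL: log(attr) = e*price + k*quality + r*service,
    # linear in the raw inputs
    return math.exp(e*price + k*quality + r*service)
```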
However, you should be suspicious of anything that's easy. Often the right
form of the attractiveness function will be some blend of expressions
that's not linearizable. Reality checks or thought experiments about
extreme conditions are a good way to verify this - e.g. what happens if
price goes to 0 (probably not 100% or infinite share)? what happens if
advertising goes to infinity (again probably not 100% share)? what happens
if advertising goes to 0 (not 0 share)?
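To make the first check concrete: a pure two-option MCI share model with price as the only attribute (a made-up elasticity e = -2 and a competitor price of 1) sends share all the way to 100% as our price goes to 0, which usually fails the reality check:

```python
def share_us(price_us, price_them=1.0, e=-2.0):
    # two-option MCI share with price as the only attribute;
    # e and price_them are made-up illustration values
    a_us, a_them = price_us**e, price_them**e
    return a_us / (a_us + a_them)

# reality check: as our price -> 0, share -> 1, which is rarely realistic
for p in (1.0, 0.1, 0.001):
    print(p, share_us(p))
```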
There are a number of common extensions of this framework. The most
frequent are hierarchy - e.g. users first choose whether to fly or drive,
then choose an airline or a type of car - and correlation structure (i.e.
stocks) in the attractiveness components or shares - e.g. due to experience
effects, inertia, perception delays, etc. A fancier approach (detailed in
the first book) measures the diversity of users (as described by
distributions of the e, k, r ... parameters describing their choices).
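The hierarchy idea can be sketched as a two-level (nested) logit, where the upper fly/drive choice sees each nest through its logsum (inclusive value); all the utility numbers below are made up, and the scale parameter is just illustrative:

```python
import math

def logit_shares(utils, scale=1.0):
    # standard multinomial logit share formula
    exps = [math.exp(scale * u) for u in utils]
    total = sum(exps)
    return [x / total for x in exps]

air_utils = [1.0, 1.2, 0.8]  # made-up utilities for three airlines
car_utils = [0.5, 0.9]       # made-up utilities for two car types

# inclusive value (logsum) summarizes each nest for the upper level
iv_air = math.log(sum(math.exp(u) for u in air_utils))
iv_car = math.log(sum(math.exp(u) for u in car_utils))

# scale < 1 at the upper level reflects correlation within nests
mode_shares = logit_shares([iv_air, iv_car], scale=0.5)
airline_shares = [s * mode_shares[0] for s in logit_shares(air_utils)]
```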
As far as I know these methods fall apart when you introduce limited
supply. I can think of two typical examples:
First, if you think of a product like telecom services, there are hard
service area boundaries that restrict the competitive set available to
potential subscribers - some have no choice, some have one choice, and some
may have two or three. This creates separate unserved, protected, and
competitive niches for the various carriers, which may need to be addressed
separately (I think in theory this could be handled by brute force
application of many simple models or an appropriate GEV model - see the
first book). The problem is that with many actors there will be zillions of
permutations.
Second - and more intractable - is the case of limited capacity or
stockouts (airline seats, cans of soup, cellular calls). These limits break
the assumption that consumer choices are independent, since you can't buy
something if I've just bought the last one. Vensim's ALLOCATE BY PRIORITY
function solves this problem, but with less-satisfying assumptions about
the random component of choice. An LP can also be used for allocation in
such cases, but is even less attractive as it assumes no randomness -
winner takes all up to capacity limits. Other strategies for solution
include taking orders for choices that exceed capacity, backlogging the
excess, and using the backlog as a component of attractiveness. All of the
above have problems so I keep hoping someone will do a dissertation on
discrete choice subject to capacity constraints (or point me to the right
paper).
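For what it's worth, the order-taking strategy can be sketched crudely: allocate demand by logit share, clip each option at capacity, and re-spread the excess over options with room left. This is a heuristic of my own construction with made-up numbers, not a validated model:

```python
import math

def logit_shares(utils):
    exps = [math.exp(u) for u in utils]
    total = sum(exps)
    return [x / total for x in exps]

def allocate_with_capacity(demand, utils, caps):
    """Allocate demand by logit share, respecting capacity limits.
    Capped options drop out and their excess is re-spread."""
    alloc = [0.0] * len(utils)
    remaining = demand
    active = list(range(len(utils)))
    while remaining > 1e-9 and active:
        s = logit_shares([utils[i] for i in active])
        still_active, served = [], 0.0
        for s_i, i in zip(s, active):
            want = alloc[i] + s_i * remaining
            if want >= caps[i]:          # stockout: clip at capacity
                served += caps[i] - alloc[i]
                alloc[i] = caps[i]
            else:
                served += s_i * remaining
                alloc[i] = want
                still_active.append(i)
        remaining -= served
        active = still_active
    return alloc, remaining  # remaining > 0 means truly unserved demand

# equal attractiveness, but the first option can only serve 10 units
alloc, unserved = allocate_with_capacity(100.0, [0.0, 0.0, 0.0],
                                         [10.0, 100.0, 100.0])
```

The capped option's lost sales shift to the others, which is the independence-breaking behavior the plain logit can't produce.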
Interestingly, the allocations that come out of all this are quite similar
to what you get with aggregate production functions (try searching the web
for "cobb douglas" "constant elasticity of substitution" or "translog"). On
the surface the stories are a little different but a lot of the fundamental
assumptions can be reduced to the same thing.
Hope this helps.
Tom
****************************************************
Thomas Fiddaman, Ph.D.
Ventana Systems
http://www.vensim.com
8105 SE Nelson Road Tel (253) 851-0124
Olalla, WA 98359 Fax (253) 851-0125
Tom@Vensim.com http://home.earthlink.net/~tomfid
****************************************************