"Shaping the Future" by Popper et al (was Problem Solving versus Optimization)
Posted: Tue Apr 26, 2005 1:47 pm
Posted by Tom Fiddaman <tom@vensim.com>
WAS Problem Solving versus Optimization (SD5226)
Jack Homer wrote:
>>Many ridiculous things have been said about LTG/SD, and this adds to that
>>list. But because the article is in a magazine with large circulation and
>>seems so dishonest and self-serving, I'm thinking that it deserves a
>>response, perhaps a letter to the editor of Scientific American, and perhaps
>>coming from Dennis Meadows and Jorgen Randers. (I haven't checked with
>>Dennis and Jorgen on this and have no idea whether they would have the time
>>or patience for it.) Does anyone second that idea, or have a different one?
I'll second the idea, with some preliminary thoughts. If Dennis and Jorgen
have grown too weary, or too wise, to take the bait, I'll co-write with
anyone who cares to sign.
The SciAm article is a perfect illustration of something I said here a few
months ago (SD5057):
>>While there are valid things to criticize about LTG, I get the sense that
>>most critics haven't actually read the book; they're just repeating
>>something they heard about it years ago, probably from someone who also
>>didn't read it.
The authors claim that the runs presented in Limits to Growth were
forecasts. They were explicitly not. There are actually several kinds of
output in the book, from submodels on resources (see my earlier post,
clipping below) and other topics, and from runs of the full World3 model.
Reading from the introduction to the first World3 run presented in the book
(page 121 of my edition):
""... In many cases the information available is not complete. Nevertheless
we believe that the model based on this information is useful even at this
preliminary stage for several reasons.""
""First, we hope that by posing each relationship as a hypothesis, and
emphasizing its importance in the total world system, we may generate
discussion and research that will eventually improve the data we have to
work with. ... ""
""Second, even in the absence of improved data, information now available is
sufficient to generate valid basic behavior modes for the world system.
This is true because the model's feedback loop structure is a much more
important determinant of overall behavior than the exact numbers used to
quantify the feedback loops. ... Since we intend to use the world model
only to answer questions about behavior modes, NOT TO MAKE EXACT
PREDICTIONS [emphasis added], we are primarily concerned with the
correctness of the feedback loop structure and only secondarily with the
accuracy of the data.""
""Third, if decision-makers at any level had access to precise predictions
and scientifically correct analyses of alternate policies, we would
certainly not bother to construct or publish a simulation model based on
partial knowledge. ...""
How you get from the paragraphs above to "In presenting the analysis
as a forecast, the authors stretched the model beyond its limits..." is
beyond me. I skimmed the text for the word "forecast" without success. The
closest I could come is (pg. 126): "We can thus say with some confidence
that, under the assumption of no major change in the present system,
population and industrial growth will certainly stop within the next
century, at the latest." So the jury's out for six more decades.
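That distinction between behavior modes and point predictions is easy to
demonstrate with a toy model. Here's a minimal sketch (my own construction,
not World3): a population growing on a slowly regenerating resource.
Sampling the parameters over wide ranges moves the timing and size of the
peak, but the overshoot-and-decline mode persists, because the feedback
structure forces it:

```python
import random

def run(growth, uptake, regen, steps=300):
    """Toy overshoot model (NOT World3): population grows while the
    resource is ample, drains the resource stock with a delay, and
    then declines. The mode survives any reasonable parameter choice."""
    population, resource = 1.0, 100.0
    peak_pop, peak_t = 0.0, 0
    for t in range(steps):
        adequacy = resource / 100.0           # 1.0 = ample, 0.0 = exhausted
        births = growth * population * adequacy
        deaths = 0.05 * population * (1.0 - adequacy)
        population += births - deaths
        resource = max(resource + regen - uptake * population, 0.0)
        if population > peak_pop:
            peak_pop, peak_t = population, t
    return peak_t, peak_pop, population

random.seed(1)
for _ in range(5):
    g = random.uniform(0.05, 0.15)            # each parameter varied 3x
    u = random.uniform(0.05, 0.15)
    r = random.uniform(0.5, 1.5)
    peak_t, peak_pop, final = run(g, u, r)
    print(f"g={g:.2f} u={u:.2f} r={r:.2f} -> "
          f"peak {peak_pop:5.1f} at t={peak_t:3d}, final {final:5.1f}")
```

Every draw overshoots the sustainable level (regen/uptake) and falls back;
only the numbers change, not the story.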
Overall, the SciAm article strikes me as a brilliant piece of marketing for
RAND without much beef between the buns. Perhaps I'm overly focused on the
technical aspects of the problem, but the methods they describe don't seem
markedly different from anything else in decision making under
uncertainty, which has been around for decades. The idea of robust policy
design is also nothing new. The authors hint at interactivity; if they've
developed an automated way to conduct guided strategy explorations of an
uncertain model, that would be quite cool. If they have developed ways to
construct decision rules under uncertainty (as with the safety valve
policies around Kyoto targets), that would be a useful contribution as well.
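For those who haven't run across it, the safety-valve idea is a decision
rule that hedges cost uncertainty: an emissions cap backed by a price
ceiling, so that when abatement turns out expensive, compliance cost is
bounded at the expense of some unabated emissions. A minimal sketch with my
own toy numbers and a linear marginal-abatement-cost curve (nothing here
comes from the article):

```python
import random

def hard_cap(q_req, slope):
    """Meet the full abatement requirement q_req whatever the cost."""
    cost = 0.5 * slope * q_req ** 2        # area under a linear MAC curve
    return cost, 0.0                       # (compliance cost, unabated tons)

def safety_valve(q_req, slope, ceiling):
    """Abate only until marginal cost reaches the price ceiling, then
    buy out the remaining tons at the ceiling price. Cost can never
    exceed ceiling * q_req, no matter how steep the MAC curve."""
    q = min(q_req, ceiling / slope)
    cost = 0.5 * slope * q ** 2 + ceiling * (q_req - q)
    return cost, q_req - q

random.seed(2)
q_req, ceiling = 10.0, 8.0                 # required abatement, price cap
for _ in range(5):
    slope = random.uniform(0.3, 2.0)       # uncertain abatement-cost slope
    hc, _ = hard_cap(q_req, slope)
    sv, unabated = safety_valve(q_req, slope, ceiling)
    print(f"slope={slope:.2f}: hard cap {hc:6.1f} | "
          f"valve {sv:6.1f} (unabated {unabated:.1f})")
```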
The authors claim "Our method thus reduces a complex problem to a small
number of simple choices. Decision makers make the final call. Instead of
fruitlessly debating models and other assumptions, they can focus on the
fundamental trade-offs, fully aware of the surprises that the future may
bring." A few observations about this statement:
- Good tools for robust strategy development do not preclude the need for
robust models. Wonderland is a particularly unfortunate choice in this
case, as it has some formulation problems that are hard to detect by
inspection due to the lack of units of measure or even clear stock-flow
structure. These should be evident under extreme-conditions testing (a
sketch of which follows this list), but somehow the scenario testing didn't
reveal them here.
- The procedure described still operates in model-as-oracle mode. With
single deterministic simulations, users get to argue about whose parameter
or equation is correct. Recognizing genuine scientific uncertainty (e.g.,
the probability distribution of climate sensitivity to doubled CO2) helps
in some cases, but really just shifts the debate to "whose distribution is
correct." I suspect that this procedure is still susceptible to the growing
practice of sowing scientific disinformation to muddy the debate.
- If decision makers don't understand the problem, they're not likely to
act. It's clear that this is frequently the case. For example, Senator
Inhofe recently stated in a floor speech:
>>People are trying to say that the release of CO2 is the cause of climate
>>change. These people have to understand that historically it doesn't work
>>out that way. We went into a time right after World War II when we had an
>>85-percent increase in CO2 emissions. What happened there was that
>>precipitated not a warming period but a cooling period. Again, that is too
>>logical for some of the alarmists to understand. They want so badly to
>>feel a crisis is upon us.
Evidently he doesn't know - or doesn't want to know - that temperature
responds to the stock of CO2 in the atmosphere, not the flow of emissions.
As long as such broad misperceptions prevail, it's hard to see how one can
make much headway through the addition of another layer of complexity.
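The stock-flow distinction takes only a few lines of simulation to see. A
minimal sketch with made-up constants (illustrative only, not a calibrated
climate model): an 85% step in the emissions flow steepens the warming
path, but it cannot by itself produce cooling, because temperature chases
the accumulating stock.

```python
# Illustrative constants throughout -- not a calibrated climate model.
stock, temp = 100.0, 0.0     # excess atmospheric CO2 stock; temp anomaly
for year in range(100):
    emissions = 1.0 if year < 50 else 1.85  # 85% jump in the FLOW at year 50
    stock += emissions - 0.005 * stock      # slow natural uptake of the STOCK
    temp += 0.02 * (0.015 * stock - temp)   # lagged pursuit of a stock-driven target
    if (year + 1) % 25 == 0:
        print(f"year {year+1:3d}: flow {emissions:.2f}  "
              f"stock {stock:6.1f}  temp {temp:.3f}")
```

Temperature rises before and after the step; only its slope changes.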
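As for the extreme-conditions testing mentioned in the first observation
above, such tests are mechanical to automate. A minimal sketch; the
stand-in model and its planted flaw are my own invention, not Wonderland's
actual equations:

```python
def toy_model(yield_per_area, pollution, land=100.0):
    """Stand-in model with a planted formulation flaw: pollution damage
    is subtracted algebraically, so extreme inputs can drive food
    production negative -- a physical impossibility."""
    food = yield_per_area * land - 5.0 * pollution
    return {"food": food}

def check_extremes(model, base, names):
    """Push each parameter to zero and to 10x its base value, one at a
    time, and flag physically impossible (negative) outputs."""
    for i, name in enumerate(names):
        for scale in (0.0, 10.0):
            params = list(base)
            params[i] = base[i] * scale
            for var, value in model(*params).items():
                if value < 0:
                    print(f"FAIL: {name} x{scale:g} -> {var} = {value:.1f}")

check_extremes(toy_model, base=[1.0, 3.0], names=["yield", "pollution"])
# FAIL: yield x0 -> food = -15.0
# FAIL: pollution x10 -> food = -50.0
```

A correct formulation would express the damage as a multiplier on a
nonnegative flow, and the test would pass.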
I believe that models can help solve problems by codifying knowledge that
can be agreed upon so that debate can focus on questions of genuine
uncertainty and differences of value. It seems the authors believe that
too, but I'm not able to fully appreciate their contribution from the SciAm
article. I hope someone will report back to this list on the full book at
http://www.rand.org/publications/MR/MR1626/ .
Tom
Posted by Tom Fiddaman <tom@vensim.com>
posting date Mon, 25 Apr 2005 13:17:58 -0600