
confidence bounds with MCMC

Posted: Tue Jul 10, 2018 4:57 pm
by fabioR
Hi,

How do I evaluate the confidence bounds of the calibrated parameters when using MCMC as a standalone method? Should I set SENSITIVITY=PAYOFF MCMC equal to a certain value?

For example, I set the distribution of my payoff as Gaussian. If I use MCMC as a standalone method, can I set SENSITIVITY=PAYOFF MCMC=1.92 to yield 95% confidence bounds without making any assumptions that require a well-behaved (e.g., ellipsoidal) likelihood surface?

Thanks

Re: confidence bounds with MCMC

Posted: Tue Jul 10, 2018 6:40 pm
by fabioR
The same question in case I use MCMC after the Powell optimisation. If I set:

:OPTIMIZER=Off
:SENSITIVITY=Payoff_MCMC=1.92
List of parameters

Is it still true that I can assess the confidence bounds without making any assumptions that require a well-behaved (e.g., ellipsoidal) likelihood surface? And that this way I avoid the question of determining the proper X^2 cutoff entirely?

Also, I noticed that if I do not specify anything after ":SENSITIVITY=Payoff_MCMC=", the software automatically inserts a value of 10.

Re: confidence bounds with MCMC

Posted: Tue Jul 10, 2018 9:28 pm
by tomfid
MCMC always avoids any restrictions on the joint likelihood of the parameters, i.e. no ellipsoids. However, you can run into numerical problems if the likely volume is a low-dimensional manifold in a high-dimensional parameter space. This is not as rare as we'd hope. (Low acceptance rate could be a symptom.)
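The acceptance-rate symptom can be illustrated with a generic Metropolis sampler (a toy sketch, not Vensim's MCMC implementation; the Gaussian log-likelihood here is a stand-in for a model payoff):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Toy Gaussian log-likelihood standing in for the model payoff;
    # in practice this would come from a model run.
    return -0.5 * np.sum(theta**2)

def metropolis(theta0, n_steps=5000, step_size=0.5):
    theta = np.asarray(theta0, dtype=float)
    ll = log_likelihood(theta)
    chain, accepted = [theta.copy()], 0
    for _ in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.shape)
        ll_prop = log_likelihood(proposal)
        # Metropolis accept/reject rule on the log scale
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
            accepted += 1
        chain.append(theta.copy())
    return np.array(chain), accepted / n_steps

chain, rate = metropolis(np.zeros(2))
print(f"acceptance rate: {rate:.2f}")
```

If the high-likelihood region is a thin manifold, almost every proposal lands outside it and the acceptance rate collapses, which is why a very low rate is a warning sign.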

I generally don't use the sensitivity option; I run a standalone MCMC (no Powell, though you might want to use the result of a Powell search as the starting point). In that case, the sample contains all the accepted points. To find the 95% interval, load the sample into a spreadsheet and look at the 2.5% and 97.5% quantiles - those are your bounds, no ChiSq required.
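The quantile step is a one-liner once the sample is loaded. A sketch (the synthetic Gaussian draws below are hypothetical, standing in for the accepted points exported from the MCMC run, for a single parameter):

```python
import numpy as np

# Hypothetical stand-in for one parameter's column of accepted MCMC points;
# in practice you would load the exported sample file instead.
rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=0.3, size=10_000)

# 95% interval: the 2.5% and 97.5% quantiles of the sample
lower, upper = np.percentile(sample, [2.5, 97.5])
print(f"95% interval: ({lower:.3f}, {upper:.3f})")
```

No chi-squared cutoff is involved; the bounds come directly from the empirical distribution of the accepted points.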

However, all of this assumes that your payoff elements are properly weighted (the SD used to determine the weight is really the standard error of the measurements) and that there is no autocorrelation (the Kalman Filter takes care of that, if you can use it). And of course, you implicitly assume that the model is right. So, confidence bounds are always a lower limit on the uncertainty, i.e. the real uncertainty is probably larger.

Re: confidence bounds with MCMC

Posted: Wed Jul 11, 2018 7:29 am
by fabioR
tomfid wrote: Tue Jul 10, 2018 9:28 pm
However, all of this assumes that your payoff elements are properly weighted (the SD used to determine the weight is really the standard error of the measurements) and that there is no autocorrelation (the Kalman Filter takes care of that, if you can use it). And of course, you implicitly assume that the model is right. So, confidence bounds are always a lower limit on the uncertainty, i.e. the real uncertainty is probably larger.
Thank you Tom! Yes, in my case I used a Gaussian, and I estimated the error scale as an optimization parameter (so I guess that this way the payoff elements are "properly weighted"). I never considered the problem of autocorrelation. I will try to use the Kalman Filter.

Thanks,
Fabio

Re: confidence bounds with MCMC

Posted: Wed Jul 11, 2018 7:52 am
by fabioR
1. And do you mean autocorrelation of the residuals?

2. The payoff report provides the Durbin-Watson statistic, but how do I interpret it?

3. In case there is autocorrelation, is the Kalman Filter the only option for using MCMC?

Re: confidence bounds with MCMC

Posted: Wed Jul 11, 2018 1:00 pm
by tomfid
1 - right
2 - https://en.wikipedia.org/wiki/Durbin%E2 ... _statistic
3 - there are other, less complete ways, but nothing built in (this is something we're thinking about)
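For reference, the Durbin-Watson statistic is just the ratio of squared successive residual differences to squared residuals: values near 2 indicate no lag-1 autocorrelation, values well below 2 indicate positive autocorrelation. A quick illustration on synthetic residuals (not Vensim output):

```python
import numpy as np

def durbin_watson(residuals):
    # DW = sum of squared successive differences / sum of squared residuals
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)   # independent residuals

ar1 = np.empty(1000)                # positively autocorrelated (AR(1)) residuals
ar1[0] = white[0]
for t in range(1, 1000):
    ar1[t] = 0.8 * ar1[t - 1] + white[t]

dw_white = durbin_watson(white)
dw_ar1 = durbin_watson(ar1)
print(dw_white)  # near 2: little autocorrelation
print(dw_ar1)    # well below 2: positive autocorrelation
```

For an AR(1) process with lag-1 autocorrelation rho, DW is approximately 2*(1 - rho), so rho = 0.8 gives a DW around 0.4.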

Re: confidence bounds with MCMC

Posted: Wed Jul 11, 2018 2:55 pm
by fabioR
tomfid wrote: Wed Jul 11, 2018 1:00 pm 1 - right
2 - https://en.wikipedia.org/wiki/Durbin%E2 ... _statistic
3 - there are other, less complete ways, but nothing built in (this is something we're thinking about)
Thanks Tom. I am wondering WHEN a system dynamics model is not affected by autocorrelation. Indeed, according to Barlas (Formal aspects of model validity and validation in system dynamics: https://pdfs.semanticscholar.org/621a/5 ... 93b7e2.pdf), model outputs are almost always autocorrelated.

Moreover, I do not understand why you say that MCMC cannot be used in the case of autocorrelation. I cannot find the scientific and mathematical explanation of this anywhere.

Thanks

Re: confidence bounds with MCMC

Posted: Wed Jul 11, 2018 3:34 pm
by tomfid
When the residuals are autocorrelated, the likelihood is wrong, because it assumes independence. This biases your confidence bounds; they'll be too narrow because it seems that you have more data than you really do. Global temperature is a good example - there are about 150 years of data, but there are also long term correlated disturbances, so really the data is "worth" more like 15 points.
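The "worth more like 15 points" intuition can be made concrete with the standard effective-sample-size approximation for an AR(1) error process (the 0.82 autocorrelation below is purely illustrative, chosen to reproduce the ~150 → ~15 shrinkage):

```python
def effective_n(n, rho):
    # Effective sample size for residuals following an AR(1) process with
    # lag-1 autocorrelation rho: n_eff = n * (1 - rho) / (1 + rho).
    # A standard textbook approximation, not something Vensim reports.
    return n * (1 - rho) / (1 + rho)

# ~150 annual observations with strong positive autocorrelation
print(round(effective_n(150, 0.82), 1))  # prints 14.8
```

With strongly correlated disturbances, an independence-based likelihood treats every point as fresh information and therefore understates the width of the confidence bounds by roughly this factor.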

There are various ways to correct for this, including bootstrapping, but they're all unsatisfactory in that they preserve the assumption that the model is undisturbed. The only way to really get things right is to use the Kalman filter (or a similar unscented or particle/ensemble filter).

Re: confidence bounds with MCMC

Posted: Mon Jul 16, 2018 3:34 pm
by bahri
Hi Tom,

When I was watching the optimization process (choosing MCMC instead of Powell), I saw messages of the form: simulation "xxx": errors: "xxx": best payoff so far: "xxx".
Why are there error statements?
And if I stop the optimization process, where do I find the "best parameters/constants"?