
MCMC Sampling and Boundary Penalty

Posted: Tue Jul 16, 2024 8:30 pm
by aliakhavan89
Is there a way to avoid sampling from parameter boundaries, apart from manually defining a penalty term in the payoff function? I am fine to get a uniform distribution if MCMC cannot recover a proper posterior distribution. By the way, I have not defined a probability distribution for the prior.
[Attachment: Screenshot 2024-07-16 at 3.53.14 PM.png]

Re: MCMC Sampling and Boundary Penalty

Posted: Tue Jul 16, 2024 8:43 pm
by tomfid
I guess the likelihood was also uniform over this experiment?

Reflecting off the boundaries is tricky in multiple dimensions, but we could probably improve on this behavior. In the short run, though, I don't have a good solution.

A couple of possibilities would be to apply the logistic transform to the parameter, or to give it a Beta prior. For the latter, you can ignore the gamma terms and use x^(a-1)*(1-x)^(b-1) with a = b >> 1.

It might be possible to suggest other options if we knew what the param represents.
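
As a rough sketch of the Beta-prior option above (the function name and the a = b = 20 choice are illustrative, not from the model), the penalty term you'd add to the payoff is just the log of the unnormalized density:

```python
import numpy as np

def beta_log_prior(x, a=20.0, b=20.0):
    """Log of the unnormalized Beta(a, b) density for x in (0, 1).

    With a = b >> 1 this strongly penalizes values near the 0/1
    boundaries, so it can stand in for a hard barrier term in the
    MCMC payoff. The gamma normalizing terms are omitted, since a
    constant offset doesn't affect sampling.
    """
    x = np.clip(x, 1e-12, 1 - 1e-12)  # guard against log(0) at the bounds
    return (a - 1) * np.log(x) + (b - 1) * np.log(1 - x)
```

For a parameter with bounds other than [0, 1], rescale it to the unit interval first before applying the penalty.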

Re: MCMC Sampling and Boundary Penalty

Posted: Tue Jul 16, 2024 10:41 pm
by aliakhavan89
Thank you! It is a Normal log-likelihood. I have attached the toy model here. These are great suggestions. I think defining a prior and including its mean and standard deviation in the payoff as the initial condition would resolve the issue. But I guess I should keep the barrier penalty.

Re: MCMC Sampling and Boundary Penalty

Posted: Wed Jul 17, 2024 3:47 pm
by tomfid
What param are we looking at in the histogram?

Re: MCMC Sampling and Boundary Penalty

Posted: Wed Jul 17, 2024 3:49 pm
by tomfid
Oops - missing data.vdfx.

Re: MCMC Sampling and Boundary Penalty

Posted: Wed Jul 17, 2024 4:07 pm
by tomfid
Anyway ... I can replicate this with a simpler example, so there's no need to re-upload. I'll explore a better boundary-reflection approach.

Re: MCMC Sampling and Boundary Penalty

Posted: Wed Jul 17, 2024 4:34 pm
by tomfid
I misspoke earlier, though. For the Beta coefficients, a = b >> 1 gives a roughly Normal distribution centered on 0.5. If you want something flattish that still avoids the extremes, you'd want 1 < a = b < 2.
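
To illustrate the difference between the two parameterizations (a quick numerical check, not tied to the model):

```python
import numpy as np

def beta_density(x, a, b):
    # Unnormalized Beta density -- the gamma normalizing terms are dropped
    return x ** (a - 1) * (1 - x) ** (b - 1)

x = np.array([0.1, 0.5, 0.9])
peaked = beta_density(x, 20, 20)      # a = b >> 1: sharply peaked at 0.5
flattish = beta_density(x, 1.5, 1.5)  # 1 < a = b < 2: nearly flat, dipping near 0 and 1
```

With a = b = 20, the density at 0.5 is many orders of magnitude above the density at 0.1; with a = b = 1.5, the two differ by well under a factor of 2.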

Re: MCMC Sampling and Boundary Penalty

Posted: Thu Jul 18, 2024 6:49 pm
by aliakhavan89
Hi Tom, sorry for not uploading the data. I have attached the whole folder here. I did a quick test with the boundary penalty, and it gave me a very good result, though I added an additional parameter (Exp Sigma, not shown in the graph). I'll try to implement your suggestions. Thanks a lot!
[Attachment: Screenshot 2024-07-18 at 2.45.40 PM.png]

Re: MCMC Sampling and Boundary Penalty

Posted: Fri Jul 26, 2024 12:22 am
by aliakhavan89
I borrowed the boundary penalty formulation from a template that normalizes the estimated parameters, but I didn't properly adjust the equation for the parameter boundaries in my model. So I got it to work by accident. I'll share a revised version, though now I have some doubts about the template.

Re: MCMC Sampling and Boundary Penalty

Posted: Sat Oct 19, 2024 4:11 pm
by tomfid
I modified the hard bounds constraint behavior so the concentration of mass at the bound no longer happens, at least in a case like this with a uniform likelihood and (implicitly) flat prior over a unit hypercube.

It probably still makes sense to apply some kind of prior within the range. We plan to make this easier, so that you can specify priors within the .voc list, though more complex situations (hierarchy) may still require equations.

Re: MCMC Sampling and Boundary Penalty

Posted: Sat Oct 19, 2024 4:15 pm
by tomfid
This experiment does suggest another interesting case: if all the likelihoods and/or priors have a Beta distribution with small a, b, then the density should be concentrated at the extremes of the range (or worse, the corners of the hypercube in the multidimensional case). In theory the algorithm can handle this, but I'm not sure whether that holds in practice. It might make sense to apply some kind of rank transformation to normalize things.
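
A quick simulation of the extreme case described above (the a = b = 0.1 value is an arbitrary illustration of "small a, b"):

```python
import numpy as np

rng = np.random.default_rng(0)
# A Beta with small a = b piles its density against the 0/1 boundaries
draws = rng.beta(0.1, 0.1, size=100_000)
# Fraction of samples within 5% of either boundary -- most of the mass
frac_near_edges = np.mean((draws < 0.05) | (draws > 0.95))
```

A sampler that struggles near hard bounds would have to spend nearly all of its time in those narrow edge regions, which is why this case is a good stress test.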