Interpreting priors on constrained parameters in GPflow


Question


I wasn't sure whether to open a GitHub issue for this, but I think it is not so much an issue as a gap in my understanding, so I am posting it here.

I would like to put priors on the hyperparameters of the kernel in a GPflow model (an RBF kernel in this case). This is easy to do -- for example, I can write:

kern.variance.prior = gpf.priors.Gaussian(0, 1)

to place a prior on the kernel variance parameter.
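For context, here is a minimal sketch of the surrounding setup, assuming the GPflow 1.x API (with `gpf` standing for `import gpflow as gpf`) and toy placeholder data; the exact model class is incidental:

    import numpy as np
    import gpflow as gpf

    # Toy data, for illustration only
    X = np.random.rand(20, 1)
    Y = np.sin(3 * X) + 0.1 * np.random.randn(20, 1)

    # RBF kernel; its variance and lengthscales are constrained to be positive
    kern = gpf.kernels.RBF(input_dim=1)

    # Place a N(0, 1) prior on the (positive) variance hyperparameter
    kern.variance.prior = gpf.priors.Gaussian(0, 1)

    model = gpf.models.GPR(X, Y, kern=kern)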

What I am unsure about is what this statement does with constrained parameters, such as the variance above. It is constrained to be positive, and the manual states that there is also an unconstrained representation, log(exp(theta) - 1).
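As a quick numerical illustration of that transform (plain NumPy, independent of GPflow): the unconstrained value is mapped to the positive, constrained one by the softplus function, and log(exp(theta) - 1) is its inverse:

    import numpy as np

    def forward(unconstrained):
        # softplus: maps any real number to a positive value
        return np.log(1.0 + np.exp(unconstrained))

    def backward(theta):
        # inverse softplus: recovers the unconstrained representation
        return np.log(np.exp(theta) - 1.0)

    theta = 0.5            # a positive, constrained variance
    u = backward(theta)    # about -0.433; can be any real number
    print(forward(u))      # about 0.5 again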

What I would like to understand is which representation the prior is placed on. Will this normal distribution be placed on the unconstrained representation, or directly on the constrained (positive) parameter? The latter would be a little strange, since the Gaussian has support on negative values (perhaps I should use only distributions with positive support?).

Thanks!


Answer 1:


Yes, the distribution is placed on the constrained (+ve) parameter.

Note that the change of variables is accounted for using the Jacobian of the transform.
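To make that concrete, here is a rough sketch (not GPflow's actual internals) of what the change-of-variables correction looks like when a prior density is specified on the constrained value theta = softplus(u): evaluated in the unconstrained u-space, it picks up the log-Jacobian log(d theta / d u) = log(sigmoid(u)):

    import numpy as np
    from scipy.stats import norm

    def log_prior_in_unconstrained_space(u, prior_logpdf):
        # The prior is specified on theta = softplus(u); evaluating it in
        # u-space adds the log-Jacobian of the transform, log(sigmoid(u)).
        theta = np.log(1.0 + np.exp(u))
        log_jacobian = -np.log(1.0 + np.exp(-u))  # = log(sigmoid(u))
        return prior_logpdf(theta) + log_jacobian

    # Example with the N(0, 1) prior from the question
    print(log_prior_in_unconstrained_space(-0.433, norm(0, 1).logpdf))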

True, in this case putting a Gaussian prior on a +ve variable makes little sense. The outcome might be that you have a truncated Gaussian prior, but I’d have to check... that’s not how it’s intended to be used!
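If you do want a prior that respects the positivity constraint, a distribution with positive support is the natural choice. The snippet below is a hedged sketch: GPflow 1.x's gpflow.priors module provides positive-support priors such as Gamma and LogNormal, but their exact parameterisations here are assumptions, so check your version's documentation:

    # Sketch only: the parameterisations are assumptions, adjust to your GPflow version
    kern.variance.prior = gpf.priors.Gamma(2.0, 0.5)       # shape, scale
    # or a log-normal prior on the positive parameter
    kern.variance.prior = gpf.priors.LogNormal(0.0, 1.0)   # mean, variance of the log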

Perhaps GPflow should warn users if priors are not compatible with constraints? PRs welcome.



Source: https://stackoverflow.com/questions/57067948/interpreting-priors-on-constrained-parameters-in-gpflow
