RE: PRIORS

From: Robert Johnson | Date: January 30, 2012 | Source: mail-archive.com
Steve,

Enclosed is a WinBUGS file that simulates from a multivariate normal distribution. The Wishart distribution is very sensitive to the degrees of freedom of the prior. Note that WinBUGS parameterizes in terms of precision, i.e., the inverse of the variance.

Robert D. Johnson, Ph.D.
Chemical Systems Modeling Section
Corporate R&D, Procter & Gamble
8256 Union Centre Blvd, IP-351
West Chester, OH 45069
[email protected]
(513) 634-9827
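Robert's caveat trips up many users: WinBUGS's dmnorm and dwish take precision (inverse-covariance) matrices, not covariance matrices. As a minimal numpy illustration (not part of the thread; it uses the mu.prior.precision matrix from the attached WinBUGS file), converting a precision matrix to the implied covariance and prior standard deviations:

```python
import numpy as np

# mu.prior.precision from the attached WinBUGS data list:
# a 4x4 precision (inverse-covariance) matrix.
precision = np.array([
    [328.729354, -46.8439167, -10.3345782, -1.851925],
    [-46.843917,  30.9111313,  -0.4673475, -1.586074],
    [-10.334578,  -0.4673475,   3.6007946,  1.403187],
    [ -1.851925,  -1.5860742,   1.4031872,  3.257598],
])

# WinBUGS parameterizes dmnorm in terms of precision; the implied
# covariance matrix is simply its inverse.
covariance = np.linalg.inv(precision)

# Marginal standard deviations of the prior on mu.
prior_sd = np.sqrt(np.diag(covariance))

# Sanity check: precision times covariance recovers the identity.
print(np.allclose(precision @ covariance, np.eye(4)))  # True
```

Forgetting this inversion (i.e., passing a covariance matrix where a precision matrix is expected) silently gives a prior that is far too tight or far too diffuse.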
Quoted reply history
From: [email protected] [mailto:[email protected]] On Behalf Of Stephen Duffull Sent: Monday, January 30, 2012 1:12 PM To: Gastonguay, Marc; Charles Steven Ernest II Cc: [email protected] Subject: RE: [NMusers] PRIORS Hi Charles The choice of the df of the Wishart should be based on what your prior information provides not on what the posterior distribution tells you. I suspect, if your data is relatively uninformative then a sensitivity analysis will show that the posterior is sensitive to the prior. It would then be a matter of determining objectively how your subjective prior will influence your current analysis and this depends on what you want to learn from your analysis... Regards Steve -- From: [email protected] [mailto:[email protected]] On Behalf Of Gastonguay, Marc Sent: Monday, 30 January 2012 12:03 p.m. To: Charles Steven Ernest II Cc: [email protected] Subject: Re: [NMusers] PRIORS Dear Charles, When using $PRIOR, the df specifies the degrees of freedom for the Inverse-Wishart prior distribution on the covariance matrix of the individual random effects (OMEGA). In practical terms, the larger the df, the more informative the prior will be. It's not surprising, then, that your parameter estimates approach the results of the previous model when df is large. The choice to use an informative prior distribution should not be made based on objective function changes. Instead, you might want to consider the rationale for including the prior information in the first place. One strategy would be to use informative priors only where necessary to support components of a previously defined or otherwise known model. If the new data alone do not support estimation of parameters required in this model structure, then you might want to include informative prior distributions on those specific components. 
Sensitivity to the "informativeness" of the prior could be explored by varying the df (or the prior variance for fixed effects), and conclusions from your analysis should probably be viewed in the context of this prior sensitivity. As you indicate, for this particular example, you may not need to use $PRIOR at all, and could simply pool the data in a single analysis.

Hope this is useful.
Marc

On Sun, Jan 29, 2012 at 5:02 PM, Charles Steven Ernest II <[email protected]> wrote:

I previously conducted a meta-analysis of PK data that contained extensive and sparse sampling from 330 patients, with a run time of ~4 days. I now have data from another 200 patients with sparse sampling. I created the median and 95% PI from the previous model and overlaid the current data; the results demonstrate that the new data are well described by that model. However, when the new data are fit with that model, the data do not support it, as some parameters were unidentifiable. I could have conducted an analysis of all the data simultaneously, but I was interested in another method. Therefore, I implemented $PRIOR in the model and noticed that with each successive increase of the df, the objective function decreases significantly. However, the THETA values do not change much and are different from the prior estimates used. The only other things that changed, besides the objective function, were the estimates and SEs of the covariance terms and the BSV estimate of the peripheral volume of distribution. These values become more in line with those observed previously, and the correlations between them become stronger. My question: is it justifiable to use such a high df (df = 330) based on these significant decreases in the objective function and on the covariance results, given that the information from this meta-analysis would be highly informative?

Thanks

--
Marc R. Gastonguay, Ph.D.
Scientific Director
Metrum Institute
http://metruminstitute.org
Metrum Institute is a 501(c)3 non-profit organization.

Attachment (the WinBUGS file referenced in Robert Johnson's message: model, data, and initial values):

    model {
        theta[1:p] ~ dmnorm(theta.mean[1:p], omega.inv[1:p, 1:p])
        theta.mean[1] <- mu[1]
        theta.mean[2] <- mu[2]
        theta.mean[3] <- mu[3]
        theta.mean[4] <- mu[4]
        tau ~ dgamma(tau.a, tau.b)
        sigma <- 1 / sqrt(tau)
        mu[1:p] ~ dmnorm(mu.prior.mean[1:p], mu.prior.precision[1:p, 1:p])
        omega.inv[1:p, 1:p] ~ dwish(omega.inv.matrix[1:p, 1:p], omega.inv.dof)
        omega[1:p, 1:p] <- inverse(omega.inv[1:p, 1:p])
    }

    # Data
    list(
        p = 4,
        tau.a = 345.8664,
        tau.b = 308122.5,
        mu.prior.mean = c(9.772055, 9.358233, -2.137944, -1.853783),
        mu.prior.precision = structure(
            .Data = c(
                328.729354, -46.8439167, -10.3345782, -1.851925,
                -46.843917,  30.9111313,  -0.4673475, -1.586074,
                -10.334578,  -0.4673475,   3.6007946,  1.403187,
                 -1.851925,  -1.5860742,   1.4031872,  3.257598),
            .Dim = c(4, 4)),
        omega.inv.matrix = structure(
            .Data = c(
                 5.779969,  8.999986,  13.30347,    4.736827,
                 8.999986, 54.138340,  23.23332,   31.882416,
                13.303473, 23.233316, 146.86460,  -42.607441,
                 4.736827, 31.882416, -42.60744,  432.706083),
            .Dim = c(4, 4)),
        omega.inv.dof = 100.0
    )

    # Initial values
    list(
        theta = c(9.772055, 9.358233, -2.137944, -1.853783),
        tau = 0.001122,
        mu = c(9.772055, 9.358233, -2.137944, -1.853783),
        omega.inv = structure(
            .Data = c(
                 5.779969,  8.999986,  13.30347,    4.736827,
                 8.999986, 54.138340,  23.23332,   31.882416,
                13.303473, 23.233316, 146.86460,  -42.607441,
                 4.736827, 31.882416, -42.60744,  432.706083),
            .Dim = c(4, 4))
    )
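To sanity-check priors like these outside WinBUGS, the attached model's prior can be simulated directly in Python. One parameterization caveat (my assumption, worth checking against the BUGS manual): BUGS's dwish(R, k) has mean k*inv(R), so it corresponds to scipy's wishart(df=k, scale=inv(R)). A sketch, not part of the thread:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)

# omega.inv.matrix (R) and omega.inv.dof (k) from the WinBUGS data list.
R = np.array([
    [ 5.779969,  8.999986,  13.30347,    4.736827],
    [ 8.999986, 54.138340,  23.23332,   31.882416],
    [13.303473, 23.233316, 146.86460,  -42.607441],
    [ 4.736827, 31.882416, -42.60744,  432.706083],
])
R = (R + R.T) / 2            # symmetrize rounding differences in the posting
k = 100.0
mu = np.array([9.772055, 9.358233, -2.137944, -1.853783])

# BUGS: omega.inv ~ dwish(R, k)  <=>  scipy: Wishart(df=k, scale=inv(R)).
omega_inv = wishart.rvs(df=k, scale=np.linalg.inv(R), random_state=rng)

# omega <- inverse(omega.inv): the implied covariance of the random effects.
omega = np.linalg.inv(omega_inv)

# theta ~ dmnorm(mu, omega.inv): simulate one parameter vector from the prior.
theta = rng.multivariate_normal(mu, omega)
print(theta)
```

Repeating the draw many times and summarizing omega is a quick way to see how tightly a given df (here 100) constrains the covariance matrix before any data enter the analysis.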
Jan 29, 2012 Charles Steven Ernest II PRIORS
Jan 29, 2012 Marc Gastonguay Re: PRIORS
Jan 30, 2012 Stephen Duffull RE: PRIORS
Jan 30, 2012 Robert Johnson RE: PRIORS
Jan 31, 2012 Stephen Duffull RE: PRIORS
Jan 31, 2012 Charles Steven Ernest II RE: PRIORS