RE: Model Comparison and Viewing of Output

From: Mark Sale | Date: August 14, 2007 | Source: mail-archive.com
Nitin, no one else has answered yet, so I'll give it a try, opening myself to the wrath of pretty much everyone.

> 1. Model Comparison: Everything else remaining the same in a model, should a drop in the objective function on addition of an eta, sigma, or covariance term reflect an improvement in the model? Or is comparing AICs, treating etas and sigmas as additional parameters, a correct approach? I understand that these decisions are based on multiple criteria, such as goodness-of-fit plots, biological sense, and physiological plausibility of the parameter estimates, but I would specifically like to know whether comparing the objective function among these models is a logical approach.

I suspect that no one would question that model selection should be based on multiple criteria. Which criteria, and how to weight them, depends on what you hope to accomplish with the model. If your goal is simulation, you may care little about hypothesis tests, the objective function, or estimation correlation, and care more about simulation-based methods such as the posterior predictive check (PPC) or normalized prediction distribution errors (NPDE). Conversely, if your goal is hypothesis testing, then you likely should be very interested in OBJ (and/or bootstrap tests of hypotheses). I'd suggest that there are two issues in evaluating models:

1. How good is the model (in comparison to competing models)?
2. What opportunities are there for improving the model?

I assume, first, that you are only considering models that are biologically plausible (ignoring whether there are degrees of biological plausibility, and whether that should play a role in selection). #1 can often be answered to a significant degree with objective measures (OBJ, AIC, PPC, NPDE). Graphics certainly play an important role as well; we prefer a model that doesn't show bias in its plots. But improving bias should also improve the objective measures. As for #2:
Identifying opportunities for improvement is largely, if not entirely, the role of graphics. The most useful graphics, IMHO, are the visual predictive check and plots of post hoc quantities against things that might explain any non-randomness in those quantities (time vs. WRES or CWRES, post hoc etas vs. covariates, etc.), but there are many others (see the excellent paper by Ene Ette in Pharm Res, Dec 1995). I would suggest that a goal (a goal, not the goal) of #2 should be to improve #1. If you make an improvement in the plots without a corresponding improvement in some objective measure, you have to be concerned about what else in the model you've made worse. So, to answer your question: if I had two models that differed only in AIC (equally biologically plausible, equivalent plots, same NPDE, same PPC), I would not hesitate to choose the model with the lower AIC. What justification could there be for choosing the other? Of course, things are rarely that simple, and invariably other things are not the same. Then one has to decide whether some subtle improvement in a plot, the PPC, or the NPDE matters most for this model-selection exercise.

> 2. Viewing Output via the $TABLE option: When we do simulation using NONMEM, the output file contains both the dosing and observation records. Is there a way in NONMEM to specify a priori, in the control stream, that only observation records appear in the simulation output file?

Sort of. You can use the BY option (BY = MDV). This will sort by MDV (records with MDV = 0 first, then MDV = 1), but that is probably not all that helpful. I'm not sure whether Xpose ( http://xpose.sourceforge.net/ ) automatically deletes MDV = 1 records (I think it does); the Excel macro at Next Level Solutions ( http://www.nextlevelsolns.com/downloads.html ) does delete records where MDV <> 0, if you include MDV in the $TABLE output.
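[Editor's note] On the $TABLE question above: absent a built-in way to suppress dose records, a common workaround is to post-process the table file and keep only rows with MDV = 0. Below is a minimal sketch, assuming a whitespace-delimited NONMEM-style table file with a `TABLE NO.` banner, a header row that names an MDV column, and numeric data rows; the sample fragment is hypothetical.

```python
def keep_observation_rows(lines):
    """Filter NONMEM $TABLE output lines, keeping only records with MDV == 0.

    Assumes this layout per table: a 'TABLE NO. ...' banner line, then a
    header row of column names (one of them 'MDV'), then whitespace-delimited
    numeric data rows. Banner and header lines are kept; dose records
    (MDV != 0) are dropped.
    """
    out = []
    mdv_idx = None
    for line in lines:
        if line.startswith("TABLE"):   # banner: keep it, expect a new header
            mdv_idx = None
            out.append(line)
            continue
        fields = line.split()
        if mdv_idx is None:            # first row after the banner is the header
            mdv_idx = fields.index("MDV")
            out.append(line)
            continue
        if float(fields[mdv_idx]) == 0:  # observation record: keep
            out.append(line)
    return out

# Hypothetical fragment: one dose record (MDV=1) and one observation (MDV=0).
table = [
    "TABLE NO.  1",
    " ID TIME DV MDV",
    " 1 0.0 0.0 1",
    " 1 1.0 4.2 0",
]
filtered = keep_observation_rows(table)
print(filtered)
```

In practice one would read the lines from the table file named in $TABLE (e.g. with `open(...)`) and write the filtered lines back out; the in-memory list here just keeps the sketch self-contained.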
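[Editor's note] Returning to the first question, on comparing AICs: NONMEM's objective function value (OFV) is proportional to -2·log-likelihood up to an additive constant, so a conventional AIC is OFV + 2p, where p counts all estimated parameters, including thetas, omega (eta variance/covariance) elements, and sigma elements. A minimal sketch; the OFV values and parameter counts below are hypothetical, not from any real run.

```python
def aic(ofv, n_params):
    """AIC from a NONMEM objective function value.

    OFV ~ -2*log-likelihood up to a constant, which cancels when comparing
    models fitted to the same data; n_params must count thetas plus all
    estimated omega and sigma elements.
    """
    return ofv + 2 * n_params

# Hypothetical comparison: base model vs. model adding one eta
# (one extra OMEGA diagonal element, i.e. one more parameter).
base = aic(ofv=1523.4, n_params=5)      # say: 3 thetas, 1 omega, 1 sigma
with_eta = aic(ofv=1516.1, n_params=6)  # one additional omega element

# Lower AIC is preferred. For nested models, the OFV drop of 7.3 for one
# extra parameter can also be judged against the chi-square(1) 5% cutoff
# of 3.84, as in a likelihood ratio test.
print(base, with_eta)
```

The design point is simply that AIC penalizes the OFV drop by 2 per added parameter, so adding an eta or sigma must "buy" more than 2 points of OFV to lower the AIC.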
Mark Sale, MD
Next Level Solutions, LLC
www.NextLevelSolns.com

> -------- Original Message --------
> Subject: [NMusers] Model Comparison and Viewing of Output
> From: Nitin Mehrotra <[EMAIL PROTECTED]>
> Date: Tue, August 14, 2007 12:18 pm
> To: [email protected]
>
> Dear NMusers, I had a couple of questions for the group members:
> 1. Model Comparison: Everything else remaining the same in a model, should a drop in the objective function on addition of an eta, sigma, or covariance term reflect an improvement in the model? Or is comparing AICs, treating etas and sigmas as additional parameters, a correct approach? I understand that these decisions are based on multiple criteria, such as goodness-of-fit plots, biological sense, and physiological plausibility of the parameter estimates, but I would specifically like to know whether comparing the objective function among these models is a logical approach.
> 2. Viewing Output via the $TABLE option: When we do simulation using NONMEM, the output file contains both the dosing and observation records. Is there a way in NONMEM to specify a priori, in the control stream, that only observation records appear in the simulation output file?
>
> Thanks and Regards,
> Nitin Mehrotra
>
> Nitin Mehrotra, Ph.D., Post Doctoral Research Fellow, Department of Pharmaceutical Sciences, University of Tennessee Health Science Center, 874 Union Avenue, Suite 4.5p/5p, Memphis, TN 38163, USA. 901-448-3385 (Lab). [EMAIL PROTECTED]
Thread:
Aug 14, 2007  Nitin Mehrotra  Model Comparison and Viewing of Output
Aug 14, 2007  Mark Sale       RE: Model Comparison and Viewing of Output
Aug 15, 2007  Mats Karlsson   RE: Model Comparison and Viewing of Output
Aug 15, 2007  Nitin Mehrotra  Re: Model Comparison and Viewing of Output
Aug 15, 2007  Nick Holford    Re: Model Comparison and Viewing of Output
Aug 16, 2007  Nitin Mehrotra  Re: Model Comparison and Viewing of Output