RE: Predictive Performance
Dear Navin,
As Juergen said, the VPC is a good graphical tool for revealing a clear bias in
model predictions. One limitation of this approach appears when you have a
complex design with different doses, different administration schedules and
different covariates (implemented in your population model). In that case you
need to produce several VPC plots, split by dose, covariate, etc.
Another approach is to compute a metric called normalized prediction
distribution errors (NPDE). NPDE were developed to take into account the
full predictive distribution of each individual concentration and to handle
multiple observations within subjects. Under the null hypothesis that the model
under scrutiny describes the validation dataset, the NPDE should follow a
standard normal distribution.
An R package is now available and can be downloaded from www.npde.biostat.fr.
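As a rough illustration (not the npde package itself), the core of the computation can be sketched in Python. Note that a real NPDE calculation also decorrelates the observations within each subject (Brendel et al. 2006); this hypothetical `npde_simplified` function skips that step, so it really computes prediction discrepancies rather than full NPDE:

```python
import numpy as np
from scipy import stats

def npde_simplified(y_obs, y_sim):
    """Simplified npde for one subject (decorrelation step omitted).

    y_obs : (n,) observed concentrations
    y_sim : (K, n) K model-simulated replicates of the same design
    """
    K = y_sim.shape[0]
    # pde: fraction of simulated values falling below each observation
    pde = (y_sim < y_obs).sum(axis=0) / K
    # keep pde away from 0 and 1 before the inverse-normal transform
    pde = np.clip(pde, 1.0 / (2 * K), 1.0 - 1.0 / (2 * K))
    # npde: inverse standard-normal transform; should look N(0,1)
    # under the null hypothesis that the model describes the data
    return stats.norm.ppf(pde)
```

Observations near the center of their simulated distribution map to values near zero; observations in the tails map to large positive or negative values.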
Best regards.
Karl
Brendel K., Comets E., Laffont C., Laveille C., Mentré F. Metrics for external
model evaluation with an application to the population pharmacokinetics of
gliclazide. Pharm Res 2006, 23:2036-2049.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Jurgen Bulitta
Sent: Tuesday, 3 July 2007 02:07
To: navin goyal; [email protected]
Subject: Re: [NMusers] Predictive Performance
Dear Navin,
If you want to assess the predictive performance of a model,
I would highly recommend using visual predictive checks (VPC,
also called simple predictive checks, or degenerate predictive
checks).
Depending on your study design, VPCs might be easy to
implement or more work intensive. I find VPCs much easier
to interpret than DV vs. PRED or DV vs. IPRED plots. VPCs
are also easily communicated to non-modelers.
If the DV vs. IPRED plot looks biased, a model is often not
flexible enough to describe the data. However, there are
situations when the DV vs. IPRED plot looks almost perfect,
but the DV vs. PRED plot is quite biased and the VPC indicates
a clear bias in model predictions. This might be due to problems
with the parameter variability model.
So in essence, I would look at all three of those plots to assess
the appropriateness of a model. If a model is intended for
simulations, the VPC is a powerful tool to visually assess the
predictive performance and to tell if a potential bias in simulations
might be important for the study objectives or not.
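To make the mechanics concrete: a VPC overlays percentiles of model-simulated data on the observed data, binned along the time axis. A minimal Python sketch (hypothetical `vpc_intervals` helper, assuming the simulated replicates are already available as an array) might look like:

```python
import numpy as np

def vpc_intervals(times, sims, bins, pct=(5, 50, 95)):
    """Per-time-bin percentiles of simulated concentrations.

    times : (n,) sampling times for all observations (flattened)
    sims  : (K, n) concentrations from K simulated replicates
    bins  : sequence of time-bin edges
    Returns {bin_index: {percentile: value}}, to be overlaid
    against the same percentiles of the observed data.
    """
    idx = np.digitize(times, bins)
    out = {}
    for b in np.unique(idx):
        # pool all simulated values whose sampling time falls in bin b
        vals = sims[:, idx == b].ravel()
        out[int(b)] = {p: float(np.percentile(vals, p)) for p in pct}
    return out
```

If the observed percentiles fall well inside the simulated intervals in each bin, the model's predictive distribution is consistent with the data; systematic departures indicate the kind of bias described above.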
Please find some references below.
Best regards
Juergen
Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models
using the posterior predictive check.
J Pharmacokinet Pharmacodyn. 2001 Apr;28(2):171-92.
Mentré F, Escolano S. Prediction discrepancies for the evaluation of nonlinear
mixed-effects models.
J Pharmacokinet Pharmacodyn. 2006 Jun;33(3):345-67.
-----------------------------------------------
Juergen Bulitta, PhD, Post-doctoral Fellow
Pharmacometrics, University at Buffalo, NY, USA
Phone: +1 716 645 2855 ext. 281, [EMAIL PROTECTED]
-----------------------------------------------
-----Original Message-----
From: "navin goyal" <[EMAIL PROTECTED]>
Sent: 02.07.07 20:10:56
To: nmusers <[email protected]>
Subject: [NMusers] Predictive Performance
Hi everybody,
I have a question about the predictive performance of a popPK model.
When I estimate the precision and bias of the popPK model I have, am I
supposed to use the individual predictions or the population predictions?
I am using "Some Suggestions for Measuring Predictive Performance" by Sheiner
and Beal, J Pharmacokinet Biopharm 1981;9(4):503-512, as a reference.
I guess I should use the population predictions to calculate the precision
and bias, since I want to use the model to predict plasma concentrations. Or
does this choice depend on anything else?
If I use the population predictions, then where else would I use the
individual predictions, apart from plotting them against the DV to evaluate the
goodness of fit?
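For what it is worth, the metrics in that Sheiner and Beal paper reduce to the mean prediction error (bias) and the root mean squared prediction error (precision). A minimal sketch in Python, where `obs` and `pred` are hypothetical arrays of observations and predictions (population or individual, per the question above):

```python
import numpy as np

def predictive_performance(obs, pred):
    """Bias and precision from prediction errors, in the spirit of
    Sheiner & Beal (1981).

    pe   = pred - obs          (prediction error)
    ME   = mean(pe)            (bias)
    RMSE = sqrt(mean(pe**2))   (precision)
    """
    pe = np.asarray(pred, dtype=float) - np.asarray(obs, dtype=float)
    return {"ME": float(pe.mean()),
            "RMSE": float(np.sqrt((pe ** 2).mean()))}
```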
Thanks in advance
--
--Navin