PPC

18 messages 10 people Latest: Aug 01, 2008

PPC

From: Paul Matthew Westwood Date: July 22, 2008 technical
Hello all, I wonder if someone can give me some tips on PPC. I am working on a midazolam dataset with a pediatric population, and have decided to use PPC as a model validation technique. The dataset I am modelling has up to 43 patients, at different ages, different weights, different times of dosing and sampling, and different doses. I simulated 100 datasets using NONMEM VI, fixing all parameters to the final estimates from the model. The simulated datasets produced had a large proportion of negative concentrations, and also a few impossibly large concentration values. Also the median, 5th and 95th percentiles were not very promising, and the resulting graphs not very clean. Firstly, can I use PPC with any degree of confidence with a dataset such as this, and if so, do I omit the negative concentration values from the analysis? Thanks in advance for any help given. Paul Westwood, PhD Student, QUB, Belfast.
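[A brief illustration of the negative-concentration issue raised above: simulated negative values typically arise from an additive residual-error term, since when the model prediction is small relative to the additive error SD the simulated values cross zero. A minimal sketch, with all numbers hypothetical and not from Paul's model:]

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical low population prediction near the limit of quantification
pred = 0.5        # ng/mL
sigma_add = 1.0   # additive residual SD (same units)

# Simulate 1000 observations with an additive error model: y = pred + eps
y = pred + rng.normal(0.0, sigma_add, size=1000)
frac_negative = np.mean(y < 0)  # substantial when sigma_add is large vs pred

# An exponential (log-normal) error model cannot produce negative values
y_log = pred * np.exp(rng.normal(0.0, 0.3, size=1000))
```

[Here about 30% of the additively simulated values are negative, while the log-normal simulation stays positive by construction; this is one reason the choice of residual error model matters for simulation-based checks.]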

Re: PPC

From: Nick Holford Date: July 22, 2008 technical
Paul,

It's not clear to me if you did a VPC (visual predictive check) using just the final estimates of the parameters, or tried to do a posterior predictive check (PPC) including uncertainty on the parameter estimates in the simulation.

I don't have any experience with PPC, but I don't think it's helpful for model evaluation. It's more of a tool for understanding uncertainties of predictions for future studies.

I assume you don't have complications like informative dropout processes to complicate the simulation, so if you did a VPC and the median of the predictions doesn't match the median of the observations then your model needs more work. Some negative concs are OK, but 'impossibly high values' point to problems with your model.

So I think you can safely say the VPC has worked very well -- it has told you that you need to think more about your model. You might find some ideas in these references:

1. Tod M, Jullien V, Pons G. Facilitation of drug evaluation in children by population methods and modelling. Clin Pharmacokinet. 2008;47(4):231-43.
2. Anderson BJ, Holford NH. Mechanism-Based Concepts of Size and Maturity in Pharmacokinetics. Annu Rev Pharmacol Toxicol. 2008;48:303-32.

Nick

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[EMAIL PROTECTED] tel:+64(9)373-7599x86730 fax:+64(9)373-7090
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
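[The VPC comparison described above -- summarise the observed data and each simulated replicate by the same statistics and check that the observed statistics fall within the simulated spread -- can be sketched as follows. All data here are synthetic and the monoexponential model is purely illustrative:]

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" data: 40 subjects, one sample each
# (hypothetical monoexponential model with log-normal variability)
n_subj = 40
times = rng.uniform(0.25, 4.0, n_subj)  # hours after dose
obs = 10 * np.exp(-0.7 * times) * rng.lognormal(0, 0.2, n_subj)

# A VPC simulates replicates of the same design using only the
# final parameter estimates (no parameter uncertainty)
n_rep = 100
sims = 10 * np.exp(-0.7 * times) * rng.lognormal(0, 0.2, (n_rep, n_subj))

def summarise(y):
    """Median and 5th/95th percentiles of one dataset."""
    return np.percentile(y, [5, 50, 95])

obs_stats = summarise(obs)                          # shape (3,)
sim_stats = np.array([summarise(s) for s in sims])  # shape (100, 3)

# Plot obs_stats against the spread of sim_stats; a clear mismatch
# in the medians suggests the model needs more work
```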

Re: FW: PPC

From: Nick Holford Date: July 23, 2008 technical
Paul,

The procedure you describe is a way of producing a posterior predictive check, but I don't know of any good examples of its use. A simpler way of doing a PPC samples the population parameter estimates from a distribution centered on the final estimates, with a variance-covariance based on the estimated standard errors and their correlation.

VPCs are not posterior predictive checks because they do not take account of the posterior distribution of the parameter estimates (i.e. the final estimates with their uncertainty). A VPC typically ignores the parameter uncertainty and uses what has been called the degenerate posterior distribution (see Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models using the posterior predictive check. J Pharmacokinet Pharmacodyn. 2001;28(2):171-92 for terminology, methods and examples). When I spoke of uncertainty I did not mean random variability (OMEGA and SIGMA). A VPC will simulate observations using the final THETA, OMEGA and SIGMA estimates.

You can calculate distribution statistics for your observations (such as median and 90% intervals) by combining the observations (one per individual) at each time point to create an empirical distribution. The statistics are then determined from this empirical distribution. In order to get sufficient numbers of points (at least 10 is desirable) you may need to bin observations into time intervals, e.g. 0-30 mins, 30-60 mins, etc.

Nick

Paul Matthew Westwood wrote:
> Nick,
>
> Thanks for your reply and apologies once again for another confusing email. I think I am using VPC, which as I understand it is simulating n datasets using the final parameter estimates gained from the final model, and then taking the median and 90% confidence interval (for example) for each simulated concentration and comparing these to the real concentrations. Whereas PPC is where you then run the final model through the simulated datasets and compare selected statistics of these new runs with the original. Is this correct? You mentioned including uncertainty on the parameter estimates in the simulated datasets. Would one usually not include uncertainty (fixing the error terms to zero) in the simulated datasets? Doing this with mine obviously produced much better concentrations, with no negative values and no 'significant' outliers. Another thing you mentioned is comparing the median of the simulated concentrations with the median of the original dataset concentrations, but as there is only one sample for any particular time point, would this indicate the unsuitability of VPC (and furthermore PPC) for this model?
>
> Thanks again,
> Paul.
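[The "simpler way of doing a PPC" described above -- sampling parameter vectors from a distribution centered on the final estimates, with a variance-covariance matrix built from the standard errors and their correlation -- might be sketched as follows. The estimates, SEs and correlation here are hypothetical, not taken from any real run:]

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical final estimates and standard errors for CL and V
theta = np.array([5.0, 40.0])   # CL (L/h), V (L)
se = np.array([0.5, 4.0])       # asymptotic standard errors
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])   # correlation of the estimates

# Variance-covariance matrix from the SEs and their correlation
cov = np.outer(se, se) * corr

# Each PPC replicate draws its own parameter vector from this
# (approximate, asymptotically normal) posterior distribution
n_rep = 100
theta_rep = rng.multivariate_normal(theta, cov, size=n_rep)  # (100, 2)
```

[Each simulated replicate then uses its own row of theta_rep (together with the usual OMEGA/SIGMA-level random draws), instead of reusing the single final estimate as a VPC does.]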

RE: FW: PPC

From: Susan A Willavize Date: July 23, 2008 technical
Hi Nick, I have been following this discussion and I think it is very helpful to many of us. Can you please elaborate on that last part about binning? What is that for? I must have missed something there. Thanks, Susan Susan Willavize, Ph.D. Global Pharmacometrics Group 860-732-6428 This e-mail is classified as Pfizer Confidential; it is confidential and privileged.

RE: FW: PPC

From: Mahesh Samtani Date: July 23, 2008 technical
Dear Nick,

Thank you for teaching these important concepts. Could you and others kindly comment on the following two aspects:

a) The variance-covariance matrix based on the estimated standard errors and their correlation will generate a multivariate normal distribution for the parameters. However, the posterior distribution of the parameters may not be normally dispersed. Wouldn't it be better to use bootstrap results as the source of the uncertainty distribution? I have to admit that the bootstrap method can be quite time-consuming. See one such example at: http://www.page-meeting.org/pdf_assets/2373-MSamtani%20PAGE%20Poster%202007.pdf

b) More importantly, after going through the PPC and VPC comparison for several cases, I always find that if the parameter estimates have reasonable precision from the original NONMEM run then the PPC and VPC results are essentially identical. This echoes an earlier comment that most of the variation is explained by BSV and RV. Has anyone else experienced this behavior, and if so, shouldn't VPC be enough for model verification?

Kindly advise...Mahesh
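[The bootstrap route mentioned in (a) replaces the multivariate-normal assumption with the empirical distribution of estimates obtained by refitting the model to datasets resampled by subject. The resampling step (the refits themselves would be NONMEM runs) might look like this; the dataset and values are hypothetical:]

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dataset: one row per observation, keyed by subject ID
ids = np.repeat(np.arange(1, 11), 3)   # 10 subjects, 3 observations each
conc = rng.lognormal(1.0, 0.3, ids.size)

def bootstrap_dataset(ids, conc, rng):
    """Resample subjects (not rows) with replacement, renumbering IDs
    so each picked subject becomes a distinct 'new' individual."""
    subjects = np.unique(ids)
    picked = rng.choice(subjects, size=subjects.size, replace=True)
    new_ids, new_conc = [], []
    for new_id, subj in enumerate(picked, start=1):
        mask = ids == subj
        new_ids.append(np.full(mask.sum(), new_id))
        new_conc.append(conc[mask])
    return np.concatenate(new_ids), np.concatenate(new_conc)

boot_ids, boot_conc = bootstrap_dataset(ids, conc, rng)
```

[Refitting the model to many such resampled datasets yields an empirical parameter distribution that need not be normal, at the cost Mahesh notes: one full estimation run per bootstrap replicate.]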

RE: FW: PPC

From: Mouksassi Mohamad-Samer Date: July 24, 2008 technical
Dear Susan,

Binning is done so that there are a sufficient number of points to compute the quantiles of interest. PsN 2.2.5 has a predictive check utility with very extensive options for binning and stratifying; its description document may be useful for understanding more about binning. For uncertainties you may use the bootstrap distribution or the asymptotic distribution from a covariance step.

Kind Regards,
Samer

RE: FW: PPC

From: Mahesh Samtani Date: July 24, 2008 technical
Dear Susan,

The cut function in S-plus is quite useful for binning. It creates a category object by dividing continuous data into intervals; one can use the nominal (protocol) times as the breakpoints and labels. To read more about binning, please see the abstract by Drs. Karlsson and Holford on VPCs from this year's PAGE meeting: http://www.page-meeting.org/?abstract=1434

Dr. Holford / Dr. Karlsson, could you kindly post your presentation from this year's PAGE VPC tutorial on the PAGE webpage?

Thanks...Mahesh
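[In Python, pandas.cut plays the same role as the S-plus cut function described above: it divides continuous data into labelled intervals, after which quantiles can be computed per bin. A sketch with hypothetical times and breakpoints:]

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Hypothetical sampling times (h) and concentrations
time = rng.uniform(0.0, 2.0, 60)
conc = 10 * np.exp(-0.7 * time) * rng.lognormal(0, 0.2, 60)

# Divide continuous times into intervals at the nominal breakpoints,
# as the S-plus cut function does
bins = [0.0, 0.5, 1.0, 2.0]
labels = ["0-30 min", "30-60 min", "1-2 h"]
binned = pd.cut(time, bins=bins, labels=labels)

# Empirical 5th/50th/95th percentiles within each time bin
summary = (pd.DataFrame({"bin": binned, "conc": conc})
           .groupby("bin", observed=False)["conc"]
           .quantile([0.05, 0.50, 0.95])
           .unstack())
```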
Quoted reply history
-----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Mouksassi Mohamad-Samer Sent: Thursday, July 24, 2008 11:54 AM To: Willavize, Susan A; Nick Holford; [email protected] Subject: RE: FW: [NMusers] PPC Dear Susan, Binning is to have sufficient number of points to compute quantiles of interest. PSN. 2.2.5 has a predictive check utilities and very extensive options regarding binning and stratifying. The description document may be useful to understand more about binning. For uncertainties you may use the bootstrap distribution or the asymptotic distribution from a covariance step. Kind Regards, Samer -----Original Message----- From: [EMAIL PROTECTED] on behalf of Willavize, Susan A Sent: Wed 7/23/2008 08:38 To: Nick Holford; [email protected] Subject: RE: FW: [NMusers] PPC Hi Nick, I have been following this discussion and I think it is very helpful to many of us. Can you please elaborate on that last part about binning? What is that for? I must have missed something there. Thanks, Susan Susan Willavize, Ph.D. Global Pharmacometrics Group 860-732-6428 This e-mail is classified as Pfizer Confidential; it is confidential and privileged. -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Nick Holford Sent: Wednesday, July 23, 2008 6:32 AM To: [email protected] Subject: Re: FW: [NMusers] PPC Paul, The procedure you describe is a way of producing a posterior predictive check but I don't know of any good examples of its use. A simpler way of doing a PPC samples the population parameter estimates from a distribution centered on the final estimates with a variance-covariance based on the estimated standard errors and their correlation. VPCs are not posterior predictive checks because they do not take account of the posterior distribution of the parameter estimates (i.e. the final estimates with their uncertainty). 
A VPC typically ignores the parameter uncertainty and uses what has been called the degenerate posterior distribution (See Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models using the posterior predictive check. J Pharmacokinet Pharmacodyn. 2001;28(2):171-92 for terminology, methods and examples). When I spoke of uncertainty I did not mean random variability (OMEGA and SIGMA). A VPC will simulate observations using the final THETA, OMEGA and SIGMA estimates. You can calculate distribution statistics for your observations (such as median and 90% intervals) by combining the observations (one per individual) at each time point to create an empirical distribution. The statistics are then determined from this empirical distribution. In order to get sufficient numbers of points (at least 10 is desirable) you may need to bin observations into time intervals e.g. 0-30 mins, 30-60 mins etc. Nick Paul Matthew Westwood wrote: > ________________________________________ > From: Paul Matthew Westwood > Sent: 22 July 2008 13:20 > To: Nick Holford > Subject: RE: [NMusers] PPC > > Nick, > > Thanks for your reply and apologies once again for another confusing email. I think I am using VPC, which as I understand it is simulating n datasets using the final parameter estimates gained from the final model, and then taking the median and 90% confidence interval (for example) for each simulated concentration and comparing these to the real concentrations. Whereas, PPC is where you then run the final model through the simulated datasets and compare selected statistics of these new runs with the original. Is this correct? You mentioned including uncertainty on the parameter estimates in the simulated datasets. Would one usually not include uncertainty (fixing the error terms to zero) in the simulated datasets? Doing this with mine obviously produced much better concentrations with no negative values and no 'significant' outliers. 
Another thing you mentioned is comparing the median of the simulated concentrations with the median of the original dataset concentrations, but as there is only one sample for any particular time point would this indicate the unsuitability of VPC (and furthermore PPC) for this model? > > Thanks again, > Paul. -- Nick Holford, Dept Pharmacology & Clinical Pharmacology University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand [EMAIL PROTECTED] tel:+64(9)373-7599x86730 fax:+64(9)373-7090 http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
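Nick's suggestion above (bin the observations into time intervals such as 0-30 min and 30-60 min, then compute the median and a 90% interval from the empirical distribution in each bin) can be sketched as follows; the data and bin edges are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed (time, concentration) data: 40 samples with times
# scattered over 0-60 min and lognormally distributed concentrations.
times = rng.uniform(0, 60, size=40)
concs = rng.lognormal(mean=1.0, sigma=0.3, size=40)

edges = [0, 30, 60]  # bin edges in minutes, e.g. 0-30 min and 30-60 min
stats = {}
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = concs[(times >= lo) & (times < hi)]
    p5, med, p95 = np.percentile(in_bin, [5, 50, 95])
    stats[(lo, hi)] = (in_bin.size, p5, med, p95)
    print(f"{lo}-{hi} min: n={in_bin.size}, "
          f"median={med:.2f}, 90% interval=({p5:.2f}, {p95:.2f})")
```

The same statistics computed from each simulated replicate can then be overlaid on the observed ones to build the percentile VPC discussed later in this thread.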

Re: FW: PPC

From: Nick Holford Date: July 24, 2008 technical
Mahesh, Thanks for this practical advice on how to do binning with S-Plus. Here are some more comments on VPCs and binning: Simulating at the same set of times for every subject is useful because of the usual scatter of observed times around protocol times. VPCs based only on observed times are possible but can be very hard to interpret visually when there is a lot of between-subject variability in observation times. It can also be computationally difficult with large data sets which are themselves simulated 1000 times. Note that the simulated values themselves are not binned. There is no need to do binning because you can always simulate enough times to get reliable statistics at each simulation time. Simulation times would normally be based on the nominal protocol time. It can be helpful to simulate more frequently if the protocol was rather sparse. Mats has pointed out that any simulations done at non-observed times cannot give you any diagnostic information about whether the model is predicting well at these non-observed times. The shape of the model predictions can be helpful in understanding where your design was deficient and what models might be identified from the data. If you simulate at non-observed times and you have more than one independent variable (e.g. time and weight) you will almost always want to use the covariates from the original data set for each subject. I choose the observed covariate set which is closest in time to the simulation time. This is not really binning but it uses the same algorithm of associating observations at times close to the simulation time with the simulation time. The alternative is to try and build a parametric multivariate distribution for covariates to use for simulation -- a procedure full of assumptions and high likelihood of model misspecification. The binning of the observations is frequently necessary in order to get sufficient observations in the sample to compute reasonable statistics (e.g. 
median, 5%ile, 95%ile). I bin the observations around the times chosen for the simulations. The observed statistics are then plotted as observation median and percentile bands ('the percentile VPC'). A VPC which does not do this but only shows the scatter of observations without showing these observation statistics is of only limited value ('the scatterplot VPC'). The combination of a percentile VPC and a scatterplot VPC is much more useful. Mats and I need to do some additional work on our PAGE tutorial presentation before we post it on the PAGE website. It's not enough just to put the slides on the web; we also want to add some explanatory notes. Best wishes, Nick
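A minimal sketch of the nearest-in-time covariate matching Nick describes (choosing, for each simulation time, the observed covariate record closest to it); the covariate and simulation times below are hypothetical:

```python
import numpy as np

# For one subject: times (h) at which a covariate (e.g. weight in kg) was
# observed, and the simulation times at which predictions are wanted.
cov_times = np.array([0.0, 24.0, 48.0])
weights = np.array([3.1, 3.2, 3.4])
sim_times = np.array([0.0, 6.0, 18.0, 30.0, 47.0])

# For each simulation time, pick the covariate record closest in time.
idx = np.abs(sim_times[:, None] - cov_times[None, :]).argmin(axis=1)
sim_weights = weights[idx]
print(sim_weights)  # -> [3.1 3.1 3.2 3.2 3.4]
```

As Nick notes, this reuses the observed covariates rather than simulating them from an assumed multivariate covariate distribution.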

Re: FW: PPC

From: Nick Holford Date: July 25, 2008 technical
Mahesh, Thanks for your further info on VPC and PPC. I agree that the bootstrap distribution of the parameters is probably better than the asymptotic normal distribution implied by NONMEM's covariance step results. I don't have your experience of comparing VPC and PPC so I hope you can find a way to publish these results, which are similar to the limited exploration reported by Yano et al. VPC is not the perfect answer for model evaluation but it has some useful properties compared with the traditional methods (standard horizontal residual plots and diagonal residual plots, i.e. DV vs PRED and IPRED). I certainly haven't seen any reason to use a PPC for model evaluation. It does however have a value (in theory) for predicting the uncertainty in the outcome of a future trial. Nick Samtani, Mahesh [PRDUS] wrote: > Dear Nick, > Thank-you for teaching these important concepts. Could you and others kindly > comment on the following 2 aspects: > > a) The variance-covariance matrix based on the estimated standard errors and their correlation will generate a multivariate normal distribution for the parameters. However, the posterior distribution of parameters may not be normally dispersed. Wouldn't it be better to use the bootstrap results as a source for getting the uncertainty distribution? I have to admit that the bootstrap method can be quite time-consuming. See one such example at: http://www.page-meeting.org/pdf_assets/2373-MSamtani%20PAGE%20Poster%202007.pdf > > b) More importantly, after going through the PPC and VPC comparison for several cases I always find that if the parameter estimates have reasonable precision from the original NONMEM run then the PPC and VPC results are essentially identical. This echoes an earlier comment that most of the variation is explained by BSV and RV. Has anyone else experienced this behavior, and if so shouldn't VPC be enough for model verification? > > Kindly advise...Mahesh >
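The two sources of parameter uncertainty being compared here (the asymptotic multivariate normal implied by the covariance step versus a bootstrap distribution of the parameters) can be sketched as follows. All estimates are hypothetical, and the bootstrap parameter table is a lognormal stand-in for what a real bootstrap run would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final estimates for (CL, V) and their asymptotic
# variance-covariance matrix from a covariance step.
theta = np.array([5.0, 30.0])
cov = np.array([[0.25, 0.30],
                [0.30, 4.00]])

# PPC parameter vectors via the asymptotic multivariate normal distribution:
mvn_draws = rng.multivariate_normal(theta, cov, size=1000)

# Alternative: resample parameter vectors from a bootstrap run, which needs
# no normality assumption (bootstrap_params here is a hypothetical stand-in
# for the table of bootstrap replicate estimates).
bootstrap_params = rng.lognormal(np.log(theta), 0.1, size=(500, 2))
boot_draws = bootstrap_params[rng.integers(0, len(bootstrap_params), size=1000)]

print(mvn_draws.mean(axis=0), boot_draws.mean(axis=0))
```

Each drawn parameter vector would then be used to simulate one replicate dataset, so the spread across replicates reflects parameter uncertainty as well as BSV and RV.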

Re: FW: PPC

From: Makamal Date: July 25, 2008 technical
Dear Dr. Holford, Please correct me if I am wrong; however, my understanding is that the asymptotic distribution implied by NONMEM's covariance step approaches normality as the sample size gets larger, i.e. as we have more data. However, a nonparametric bootstrap distribution may have poor coverage with a small sample size as well, since it relies on sampling subjects with replacement from the data set. So both distributions have problems when the sample size is small (e.g. N<30). Therefore I would think that when N is large the Wald-based confidence intervals from NONMEM are appropriate enough. It would be helpful to know the criteria for when generating a nonparametric bootstrap distribution is really advantageous. Thanks, Mohamed
Quoted reply history
> > From: [EMAIL PROTECTED] > > [mailto:[EMAIL PROTECTED] Behalf Of Willavize, Susan A > > Sent: Wednesday, July 23, 2008 8:38 AM > > To: Nick Holford; [email protected] > > Subject: RE: FW: [NMusers] PPC > > > > Hi Nick, > > > > I have been following this discussion and I think it is very helpful to > > many of us. Can you please elaborate on that last part about binning? > > What is that for? I must have missed something there. > > > > Thanks, > > Susan Susan Willavize, Ph.D. Global Pharmacometrics Group > > 860-732-6428 > > > > This e-mail is classified as Pfizer Confidential; it is confidential and > > privileged. -----Original Message----- > > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > > On Behalf Of Nick Holford > > Sent: Wednesday, July 23, 2008 6:32 AM > > To: [email protected] > > Subject: Re: FW: [NMusers] PPC > > > > Paul, > > > > The procedure you describe is a way of producing a posterior predictive check but I don't know of any good examples of its use. A simpler way of > > > > doing a PPC samples the population parameter estimates from a distribution centered on the final estimates with a variance-covariance > > > > based on the estimated standard errors and their correlation. VPCs are not posterior predictive checks because they do not take account of the posterior distribution of the parameter estimates (i.e. the final estimates with their uncertainty). A VPC typically ignores the parameter > > > > uncertainty and uses what has been called the degenerate posterior distribution (See Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models using the posterior predictive check. J Pharmacokinet Pharmacodyn. 2001;28(2):171-92 for terminology, methods and examples). > > > > When I spoke of uncertainty I did not mean random variability (OMEGA and > > > > SIGMA). A VPC will simulate observations using the final THETA, OMEGA and SIGMA estimates. 
Quoted reply history

Mohamed A. Kamal, Pharm.D.
Ph.D. Candidate
Department of Pharmaceutical Sciences
University of Michigan
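The VPC summary procedure discussed earlier in this thread — bin the observations into time intervals and compute the median and 90% interval of the empirical distribution in each bin — can be sketched as follows. This is a generic Python illustration, not part of the thread; the function name and the 10-points-per-bin rule of thumb follow Nick's suggestion, and all numbers are made up.

```python
import numpy as np

def vpc_percentiles(times, conc, bin_edges, pcts=(5, 50, 95)):
    """Summarize concentrations by time bin with the given percentiles.

    times, conc : 1-D sequences of observation times and concentrations
    bin_edges   : bin boundaries, e.g. [0, 30, 60] (minutes)
    Returns a dict mapping each (lo, hi) bin to its percentile values.
    """
    times = np.asarray(times, dtype=float)
    conc = np.asarray(conc, dtype=float)
    out = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (times >= lo) & (times < hi)
        if mask.sum() >= 10:  # at least ~10 points per bin is desirable
            out[(lo, hi)] = np.percentile(conc[mask], pcts)
    return out
```

The same function would be applied to each simulated data set, and the simulated percentile bands then overlaid on the observed ones.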

Re: FW: PPC

From: Leonid Gibiansky Date: July 25, 2008 technical
I cannot make any general statements, but here is a summary of the 13 different models that I tested when comparing bootstrap and NONMEM CIs: http://www.quantpharm.com/pdf_files/2572-GibianskyPage2007Poster2007final.pdf

Note that all bootstrap samples were appropriately stratified by major covariates (such as study, dose, and weight, as necessary).

Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566

[EMAIL PROTECTED] wrote:
> Dear Dr. Holford,
>
> Please correct me if I am wrong, but my understanding is that the asymptotic distribution implied by NONMEM's covariance step approaches normality as the sample size gets larger or we have more data. However, a non-parametric bootstrap distribution may also have poor coverage with a small sample size, since it relies on sampling subjects with replacement from the data set. So both distributions have problems when the sample size is small (e.g. N < 30). Therefore I would think that when N is large the Wald-based confidence intervals from NONMEM are adequate. It would be helpful to know the criteria for when generating a non-parametric bootstrap distribution is really advantageous.
>
> Thanks, Mohamed
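Leonid's point about stratifying the bootstrap by major covariates can be sketched as follows. This is a minimal Python illustration (not part of the original thread); the long-format data layout and the ID/STUDY column names are assumptions for the example.

```python
import numpy as np
import pandas as pd

def stratified_bootstrap_ids(df, id_col="ID", strata_col="STUDY", rng=None):
    """Resample subject IDs with replacement *within* each stratum, so every
    bootstrap data set keeps the original number of subjects per stratum
    (study, dose group, weight band, etc.)."""
    rng = np.random.default_rng(rng)
    # One row per subject, carrying its stratum label
    subjects = df[[id_col, strata_col]].drop_duplicates(id_col)
    sampled = []
    for _, grp in subjects.groupby(strata_col):
        # Sample len(grp) subjects from this stratum, with replacement
        sampled.extend(rng.choice(grp[id_col].to_numpy(),
                                  size=len(grp), replace=True))
    return sampled
```

Each bootstrap data set would then be assembled by concatenating the sampled subjects' records under fresh IDs before refitting the model.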

Re: FW: PPC

From: Nick Holford Date: July 26, 2008 technical
Mohamed,

When the number of subjects is small, any confidence interval is going to be wide and probably no one is really interested in it. With studies more suitable for population analysis (at least 25 subjects, and preferably over 100 if you want to look for covariate effects), the CIs may be more interesting.

With linear models, or parameters that are nearly linear in non-linear models, I would expect quite good agreement between CIs obtained by bootstrapping and those based on NONMEM SEs. But models get interesting when one tries to estimate non-linear parameters, e.g. EC50 in an Emax model. In that case the CIs will often be asymmetrical, and the normal distribution assumption used to compute CIs from SEs will be wrong.

Leonid does not discuss the issue of asymmetry of CIs in his poster, but when I look at Figure 3 I see evidence of disagreement between bootstrap and NONMEM SE-based CIs. The scatter of bootstrap points relative to the solid line shows an excess of bootstrap upper CI values above the SE prediction. For the lower CI prediction there also seem to be more bootstrap values above the SE predictions. It is hard to be sure that these upper and lower bootstrap predictions belong to the same parameters, but if so, this would be evidence of asymmetry of the bootstrap CI. This is exactly what one would expect, because the SE method has to assume symmetrical CIs, while the bootstrap estimate is not restricted in this way.

I think this poster is a nice example of why correlation coefficients are a very poor way to compare predictions (as pointed out by Sheiner and Beal in their classic paper: Sheiner LB, Beal SL. Some Suggestions for Measuring Predictive Performance. J Pharmacokinet Biopharm. 1981;9(4):503-12). A better way would be to compute the prediction error of the absolute larger and smaller CI arms obtained by bootstrapping relative to the symmetrical CI from the SE.
If bootstrap CIs are indeed asymmetrical, then there would be a difference shown by the mean prediction error ('bias'). Note that I use absolute-value larger and smaller CI arms to refer to the larger or smaller part of the CI that is constructed around zero; I do not mean the upper and lower parts of the CI.

Best wishes,
Nick

Quoted reply history

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[EMAIL PROTECTED] tel:+64(9)373-7599x86730 fax:+64(9)373-7090
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
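Nick's point — that SE-based CIs are forced to be symmetric about the estimate, while percentile bootstrap CIs are not — can be illustrated with a small simulation. This is a generic Python sketch, not from the thread, using the mean of a small lognormal sample as a stand-in for a skewed, "EC50-like" estimator; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=1.0, size=20)  # skewed sample

# Point estimate and its standard error
est = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))

# Symmetric, normal-theory 90% CI (what an SE-based CI must assume)
ci_normal = (est - 1.645 * se, est + 1.645 * se)

# Non-parametric percentile bootstrap 90% CI
boot = np.array([rng.choice(x, size=len(x), replace=True).mean()
                 for _ in range(2000)])
ci_boot = tuple(np.percentile(boot, [5, 95]))

# The bootstrap CI arms need not be equal, unlike the SE-based ones
lower_arm = est - ci_boot[0]
upper_arm = ci_boot[1] - est
```

Comparing `lower_arm` and `upper_arm` against the (identical) arms of `ci_normal` shows the kind of asymmetry the SE-based interval cannot express.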

RE: FW: PPC

From: Matt Hutmacher Date: July 28, 2008 technical
Hi Nick,

The log-transform I discussed was just a simple example for a parameter bounded below by 0 (similar to CL, which is generally considered lognormally distributed between individuals). Constraints on other parameters can be accommodated as well, for example with the logit. I look forward to a publication detailing the risks and benefits of permitting lack of convergence in the bootstrap, which we could cite in reports; citing discussions on nmusers is difficult. I do agree the bootstrap is quite useful, especially if you don't trust the LRT. I still think it is good to show, without a $COV step, that the estimates were achieved at a minimum and not at a saddle point.

For the renal example, if we did not have the correct number in each CLcr group and it influenced CL, then the CI might be too wide, since the span of CLcr used to support the estimate of the CLcr covariate parameter would not be constrained to be wide enough (this is similar to Stephen Duffull's recent statement that the covariate distribution is often not of sufficient span to have adequate power). Therefore an LRT and the CI might not show the same signal. Depending on one's trust in the LRT, one might conclude that less is known about the CLcr-CL relationship. While larger-than-nominal coverage is OK with respect to the CI statement, inefficient use of information is expensive. I am assuming an adequate sample size for a reasonable $COV step estimate, and that the subjects are densely sampled enough for FOCE to adequately approximate the true likelihood. The latter can be reconciled by other methods, however.

Kind regards,
Matt
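Matt's log-transform example can be made concrete. If the model is coded as ED50 = EXP(THETA(X)), the $COV step reports a symmetric SE for THETA(X) on the log scale; back-transforming the symmetric log-scale CI then gives an asymmetric, strictly positive CI for ED50 itself. A sketch (the values theta = 2.0 and SE = 0.4 are made up for illustration):

```python
import math

# Hypothetical NONMEM output for the log-scale parameter
theta, se = 2.0, 0.4

# Symmetric 95% CI on the log scale ...
lo_log, hi_log = theta - 1.96 * se, theta + 1.96 * se

# ... back-transforms to a right-skewed CI for ED50 that cannot cross zero
ed50 = math.exp(theta)
ci_ed50 = (math.exp(lo_log), math.exp(hi_log))
```

The upper arm of `ci_ed50` is wider than the lower arm, and no $THETA boundary constraint is needed since EXP(THETA(X)) is positive by construction.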
Quoted reply history

Re: FW: PPC

From: Nick Holford Date: July 30, 2008 technical
Matt,

I know it's a statistical tradition to use ad hoc transformations to try to make distributions more 'normal' -- indeed, some models are useful even if they are wrong. But parametric or non-parametric distributions of uncertainty can be described with statistics such as CIs without having to force them to be 'normal'.

There have been two publications, beyond the nmusers discussions, documenting that NONMEM's termination status and ability to complete $COV do not reflect the quality of the parameter estimates. Please look at the thread I suggested last time: http://www.cognigencorp.com/nonmem/nm/99jul292006.html -- search the thread for "slim evidence". There is also a paper from Wonkyung Byon et al. showing that NONMEM termination status is of no value when performing randomization tests to define Type I error: Byon W, Fletcher CV, Brundage RC. Impact of censoring data below an arbitrary quantification limit on structural model misspecification. J Pharmacokinet Pharmacodyn. 2008;35(1):101-16.

Best wishes,
Nick
Quoted reply history
> -----Original Message----- > From: owner-nmusers > Behalf Of Nick Holford > Sent: Friday, July 25, 2008 5:14 PM > To: nmusers > Subject: Re: FW: [NMusers] PPC > > Matt, > > Thanks for your comments which I almost completely agree with. > > You propose to log transform the parameters so that the resulting > unlogged uncertainty will be skewed. But if this does not mean you will > get a better picture of the uncertainty. If the 'true' parameter > uncertainty is left skewed the log transformation will force some kind > of right skewness which would not be correct. > > The issue of NONMEM bootstrap success rates and confidence intervals has > been discussed at length on nmusers. > http://www.cognigencorp.com/nonmem/nm/99jul292006.html - search the > thread for "slim evidence" > http://www.cognigencorp.com/nonmem/nm/99jul152003.html -- search the > thread for "assess imprecision" > Based on experimental evidence with real and simulated data sets it > makes negligible difference to the bootstrap confidence intervals if > NONMEM converges and runs the covariance step or if NONMEM terminates > with rounding errors. What is more certain is that CI's based on the > assumption of normally distributed uncertainty and asymptotic SEs will > have the wrong coverage if the true uncertainty is not symmetrical (a > common finding for non-linear model parameters). > > I agree that simple bootstrapping can cause problems as you have > outlined but it is a helpful tool when NONMEM refuses to run the > covariance step and you want to get some feel for parameter uncertainty. > If you took your example of a small renal impairment vs normal study > what difference do you think there would be in the 90% CI for clearance > based on a naive bootstrap versus some other better constructed procedure? > > Best wishes, > > Nick > > Matt Hutmacher wrote: > >> Hello all, >> I look forward to seeing the tutorial on the web as well. 
>> >> I have seen comments that some modelers prefer the non-parametric >> > bootstrap > >> to the $COV step because it captures skewed distributions. For reasonable >> sample sizes, the uncertainty distributions should be normal, and in my >> experience, for stable and good fitting models, the results between the >> non-parametric bootstrap and the $COV step are highly similar. When >> > sample > >> sizes are smaller, or a parameter is not well estimated because of the >> design - ED50 quickly comes to mind - the nonparametric bootstrap might >> > show > >> skewness. In this case, the $COV step uncertainty distribution can be >> improved by re-parameterizing from ED50=THETA(X) to ED50=EXP(THETA(X)). >> Note that this parameterization does not need any boundary constraints (in >> $THETA) as well. Maximum likelihood is invariant to this these changes >> > and > >> so the same objective function and fit (given a stable model) should be >> achieved. The uncertainty of THETA(X), for example THETA(X) +/- >> > 2*STANDARD > >> ERROR (THETA(X)) translates into an ED50 interval of EXP(THETA(X) +/- >> 2*STANDARD ERROR (THETA(X)), which is skewed. >> >> I have seen the nonparametric bootstrap used without thought to how it >> should be implemented given the designs and structures of the data. For >> example, consider a single dose study with n=6 per group and a study to >> assess exposure stratified by CLcr groupings (ie kidney function) with n=8 >> per group. Because the number in the dose and CLcr groups are fixed by >> design, the nonparametric sampling procedure should sample with >> > replacement > >> within groups to achieve the fixed number of patients per group by design, >> that is n=6 or n=8. If this is not done, then dose and CLcr are >> conceptually random with respect to the bootstrap and a sampled data set >> could be imbalanced relative to the original designs. 
These imbalances >> > will > >> influence the estimated uncertainty distribution and could bias the >> > results. > >> One can see that this can get complicated quickly to do it right. Another >> example would be fitting an Emax model to a biomarker measured over a set >> > of > >> 5 distinct, fixed concentrations, each replicated n=10 times. If we >> > sample > >> without regard to the fixed nature of the design, we may fail to get many >> > of > >> the Emax models to converge, which is unrealistic. This leads to >> convergence, which can be another issue. How do I justify in my report >> that my 90% confidence intervals are reasonable if only 80% of my >> > bootstraps > >> converge? Additionally, if the $COV step does not converge and a modeler >> uses the nonparametric bootstrap to estimate uncertainty, how does the >> modeler demonstrate the estimates achieved a minimum OFV and not at a >> > saddle > >> point? The $COV step provides this check automatically. >> I do not proclaim that the $COV step is perfect, only that it is a useful >> and valuable tool in modeling, and that the bootstrap should not be used >> without thought. >> >> To be fair, a drawback to the $COV uncertainty distributions is that >> non-positive definite OMEGA matrices can still be sampled, which are >> invalid. However, the same parameterization trick as used above can be >> implemented to mitigate some of this behavior. Instead of parameterizing >> CL=THETA(1)*EXP(ETA(1)) and estimating the variance of ETA(1) in $OMEGA, >> this model can be re-parameterized as >> CL=EXP(THETA(1))*EXP(EXP(THETA(2))*ETA(2)). In this case the variance of >> ETA(1) is set to 1 in $OMEGA and EXP (THETA(2)) provides its estimate. >> > This > >> will bound the variance component away from 0 and give the uncertainty >> distribution some skewness. Correlations between variance components can >> also be forced between -1 and 1 by re-parameterization, but this is more >> complicated. 
>> Matt >> -----Original Message----- From: owner-nmusers On Behalf Of Nick Holford Sent: Friday, July 25, 2008 2:12 AM To: nmusers Subject: Re: FW: [NMusers] PPC >> Mahesh, >> Thanks for your further info on VPC and PPC. I agree that the bootstrap distribution of the parameters is probably better than the asymptotic normal distribution implied by NONMEM's covariance step results. >> I don't have your experience of comparing VPC and PPC so I hope you can find a way to publish these results, which are similar to the limited exploration reported by Yano et al. >> VPC is not the perfect answer for model evaluation but it has some useful properties compared with the traditional methods (standard horizontal residual plots and diagonal residual plots of DV vs PRED and IPRED). I certainly haven't seen any reason to use a PPC for model evaluation. It does however have a value (in theory) for predicting the uncertainty in the outcome of a future trial. >> Nick >> Samtani, Mahesh [PRDUS] wrote: >>> Dear Nick, Thank you for teaching these important concepts. Could you and others kindly comment on the following 2 aspects: >>> a) The variance-covariance matrix based on the estimated standard errors and their correlation will generate a multivariate normal distribution for the parameters. However, the posterior distribution of parameters may not be normally dispersed. Wouldn't it be better to use the bootstrap results as a source for getting the uncertainty distribution? I have to admit that the bootstrap method can be quite time-consuming. See one such example at: http://www.page-meeting.org/pdf_assets/2373-MSamtani%20PAGE%20Poster%202007.pdf >>> b) More importantly, after going through the PPC and VPC comparison for several cases I always find that if the parameter estimates have reasonable precision from the original NONMEM run then the PPC and VPC results are essentially identical. This echoes an earlier comment that most of the variation is explained by BSV and RV. Has anyone else experienced this behavior also, and if so shouldn't VPC be enough for model verification? >>> Kindly advise...Mahesh >>> -----Original Message----- From: owner-nmusers Sent: Wednesday, July 23, 2008 8:38 AM To: Nick Holford; nmusers Subject: RE: FW: [NMusers] PPC >>> Hi Nick, >>> I have been following this discussion and I think it is very helpful to many of us. Can you please elaborate on that last part about binning? What is that for? I must have missed something there. >>> Thanks, Susan >>> Susan Willavize, Ph.D. Global Pharmacometrics Group 860-732-6428 >>> -----Original Message----- From: owner-nmusers On Behalf Of Nick Holford Sent: Wednesday, July 23, 2008 6:32 AM To: nmusers Subject: Re: FW: [NMusers] PPC >>> Paul, >>> The procedure you describe is a way of producing a posterior predictive check but I don't know of any good examples of its use. A simpler way of doing a PPC samples the population parameter estimates from a distribution centered on the final estimates with a variance-covariance matrix based on the estimated standard errors and their correlation. VPCs are not posterior predictive checks because they do not take account of the posterior distribution of the parameter estimates (i.e. the final estimates with their uncertainty).
>>> A VPC typically ignores the parameter uncertainty and uses what has been called the degenerate posterior distribution (see Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models using the posterior predictive check. J Pharmacokinet Pharmacodyn. 2001;28(2):171-92 for terminology, methods and examples). >>> When I spoke of uncertainty I did not mean random variability (OMEGA and SIGMA). A VPC will simulate observations using the final THETA, OMEGA and SIGMA estimates. >>> You can calculate distribution statistics for your observations (such as median and 90% intervals) by combining the observations (one per individual) at each time point to create an empirical distribution. The statistics are then determined from this empirical distribution. In order to get sufficient numbers of points (at least 10 is desirable) you may need to bin observations into time intervals, e.g. 0-30 mins, 30-60 mins etc. >>> Nick >>> Paul Matthew Westwood wrote: >>>> From: Paul Matthew Westwood Sent: 22 July 2008 13:20 To: Nick Holford Subject: RE: [NMusers] PPC >>>> Nick, >>>> Thanks for your reply and apologies once again for another confusing email. I think I am using VPC, which as I understand it is simulating n datasets using the final parameter estimates gained from the final model, and then taking the median and 90% confidence interval (for example) for each simulated concentration and comparing these to the real concentrations. Whereas, PPC is where you then run the final model through the simulated datasets and compare selected statistics of these new runs with the original. Is this correct? You mentioned including uncertainty on the parameter estimates in the simulated datasets. Would one usually not include uncertainty (fixing the error terms to zero) in the simulated datasets? Doing this with mine obviously produced much better concentrations with no negative values and no 'significant' outliers. Another thing you mentioned is comparing the median of the simulated concentrations with the median of the original dataset concentrations, but as there is only one sample for any particular time point would this indicate the unsuitability of VPC (and furthermore PPC) for this model? >>>> Thanks again, Paul. -- Nick Holford, Dept Pharmacology & Clinical Pharmacology University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand n.holford http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
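Nick's binning-and-percentiles recipe can be sketched in a few lines. The observation times, concentrations, and bin edges below are synthetic, chosen only to illustrate the mechanics:

```python
import statistics

# A sketch of the empirical-distribution step described above: bin the
# observations (one per individual) into time intervals, then take the
# median and a 90% interval within each bin. All numbers are synthetic.
obs = [(5, 8.2), (12, 7.9), (25, 6.5), (33, 5.8), (41, 5.1),
       (48, 4.9), (55, 4.2), (64, 3.6), (75, 3.1), (88, 2.7)]

bins = [(0, 30), (30, 60), (60, 90)]  # e.g. 0-30 mins, 30-60 mins, ...

for start, end in bins:
    conc = sorted(c for t, c in obs if start <= t < end)
    if not conc:
        continue
    med = statistics.median(conc)
    # crude rank-based 5th/95th percentiles; with real data you would
    # want at least ~10 observations per bin, as suggested in the thread
    p5 = conc[int(0.05 * (len(conc) - 1))]
    p95 = conc[int(round(0.95 * (len(conc) - 1)))]
    print(f"{start:>3}-{end:<3} min: median={med}, 90% interval=({p5}, {p95})")
```

The same per-bin statistics computed on the simulated datasets give the prediction intervals that the observed statistics are compared against.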

Re: FW: PPC

From: Nick Holford Date: July 31, 2008 technical
Matt, I know it's a statistical tradition to use ad hoc transformations to try to make distributions more 'normal' -- indeed some models are useful even if they are wrong. But parametric or non-parametric distributions of uncertainty can be described with statistics such as CIs without having to force them to be 'normal'. There have been two publications, beyond nmusers discussions, that document that NONMEM's termination status/ability to do $COV does not reflect the quality of the parameter estimates. Please look at the thread I suggested last time: http://www.cognigencorp.com/nonmem/nm/99jul292006.html - search the thread for "slim evidence". There is also a paper from Wonkyung Byon et al. that shows that NONMEM termination status is of no value when performing randomization tests to define Type I error: Byon W, Fletcher CV, Brundage RC. Impact of censoring data below an arbitrary quantification limit on structural model misspecification. J Pharmacokinet Pharmacodyn. 2008;35(1):101-16. Best wishes, Nick Matt Hutmacher wrote: > Hi Nick, > > The log-transform I discussed was just a simple example for a parameter > bounded below by 0 (similar to CL, which is generally considered lognormally > distributed between individuals). Constraints on other parameters can be > accommodated as well, such as with the logit. I look forward to a publication > that details the risks/benefits of permitting lack of convergence in the > bootstrap that we can cite in reports. Citing discussions on nmusers is > difficult. I do agree the bootstrap is quite useful, especially if you > don't trust the LRT. I still think it is good to show, without a $COV step, > that the estimates were achieved at a minimum and not a saddle point.
> > For the renal example, if we did not have the correct number in each CLcr > group and it influenced CL, then the CI might be too wide, since the span of > CLcr used to support the estimate of the CLcr covariate parameter would not > be constrained to be wide enough (this is similar to Stephen Duffull's > recent statement that often the covariate distribution is not of > sufficient span to have adequate power). Therefore, an LRT and the CI might > not show the same signal. Depending on trusting the LRT, one might conclude > that less information is known about the CLcr-CL relationship. While larger-than-nominal > coverage is OK with respect to the CI statement, inefficient > use of information is expensive. I am assuming an adequate sample size for > a reasonable $COV step estimate and that the subjects are densely sampled > enough to have FOCE adequately approximate the true likelihood. The latter > can be reconciled by other methods, however. > > Kind regards, > Matt >
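Matt's point about resampling within fixed design groups amounts to a stratified bootstrap. A minimal sketch, with hypothetical subject IDs and group sizes:

```python
import random

# Hypothetical subject IDs in two renal-function strata; the per-group
# sizes (n=8 each) are fixed by the study design, so resampling is done
# with replacement WITHIN each stratum.
groups = {
    "normal":   ["N1", "N2", "N3", "N4", "N5", "N6", "N7", "N8"],
    "impaired": ["R1", "R2", "R3", "R4", "R5", "R6", "R7", "R8"],
}

def stratified_bootstrap(groups, rng):
    # one bootstrap data set: every stratum keeps its designed size
    return {name: [rng.choice(ids) for _ in ids]
            for name, ids in groups.items()}

rng = random.Random(2008)
resampled = stratified_bootstrap(groups, rng)
print(resampled)  # subjects may repeat, but n=8 per group is preserved
```

An unstratified bootstrap over the pooled 16 subjects would, by contrast, let the group sizes drift from the design in each replicate.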
Quoted reply history
-- Nick Holford, Dept Pharmacology & Clinical Pharmacology University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand [EMAIL PROTECTED] tel:+64(9)373-7599x86730 fax:+64(9)373-7090 http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
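Nick's point that uncertainty can be summarized by CIs without forcing normality corresponds to the percentile bootstrap interval. A minimal sketch with simulated (deliberately skewed) bootstrap estimates:

```python
import random

# A sketch of a percentile bootstrap confidence interval: no normality
# assumption is imposed, the interval is read directly off the empirical
# distribution of the bootstrap estimates. The "estimates" here are
# simulated from a skewed (lognormal) distribution for illustration.
rng = random.Random(42)
estimates = sorted(rng.lognormvariate(0.0, 0.5) for _ in range(1000))

def percentile(sorted_vals, p):
    # nearest-rank percentile on an already-sorted list
    k = min(len(sorted_vals) - 1, max(0, round(p * (len(sorted_vals) - 1))))
    return sorted_vals[k]

lo, hi = percentile(estimates, 0.025), percentile(estimates, 0.975)
med = percentile(estimates, 0.5)
print(f"median {med:.2f}, 95% percentile CI ({lo:.2f}, {hi:.2f})")
# The interval is asymmetric about the median; a symmetric
# estimate +/- 2*SE summary would misstate the coverage.
```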

RE: PPC

From: Andreas Lindauer Date: July 31, 2008 technical
Hello NMUSERS, Let me just add one more thought on the bootstrap discussion. Sometimes when doing a bootstrap it happens that the runs terminate because of parameter estimates near to the boundary (e.g. values for OMEGA close to 0). When this happens in a considerable number of runs, let's say in 10% of the runs, how would you then calculate the CI for that parameter? The bootstrap parameter distribution derived from all runs - irrespective of their termination status - would then be bimodal, with one mode close to the boundary. Best regards, Andreas. ____________________________ Andreas Lindauer University of Bonn Department of Clinical Pharmacy An der Immenburg 4 D-53121 Bonn phone: +49 228 73 5781 fax: +49 228 73 9757

RE: PPC

From: William Bachman Date: July 31, 2008 technical
Your bootstrap runs may terminate because of "parameter estimates near to the boundary" due to a new feature in NONMEM VI, the "default boundary test". See the help manual entry for $ESTIMATION:

| THETABOUNDTEST, OMEGABOUNDTEST, SIGMABOUNDTEST
|   With NONMEM VI, the estimation step sometimes terminates with the
|   message PARAMETER ESTIMATE IS NEAR ITS DEFAULT BOUNDARY.
|   These options request that the "default boundary test" be performed
|   for THETA, OMEGA, and SIGMA, respectively. THETABOUNDTEST may also be
|   coded TBT or TBOUNDTEST; OMEGABOUNDTEST may also be coded OBT or
|   OBOUNDTEST; SIGMABOUNDTEST may also be coded SBT or SBOUNDTEST.
|   These options are the defaults.
| NOTHETABOUNDTEST, NOOMEGABOUNDTEST, NOSIGMABOUNDTEST
|   Instructs NONMEM to omit the "default boundary test" for this type of
|   variable, i.e., to behave like NONMEM V in this regard.
|   Any option listed above may be preceded by "NO". The THETA, OMEGA, and
|   SIGMA choices are independent of each other. E.g., it is possible to
|   specify NOOBT (to prevent the "default OMEGA boundary test") and permit
|   both the "default THETA boundary test" and "default SIGMA boundary test".

Try turning off the "default boundary test". William J. Bachman, Ph.D. Director, Pharmacometrics R&D Icon Development Solutions 6031 University Blvd., Suite 300 Ellicott City, MD 21043 Office 410-696-3002 Cell 301-467-8635
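As a sketch, a control stream record that disables only the default OMEGA boundary test might look like the following; apart from NOOBT (documented in the help excerpt above), the other $ESTIMATION options here are illustrative, not taken from the thread:

```
$ESTIMATION METHOD=1 INTERACTION MAXEVAL=9999 PRINT=5 NOOBT
```

The THETA and SIGMA boundary tests remain active, since the NO-prefixed options are independent of each other.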
Quoted reply history

Re: PPC

From: Nick Holford Date: August 01, 2008 technical
Andreas, If this happens to 10% of your runs then it's pretty strong evidence that the uncertainty in the OMEGA estimate would lead to a 95% CI very close to zero. I would consider simplifying the model and fixing that OMEGA to 0. Then evaluate the model for its intended purpose and decide if this simplification had any impact on the use of the model. With NONMEM VI you can now choose to turn off this warning, which may allow you to convince yourself more clearly that the lower bound of the confidence interval is close to zero. Nick andreas lindauer wrote: > Hello NMUSERS, > > Let me just add one more thought on the bootstrap discussion. Sometimes when doing a bootstrap it happens that the runs terminate because of parameter estimates near to the boundary (e.g. values for OMEGA close to 0). When this happens in a considerable number of runs, let's say in 10% of the runs, how would you then calculate the CI for that parameter? Because the bootstrap parameter distribution derived from all runs - irrespective of their termination status - would then be bimodal with one mode close to the boundary. > > Best regards, Andreas. > ____________________________ > Andreas Lindauer > University of Bonn > Department of Clinical Pharmacy > An der Immenburg 4 > D-53121 Bonn > phone: +49 228 73 5781 > fax: +49 228 73 9757 -- Nick Holford, Dept Pharmacology & Clinical Pharmacology University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand [EMAIL PROTECTED] tel:+64(9)373-7599x86730 fax:+64(9)373-7090 http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
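Nick's argument can be checked numerically with made-up bootstrap results: if a tenth of the runs pile up at the boundary, the lower limit of a 95% percentile interval necessarily sits near zero.

```python
import random

# Made-up bootstrap results: 90% of runs give a plausible OMEGA
# estimate, 10% collapse to the boundary near zero.
rng = random.Random(7)
omegas = sorted([rng.lognormvariate(-1.0, 0.3) for _ in range(900)]
                + [1e-6] * 100)

lo = omegas[int(0.025 * (len(omegas) - 1))]   # 2.5th percentile
hi = omegas[int(0.975 * (len(omegas) - 1))]   # 97.5th percentile
print(f"95% bootstrap CI for OMEGA: ({lo:.2e}, {hi:.2e})")
# With 10% of the runs piled at the boundary, the lower limit of the
# 95% CI sits essentially at zero - consistent with considering a
# simpler model that fixes this OMEGA to 0.
```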