Dear Group,
This is a topic that has been discussed before, and to my knowledge different schools of thought exist. But I want to restate my case and get some opinions. The question is how important successful minimization and a successful covariance step are when the diagnostics make sense. I have developed a two-compartment model with Phase 3 trial data; minimization was successful but the covariance step was not. I went ahead and ran a 1000-run bootstrap to get confidence intervals for the parameters. 60% of the runs did not minimize successfully, and many others did not have a successful covariance step. I put together CIs from the runs with successful minimization and also from all 1000 runs. There is no difference in the parameter estimates or the confidence intervals (less than 5% change in the numbers). The model diagnostics look good, including VPC, NPDE plots, basic GOF plots, and a simulation that explains data from another trial. Now, my question is: in this particular case, do I have to worry further about making the covariance step successful and increasing the number of bootstrap runs that minimize successfully, even though I see little difference in the parameter estimates and diagnostics? My bottom line is not going to change in any way. I appreciate your expert opinions.
Regards,
Ayyappa
Successful minimization and covariance
8 messages
8 people
Latest: May 24, 2012
Ayyappa,
You have confirmed what several others have also found - NONMEM bootstrap estimates of parameter confidence intervals are not sensitive to the NONMEM termination state (especially whether or not the covariance step is successful). This result is of course only an empirical one and not based on any theory that allows one to draw general conclusions. But until someone comes up with a clear counterexample (and hopefully with an explanation of the circumstances) then I think you can pragmatically accept the bootstrap confidence intervals and get on with more interesting things :-)
Nick
--
Nick Holford, Professor Clinical Pharmacology
First World Conference on Pharmacometrics, 5-7 September 2012
Seoul, Korea http://www.go-wcop.org
Dept Pharmacology & Clinical Pharmacology, Bldg 505 Room 202D
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
tel:+64(9)923-6730 fax:+64(9)373-7090 mobile:+64(21)46 23 53
email: [email protected]
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
I would not worry about the validity of your bootstrapped CIs (and I would
include all the runs), but I think you do have to worry that your model is
seriously over-parameterized if 60% of the bootstrap runs fail to converge.
That does not mean it is a bad model - just that the data do not permit
fitting all the parameters, and you should consider fixing something, using
prior information, or adding data from more intensively sampled studies.
Best Regards
Gianluca
Gianluca Nucci, PhD
Clinical Pharmacology
Pfizer PharmaTherapeutics R&D
620 Memorial Drive,
Cambridge, MA 02139
Room # 464
Office 617-551-3525
Mobile 860-405-4824
Fax 860-686-8225
Nick
In fairness, Stu Beal advocated strongly for the position taken by Gianluca.
However, my own experience is that runs that fail to converge sometimes
(often?) converge after minor changes to the initial estimates.
Dennis
Dennis Fisher MD
P < (The "P Less Than" Company)
Phone: 1-866-PLessThan (1-866-753-7784)
Fax: 1-866-PLessThan (1-866-753-7784)
www.PLessThan.com
On May 24, 2012, at 7:51 AM, Nick Holford wrote:
> Gianluca,
>
> What is your experimental evidence to allow you to conclude that failure of
> convergence is due to over-parameterization (instead of other things, like
> too large a value of NSIG)?
>
> Nick
I see another long discussion with strong feelings on both sides. WRT Stu Beal's comments, I had the opportunity to discuss this with him once. His point (which is, to my knowledge, the only argument for a successful covariance step per se) is that a covariance matrix that is not positive definite may represent a "saddle point" in the objective function surface - with a zero 2nd derivative in some dimensions, to machine precision. This was about 25 years ago. Since then, I understand that the more modern minimization routines that NONMEM now uses pretty much preclude a saddle point. This leaves the other argument: that it isn't the covariance step per se that is useful but the information gained from it (standard errors, eigenvalues, etc.). But if we believe that the positive definiteness of a solution is in question, then the standard errors and eigenvalues may also be in question. My experience, and a little preliminary data, suggest that more modern diagnostics are better indicators of model stability and overall "goodness". It may be time to move on.

Mark Sale MD
President, Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
A carbon-neutral company
See our real time solar energy production at:
http://enlighten.enphaseenergy.com/public/systems/aSDz2458
Please let the group know if/how you resolve the problem.
Leonid
From: Ayyappa Chaturvedula [email protected]
Date: Thu, 24 May 2012 12:27:03 -0400
To: [email protected]
Subject: Re: [NMusers] Successful minimization and covariance
Thank you for the suggestions. I will work on those. It is an orally
administered drug. We have a good spread of data points to support a
two-compartment model, and prior knowledge of the drug supports this.
On May 24, 2012, at 12:21 PM, "[email protected]"
<[email protected]> wrote:
> Ayyappa,
> Since you already have bootstrap data, you may check a few things:
> 1. Take any of the runs with a successful covariance step and look at the
> RSEs. Are any exceeding, say, 60-70%? If yes, those parameters are not
> supported by the data. If all RSEs look good, this supports the validity
> of the model.
> 2. Compute the eigenvalues of the correlation matrix of the bootstrap
> parameter estimates. Roughly, values above 1000 (for the ratio of the
> maximum to the minimum of these eigenvalues) may indicate
> over-parameterization.
> 3. Create a scatter-plot matrix of the bootstrap parameter estimates (an
> N-by-N matrix of plots where plot i-j is the 1000 parameter-i values
> plotted against the 1000 parameter-j values, or some variant of this
> diagnostic). If any of the parameters are strongly correlated, you will
> immediately see it on these plots.
>
> If none of these diagnostics reveals any suspicious behavior, I would
> accept the model as is.
>
> Another place to look is outliers. A few points with unrealistic
> concentrations may lead to minimization or COV failure. In your case, this
> is not likely since many bootstrap runs do not converge.
>
> Yet another place to look is your matrix of random effects: too many
> random effects may lead to non-convergence if these effects are not
> supported by the data.
>
> Do you have oral, IV or mixed dosing? Was the model written with
> differential equations (ADVAN 6, 8, 9, 13) or as an exact solution
> (ADVAN 3-4)? Were there any covariate effects in the model, and if yes,
> was there a sufficient range of data to support these effects?
>
> Leonid
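Leonid's checks 2 and 3 lend themselves to a few lines of scripting once the bootstrap estimates are in a table. A minimal sketch in Python, using an invented near-collinear example (the parameter names and numbers are made up, not from Ayyappa's model):

```python
import numpy as np
import pandas as pd

def eigenvalue_ratio(estimates):
    """Ratio of the largest to smallest eigenvalue of the correlation
    matrix of a (runs x parameters) table of bootstrap estimates."""
    corr = np.corrcoef(np.asarray(estimates, dtype=float), rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    return eig.max() / eig.min()

# Illustrative synthetic "bootstrap" table: 1000 runs, 3 parameters,
# where V2 is almost a linear function of CL (near-collinear).
rng = np.random.default_rng(1)
cl = rng.normal(10, 1, 1000)
boot = pd.DataFrame({
    "CL": cl,
    "V1": rng.normal(50, 5, 1000),
    "V2": 3 * cl + rng.normal(0, 0.01, 1000),  # nearly collinear with CL
})
ratio = eigenvalue_ratio(boot)
print(f"max/min eigenvalue ratio: {ratio:.0f}")
if ratio > 1000:
    print("ratio above 1000 -- possible over-parameterization")
# Check 3 is one call when matplotlib is available:
# pd.plotting.scatter_matrix(boot, diagonal="hist")
```

In a real analysis the `boot` table would come from the bootstrap output (e.g. read from PsN's raw results file); the point here is only that a single near-collinear parameter pair drives the ratio far above Leonid's rule-of-thumb threshold of 1000.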
Hi Ayyappa,
Some comments and suggestions:
1) Standard diagnostics that suggest the model fits well do not say anything
about whether the parameter estimates you've obtained are reasonably accurate
or precisely estimated. I don't know whether your model is over-parameterized,
but an over-parameterized model can still be a good-fitting model.
2) Comparison of bootstrap CIs between runs that converged versus runs that
didn't versus COV step failures does not imply that the resulting bootstrap
confidence intervals are valid in terms of having the proper coverage
probabilities. The only way to assess the validity of the bootstrap CIs is
through simulation where you can know the true values of the parameters so you
can check how often the bootstrap CIs include the true values of the
parameters. This of course can be onerous to do in practice. If your model
has difficulty with convergence, has COV step failures, is sensitive to
starting values, etc., then you need to proceed with caution. To this end, it is good
practice to try and understand whether there are any potential limitations in
the design/data that may make it difficult to estimate one or more parameters
of your model. Trying simpler models may help to provide insight regarding the
limitations of your data/design especially if you find that a simpler model
fits nearly as well as your chosen model.
3) You might try re-running your bootstrap runs changing all the starting
values by 15-20% and see whether you end up with similar confidence intervals.
It seems plausible to me that if your model is over-parameterized, one or more
of the parameters may not iterate much from their starting values thus the
confidence intervals might be too small for some parameters giving the false
impression that you know the estimate fairly precisely. Again, you wouldn't
know whether the confidence intervals were too small unless you did a
simulation study to assess the coverage probabilities.
4) You could try a parametric simulation using your model and final estimates
and simulate conditional on the design of your dataset and then re-estimate the
parameters of your model on the simulated data...perhaps do this for 10
subproblems. If you don't have any major design limitations that would suggest
that your model is over-parameterized then one would expect that you won't
have any problems with convergence or COV step failures in fitting the model to
the simulated data because you wouldn't have any model misspecification since
you would be fitting the same model as you used for simulation. On the other
hand, if you do encounter convergence and COV step failures under this "ideal
setting" then you probably do need to look more closely at your data/design and
model to identify potential limitations.
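Ken's fourth check (simulate from the final model under the study design, then re-estimate) would normally be run with the NONMEM model itself, for example via PsN's sse tool. The workflow can be sketched with a deliberately toy model in Python; everything below (the one-compartment function, the "final" estimates, the sampling times, the error magnitude) is invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the simulate-then-refit check: simulate data from the
# "final" model under the study's own design, refit, and count fitting
# failures and CI coverage for one parameter.
def conc(t, cl, v):                     # one-compartment IV bolus, dose 100
    return 100.0 / v * np.exp(-cl / v * t)

true = (2.0, 20.0)                      # hypothetical "final" CL, V
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])  # sampling design
rng = np.random.default_rng(0)

failures = covered = 0
n_rep = 100
for _ in range(n_rep):
    y = conc(times, *true) + rng.normal(0, 0.05, times.size)  # additive error
    try:
        est, cov = curve_fit(conc, times, y, p0=true)
        se = np.sqrt(np.diag(cov))
        covered += abs(est[0] - true[0]) < 1.96 * se[0]  # CL CI covers truth?
    except RuntimeError:                # refit failed to converge
        failures += 1
print(f"failures: {failures}/{n_rep}, CL coverage: {covered}/{n_rep}")
```

If refits fail often, or the CIs frequently miss the simulating values, even in this no-misspecification setting, the design itself probably cannot support the model, which is exactly the conclusion Ken's point 4 is after.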
Best,
Ken
Kenneth G. Kowalski
President & CEO
A2PG - Ann Arbor Pharmacometrics Group, Inc.
110 Miller Ave., Garden Suite
Ann Arbor, MI 48104
Work: 734-274-8255
Cell: 248-207-5082
Fax: 734-913-0230
[email protected]
www.a2pg.com
Hi Ayyappa,
Were most of the unsuccessful minimizations due to an ETA variance reaching
its lower limit?
Erik
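Erik's question can be answered directly from the bootstrap results table. A sketch with an invented four-run table whose column names imitate PsN's raw_results layout (the values are made up):

```python
import pandas as pd

# Flag failed runs whose OMEGA estimates collapsed toward zero: an ETA
# variance at its boundary is a common cause of failed minimizations.
boot = pd.DataFrame({                  # hypothetical stand-in table
    "minimization_successful": [1, 0, 0, 1],
    "OMEGA.1.1.": [0.09, 1e-7, 0.08, 0.11],
    "OMEGA.2.2.": [0.04, 0.05, 2e-8, 0.03],
})
omega_cols = [c for c in boot.columns if c.startswith("OMEGA")]
near_zero = (boot[omega_cols] < 1e-5).any(axis=1)
failed = boot["minimization_successful"] == 0
print(f"{(near_zero & failed).sum()} of {failed.sum()} failed runs "
      "have an OMEGA near zero")
# prints: 2 of 2 failed runs have an OMEGA near zero
```

If most failures coincide with a near-zero OMEGA, that points to removing or fixing the offending random effect rather than to a problem with the structural model.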