Colleagues,
I am curious about your thoughts on a particular NONMEM issue. I often find myself in a situation where a complex model does not converge to 3 significant digits ("no of digits: unreportable"), yet the objective function is markedly better than that of a previous model and graphics suggest that the model is quite good (and better than the previous one). Nick Holford has advocated (and I agree) that NONMEM's SEs have minimal utility and that the inability to calculate them is unimportant. However, I have not seen similar discussion of whether one can or should accept a model that did not converge.
The particular situation I am dealing with at the moment is that a dataset I am analyzing yielded a series of results that did not converge as I added parameters (despite an improving fit and a marked decrease in the objective function); then a yet more complicated model yielded 3.0 significant digits. In this case there is no problem (I can use this final model for bootstrap, VPC, etc.), but what if none of these models had converged?
Dennis
Dennis Fisher MD
P < (The "P Less Than" Company)
Phone: 1-866-PLessThan (1-866-753-7784)
Fax: 1-415-564-2220
www.PLessThan.com
Models that abort before convergence
Dennis,
The hypothesis that NONMEM termination messages do not indicate whether a model is fit for purpose has now been tested numerous times on simulated and real data sets, and no evidence has been found to reject it.
Look here for my initial explorations of this problem:
http://www.cognigencorp.com/nonmem/nm/99jul152003.html
Then search nmusers for "minimization terminated" using this URL:
http://www.mail-archive.com/[email protected]/
You will find several threads, including:
http://www.mail-archive.com/[email protected]/msg00451.html
In addition to the discussion and references on nmusers, you can also look at these publications, which report no difference in the conclusions drawn by using or ignoring runs that NONMEM did not report as successful:
Ahn JE, Karlsson MO, Dunne A, Ludden TM. Likelihood based approaches to handling data below the quantification limit using NONMEM VI. J Pharmacokinet Pharmacodyn. 2008;35(4):401-21.
Byon W, Fletcher CV, Brundage RC. Impact of censoring data below an arbitrary quantification limit on structural model misspecification. J Pharmacokinet Pharmacodyn. 2008;35(1):101-16.
Therefore I recommend ignoring NONMEM's conclusion about whether a run is successful and instead using more informative criteria based on a common-sense evaluation of the parameters and other priors, plus credible diagnostics such as the VPC and NPDE (a simulation sketch for a VPC follows these references):
Karlsson MO, Holford NHG. A Tutorial on Visual Predictive Checks. PAGE 17 (2008) Abstr 1434 [www.page-meeting.org/?abstract=1434].
Comets E, Brendel K, Mentré F. Computing normalised prediction distribution errors to evaluate nonlinear mixed-effect models: The npde add-on package for R. Comput Methods Programs Biomed. 2008;90(2):154-66.
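[Editor's note: a minimal sketch of a control stream that could generate the simulated replicates for a VPC, assuming a hypothetical one-compartment IV model; the data file name, parameter values, and replicate count are illustrative and not taken from any run discussed in this thread.]
$PROBLEM VPC simulation sketch (illustrative)
$DATA mydata.csv IGNORE=@                 ; hypothetical data file
$INPUT ID TIME AMT DV MDV
$SUBROUTINES ADVAN1 TRANS2
$PK
 CL = THETA(1)*EXP(ETA(1))
 V  = THETA(2)*EXP(ETA(2))
 S1 = V
$ERROR
 Y = F*(1+EPS(1))
$THETA (0,5)   ; CL - fix at the final estimates, converged or not
$THETA (0,50)  ; V
$OMEGA 0.1 0.1
$SIGMA 0.04
$SIMULATION (20081118) ONLYSIMULATION SUBPROBLEMS=500 ; 500 replicates
$TABLE ID TIME DV NOPRINT NOAPPEND FILE=vpcsim.tab
Percentiles of the simulated DV in vpcsim.tab are then compared with the observed data outside NONMEM.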
Finally, this paper reports a model that terminated with an even more severe error message ('INFINITE OBJECTIVE FUNCTION AT NEXT ITERATION'), but the model itself was clearly acceptable when judged by other, more informative criteria. It was also acceptable to peer reviewers.
Matthews I, Kirkpatrick C, Holford NHG. Quantitative justification for target concentration intervention - Parameter variability and predictive performance using population pharmacokinetic models for aminoglycosides. British Journal of Clinical Pharmacology. 2004;58(1):8-19.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[EMAIL PROTECTED] tel:+64(9)923-6730 fax:+64(9)373-7090
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
Dennis,
I do not support extreme views (from places where people walk upside down :) ) that NONMEM error messages should be ignored: they serve the useful purpose of alerting you when NONMEM is having difficulties, and they should always be part of the picture. If the data look good and the model is simple, then we need to look for the reason for the poor convergence. Sometimes it helps to use SIGDIG=5 or 6 to obtain 3 significant digits of precision (see the sketch after this message). But if you are working at the limit of the algorithm's abilities as implemented (a nonlinear model, stiff differential equations, a large range of doses and concentrations, etc.), then you face a situation where you cannot force convergence no matter how hard you try. On my recent project, none of the intermediate models converged, even though the bootstrap provided fairly narrow CIs (so it does not look like an over-parameterized model), all diagnostic plots were good, and the visual predictive check was reasonable. Then you just blame the algorithm and move on. You lose the ability to justify your covariate selection by the drop in objective function (not a good idea anyway), and you may need to provide a somewhat more detailed investigation to convince reviewers (regulatory and/or journal) that the model is adequate for its intended purpose.
Thanks
Leonid
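[Editor's note: a minimal sketch of the SIGDIG idea Leonid mentions, on a hypothetical $ESTIMATION record; the option names are standard NONMEM, the rest is illustrative.]
$ESTIMATION METHOD=1 INTERACTION MAXEVAL=9999 SIGDIGITS=6 PRINT=5
; Request 6 significant digits: even if the search stalls short of the
; request, the digits actually achieved may still be 3 or better.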
Leonid et al,
I'm a little confused by this discussion. To make an analogy: assume that drug company A has a wonderful theory that drug B will treat a disease. The theory makes sense by your favorite epistemological criteria, etc. But of course, being good scientists, we know that theories must be verified, so we do an experiment, and the data suggest that the theory is wrong. Most of us would criticize as unscientific someone who discarded the data (didn't point out flaws in the data, didn't provide opposing data, simply discounted it) in favor of continuing to believe the theory.
Why do we not apply the same standard here? Theory says that models that do not converge (or fail the covariance step) are "bad". Data (which, so far as I know, no one has found to be flawed, nor countered with opposing data) suggest that, by at least one criterion (same parameter estimates, same SD of parameter estimates), there are no important differences.
I don't disagree that failing the covariance step, or failing to converge, provides information about a model. But it doesn't seem to be informative about what we probably really care about: does the line go through the points, how confident are we in the precision of the parameters, and is the model predictive?
I'm not sure whether the small number of published examples (of bootstraps with ~500 samples) are a small number of anecdotes or a small number of trials with N ~ 500, but I've run 5 or so myself and consistently found the same thing: a successful covariance step is not informative with respect to the parameter values or their precision. I suspect others have similar experience. If there are other studies or anecdotes with different conclusions, someone should publish them. Otherwise, it seems we are obligated to abandon this theory in favor of the data.
Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
Leonid,
[Although Leonid originally wrote personally to me, he has kindly allowed me to copy his comments and add mine for nmusers to read.]
You gave me an anecdote, so let me respond with mine. My first NONMEM project involved a data set that I use today for beginners' courses in NONMEM:
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford/teaching/medsci719/populationpd/
If you use a very naive model and FO, NONMEM reports success and completes the covariance step. If you use FOCE and the model that Lewis Sheiner helped me develop (Holford et al. 1993; see the URL above for the code), the run finishes with rounding errors and no covariance step. [This is compiler dependent, which is just another thing to be aware of.]
So if I were your student without a good mentor, I might have concluded from the first run that I had a good model when in fact the model was really very poor. This is the lesson I want to teach: never take NONMEM's successful minimization and covariance step as evidence that you have a good model.
Your examples are a priori bad models (as you yourself describe them), without even looking at the data. My example is based on real data. One cannot know if a model is good or bad without more insight and meaningful diagnostics. In those days (1989) we didn't have the VPC or NPDE, but we did think about what we understood of the disease and the drug. We got past the simple "covariance step ran" stopping point and went on to explore a drug and a disease that became the central example for the learning-and-confirming philosophy (Sheiner 1997).
Best wishes,
Nick
Holford NHG, Hashimoto Y, Sheiner LB. Time and theophylline
concentration help explain the recovery of peak flow following acute
airways obstruction. Clin Pharmacokinet. 1993;25(6):506-15.
Sheiner LB. Learning versus confirming in clinical drug development.
Clinical Pharmacology & Therapeutics. 1997;61(3):275-91.
Leonid Gibiansky wrote [editor's note: a sketch of the identifiability issue in these examples appears after this message]:
> Hi Nick,
> It is nice to speak with you even via e-mail
>
> Just an example:
> I had a perfect model, no problem with the convergence and cov steps with
> CL=THETA(1)*EXP(ETA(1))
>
> Now, I tried
> CL=THETA(1)*THETA(2)*EXP(ETA(1))
> You may say that it is stupid, but this is an extreme approximation of
> CL=THETA(1)*THETA(2)**SEX*EXP(ETA(1))
> with all subjects being SEX=1
>
> I got an error message that $COV failed.
> Should I ignore it and move on, as you suggest, or look for the reasons?
>
> Then, I moved to
> CL=THETA(1)*THETA(2)*EXP(ETA(1)+ETA(2))
> Even more stupid? But this is an approximation of the situation where I
> have BOV (between-occasion variability) for a study with just one occasion.
> Now I have rounding errors.
>
> If I ignore the diagnostic, I will not discover these problems from the
> VPC, or DV vs PRED, or the OF value. To understand the problem we need
> to study the model, and the error message prompts us to do so in more
> detail.
>
> We also need to be careful whom we talk to. I was told many times that
> our nmusers posts are used to teach students, and there are a lot of
> junior people who learn on their own. I think your extreme views ( :) )
> convey the wrong message to this group. While I fully agree that you
> personally can choose to ignore the error messages and still find a
> good model, I would not advise a person with limited NONMEM experience
> to ignore the program output.
> Best wishes
> Leonid
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holford
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
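[Editor's note: a sketch of why Leonid's quoted examples fail, under his stated assumptions that all subjects have SEX=1 and one occasion each. Only the product THETA(1)*THETA(2) enters the likelihood, so infinitely many (THETA(1), THETA(2)) pairs fit equally well and $COV fails; likewise, with one occasion per subject, only the sum of the two ETA variances is estimable, which produces the rounding errors. A hypothetical identifiable reparameterization:]
; Unidentifiable for this data set (all SEX=1, single occasion):
;   CL = THETA(1)*THETA(2)**SEX*EXP(ETA(1)+ETA(2))
; Identifiable form: estimate the product as one THETA and the combined
; variability as one ETA.
 CL = THETA(1)*EXP(ETA(1))
; The SEX effect and the BOV term can be re-introduced only when the
; data contain both sexes and multiple occasions per subject.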