Dear NM Users,
I am trying to model some POPPK data in NONMEM VI.
Sometimes I get the following message in the output file:
MINIMIZATION TERMINATED
DUE TO ROUNDING ERRORS (ERROR=134)
NO. OF FUNCTION EVALUATIONS USED: 1103
NO. OF SIG. DIGITS UNREPORTABLE
But when I change SIGDIGITS to a lower value the minimization is
successful. What exactly is happening in this case? Is there something I am
missing?
What about the parameter estimates obtained in such a run?
Another question related to this: when I bootstrap a model in Wings for
NONMEM (WFN), a few runs give the same message as above: MINIMIZATION
TERMINATED DUE TO ROUNDING ERRORS (ERROR=134) NO. OF SIG. DIGITS
UNREPORTABLE.
Does this mean that I should discard these runs from the calculations?
I have attached the control stream below:
$SUBROUTINE ADVAN6 TRANS1 TOL=3
$MODEL
COMP=(DEPOT,DEFDOSE);
COMP=(CENTRAL);PLASMA
COMP=(PERIPH);PERIPHERAL
$PK
TVF1=THETA(1)
F1=TVF1*EXP(ETA(1))
TVCL=THETA(2)
CL=TVCL*EXP(ETA(2))
TVVC=THETA(3);vol of dist of drug
VC=TVVC*EXP(ETA(3))
TVK20=THETA(4)
K20=TVK20*EXP(ETA(4))
K12=THETA(5)*EXP(ETA(5));Abso constant
K23=THETA(6)*EXP(ETA(6))
K32=THETA(7)*EXP(ETA(7))
CL=K20*VC
SC=VC; OUTPUT IN ng/ml
$ERROR
IPRED=F
IRES=DV-IPRED
DEL=0
IF (IPRED.EQ.0) DEL=1
IWRE=(1-DEL)*IRES/(IPRED+DEL)
Y=F+F*ERR(1);+ERR(2)
$DES
DADT(1)=-K12*A(1)
DADT(2)=K12*A(1)-K20*A(2)-K23*A(2)+K32*A(3)
DADT(3)=K23*A(2)-K32*A(3)
$THETA
(0.1,0.3,0.7);
(75 FIXED);CL
(0,550,1000);VC
(0.02,0.2,2);K20
(7 FIXED);K12
(0.01 ,0.1,10);K23
(0.01,0.1,10);K32
$OMEGA
(0.01);INH
(0.001);CL
(0.01);VC
(0.01);K20
(0.01);K12
(0.01);K23
(0.01);K32
$SIGMA
(0.04);
$ESTIMATION METHOD=1 SIGDIGITS=2 MAXEVAL=9999 PRINT=10 POSTHOC
$COV MATRIX=S
$TABLE ID TIME EVID CMT IPRED IWRE IRES CL VC K20 F1
NOPRINT ONEHEADER FILE=sdtabdoc5
--
--Navin
Minimization terminated? (8 messages, 7 people; latest: Jul 22, 2007)
Dear Navin,
Regarding your second question, you might want to look at a poster abstract
from PAGE 2006 by Nick Holford et al.:
http://www.page-meeting.org/default.asp?abstract=992
Regards
Nitin
Nitin Mehrotra, Ph.D.
Post Doctoral Research Fellow
874 Union Avenue, Suite 4.5p/5p
Department of Pharmaceutical Sciences
University of Tennessee Health Science Center
Memphis, TN 38163, USA
901-448-3385 (Lab)
[EMAIL PROTECTED]
Navin,
NONMEM is quite unreliable when it comes to deciding if it has converged.
Minor changes in initial estimates, with essentially no difference in the
final estimates and OBJ, can produce 1) SUCCESSFUL + COVARIANCE, 2)
SUCCESSFUL + FAILED COVARIANCE, or 3) TERMINATED.
My guess is that this is because of numerical rounding errors (not the ones
that NONMEM refers to in its error message), so that which of these outcomes
you get becomes essentially a random event. The bottom line is NOT to pay
attention to NONMEM's declarations of success but to focus on whether the
parameters make sense, whether the fits look good, and whether a VPC looks OK:
http://www.page-meeting.org/page/page2005/PAGE2005P105.pdf
and even (if you have lots of spare time) whether the npde fails to reject
the null:
http://www.page-meeting.org/pdf_assets/9146-ecomets_a4page07.pdf
Several investigations of bootstraps have shown that it makes little
difference whether you include successful runs only or all runs. The
advantage of using all runs is that it is simpler to process the results,
and perhaps the confidence intervals are more precisely estimated because
you have more runs:
http://www.cognigencorp.com/nonmem/nm/99jul152003.html
http://www.nature.com/clpt/journal/v77/n2/abs/clpt200514a.html
http://www.page-meeting.org/?abstract=992
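The bootstrap comparison described above (confidence intervals from all runs
versus successful runs only) can be sketched in a few lines of Python. The
record layout and the field names `CL` and `minimization_successful` are
hypothetical placeholders for whatever your bootstrap summary actually
contains; this is an illustrative sketch, not WFN's processing code.

```python
# Sketch: compare non-parametric bootstrap percentile CIs built from all
# runs versus only runs that NONMEM flagged as MINIMIZATION SUCCESSFUL.
# The record layout below is hypothetical, not WFN's actual output format.

def percentile_ci(values, level=0.95):
    """Percentile confidence interval from a list of bootstrap estimates."""
    s = sorted(values)
    lo = s[int((1 - level) / 2 * (len(s) - 1))]
    hi = s[int((1 + level) / 2 * (len(s) - 1))]
    return lo, hi

def compare_cis(runs, param):
    """CI for one parameter, with and without the terminated runs."""
    all_vals = [r[param] for r in runs]
    ok_vals = [r[param] for r in runs if r["minimization_successful"]]
    return {"all_runs": percentile_ci(all_vals),
            "successful_only": percentile_ci(ok_vals)}

# Toy data standing in for a real bootstrap summary table: every fourth
# replicate is marked as terminated, mimicking rounding-error failures.
runs = [{"CL": 70 + i % 11, "minimization_successful": i % 4 != 0}
        for i in range(200)]
print(compare_cis(runs, "CL"))
```

If the two intervals are close, as the cited investigations suggest they
usually are, little is lost by pooling all runs.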
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[EMAIL PROTECTED] tel:+64(9)373-7599x86730 fax:+64(9)373-7090
www.health.auckland.ac.nz/pharmacology/staff/nholford
Nick,
I'm interested in exactly what you mean by "unreliable". Is it
sensitivity/specificity for a "bad" model? I suspect that we all would
prefer that our models converge and have a successful covariance step, and
so (I think) models that pass these tests are "better" models than those
that don't (everything else being equal). But if we are unable to find a
model that passes these tests, we resort to rationalizing that it really
doesn't make any difference anyway, and so we move on. You, I, and others
have generated data that support this. On the other hand, Stuart would, I'm
pretty sure, suggest that models that fail the covariance step should not be
considered final, and would cringe at the idea of accepting as final a model
that did not converge. I'd also suggest it might be a hurdle in getting a
paper published. (I'll let the regulatory agencies speak for themselves on
this matter.) So I'd suggest that convergence and a successful covariance
step are valuable information and should not be discarded.
That said, I very much support the value of visual predictive checks and
NPDE. I'd like to add the PPC, especially if one checks both a point
estimate (AUC, Cmax, Cmin) and some measure of variability (SE of AUC,
etc.), since an artificially large variability can fool a PPC.
Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
Mark,
What I wrote:
"NONMEM is quite unreliable when it comes to deciding if it has converged."
What I meant:
"NONMEM is quite unreliable when it comes to helping me to decide if it has
converged in a consistent and meaningful way."
NONMEM is consistent in giving the same unhelpful information about whether
it believes it can claim convergence, i.e. with exactly the same
compiler+options, CPU, and NONMEM patch level it gives the same numbers
(change any of these and of course you are quite likely to get something
different).
But it is inconsistent in the real-world sense of giving me a solid feeling
that it really converged, in a way that gives me some confidence in the
results. I have cited investigations showing that one cannot have this kind
of confidence, because the parameter estimate distribution is equivalent
whether NONMEM claims to have not converged (e.g. due to rounding errors),
converged without $COV, or converged with $COV.
So I ask for SIGDIG=6 and ignore NONMEM's reported convergence status. I
typically get more than 3 significant digits on the runs that interest me,
often more than 6, and once in a while $COV is successful. But I use other
criteria to judge the suitability of the model, especially simulation-based
checks, because these are in the spirit of what I really want to use the
model for. Standard errors are just part of a historical description of a
model, with no practical relevance to predictive checks. Real-world
application of modelling implicitly or explicitly requires a prediction
from the model.
Yes, like you I like to see the run converge and $COV complete, but I also
like to see the sun shine every day. If it doesn't shine, my life goes on...
(it's raining today in Auckland). NONMEM's termination messages are as
reliable as the weather in this part of the world.
Nick
Dear NMusers,
There have been some elegant references posted to the usersnet in response
to this question. However, one question has generally gone unanswered. Navin
tells us that NONMEM says MINIMIZATION TERMINATED DUE TO ROUNDING ERRORS
(ERROR=134); then he changes SIGDIGITS to a lower value (generally NSIG=2)
and MINIMIZATION SUCCESSFUL shows up along with the STANDARD ERROR OF
ESTIMATE. This is actually one of the recommended tips in Dr. Bonate's book
for error 134. Dr. Bonate further explains: "If the rounding error is a
variance component then this is usually an acceptable solution".
Kindly note that almost always the parameter estimates are identical
between the run whose minimization terminated and the job that ran
successfully with NSIG=2. Could someone kindly explain what makes NSIG=3 so
important? As an example, is a volume estimate of 96.3 L that much better
than one of 93 L?
Please advise... Mahesh
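The precision at stake with NSIG can be made concrete with a small sketch.
NONMEM's actual criterion applies during the parameter search, but the
arithmetic idea, how many leading digits two estimates share, can be
illustrated with a relative-difference calculation. The helper below is
illustrative only, not NONMEM's internal test.

```python
import math

def matching_sig_digits(a, b):
    """Approximate number of significant digits to which a and b agree,
    based on their relative difference (illustrative only)."""
    if a == b:
        return float("inf")
    rel = abs(a - b) / max(abs(a), abs(b))
    return -math.log10(rel)

# 96.3 L vs 96.4 L agree to about 3 significant digits, roughly the
# resolution NSIG=3 requests; 96.3 L vs 93 L agree to only about 1.5,
# a far coarser difference than the NSIG=2 vs NSIG=3 debate.
print(round(matching_sig_digits(96.3, 96.4), 1))
print(round(matching_sig_digits(96.3, 93.0), 1))
```

In other words, relaxing NSIG from 3 to 2 changes the requested resolution
from about one part in a thousand to one part in a hundred; either is far
finer than the 96.3 L versus 93 L contrast in the question.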
Dear Nick, Dear Mahesh, Dear Mark,
I also noticed that SIGDIG=3 is more reliable in NONMEM VI than it was in
NONMEM V. In some bootstrap runs I did in NONMEM V, SIGDIG=5 gave lower
objective function values than SIGDIG=3 for more complex models. For simpler
and stable models, both SIGDIG choices yielded the same OBJ.
I do not know exactly what is affected in the estimation process in NONMEM
when changing SIGDIG=3 to SIGDIG=5. However, I seem to remember that the
step size of the path along which NONMEM minimizes the OBJ is larger for
SIGDIG=3 than for SIGDIG=5. A smaller step size during the estimation
process seems preferable, which might be the reason why SIGDIG=5 performed
better in the more complex bootstrap problems.
Mahesh, I would therefore tend to use SIGDIG=5, even though this is likely
to cause a non-successful termination message.
I am not sure whether a successful termination message and a successful
$COV step should be a criterion for publishing a model. As Nick wrote,
WinBUGS and Monolix do not provide such a termination message.
There is a convergence test in the latest version of S-ADAPT, and this is
helpful for the user. The convergence criterion in NPAG is based on a change
in the OBJ. However, as far as I know, nonparametric algorithms do not
provide standard errors of parameter estimates unless one bootstraps.
I also prefer to have successful convergence messages and confidence
intervals. However, some programs do not report convergence messages,
because there does not seem to be a natural way to tell whether a model
converged successfully or not.
In my opinion, the criterion for qualifying a model depends on its planned
application. Using a model for simulation purposes, or showing that two
groups of patients have a significantly different average Michaelis-Menten
constant, most likely requires different methods of model qualification.
Best regards
Juergen
-----------------------------------------------
Juergen Bulitta, PhD, Post-doctoral Fellow
Pharmacometrics, University at Buffalo, NY, USA
Phone: +1 716 645 2855 ext. 281, [EMAIL PROTECTED]
Hi
Quick note.
> I am not sure, if a successful termination message and a
> successful $COV step should be a criterion for publishing a
> model. As Nick wrote, WinBugs and Monolix do not provide such
> a termination message.
WinBUGS is not based on maximum likelihood theory and doesn't rely on
asymptotic results; hence convergence in an MLE sense is not an issue
(obviously it is much more complex than this, but this is the gist).
Convergence in an MCMC sense remains important.
Monolix is used in an MLE setting; perhaps Marc or France would want to
comment on this with respect to convergence. I believe the SAEM algorithm is
proven to converge if run long enough, but of course how long is long enough
is a question for the experts.
Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: [EMAIL PROTECTED]
P: +64 3 479 5044
F: +64 3 479 7034
Design software: www.winpopt.com