Models that abort before convergence Addendum

6 messages 4 people Latest: Nov 21, 2008

Re: Models that abort before convergence Addendum

From: Nick Holford Date: November 20, 2008 technical
Leonid wrote privately to Mark but Mark posted to nmusers: "We were discussing the usefulness of NONMEM error messages: what to do if $COV failed, or even if the estimation step has not converged successfully (e.g., an infinite objective function message). Nick's point is that we just ignore the error messages. My point is that the error messages prompt us to study the model, because more often than not they point to a real problem (although sometimes they need to be ignored if you are happy with the model)."

Nick replied: My point is not "ignore the error messages". It is "Do not use the termination messages as a guide to whether the model is good or bad." Note that NONMEM does not list them as ERROR messages; they are simply messages about NONMEM's view of the world when it decided to finish the estimation step. Some messages can be ignored (ROUNDING ERRORS), while others (EXCEEDED NUMBER OF FUNCTION EVALUATIONS) probably mean you should restart the model where it finished and keep going. Other messages in NM VI about boundary conditions are usually just a nuisance, but if you did happen to be asleep and did not look at your parameter estimates then they are a reminder to wake up.

Leonid suggests we use these messages to examine the model. But there is no clue in these messages as to which part of the model (or the data) should be examined, so they are worthless except as a reminder that you should be thinking about your model and data. But you MUST think about the model and the data ALWAYS! It makes no difference what termination message you get; you must continue to think (the hard part <grin>) and remember the advice of Box: "All models are wrong but some are useful". NONMEM has no idea if your model is wrong. It is always wrong, but it seems Leonid is misled into thinking it is not wrong when NONMEM says MINIMIZATION SUCCESSFUL. NONMEM especially has no idea if your model is useful.
Only you and your colleagues who want to use the results of the modelling can decide if it is useful. Usefulness can be investigated by model evaluation procedures (e.g., VPC, NPDE, etc.) but the final decision will rest with a human brain, not NONMEM's randomly generated minimization messages.

Nick

Mark Sale - Next Level Solutions wrote:
> Leonid,
> I agree with your point that failure to converge and/or of the covariance step is a prompt to study the model. I object to those who claim that a model that fails the covariance step is not useful despite data to the contrary (just went around and around with a sponsor about this; actually their stats consultant, who basically just kept insisting on the theory regardless of the data that we presented to the contrary). But I think that the messages are completely non-specific: they tell you something is less than ideal, but give no clue as to what. I suspect that graphics are likely to be much more consistently informative, telling you not only that something is less than ideal, but giving some clue what to do to fix it. As such, I'm not sure that convergence and covariance messages add anything to the process (anything that a good and thorough analyst would have known already, based on VPC, NPC, various post hoc plots, etc.).
>
> Mark
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com
> 919-846-9185
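Nick's distinction between ignorable and actionable termination messages can be sketched as a simple classifier. The status strings below are assumptions for illustration (the exact wording varies across NONMEM versions), and the labels deliberately carry no diagnosis, consistent with the point that these messages contain no clue about which part of the model or data to examine:

```python
# Illustrative status strings as they might appear in NONMEM output text.
# These exact phrasings are assumptions, not guaranteed NONMEM wording.
STATUS_PATTERNS = {
    "success": "MINIMIZATION SUCCESSFUL",
    "rounding": "DUE TO ROUNDING ERRORS",  # often ignorable
    "max_evals": "EXCEEDED NUMBER OF FUNCTION EVALUATIONS",  # restart and continue
}

def classify_termination(lst_text):
    """Label the estimation-step status message found in output text.

    The label says nothing about WHICH part of the model or data to
    examine; that judgement remains with the analyst.
    """
    for label, pattern in STATUS_PATTERNS.items():
        if pattern in lst_text:
            return label
    return "other"
```

Anything beyond this coarse triage (restart on exhausted evaluations, shrug at rounding errors) still requires looking at the estimates and diagnostics themselves.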
Quoted reply history
-------- Original Message --------
Subject: RE: [NMusers] Models that abort before convergence
From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Date: Wed, November 19, 2008 10:23 pm
To: [EMAIL PROTECTED]

Mark,
I am sorry, I simply do not understand what you are saying. I do not want to bother the group, and it could be that I am the only one who is missing your point, but could you repeat what exactly you are trying to say? We were discussing the usefulness of NONMEM error messages: what to do if $COV failed, or even if the estimation step has not converged successfully (e.g., an infinite objective function message).

Nick's point is that we just ignore the error messages. My point is that the error messages prompt us to study the model, because more often than not they point to a real problem (although sometimes they need to be ignored if you are happy with the model). What is your opinion?

Thanks
Leonid

Original Message:
From: Mark Sale - Next Level Solutions
Date: Wed, 19 Nov 2008 06:48:36 -0700
To: [email protected]
Subject: RE: [NMusers] Models that abort before convergence

Leonid et al,
I'm a little confused by this discussion. To make an analogy, assume that drug company A has a wonderful theory that drug B will treat a disease. The theory makes sense by your favorite epistemological criteria, etc. But of course, being good scientists, we know that theories must be verified, so we do an experiment, and the data suggest that the theory is wrong. Most of us would criticize as unscientific someone who discarded the data (didn't point out flaws in the data, didn't provide opposing data, simply discounted it) in favor of continuing to believe the theory.

Why do we not apply the same standards here? Theory says that models that do not converge (or fail the covariance step) are "bad". Data (that so far as I know no one has found to be flawed, nor provided opposing data) suggest that, by at least one criterion (same parameter estimates, same SD of parameter estimates), there are no important differences. I don't disagree that failing a covariance step, or failing to converge, provides information about a model. But it doesn't seem to be informative about what we probably really care about: does the line go through the points, how confident are we WRT the precision of the parameters, and is the model predictive. I'm not sure if the small number of published examples (of bootstraps with ~500 samples) are a small number of anecdotes or a small number of trials with N ~ 500, but I've run 5 or so myself and found the same to be consistently the case. That is, a successful covariance step is not informative WRT the parameter values or their precision. I suspect others have similar experience. If there are other "studies"/anecdotes with different conclusions, someone should publish them. Otherwise, it seems like we are obligated to abandon this theory in favor of the data.

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185

-------- Original Message --------
Subject: RE: [NMusers] Models that abort before convergence
From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Date: Tue, November 18, 2008 11:13 pm
To: [EMAIL PROTECTED], [email protected], [EMAIL PROTECTED]

Dennis,
I do not support extreme views (from places where people walk upside down :) ) that NONMEM error messages should be ignored: they serve the useful purpose of alerting us when NONMEM is having some difficulties, and should always be part of the picture. If the data look good and the model is simple, then we need to look for the reason for the poor convergence. Sometimes it helps to use SIGDIG=5 or 6 to get 3 significant digits of precision. But if you are working at the limit of the algorithms' abilities (as implemented): nonlinear model + stiff differential equations + large range of doses and concentrations, etc., then you face the situation where you cannot force convergence even if you try hard. On a recent project of mine, none of the intermediate models converged even though the bootstrap provided pretty narrow CIs (so it does not look like an over-parameterized model), all diagnostic plots were good, and the visual predictive check was reasonable. Then you just blame the algorithm and move on. You lose the ability to justify your covariate selection based on the objective function drop (which is not a good idea anyway), and may need to provide a little more detailed investigation to convince reviewers (regulatory and/or journal) that the model is adequate for the intended purpose.

Thanks
Leonid

Original Message:
From: Dennis Fisher
Date: Tue, 18 Nov 2008 11:21:23 -0800
To: [email protected], [EMAIL PROTECTED]
Subject: [NMusers] Models that abort before convergence

Colleagues,
I am curious as to your thoughts about a particular NONMEM issue. I often find myself in a situation where a complex model does not converge to 3 digits ("no. of digits: unreportable") yet the objective function is markedly better than a previous model's and graphics suggest that the model is quite good (and better than the previous one). Nick Holford has advocated (and I agree) that NONMEM's SEs have minimal utility and that the inability to calculate them is not important. However, I have not seen similar discussion about whether one can/should accept a model that did not converge.

The particular situation that I am dealing with at the moment is that a dataset I am analyzing yielded a series of results that did not converge as I added parameters (despite an improving fit and a marked decrease in the objective function), then yet a more complicated model yielded 3.0 significant digits. In this case, there is no problem (I can use this final model for bootstrap, VPC, etc.) but what if none of these models had converged?

Dennis

Dennis Fisher MD
P < (The "P Less Than" Company)
Phone: 1-866-PLessThan (1-866-753-7784)
Fax: 1-415-564-2220
www.PLessThan.com

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[EMAIL PROTECTED] tel:+64(9)923-6730 fax:+64(9)373-7090
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
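Leonid's observation above (narrow bootstrap CIs from models that did not formally converge) rests on the percentile bootstrap: collect the final estimates of each parameter across bootstrap runs and read off the empirical quantiles. A minimal sketch, assuming the per-run estimates have already been extracted into a list:

```python
def bootstrap_ci(estimates, level=0.95):
    """Percentile confidence interval from bootstrap parameter estimates.

    `estimates` is the list of final estimates of one parameter across
    bootstrap runs (converged or not). Uses simple index-based quantiles;
    a production analysis would use a proper quantile routine.
    """
    s = sorted(estimates)
    alpha = (1.0 - level) / 2.0
    lo = s[int(alpha * (len(s) - 1))]
    hi = s[int((1.0 - alpha) * (len(s) - 1))]
    return lo, hi
```

Whether runs flagged as non-converged should be pooled with successful ones is exactly the question debated in this thread; the interval itself is computed the same way either way.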
Re: Models that abort before convergence Addendum

From: Leonid Gibiansky Date: November 20, 2008 technical
Nick, Mark, and All,
We can argue indefinitely, but let me propose a poll. If you would like to participate, reply directly to me (use "reply", not "reply to all"). I will summarize all the replies received up to the end of November. Skip any questions that you do not wish to answer; write NA if a question is not applicable. Summaries will be blinded.

1. Would you like NONMEM to stop producing all run-time (not syntax) error/warning messages (134, 137, number of significant digits, etc.) and "MINIMIZATION SUCCESSFUL" messages (YES/NO):
2. Do you remember at least one example where a run-time error message helped you to find an error in your code (YES/NO):
3. In your experience, run-time error messages allow you to detect model errors or problems more quickly than you would without them (agree/disagree):
4. Have you ever used in your report/publication ANY model that did not have the $COV step completed (YES/NO):
5. Have you ever used in your report/publication ANY model that did not converge (YES/NO):
6. Have you ever used in your report/publication a FINAL model that did not have the $COV step completed (YES/NO):
7. Have you ever used in your report/publication a FINAL model that did not converge (YES/NO):
8. Define yourself as a novice/intermediate/experienced NONMEM user:

Thanks
Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566

RE: Models that abort before convergence Addendum

From: Mark Sale Date: November 21, 2008 technical
Leonid,
Let me understand: you now have a theory that the way to determine whether the NONMEM error messages are useful (i.e., they tell you something about the model's "goodness") is a poll. This, I think, is a theory (and one well established in epistemology) of how to find an optimal solution: appeal to a large number of presumably well-informed people. As data that may be relevant to this theory, I would point out that a poll gave us GW Bush as our 43rd president. Nick, in contrast, has suggested that the error messages could be used as a source of random numbers. This also, I think, is a theory without data to support or contradict it.

So... let me propose a solution: let's generate some data. Suppose we randomly generate 1000 models. We could test the hypotheses:

Are the error messages random? (I suspect they are not, that there is some information in them.) To test this, see if the error messages are predictive of other (presumably non-random) measures of goodness; NPC and NPDE, and perhaps PPC, come to mind.

Do the error messages provide information not readily available in NPC, NPDE, and PPC? I'm not really sure how to test this without some "gold standard" of goodness, except perhaps to compare the different measures to the model that was used to simulate the data (it seems like measures based on that would be "correct" in some way?). I need some ideas on this.

I can generate, run, and extract results from random models (using the GA software); I already have NPDE and PPC in it, and was thinking of adding NPC. Any interest/collaborators?

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
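The first hypothesis Mark proposes (are the status messages predictive of an independent goodness measure?) reduces to a test of association. A hedged sketch, assuming each generated model yields a binary minimization status and some goodness score such as a global NPDE p-value (the record format and the 0.05 cutoff are assumptions for illustration):

```python
def contingency(records, p_cut=0.05):
    """2x2 table: minimization success/failure vs. goodness pass/fail.

    `records` is an iterable of (success: bool, goodness_p: float) pairs,
    one per randomly generated model.
    """
    table = {(s, g): 0 for s in (True, False) for g in (True, False)}
    for success, goodness_p in records:
        table[(success, goodness_p >= p_cut)] += 1
    return table

def chi2_2x2(table):
    """Plain chi-square statistic for a 2x2 table (no continuity correction).

    A value near zero is consistent with the messages being random with
    respect to model goodness; a large value suggests real information.
    """
    a = table[(True, True)]; b = table[(True, False)]
    c = table[(False, True)]; d = table[(False, False)]
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom
```

The second hypothesis (do the messages add information beyond NPC/NPDE/PPC?) is harder, as Mark notes, because it needs a gold standard such as the simulating model.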
RE: Models that abort before convergence Addendum

From: Ken Kowalski Date: November 21, 2008 technical
Leonid,
I have never reported out as a final model a run that failed to converge or failed the COV step. My guess is that individuals who frequently do probably tend to be more mechanistic in their model building than I am, and often push the complexity of their models beyond what can be supported by the data in hand. For those that do report out models that don't converge, I wonder if they have tried re-running their models with different starting values (15-20% different) to see whether NONMEM terminates at the same set of parameter estimates. My guess is that in many cases it won't, although both sets of estimates may appear "reasonable" and give similar fits and VPCs.

For individuals who have strong prior beliefs about their mechanistic models, my thinking is that rather than using approximate maximum likelihood methods and ignoring the diagnostics that might suggest their model is unstable or not fully supported by the data, they would be better served by using a Bayesian approach. That way they can be explicit about the strength of their priors and they don't have to worry about convergence and COV step failures. JMHO.

Ken

Kenneth G. Kowalski
President & CEO
A2PG - Ann Arbor Pharmacometrics Group, Inc.
110 E. Miller Ave., Garden Suite
Ann Arbor, MI 48104
Work: 734-274-8255
Cell: 248-207-5082
Fax: 734-913-0230
[EMAIL PROTECTED]
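Ken's suggested check (re-run from starting values perturbed by 15-20% and compare the final estimates) can be sketched as follows. The `run` callable is a hypothetical placeholder for whatever launches an estimation and returns the final THETA estimates; the tolerance is an assumption:

```python
import random

def perturb(theta, frac=0.15, rng=random):
    """Perturb each initial estimate by up to +/-frac (Ken suggests 15-20%)."""
    return [t * (1.0 + rng.uniform(-frac, frac)) for t in theta]

def stable(run, theta0, n=5, rtol=0.10):
    """Re-run an estimation from several perturbed starts.

    `run` is a hypothetical callable: initial estimates in, final
    estimates out. Returns True if all restarts agree with the
    reference fit to within relative tolerance `rtol`.
    """
    ref = run(theta0)
    for _ in range(n):
        est = run(perturb(theta0))
        if any(abs(e - r) > rtol * abs(r) for e, r in zip(est, ref)):
            return False
    return True
```

If restarts land on materially different estimates that nonetheless give similar fits and VPCs, that is the instability Ken is warning about.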
Quoted reply history
-----Original Message-----
From: Leonid Gibiansky
Sent: Friday, November 21, 2008 3:53 PM
To: Mark Sale - Next Level Solutions
Cc: nmusers
Subject: Re: [NMusers] Models that abort before convergence Addendum

Mark,
"Useful" is a relative and subjective term. Error messages and convergence information are useful to me (i.e., they make my search for the final model more efficient), and I'd like to understand whether they are useful to other people. I do not try to prove that a model completed without error messages is correct, or that a model completed with rounding errors is wrong, or whether the error messages provide information not readily available in NPC, NPDE, and PPC. I am interested to see how many people find them useful: full stop here; do not try to interpret the poll beyond this simple statement. In addition, questions 4-7 will help us to understand how widespread is the use of models with a failed covariance step and/or a failed minimization step.

Thanks
Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566

Re: Models that abort before convergence Addendum

From: Nick Holford Date: November 21, 2008 technical
Leonid,
I don't know what you hope to achieve with your survey. I cannot identify a clear objective that can be reached by analysis of the results; e.g., your first question is a multi-part beast which cannot be answered with just one YES/NO response.

Mark,
Like Leonid, you talk about error messages from NONMEM. If you get an error message from NONMEM you do not get any results; NONMEM stops running. Most of the time you will get a message labelled "ERROR" or sometimes labelled "PROGRAM TERMINATED". When NONMEM detects an error there is no sensible way to relate this to a model result. (I am ignoring dredging into the INTER file, which will be deleted anyway unless you meddle with the NONMEM source.)

My proposal about random messages relates not to error messages but to status messages, which are issued when NONMEM finishes the estimation step or the covariance step. At this stage there will always be parameter estimates ("the results"). PLEASE NOTICE THE DIFFERENCE BETWEEN AN ERROR MESSAGE AND A STATUS MESSAGE. Status messages indicate either success or failure. There may be other distractions such as boundary issues, but I am only referring to the binary-valued success/failure messages.

I have documented in this thread the efforts of several groups (and you have recently indicated similar experiences) showing that the posterior parameter distributions (and, in one study, the model choice decisions) obtained by parametric or non-parametric bootstraps are not different in any important way when classified by the estimation and covariance status messages. Thus these posterior distributions of parameters are not associated consistently with these status messages. It seems plausible therefore to propose that the messages themselves are not related to the parameters but are instead triggered by a random process.
My post hoc justification for the lack of useful association between status messages and parameter estimates (which has now been confirmed many times) is as follows: because minimization and covariance success calculations depend on the results of finite-precision floating-point arithmetic, NONMEM's 'decision' can depend on pseudo-random insignificant bits. This kind of pseudo-random behaviour is more likely when one is pushing the model to reveal the secrets in the data and the numerical methods are most stressed. Simple test cases that do not push the model with the data can be bootstrapped and produce 'success' practically every time. But a real learning-type analysis that explores the data will commonly hover over the yes/no decision boundary, and thus success becomes a random event which does not signify anything important about the parameter estimates.

You propose further experiments with other endpoints, e.g. NPDE of observations instead of comparing posterior distributions of parameters. I look forward to your results. Thank you for attributing the idea of using NONMEM status messages as a source of random numbers to me. This is not my suggestion, so please go ahead and patent it yourself :-)

Ken,
You describe me well :-) I am indeed a mechanistic modeller. However, I am an atheist in terms of statistical delusional systems (Bayesian, frequentist, etc.) so I don't really worry about stating my priors in a formal way. As I failed in maths and never took a statistics paper, I like to justify the model choice based on biological priors, which is why I always include weight as a covariate for clearance and volume. If run times permit then I prefer to explore the parameter uncertainty with a bootstrap or likelihood profile rather than playing around with initial estimates to randomly end up with minimisation success and asymptotic standard errors.
Best wishes,
Nick

Mark Sale - Next Level Solutions wrote:
> Ken,
> Thanks for your comments, and I think your observation about how mechanistic (vs. statistically rigorous) the analysts' views are is really critical. Clearly Lewis (and at the risk of speaking for him, I think Nick perhaps) has strong views about this. Conversely, I have heard many times (and am sympathetic to) the views of some very smart statisticians. So, I suspect we won't resolve this by debating now any more than we have over the past 20 years of debating it. So, I once again propose generating some actual data, which I continue to believe is better than a two-decade-long debate about theory.
>
> Mark
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com
> 919-846-9185
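Nick's claim that bootstrap posterior distributions do not differ when classified by the success/failure status message can be quantified with a two-sample statistic on each parameter. A minimal sketch using the Kolmogorov-Smirnov distance (the inputs are hypothetical lists of one parameter's bootstrap estimates, split by status message):

```python
import bisect

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical CDFs.

    A value near zero means the estimates from 'successful' and
    'failed' runs are distributed essentially alike, which is the
    pattern Nick says the bootstrap studies report.
    """
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for g in sorted(set(xs) | set(ys)):
        fx = bisect.bisect_right(xs, g) / len(xs)
        fy = bisect.bisect_right(ys, g) / len(ys)
        d = max(d, abs(fx - fy))
    return d
```

Applied per parameter across the success/failure split, consistently small values would support treating the status message as uninformative about the estimates.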
Skip > > the questions that you do not like to answer, write NA if the > question > > is not applicable. Summaries will be blinded. > > > > 1. Would you like Nonmem to stop producing all run-time (not syntax) > > error/warning messages (134, 137, number of significant digits, etc.) > > and "MINIMIZATION SUCCESSFUL" messages (YES/NO): > > > > 2. Do you remember at least one example when the run-time error > message > > helped you to find an error in your code (YES/NO): > > > > 3. In your experience, run-time error messages allow you to detect > > model > > errors or problems quicker than it would be done without error > > messages: > > (agree/disagree) > > > > 4. Have you ever used in your report/publication ANY model that > did not > > have $COV step completed (YES/NO): > > > > 5. Have you ever used in your report/publication ANY model that > did not > > converge (YES/NO): > > > > 6. Have you ever used in your report/publication FINAL model that did > > not have $COV step completed (YES/NO): > > > > 7. Have you ever used in your report/publication FINAL model that did > > not converge (YES/NO): > > > > 8. Define yourself as novice/intermediate/experienced Nonmem user: > > > > Thanks > > Leonid > > > > -------------------------------------- > > Leonid Gibiansky, Ph.D. > > President, QuantPharm LLC > > web: www.quantpharm.com http://www.quantpharm.com > http://www.quantpharm.com > > e-mail: LGibiansky at quantpharm.com > > tel: (301) 767 5566 > > > > -- Nick Holford, Dept Pharmacology & Clinical Pharmacology University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand [EMAIL PROTECTED] tel:+64(9)923-6730 fax:+64(9)373-7090 http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
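Ken's stability check above (re-run with starting values jittered by 15-20% and see whether the two runs end at the same estimates) can be scripted around any estimation tool. The following is a minimal sketch of the bookkeeping only; `perturb`, `same_minimum`, and the commented-out `run_model` are hypothetical names, and the NONMEM execution itself is not shown:

```python
import random

def perturb(thetas, frac=0.15, rng=random):
    """Jitter each initial estimate by up to +/- frac (Ken's 15-20%)."""
    return [t * (1.0 + rng.uniform(-frac, frac)) for t in thetas]

def same_minimum(est_a, est_b, rtol=0.05):
    """Crude check: did two runs end at (numerically) the same estimates?"""
    return all(
        abs(a - b) <= rtol * max(abs(a), abs(b))
        for a, b in zip(est_a, est_b)
    )

# Typical use (run_model is a hypothetical wrapper around a NONMEM run):
# base = run_model(initial_thetas)
# redo = run_model(perturb(initial_thetas, frac=0.20))
# if not same_minimum(base, redo):
#     print("Runs disagree: the estimates are not a stable minimum.")
```

If the perturbed run lands somewhere else yet gives a similar fit and VPC, that is exactly the instability Ken describes.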
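Mark's proposed experiment amounts to testing for association, across many simulated models, between a run's termination status and an independent goodness measure (NPC/NPDE/PPC). One way the results could be scored is a 2x2 chi-square statistic; this is a sketch of that scoring step only (the model simulation and the diagnostics themselves are not shown, and the row/column labels in the comment are illustrative):

```python
def chi2_2x2(a, b, c, d):
    """Chi-square statistic (no continuity correction) for the 2x2 table
    [[a, b], [c, d]] of run counts, e.g. rows = MINIMIZATION SUCCESSFUL
    yes/no, columns = NPDE diagnostic acceptable yes/no."""
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

# Under Nick's "random numbers" view the statistic should stay near its
# null distribution (1 degree of freedom); a large value would suggest
# the termination messages do carry information about model goodness.
```

Comparing this statistic against the chi-square reference distribution would give the formal test Mark asks for.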