From: "Justin Wilkins"
Subject: [NMusers] Describing variability
Date:Thu, 27 Mar 2003 15:10:33 +0200
Hi all,
I'm working on the pharmacokinetics of rifampicin in different
populations (3 sets of patients from different sites and times, and one
group of healthy volunteers) with a view to describing the extent to
which those populations differ from one another.
This means I'll have to focus on IIV and IOV components in my analysis,
rather than simple PK parameters. Does anyone have any suggestions about
how to approach this in practice? I'm using the richest patient group
as a starting point for model building in NONMEM.
If you reply, please note that I'm a relative beginner in population PK!
Best regards
Justin Wilkins
Tuberculosis Research Unit
Division of Pharmacology
Department of Medicine
Faculty of Health Sciences
University of Cape Town
---------------------------
K45 Old Main Building
Groote Schuur Hospital
Observatory 7925
South Africa
Tel: +27 21 406 6659
Fax: +27 21 448 1989
Email: jwilkins@uctgsh1.uct.ac.za
http://www.uct.ac.za/depts/pha
From:"Bhattaram, Atul"
Subject:RE: [NMusers] Describing variability
Date:Thu, 27 Mar 2003 08:50:00 -0500
Hello Justin Wilkins
You can combine all the information from the different studies and analyse it with one model.
Since you say you have "rich" data you can use FOCE or FOCE+INTERACTION. I would look at
histograms of the PK parameters to see whether the two groups (healthy and patients) differ.
Then add the interoccasion variability (IOV) and check the variability estimates. Proceeding
stepwise will always help you to judge the importance of each step in model building.
Venkatesh Atul Bhattaram
CDER, FDA.
From: Nick Holford
Subject:Re: [NMusers] Describing variability
Date:Fri, 28 Mar 2003 10:12:27 +1200
Justin, Atul,
I would suggest you always add between-occasion variability to your model before searching for
fixed-effect covariates (e.g. healthy vs patient).
The seminal paper by Karlsson & Sheiner pointed out "Our simulations show that neglecting IOV can
cause significant bias in any of the fixed-effect population parameter estimates".
Karlsson MO, Sheiner LB. The importance of modeling interoccasion variability in population
pharmacokinetic analyses. Journal of Pharmacokinetics & Biopharmaceutics 1993; 21(6):735-50.
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
From: "Justin Wilkins"
Subject:RE: [NMusers] Describing variability
Date:Mon, 31 Mar 2003 10:47:14 +0200
Dear Nick, Atul, and NMusers...
Thanks for the feedback. I have incorporated the approach used in the Karlsson & Sheiner
paper. Some questions arising from what you've suggested:
1) Why would FOCE be better? It's worth pointing out that one group of
patients was a large cohort sampled sparsely (3x daily, at random times) on
multiple occasions - would this rock the boat, so to speak? When running a
FOCE analysis on the rich patient group, it takes a great deal longer and
invariably generates errors (MINIMIZATION TERMINATED DUE TO PROXIMITY OF
LAST ITERATION EST. TO A VALUE AT WHICH THE OBJ. FUNC. IS INFINITE
(ERROR=136)), even when using a higher value for SIGDIGITS, adding the
conditional statements suggested by Alison Boeckmann on this list to $PK,
and adjusting NSIZES and TSIZES. Also, the parameter estimates generated
for CL, V and KA are markedly larger than those generated by a plain FO run.
2) Different sampling strategies were used across the pooled groups, since much of the
data is retrospective and was not collected specifically for this study.
How should I specify occasions, considering that (all told) there are about
15 different sampling dates across the groups?
3) Finally, why use the SAME constraint in the ETA initial estimates for
all but the first occasion? The Karlsson & Sheiner paper wasn't clear on
that point, and it seems to suggest that the assumption is being made
that IOV is constant across all occasions.
Thanks for the help so far!
Justin
From:"Bhattaram, Atul"
Subject:RE: [NMusers] Describing variability
Date:Mon, 31 Mar 2003 11:57:35 -0500
Hello Justin
Reducing the model's dimensionality will help in this situation. The more
information you have from different occasions, the more reliably you will be able
to estimate the IOV terms. See the link below for Dr Holford's reply to an earlier
IOV question.
http://www.cognigencorp.com/nonmem/nm/94nov171999.html
Venkatesh Atul Bhattaram
CDER, FDA.
From: Nick Holford
Subject: Re: [NMusers] Describing variability
Date:Tue, 01 Apr 2003 07:12:23 +1200
Justin,
Your question about using SAME with BOV is a good one. It makes the assumption that BOV is
constant across all occasions, but you need to understand exactly what is the same. It is NOT the
value of ETA but the value of OMEGA, i.e. the variance of the distribution from which ETA is sampled
randomly on each occasion. So on each occasion a new ETA is used, but it comes from the same
distribution as the ETAs on the other occasions.
If you choose not to use the SAME option but instead specify a different OMEGA for each occasion,
you will still get a different ETA for each occasion, but you might, for example, see more
variability in the ETAs on the 2nd occasion than on the first because OMEGA is bigger for OCC=2
than for OCC=1.
I find it hard to think of a situation where you would assume that the size of the random
variability varied from occasion to occasion. Remember you are assuming that the average of the
random occasion effects is zero. If you think there is a systematic change, so that the average
value of the parameter changes with occasion, then you should code this as a function of THETA
and OCC.
I have done some limited testing of estimating BOV with and without SAME. I could find no real
difference in the results when the data was simulated with SAME. The main difference is that you
have extra OMEGA parameters to estimate and run times will be longer. So the bottom line is use
the SAME option unless you can think of a good reason not to.
The definition of occasion is a personal choice. I like to think that CL may vary from dose to
dose so I choose each new dose interval with one or more conc measurements as an occasion.
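To make that concrete, here is a minimal sketch of one way to code BOV with SAME (not from any
particular analysis; the OCC data item, the two-occasion split, and the initial estimates are
only illustrative):

  $PK
    BOV = ETA(2)                    ; occasion 1 draw
    IF (OCC.EQ.2) BOV = ETA(3)      ; occasion 2: a new ETA from the same distribution
    CL = THETA(1)*EXP(ETA(1)+BOV)   ; BSV plus BOV on clearance
    V  = THETA(2)*EXP(ETA(4))
  $OMEGA 0.1                        ; BSV on CL
  $OMEGA BLOCK(1) 0.05              ; BOV variance, occasion 1
  $OMEGA BLOCK(1) SAME              ; occasion 2: same variance as occasion 1
  $OMEGA 0.1                        ; BSV on V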
Why use FOCE? Because it is a better method. FO is quick and dirty. You may be lucky and the
results may be the same as FOCE, but if they differ then the FOCE results are more likely to be a
better reflection of reality. In my experience FO produces very much larger estimates of OMEGA
than FOCE. I do not trust FO. I do not worry too much about convergence as long as the graphical
fits look good and the parameter estimates are reasonable in a mechanistic sense. Remember that
all the published work comparing FO and FOCE has had to rely on simulations with well-behaved
distributions and, in all cases I know of, simple models. Real data is often quite different. I put
my faith in the theoretical expectation that FOCE is intrinsically a better algorithm rather than
rely on some simple simulations showing that FO and FOCE don't seem to be very different.
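For concreteness, the difference is just the $EST record (a sketch; MAXEVAL is illustrative,
and a run uses one $EST or the other):

  $EST METHOD=0 MAXEVAL=9999               ; FO: fast first-order approximation
  $EST METHOD=1 INTERACTION MAXEVAL=9999   ; FOCE with interaction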
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
From: VPIOTROV@PRDBE.jnj.com
Subject: RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 13:36:05 +0200
Nick,
Did I understand you correctly that you accept FOCE results even if the run stops due to rounding errors,
which are so common in NONMEM V (hopefully this will be fixed somehow in NONMEM VI)?
The same question to other nmusers: how do you cope with the annoying rounding errors associated with FOCE,
which are especially common with dense data?
Best regards,
Vladimir
-----------------------------------------------------------------
Vladimir Piotrovsky, Ph.D.
Research Fellow, Advanced PK-PD Modeling & Simulation
Global Clinical Pharmacokinetics and Clinical Pharmacology
Johnson & Johnson Pharmaceutical Research & Development
Turnhoutseweg 30
B-2340 Beerse
Belgium
Tel: (+3214) 605463
Fax: (+3214) 605834
Email: vpiotrov@prdbe.jnj.com
From: Leonid Gibiansky
Subject: RE: [NMusers] Describing variability
Date:Tue, 01 Apr 2003 07:44:28 -0500
Vladimir,
I usually increase the precision until I get 3 significant digits in the
output, and am then happy with it. In one recent project I found that
changing the ADVAN routine helped to achieve convergence (even though the
results were nearly identical). That said, you have to be sure that the
results are independent of the precision. In one project I faced a
situation where the objective function fluctuated widely depending on
the requested precision. That is not good; you would not want to accept it,
and should try to change the model, etc. (In that particular case,
log-transformation solved the problem.)
Regards,
Leonid
From:"Diane R Mould"
Subject: RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 08:03:29 -0500
Dear Vladimir
I also use the same criteria that Nick mentions below. In some cases, getting a model to converge
with the $COV step is not possible although the fits may be quite good and the parameter estimates
quite reasonable. In such cases, I may accept the model regardless of rounding errors. While it is
true that the errors suggest some instability, it may be the best one can do given the data. Again,
in such cases, I do the best I can to test the model or, as Leonid suggests, try other tricks such as
re-parameterizing the model, running with higher significant digits and then restarting the analysis
with (hopefully) better initial estimates, changing ADVANs or TOL, etc., but these are not always
completely successful.
In most cases, these analyses are meant to suggest plausible trends in the data rather than to
determine the absolute truth. I look at NONMEM as a good tool for detecting such trends (hypothesis
generating, if you will) rather than a tool for testing hypotheses.
Diane
From: "Bachman, William"
Subject:RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 08:59:27 -0500
As Diane suggests, you can get an acceptable fit without getting the $COV step to run successfully. $COV is a
bonus if you can get it; in some cases the data simply could have been better than what you've got to work with.
I don't think Nick meant to imply that he would use a run with rounding errors regardless of the number of significant
digits (e.g. significant digits not reported), and Leonid's criterion of 3 digits may be too strict. In some cases 2 digits
is adequate. It's a judgement call.
I also think dismissing FO as quick and dirty is a little over the top. It actually does a remarkably good job
for sparse data in cases where you can't even get FOCE to converge. At the risk of sounding like a company stooge, we
need to keep in mind what a daunting problem the nonlinear mixed-effect modeling of clinical trial data is! That
being said, there is of course room for improvement. The reason I even bring it up is that I get the impression that
some people may be writing these opinions down as "RULES WRITTEN IN STONE". The judgement calls and opinions
are what make modeling interesting for me. When it becomes all clearly defined or rule-driven, I'll go
do something else!
Bill
From:"Sam Liao"
Subject: RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 10:42:05 -0500
I agree with Bill's comments concerning $COV.
In NONMEM analyses, I often have to try different SIG values for the same PKPD model to find the one that runs
$COV successfully. One wish I have for the next version of NONMEM is to have this search built in as an option.
Best regards,
Sam Liao, Ph.D.
PharMax Research
20 Second Street,
PO Box 1809,
Jersey City, NJ 07302
phone: 201-7983202
efax: 1-720-2946783
From:"Kowalski, Ken"
Subject: RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 11:04:04 -0500
All,
A successful $COV step is not a bonus. $COV step failures and convergence problems (i.e., rounding errors)
are indicative of some form of ill-conditioning or over-parameterization of the model. Granted, such over-
parameterized models may indeed fit the data well and sometimes they may result in reasonable estimates but
that is not guaranteed. Moreover, such estimates are probably not unique...change the starting values by 10%
and you'll probably end up with a different set of estimates that fit the data equally well. One should be extra
cautious in interpreting the parameter estimates and using the model for extrapolation when such instability arises.
Use of such over-parameterized models for inference should probably be supported by simulation studies
on a case-by-case basis.
The frequentist-based methods in NONMEM V rely solely on the data in hand to estimate the parameters in the
model. If one is fitting a complex model that is not completely supported by the data in hand we have
two basic choices:
1) Reduce the complexity of the model to remove ill-conditioning while still providing a good fit to the data, or
2) Make use of additional information regarding the complex model based on prior data and/or beliefs.
The second option is basically to take a Bayesian approach. If one has a lot of confidence that the complex
model is the correct one and the data is consistent with this model but not rich enough to estimate all the
parameters (that's what the rounding errors and $COV step failures are indicating) then one should explicitly make
use of the confidence in this information. If one has a lot of confidence in the value of one or more parameters
that are not well estimated with the existing data then consider fixing it to that value to remove the ill-conditioning.
This can be done more formally taking into account uncertainty in one's prior beliefs by using a Bayesian approach.
Thinking of successful convergence and $COV steps as a luxury (i.e., nice to have but not necessary) is not a good
practice. If you tend to build complex models that exceed the information content of the data but you KNOW your
model is right based on the science, then use a more appropriate tool that incorporates this knowledge. To fit
the complex model using a frequentist-based method without incorporating your prior knowledge and 'pretending'
that the data can accurately and precisely estimate all of the parameters is risky.
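In NM-TRAN terms, fixing a parameter to a prior estimate is as simple as (the value is
illustrative):

  $THETA (150 FIX)   ; e.g. ED50 fixed to a prior estimate rather than estimated from weak data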
JMHO
Ken
From:"Bachman, William"
Subject:RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 11:23:16 -0500
Ken,
While those are all certainly good suggestions (and I highly recommend them), there are still some relatively simple
models (read: not over-parameterized) where you won't get a successful $COV (e.g. when sampling is limited and
there is just no way you're going to get any more or better data, as in pediatric studies).
Should you not use the model for any purpose? I don't think so. It may still be adequate for descriptive purposes
or planning of further studies. $COV is a bonus in that it gives you added confidence that you have not found a local
minimum (as well as estimates of the standard errors, etc). If the situation warrants, certainly take a Bayesian
approach or do extensive simulation studies, but I don't think that's ALWAYS necessary, do you? You have implied
that I don't think successful convergence or $COV is ever needed or desired. The point I'm trying to make is that some
sort of balanced approach can be taken and sometimes, you have to "go with what you got."
Bill
From:"Diane R Mould"
Subject: RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 11:43:26 -0500
Hi again
While I think that most of us would agree with Ken's comments that failing to obtain a covariance step is an
indication of a problem with the model (yes, over-parameterization is typically the culprit), I also think some
attention should be paid to the intended use of the model and the stage of development that one is in when
this happens. Perhaps the 'learning versus confirming' aspect should be applied here as well. If the drug is
in the final stages of development and one is attempting to assure that proposed dose regimens will provide
safe and efficacious coverage then I would be very unhappy to accept a PK model that had these sorts of
problems. However, if I were in Phase II and the model was intended as a guideline for possible dose
adjustments (which presumably would be tested in a protocol), then minor issues would be of slightly less
concern. I think that tests such as altering the initial estimates to evaluate the effect on the results
are something that all of us try, and that gross instabilities such as those mentioned below are of course even
greater cause for concern.
In addition, the type of model one is dealing with has to be considered as well. It's rare that I can't get a $COV
step with a PK model, but conversely it is often difficult to get one with PKPD models - particularly complex ones
involving disease progression. Long run times further complicate the matter, and the relative importance of obtaining
standard errors for such a model may be quite minor.
It's difficult to formulate suggestions based on such broad generalities, but we do need to keep the use of the
model in mind when making such decisions. However, I do agree with Bill that throwing out a potentially useful
model when a $COV step fails seems inappropriate. I don't think it's reasonable to ignore what has been learned
through model development simply because the $COV step fails, although I would always be happier if it succeeded.
Thanks
Diane
From:Leonid Gibiansky
Subject:RE: [NMusers] Describing variability
Date: Tue, 01 Apr 2003 12:11:44 -0500
Just to make it more specific:
We may need to distinguish between the situations where
1. the $EST step fails (due to rounding errors), and
2. $EST converges but the $COV step fails.
I think (1) is more dangerous and should not be mixed up with (2). We can
skip the $COV step for a number of reasons (e.g., long run time), but it would
be best to get $EST convergence, if possible. Still, if we get rounding
errors and cannot get the desired precision, we may accept the run if the number of
significant digits is 3 (well, maybe 2) and the parameter estimates are
independent of the requested precision.
It would be nice to have two NONMEM input parameters: the precision of the
calculations and the precision criterion for stopping. This would help avoid
chasing one's tail, where the achieved precision is always 0.5 digits less than
the requested one no matter how high (or how low) you set SIGDIGITS.
Leonid
From:"Kowalski, Ken"
Subject: RE: [NMusers] Describing variability
Date: Tue, 1 Apr 2003 12:15:00 -0500
Bill,
My comments are imbedded below.
Ken
-----Original Message-----
From: Bachman, William [mailto:bachmanw@globomax.com]
Sent: Tuesday, April 01, 2003 11:23 AM
To: 'Kowalski, Ken'; Bachman, William; 'Diane R Mould'; VPIOTROV@PRDBE.jnj.com;
n.holford@auckland.ac.nz; nmusers@globomaxnm.com
Subject: RE: [NMusers] Describing variability
Ken,
While those are all certainly good suggestions (and I highly recommend them), there are still some relatively
simple models (read: not over-parameterized) where you won't get a successful $COV (e.g. when sampling is
limited and there is just no way you're going to get any more or better data, as in pediatric studies).
[Kowalski, Ken] An over-parameterized model arises when the data cannot support estimating all of
the parameters regardless of the reason. In your example above, the over-parameterization is a result of
the limitations of the design. An over-parameterized model may be considered a simple model with the right
set of data, but with a limited set of data it can be overly complex. For example, a dose-response might be correctly
described by a simple Emax model; however, if we only test doses in a narrow range, say in the linear range of
the dose-response, there may be infinitely many combinations of Emax and ED50 estimates that provide a good
fit to the dose-response. Certainly as a descriptive summary of the dose-response the over-parameterized model
fit may be fine, but I would be extremely cautious in using these estimates to guide dose selection for a future
study, particularly if I was planning to extrapolate to higher doses.
Should you not use the model for any purpose? I don't think so. It may still be adequate for descriptive purposes
[Kowalski, Ken] Agreed, see comment above.
or planning of further studies.
[Kowalski, Ken] Using over-parameterized models for planning further studies should be done cautiously, recognizing
the limitations of the parameter estimates and the problems in using the model to extrapolate.
$COV is a bonus in that it gives you added confidence that you have not found a local minimum (as well as estimates
of the standard errors, etc). If the situation warrants, certainly take a Bayesian approach or do extensive
simulation studies, but I don't think that's ALWAYS necessary, do you? You have implied that I don't think
successful convergence or $COV is ever needed or desired. The point I'm trying to make is that some sort of
balanced approach can be taken and sometimes, you have to "go with what you got."
[Kowalski, Ken] I wouldn't put it that way. A bonus makes it sound like we don't need to strive for
stable models. To the contrary, that should be the norm. I do recognize that desperate times may call for desperate
measures; I'm just concerned that we're sending the wrong message, one that trivializes the importance of
convergence and the $COV step.
Bill
From:"Diane R Mould"
Subject:RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 12:52:29 -0500
Leonid
Amen! I agree that the big item is that the $EST step is successful (no
rounding errors) and that the $COV step is nice to see. However, to add to
the confusion, I am presently modeling data that ran with $COV under one
compiler with a Pentium III processor, and the $COV fails under a different
compiler with a Pentium IV processor. What does that mean? ;-)
Diane
From: "Kowalski, Ken"
Subject:RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 13:00:26 -0500
Diane,
I'm pretty much in agreement with your comments below, and it is certainly a good point that
we need to keep in mind the intended use of the model. But I'm sure we both can dream up
situations where we would not want to use a severely over-parameterized PK/PD model to
design a future study even when we are in a learning mode. Moreover, if we have learned through
model development, then let's make use of those estimates and not just the form of the model
when we fit a new set of data that may have limited information to estimate all the parameters.
If a Bayesian approach is too difficult, why not fix certain estimates based on prior modeling,
or pool the data so as to remove the ill-conditioning? That would be my first choice.
Ken
From:"Kowalski, Ken"
Subject:RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 14:13:00 -0500
I agree rounding errors are a bigger concern but that still doesn't diminish
the importance of the $COV step and the diagnostics that a successful $COV
step provides (even if one doesn't plan to use the standard errors for
making inference). Moreover, a successful $COV step is not an end in
itself. I've seen situations where an over-parameterized model resulted in
convergence and a successful $COV step, and yet NONMEM reported
correlations between some parameters of 1.000 (to three decimal places).
Such a model fit results in a numerically non-singular Hessian but it is
extremely ill-conditioned even though the $COV step ran successfully. I
suspect the situation that Diane describes below may be an example of this.
Changing compilers which may have different numerical accuracies or changing
starting values, etc. to get the $COV step to run successfully should not be
the end goal. In this setting chances are the model is still
ill-conditioned regardless of whether the $COV step ran. It is important to
inspect the correlation matrix and its eigenvalues (PRINT=E option on $COV)
to assess the stability of the model rather than to simply acknowledge that
the $COV step ran.
I know I come across as too rigid but I'd rather err on that side as opposed
to dismissing the $COV step as if it were something we could do without. In
my opinion, ignoring $COV step failures should be the exception to the rule
and not the rule itself.
Regards,
Ken
From: "Steve Duffull"
Subject:RE: [NMusers] Describing variability
Date: Wed, 2 Apr 2003 08:14:40 +1000
Hi all
My 2c worth. I think that Ken has an important point here - the failure
of $COV due to a non-positive definite R or S matrix is an all-or-nothing
feature of NONMEM. Models for which $COV works may be
ill-conditioned, and models for which $COV does not work may be only a
'fraction more' ill-conditioned. For instance, I have transferred
matrices from MATLAB to NONMEM and vice versa and found that NONMEM
described matrices as non-positive definite when MATLAB was happy to
work with them. This suggests that we are also dealing with degrees of
ill-conditioning - and the matrix algebra in NONMEM is perhaps not as
advanced as MATLAB's.
In either case accepting or not accepting a model based on an
all-or-nothing response from NONMEM does not sound sensible. This seems
tantamount to saying that someone who is over 65 years of age is old but
someone who is 64.99 years is not old?
Kind regards
Steve
PS You could of course use BUGS :-)
===================================
Stephen Duffull
School of Pharmacy
University of Queensland
Brisbane QLD 4072
Australia
Tel +61 7 3365 8808
Fax +61 7 3365 1688
http://www.uq.edu.au/pharmacy/sduffull/duffull.htm
University Provider Number: 00025B
From: "Kowalski, Ken"
Subject: RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 09:26:21 -0500
Steve,
Agreed. And to take it one step further, a model with rounding errors may
only be a 'fraction more' ill-conditioned than a model that converged but
with a failed $COV step. Accepting a model solely on the basis of whether
NONMEM says it converged is another example of the all-or-nothing response
that may not be sensible. We need to make better use of the diagnostics
that NONMEM provides to evaluate the stability of our model rather than just
relying on these all-or-nothing flags.
Although poor precision will typically be associated with estimates of
over-parameterized models, my concern is more with the potential
inaccuracies (biases) of these estimates as the ill-conditioned model could
converge to a local minimum or saddle point even if the model provides a
good fit to the data. Hopefully, if such wildly biased estimates are
obtained they will be obviously unreasonable, but I don't know if we will
always realize it when we are building very complex models.
Ill-conditioning is not a trivial matter that we should be dismissing
lightly.
Regards,
Ken
From: Leonid Gibiansky
Subject:RE: [NMusers] Describing variability
Date:Wed, 02 Apr 2003 10:45:31 -0500
It looks like we agreed that
1. It is not good to use a model that did not converge.
2. It is not good to use a model that converged but whose $COV step failed.
3. Even if the $COV step succeeded, this is no guarantee that the model is
correct, since it may be ill-conditioned anyway.
Having said that, I would propose using common sense. If
1. the CL, V, KA etc. estimates are reasonable,
2. the PRED vs. DV plot looks good,
3. the variability estimates are within 30-40%,
4. simulations show good agreement with the observed data (i.e., the central
line follows the population prediction and the 90% CI encompasses most of the
data; see the sketch after this list),
5. the distributions of the random effects are in agreement with our
assumptions (no bias),
6. there are no visible trends in the eta vs. covariate plots (for covariate models),
7. there are no visible trends in the eta vs. dose group plots (if any), and
8. all reasonable measures have been taken to force convergence,
then accept the model. Otherwise try to reduce/correct it.
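For item 4, a minimal simulation sketch (the seed and replicate count are only illustrative):

  $SIMULATION (20030402) ONLYSIMULATION SUBPROBLEMS=100   ; replicate the design 100 times for a predictive check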
This diagnostic is more or less independent of the final model properties,
although I would
1. try to get $EST to converge if possible,
2. try to get the $COV step to run if possible, and
3. not accept the model if the relative standard error of an estimate or
a variability term is, say, more than 100%.
After all, this is not a mathematical theorem or a rigid proof. If the
model is good for the purposes that were formulated at the start of the
analysis, then we may be less strict on the math side.
As was said, "All models are wrong but some of them are useful".
Leonid
From: "Bachman, William"
Subject: RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 11:27:35 -0500
I respectfully disagree with 1. and 2. There will be times when it is
appropriate to use a model that has:
a. terminated due to rounding errors
b. converged but not given a successful $COV step (there are instances when
the model is NOT problematic at all, yet NONMEM will not give a $COV, and here
is where your argument falls apart). I will try to find a concrete example.
These are just the facts of NONMEM as it exists today, so your statistical
arguments don't apply.
I think it's a good idea to formalize the thought process to some degree.
On the other hand, you're reducing the process to a set of RULES that are not
really as hard and fast as you seem to think. I'm glad we stimulated
discussion on the subject, but I think we're far from a consensus by any
means.
What I do agree with is Leonid's "common sense" approach (with the exception
of requiring variability estimates within 30-40%; there is no basis for that).
Bill
From: "Bachman, William"
Subject:RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 11:54:15 -0500
No, Leonid, I'm sorry for not being witty enough to catch the humor!
:)
Bill
From: Leonid Gibiansky
Subject:RE: [NMusers] Describing variability
Date: Wed, 02 Apr 2003 11:54:16 -0500
Actually, this part
>It looks like we agreed that
>1. It is not good to use a model that did not converge.
>2. It is not good to use a model that converged but whose $COV step failed.
>3. Even if the $COV step succeeded, this is no guarantee that the model is
>correct, since it may be ill-conditioned anyway.
>
was a joke (maybe not too successful). I tried to show that taking the
problem very rigorously would kill it (even though each step is perfectly
logical).
As to the 30-40%, this is a wish list; I do accept values up to about 100% if
nothing, including FOCE, helps. However, there should be a limit here. It
makes no sense to use a parameter with 300% variability; this invalidates
the entire model. You would get what you want at the fitting step but get
meaningless predictions at the simulation step. So, Bill, at least with you
we are in full agreement!!!
Sorry for the confusion....
Leonid
From:Scott VanWart
Subject:RE: [NMusers] Describing variability
Date:Wed, 02 Apr 2003 11:55:44 -0500
To chip in my two cents, the list Leonid prepared was very nice, but of course we all realize that
certain guidelines will not always apply to every situation that arises. It is always wise to think through
all the diagnostics that are available when evaluating a model.
As for the rounding errors problems, I agree with Bill that this does not always indicate that the model
is flawed. It could be that your initial parameter estimates need to be refined to help the search process,
or that NONMEM cannot determine a particular level of precision for a given parameter. I find it helpful to
sometimes "tweak" the system by increasing the number of significant digits in $EST from the default (3) to
a larger number such as 4. This is often enough to prevent the rounding error problems from occurring. If
the rounding error problems persist after following these suggestions, this could be an indication
that the data are not sufficient to estimate that parameter, or that there is some other
problem with your model.
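As a sketch, that tweak is just (the method and option values are illustrative):

  $EST METHOD=1 INTERACTION MAXEVAL=9999 SIGDIGITS=4   ; raise the requested precision from the default of 3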
Scott
From: "Kowalski, Ken"
Subject:RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 12:43:34 -0500
Leonid,
You wrote:
>> 3. Even if $COV step converged, this is not a guarantee that the model is
>> correct, since it may be ill-conditioned any way.
With real data there is no way to know if a model is correctly specified
(hence, the famous statement from Box: "All models are wrong but some are
useful"). Please note however, that an ill-conditioned model does not imply
that the model is wrong. In a previous message I gave the example of a
simple Emax model to describe a dose-response relationship. Assuming that
the Emax model is correct, for a given set of data the Emax model may still
be ill-conditioned if we study too narrow a dose range such that we can't
get reliable estimates of the Emax and ED50. In this case, although the
model is correctly specified we need to be cautious in interpreting the
estimates of Emax and ED50 from an ill-conditioned model fit. In so doing,
if we can make the assessment that the estimates we obtained appear
reasonable, then certainly we might use them. This is the practical aspect
that most of you are willing to rely on when you accept such
over-parameterized models...which is fine provided that you are willing to
make that assessment that the estimates you obtained are indeed reasonable.
I just wonder if we will always know whether our estimates are reasonable.
>> 3. Do not accept the model if the relative standard error of estimate or
>> variability is say, more that 100%.
This is certainly a diagnostic one could look at but there are others that
can help diagnose the degree and nature of the ill-conditioning. For a
successful $COV step the PRINT=E option will report out the eigenvalues of
the correlation matrix sorted from smallest to largest. The ratio of the
largest-to-smallest eigenvalues is often referred to as the condition number
and is a measure of the degree of ill-conditioning. Montgomery & Peck,
Introduction to Linear Regression Analysis, Wiley, 1982, pp. 277-278,
suggest that a condition number exceeding 1000 is an indication of severe
ill-conditioning. Inspection of the correlation matrix of the estimates can
help diagnose the nature of the ill-conditioning. In the Emax example I
gave above, the ill-conditioning would result in a pairwise correlation
between the Emax and ED50 estimates very close to 1. Bates & Watts,
Nonlinear Regression Analysis and its Applications, Wiley, 1988, pp. 90-91,
suggest that correlations exceeding 0.99 (in absolute value) should be a
cause for concern regarding ill-conditioning.
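The control stream request is simply:

  $COV PRINT=E   ; also print the eigenvalues of the correlation matrix of the estimates

The condition number is then the largest reported eigenvalue divided by the smallest.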
Ken
From: "Kowalski, Ken"
Subject:RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 13:39:49 -0500
Bill,
I would very much like to see a concrete example where you claim that NONMEM
will converge with a failed $COV step but the model is not ill-conditioned.
I'm willing to accept that such a situation can arise. Perhaps it is
related to numerical deficiencies with NONMEM's optimization algorithm and
mathematical operations (basically what Steve Duffull was saying about
matrices that MATLAB had no problems inverting but NONMEM did). Still, my
own experience with failed $COV steps is that they are usually related to
ill-conditioning and not to numerical problems with the mathematical
calculations.
Regards,
Ken
ps Is this fun or what? :-)
From:"Hutmacher, Matthew [Non-Employee/1820]"
Subject:RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 13:49:08 -0600
Hello everyone,
An example of a model that may not give a $COV step but is not necessarily
ill-conditioned is a lag-time model, or even a zero-order infusion model.
The reason is that the derivative does not exist at the change point, so if
the point estimate is close to a data value, the $COV step may not run. This
is typically a problem with FOCE only.
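As a sketch (the THETA numbering is only illustrative):

  $PK
    ALAG1 = THETA(4)   ; absorption lag time; the prediction is not differentiable
                       ; with respect to ALAG1 for observations near the change point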
Matt
From:"Kowalski, Ken"
Subject:RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 15:10:32 -0500
Good point...I stand corrected. I know to some I have come across as giving
a rigid set of rules for dealing with $COV step failures. Obviously from
the discussions we all have had this is not an easy thing to do. I think
we can all agree that the COV step provides useful information for assessing
the stability of the model but there are situations where one can and should
proceed with a model that has a failed COV step. Hopefully this will end
this thread and we can move on to more important things like keeping up with
the war news! :-)
Ken
From:"Bhattaram, Atul"
Subject:RE: [NMusers] Describing variability
Date:Wed, 2 Apr 2003 15:31:31 -0500
Hello All
One question.
Could someone discuss the merits and demerits of using the S matrix instead
of the R matrix (the cross-product gradient and the Hessian, respectively) when the $COV step fails?
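For reference, the control stream options I mean (as documented in the NONMEM help):

  $COV              ; default: the R^-1 * S * R^-1 sandwich estimate
  $COV MATRIX=R     ; use the inverse of the R matrix (Hessian) alone
  $COV MATRIX=S     ; use the inverse of the S matrix (cross-product gradient) alone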
Venkatesh Atul Bhattaram
CDER, FDA.
From: VPIOTROV@PRDBE.jnj.com
Subject: RE: [NMusers] Describing variability
Date: Thu, 3 Apr 2003 09:46:19 +0200
Thanks to all who participated in the discussion initiated by the mail below.
In the meantime I have found one more way to affect the convergence and make it
smoother. I had rounding error problems with the FOCE method when fitting a PK model to
log-transformed data. The run stabilized substantially when I multiplied the logarithms by 10
in the data and did the same in the control stream. As this is a transformation of both
sides, the parameters are not affected.
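A minimal sketch of that scaling (assuming DV in the dataset holds 10*log(concentration)):

  $ERROR
    LGF = -99                       ; guard for non-positive predictions
    IF (F.GT.0) LGF = 10*LOG(F)     ; the same x10 scaling applied to the prediction
    Y = LGF + ERR(1)                ; additive error on the scaled log scale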
Best regards,
Vladimir