RE: $OMEGA blocks and log-likelihood profiling
From: "Kowalski, Ken"
Subject: RE:[NMusers] $OMEGA blocks and log-likelihood profiling
Date: Thu, June 3, 2004 12:40 pm
Nick, Jeff, Marc, Nmusers,
I don't think it is hairsplitting; it depends on your definition of a "badly
formed" or "malformed" model. I tend to equate such statements with a poorly
fitting model. My point is that good-fitting models can suffer from the
effects of over-parameterization just as much as poor-fitting models.
I don't see how you can conclude, from the fact that 28% of the bootstrap
models converged, that the failure of the other 72% is unrelated to some
systematic feature of the model. (I wouldn't characterize it as a deficiency
in the model, since that might imply the model is poorly fitting; rather, I
would characterize it as a possible limitation of the data to support the
model.) Your latest COV step information, that 7% converged with a
successful COV step, is encouraging in that over-parameterization is not an
issue for those bootstrap runs, but Jeff makes a good point that the ones
that did not converge are censored out of this evaluation. Perhaps the 93%
where the estimation and/or COV step failed is still related to problems
with over-parameterization. For example, if your dataset comes from a pooled
analysis of several studies with varying designs, where only small portions
of the data from 1 or 2 studies provide information on some key parameter,
then a basic bootstrap resampling scheme that does not stratify by study
and/or by these key treatment/design features may produce bootstrap datasets
that under-represent the data needed to support the model. I'm not saying
that this is the issue you are encountering, but it is certainly something I
would investigate if I were seeing such a large failure rate in my
bootstrap runs.
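To make the stratification point concrete, here is a minimal sketch of a stratified bootstrap in Python. The data layout and names (subject/study pairs, the small study "B") are hypothetical, purely for illustration; the point is that resampling subjects within each study preserves the per-study sample sizes, so no bootstrap dataset can lose the one or two studies that carry the information on a key parameter.

```python
import random
from collections import defaultdict

def stratified_bootstrap(subjects, n_datasets, seed=0):
    """subjects: list of (subject_id, study_id) tuples.
    Returns n_datasets lists of subject_ids, resampled with
    replacement within each study stratum."""
    rng = random.Random(seed)
    by_study = defaultdict(list)
    for subj, study in subjects:
        by_study[study].append(subj)
    datasets = []
    for _ in range(n_datasets):
        sample = []
        for ids in by_study.values():
            # draw the same number of subjects from each study
            # as in the original dataset, with replacement
            sample.extend(rng.choices(ids, k=len(ids)))
        datasets.append(sample)
    return datasets

# hypothetical pooled analysis: study "B" contributes only two subjects
subjects = [(1, "A"), (2, "A"), (3, "A"), (4, "A"), (5, "B"), (6, "B")]
boot = stratified_bootstrap(subjects, n_datasets=3)
```

An unstratified bootstrap of the same pool would, with non-trivial probability, produce datasets containing no study "B" subjects at all; the stratified version always draws exactly two.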
I agree with Jeff's philosophical point that we want to use bootstrapping to
characterize the uncertainty in our parameters, and that having a large
fraction of runs fail casts doubt on our ability to characterize that
uncertainty. I always feel more enlightened when I can identify the root
cause of convergence/COV step failures. In my own experience these failures
are often related to some aspect of over-parameterization in elements of
THETA, OMEGA, or SIGMA. I'm not always successful in resolving these
failures, but thinking through possible limitations of the data to support
the model, and running diagnostic NONMEM runs specifically aimed at
resolving the convergence/COV step failures, are worth the effort IMHO.
With regard to NONMEM V's estimation methods being a dog, I think you are
being too harsh. I'm not fully informed on Marc's or Tom's work with FOCE
and its problems with convergence to local minima, but I'm willing to bet
these problems are especially pronounced when fitting models that are
somewhat over-parameterized. I draw an analogy to Pete Bonate's exercise,
where he showed that sensitivity to compiler/NONMEM installations was most
pronounced when fitting ill-conditioned (over-parameterized) models. A lot
of things can go wrong with NONMEM when we push the data too hard to
support the models we fit. Statements that many of you make trivializing
the importance of obtaining a successful COV step (and, just as important,
of actually reviewing the COV step output) only galvanize my thinking that
the effects of over-parameterization are too often ignored.
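One concrete check in that spirit, sketched below under my own assumptions (the matrices are made up, not output from any real run): compute the condition number of the correlation matrix of the estimates from the COV step, i.e. the ratio of its largest to smallest eigenvalue. A common rule of thumb is that a ratio above about 1000 signals ill-conditioning, meaning some combination of parameters is poorly supported by the data.

```python
import numpy as np

def condition_number(corr):
    """Ratio of largest to smallest eigenvalue of a symmetric
    correlation matrix; large values indicate ill-conditioning."""
    eig = np.linalg.eigvalsh(corr)  # eigenvalues in ascending order
    return eig[-1] / eig[0]

# well-conditioned: modest correlation between two estimates
ok = np.array([[1.0, 0.2],
               [0.2, 1.0]])

# ill-conditioned: two parameters nearly collinear, as often happens
# when a model is over-parameterized relative to the data
bad = np.array([[1.0, 0.999],
                [0.999, 1.0]])
```

The well-conditioned example has eigenvalues 0.8 and 1.2 (ratio 1.5); the near-collinear one has eigenvalues 0.001 and 1.999 (ratio ~2000), well past the rule-of-thumb threshold.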
I agree with you that at the end of the day we need to develop good-fitting
models that meet the purposes/intended use of the model. However, I
disagree that achieving convergence and a successful COV step in the
majority of the bootstrap runs is an arbitrary hurdle; it's good science.
Ken