From: Nick Holford n.holford@auckland.ac.nz
Subject: RE: [NMusers] $OMEGA blocks and log-likelihood profiling
Date: Fri, June 11, 2004 5:25 pm
Matt,
Thanks for your remarks, most of which I fully agree with. But I do take issue with
your faith in statistical theory. For many years it was commonly accepted that the
change in minus twice the log likelihood (the NONMEM objective function) under the
null hypothesis was distributed chi-square. But we know from data-based
experimentation that this is not true for NONMEM. It is also known that NONMEM's
standard errors are of little use for confidence intervals, because data-based
experimentation (using bootstraps) shows that confidence intervals are often
asymmetrical and that SE-based confidence intervals for parameters such as OMEGA can
easily include impossible negative values. Once again, statistical theory applied to
NONMEM is misleading. These data-based experimental tests have forced me to think
harder about the assumptions we make when applying statistical theory in this area.
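
To make the SE problem concrete, here is a small Python sketch (a toy variance
estimate with a handful of subjects, not NONMEM output) showing how a symmetric
SE-based interval for an OMEGA-like parameter can dip below zero, while a bootstrap
percentile interval is asymmetric and stays on the legal scale:

import numpy as np

rng = np.random.default_rng(1)

# A handful of subject-level random effects (true OMEGA = 0.1).
n = 6
eta = rng.normal(0.0, np.sqrt(0.1), size=n)
omega_hat = np.var(eta, ddof=1)

# SE-based (Wald) 95% interval: symmetric by construction.
# Under normality, SE(variance) is about variance * sqrt(2 / (n - 1)).
se = omega_hat * np.sqrt(2.0 / (n - 1))
wald_lo, wald_hi = omega_hat - 1.96 * se, omega_hat + 1.96 * se

# Nonparametric bootstrap percentile 95% interval: resample subjects.
boot = np.array([np.var(rng.choice(eta, size=n, replace=True), ddof=1)
                 for _ in range(2000)])
boot_lo, boot_hi = np.percentile(boot, [2.5, 97.5])

print(f"estimate       {omega_hat:.4f}")
print(f"Wald 95% CI    ({wald_lo:.4f}, {wald_hi:.4f})  <- lower bound is negative")
print(f"bootstrap 95%  ({boot_lo:.4f}, {boot_hi:.4f})  <- asymmetric, non-negative")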
So why should I accept the 'good statistical practice' notion that getting the $COV
step to run in NONMEM is a marker for a 'stable model' which is somehow more
reliable? It is here that I am asking for data. Can anyone support this hypothesis
with data based experiments using NONMEM?
Of course I would be happy if all my runs converged and the $COV step ran. But in
the real world of non-trivial PKPD analysis this cannot be guaranteed, and indeed I
find that the closer one gets to a mechanistically plausible model the harder it is
to get these things to happen. So in the real world I live in I feel I cannot rely
on statistical theory for NONMEM results but want some data-based backup. The
bootstrap and the randomization test are tools for doing this.
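
For anyone who has not used it, the machinery of the randomization test is simple.
The sketch below uses a deliberately trivial statistic; in the NONMEM setting the
statistic would instead be the drop in objective function when the covariate enters
the model, and each permutation would require a re-fit:

import numpy as np

rng = np.random.default_rng(42)

# Toy data: a response and a binary covariate (e.g. treatment group).
y = rng.normal(10.0, 2.0, size=40)
group = np.repeat([0, 1], 20)
y = y + np.where(group == 1, 1.0, 0.0)   # modest true covariate effect

def statistic(y, group):
    # Stand-in statistic; with NONMEM this would be the drop in the
    # objective function (delta OBJ) when the covariate enters the model.
    return abs(y[group == 1].mean() - y[group == 0].mean())

observed = statistic(y, group)

# Build the null distribution by permuting the covariate across subjects,
# which breaks any real covariate-response relationship.
null = np.array([statistic(y, rng.permutation(group)) for _ in range(5000)])

p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"observed statistic {observed:.3f}, randomization p = {p_value:.4f}")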
You and others have indicated that you think I wish to generalize the results from
one study to all cases. If I have given this impression it was not intentional. I am
not offering a general theory but I am offering an experiment to test a hypothesis
(I sketch such an experiment below). The hypothesis is that NONMEM runs that
converge with a successful $COV step are somehow more reliable/stable than those
that do not. I really don't have a good idea how to test for model 'stability', but
in this context I would consider reliability to mean that the parameter estimates
are unbiased. As I understand it this hypothesis is not built on any mathematical
theory but arises from 'good statistical practice'. In the single case I have tested
I can find no evidence to support this hypothesis. I have read, and understand, the
various viewpoints that have offered reasons why my one experiment may be misleading.
I accept these possibilities (e.g. failed runs might widen bootstrap confidence
intervals) but the data at hand give no obvious support that this is happening.

I am aware of the need to do some 'detective work' to try to understand why a model
may be failing to converge. There are numerous ad hoc tricks one can apply, e.g.
using SLOW or HYBRID, or increasing SIGDIG to get results for MSFI with a lower
SIGDIG. But at the end of the day, after months of work, I want to move forward with
the model that best describes the data and seems to explain how the world works. It
is here that I am reluctant to throw the baby out with the bathwater because the
model fails some test of 'good statistical practice' for which I cannot find any
supporting data.
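
To show the shape of the data-based experiment I have in mind, here is a runnable
Python toy (not a real NONMEM study): simulate replicates from known parameters, fit
each one, split the fits by whether a usable covariance matrix came back (a crude
stand-in for a successful $COV step), and compare bias between the two groups. The
model, thresholds and sample sizes are all illustrative assumptions:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Toy stand-in for a PK model: mono-exponential decline after a bolus dose.
def model(t, cl, v):
    return (100.0 / v) * np.exp(-(cl / v) * t)

true_cl, true_v = 1.0, 10.0
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])

cov_ok, cov_failed = [], []
for _ in range(500):
    y = model(t, true_cl, true_v) * np.exp(rng.normal(0.0, 0.2, t.size))
    try:
        est, cov = curve_fit(model, t, y, p0=[0.3, 3.0], maxfev=400)
    except RuntimeError:
        continue  # no convergence at all: nothing to compare
    d = np.diag(cov)
    # Crude stand-in for "$COV ran": finite variances with modest RSEs.
    if np.all(np.isfinite(d)) and np.all(np.sqrt(d) / np.abs(est) < 0.5):
        cov_ok.append(est)
    else:
        cov_failed.append(est)

for label, grp in [("cov ok", cov_ok), ("cov failed", cov_failed)]:
    if grp:
        bias = np.mean(grp, axis=0) - [true_cl, true_v]
        print(f"{label:10s} n={len(grp):3d}  bias (CL, V) = {bias.round(3)}")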
Too bad you won't be at PAGE. I look forward to catching up on another occasion.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/