RE: $OMEGA blocks and log-likelihood profiling
From: Kowalski, Ken <Ken.Kowalski@pfizer.com>
Subject: RE: [NMusers] $OMEGA blocks and log-likelihood profiling
Date: Fri, 2 Jul 2004 09:37:48 -0400
Nick,
It sounds like we are not in disagreement about the value of statistical
theory, just that we have to understand the assumptions we make when applying
it...no argument there. Just as there is a wealth of PK/PD terminology in
which different words are sometimes used to mean the same thing, the same is
true in the statistical literature. I equate stability, or rather
'instability', with 'ill-conditioning', which can be a result of
'over-parameterization'. Bates and Watts (pp. 86-91) discuss inspecting the
correlation matrix of the parameter estimates to help diagnose collinearity
among the parameter estimates (general ill-conditioning) and/or
over-parameterization (fitting too many parameters); a small sketch of this
check follows below. With regard to 'reliability', I'm using the term as per
Webster to mean the extent to which results are reproducible. In this context
a model fit might be unreliable because it is difficult to reproduce the
estimates across platforms/compilers, with starting values that differ by
only 10%, etc.
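
To make that correlation-matrix check concrete, here is a quick Python
sketch. The covariance matrix below is invented purely for illustration (in
practice you would take it from the COV step output), and the 0.95 cutoff and
the condition-number threshold are rules of thumb, not hard rules:

import numpy as np

# Invented covariance matrix of the parameter estimates (order: CL, V, KA),
# standing in for the matrix reported in the COV step output.
cov = np.array([
    [0.0400, 0.0195, 0.0120],
    [0.0195, 0.0100, 0.0070],
    [0.0120, 0.0070, 0.0900],
])

# Convert covariance to correlation: corr_ij = cov_ij / (sd_i * sd_j).
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)

# Flag pairs of estimates whose correlation exceeds a rule-of-thumb cutoff,
# suggesting collinearity and/or over-parameterization.
names = ["CL", "V", "KA"]
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.95:
            print(f"high correlation: {names[i]}-{names[j]}, r = {corr[i, j]:.3f}")

# Condition number of the correlation matrix (largest/smallest eigenvalue);
# very large values (a common rule of thumb is > 1000) suggest
# ill-conditioning.
eigvals = np.linalg.eigvalsh(corr)
print(f"condition number: {eigvals.max() / eigvals.min():.1f}")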
Numerical instability of NONMEM itself may indeed be the explanation for your
particular problem. However, if you don't strive to get the COV step to run,
and inspect its output during model building before performing bootstrapping,
a reviewer could always question whether your model is stable. Moreover, Pete
Bonate's exercise showed that NONMEM differences across platforms/compilers
were most likely to occur when one is dealing with an unstable model. Thus,
numerical instability of NONMEM and model instability may in many cases just
be two sides of the same coin.
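
As for the reproducibility sense of 'reliability', here is a toy sketch of
the starting-value check I described, with scipy's curve_fit standing in for
an actual NONMEM estimation run. The mono-exponential model, data, and
starting values are all invented for illustration:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy mono-exponential model and simulated data; in practice each call to
# fit() below would be a full NONMEM run, not a curve_fit call.
def model(t, a, k):
    return a * np.exp(-k * t)

t = np.linspace(0.25, 12.0, 12)
y = model(t, 100.0, 0.35) * np.exp(rng.normal(0.0, 0.1, t.size))

def fit(theta0):
    # One estimation run started from the initial estimates theta0.
    est, _ = curve_fit(model, t, y, p0=theta0)
    return est

base = np.array([80.0, 0.5])  # reference starting values for (a, k)
ref = fit(base)

for i in range(5):
    # Perturb each starting value by up to +/-10% and refit; large swings in
    # the final estimates for such small changes flag an unreliable fit.
    theta0 = base * rng.uniform(0.9, 1.1, size=base.shape)
    print(i, np.abs(fit(theta0) / ref - 1.0))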
Regards,
Ken