Re: Problems with an apparently compiler-sensitive model
From: "James G Wright" james@wright-dose.com
Subject: Re: [NMusers] Problems with an apparently compiler-sensitive model
Date: Thu, 3 Aug 2006 12:19:46 +0100
I think there are two questions in this discussion:
1) Is an estimated covariance matrix a good way to look at the behaviour of
maximum likelihood estimates, i.e. to calculate confidence intervals?
Estimated covariance matrices are quick and useful descriptors of local
behaviour. Likelihood profiling, bootstrapping and MCMC are (computationally
more expensive) alternatives that can provide richer insight. Since I caught
a nasty dose of Bayesian-ism, I am not just interested in the local behaviour
around my current estimates, but in the entire likelihood surface.
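To make that distinction concrete with a toy example of my own (nothing to do
with any particular NONMEM run), here is a short Python/numpy sketch
contrasting a Wald-type interval built from an estimated variance with a
bootstrap of the same estimator, on deliberately skewed data:

    # Toy sketch (my own example): local Wald interval from an estimated
    # variance versus a bootstrap interval for the mean of skewed data.
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.lognormal(mean=0.0, sigma=1.0, size=40)   # skewed "data"

    # Local view: estimate +/- 1.96 * standard error (quadratic approximation)
    est = y.mean()
    se = y.std(ddof=1) / np.sqrt(len(y))
    wald_ci = (est - 1.96 * se, est + 1.96 * se)

    # Wider view: bootstrap the whole estimator
    boot = np.array([rng.choice(y, size=len(y), replace=True).mean()
                     for _ in range(5000)])
    boot_ci = tuple(np.percentile(boot, [2.5, 97.5]))

    print("Wald     :", wald_ci)
    print("Bootstrap:", boot_ci)

The bootstrap interval comes out slightly asymmetric, which is exactly the
sort of information a single covariance matrix cannot carry.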
2) Is NONMEM's covariance step good at calculating covariance matrices
and/or diagnosing problems?
I don't believe NONMEM is good at this particular task, partly because
NONMEM works with the likelihood surface defined in terms of all the thetas,
etas and epsilons. This gives a huge (and unsimplifiable) n x n matrix
that is difficult for computers (or people) to invert, particularly if any
of the n(n-1)/2 correlations strays close to 1. In particular, etas are
often poorly determined and their impact is "linearized" in the NONMEM
likelihood surface. Please note the use of the word "believe" at the start
of this paragraph - it implies I have no actual "proof".
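To put a number on that fragility, here is a small numpy sketch of my own
(a 2 x 2 toy, not anything NONMEM actually builds): as the correlation
approaches 1 the matrix becomes nearly singular, and its condition number -
roughly the factor by which input error is amplified on inversion - explodes:

    # Toy sketch: the closer a pairwise correlation gets to 1, the more
    # nearly singular the matrix becomes and the harder it is to invert.
    import numpy as np

    for rho in (0.9, 0.99, 0.999, 0.9999):
        m = np.array([[1.0, rho],
                      [rho, 1.0]])
        cond = np.linalg.cond(m)      # error-amplification factor on inversion
        inv_max = np.linalg.inv(m).max()
        print(f"rho={rho:<7} condition number={cond:12.1f} "
              f"largest element of inverse={inv_max:12.1f}")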
In my experience, inconsistent error messaging is commonplace in more
complex NONMEM models, and the NONMEM user requires a degree of cynicism to
proceed effectively. I have also experienced some moderate
compiler/platform sensitivity with NONMEM - the existence of this
implementation variation may suggest that NONMEM's algorithms are not
well insulated from rounding errors. However, the truth is that platform
variation is typical of computationally sophisticated applications.
Matrix inversion involves lots of division by (very) small numbers, and this
amplifies any error in those small numbers.
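As a made-up illustration of that rounding-error point: perturb a nearly
singular 2 x 2 matrix by amounts around 1e-12 (roughly the scale at which
different compilers or platforms can legitimately disagree) and the computed
inverse moves by several orders of magnitude more than the input did:

    # Made-up example of rounding sensitivity: a change of ~1e-12 in the
    # matrix produces a vastly larger relative change in its inverse.
    import numpy as np

    rho = 0.999999
    m = np.array([[1.0, rho],
                  [rho, 1.0]])
    eps = 1e-12 * np.array([[1.0, -1.0],
                            [-1.0, 1.0]])

    inv1 = np.linalg.inv(m)
    inv2 = np.linalg.inv(m + eps)

    print("relative input change :", np.linalg.norm(eps) / np.linalg.norm(m))
    print("relative output change:",
          np.linalg.norm(inv2 - inv1) / np.linalg.norm(inv1))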
There are many tricks that can improve model stability, such as
reparameterization and judiciously removing etas, but I have certainly
encountered a few models that just won't be persuaded to "converge" by
NONMEM's definition without mortally wounding their intellectual basis.
This can be the case even when the "non-convergent" model is an excellent
description of the data and well characterized by the available data, as
demonstrated by likelihood profiling or by using alternative software.
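As a toy example of the reparameterization point (again my own construction,
not a PK model): fitting a straight line against raw versus centred time
gives the same fit, but the centred parameterization removes the
intercept/slope correlation and leaves a far better conditioned matrix to
invert:

    # Toy reparameterization: same linear model, but centring the predictor
    # makes the X'X matrix that must be inverted far better conditioned.
    import numpy as np

    t = np.linspace(100.0, 110.0, 20)                  # times far from zero
    X_raw = np.column_stack([np.ones_like(t), t])
    X_ctr = np.column_stack([np.ones_like(t), t - t.mean()])

    for name, X in (("raw", X_raw), ("centred", X_ctr)):
        cond = np.linalg.cond(X.T @ X)
        print(f"{name:8s} condition number of X'X: {cond:.3e}")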
The risk of "pseudo" error messages increases as model complexity increases,
so it tends to be the most realistic and biologically insightful models that
are selected against by these pseudo-errors.
James G. Wright PhD,
Scientist,
Wright Dose Ltd,
www.wright-dose.com
Tel: UK (0) 772 5636914