RE: OMEGA matrix
It appears my message again did not go through, so I have trimmed part of the
email thread to reduce its length in the hope that it will now go through.
Hi All,
I don’t want to revisit old ground, as Nick and I have agreed to disagree about
the value of the $COV step. I still maintain that the output from the $COV
step provides useful diagnostic information. It has never been my position
that failure or success of the $COV step in and of itself is informative of
ill-conditioning or instability of the model. There are certainly cases where
the $COV step fails for reasons unrelated to ill-conditioning, and successful
$COV steps where the diagnostics in the output suggest that the model is
ill-conditioned. So simple success or failure of the $COV step, in and of itself,
is not very useful. That being said, I still believe we should avoid
over-fitting, over-parameterization, ill-conditioning, instability, etc. and
acknowledge the limitations of our data. How one goes about that assessment,
whether through bootstrapping, inspection of the $COV step output, or some other
diagnostic assessment, is less critical to me.
Best,
Ken
From: [email protected] [mailto:[email protected]] On
Behalf Of Nick Holford
Sent: Tuesday, September 30, 2014 4:51 PM
To: nmusers
Subject: Re: [NMusers] OMEGA matrix
Hi,
As pointed out by others, I agree that it is essential to consider the existence of
random effect correlations if you wish to make model predictions e.g. to use a
VPC to evaluate a model.
I agree with Jeroen that this should primarily be an informed choice based
on physiology/pharmacology. 'Blue sky' searches for correlations that
would have no rational explanation or interpretation should be done with a
great deal of caution.
It can be tricky to explore all possible combinations using the change in OFV
(e.g. with the likelihood ratio test) to guide model selection. A more
straightforward approach is to bootstrap the model with a full covariance block
for all the random effects you suspect may be correlated.
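For readers less familiar with the control-stream syntax, a full covariance block for two random effects suspected to be correlated might look like the fragment below. This is only an illustrative sketch; the variances and covariance shown are placeholders, not values from any model discussed in this thread.

```
; Illustrative fragment only -- placeholder values.
; Full block: variances and the ETA(1)-ETA(2) covariance are all estimated.
$OMEGA BLOCK(2)
0.1          ; variance of ETA(1), e.g. on CL
0.05 0.1     ; covariance ETA(1)-ETA(2), then variance of ETA(2)

; Compare a diagonal structure, which fixes the correlation to zero:
; $OMEGA 0.1 0.1
```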
Bootstrapping today is usually a practical option because runs can be easily
performed in parallel on multiple processors on the same machine or on a
cluster. I typically use 100 bootstrap replicates for this purpose and look
for correlations which include zero in the 95% bootstrap confidence interval.
If I find such correlations then I know I should be able to remove those
covariances from the covariance block. I can then re-run the bootstrap and
obtain confidence intervals on all the parameters including the correlations.
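The check described above (a percentile bootstrap confidence interval on each correlation, then testing whether zero falls inside it) can be sketched in plain Python. This is a minimal sketch, not part of any NONMEM tool; the replicate values are simulated placeholders standing in for 100 bootstrap estimates of a correlation.

```python
import random

def percentile_ci(samples, alpha=0.05):
    """Simple percentile bootstrap CI: sort the replicates and take
    the alpha/2 and 1 - alpha/2 empirical quantiles."""
    s = sorted(samples)
    n = len(s)
    lo = s[int((alpha / 2) * (n - 1))]
    hi = s[int((1 - alpha / 2) * (n - 1))]
    return lo, hi

def ci_includes_zero(samples, alpha=0.05):
    """True if the percentile CI contains zero, i.e. the correlation
    is a candidate for removal from the covariance block."""
    lo, hi = percentile_ci(samples, alpha)
    return lo <= 0.0 <= hi

# Hypothetical example: 100 simulated bootstrap replicates of an
# estimated correlation centred near zero (placeholder data only).
random.seed(1)
replicates = [random.gauss(0.05, 0.2) for _ in range(100)]
candidate_for_removal = ci_includes_zero(replicates)
```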
Confidence intervals calculated from asymptotic standard errors (if you can get
them) are usually unreliable compared with parametric bootstrap confidence
intervals ( http://www.page-meeting.org/default.asp?abstract=3143).
I don't agree with Ken that "ill-conditioning" or "not stable" based on failure
of the $COVARIANCE step should be used to judge the adequacy of the results.
Experimentally it has been shown that the bootstrap distribution of parameter
uncertainty is not different when comparing runs that terminated with those
that were successful or that completed the $COVARIANCE step
(http://www.mail-archive.com/nmusers%40globomaxnm.com/msg03401.html). See also
http://holford.fmhs.auckland.ac.nz/docs/bootstrap-and-confidence-intervals.pdf
slides 24 to 31.
Best wishes,
Nick
On 1/10/2014 7:57 a.m., Ken Kowalski wrote:
Hi Jeroen,
I think we might be on the same page but I wanted to get clarification about
your suggestion that we “not apply the concept of over-parameterization” with
respect to evaluating the omega structure. I’m assuming that by
‘over-parameterization’ you mean a model that has more elements in omega than
are necessary for parsimony. If so, I certainly agree, but I wouldn’t
necessarily call such a model over-parameterized. An over-parameterized model is one in which
there can be an infinite set of solutions to the parameter values that yields
the same fit. Such a setting can occur when the R-matrix in NONMEM is
singular. Such over-parameterized models are often also referred to as being
ill-conditioned or not stable.

I think we should always avoid over-parameterization, ill-conditioning, and
unstable models regardless of the
source (i.e., fixed effects, IIV random effects and omega-structure, or
residual error structure). However, I do agree that parsimony in omega is
probably not as important as say looking for a parsimonious set of covariate
parameter fixed effects when performing covariate modeling to obtain a final
model for prediction purposes.

This is why in my earlier response below I
suggested fitting the “largest omega structure that can be supported by the
data”. What I meant by this statement is that we fit the largest number of
elements of omega while avoiding over-parameterization or ill-conditioning.
Such an omega structure might not be parsimonious (i.e., the smallest omega
structure that adequately describes the features in the data). The point I
was trying to make is that the smallest omega structure that adequately
describes the features in the data may not be a diagonal omega structure (i.e.,
when correlations do exist) particularly if we are interested in describing the
variation in the data and not just in predictions of central tendency.
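The definition of over-parameterization above (an infinite set of parameter values yielding the same fit, as when the R-matrix is singular) can be illustrated outside NONMEM with a toy collinear regression. Everything below is a hypothetical illustration, not drawn from any model in this thread: two predictors that are exact copies of each other make the normal-equations matrix (the least-squares analogue of a singular R-matrix) singular, and infinitely many coefficient pairs give an identical fit.

```python
# Toy over-parameterized linear model: y = b1*x1 + b2*x2 with x2 == x1.
# Any (b1, b2) with b1 + b2 = 2 reproduces the data exactly, so the
# solution is not unique.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0 * xi for xi in x]  # data generated by y = 2*x

def predictions(b1, b2):
    # x2 is an exact copy of x1, so only b1 + b2 matters.
    return [b1 * xi + b2 * xi for xi in x]

# Two different parameter vectors, identical fit:
fit_a = predictions(2.0, 0.0)
fit_b = predictions(0.5, 1.5)

# Normal-equations matrix X'X for the two collinear columns is
# [[sxx, sxx], [sxx, sxx]]; its determinant is exactly zero.
sxx = sum(xi * xi for xi in x)
det = sxx * sxx - sxx * sxx
```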
Best,
Ken