Re: FW: OMEGA HAS A NONZERO BLOCK
From: Nick Holford
Subject: Re: FW: [NMusers] OMEGA HAS A NONZERO BLOCK
Date: Sat, 05 Oct 2002 08:21:32 +1200
Ken,
"Kowalski, Ken" wrote:
>
> Nick,
>
> With regards to your Item 1, I think we are going to have to agree to
> disagree. Throwing away the objective function is not appealing to
> me...the choice of values for fixing parameters (e.g., elements of
> Omega) that you consider unrealistic is completely arbitrary.
I do not understand why you think that my wish to fix the correlation to
a value such as 0.5 is "completely arbitrary". I tried to explain that
this choice was made because I have a strong prior on the value of this
correlation; in particular, I *know* that the correlation between CL and
V is *not* 1 (your arbitrary choice) and is not likely to be zero.
The NONMEM objective function is not to be fully trusted when the
estimation process is unstable (see more comments on this below).
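(To make the arithmetic concrete: fixing the correlation only determines the off-diagonal element of the OMEGA block from the two variances, since cov = r * sqrt(omega_CL * omega_V). A small Python sketch, using hypothetical variances of 0.09, shows the value one would FIX in the control stream; the helper name and numbers are illustrative, not NONMEM syntax.)

```python
import math

def omega_block(var_cl, var_v, corr):
    """Build a symmetric 2x2 OMEGA block from two variances and a
    chosen between-subject correlation (hypothetical helper)."""
    cov = corr * math.sqrt(var_cl * var_v)
    return [[var_cl, cov], [cov, var_v]]

# Variances of 0.09 (roughly 30% CV) and a prior correlation of 0.5
# imply an off-diagonal element of 0.5 * 0.09 = 0.045.
block = omega_block(0.09, 0.09, 0.5)
print(block[0][1])  # 0.045
```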
> I suspect the reason NONMEM never estimates a covariance to be zero is
> that covariances can be positive or negative so zero is not on the
> boundary. But what about my analogy regarding a variance component
> (diagonal element of Omega) going to zero which is on the boundary?
> Surely you've seen NONMEM estimate a zero variance component. Isn't a
> zero variance component estimated for say ka or V unrealistic? Again,
> this can happen because of lack of information in the design/data to
> estimate this variance component. Isn't it common practice to then fix
> this variance component to zero rather than some arbitrary non-zero
> value?
I agree that it is common practice to fix the diagonal element of OMEGA
to zero. This is analogous to fixing the covariance between CL and V to
zero. A zero value is possible from a mechanistic viewpoint, and if the
data do not allow NONMEM to obtain a reliable estimate then fixing it to
zero is a reasonable pragmatic approach. Setting it to a positive value
based on prior experience would seem to be an even more reasonable
approach (for a Bayesian). On the other hand, nobody advocates setting
the diagonal element to INF, which seems to be analogous to your
suggestion of forcing the correlation to be one, i.e., choosing a
completely unrealistic value.
> Going back to Steve Duffull's problem, what if by chance the Omega
> reported in the NONMEM output rounded to 3 significant digits didn't
> have problems (i.e., just squeaked by and was positive semi-definite)
> and let's say for this to happen the correlation was estimated to be
> 0.99. Doesn't an estimate of 0.99 for the correlation concern you?
It certainly does concern me, and when I see this I usually try to
change the model in some way so that a more reasonable estimate (0.9 or
less) is obtained. It seems to happen most commonly for parameters about
which I have little prior knowledge, so I am happy to accept fixing the
covariance to zero if I cannot get a reasonable non-zero estimate.
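(A quick way to see why a correlation of 0.99 is worrying: the 2x2 correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + r and 1 - r, so its condition number is (1 + r)/(1 - r), which blows up as r approaches 1. A short Python sketch:)

```python
def condition_number(r):
    """Condition number of the 2x2 correlation matrix [[1, r], [r, 1]].
    Its eigenvalues are 1 + r and 1 - r, so the ratio is (1+r)/(1-r)."""
    return (1 + r) / (1 - r)

# The matrix becomes rapidly more ill-conditioned as r -> 1:
for r in (0.5, 0.9, 0.99, 0.999):
    print(r, condition_number(r))
# r = 0.99 gives a condition number of about 199; r = 0.999 about 1999.
```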
>
> Regarding Item 2, fixing the correlation to something less than 1 (say
> 0.5) is going to result in a poorer fit since NONMEM is wanting to
> estimate the correlation to be 1. As we discussed a year ago, I contend
> that my model constraining the correlation to 1 will result in a more
> realistic simulation of the data (i.e., a posterior predictive check)
> than fixing the correlation to something considerably lower that is not
> supported by the current data.
If NONMEM estimates were trustworthy at these extreme values for a
correlation then I would agree with you. But I don't think they are
reliable and so do not trust them. I agree with Leonid: "The problem, as
I see it, is that you cannot trust the matrix that you received from the
computations, if it is ill-conditioned. Therefore, you cannot find this
degenerate direction (or at least, can not be sure in this relation)."
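(The 3-significant-digit reporting mentioned above illustrates this nicely: when the correlation is very close to 1, rounding the printed OMEGA elements can flip the matrix between positive semi-definite and not. A Python sketch with hypothetical numbers; the determinant of a symmetric 2x2 matrix is negative exactly when the matrix has a negative eigenvalue:)

```python
def det2(m):
    """Determinant of a symmetric 2x2 matrix. A negative value means
    one eigenvalue is negative, i.e. the matrix is not PSD."""
    return m[0][0] * m[1][1] - m[0][1] ** 2

# Hypothetical OMEGA estimate with correlation ~0.9999 (PSD as computed):
exact = [[0.0902, 0.08999], [0.08999, 0.0898]]
# The same matrix with the covariance rounded to 3 significant digits,
# as it might appear in the printed output:
rounded = [[0.0902, 0.0900], [0.0900, 0.0898]]

print(det2(exact))    # small but positive: still PSD
print(det2(rounded))  # negative: no longer PSD after rounding
```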
> Regarding Item 3, I think we are in agreement provided one has a
> strong enough prior presumably supported by other data. This I think
> is a reasonable alternative to my solution to the ill-conditioned
> Omega problem. I make the distinction between a strong prior supported
> by an independent set of data (perhaps data-rich healthy volunteer
> data) and fixing the correlation arbitrarily.
I think we agree here. When one has prior knowledge then it is
reasonable to use it in constructing a model. This is of course the
Bayesian philosophy which I find very appealing and have struggled to
apply using NONMEM.
It seems we differ because you believe that NONMEM is finding a pointer
to the truth buried in the data when it estimates a correlation close to
1 whereas I think it is a pointer to numerical nonsense.
Finally, thank you Peter Bonate for doing some experiments on this
issue, which seem to reveal that there is no consistent, reliable answer
to be obtained simply by reparameterisation. However, further work on
the parameterisation of random effects may well be fruitful. Stuart Beal
recently suggested a reparameterisation which was quite helpful in
working around one particular problem I was having, so I encourage
everyone to experiment (via simulation) as Peter has done.
Nick
--
Nick Holford, Divn Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/