RE: Standard errors of estimates for strictly positive parameters
Hi Aziz,
Just some comments off the top of my head, in a fairly informal way: I'm not
really sure that these are the same problem, because they don't start with the
same information in the form of parameter constraints. In model 1 you are
asking the optimizer for the unconstrained maximum likelihood solution for
TVCL. This is reasonable in many situations, but not necessarily in all of
them.
In model 2 you add information by forcing TVCL and CL to be positive. If you
think of the optimal solution as a point in N-dimensional space that has to be
searched for, in model 2 you are saying "don't even look in the region where
TVCL or CL is negative". Even more strongly, model 2 also says "don't even get
close to zero", because the log-normal density vanishes towards zero.
Which of these solutions is best for a particular application depends on many
things. One thing I would think about in this situation is whether my a priori
beliefs match the structural constraints of the model. Do I really think that
the "true" CL could be zero? If yes, then model 2 is hard to defend.
Your description of your situation regarding standard errors is part of the
same thing. When you extrapolate standard errors into low-probability regions,
you are probing the boundaries of the solution space. It should not be
surprising that model 1 might tell you that CL is negative, since this was part
of the solution space you allowed. With model 2, your model structure says
"don't even look there".
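To make this concrete, here is a small sketch (Python, with made-up numbers for the estimate and SE, not from any real model) of what draws from the two asymptotic sampling distributions look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimate and asymptotic SE for THETA(1);
# these numbers are purely illustrative.
theta_hat, se = 0.5, 0.3

# Model 1: TVCL = THETA(1). Draws come from a normal distribution,
# so some of them can land below zero.
tvcl_m1 = rng.normal(theta_hat, se, size=10_000)

# Model 2: TVCL = EXP(THETA(1)). Normal draws of THETA(1) become
# log-normal after the transform, so every draw is strictly positive.
tvcl_m2 = np.exp(rng.normal(np.log(theta_hat), se, size=10_000))

print((tvcl_m1 < 0).mean())  # a non-trivial fraction is negative
print((tvcl_m2 < 0).mean())  # none: the model never "looks there"
```

The exp transform is exactly the "don't even look there" statement expressed as a parameterization.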
In short, although these two models might look similar, I think they are really
quite different. This becomes most clear when you consider the low-probability
space.
Sorry for the vague language.
Warm regards,
Douglas
________________________________________
From: [email protected] [[email protected]] on behalf of
Chaouch Aziz [[email protected]]
Sent: Wednesday, February 11, 2015 5:21 PM
To: [email protected]
Subject: [NMusers] Standard errors of estimates for strictly positive parameters
Hi,
I'm interested in generating samples from the asymptotic sampling distribution
of population parameter estimates from a published PKPOP model fitted with
NONMEM. By standard asymptotic theory, the parameter estimates are
(multivariate) normally distributed (for unconstrained optimization) with mean
M and covariance C, where M is the vector of parameter estimates and C is the
covariance matrix of the estimates (returned by $COV and available in the .lst
file).
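Generating such samples is straightforward once M and C are in hand; a minimal sketch (the 2-parameter M and C below are hypothetical stand-ins for values read from a NONMEM .lst file):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical final estimates (M) and $COV covariance matrix (C);
# in practice these would be read from the .lst file.
M = np.array([1.2, 0.8])
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])

# Draws from the asymptotic (multivariate normal) sampling distribution
# of the parameter estimates.
samples = rng.multivariate_normal(M, C, size=5_000)
print(samples.mean(axis=0))  # close to M for a large sample
```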
Consider the 2 models below:
Model 1:
TVCL = THETA(1)
CL = TVCL*EXP(ETA(1))
Model 2:
TVCL = EXP(THETA(1))
CL = TVCL*EXP(ETA(1))
It is clear that model 1 and model 2 will provide exactly the same fit.
However, although in both cases the standard error of estimates (SE) will refer
to THETA(1), the asymptotic sampling distribution of TVCL will be normal in
model 1 while it will be lognormal in model 2. Therefore if one is interested
in generating random samples from the asymptotic distribution of TVCL, some of
these samples might be negative in model 1 while they'll remain nicely positive
in model 2. The same would happen with bounds of (asymptotic) confidence
intervals: in model 1 the lower bound of a 95% confidence interval for TVCL
might be negative (unrealistic) while it would remain positive in model 2.
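For model 2, the first-order delta method translates the SE of THETA(1) (which lives on the log scale) to the natural scale, and a confidence interval built on the log scale and back-transformed stays positive. A sketch with purely illustrative numbers:

```python
import numpy as np

# Hypothetical model-2 output: THETA(1) is log(TVCL).
theta_hat = np.log(2.0)  # so TVCL = exp(theta_hat) = 2.0
se_theta = 0.1           # hypothetical asymptotic SE of THETA(1)

tvcl_hat = np.exp(theta_hat)
# Delta method: SE(TVCL) ~= |d exp(theta)/d theta| * SE(theta).
se_tvcl = tvcl_hat * se_theta

# 95% CI constructed on the log scale, then back-transformed:
lo = np.exp(theta_hat - 1.96 * se_theta)
hi = np.exp(theta_hat + 1.96 * se_theta)
print(se_tvcl, lo, hi)  # lo is guaranteed positive
```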
This obviously has no impact on point estimates, or even on confidence
intervals constructed via non-parametric bootstrap, since boundary constraints
can be placed on parameters in NONMEM. But what if one is interested in the
asymptotic covariance matrix of estimates returned by $COV? The asymptotic
sampling distribution of parameter estimates is (multivariate) normal only if
the optimization is unconstrained! Doesn't this then speak in favour of model 2
over model 1? Or does NONMEM take care of it and return the asymptotic SE of
THETA(1) in model 1 on the log scale (when boundary constraints are placed on
the parameter)?
Thanks,
Aziz Chaouch
________________________________