From: Nick Holford <n.holford@auckland.ac.nz>
Subject: [NMusers] Re: FO vs FOCE vs LAPLACIAN
Date: Tue, 22 Jul 2003 15:45:40 +1200
Ken,
I am primarily interested in avoiding local minima (so that I can test model building
hypotheses) and in obtaining parameter estimates with minimal bias and imprecision. I agree
with you that success or failure of $COV probably does not help diagnose a local minimum
problem. I have no evidence to support this. But what about bias and imprecision?
I have a somewhat anecdotal but nevertheless evidence based comment on this.
I recently completed a PK model analysis using WT, AGE, SCR and SEX as
covariates (697 subjects, 2567 concs). The model did not run $COV; in fact
it didn't even minimize successfully. Other evidence convinced me it was not
far from an appropriate minimum, and because it had a more biologically sound
basis than its more successful neighbours, I preferred this model. I bootstrapped
the original data set using the preferred model and found 28% of 1055 bootstrap
runs minimized successfully and 7.1% ran the $COV step.
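For anyone wanting to try this kind of experiment, the resampling step itself is simple: each bootstrap replicate draws subjects with replacement (a subject's whole concentration record travels with it) and the model is re-fitted to each replicate. A minimal Python sketch, with names and setup that are purely illustrative, not the scripts actually used:

```python
import random

def bootstrap_subjects(subject_ids, seed=None):
    """Draw one bootstrap replicate: sample subjects with replacement.

    The replicate has the same number of subjects as the original data
    set; each sampled subject contributes all of its observations.
    """
    rng = random.Random(seed)
    return [rng.choice(subject_ids) for _ in subject_ids]

# Illustration with 697 subjects, as in the analysis above.
ids = list(range(1, 698))
replicate = bootstrap_subjects(ids, seed=1)
print(len(replicate))  # one replicate, 697 subjects
```

Each replicate is then written out as a data set and the preferred model is run on it, keeping whatever minimization and $COV status each run reports.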
The mean parameter estimates obtained from all bootstrap runs and from those which
ran the $COV step were all within 2% of each other. I conclude that a successful $COV
does not indicate lower bias compared with runs that do not minimize.
To assess imprecision I computed the ratio of the mean standard error from the $COV
successful runs to the bootstrap standard error obtained from all runs. For THETA:se
estimates the $COV SE was on average 3% smaller but for OMEGA:se the $COV SE was 58%
larger than the overall bootstrap SE. I conclude from this that the imprecision of
THETA:se was negligibly different when the $COV step was successful. The difference
in the OMEGA:se may reflect the intrinsic difficulty in obtaining estimates of OMEGA
and OMEGA:se. Perhaps the asymptotic assumptions involved in $COV produce an upward bias.
95% confidence intervals obtained from all the bootstrap runs were very similar to
those obtained from minimization successful and $COV successful runs. The 95% CI
predicted from the asymptotic SE was on average 21% larger (range 15-35%) than the
bootstrap CI.
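The two kinds of interval compared here can be made explicit: the bootstrap CI is read directly from the percentiles of the empirical distribution of estimates across runs, while the asymptotic CI is THETA plus or minus 1.96 times the SE reported by $COV. A sketch of both, for illustration only:

```python
def percentile_ci(estimates, level=0.95):
    """Nonparametric bootstrap CI: percentiles of the empirical
    distribution of bootstrap parameter estimates."""
    s = sorted(estimates)
    n = len(s)
    alpha = (1 - level) / 2
    return s[int(alpha * n)], s[int((1 - alpha) * n) - 1]

def asymptotic_ci(theta_hat, se, z=1.96):
    """CI implied by the asymptotic standard error from $COV."""
    return theta_hat - z * se, theta_hat + z * se
```

Comparing the widths of the two intervals, parameter by parameter, gives the kind of percentage differences quoted above.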
In order to explore the issue a bit further I simulated a data set using the mean
bootstrap parameter estimates from all runs. I then bootstrapped this simulated
data set (1772 runs). The minimization success rate was double (56%) that of the
original real data bootstrap runs and 12.5% ran $COV.
Because the true parameter values for the simulation are known the absolute bias
can be computed. Only 3 out of 29 parameters had an absolute bias larger than 10%. There
were negligible differences between the absolute bias using estimates from all runs,
minimization successful runs or $COV successful runs. This means the $COV step is not
a guide to reduced bias.
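Absolute bias here means the distance of the mean estimate from the known simulation value, expressed as a percentage of that value. A one-function sketch with made-up numbers:

```python
def absolute_bias_percent(estimates, true_value):
    """Percent absolute bias of the mean estimate relative to the
    known (simulated) true parameter value."""
    mean_est = sum(estimates) / len(estimates)
    return abs(mean_est - true_value) / abs(true_value) * 100

# Hypothetical example: mean estimate 1.05 against a true value of 1.0
print(round(absolute_bias_percent([0.9, 1.0, 1.1, 1.2], 1.0), 3))  # 5.0
```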
The imprecision pattern was similar with the simulated data but the magnitude of
differences between the mean $COV SE and the mean bootstrap SE were larger than
those seen with the original real dataset. For $COV SE the THETA:se estimates were
about 50% smaller while OMEGA:se were 400% larger than the bootstrap SE. There were
no real differences depending on whether all runs, minimization successful or $COV
successful runs were used ($COV successful runs tended to be a bit larger).
95% confidence intervals obtained from all the bootstrap runs on the simulated dataset
were very similar to those obtained from minimization successful and $COV successful
runs. The 95% CI predicted from the asymptotic SE was on average 22% larger (range
14-46%) than the bootstrap CI.
My conclusion from this empirical exploration of one data set and model is that
a successful $COV is of no value for selecting models with improved bias or
imprecision. It is a quicker way of obtaining some idea of the parameter 95% confidence
interval, but it is upwardly biased compared with the bootstrap estimate. I am not
typically interested in parameter CIs for every model I run. I am happy to leave that
until I have finished model building and prefer to rely on bootstrap CIs.
I think we are in agreement on almost all issues that you raise except for the diagnostic
value of the $COV in relation to the thing you call "stability". I don't know what
stability means so perhaps you would like to offer a definition and some evidence
for your assertion.
Nick
PS Just in case anyone else is tempted to try this kind of experiment, it took about
2 months of continuous operation on a 1.7 GHz Athlon MP2000 to do 2827 bootstraps. I'm still
waiting for a response from the journal editor about the MS describing the preferred model
so I had the time to do the computation while visiting PAGE etc.
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/