RE: $OMEGA blocks and log-likelihood profiling
From: Leonid Gibiansky lgibiansky@emmes.com
Subject: RE:[NMusers] $OMEGA blocks and log-likelihood profiling
Date: Thu, June 10, 2004 4:29 pm
Nick,
I do not have an opinion on the subject that you raised:
"The hypothesis that successful runs (+/- $COV) are better than all
bootstrap runs". However, the question of whether to accept "strongly
failed" bootstrap runs (that have parameters defined with zero significant
digits or with undefined OF) is interesting to discuss. Can someone offer
an example where a "strongly failed" NONMEM run was used in any paper or
regulatory submission? My guess would be no. I would simplify the model,
fix some parameters, or try any other trick to get reasonable
convergence.
I think the criteria for accepting the bootstrap runs should be
similar. We cannot accept "strongly failed" runs simply because they have
parameters similar to those that converged. They need to be treated as
failed, with undefined parameters. The next question is what to do with
those. One can try to push them to convergence from several starting points
(initial parameters). This would be my choice, but it needs to be automated.
Another option is to place them at the tails of the distribution: say, if 5%
of runs "strongly failed," then the most you can claim are bounds on the 5th
to 95th percentile interval (5% set aside for the failed runs). But this
might be too strict. The other option (actually, my favorite) is to look at
the bootstrap as a useful diagnostic of the problem, not a tool to get
confidence intervals with great precision. Then 5% of "strongly failed"
runs can be ignored and the rest used for approximate diagnostics,
investigation of the parameter distributions, publishing nice papers with
bootstrap figures, etc.
Have a nice trip to PAGE and Europe!
Leonid