RE: $OMEGA blocks and log-likelihood profiling
From: "Kowalski, Ken" Ken.Kowalski@pfizer.com
Subject: RE: [NMusers] $OMEGA blocks and log-likelihood profiling
Date: Thu, June 10, 2004 5:14 pm
Nick,
As I indicated in a previous message, I don't typically run bootstraps with
$COV. I really don't see the value in summarizing the bootstrap results
with versus without a successful COV step. The issue is the high convergence
failure rate and whether you can use the estimates from the failed
runs to provide valid inference via the empirical distribution generated
from your bootstrap samples. The COV step only comes into play as a
diagnostic, to provide insight into why you are getting such a high
convergence failure rate. Ideally, closer attention to the COV step output
during model building, to guide model selection, may help avoid a high
convergence failure rate when performing bootstrapping.

In the past you have indicated that you would much rather go right into
bootstrapping than take the time to get a successful COV step and review
its output for ill-conditioning during model building. That's fine, but
then you are more likely to encounter a high convergence failure rate
during bootstrapping, in which case you have more work on the back end to
verify that the failed runs can still be used to provide valid inference
regarding the uncertainty in the parameters. You may be fine with your one
example, but in general it's a slippery slope: in each case where you have
a high convergence failure rate, the burden will be on you to verify that
the empirical distributions for the failed and successful runs are
unchanged before pooling them.
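To make that back-end check concrete, here is a minimal sketch (my own illustration, not Ken's workflow) of comparing the empirical distributions of one parameter's bootstrap estimates from converged versus non-converged runs before pooling. It assumes the estimates have already been extracted from the NONMEM output into two arrays, uses a two-sample Kolmogorov-Smirnov test via scipy, and the parameter name and alpha level are hypothetical choices:

```python
# Sketch: check whether bootstrap estimates from failed runs can be pooled
# with those from successful runs. Assumes scipy/numpy; the CL example
# values and alpha=0.05 cutoff are illustrative, not from the original post.
import numpy as np
from scipy.stats import ks_2samp

def ok_to_pool(converged, failed, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on one parameter's bootstrap
    estimates. Returns True when there is no evidence (at level alpha)
    that the two empirical distributions differ."""
    stat, p = ks_2samp(converged, failed)
    return bool(p >= alpha)

# Synthetic stand-in for extracted bootstrap estimates of, say, CL:
rng = np.random.default_rng(1)
cl_converged = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=900)
cl_failed = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=100)

if ok_to_pool(cl_converged, cl_failed):
    # Distributions look comparable, so pool and take percentile CIs.
    pooled = np.concatenate([cl_converged, cl_failed])
    lo, hi = np.percentile(pooled, [2.5, 97.5])
```

In practice this check would be repeated for every parameter of interest; a single parameter whose failed-run distribution is shifted is enough to question pooling.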
If you had a successful COV step from your model fit on the original
dataset, which suggested that your model was stable, and you ended up with
>90% successful convergence in your bootstrap runs, then I couldn't care
less whether you even ran the COV step for the bootstrap runs.
Ken