RE: Successful minimization and covariance
Hi Ayyappa,
Some comments and suggestions:
1) Standard diagnostics suggesting that the model fits well say nothing
about whether the parameter estimates you've obtained are reasonably
accurate or precisely estimated. I don't know whether your model is
over-parameterized, but an over-parameterized model can still be a
good-fitting model.
2) Comparing bootstrap CIs between runs that converged, runs that
didn't, and runs with COV step failures does not establish that the
resulting bootstrap confidence intervals are valid in the sense of
having the proper coverage probabilities. The only way to assess the
validity of the bootstrap CIs is through simulation, where you know the
true values of the parameters and can check how often the bootstrap CIs
include them. This, of course, can be onerous to do in practice. If
your model has difficulty with convergence, has COV step failures, is
sensitive to starting values, etc., then you need to proceed with
caution. To this end, it is good practice to try to understand whether
there are any potential limitations in the design/data that may make it
difficult to estimate one or more parameters of your model. Trying
simpler models may help provide insight regarding the limitations of
your data/design, especially if you find that a simpler model fits
nearly as well as your chosen model.
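As a toy illustration of that coverage check (deliberately not a NONMEM
workflow, and with a simple lognormal "population" and sample sizes chosen
only for illustration), the sketch below simulates data from a known true
parameter, builds a percentile bootstrap CI, and counts how often the
interval actually contains the truth:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(sample, n_boot=500, level=0.95):
    """Percentile bootstrap CI for the mean of `sample`."""
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    boot_means = sample[idx].mean(axis=1)
    alpha = 100.0 * (1.0 - level) / 2.0
    lo, hi = np.percentile(boot_means, [alpha, 100.0 - alpha])
    return lo, hi

# True parameter: mean of a skewed lognormal(0, 1) population, exp(sigma^2/2)
true_mean = np.exp(0.5)

# Coverage check: how often does the nominal 95% CI contain the truth?
n_sim, n_obs, covered = 200, 50, 0
for _ in range(n_sim):
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=n_obs)
    lo, hi = bootstrap_ci(sample)
    covered += lo <= true_mean <= hi

print(f"empirical coverage: {covered / n_sim:.2f}")  # ideally near the nominal 0.95
```

If the empirical coverage falls well below the nominal level, the bootstrap
CIs are too narrow; this is exactly the failure mode that cannot be detected
by comparing CIs between converged and non-converged runs.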
3) You might try re-running your bootstrap, changing all the starting
values by 15-20%, and see whether you end up with similar confidence
intervals. It seems plausible to me that if your model is
over-parameterized, one or more of the parameters may not move much
from their starting values; the confidence intervals might then be too
narrow for some parameters, giving the false impression that you know
the estimate fairly precisely. Again, you wouldn't know whether the
confidence intervals were too narrow unless you did a simulation study
to assess the coverage probabilities.
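That perturbation step can be scripted; as a minimal sketch (the parameter
names and values below are hypothetical, not taken from the model under
discussion), each starting value is shifted up or down by a random 15-20%:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical final estimates (e.g., CL, V1, Q, V2, KA) from the original fit
theta = np.array([10.0, 50.0, 5.0, 100.0, 1.2])

def perturb(theta, low=0.15, high=0.20, rng=rng):
    """Shift each parameter up or down by a random fraction in [low, high]."""
    frac = rng.uniform(low, high, size=theta.shape)
    sign = rng.choice([-1.0, 1.0], size=theta.shape)
    return theta * (1.0 + sign * frac)

# Generate a few perturbed starting-value sets for repeated bootstrap runs
for run in range(3):
    print(f"run {run}:", perturb(theta).round(2))
```

Each printed vector would then be pasted in as the initial estimates for a
fresh bootstrap; stable CIs across these runs are reassuring, while CIs that
track the starting values are a warning sign.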
4) You could try a parametric simulation using your model and final
estimates: simulate conditional on the design of your dataset and then
re-estimate the parameters of your model on the simulated data, perhaps
for 10 subproblems. If you don't have any major design limitations
suggesting that your model is over-parameterized, then you would expect
no problems with convergence or COV step failures when fitting the
model to the simulated data, because there is no model misspecification
when you fit the same model you used for simulation. On the other hand,
if you do encounter convergence and COV step failures under this "ideal
setting," then you probably do need to look more closely at your
data/design and model to identify potential limitations.
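In NONMEM this would typically be a $SIMULATION record with SUBPROBLEMS=10
followed by re-estimation. Purely to illustrate the simulate-then-refit idea
outside NONMEM, here is a sketch with a hypothetical mono-exponential model
and made-up "final estimates" (re-estimation done by log-linear regression,
which is just a stand-in for a real estimation step):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical "final estimates" for C(t) = A * exp(-k * t);
# values and sampling times are made up for illustration.
A_true, k_true, cv = 100.0, 0.3, 0.15
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])

# Simulate-then-refit for 10 "subproblems": generate data from the model
# with lognormal residual error, then re-estimate A and k from each
# simulated dataset.
estimates = []
for _ in range(10):
    y = A_true * np.exp(-k_true * t) * np.exp(rng.normal(0.0, cv, size=t.size))
    slope, intercept = np.polyfit(t, np.log(y), 1)  # fit log C = log A - k*t
    estimates.append((np.exp(intercept), -slope))

est = np.array(estimates)
print("mean re-estimates (A, k):", est.mean(axis=0).round(3))
```

Because the fitting model matches the simulation model, all 10 refits should
succeed and recover the true values closely; repeated failures in the
analogous NONMEM exercise would point to data/design limitations rather than
model misspecification.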
Best,
Ken
Kenneth G. Kowalski
President & CEO
A2PG - Ann Arbor Pharmacometrics Group, Inc.
110 Miller Ave., Garden Suite
Ann Arbor, MI 48104
Work: 734-274-8255
Cell: 248-207-5082
Fax: 734-913-0230
[email protected]
www.a2pg.com
-----Original Message-----
From: [email protected] [mailto:[email protected]] On
Behalf Of [email protected]
Sent: Thursday, May 24, 2012 12:54 PM
To: [email protected]
Subject: Re: [NMusers] Successful minimization and covariance
Please let the group know if/how you resolve the problem.
Leonid
Original Message:
-----------------
From: Ayyappa Chaturvedula [email protected]
Date: Thu, 24 May 2012 12:27:03 -0400
To: [email protected]
Subject: Re: [NMusers] Successful minimization and covariance
Thank you for the suggestions. I will work on those. It is an orally
administered drug. We do have a good spread of data points to support a
two-compartment model, and prior knowledge of the drug supports this.
On May 24, 2012, at 12:21 PM, "[email protected]"
<[email protected]> wrote:
> Ayyappa,
> Since you already have bootstrap data, you may check a few things:
> 1. Take any of the runs with a successful covariance step and look at
> the RSEs. Are there any exceeding, say, 60-70%? If yes, these
> parameters are not supported by the data. If all RSEs look good, this
> would support the validity of the model.
> 2. Compute eigenvalues of the correlation matrix of the bootstrap
> parameter estimates. Roughly, values above 1000 (for the ratio of the
> maximum to the minimum of these eigenvalues) may indicate
> over-parameterization.
> 3. Create a scatter-plot matrix of bootstrap parameter estimates (an
> N-by-N matrix of plots where plot i-j is the 1000 parameter-i values
> plotted against the 1000 parameter-j values, or some variant of this
> diagnostic). If any of the parameters are strongly correlated, you
> will immediately see it on these plots.
>
> If none of these diagnostics reveals any suspicious behavior, I would
> accept the model as is.
>
> Another place to look is outliers. A few points with unrealistic
> concentrations may lead to minimization or COV failure. In your case,
> this is not likely since many bootstrap runs do not converge.
>
> Yet another place to look is your matrix of random effects: too many
> random effects may lead to non-convergence if these effects are not
> supported by the data.
>
> Do you have oral, IV, or mixed dosing? Was it coded with differential
> equations (ADVAN 6, 8, 9, 13) or as an exact solution (ADVAN 3-4)?
> Were there any covariate effects in the model, and if so, was there a
> sufficient range of data to support these effects?
>
> Leonid
>
>
>
> Original Message:
> -----------------
> From: Ayyappa Chaturvedula [email protected]
> Date: Thu, 24 May 2012 08:33:21 -0400
> To: [email protected]
> Subject: [NMusers] Successful minimization and covariance
>
>
> Dear Group,
> This is a topic that has been discussed, and different schools of
> thought exist, to my knowledge. But I want to restate my case and get
> some opinions. The question is how important it is to have successful
> minimization and covariance if the diagnostics make sense. I have
> developed a two-compartment model with Phase 3 trial data;
> minimization was successful but the covariance step was not.
> I went ahead and did a 1000-run bootstrap and wanted to get the
> confidence intervals of the parameters. 60% of the runs did not
> minimize successfully, and many others did not have a successful
> covariance step. I put together CIs from the runs with successful
> minimization and also from all 1000 runs. There is no difference in
> the parameter estimates or the confidence intervals (less than 5%
> change in the numbers). The model diagnostics look good, including
> VPC, NPDE plots, basic GOF, and a simulation to explain another
> trial's data.
> Now, my question is: in this particular case, do I have to worry
> further about making the covariance step succeed and increasing the
> number of runs that minimize successfully in the bootstrap, even
> though I cannot see much difference in the parameter estimates or
> diagnostics? My bottom line is not going to change in any way. I
> appreciate your expert opinions.
>
> Regards,
> Ayyappa