RE: Describing variability
From:"Kowalski, Ken"
Subject:RE: [NMusers] Describing variability
Date:Tue, 1 Apr 2003 14:13:00 -0500
I agree rounding errors are a bigger concern, but that still doesn't diminish
the importance of the $COV step and the diagnostics that a successful $COV
step provides (even if one doesn't plan to use the standard errors for
making inferences). Moreover, a successful $COV step is not an end in
itself. I've seen situations where an over-parameterized model resulted in
convergence and a successful $COV step, and yet NONMEM reported
correlations between some parameters of 1.000 (to three decimal places).
Such a model fit yields a numerically non-singular Hessian, but the Hessian is
extremely ill-conditioned even though the $COV step ran successfully. I
suspect the situation that Diane describes below may be an example of this.
Changing compilers (which may have different numerical accuracies), changing
starting values, etc., just to get the $COV step to run successfully should not be
the end goal. In this setting, chances are the model is still
ill-conditioned regardless of whether the $COV step ran. It is important to
inspect the correlation matrix and its eigenvalues (PRINT=E option on $COV)
to assess the stability of the model, rather than simply acknowledging that
the $COV step ran.
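To make that check concrete, here is a minimal sketch (in Python/NumPy, not
NONMEM) of how one might compute the eigenvalues and condition number of a
correlation matrix such as the one $COV prints. The matrix values and the
>1000 cutoff below are illustrative assumptions only, not output from the
run discussed here.

import numpy as np

# Hypothetical correlation matrix of the parameter estimates, e.g. as
# printed by $COV; the 0.999 entry mimics a near-perfect correlation
# between two parameters of an over-parameterized model.
corr = np.array([
    [1.000, 0.120, 0.999],
    [0.120, 1.000, 0.150],
    [0.999, 0.150, 1.000],
])

# Eigenvalues of the symmetric correlation matrix (the same quantities
# that PRINT=E asks NONMEM to report).
eigenvalues = np.linalg.eigvalsh(corr)

# Condition number = largest eigenvalue / smallest eigenvalue.
condition_number = eigenvalues.max() / eigenvalues.min()

print("eigenvalues:", np.round(eigenvalues, 6))
print("condition number: %.1f" % condition_number)

# A large condition number (a common rule of thumb is > 1000) signals an
# ill-conditioned fit even when the $COV step itself ran successfully.
if condition_number > 1000:
    print("Warning: model appears ill-conditioned")

With the hypothetical 0.999 correlation above, the smallest eigenvalue is
tiny and the condition number exceeds 1000, illustrating how a fit can pass
the $COV step and still be unstable.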
I know I come across as too rigid, but I'd rather err on that side than
dismiss the $COV step as if it were something we could do without. In
my opinion, ignoring $COV step failures should be the exception, not the rule.
Regards,
Ken