[Fwd: Re: Non-positive semi-definite message]
Date: Wed, 15 Aug 2001 17:23:45 -0400
From: Alan Xiao <Alan.Xiao@cognigencorp.com>
Subject: [Fwd: Re: Non-positive semi-definite message]
-------- Original Message --------
Subject: Re: Non-positive semi-definite message
Date: Wed, 15 Aug 2001 16:52:12 -0400
From: Alan Xiao <Alan.Xiao@cognigencorp.com>
To: bvatul <bvatul@ufl.edu>
Oh, boy,
I just tried the MATRIX=S option in $COV, and it took only 20 minutes (I
used $MSFI), compared to about 15-20 hours without this option (also using
$MSFI). And it passed the $COV step, and the results are consistent with
those of the models from the forward selection process (all of the base
models except the last few ran $COV, but without this option).
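For anyone following along, the change amounts to a single option on the covariance record; a sketch (the MSF file name here is hypothetical):

```
$MSFI run005.msf     ; restart from the model specification file
$COV MATRIX=S        ; use the inverse S (cross-product gradient) matrix
```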
Now, any suggestions on the exact explanation for this phenomenon?
Thanks a lot, Atul.
(Hi, Atul, I'll buy you a beer next time we meet - maybe at the AAPS
meeting?)
Alan.
Alan Xiao wrote:
> Dear Atul,
>
> Thanks a lot.
>
> Your third suggestion (MATRIX=S) is the first I have heard of using
> that option. Does this mean we bypass matrix R (or, more precisely, its
> inverse)? The NONMEM manual just says that "MATRIX=S requests that the
> inverse S matrix be used." In that case, does the statistical inference
> change? (R and S are two different matrices - the Hessian and the
> cross-product of the gradient.) Has anyone explored the differences -
> in parameter estimates, statistical inference, etc. - between MATRIX=R
> and MATRIX=S for a good model (one that does not produce the
> "non-positive ..." message)? Does this message occur only with matrix R
> (or, strictly, inverse R), or is it also possible with matrix S
> (inverse S)?
>
> I did try SIGDIG = 4, and the first round of results showed a rounding
> error. The second round of results (after changing the initial
> estimates) is not out yet (it takes more than 30 hours). I did not go
> as far as SIGDIG = 5. I'm wondering whether anyone has tried going even
> further, such as SIGDIG = 6 or 7 (if there is any reason to)?
>
> As for the first suggestion - minimizing the number of covariates -
> it's really helpful during the forward selection step (if using $COV),
> since it can flag redundant (or correlated) covariates depending on the
> magnitude of the p values. However, after backward elimination, ...
> Shouldn't the backward elimination process remove the redundant
> (correlated) covariates from the model? Or is the p-value cutoff simply
> not low enough (to remove redundant or correlated covariates)? If so,
> how low is low enough (generally, in industry) - a silly question,
> huh? - I use p = 0.001 right now. If not, does this mean we cannot
> (completely) rely on the backward elimination step based on the
> magnitude of the change in objective function value (under the
> chi-square distribution assumption)?
>
> Sorry for a lot of questions.
>
> Thanks,
>
> Alan.
>
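For reference, the p = 0.001 cutoff discussed above corresponds, under the chi-square assumption with 1 degree of freedom, to a drop in objective function value of about 10.83 for one added parameter. A quick way to check this with nothing but the Python standard library (a sketch, not NONMEM output; the function name is ours):

```python
from statistics import NormalDist

# Critical chi-square value for a likelihood-ratio cutoff at level p.
# With 1 degree of freedom, a chi-square variate is the square of a
# standard normal, so the critical value is the squared (1 - p/2)
# normal quantile -- no external packages needed.
def delta_ofv_cutoff(p: float) -> float:
    return NormalDist().inv_cdf(1 - p / 2) ** 2

print(round(delta_ofv_cutoff(0.001), 2))  # 10.83
print(round(delta_ofv_cutoff(0.01), 2))   # 6.63
```

The commonly quoted delta-OFV thresholds of 6.63 (p = 0.01) and 10.83 (p = 0.001) fall out of the same calculation.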
--
***** Alan Xiao, Ph.D ***************
***** PK/PD Scientist ***************
***** Cognigen Corporation **********
***** Tel: 716-633-3463 ext 265 ******