Why does covariance fail?

4 messages 4 people Latest: Jan 31, 2008

Why does covariance fail?

From: Mark Sale Date: January 29, 2008 technical
I'm thinking of doing a somewhat formal analysis of the meaning of a failed covariance step. Some years ago Stu Beal explained (as I recall) that if the covariance step fails you cannot be sure that the minimum isn't a saddle point, which makes sense to me and is consistent, I think, with the common message from NONMEM:

  R MATRIX ALGORITHMICALLY SINGULAR
  AND ALGORITHMICALLY NON-POSITIVE-SEMIDEFINITE
  R MATRIX IS OUTPUT
  0COVARIANCE STEP ABORTED

I'm also finding one in NONMEM VI that I don't recall from NONMEM V, and I don't know what it means:

  ERROR RMATX- 1

Then there are messages that seem to be related to conditional estimates:

  NUMERICAL HESSIAN OF OBJ. FUNC. FOR COMPUTING CONDITIONAL ESTIMATE
  IS NON POSITIVE DEFINITE
  MESSAGE ISSUED FROM COVARIANCE STEP

And NONMEM VI will refuse to even try the covariance step for various reasons:

  PARAMETER ESTIMATE IS NEAR ITS BOUNDARY
  THIS MUST BE ADDRESSED BEFORE THE COVARIANCE STEP CAN BE IMPLEMENTED

even, it seems, when the parameter estimate is nowhere near the boundary.

I'm thinking of looking at these various reasons the covariance step fails, and seeing whether any of them mean anything with respect to whether the model is "good" by some objective measure (PPC, NPDE, predictive check). My question is: is there any way to formally test whether the failure is due to a saddle point in the objective-function surface? My understanding is that the search algorithm NONMEM currently uses is very robust with respect to saddle points. So I suspect that the vast majority of failures are not due to a saddle, but rather to a fairly flat surface, with near-zero first and second derivatives that cause numerical problems inverting the matrix, rather than to an actual saddle point. If the surface is just fairly flat, not a saddle, then I think the answer is not "wrong", just not especially good, and other simulation-based tests of "goodness" might be just fine.
I suspect that you could test whether it is a saddle point by trying a slightly different value for each parameter (e.g., the "minimum" is 10, so try 9.9 and 10.1 and see if the OBJ is better), in each dimension. Would this work?

Thanks,
Mark

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
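Mark's univariate perturbation test can be sketched in a few lines of Python. The `ofv` callables, the toy surfaces, and the 1% step size below are assumptions for illustration only; in practice each evaluation would be a NONMEM run at fixed parameters (e.g., MAXEVAL=0).

```python
# Sketch of the univariate check: perturb each parameter up and down by a
# small fraction and see whether the objective function value (OFV) rises
# in every direction. A descent direction means the point is not a local
# minimum (a saddle or a ridge).

def looks_like_minimum(ofv, estimates, rel_step=0.01):
    """Return False if any +/- perturbation lowers the OFV."""
    base = ofv(estimates)
    for i, theta in enumerate(estimates):
        for step in (-rel_step, rel_step):
            trial = list(estimates)
            trial[i] = theta * (1 + step)
            if ofv(trial) < base:
                return False  # descent direction found
    return True

# Toy surfaces sharing a critical point at (10, 2): a bowl and a saddle.
bowl   = lambda p: (p[0] - 10)**2 + (p[1] - 2)**2
saddle = lambda p: (p[0] - 10)**2 - (p[1] - 2)**2

print(looks_like_minimum(bowl,   [10.0, 2.0]))  # True: OFV rises everywhere
print(looks_like_minimum(saddle, [10.0, 2.0]))  # False: descent along p[1]
```

One caveat with the axis-by-axis version: a saddle whose descent direction is not aligned with any single parameter axis can slip through, which is why a multivariate (random-direction) check is safer.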

Re: Why does covariance fail?

From: Leonid Gibiansky Date: January 29, 2008 technical
Hi Mark,

I am pretty confident (although I do not have a proof) that all models with failed covariance steps (or at least all reasonably good ones, meaning you invested some time trying to make them good) are not at saddle points, but are over-parameterized models with degenerate direction(s). Some exceptions could be related to problems with odd-type data, where I have less experience, so let's restrict the discussion to continuous-type data.

The check should be pretty easy. I would just evaluate the OF at 100-1000 random points in the vicinity of the solution (automated, similar to bootstrap runs). It is better to be random rather than univariate (parameter by parameter), so as to investigate all possible parameter-space directions. If you have bootstrap results, they could be used instead of this new check, since it is very unlikely that all bootstrap runs would stick to the saddle point rather than move along the gradient to a lower minimum. The answer seems obvious to me (not saddle points), but it would be interesting to see more definite results. Please update us if you see anything interesting.

Thanks
Leonid

P.S. As far as I remember, this message:

  PARAMETER ESTIMATE IS NEAR ITS BOUNDARY
  THIS MUST BE ADDRESSED BEFORE THE COVARIANCE STEP CAN BE IMPLEMENTED

can be given if some of the OMEGA or SIGMA elements (including the off-diagonal terms) are close to zero. You can block this check; ICON distributed the patch for it, see the archives and also http://www.cognigencorp.com/nonmem/current/2007-July/0335.html

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
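Leonid's random-vicinity check can be sketched as follows. Here `ofv`, the toy surfaces, the 5% radius, and the multiplicative perturbation are assumptions for illustration; each evaluation would really be an evaluation-only NONMEM run at the perturbed parameters.

```python
import random

# Evaluate the objective function at many random points near the reported
# solution, in random directions rather than parameter by parameter, so
# off-axis descent directions are not missed.

def random_vicinity_check(ofv, estimates, n_points=1000, rel_radius=0.05, seed=42):
    """Count random nearby points with a lower OFV than the solution."""
    rng = random.Random(seed)
    base = ofv(estimates)
    lower = 0
    for _ in range(n_points):
        trial = [th * (1 + rng.uniform(-rel_radius, rel_radius))
                 for th in estimates]
        if ofv(trial) < base:
            lower += 1
    return lower  # 0 suggests a genuine (possibly very flat) minimum

bowl   = lambda p: (p[0] - 10)**2 + (p[1] - 2)**2
saddle = lambda p: (p[0] - 10)**2 - (p[1] - 2)**2

print(random_vicinity_check(bowl,   [10.0, 2.0]))  # 0: no descent found
print(random_vicinity_check(saddle, [10.0, 2.0]))  # > 0: descent directions hit
```

A bootstrap run set, as Leonid notes, gives the same kind of evidence for free: starting near a saddle, at least some replicates would slide down the descent direction to a lower minimum.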

RE: Why does covariance fail?

From: Jeroen Elassaiss-Schaap Date: January 29, 2008 technical
Mark,

For the sake of completeness, I would also mention numerical instability as a reason for a failing covariance step. If a pair of parameters differs profoundly in numerical size, this can give rise to problems in the covariance step. The solution to this obstacle is to rescale the large or small parameter, or to log-transform it.

Best regards,
Jeroen

J. Elassaiss-Schaap
Clinical Pharmacology and Kinetics Scientist PK/PD
Organon, a part of Schering-Plough Corporation
PO Box 20, 5340 BH Oss, Netherlands
Phone: +31 412 66 9320
Fax: +31 412 66 2506
e-mail: [EMAIL PROTECTED]
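A minimal numeric illustration of Jeroen's point, under an assumed toy objective (not a NONMEM model): two parameters that differ by six orders of magnitude, each known to similar relative precision, give a Hessian whose diagonal spans roughly twelve orders of magnitude, so inverting it is fragile. Working with the log of each parameter rescales the curvatures to comparable size.

```python
import math

def obj(p):
    # Toy objective: each parameter determined to similar *relative*
    # precision, but the parameters differ by six orders of magnitude.
    return ((p[0] - 100.0) / 100.0)**2 + ((p[1] - 1e-4) / 1e-4)**2

def diag_curvature(f, p, rel=1e-4):
    """Central-difference d2f/dx_i2 along each axis, relative step size."""
    curv = []
    for i, x in enumerate(p):
        h = rel * abs(x)
        up, dn = list(p), list(p)
        up[i], dn[i] = x + h, x - h
        curv.append((f(up) - 2 * f(p) + f(dn)) / h**2)
    return curv

opt = [100.0, 1e-4]
raw = diag_curvature(obj, opt)
# raw is roughly [2e-4, 2e8]: a condition ratio near 1e12.

log_obj = lambda q: obj([math.exp(q[0]), math.exp(q[1])])
logp = diag_curvature(log_obj, [math.log(100.0), math.log(1e-4)])
# logp entries are both roughly 2: a condition ratio near 1.
```

The same effect can be had without reparameterizing, by scaling the parameter inside the model so the estimated THETA is of order one.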
Quoted reply history
-----Original Message-----
From: Mark Sale - Next Level Solutions
Sent: Tuesday, 29 January, 2008 19:41
Cc: [email protected]
Subject: RE: [NMusers] Why does covariance fail?

Leonid,

Thanks, that makes sense. Overall, I think we now have better tools for pretty much everything the covariance step is supposed to do (give confidence in the model, which should be done with PPC or NPDE; provide standard errors and estimation correlations, better done with bootstrap), mostly because we have faster and parallel computers. I'm hoping to provide some justification for the (very, very rare, of course) occasions when I like a model that fails the covariance step. Ideally, I could say "if the covariance step error is XXX, then the model is still OK if it passes this test". Your sampling (I think technically it is called hypercube sampling) to generate a multi-dimensional likelihood profile makes sense.

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
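The sampling Mark refers to is usually called Latin hypercube sampling (LHS): each parameter's range is cut into n equal strata and each stratum is used exactly once, so n points cover every one-dimensional margin evenly. A minimal stdlib sketch; the bounds here are hypothetical (e.g., a window around the final estimates).

```python
import random

def latin_hypercube(bounds, n, seed=0):
    """Return n points; in each dimension, one draw per equal-width stratum."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        width = (hi - lo) / n
        # One uniform draw inside each of the n strata, then shuffle so the
        # strata of different dimensions are paired at random.
        strata = [lo + (k + rng.random()) * width for k in range(n)]
        rng.shuffle(strata)
        dims.append(strata)
    return [tuple(d[i] for d in dims) for i in range(n)]

# Hypothetical windows around two final estimates of 10 and 2.
pts = latin_hypercube([(8.0, 12.0), (1.6, 2.4)], n=10)
# Each of the 10 strata in each dimension contains exactly one point.
```

Each point would then be fed to an evaluation-only run to build the multi-dimensional likelihood profile Mark describes.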

RE: Why does covariance fail?

From: Jurgen Bulitta Date: January 31, 2008 technical
Dear Dr Sale,

I agree with what you, Leonid, et al. wrote, and wanted to add two comments and ask a question.

1) There may be situations when one cannot apply the bootstrap to obtain confidence intervals. Such situations may occur more often in PD than in PK. Assume there are, e.g., two experimental runs of some PD profile, recorded under, say, 10 different experimental conditions. Each set of experimental conditions provides information on some of the model parameters, but no single experimental condition provides information on all parameters simultaneously. A bootstrap stratified by experimental condition seems most appropriate. However, sampling with replacement from two replicates makes little sense, as two replicates are not representative of the whole population of profiles for each experimental condition.

2) Suppose you plan to implement an algorithm like: if $COV in NONMEM fails with error NNN, then the model still passes if condition ZZZ is fulfilled. This sounds specific to NONMEM. Other algorithms/programs, such as the MC-PEM algorithm (e.g., in S-ADAPT), seem to always provide standard errors, and, as far as I am aware, WinBUGS naturally provides credibility intervals.

My question: are you planning to devise a program-independent strategy for accepting / rejecting / revising models for which meaningful confidence intervals are difficult to obtain?

Thank you & best regards,
Juergen

-----------------------------------------------
Juergen Bulitta, PhD, Post-doctoral Fellow
Pharmacometrics, University at Buffalo, NY, USA
Phone: +1 716 645 2855 ext. 281, [EMAIL PROTECTED]
-----------------------------------------------
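The stratified bootstrap Jurgen describes can be sketched as follows; the data layout and names below are assumptions for illustration. Resampling with replacement happens within each experimental condition, so every replicate keeps the original strata and per-stratum sizes. With only two runs per condition, each stratum admits only three distinct multisets of runs, which is exactly the degeneracy he points out.

```python
import random

def stratified_bootstrap(runs_by_condition, rng):
    """One bootstrap replicate: within-condition resampling with replacement."""
    return {cond: [rng.choice(runs) for _ in runs]
            for cond, runs in runs_by_condition.items()}

# Hypothetical design: 10 experimental conditions, two runs each.
rng = random.Random(1)
data = {f"cond{k}": [f"run{k}a", f"run{k}b"] for k in range(1, 11)}
rep = stratified_bootstrap(data, rng)
# rep has the same conditions and per-condition sample sizes as data,
# but each condition's runs are drawn with replacement from its own two.
```

The model would then be refit to each such replicate; the spread of the refitted estimates gives the bootstrap confidence intervals, with the caveat Jurgen raises about two-replicate strata.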