RE: Slow PRED in tables?

From: Robert Bauer  Date: January 30, 2014  Source: mail-archive.com
Bob: That is correct. The description for WRESCHOL is given in nm730.pdf as follows:

WRESCHOL (NM73) Normally, population and individual weighted residuals are evaluated by the square root of the eigenvalues of the population or individual residual variance. However, an alternative method is to Cholesky decompose the residual variance (suggested by France Mentre, personal communication) by entering the WRESCHOL option. This should be specified only on the first $TABLE command. The Cholesky form has the property of sequentially decorrelating each additional data point in the order of the data set.

The standard decomposition method used in NONMEM for WRES is singular value decomposition (SVD), which for symmetric matrices is equivalent to the method of obtaining the eigenvalue and eigenvector matrices. The eigenvalues are then square-rooted, and the "square root of the matrix" is constructed from them and the original eigenvector matrix. The SVD method can be severalfold slower than the Cholesky decomposition method. However, there was also some inefficient code that constructed the square root matrix after the SVD step, which caused Douglas's problem to run several hundred times longer than necessary, due to the very large number of data records per subject in his problem. That code has been made more efficient and will be available in NONMEM 7.4. Meanwhile, the WRESCHOL option has efficient code throughout and is available in NM 7.3, and can be used as an alternative method; although, as Bob said, these WRES values will not map one-to-one to those from the SVD method, they should have the usual statistical properties of decorrelated samples.

Robert J. Bauer, Ph.D. Vice President, Pharmacometrics, R&D ICON Development Solutions 7740 Milestone Parkway Suite 150 Hanover, MD 21076 Tel: (215) 616-6428 Mob: (925) 286-0769 Email: [email protected] Web: www.iconplc.com
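The two routes Bauer contrasts can be sketched in a few lines of NumPy. This is a hypothetical illustration on toy data, not NONMEM's actual code: it builds the eigendecomposition-based square root C^(-1/2) (the WRES route) and the inverse Cholesky factor (the WRESCHOL route), and checks that each one decorrelates and normalizes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical residual covariance matrix C for one subject
# (symmetric positive definite; NOT taken from NONMEM itself).
A = rng.standard_normal((6, 6))
C = A @ A.T + 6 * np.eye(6)

# Eigen/SVD route used for WRES: square-root the eigenvalues and
# rebuild the matrix: C^(-1/2) = U diag(lambda)^(-1/2) U'.
lam, U = np.linalg.eigh(C)
W_svd = U @ np.diag(lam ** -0.5) @ U.T

# Cholesky route used for WRESCHOL: factor C = L L', whiten with inv(L).
L = np.linalg.cholesky(C)
W_chol = np.linalg.inv(L)

# Both matrices whiten: W C W' = I in either case.
ok_svd = np.allclose(W_svd @ C @ W_svd.T, np.eye(6))
ok_chol = np.allclose(W_chol @ C @ W_chol.T, np.eye(6))
print(ok_svd, ok_chol)  # True True
```

Because W_chol is lower triangular, the i-th Cholesky-whitened residual depends only on the first i data points, which is the "sequential decorrelation in data-set order" property the nm730.pdf text mentions.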
-----Original Message----- From: Bob Leary [mailto:[email protected]] Sent: Wednesday, January 29, 2014 7:03 PM To: Bauer, Robert; Eleveld, DJ; [email protected] Subject: RE: Slow PRED in tables?

Doug and Bob, This is a point that I have long been interested in. For each individual, the original WRES in NONMEM is based on a Schur decomposition of the estimated covariance matrix of the residual vector (I have verified this numerically using MATLAB). Thus if the individual covariance matrix of the residuals is C, and the residual vector is r, WRES is defined as C^(-1/2)*r, where C = U*diag(lambda)*U', the columns of U are the eigenvectors of C, and lambda is the vector of eigenvalues. The main point of the computation is that the covariance of C^(-1/2)*r is a unit matrix, where C^(-1/2) = U*diag(lambda)^(-1/2)*U' has the rough interpretation of being a square root of C^(-1). Further diagnostic analysis of the components of WRES treats them as if they were independent N(0,1) random variables, since they are indeed uncorrelated and normalized to have unit variance (although the normality assumption is doubtful for highly nonlinear models).

Thus the computation of WRES in NM 7.2 requires the computation of all eigenvalues and eigenvectors of the C matrix, which for large matrices can start to be expensive (although the best LAPACK implementation of a symmetric positive definite eigendecomposition uses Jim Demmel's O(N^1/3) routine and may be considerably faster than what you are currently seeing in NM 7.2; I have no way of knowing what NM 7.2 actually uses). But in any event, all the WRES computation is doing is decorrelating and normalizing the residuals. Any matrix W such that W*r has a covariance matrix equal to the unit matrix will do. In particular, choosing W = the lower triangular Cholesky decomposition of C^(-1) works just fine and is much faster to compute than a Schur eigendecomposition.
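Leary's claim that any whitening matrix W with W*r of unit covariance "will do" can be checked numerically. The NumPy sketch below (toy data, not NONMEM output) computes the symmetric square-root WRES and a Cholesky-style WRES for the same residual vector: the components differ, but both choices satisfy W'W = C^(-1), so the sum of squared weighted residuals r'C^(-1)r is identical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy covariance C and residual vector r (illustrative only).
A = rng.standard_normal((5, 5))
C = A @ A.T + 5 * np.eye(5)
r = rng.standard_normal(5)

# Symmetric square-root whitening: WRES = C^(-1/2) r.
lam, U = np.linalg.eigh(C)
wres = U @ np.diag(lam ** -0.5) @ U.T @ r

# Lower-triangular Cholesky whitening: C = L L', WRESCHOL-style = inv(L) r.
L = np.linalg.cholesky(C)
wres_chol = np.linalg.solve(L, r)

# The two vectors differ component by component (in general)...
print(np.allclose(wres, wres_chol))  # False

# ...but both satisfy W'W = C^(-1), so the total sum of squares
# r' C^(-1) r is the same for either choice of whitening.
print(np.isclose(wres @ wres, wres_chol @ wres_chol))  # True
```

This is the numerical face of Bauer's remark that WRESCHOL values "will not map one-to-one" to the SVD-based WRES yet keep the usual statistical properties.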
Based on the name, I assume that WRESCHOL is just WRES implemented with such a Cholesky decomposition. Which is better, WRES or WRESCHOL? That is a difficult question without further information. It does point out that the details of the WRES results, no matter how they are computed, should be taken with a large grain of salt (numerically, WRESCHOL will look totally different from WRES, yet both are equally valid decorrelations/normalizations). Electrical engineers have long been aware of this decorrelation/normalization ambiguity. Basically, if C^(-1/2) is any matrix that decorrelates and normalizes the residuals, so is U*C^(-1/2), where U is any orthogonal matrix (U'*U = I). Thus there are infinitely many choices for computing a reasonable WRES for further diagnostic analysis. In the electrical engineering community, any matrix of the form U*C^(-1/2) is called a whitening matrix, since it transforms correlated data into 'whitened', decorrelated data with unit variances. To decide which decorrelation is best, one must impose an optimization criterion that picks out a particular best whitening matrix. This is not so easy to do, but it has given rise to a large literature under the general name of 'projection pursuit' methods. I have long suspected there may be something here worth looking into with respect to population PK/PD model evaluation techniques, but this is still an open question.

________________________________________ From: [email protected] [[email protected]] On Behalf Of Bauer, Robert [[email protected]] Sent: Wednesday, January 29, 2014 3:34 PM To: Eleveld, DJ; [email protected] Subject: [NMusers] RE: Slow PRED in tables?

Douglas: Thank you for sharing your code and data with me. When PRED, WRES, or RES are requested, NONMEM calls the weighted residual routine PRRES, which calculates PRED, WRES, and RES together.
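Leary's orthogonal-matrix ambiguity is easy to demonstrate: multiply any whitening matrix by any orthogonal Q and the product still whitens, since (QW)C(QW)' = Q(WCW')Q' = QQ' = I. A small NumPy sketch of this (all data illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative covariance and its symmetric inverse square root.
A = rng.standard_normal((4, 4))
C = A @ A.T + 4 * np.eye(4)
lam, Ueig = np.linalg.eigh(C)
W = Ueig @ np.diag(lam ** -0.5) @ Ueig.T   # one valid whitening matrix

# Any orthogonal Q (here from a QR factorization of a random matrix)
# gives another equally valid whitening matrix Q @ W.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
W2 = Q @ W

ok = np.allclose(W2 @ C @ W2.T, np.eye(4))
print(ok)  # True
```

The symmetric square root and the inverse Cholesky factor are just two points in this infinite family, related by exactly such an orthogonal rotation.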
Your problem has a large number of data records per subject (up to 1544), and evaluating WRES requires some matrix algebra on up to a 1544x1544 matrix, which can take time to compute for each subject. The standard method of calculating WRES in NONMEM 7.3 and lower versions is inefficient; I have made it more efficient for the future NONMEM 7.4, with which your $TABLE step calculates in 15 minutes. Meanwhile, in the recently released NONMEM 7.3, you can use the WRESCHOL option, with which the $TABLE step was calculated in 3 minutes. Robert J. Bauer, Ph.D. Vice President, Pharmacometrics, R&D ICON Development Solutions 7740 Milestone Parkway Suite 150 Hanover, MD 21076 Tel: (215) 616-6428 Mob: (925) 286-0769 Email: [email protected] Web: http://www.iconplc.com/

From: [email protected] [mailto:[email protected]] On Behalf Of Eleveld, DJ Sent: Tuesday, January 28, 2014 9:19 AM To: [email protected] Subject: [NMusers] Slow PRED in tables?

Hello everyone, I have a curious problem with slow PRED calculations in tables. The estimations are reasonably fast, 471 seconds for 23 iterations. If there is no PRED in any table, then NONMEM finishes a moment after the message about elapsed estimation time. But if PRED is in a table, then it takes an extremely long time to return; I waited more than 10 hours before I broke off the run. Since convergence of the algorithm is reasonably fast, I get the impression that the differential equations are not problematic (it is $DES solved with ADVAN13 TOL=9). In fact, if I restart the run with all $OMEGA set to (0 FIXED), to force all ETAs to zero, and do a MAX=0 run, then it is quite fast. I would expect calculating PRED to be simply setting ETA=0 and solving the differential equations just once. Am I missing anything in how PRED in a table is calculated? Has anyone seen this kind of thing before? Any advice or tips? Warm regards, Douglas Eleveld ________________________________
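The severalfold speedup Bauer reports comes from replacing a full eigendecomposition with a Cholesky factorization plus a triangular solve. A rough, machine-dependent timing sketch of the two routes on a matrix roughly the size of Eleveld's largest subject (sizes and timings here are illustrative, not NONMEM measurements):

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# A matrix on the order of the 1544x1544 case described above;
# n = 1000 keeps the sketch quick while showing the same trend.
n = 1000
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)
r = rng.standard_normal(n)

t0 = time.perf_counter()
lam, U = np.linalg.eigh(C)            # full eigendecomposition
wres = U @ ((U.T @ r) * lam ** -0.5)  # apply C^(-1/2) to r
t_eig = time.perf_counter() - t0

t0 = time.perf_counter()
L = np.linalg.cholesky(C)             # Cholesky factor C = L L'
wres_chol = np.linalg.solve(L, r)     # triangular-style solve: inv(L) r
t_chol = time.perf_counter() - t0

print(f"eigen route: {t_eig:.3f}s, Cholesky route: {t_chol:.3f}s")
# Both whitenings agree on the sum of squared weighted residuals.
print(np.isclose(wres @ wres, wres_chol @ wres_chol))  # True
```

On typical hardware the Cholesky route is several times faster, and the gap grows with n; NONMEM repeats this per subject, which is why per-subject data counts in the thousands made the $TABLE step so slow.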
Jan 28, 2014 Doug J. Eleveld Slow PRED in tables?
Jan 29, 2014 Robert Bauer RE: Slow PRED in tables?
Jan 30, 2014 Bob Leary RE: Slow PRED in tables?
Jan 30, 2014 Doug J. Eleveld RE: Slow PRED in tables?
Jan 30, 2014 Robert Bauer RE: Slow PRED in tables?
Feb 03, 2014 Jerry Nedelman RE: Slow PRED in tables?
Feb 03, 2014 Bob Leary RE: Slow PRED in tables?
Feb 04, 2014 Alison Boeckmann Re: RE: Slow PRED in tables?