Re: Different EBE estimation between original and enriched dataset with MDV=1

From: Leonid Gibiansky
Date: November 26, 2012
Source: mail-archive.com
Hi Pascal,

You may want to switch to ADVAN13. It is much more stable for stiff problems, and may allow you to increase TOL.

Thanks
Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
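[Editor's note: Leonid's point about integrator precision can be illustrated outside NONMEM. The sketch below uses a hypothetical stiff ODE, not Pascal's model; it shows how the relative tolerance of a stiff-capable integrator (here SciPy's LSODA, loosely analogous to raising TOL or moving from ADVAN6 to ADVAN13) affects the computed solution.]

```python
# Illustrative sketch (not NONMEM code): solve a hypothetical stiff ODE
#   y' = -1000*y + 3000 - 2000*exp(-t),  y(0) = 0
# with a loose and a tight relative tolerance and compare the endpoints.
import numpy as np
from scipy.integrate import solve_ivp

def stiff_rhs(t, y):
    # Fast transient (rate constant 1000) coupled to a slow forcing term.
    return [-1000.0 * y[0] + 3000.0 - 2000.0 * np.exp(-t)]

t_span = (0.0, 4.0)
y0 = [0.0]

loose = solve_ivp(stiff_rhs, t_span, y0, method="LSODA", rtol=1e-3, atol=1e-6)
tight = solve_ivp(stiff_rhs, t_span, y0, method="LSODA", rtol=1e-10, atol=1e-12)

# Analytic solution: y = 3 - (2000/999)*exp(-t) - (997/999)*exp(-1000*t)
exact_end = 3.0 - (2000.0 / 999.0) * np.exp(-4.0)
print(loose.y[0][-1], tight.y[0][-1], exact_end)
```

With the loose tolerance the endpoint can drift from the true value in the later digits, which is exactly the kind of shift that different step-size histories (e.g. extra dummy records) expose.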
On 11/26/2012 2:43 PM, [email protected] wrote:
> Dear All,
>
> Thanks for your detailed responses and tricks. I am trying to address
> each of them after several rounds of trial and error with your suggestions:
>
> 1) I have only time-invariant covariates. But thanks to Robert and
> Bill for mentioning it. I will remember!
>
> 2) I did not use EVID=2 for my dummy times. Now I am using it, but
> it does not help.
>
> 3) Starting from non-optimized parameters rather than $MSFI, as suggested
> by Joachim, does not help. But I like your explanation. Nevertheless I
> can't live with "the differences [...] within the range you would also
> find if you did a bootstrap", since those differences change the profiles
> I observe.
>
> 4) The nice trick suggested by Heiner (after the last time point of an
> ID you may add a line with EVID=3 (reset event) with TIME set beyond
> the last data point of the ID of interest) may work, but would
> probably be too complex to implement for my special dataset since I have
> a long history of unevenly spaced dosing. But thanks, Heiner, I will
> also remember this one.
>
> 5) Increasing the TOL is the only thing that improves the prediction.
> Thanks Leonid, you are right when you write "the problem is in the
> precision of the integration routine". But with the data I have, I
> cannot increase it beyond 8. By the way, in my model I am estimating the
> initial condition at baseline in one of my compartments using a random
> effect. When the slope after baseline is large, I get almost no
> bias. But when the slope is moderate, the prediction bias with dummy
> points appears, and it increases as the slope decreases. This
> probably confirms the issue with the precision of the integration routine.
>
> 6) The only solution, which I mentioned in my 1st e-mail and which was
> also suggested by Jean Lavigne: one separate run for the estimation of
> the EBEs and one for the simulation on dummy time points.
>
> 7.2) Thanks Robert.
> I am glad to learn that in 7.3 there will be an
> option to automatically "fill in extra records with small time
> increments, to provide smooth plots". I imagine that using this
> utility program will not change the precision of the integration routine
> since it will be built in. I will just have to wait a little bit to
> get access to it.
>
> Kind regards,
>
> Pascal
>
> PS
> As someone who used to live by Lake Leman would have said: NONMEM,
> sometimes, "It's a kind of magic!" :-)
>
> From: Herbert Struemper <[email protected]>
> To: "[email protected]" <[email protected]>
> Date: 26/11/2012 16:13
> Subject: RE: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
> Sent by: [email protected]
> ------------------------------------------------------------------------
>
> Pascal,
> I had the same issue a while ago with time-invariant covariates. Back
> then with NM6.2, adding an EVID column to the data set and setting
> EVID=2 for additional records preserved the ETAs of the original
> estimation (while only setting MDV=1 for additional records did not).
> Herbert
>
> Herbert Struemper, Ph.D.
> Clinical Pharmacology, Modeling & Simulation
> GlaxoSmithKline, RTP, 17.2230.2B
> Tel.: 919.483.7762 (GSK-Internal: 7/8-703.7762)
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> On Behalf Of Bauer, Robert
> Sent: Sunday, November 25, 2012 9:11 PM
> To: Leonid Gibiansky; [email protected]
> Cc: [email protected]
> Subject: RE: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
>
> Pascal:
> There is one more consideration. If your model depends on the use of
> covariate data, then during the numerical integration from time t1 to
> t2, where t1 and t2 are times of two contiguous records with covariate
> values c1 and c2, respectively, NONMEM uses the covariate at time t2
> (call it c2) during the interval from t>t1 to t<=t2.
> During your original estimation, your data records were, perhaps, as an
> example:
>
> Time  covariate  MDV
> 1.0   1.0        0
> 1.5   2.0        0
>
> With the filled-in data set, perhaps you filled in the covariates as
> follows:
>
> Time  covariate  MDV
> 1.0   1.0        0
> 1.25  1.0        1
> 1.5   2.0        0
>
> Or perhaps you made an interpolation for the covariate at the inserted
> time of 1.25, to be 1.5. But NONMEM made the following equivalent
> interpretation during your original estimation:
>
> Time  covariate  MDV
> 1.0   1.0        0
> 1.25  2.0        1
> 1.5   2.0        0
>
> That is, when the time record 1.25 was not there, it supplied the
> numerical integrator with the covariate value of 2.0 for all times from
> >1.0 to <=1.5, as stated earlier.
>
> Even though MDV=1 on the inserted records, NONMEM simply does not
> include the DV of those records in the objective function evaluation, but
> will still use the other information for simulation; by simulation I
> mean, for the numerical integration during estimation.
>
> In short, your model has changed regarding the covariate pattern based
> on the expanded data set.
>
> By the way, there is a utility program called finedata in the NONMEM 7.3
> beta that facilitates data record filling, with options on how to fill
> in covariates. I will send an e-mail about this shortly.
>
> If you are not using covariates in the manner I described above, then
> please ignore my lengthy explanation.
>
> Robert J. Bauer, Ph.D.
> Vice President, Pharmacometrics, R&D
> ICON Development Solutions
> 7740 Milestone Parkway
> Suite 150
> Hanover, MD 21076
> Tel: (215) 616-6428
> Mob: (925) 286-0769
> Email: [email protected]
> Web: www.iconplc.com
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> On Behalf Of Leonid Gibiansky
> Sent: Friday, November 23, 2012 12:15 PM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
>
> Hi Pascal,
> I think the problem is in the precision of the integration routine. With
> extra points, you change the ODE integration process and the results. I
> would use TOL=10 or higher in the original estimation. I have seen cases
> where changing TOL from 6 to 9 or 10 changed the outcome quite
> significantly.
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
> On 11/23/2012 11:08 AM, [email protected] wrote:
> > Dear NM-User community,
> >
> > I have a model with 2 differential equations and I use ADVAN6 TOL=5.
> > In $DES, I am using T, the continuous time variable. The run converges,
> > $COV is OK, and the model gives a reasonable fit. In order to compute
> > some statistics which cannot be obtained analytically, I need to
> > compute individual predictions based on individual POSTHOC parameters
> > and an extended grid of times interpolating the observed times.
> >
> > So I have
> > 1) added to my original dataset extra points, regularly spaced, with
> > MDV=1. To give you an idea, my average observation time is 25, with a
> > range going from 5 to 160. So my grid was set so that I have a dummy
> > observation every 1 unit of time.
> > 2) rerun my model using $MSFI to initialize the pop parameters, with
> > the MAXEVAL=0 and POSTHOC options, so that individual empirical Bayes
> > estimate (EBE) parameters for each patient would first be
> > re-estimated, and then the predictions would be computed.
> >
> > Then I
> > 3) checked that my new predictions computed from the extended dataset
> > match the predictions of the original dataset at the observed time
> > points. I was surprised to see that for some individuals those
> > predictions match, for some others they slightly diverge, and for a
> > few others they are dramatically different. I checked the EBEs and
> > they were clearly different between the original dataset and the one
> > with the dummy points.
> > 4) I decided to redo the grid with only one dummy point every 1/4
> > time unit. The result was less dramatic, but still for most of my
> > individuals the EBE predictions diverged from the original ones
> > computed without the dummy times.
> >
> > Of course the solution for me is to estimate the EBEs from the
> > original dataset, export them in a table, and reread them to initialize
> > the parameters of my individuals using only dummy time points and no
> > observations.
> >
> > This problem reminds me of something that was discussed previously on
> > nm-users, but I could not recover the source in the archive.
> >
> > Anyway, is it something known and predictable that when adding dummy
> > points with MDV=1 to your original dataset you sometimes get very
> > different EBEs? Are there cases/models/ADVANs where the problem is
> > likely to happen? Is there a way to fix it in NONMEM other than the
> > trick I used?
> >
> > Thanks for your replies!
> >
> > Kind regards,
> >
> > Pascal Girard, PhD
> > [email protected]
> > Head of Modeling & Simulation - Oncology Global Exploratory Medicine
> > Merck Serono S.A.
> > Geneva
> > Tel: +41.22.414.3549
> > Cell: +41.79.508.7898
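[Editor's note: Robert Bauer's next-record covariate rule can be mimicked in a few lines. The sketch below is plain Python with a hypothetical record layout, not NONMEM code; it reproduces his worked example and shows why the inserted 1.25 record changes the covariate the integrator sees on (1.0, 1.5].]

```python
# Sketch of the next-record covariate rule: during integration from t1 to t2,
# the covariate value of the t2 record applies on (t1, t2].
# Records are (TIME, covariate) pairs for one subject; layout is illustrative.
import bisect

def covariate_at(records, t):
    """Return the covariate the integrator would use at time t
    (next-record rule: first record with TIME >= t supplies the value)."""
    times = [rec[0] for rec in records]
    i = bisect.bisect_left(times, t)
    i = min(i, len(records) - 1)   # past the last record, carry the last value
    return records[i][1]

original = [(1.0, 1.0), (1.5, 2.0)]                # Bauer's first layout
filled   = [(1.0, 1.0), (1.25, 1.0), (1.5, 2.0)]   # with an inserted record

# At t = 1.2 the original layout supplies covariate 2.0 (from the 1.5 record),
# while the filled layout supplies 1.0 (from the inserted 1.25 record):
print(covariate_at(original, 1.2))  # 2.0
print(covariate_at(filled, 1.2))    # 1.0
```

So even with MDV=1, the inserted record changes the covariate step function driving the ODE, which is the model change Bauer describes.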
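[Editor's note: the record-filling step Pascal describes (a dummy row every time unit, with MDV=1 and, per Herbert's suggestion, EVID=2) can be sketched as below. This is a minimal illustration assuming a simple one-subject record layout; the column set and the `fill_grid` helper are hypothetical, and the finedata utility in the NONMEM 7.3 beta automates the real task.]

```python
# Hypothetical sketch: insert dummy prediction rows (MDV=1, EVID=2) on a
# regular time grid between the first and last record of one subject.
def fill_grid(records, step=1.0):
    """records: time-sorted list of dicts with keys TIME, DV, MDV, EVID.
    Returns a new time-sorted list; original observation rows are unchanged."""
    existing_times = {rec["TIME"] for rec in records}
    out = list(records)
    t = records[0]["TIME"]
    while t <= records[-1]["TIME"]:
        if t not in existing_times:
            # DV is a placeholder: MDV=1 keeps it out of the objective function
            out.append({"TIME": t, "DV": 0.0, "MDV": 1, "EVID": 2})
        t += step
    return sorted(out, key=lambda rec: rec["TIME"])

obs = [
    {"TIME": 0.0, "DV": 10.0, "MDV": 0, "EVID": 0},
    {"TIME": 5.0, "DV": 4.2,  "MDV": 0, "EVID": 0},
]
grid = fill_grid(obs, step=1.0)
print([rec["TIME"] for rec in grid])  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

Note that if the dataset carries time-varying covariates, the inserted rows must repeat or interpolate them carefully, or the integration changes exactly as Bauer warns in this thread.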