Dear NM-User community,
I have a model with 2 differential equations and I use ADVAN6 TOL=5. In
$DES, I am using T, the continuous time variable. The run converges, $COV
is OK, and the model gives a reasonable fit. In order to compute some
statistics which cannot be obtained analytically, I need to compute
individual predictions based on individual POSTHOC parameters and an
extended time grid for interpolating between the observed times.
So I have
1) added to my original dataset extra points regularly spaced with MDV=1.
To give you an idea, my average observation time is 25, with a range going
from 5 to 160. So my grid was set so that I have a dummy observation every
1 unit of time.
2) rerun my model using $MSFI to initialize the pop parameters, with
MAXEVAL=0 and POSTHOC options so that individual empirical Bayes estimates
(EBE) parameters for each patient would be first re-estimated, then the
prediction would be computed.
Then I
3) checked that my new predictions computed from the extended dataset
match the predictions of the original dataset at the observed time points.
I was surprised to see that for some individuals those predictions match,
for some others they slightly diverge, and for a few others they are
dramatically different. I checked the EBEs, and they were clearly different
between the original dataset and the one with the dummy points.
4) I decided to redo the grid with only one dummy point every 1/4 of a time
unit. The result was less dramatic, but for most of my individuals the
EBE predictions still diverged from the original ones computed without the
dummy times.
Of course the solution for me is to estimate the EBEs from the original
dataset, export them in a table, and reread them to initialize the
parameters of my individuals using only dummy time points and no
observations.
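For concreteness, the grid construction from step 1 can be sketched in Python (not NM-TRAN); the ID/TIME/DV/MDV record layout below is a simplified assumption, not my actual data set:

```python
def enrich(records, step=1.0):
    """Insert dummy rows (MDV=1) on a regular time grid for each subject.

    `records` is a list of dicts with keys ID, TIME, DV, MDV, sorted by
    ID then TIME.  Dummy rows carry DV=None and MDV=1, so they add
    prediction times without contributing to the objective function.
    (Simplified sketch: dosing records are ignored here.)
    """
    by_id = {}
    for r in records:
        by_id.setdefault(r["ID"], []).append(r)
    out = []
    for sid, rows in by_id.items():
        t_obs = {r["TIME"] for r in rows}
        grid, t = [], 0.0
        while t <= max(t_obs):
            if t not in t_obs:
                grid.append({"ID": sid, "TIME": t, "DV": None, "MDV": 1})
            t += step
        out.extend(sorted(rows + grid, key=lambda r: r["TIME"]))
    return out

obs = [{"ID": 1, "TIME": 0.0, "DV": 1.0, "MDV": 0},
       {"ID": 1, "TIME": 5.0, "DV": 0.5, "MDV": 0}]
enriched = enrich(obs)          # dummy rows inserted at t = 1, 2, 3, 4
```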
This problem reminds me of something that was discussed previously on
nm-users, but I could not recover the source in the archive.
Anyway, is this something known and predictable, that when adding dummy
points with MDV=1 to your original dataset you sometimes get very
different EBEs? Are there cases/models/ADVANs where the problem is likely
to happen? Is there a way to fix it in NONMEM other than the trick I
used?
Thanks for your replies!
Kind regards,
Pascal Girard, PhD
[email protected]
Head of Modeling & Simulation - Oncology
Global Exploratory Medicine
Merck Serono S.A. · Geneva
Tel: +41.22.414.3549
Cell: +41.79.508.7898
This message and any attachment are confidential and may be privileged or
otherwise protected from disclosure. If you are not the intended recipient, you
must not copy this message or attachment or disclose the contents to any other
person. If you have received this transmission in error, please notify the
sender immediately and delete the message and any attachment from your system.
Merck KGaA, Darmstadt, Germany and any of its subsidiaries do not accept
liability for any omissions or errors in this message which may arise as a
result of E-Mail-transmission or for damages resulting from any unauthorized
changes of the content of this message and any attachment thereto. Merck KGaA,
Darmstadt, Germany and any of its subsidiaries do not guarantee that this
message is free of viruses and does not accept liability for any damages caused
by any virus transmitted therewith.
Click http://www.merckgroup.com/disclaimer to access the German, French,
Spanish and Portuguese versions of this disclaimer.
Thread: Different EBE estimation between original and enriched dataset with MDV=1
(11 messages, 7 people; latest: Nov 27, 2012)
Dear Pascal,
What you observed is related to the "speed" of estimation. With a larger dataset
(many dummies) you slow down the estimation, roughly similar to using the
SLOW option in $EST. With an estimation that has difficulty converging,
you see a difference in EBEs and other parameters. We saw the same when we
compared runs on installations with different CPU speeds.
My recommendation: do not restart with $MSFI, but run from non-optimised
initial estimates as you did with the original data set. In any case, the
differences you saw are probably within the range you would also find if you
did a bootstrap.
Good luck,
Joachim
Joachim Grevel, PhD
Scientific Director
BAST Inc Limited
Loughborough Innovation Centre
Charnwood Building
Holywell Park, Ashby Road
Loughborough, LE11 3AQ
Tel: +44 (0)1509 222908
Confidentiality Notice: This message is private and may contain confidential
and proprietary information. If you have received this message in error,
please notify us and remove it from your system and note that you must not
copy, distribute or take any action in reliance on it. Any unauthorized use
or disclosure of the contents of this message is not permitted and may be
unlawful.
Dear Pascal,
Here is an idea. You may want to do a single simulation with your POSTHOC
estimates in the dataset (like covariates); let's say your parameters are CL
and V. You will need to fix the ETAs and EPSs to zero (or remove them from the
model), and perhaps have a dummy variable, not used in the model, for the
simulation. This way NONMEM would simulate your individual curves directly
from your POSTHOC estimates (up to the precision of those PK estimates in
your output file). Note that I have not tried this, but I think it should work.
Best regards,
Jean
This electronic transmission may contain confidential and/or proprietary
information and is intended to be for the use of the individual or entity named
above. If you are not the intended recipient, be aware that any disclosure,
copying, distribution or use of the contents of this electronic transmission is
prohibited. If you have received this electronic transmission in error, please
destroy it and immediately notify us of the error. Thank you.
Hi Pascal,
I think the problem is in the precision of the integration routine. With extra points, you change the ODE integration process and the results. I would use TOL=10 or higher in the original estimation. I have seen cases where changing TOL from 6 to 9 or 10 changed the outcome quite significantly.
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
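The mechanism Leonid describes can be illustrated with a toy example (this is not NONMEM's actual integration routine, just a small adaptive stepper written for this thread): inserting records forces the integrator to stop and restart at each record time, so the two data sets follow different step sequences and give numerically different answers, even though both stay within the requested tolerance.

```python
import math

def rhs(t, y):
    return -y                       # test ODE dy/dt = -y, exact solution exp(-t)

def integrate(t0, t1, y0, tol):
    """Toy adaptive stepper (embedded Euler/Heun pair with step-size
    control).  A stand-in for ADVAN6's variable-step integrator: the
    step sequence, and hence the rounding of the result, depends on
    where the integration is forced to stop."""
    t, y, h = t0, y0, (t1 - t0) / 10.0
    while t1 - t > 1e-12:
        h = min(h, t1 - t)          # never step past the next record time
        k1 = rhs(t, y)
        k2 = rhs(t + h, y + h * k1)
        y_euler = y + h * k1
        y_heun = y + h * (k1 + k2) / 2.0
        err, scale = abs(y_heun - y_euler), tol * max(abs(y_heun), 1e-8)
        if err <= scale:            # accept the step
            t, y = t + h, y_heun
        # grow or shrink the next step from the error estimate
        h *= 0.9 * min(2.0, max(0.2, (scale / max(err, 1e-16)) ** 0.5))
    return y

def solve(record_times, y0=1.0, tol=1e-3):
    """Integrate through the data-record times, restarting at each one,
    as inserted MDV=1 records force the solver to do."""
    y = y0
    for a, b in zip(record_times, record_times[1:]):
        y = integrate(a, b, y, tol)
    return y

sparse = solve([0.0, 5.0])                      # original records only
dense = solve([i * 0.25 for i in range(21)])    # a dummy record every 0.25
exact = math.exp(-5.0)
```

Both results agree with exp(-5) to well within the tolerance, but they are not identical, which is the kind of small shift that the posthoc step can then amplify into different EBEs.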
Hi Pascal,
In addition to Leonid's answer, if you have time-varying covariates, aren't
explicitly computing the current value in the $DES block, and are interpolating
them (with something other than LOCF), that could explain the difference. The
reason is that NONMEM only resets the value at a new data row, so those
new MDV=1 rows would modify the interpolation. It could also explain the
differences between individuals if some have larger or smaller changes in
the time-varying covariate.
Thanks,
Bill
My receipt of Bill Denney's e-mail lagged somewhat. My explanation is similar
to his.
Robert J. Bauer, Ph.D.
Vice President, Pharmacometrics, R&D
ICON Development Solutions
7740 Milestone Parkway
Suite 150
Hanover, MD 21076
Tel: (215) 616-6428
Mob: (925) 286-0769
Email: [email protected]
Web: www.iconplc.com
Pascal,
I had the same issue a while ago with time-invariant covariates. Back then,
with NM6.2, adding an EVID column to the data set and setting EVID=2 for the
additional records preserved the ETAs of the original estimation (whereas
setting only MDV=1 for the additional records did not).
Herbert
Herbert Struemper, Ph.D.
Clinical Pharmacology, Modeling & Simulation
GlaxoSmithKline, RTP, 17.2230.2B
Tel.: 919.483.7762 (GSK-Internal: 7/8-703.7762)
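The data-set change Herbert describes is mechanical; a hypothetical sketch (the column names and the DUMMY flag are illustrative only, not from his actual data set):

```python
def add_evid(records):
    """Mark inserted grid rows as EVID=2 ('other' event: prediction only)
    and real observations as EVID=0.  Per Herbert's report, EVID=2 on the
    dummy rows preserved the original ETAs where MDV=1 alone did not."""
    for r in records:
        if r["DUMMY"]:              # rows added only for the prediction grid
            r["EVID"], r["MDV"] = 2, 1
        else:
            r["EVID"] = 0
    return records

rows = add_evid([{"TIME": 1.0, "DV": 2.5, "MDV": 0, "DUMMY": False},
                 {"TIME": 1.25, "DV": None, "MDV": 1, "DUMMY": True}])
```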
-----Original Message-----
From: [email protected] [mailto:[email protected]] On
Behalf Of Bauer, Robert
Sent: Sunday, November 25, 2012 9:11 PM
To: Leonid Gibiansky; [email protected]
Cc: [email protected]
Subject: RE: [NMusers] Different EBE estimation between original and enriched
dataset with MDV=1
Pascal:
There is one more consideration. If your model depends on the use of covariate
data, then during the numerical integration from time t1 to t2, where t1 and t2
are the times of two contiguous records with covariate values c1 and c2,
respectively, NONMEM uses the covariate value at time t2 (c2) during the
interval from t>t1 to t<=t2. During your original estimation, your data records
were perhaps, as an example:

Time   covariate   MDV
1.0    1.0         0
1.5    2.0         0
With the filled-in data set, perhaps you filled in the covariates as follows:

Time   covariate   MDV
1.0    1.0         0
1.25   1.0         1
1.5    2.0         0
Or perhaps you made an interpolation for the covariate at the inserted time of
1.25, to be 1.5. But NONMEM made the following equivalent interpretation
during your original estimation:

Time   covariate   MDV
1.0    1.0         0
1.25   2.0         1
1.5    2.0         0

That is, when the record at time 1.25 was not there, NONMEM supplied the
numerical integrator with the covariate value of 2.0 for all times from >1.0
to <=1.5, as stated earlier.
Even though MDV=1 on the inserted records, NONMEM simply excludes the DV of
those records from the objective function evaluation, but still uses their
other information for the numerical integration during estimation.
In short, your model has changed with regard to the covariate pattern based on
the expanded data set.
By the way, there is a utility program called finedata in the NONMEM 7.3 beta
that facilitates data record filling, with options on how to fill in
covariates. I will send an e-mail about this shortly.
If you are not using covariates in the manner I described above, then please
ignore my lengthy explanation.
Robert J. Bauer, Ph.D.
Vice President, Pharmacometrics, R&D
ICON Development Solutions
7740 Milestone Parkway
Suite 150
Hanover, MD 21076
Tel: (215) 616-6428
Mob: (925) 286-0769
Email: [email protected]
Web: www.iconplc.com
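Bob's tables can be reproduced with a few lines of Python; `nocb` mimics the next-record lookup he describes, and the record lists mirror his example (a sketch for illustration, not NONMEM code):

```python
def nocb(records, t):
    """Covariate value the integrator effectively sees at time t: the
    value on the next data record at or after t (the covariate c2
    applies on the whole interval t1 < t <= t2).
    `records` is a list of (time, covariate) pairs sorted by time."""
    for rec_time, cov in records:
        if t <= rec_time:
            return cov
    return records[-1][1]           # past the last record: carry forward

original = [(1.0, 1.0), (1.5, 2.0)]                 # (Time, covariate)
filled   = [(1.0, 1.0), (1.25, 1.0), (1.5, 2.0)]    # dummy row at 1.25, LOCF value

# During integration over (1.0, 1.25], the two data sets now disagree:
mid_original = nocb(original, 1.1)   # 2.0
mid_filled = nocb(filled, 1.1)       # 1.0
```

So the inserted row changes the covariate history the integrator sees, and with it the individual predictions, exactly as described.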
-----Original Message-----
From: [email protected] [mailto:[email protected]] On
Behalf Of Leonid Gibiansky
Sent: Friday, November 23, 2012 12:15 PM
To: [email protected]
Cc: [email protected]
Subject: Re: [NMusers] Different EBE estimation between original and enriched
dataset with MDV=1
Hi Pascal,
I think the problem is in the precision of the integration routine. With extra
points, you change the ODE integration process and the results. I would use
TOL=10 or higher in the original estimation. I have seen cases when changing
TOL from 6 to 0 or 10 changed the outcome quite significantly.
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
On 11/23/2012 11:08 AM, [email protected] wrote:
> Dear NM-User community,
>
> I have a model with 2 differential equations and I use ADVAN6 TOL=5.
> In $DES, I am using T the continuous time variable. The run converges,
> $COV is OK, and the model gives a reasonable fit. In order to compute
> some statistics which cannot be obtained analytically, I need to
> compute individual predictions based on individual POSTHOC parameters
> and an extended grid of time for interpolating the observed times.
>
> So I have
> 1) added to my original dataset extra points regularly spaced with
> MDV=1. To give you an idea, my average observation time is 25, with a
> range going from 5 to 160. So my grid was set so that I have a dummy
> observation every 1 unit of time.
> 2) rerun my model using $MSFI to initialize the pop parameters, with
> MAXEVAL=0 and POSTHOC options, so that the individual empirical Bayes
> estimates (EBEs) for each patient would first be re-estimated and then
> the predictions computed.
>
> Then I
> 3) checked that my new predictions computed from the extended dataset
> match the predictions of the original dataset at observed time points.
> I was surprised to see that for some individuals those predictions
> match, for some others they slightly diverge, and for a few others they
> are dramatically different. I checked the EBEs and they were clearly
> different between the original dataset and the one with the dummy points.
> 4) I decided to redo the grid with only one dummy point every 1/4 of
> time unit. The result was less dramatic, but still for most of my
> individuals the EBE predictions diverged from the original ones
> computed without the dummy times.
>
> Of course the solution for me is to estimate the EBEs from the
> original dataset, export them in a table and reread them to initialize
> the parameters of my individuals using only dummy time points and no
> observations.
>
> This problem reminds me of something that was discussed previously on
> nm-user, but I could not recover the source in the archive.
>
> Anyway, is it known and predictable that adding dummy points with
> MDV=1 to your original dataset can sometimes give very different
> EBEs? Are there cases/models/ADVANs where the problem is likely to
> happen? Is there a way to fix it in NONMEM other than the trick I
> used?
>
> Thanks for your replies!
>
> Kind regards,
>
> Pascal Girard, PhD
> [email protected]
> Head of Modeling & Simulation - Oncology Global Exploratory Medicine
> Merck Serono S.A. * Geneva
> Tel: +41.22.414.3549
> Cell: +41.79.508.7898
>
> This message and any attachment are confidential and may be privileged
> or otherwise protected from disclosure. If you are not the intended
> recipient, you must not copy this message or attachment or disclose
> the contents to any other person. If you have received this
> transmission in error, please notify the sender immediately and delete
> the message and any attachment from your system. Merck KGaA,
> Darmstadt, Germany and any of its subsidiaries do not accept liability
> for any omissions or errors in this message which may arise as a
> result of E-Mail-transmission or for damages resulting from any
> unauthorized changes of the content of this message and any attachment
> thereto. Merck KGaA, Darmstadt, Germany and any of its subsidiaries do
> not guarantee that this message is free of viruses and does not accept
> liability for any damages caused by any virus transmitted therewith.
>
> Click http://www.merckgroup.com/disclaimer to access the German,
> French, Spanish and Portuguese versions of this disclaimer.
Dear All,
Thanks for your detailed responses and tricks. I am trying to address each
of them after several rounds of trial and error with your suggestions:
1) I have only time-invariant covariates. But thanks to Robert and Bill
for mentioning it. I will remember!
2) I did not use EVID=2 for my dummy times. Now I am using it, but it
does not help.
3) Starting from non-optimized parameters rather than $MSFI, as suggested
by Joachim, does not help. But I like your explanation. Nevertheless I
can't live with "the differences [...] within the range you would also
find if you did a bootstrap", since those differences change the profiles I
observe.
4) The nice trick suggested by Heiner (after the last time point of an ID,
add a line with EVID=3, a reset event, with TIME greater than the last
data point of the ID of interest) may work, but would probably be too
complex to implement for my special dataset, since I have a long history
of unevenly spaced dosing. But thanks, Heiner, I will also remember this
one.
5) Increasing the TOL is the only thing that improves the prediction.
Thanks Leonid, you are right when you write "the problem is in the
precision of the integration routine". But with the data I have, I cannot
increase it beyond 8. By the way, in my model I am estimating the initial
condition at baseline in one of my compartments using a random effect.
When the slope after baseline is large, I get almost no bias. But when it
is moderate, the prediction bias with dummy points appears, and it
increases as the slope decreases. This probably confirms the precision
issue with the integration routine.
6) The only solution is the one I mentioned in my 1st e-mail, which was
also suggested by Jean Lavigne: one separate run for the estimation of the
EBEs and one for the simulation on dummy time points.
7) Thanks Robert. I am glad to learn that in 7.3 there will be an option
to automatically "fill in extra records with small time increments,
to provide smooth plots". I imagine that using this utility program will
not change the precision of the integration routine, since it will be
built in. I will just have to wait a little to get access to it.
Kind regards,
Pascal
PS
As someone who used to live by Lake Leman would have said, NONMEM,
sometimes, "It's a kind of magic!" :-)
Quoted reply history
From: Herbert Struemper <[email protected]>
To: "[email protected]" <[email protected]>
Date: 26/11/2012 16:13
Subject: RE: [NMusers] Different EBE estimation between original
and enriched dataset with MDV=1
Sent by: [email protected]
Pascal,
I had the same issue a while ago with time-invariant covariates. Back then
with NM6.2, adding an EVID column to the data set and setting EVID=2 for
additional records preserved the ETAs of the original estimation (while
only setting MDV=1 for additional records did not).
Herbert
Herbert Struemper, Ph.D.
Clinical Pharmacology, Modeling & Simulation
GlaxoSmithKline, RTP, 17.2230.2B
Tel.: 919.483.7762 (GSK-Internal: 7/8-703.7762)
-----Original Message-----
From: [email protected] [mailto:[email protected]]
On Behalf Of Bauer, Robert
Sent: Sunday, November 25, 2012 9:11 PM
To: Leonid Gibiansky; [email protected]
Cc: [email protected]
Subject: RE: [NMusers] Different EBE estimation between original and
enriched dataset with MDV=1
Pascal:
There is one more consideration. If your model depends on the use of
covariate data, then during the numerical integration from time t1 to t2,
where t1 and t2 are the times of two contiguous records with covariate
values c1 and c2, respectively, NONMEM uses the covariate at time t2
(i.e., c2) during the interval from t>t1 to t<=t2. During your original
estimation, your data records were, perhaps, as an example:
Time covariate MDV
1.0 1.0 0
1.5 2.0 0
With the filled in data set, perhaps you filled in the covariates as
follows:
Time covariate MDV
1.0 1.0 0
1.25 1.0 1
1.5 2.0 0
Or perhaps you interpolated the covariate at the inserted time of 1.25 to
be 1.5. But NONMEM made the following equivalent interpretation during
your original estimation:
Time covariate MDV
1.0 1.0 0
1.25 2.0 1
1.5 2.0 0
That is, when the time-1.25 record was not there, NONMEM supplied the
numerical integrator with the covariate value of 2.0 for all times from
>1.0 to <=1.5, as stated earlier.
Even though MDV=1 on the inserted records, NONMEM simply does not include
the DV of those records in the objective function evaluation, but it will
still use the other information for the numerical integration during
estimation.
In short, your model has changed regarding the covariate pattern based on
the expanded data set.
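Robert's next-record rule can be made concrete with a tiny hypothetical helper (Python; the function and names are illustrative, not NONMEM code), matching the tables above:

```python
def covariate_at(records, t):
    """Covariate value supplied to the integrator at time t under the
    next-record rule described above: the value on the first data record
    whose time is >= t. records: list of (time, covariate), sorted by time."""
    for rec_time, cov in records:
        if t <= rec_time:
            return cov
    return records[-1][1]  # past the last record: carry the last value

original = [(1.0, 1.0), (1.5, 2.0)]               # the first table above
filled   = [(1.0, 1.0), (1.25, 1.0), (1.5, 2.0)]  # the filled-in table

# Over (1.0, 1.25] the original data set uses covariate 2.0, while the
# filled data set uses 1.0 from the inserted record:
print(covariate_at(original, 1.1))  # 2.0
print(covariate_at(filled, 1.1))    # 1.0
```

So the inserted record changes the covariate seen by the integrator over part of the interval, which is exactly how the model changes with the expanded data set.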
By the way, there is a utility program called finedata in the NONMEM 7.3
beta that facilitates data record filling, with options on how to fill in
covariates. I will send an e-mail about this shortly.
If you are not using covariates in the manner I described above, then
please ignore my lengthy explanation.
Robert J. Bauer, Ph.D.
Vice President, Pharmacometrics, R&D
ICON Development Solutions
7740 Milestone Parkway
Suite 150
Hanover, MD 21076
Tel: (215) 616-6428
Mob: (925) 286-0769
Email: [email protected]
Web: www.iconplc.com
Hi Pascal,
You may want to switch to ADVAN13. It is much more stable for stiff problems, and may allow you to increase TOL.
Thanks
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
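As a rough analogy for the ADVAN6 vs. ADVAN13 difference (illustrative Python/SciPy, not NONMEM; the stiff test equation is invented): on a stiff problem, a stiff-capable LSODA-type solver, the family ADVAN13 belongs to, needs far fewer right-hand-side evaluations than a non-stiff Runge-Kutta scheme at the same tolerance, consistent with Leonid's suggestion that ADVAN13 may allow a higher TOL:

```python
import math
from scipy.integrate import solve_ivp

def stiff_rhs(t, y):
    # classic stiff test problem: very fast decay toward a slow forcing term
    return [-1000.0 * (y[0] - math.cos(t))]

rk = solve_ivp(stiff_rhs, (0.0, 10.0), [0.0], method="RK45",
               rtol=1e-8, atol=1e-10)   # non-stiff solver (ADVAN6-like)
ls = solve_ivp(stiff_rhs, (0.0, 10.0), [0.0], method="LSODA",
               rtol=1e-8, atol=1e-10)   # stiff-capable solver (ADVAN13-like)

# RK45's step size is limited by stability, not accuracy, so it needs many
# more right-hand-side evaluations than LSODA on this problem.
print(rk.nfev, ls.nfev)
```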
Quoted reply history
On 11/26/2012 2:43 PM, [email protected] wrote:
>
Hi Leonid,
Thanks for the additional suggestion to use ADVAN13. I was able to
increase TOL up to 16 and SIGL to 14, but I still have the same biases for
the moderate to almost flat initial slopes after baseline when using dummy
points spaced every 1 unit of time. When I reduce the number of dummy
points to one every 4 units of time, the bias almost disappears.
Kind regards,
Pascal
Quoted reply history
From: Leonid Gibiansky <[email protected]>
To: [email protected]
Cc: "[email protected]" <[email protected]>
Date: 26/11/2012 21:40
Subject: Re: [NMusers] Different EBE estimation between original
and enriched dataset with MDV=1
Hi Pascal,
You may want to switch to ADVAN13. It is much more stable for stiff
problems, and may allow to increase TOL.
Thanks
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
On 11/26/2012 2:43 PM, [email protected] wrote:
> Dear All,
>
> Thanks for your detailed response and tricks. I am trying to address
> each of them after several trial and errors with your suggestions:
>
> 1) I have only time-invariant covariates. Buth thanks to Robert and
> Bill for mentioning it. I will remember!
>
> 2) I did not use the EVID=2 for my dummy times. Now I am using them, but
> it does not help.
>
> 3) Starting from non optimized parameters rather than $MSFI as suggested
> by Joachim does not help. But I like your explanation. Nevertheless I
> can't live with "the differences [...] within the range you would also
> find if you did a bootstrap" since those differences change the profiles
> I observe.
>
> 4) The nice trick suggested by Heiner (After the last time point of an
> ID you may add a line with EVID=3 (reset event) with the TIME
> (TIMERESET>the last datapoint of the ID of interest)may work, but would
> probably be too complex to implement for my special dataset since I have
> a long history of not evenly spaced dosing. But thanks, Heine, I will
> also remember this one.
>
> 5) Increasing the TOL is the only thing that improves the prediction.
> Thanks Leonid you are right when you write "the problem is in the
> precision of the integration routine". But with the data I have, I
> cannot increase it beyond 8. By the way, in my model I am estimating the
> initial condition at baseline in one of my compartment using a random
> effect. When the slope after the baseline is large, I got almost no
> bias. But when it is a moderate slope, the bias prediction with dummy
> points appears and is increasing when the slope is decreasing. This
> probably confirms the issue of the precision with integration routine.
>
> 6) The only solution which I mention in in my 1st Email and that was
> also suggested by Jean Lavigne : one separate run for the estimation of
> the EBEs and one from the simulation on dummy time points.
>
> 7.2) Thanks Robert. I am glad to learn that in 7.3 there will be an
> option to automatically "fill in extra records with small time
> increments, to provide smooth plots". I imagine that using this
> utility program will not change the precision of the integration routine
> since it will be build in. I will just have to wait a little bit for
> getting access to it.
>
> Kind regards,
>
> Pascal
>
> PS
> As someone who used to live by the Lake Leman would have said, NONMEM,
> sometimes, "It's a kind og magic!" :-)
>
>
>
> From: Herbert Struemper <[email protected]>
> To: "[email protected]" <[email protected]>
> Date: 26/11/2012 16:13
> Subject: RE: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
> Sent by: [email protected]
> ------------------------------------------------------------------------
>
>
>
> Pascal,
> I had the same issue a while ago with time-invariant covariates. Back
> then with NM6.2, adding an EVID column to the data set and setting
> EVID=2 for additional records preserved the ETAs of the original
> estimation (while only setting MDV=1 for additional records did not).
> Herbert
>
> Herbert Struemper, Ph.D.
> Clinical Pharmacology, Modeling & Simulation
> GlaxoSmithKline, RTP, 17.2230.2B
> Tel.: 919.483.7762 (GSK-Internal: 7/8-703.7762)
>
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> On Behalf Of Bauer, Robert
> Sent: Sunday, November 25, 2012 9:11 PM
> To: Leonid Gibiansky; [email protected]
> Cc: [email protected]
> Subject: RE: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
>
> Pascal:
> There is one more consideration. If your model depends on the use of
> covariate data, then during the numerical integration from time t1 to
> t2, where t1 and t2 are times of two contiguous records, which have
> values of the covariate c1 and c2, respectively, NONMEM uses the
> covariate at time t2 (call it c2)during the interval from t>t1 to t<=t2.
> During your original estimation, your data records were, perhaps, as an
> example:
>
> Time covariate MDV
> 1.0 1.0 0
> 1.5 2.0 0
>
> With the filled in data set, perhaps you filled in the covariates as
> follows:
>
> Time covariate MDV
> 1.0 1.0 0
> 1.25 1.0 1
> 1.5 2.0 0
>
> Or perhaps you made an interpolation for the covariate at the inserted
> time of 1.25, to be 1.5. But NONMEM made the following equivalent
> interpretation during your original estimation:
>
> Time covariate MDV
> 1.0 1.0 0
> 1.25 2.0 1
> 1.5 2.0 0
>
> That is, when the time record 1.25 was not there, it supplied the
> numerical integrater with the covariate value of 2.0 for all times from
> >1.0 to <=1.5, as stated earlier.
>
> Even though MDV=1 on the inserted records, NONEMM simply does not
> include the DV of that record in the objective function evaluation, but
> will still use the other information for simulation, by simulation I
> mean, for the numerical integration during estimation.
>
> In short, your model has changed regarding the covariate pattern based
> on the expanded data set.
>
>
> By the way, there is a utility program called finedeata, that actually
> facilitates data record filling, with options on how to fill in
> covariates, in nonmem7.3 beta. I will send the e-mail to this shortly.
>
> If you are not using covariates in the manner I described above, then
> please ignore my lengthy explanation.
>
>
>
> Robert J. Bauer, Ph.D.
> Vice President, Pharmacometrics, R&D
> ICON Development Solutions
> 7740 Milestone Parkway
> Suite 150
> Hanover, MD 21076
> Tel: (215) 616-6428
> Mob: (925) 286-0769
> Email: [email protected]
> Web: www.iconplc.com
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> On Behalf Of Leonid Gibiansky
> Sent: Friday, November 23, 2012 12:15 PM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
>
> Hi Pascal,
> I think the problem is in the precision of the integration routine. With
> extra points, you change the ODE integration process and the results. I
> would use TOL=10 or higher in the original estimation. I have seen cases
> when changing TOL from 6 to 0 or 10 changed the outcome quite
significantly.
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
> On 11/23/2012 11:08 AM, [email protected] wrote:
> > Dear NM-User community,
> >
> > I have a model with 2 differential equations and I use ADVAN6 TOL=5.
> > In $DES, I am using T the continuous time variable. The run
converges,
> > $COV is OK, and the model gives a reasonable fit. In order to compute
> > some statistics which cannot be obtained analytically, I need to
> > compute individual predictions based on individual POSTHOC parameters
> > and an extended grid of time for interpolating the observed times.
> >
> > So I have
> > 1) added to my original dataset extra points regularly spaced with
> > MDV=1. To give you an idea, my average observation time is 25, with a
> > range going from 5 to 160. So my grid was set so that I have a dummy
> > observation every 1 unit of time.
> > 2) rerun my model using $MSFI to initialize the pop parameters, with
> > MAXEVAL=0 and POSTHOC options so that individual empirical Bayes
> > estimates (EBE) parameters for each patient would be first
> > re-estimated, then the prediction would be computed.
> >
> > Then I
> > 3) checked that my new predictions computed from the extended
dataset
> > match the predictions of the original dataset at observed time
points.
> > I had the surprise to see that for some individuals those predictions
> > match, for some others they slightly diverge, and for few others they
> > are dramatically different. I checked the EBEs and they were clearly
> > different between the original dataset and the one with the dummy
points.
> > 4) I decided to redo the grid with only one dummy point every 1/4 of
> > time unit. The result was less dramatic, but still for most of my
> > individuals the EBE predictions were diverging from the original ones
> > computed without the dummy times.
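The grid-filling in step 1 can be sketched as follows. This is a minimal Python illustration of the idea (the function name and the simplified `(time, MDV)` record layout are assumptions for the example, not the actual dataset code):

```python
def add_dummy_times(obs_times, step=1.0):
    """Merge a regular grid of dummy times (MDV=1) with observed times (MDV=0).

    obs_times: observation times for one subject.
    Returns a time-sorted list of (time, mdv) records; grid points that
    coincide with an observation are skipped, so observations are kept as-is.
    """
    t_max = max(obs_times)
    grid = [k * step for k in range(int(t_max / step) + 1)]
    records = [(t, 0) for t in obs_times]
    records += [(t, 1) for t in grid if t not in set(obs_times)]
    return sorted(records)

# One subject observed at times 5 and 25, one dummy point per time unit:
recs = add_dummy_times([5.0, 25.0], step=1.0)
# 26 grid points (0..25), two of which are real observations -> 24 dummies
```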
> >
> > Of course the solution for me is to estimate the EBEs from the
> > original dataset, export them to a table, and re-read them to initialize
> > the parameters of my individuals using only dummy time points and no
> > observations.
> >
> > This problem reminds me of something that was discussed previously on
> > nm-user, but I could not recover the source in the archive.
> >
> > Anyway, is it something known and predictable that when adding dummy
> > points with MDV=1 to your original dataset you sometimes get very
> > different EBEs? Are there cases/models/ADVANs where the problem is
> > likely to happen? Is there a way to fix it in NONMEM other than the
> > trick I used?
> >
> > Thanks for your replies!
> >
> > Kind regards,
> >
> > Pascal Girard, PhD
> > [email protected]
> > Head of Modeling & Simulation - Oncology Global Exploratory Medicine
> > Merck Serono S.A. * Geneva
> > Tel: +41.22.414.3549
> > Cell: +41.79.508.7898
> >
> > This message and any attachment are confidential and may be privileged
> > or otherwise protected from disclosure. If you are not the intended
> > recipient, you must not copy this message or attachment or disclose
> > the contents to any other person. If you have received this
> > transmission in error, please notify the sender immediately and delete
> > the message and any attachment from your system. Merck KGaA,
> > Darmstadt, Germany and any of its subsidiaries do not accept liability
> > for any omissions or errors in this message which may arise as a
> > result of E-Mail-transmission or for damages resulting from any
> > unauthorized changes of the content of this message and any attachment
> > thereto. Merck KGaA, Darmstadt, Germany and any of its subsidiaries do
> > not guarantee that this message is free of viruses and does not accept
> > liability for any damages caused by any virus transmitted therewith.
> >
> > Click http://www.merckgroup.com/disclaimer to access the German,
> > French, Spanish and Portuguese versions of this disclaimer.
>
Hi Pascal,
This looks like a bug (in NONMEM or in your code) to me. With TOL=16, there should be no numerical problems with the ODEs. Could you provide more details (code with the initial conditions + a sample of the data for one subject where you have this problem)?
Thanks
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
On 11/27/2012 1:59 PM, [email protected] wrote:
> Hi Leonid,
>
> Thanks for the additional suggestion to use ADVAN13. I was able to
> increase TOL up to 16 and SIGL to 14, but I still have the same biases for
> the moderate to almost flat initial slopes after baseline when using
> dummy points spaced every 1 unit of time. When I reduce the number of dummy
> points to one dummy point every 4 units of time, the bias almost
> disappears.
>
> Kind regards,
>
> Pascal
>
> From: Leonid Gibiansky <[email protected]>
> To: [email protected]
> Cc: "[email protected]" <[email protected]>
> Date: 26/11/2012 21:40
> Subject: Re: [NMusers] Different EBE estimation between original and
> enriched dataset with MDV=1
> ------------------------------------------------------------------------
>
> Hi Pascal,
> You may want to switch to ADVAN13. It is much more stable for stiff
> problems, and may allow you to increase TOL.
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
> On 11/26/2012 2:43 PM, [email protected] wrote:
> > Dear All,
> >
> > Thanks for your detailed responses and tricks. I am trying to address
> > each of them after several rounds of trial and error with your suggestions:
> >
> > 1) I have only time-invariant covariates. But thanks to Robert and
> > Bill for mentioning it. I will remember!
> >
> > 2) I did not use EVID=2 for my dummy times. Now I am using it, but
> > it does not help.
> >
> > 3) Starting from non-optimized parameters rather than $MSFI, as suggested
> > by Joachim, does not help. But I like your explanation. Nevertheless I
> > can't live with "the differences [...] within the range you would also
> > find if you did a bootstrap" since those differences change the profiles
> > I observe.
> >
> > 4) The nice trick suggested by Heiner (after the last time point of an
> > ID you may add a line with EVID=3 (reset event) with a TIME greater
> > than the last data point of the ID of interest) may work, but would
> > probably be too complex to implement for my special dataset since I have
> > a long history of unevenly spaced dosing. But thanks, Heiner, I will
> > also remember this one.
> >
> > 5) Increasing the TOL is the only thing that improves the prediction.
> > Thanks Leonid, you are right when you write "the problem is in the
> > precision of the integration routine". But with the data I have, I
> > cannot increase it beyond 8. By the way, in my model I am estimating the
> > initial condition at baseline in one of my compartments using a random
> > effect. When the slope after the baseline is large, I get almost no
> > bias. But when it is a moderate slope, the prediction bias with dummy
> > points appears and increases as the slope decreases. This
> > probably confirms the issue of the precision of the integration routine.
> >
> > 6) The only solution is the one I mentioned in my first e-mail, which was
> > also suggested by Jean Lavigne: one separate run for the estimation of
> > the EBEs and one for the simulation on the dummy time points.
> >
> > 7) Thanks Robert. I am glad to learn that in 7.3 there will be an
> > option to automatically "fill in extra records with small time
> > increments, to provide smooth plots". I imagine that using this
> > utility program will not change the precision of the integration routine
> > since it will be built in. I will just have to wait a little bit to
> > get access to it.
> >
> > Kind regards,
> >
> > Pascal
> >
> > PS
> > As someone who used to live by Lake Leman would have said, NONMEM,
> > sometimes, "It's a kind of magic!" :-)
> >
> >
> >
> > From: Herbert Struemper <[email protected]>
> > To: "[email protected]" <[email protected]>
> > Date: 26/11/2012 16:13
> > Subject: RE: [NMusers] Different EBE estimation between original and
> > enriched dataset with MDV=1
> > Sent by: [email protected]
> > ------------------------------------------------------------------------
> >
> >
> >
> > Pascal,
> > I had the same issue a while ago with time-invariant covariates. Back
> > then with NM6.2, adding an EVID column to the data set and setting
> > EVID=2 for additional records preserved the ETAs of the original
> > estimation (while only setting MDV=1 for additional records did not).
> > Herbert
> >
> > Herbert Struemper, Ph.D.
> > Clinical Pharmacology, Modeling & Simulation
> > GlaxoSmithKline, RTP, 17.2230.2B
> > Tel.: 919.483.7762 (GSK-Internal: 7/8-703.7762)
> >
> >
> >
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]]
> > On Behalf Of Bauer, Robert
> > Sent: Sunday, November 25, 2012 9:11 PM
> > To: Leonid Gibiansky; [email protected]
> > Cc: [email protected]
> > Subject: RE: [NMusers] Different EBE estimation between original and
> > enriched dataset with MDV=1
> >
> > Pascal:
> > There is one more consideration. If your model depends on the use of
> > covariate data, then during the numerical integration from time t1 to
> > t2, where t1 and t2 are times of two contiguous records, which have
> > values of the covariate c1 and c2, respectively, NONMEM uses the
> > covariate at time t2 (call it c2) during the interval from t>t1 to t<=t2.
> > During your original estimation, your data records were, perhaps, as an
> > example:
> >
> > Time covariate MDV
> > 1.0 1.0 0
> > 1.5 2.0 0
> >
> > With the filled in data set, perhaps you filled in the covariates as
> > follows:
> >
> > Time covariate MDV
> > 1.0 1.0 0
> > 1.25 1.0 1
> > 1.5 2.0 0
> >
> > Or perhaps you made an interpolation for the covariate at the inserted
> > time of 1.25, to be 1.5. But NONMEM made the following equivalent
> > interpretation during your original estimation:
> >
> > Time covariate MDV
> > 1.0 1.0 0
> > 1.25 2.0 1
> > 1.5 2.0 0
> >
> > That is, when the time record 1.25 was not there, it supplied the
> > numerical integrator with the covariate value of 2.0 for all times from
> > >1.0 to <=1.5, as stated earlier.
> >
> > Even though MDV=1 on the inserted records, NONMEM simply does not
> > include the DV of those records in the objective function evaluation, but
> > will still use the other information for simulation; by simulation I
> > mean the numerical integration during estimation.
> >
> > In short, your model has changed regarding the covariate pattern based
> > on the expanded data set.
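The next-record rule described above can be sketched in a few lines of Python. This is an illustrative model of the behaviour, not NONMEM code; the function name is made up for the example:

```python
def cov_during_integration(t, records):
    """Covariate value the integrator sees at time t.

    records: list of (time, covariate) pairs sorted by time.
    Next-record rule: for t1 < t <= t2 (two contiguous record times),
    the integrator uses the covariate of the t2 record.
    """
    for rec_time, rec_cov in records:
        if t <= rec_time:
            return rec_cov
    return records[-1][1]  # past the last record: carry the last value

original = [(1.0, 1.0), (1.5, 2.0)]
filled   = [(1.0, 1.0), (1.25, 1.0), (1.5, 2.0)]  # covariate carried forward

# At t = 1.2 the integrator sees different covariate values:
print(cov_during_integration(1.2, original))  # -> 2.0
print(cov_during_integration(1.2, filled))    # -> 1.0
```

This is exactly why carrying the covariate forward onto inserted records changes the model: the piecewise-constant covariate function fed to the ODE solver is different between the two datasets.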
> >
> >
> > By the way, there is a utility program called finedata in the NONMEM 7.3
> > beta that facilitates data record filling, with options on how to fill in
> > covariates. I will send an e-mail about this shortly.
> >
> > If you are not using covariates in the manner I described above, then
> > please ignore my lengthy explanation.
> >
> >
> >
> > Robert J. Bauer, Ph.D.
> > Vice President, Pharmacometrics, R&D
> > ICON Development Solutions
> > 7740 Milestone Parkway
> > Suite 150
> > Hanover, MD 21076
> > Tel: (215) 616-6428
> > Mob: (925) 286-0769
> > Email: [email protected]
> > Web: www.iconplc.com
> >
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]]
> > On Behalf Of Leonid Gibiansky
> > Sent: Friday, November 23, 2012 12:15 PM
> > To: [email protected]
> > Cc: [email protected]
> > Subject: Re: [NMusers] Different EBE estimation between original and
> > enriched dataset with MDV=1
> >
> > Hi Pascal,
> > I think the problem is in the precision of the integration routine. With
> > extra points, you change the ODE integration process and the results. I
> > would use TOL=10 or higher in the original estimation. I have seen cases
> > when changing TOL from 6 to 0 or 10 changed the outcome quite significantly.
> > Leonid
> >
> > --------------------------------------
> > Leonid Gibiansky, Ph.D.
> > President, QuantPharm LLC
> > web: www.quantpharm.com
> > e-mail: LGibiansky at quantpharm.com
> > tel: (301) 767 5566