Hi, while data fitting using NONMEM on a regular PK data set and its log
transformed version I made the following observations
- PK parameters (thetas) were generally similar between the regular fit and
when using LTBS.
- ETA on CL was similar.
- ETA on Vc was different between the two runs.
- Sigma was higher with LTBS (51%) than with the linear fit (33%)
Now, using LTBS, I would have expected the ETAs to stay unchanged or actually
decrease, and accordingly I observed that the eta values decreased, showing
less BSV. However, the %RSE for the ETA on Vc changed from 40% (linear) to
350% (LTBS), and the lower bound of the 95% CI for the ETA on Vc is negative
(-0.087).
What would explain these observations, i.e. the increased %RSE under LTBS and
the negative lower bound for the ETA on Vc? Can a negative lower bound for an
ETA be treated as zero?
Also, why would the residual variability increase when using LTBS?
Please note that the PK is multiexponential (maybe this is responsible).
Thanks.
Neil
--
Indranil Bhattacharya
Linear VS LTBS
15 messages
9 people
Latest: Aug 25, 2009
Hi Neil,
1. When data are log-transformed, the $ERROR block has to change: additive
error becomes true exponential error, which cannot be achieved without
log-transformation (Nick, correct me if I am wrong).
2. Error cannot "go away". You state that your structural model (THETAs)
remained unchanged, so the total "amount" of error will remain the same as
well. If you reduce BSV you may have to "pay" for it with increased residual
variability.
3. Confidence intervals for ETAs based on the standard errors produced during
the covariance step are unreliable (many threads on NMusers). Do a bootstrap
to obtain more reliable CIs.
These are my five cents worth of thought in the early morning,
Good luck,
Joachim
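[Editor's note: Joachim's first point can be illustrated numerically. An additive residual on the log scale back-transforms to an exponential (multiplicative) error on the concentration scale, which behaves like a proportional error for small sigma. A minimal sketch, not from the thread; all values are illustrative:

```python
import math, random

random.seed(0)
sigma = 0.33          # SD of the additive error on the log scale
conc = 50.0           # a "true" concentration

# Additive error on log(conc) ...
eps = random.gauss(0.0, sigma)
log_obs = math.log(conc) + eps

# ... back-transforms to exponential (multiplicative) error on conc:
obs = math.exp(log_obs)
assert abs(obs - conc * math.exp(eps)) < 1e-9

# For small sigma, exp(eps) ~ 1 + eps, so this behaves like a
# proportional error with CV ~ sigma; the exact log-normal CV is:
cv_exact = math.sqrt(math.exp(sigma**2) - 1.0)
print(round(cv_exact, 3))   # 0.339, close to sigma (0.33) itself
```

This is why an additive sigma of, say, 0.33 on the log scale is routinely quoted as roughly a 33% CV.]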
--------------------------------------------------------------------------
AstraZeneca UK Limited is a company incorporated in England and Wales with
registered number: 03674842 and a registered office at 15 Stanhope Gate, London
W1K 1LN.
Confidentiality Notice: This message is private and may contain confidential,
proprietary and legally privileged information. If you have received this
message in error, please notify us and remove it from your system and note that
you must not copy, distribute or take any action in reliance on it. Any
unauthorised use or disclosure of the contents of this message is not permitted
and may be unlawful.
Disclaimer: Email messages may be subject to delays, interception, non-delivery
and unauthorised alterations. Therefore, information expressed in this message
is not given or endorsed by AstraZeneca UK Limited unless otherwise notified by
an authorised representative independent of this message. No contractual
relationship is created by this message by any person unless specifically
indicated by agreement in writing other than email.
Monitoring: AstraZeneca UK Limited may monitor email traffic data and content
for the purposes of the prevention and detection of crime, ensuring the
security of our computer systems and checking Compliance with our Code of
Conduct and Policies.
Hi Joachim, thanks for your suggestions/comments.
When using LTBS I used a different error model; the $ERROR block is shown
below:
$ERROR
IPRED = -5
IF (F.GT.0) IPRED = LOG(F) ; log-transform the prediction
IRES = DV - IPRED
W = 1
IWRES = IRES/W ; uniform weighting
Y = IPRED + ERR(1)
I also performed a bootstrap on both the LTBS and non-LTBS models; the
non-LTBS CIs were much tighter and the precision was greater than for LTBS.
I think the problem is plausibly that when fitting the non-transformed data I
used the proportional + additive model, while with LTBS the exponential model
(which becomes an additive model under LTBS) was used. The extra additive
component may also be more important in the non-LTBS model, as for some
subjects the concentrations were right at the LOQ.
I tried the dual (combined) error model with LTBS, but it does not directly
provide a CV%. So I am currently running a bootstrap to get the CIs when
using the dual error model with LTBS.
Neil
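[Editor's note: a CV% can still be recovered from a log-domain fit. For a log-normal residual with variance sigma^2 on the log scale, the exact CV on the concentration scale is sqrt(exp(sigma^2) - 1). A small sketch; the sigma value below is illustrative, not the run's actual estimate:

```python
import math

def cv_from_log_sd(sigma: float) -> float:
    """Exact CV of a log-normal error with SD `sigma` on the log scale."""
    return math.sqrt(math.exp(sigma**2) - 1.0)

sigma = 0.49  # placeholder log-scale SD, of the same order as in the thread
print(round(100 * cv_from_log_sd(sigma), 1))  # 52.1 (CV%), ~ 100*sigma for small sigma
```

For a combined (dual) model on the log scale the variance depends on IPRED, so the same formula gives a prediction-dependent CV rather than a single number.]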
Hi Neil,
Well, if you compare a proportional + additive error model with a logarithmic
error model, it shouldn't be surprising that they work differently and give
you different residual variances. A logarithmic error model presumes that the
accuracy of the observations, in absolute terms, becomes very good at low
concentrations. With real-world (i.e. not simulated) measurements this might
not be the case, and this is probably the motivation for the
proportional + additive type of models. The best error model is the one that
best matches the characteristics of the very(!) complex physical process
behind the reporting of some number as "concentration of substance X in the
sample".
If a proportional + additive error model works better than a logarithmic
error model, then I would check whether the observations at small
concentrations (usually the late observations) are dominating the estimation
for the logarithmic model. These samples influence the estimation less for
the proportional + additive error model because of the additive term.
If you have many observations close to the LOQ, then there are a number of
different suggestions in the literature for how to handle these. I wouldn't
draw any conclusions about the best error model until you have decided how
you are going to handle them.
There was some recent discussion on this list about the possibility of a
logarithmic + additive model. It was complicated and I didn't really follow
it.
Douglas Eleveld
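[Editor's note: Douglas's point about absolute accuracy can be made concrete. Under a pure log (exponential) error model the absolute SD scales with the concentration, while a proportional + additive model keeps an error floor near the LOQ. A hedged numerical sketch; all parameter values are illustrative:

```python
import math

def sd_log_model(conc: float, sigma: float) -> float:
    # Pure exponential error: absolute SD on the concentration scale is
    # approximately conc * sigma (first-order, for small sigma).
    return conc * sigma

def sd_prop_add(conc: float, prop: float, add: float) -> float:
    # Proportional + additive error on the concentration scale.
    return math.sqrt((conc * prop) ** 2 + add ** 2)

sigma, prop, add = 0.33, 0.30, 0.05  # illustrative; `add` in conc. units

for conc in (100.0, 1.0, 0.05):     # from high concentration down to near LOQ
    print(conc,
          round(sd_log_model(conc, sigma), 4),
          round(sd_prop_add(conc, prop, add), 4))
```

Near the LOQ the log model credits the assay with near-perfect absolute accuracy, which is rarely true of real assays, so those points can end up dominating the fit, as Douglas notes.]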
Neil,
Large RSEs, inability to converge, and failure of the covariance step are
often caused by over-parametrization of the model. Since you already have a
bootstrap, look at the scatter-plot matrix of parameters versus parameters
(THETA1 vs THETA2, ..., THETA1 vs OMEGA1, ...); these are very informative
plots. If you have over-parametrization at the population level, it will show
up in these plots as strong correlations among the parameter estimates.
Also, look at plots of ETAs vs ETAs. If you see strong correlation (close to
1) there, it may indicate over-parametrization at the individual level (too
many ETAs in the model).
For a random effect with a very large RSE on its variance, I would try
removing it and seeing what happens to the model: often a high RSE is an
indication that the random effect is not needed.
Also, try a combined error model (on the log-transformed variable):
W1 = SQRT(THETA(...)/IPRED**2 + THETA(...))
Y = LOG(IPRED) + W1*EPS(1)
$SIGMA
1 FIXED
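[Editor's note: on the log scale this weight makes the first THETA (an additive variance, in concentration-squared units) dominate at low IPRED and the second THETA (a proportional variance) dominate at high IPRED. A hedged Python sketch of the weight's behaviour; the THETA values are placeholders, not estimates from this thread:

```python
import math

def w1(ipred: float, add_var: float, prop_var: float) -> float:
    # Residual SD on the log scale for the combined model above:
    # additive variance scaled by 1/IPRED**2 plus a proportional variance.
    return math.sqrt(add_var / ipred**2 + prop_var)

add_var, prop_var = 0.01, 0.09   # placeholder THETA values

print(round(w1(100.0, add_var, prop_var), 3))  # 0.3: ~sqrt(prop_var) at high conc
print(round(w1(0.1, add_var, prop_var), 3))    # 1.044: additive term dominates near LOQ
```

This is one way to avoid the pure log model's over-weighting of near-LOQ samples while keeping the data log-transformed.]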
Why were the concentrations at the LOQ? Was it because BQL values were
inserted as LOQ? If so, that is not a good idea.
Thanks
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
Indranil Bhattacharya wrote:
> Hi Joachim, thanks for your suggestions/comments.
>
> When using LTBS I had used a different error model and the error block is shown below
>
> $ERROR
> IPRED = -5
> IF (F.GT.0) IPRED = LOG(F) ;log transforming predicition
> IRES=DV-IPRED
> W=1
> IWRES=IRES/W ;Uniform Weighting
> Y = IPRED + ERR(1)
>
> I also performed bootsrap on both LTBS and non-LTBS models and the non-LTBS CI were much more tighter and the precision was greater than non-LTBS. I think the problem plausibly is with the fact that when fitting the non-transformed data I have used the proportional + additive model while using LTBS the exponential model (which converts to additional model due to LTBS) was used. The extra additive component also may be more important in the non-LTBS model as for some subjects the concentrations were right on LOQ. I tried the dual error model for LTBS but does not provide a CV%. So I am currently running a bootstrap to get the CI when using the dual error model with LTBS. Neil
>
Quoted reply history
> On Fri, Aug 21, 2009 at 3:01 AM, Grevel, Joachim < [email protected] < mailto: [email protected] >> wrote:
>
> Hi Neil,
>
> 1. When data are log-transformed the $ERROR block has to change:
>
> additive error becomes true exponential error which cannot be
> achieved without log-transformation (Nick, correct me if I am wrong).
>
> 2. Error cannot "go away". You claim your structural model (THs)
>
> remained unchanged. Therefore the "amount" of error will remain the
> same as well. If you reduce BSV you may have to "pay" for it with
> increased residual variability.
>
> 3. Confidence intervals of ETAs based on standard errors produced
>
> during the covariance step are unreliable (many threads in NMusers).
> Do bootstrap to obtain more reliable C.I..
>
> These are my five cents worth of thought in the early morning, Good luck, Joachim
>
> ------------------------------------------------------------------------
>
> AstraZeneca UK Limited is a company incorporated in England and
> Wales with registered number: 03674842 and a registered office at 15
> Stanhope Gate, London W1K 1LN.
>
> *Confidentiality Notice: *This message is private and may contain
> confidential, proprietary and legally privileged information. If you
> have received this message in error, please notify us and remove it
> from your system and note that you must not copy, distribute or take
> any action in reliance on it. Any unauthorised use or disclosure of
> the contents of this message is not permitted and may be unlawful.
>
> *Disclaimer:* Email messages may be subject to delays, interception,
> non-delivery and unauthorised alterations. Therefore, information
> expressed in this message is not given or endorsed by AstraZeneca UK
> Limited unless otherwise notified by an authorised representative
> independent of this message. No contractual relationship is created
> by this message by any person unless specifically indicated by
> agreement in writing other than email.
>
> *Monitoring: *AstraZeneca UK Limited may monitor email traffic data
> and content for the purposes of the prevention and detection of
> crime, ensuring the security of our computer systems and checking
> compliance with our Code of Conduct and policies.
>
> -----Original Message-----
>
> *From:* [email protected]
> <mailto:[email protected]>
> [mailto:[email protected]
> <mailto:[email protected]>]*On Behalf Of *Indranil
> Bhattacharya
> *Sent:* 20 August 2009 17:07
> *To:* [email protected] <mailto:[email protected]>
> *Subject:* [NMusers] Linear VS LTBS
>
> Hi, while data fitting using NONMEM on a regular PK data set
> and its log transformed version I made the following observations
>
> - PK parameters (thetas) were generally similar between
>
> regular and when using LTBS.
> -ETA on CL was similar
> -ETA on Vc was different between the two runs.
> - Sigma was higher in LTBS (51%) than linear (33%)
>
> Now using LTBS, I would have expected to see the ETAs unchanged
>
> or actually decrease and accordingly I observed that the eta
> values decreased showing less BSV. However the %RSE for ETA on
> VC changed from 40% (linear) to 350% (LTBS) and further the
> lower 95% CI bound has a negative number for ETA on Vc (-0.087).
>
> What would be the explanation behind the above observations
>
> regarding increased %RSE using LTBS and a negative lower bound
> for ETA on Vc? Can a negative lower bound in ETA be considered
> as zero?
> Also why would the residual vriability increase when using LTBS?
>
> Please note that the PK is multiexponential (may be this is
>
> responsible).
>
> Thanks. Neil
>
> -- Indranil Bhattacharya
>
> --
> Indranil Bhattacharya
Leonid,
You are once again ignoring the actual evidence that NONMEM VI will fail to
converge, or not complete the covariance step, more or less at random. If you
bootstrap simulated data for which the model is known and not
overparameterised, it has been shown repeatedly that NONMEM VI will sometimes
converge and complete the covariance step, and sometimes fail to converge.
Of course, I agree that overparameterisation can be a cause of convergence
problems, but I would not agree that it is often the reason.
Bob Bauer has made efforts in NONMEM 7 to fix the random termination
behaviour and covariance-step problems by providing additional control over
numerical tolerances. It remains to be seen by direct experiment whether
NONMEM 7 is indeed less random than NONMEM VI.
BTW, in this discussion about LTBS I think it is important to point out that
the only systematic study I know of comparing LTBS with untransformed models
is the one you reported at the 2008 PAGE meeting
(www.page-meeting.org/?abstract=1268). My understanding of your results was
that there was no clear advantage of LTBS if INTER was used with
non-transformed data: "Models with exponential residual error presented in
the log-transformed variables performed similar to the ones fitted in
original variables with INTER option. For problems with residual variability
exceeding 40%, use of INTER option or log-transformation was necessary to
obtain unbiased estimates of inter- and intra-subject variability."
Do you know of any other systematic studies comparing LTBS with no
transformation?
Nick
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[email protected] tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
Nick,
We recently came across a very skewed residual distribution (easily seen in
placebo data, where there was no placebo effect) that we modeled as
additive + proportional in the log domain. Additive + proportional error in
the untransformed domain was worse. We have not tried more complex error
models in the untransformed domain, so it is not a clean comparison, but for
practical purposes, yes, there may be situations where log transformation is
still useful even with INTER.
Katya
-------------------
Ekaterina Gibiansky
Senior Director, PKPD, Modeling & Simulation
ICON Development Solutions
[email protected]
Quoted reply history
-----Original Message-----
From: [email protected] [mailto:[email protected]]
On Behalf Of Nick Holford
Sent: Friday, August 21, 2009 4:44 PM
To: nmusers
Subject: Re: [NMusers] Linear VS LTBS
Leonid,
You are once again ignoring the actual evidence that NONMEM VI will fail
to converge or not complete the covariance step more or less at random.
If you bootstrap simulated data in which the model is known and not
overparameterised it has been shown repeatedly that NONMEM VI will
sometimes converge and do the covariance step and sometimes fail to
converge.
Of course, I agree that overparameterisation could be a cause of
convergence problems but I would not agree that this is often the
reason.
Bob Bauer has made efforts in NONMEM 7 to try to fix the random
termination behaviour and covariance step problems by providing
additional control over numerical tolerances. It remains to be seen by
direct experiment if NONMEM 7 is indeed less random than NONMEM VI.
BTW in this discussion about LTBS I think it is important to point out
that the only systematic study I know of comparing LTBS with
untransformed models was the one you reported at the 2008 PAGE meeting
(www.page-meeting.org/?abstract=1268). My understanding of your results
was that there was no clear advantage of LTBS if INTER was used with
non-transformed data:
"Models with exponential residual error presented in the log-transformed
variables
performed similar to the ones fitted in original variables with INTER
option. For problems with
residual variability exceeding 40%, use of INTER option or
log-transformation was necessary to
obtain unbiased estimates of inter- and intra-subject variability."
Do you know of any other systematic studies comparing LTBS with no
transformation?
Nick
Leonid Gibiansky wrote:
> Neil
> Large RSE, inability to converge, failure of the covariance step are
> often caused by the over-parametrization of the model. If you already
> have bootstrap, look at the scatter-plot matrix of parameters versus
> parameters (THATA1 vs THETA2, .., THETA1 vs OMEGA1, ...), these are
> very informative plots. If you have over-parametrization on the
> population level, it will be seen in these plots as strong
> correlations of the parameter estimates.
>
> Also, look on plots of ETAs vs ETAs. If you see strong correlation
> (close to 1) there, it may indicate over-parametrization on the
> individual level (too many ETAs in the model).
>
> For random effect with a very large RSE on the variance, I would try
> to remove it and see what happens with the model: often, this (high
> RSE) is the indication that the error effect is not needed.
>
> Also, try combined error model (on log-transformed variables):
>
> W1=SQRT(THETA(...)/IPRED**2+THETA(...))
> Y = LOG(IPRED) + W1*EPS(1)
>
>
> $SIGMA
> 1 FIXED
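A side note on why this combined form is attractive: a log-scale variance of THETA(x)/IPRED**2 + THETA(y) behaves, for modest error magnitudes, like an additive-plus-proportional error on the original scale, with sd roughly SQRT(THETA(x) + THETA(y)*IPRED**2). A Python simulation sketch (the variance values are hypothetical) checks this approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.04, 0.01                     # hypothetical variance terms
observed, predicted = [], []
for ipred in (0.5, 5.0, 50.0):
    w1 = np.sqrt(a / ipred**2 + b)    # sd on the log scale, as in W1 above
    y = ipred * np.exp(w1 * rng.normal(size=200_000))
    observed.append(float(np.std(y)))                   # simulated sd of y
    predicted.append(float(np.sqrt(a + b * ipred**2)))  # additive + proportional sd
```
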
>
>
> Why were concentrations at LOQ? Was it because BQL values were inserted
> as LOQ? If so, that is not a good idea.
> Thanks
> Leonid
>
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
>
> Indranil Bhattacharya wrote:
>> Hi Joachim, thanks for your suggestions/comments.
>>
>> When using LTBS I had used a different error model and the error
>> block is shown below
>> $ERROR
>> IPRED = -5 ; placeholder so IPRED is defined when F=0 (avoids LOG(0))
>> IF (F.GT.0) IPRED = LOG(F) ; log-transformed prediction
>> IRES = DV - IPRED
>> W = 1
>> IWRES = IRES/W ; uniform weighting
>> Y = IPRED + ERR(1)
>>
>> I also performed a bootstrap on both the LTBS and non-LTBS models; the
>> non-LTBS CIs were much tighter and the precision was greater than with
>> LTBS.
>> I think the problem plausibly lies in the fact that when fitting the
>> non-transformed data I used the proportional + additive model, while
>> with LTBS the exponential model (which becomes an additive model under
>> LTBS) was used. The extra additive component may also be more
>> important in the non-LTBS model, as for some subjects the
>> concentrations were right at LOQ.
>>
>> I tried the dual error model for LTBS but it does not provide a CV%. So
>> I am currently running a bootstrap to get the CI when using the dual
>> error model with LTBS.
>>
>> Neil
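On the missing CV%: when the residual error is additive with variance sigma^2 on the log scale, the exact CV on the original scale follows from the log-normal identity CV = sqrt(exp(sigma^2) - 1), which is close to sigma itself when sigma is small. A short Python sketch (the sigma values are illustrative):

```python
import math

def cv_percent_from_log_sigma2(sigma2):
    """Exact CV% on the original scale for an additive residual error
    with variance sigma2 on the log scale (log-normal residual)."""
    return 100.0 * math.sqrt(math.exp(sigma2) - 1.0)

cv_small = cv_percent_from_log_sigma2(0.01)  # sigma = 0.1 -> close to 10% CV
cv_large = cv_percent_from_log_sigma2(0.26)  # sigma ~ 0.51 -> somewhat above 51%
```

For a combined (dual) error model the log-scale variance depends on IPRED, so a single CV% does not exist; the identity can still be applied pointwise at a given IPRED.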
>>
>> On Fri, Aug 21, 2009 at 3:01 AM, Grevel, Joachim
>> <[email protected]
>> <mailto:[email protected]>> wrote:
>>
>> Hi Neil,
>> 1. When data are log-transformed the $ERROR block has to change:
>> additive error becomes true exponential error which cannot be
>> achieved without log-transformation (Nick, correct me if I am wrong).
>> 2. Error cannot "go away". You claim your structural model (THs)
>> remained unchanged. Therefore the "amount" of error will remain the
>> same as well. If you reduce BSV you may have to "pay" for it with
>> increased residual variability.
>> 3. Confidence intervals of ETAs based on standard errors produced
>> during the covariance step are unreliable (many threads in NMusers).
>> Do bootstrap to obtain more reliable C.I.
>> These are my five cents worth of thought in the early morning,
>> Good luck,
>> Joachim
>>
>>
>> -----Original Message-----
>>
>>
>> *From:* [email protected] *On Behalf Of* Indranil
>> Bhattacharya
>> *Sent:* 20 August 2009 17:07
>> *To:* [email protected]
>> *Subject:* [NMusers] Linear VS LTBS
>>
>> Hi, while data fitting using NONMEM on a regular PK data set
>> and its log transformed version I made the following observations:
>> - PK parameters (thetas) were generally similar between regular
>> and when using LTBS.
>> - ETA on CL was similar.
>> - ETA on Vc was different between the two runs.
>> - Sigma was higher in LTBS (51%) than linear (33%).
>> Now using LTBS, I would have expected to see the ETAs unchanged
>> or actually decrease, and accordingly I observed that the eta
>> values decreased, showing less BSV. However the %RSE for ETA on
>> VC changed from 40% (linear) to 350% (LTBS) and further the
>> lower 95% CI bound has a negative number for ETA on Vc (-0.087).
>> What would be the explanation behind the above observations
>> regarding increased %RSE using LTBS and a negative lower bound
>> for ETA on Vc? Can a negative lower bound in ETA be considered
>> as zero?
>> Also why would the residual variability increase when using LTBS?
>> Please note that the PK is multiexponential (maybe this is
>> responsible).
>> Thanks.
>> Neil
>>
>> -- Indranil Bhattacharya
>>
>>
>>
>>
>> --
>> Indranil Bhattacharya
>>
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
[email protected] tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
Mats
Just a comment on your comments below:
"All models are wrong and I see no reason why the exponential error model would be different although I think it is better than the proportional error for most situations. "
"Why would you not be able to get sensible information from models that don't have an additive error component?"
I agree that for estimation purposes a purely proportional or exponential error model often seems to work well, and under the principle that "all models are wrong" it may well be justified. This is probably because the estimation processes in standard software are fairly robust to trivial solutions. The theory of optimal design is less forgiving: if you state that your error is proportional to the observation, then it concludes that there is no error when there is no observation (which we know is not true due to LOD issues). All designs are optimal when there is zero error, since the information matrix would be infinite. Practically, the smallest observation will have the least error and hence be, in some sense, close to optimal.
So, a proportional or exponential only error model should be used with caution in anything other than estimation and not used for the purposes of optimal design.
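Steve's point can be made concrete with a toy calculation. A Python sketch assuming a hypothetical one-compartment elimination curve (all numbers illustrative): with a purely proportional error the leading Fisher-information term about the elimination rate k is (df/dk)^2/var = t^2/sigma_prop^2, which grows without bound as the prediction decays, whereas an additive variance floor makes late, near-zero concentrations uninformative again.

```python
import math

c0, k = 100.0, 0.1        # hypothetical intercept and elimination rate
sigma_prop = 0.2          # proportional error sd

def info_about_k(t, sigma_add=0.0):
    """Leading Fisher-information term about k for one observation at time t."""
    f = c0 * math.exp(-k * t)                     # model prediction
    dfdk = -t * f                                 # sensitivity of f to k
    var = (sigma_prop * f) ** 2 + sigma_add ** 2  # residual variance model
    return dfdk ** 2 / var

# Proportional-only: a sample at t=100 looks far "better" than one at t=10.
# With an additive floor, the late sample is nearly worthless, as it
# should be once baseline assay noise is acknowledged.
```
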
Steve
--
Katya,
I have no doubt one can find examples that show TBS is better than no transformation. But as Leonid demonstrated that is not a consistent property of TBS.
I did not say that TBS was not useful -- however, I have not seen any evidence that it is generally preferable to no transformation. TBS brings its own practical problems, so I am rarely motivated to use it.
Nick
Gibiansky, Ekaterina wrote:
> Nick,
>
> We recently have come across a very skewed residual distribution (easily
> seen in placebo data, where there was no placebo effect) that we modeled
> as additive + proportional in the log domain. Additive + proportional
> error in untransformed domain was worse. We have not tried more complex
> error models in the untransformed domain, so it is not a clean
> comparison, but for practical purposes, yes, there may be situations
> when log transformation is still useful even with INTER.
>
> Katya
>
> -------------------
> Ekaterina Gibiansky
> Senior Director, PKPD, Modeling & Simulation
> ICON Development Solutions
> [email protected]
>
Quoted reply history
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> On Behalf Of Nick Holford
> Sent: Friday, August 21, 2009 4:44 PM
> To: nmusers
> Subject: Re: [NMusers] Linear VS LTBS
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[email protected] tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
Nick,
Pls see below.
Best regards,
Mats
Mats Karlsson, PhD
Professor of Pharmacometrics
Dept of Pharmaceutical Biosciences
Uppsala University
Box 591
751 24 Uppsala Sweden
phone: +46 18 4714105
fax: +46 18 471 4003
Quoted reply history
From: [email protected] [mailto:[email protected]] On
Behalf Of Nick Holford
Sent: Sunday, August 23, 2009 11:02 PM
To: Leonid Gibiansky
Cc: nmusers
Subject: Re: [NMusers] Linear VS LTBS
Leonid,
This is what I wanted to bring to the attention of nmusers:
"Of course, I agree that overparameterisation could be a cause of
convergence problems but I would not agree that this is often the reason. "
If you can provide some evidence that over-parameterization is *often* the
cause of convergence problems then I will be happy to consider it.
What kind of evidence did you have in mind?
My experience with NM7 beta has not convinced me that the new methods are
helpful compared to FOCE. They require much longer run times and currently
mysterious tuning parameters to do anything useful.
Truly exponential error is never the truth. This is a model that is wrong
and IMHO not useful. You cannot get sensible optimal designs from models
that do not have an additive error component.
All models are wrong and I see no reason why the exponential error model
would be different although I think it is better than the proportional error
for most situations. It seems that you assume that whenever TBS is used,
only an additive error (on the transformed scale) is used. Is that why you
say it is wrong? Or is it because you believe in negative concentrations?
Why would you not be able to get sensible information from models that don't
have an additive error component? (You can of course have a residual error
magnitude that increases with decreasing concentrations without having to
have an additive error; this regardless of whether you use the untransformed
or transformed scale).
Nick
Leonid Gibiansky wrote:
Hi Nick,
You are once again ignoring the actual evidence that NONMEM VI will fail to
converge or not complete the covariance step for over-parametrized problems
:)
Sure, there are cases when it doesn't converge even if the model is
reasonable, but it does not mean that we should ignore these warning signs
of possible ill-parameterization. I think that the group is already tired of
our once-a-year discussions on the topic, so, let's just agree to disagree
one more time :)
NONMEM 7, unlike earlier versions, will provide you with standard errors
even for non-converging problems. Also, you will always be able to use
Bayesian or SAEM methods, and never worry about convergence: just stop at
any point and do a VPC to confirm that the model is good :)
Yes, indeed, I observed that FOCEI with non-transformed variables was always
or nearly always equivalent to FOCEI in log-transformed variables. Still,
truly exponential error cannot be described in original variables, so I
usually try both in the first several models, and then decide which of them
to use for model development.
Thanks
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
Nick Holford wrote:

Quoted reply history
Hi Mats,
I was wondering when you would join in this discussion :-)
Mats wrote:
> What kind of evidence did you have in mind?
I think it would be pretty hard to provide evidence for Leonid's assertion that overparameterization is often the cause of convergence/covariance failures.
If one could investigate a large sample of models from typical users that have had convergence/covariance problems then it should be possible to determine which models are overparameterized and which are not. It would then be possible to confirm or deny the assertion that overparameterization is "often" the cause of this kind of problem.
I think Leonid's assertion is simply speculation at this stage. It could be true but there is no evidence for it. On the other hand I and others have provided evidence that convergence/covariance failures are not a sign of a poorly constructed model but are more likely due to defects in NONMEM VI.
> All models are wrong and I see no reason why the exponential error model would be different although I think it is better than the proportional error for most situations. It seems that you assume that whenever TBS is used, only an additive error (on the transformed scale) is used. Is that why you say it is wrong? Or is it because you believe in negative concentrations?
All models are wrong, of course. But some are more wrong than others.
Real measurement systems always have some kind of a random additive error ('baseline noise'). This means that a measurement of true zero with such a system will be distributed around zero -- sometimes negative and sometimes positive. If you talk to chemical analysts and push them to be honest then they will admit that negative measurements are indeed possible. Please note the difference between the true concentration (which can be zero but not negative) and measurements of the true concentration which can be negative.
A residual error model that is *only* exponential does not allow the description of negative concentration measurements. This is the same as having *only* an additive error model on the log transformed scale.
An additive model (or a proportional model which is just a scaled additive model) on the untransformed scale can describe the residual error associated with negative measurements.
Optimal designs based on the results of using only an exponential residual error model will not give sensible designs because the highest precision is at concentration approaching zero and thus approaching infinite time after the dose.
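The distinction between measurements and true concentrations is easy to show by simulation. A Python sketch (the assay noise values are hypothetical): additive noise around a small true concentration produces negative measurements some of the time, while a purely exponential error never can:

```python
import numpy as np

rng = np.random.default_rng(2)
true_conc = 0.05                                   # near the LOQ
add_noise_sd = 0.1                                 # additive 'baseline noise'
additive = true_conc + rng.normal(0.0, add_noise_sd, 10_000)
exponential = true_conc * np.exp(rng.normal(0.0, 0.5, 10_000))

neg_additive = int((additive < 0).sum())         # a sizeable fraction is negative
neg_exponential = int((exponential <= 0).sum())  # exp(...) > 0 always
```
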
> Why would you not be able to get sensible information from models that don’t have an additive error component? (You can of course have a residual error magnitude that increases with decreasing concentrations without having to have an additive error; this regardless of whether you use the untransformed or transformed scale).
You can, of course, get information from models that ignore the additive residual error. Indeed the additive residual error may well be quite negligible for describing data. If all you are going to do is to describe the past then the model may be adequate. But without some additional component in the residual error it will not be possible to find an optimal design using the methods I have seen (e.g. WinPOPT).
Best wishes,
Nick
Mats Karlsson wrote:
> Nick,
>
> Pls see below.
>
> Best regards,
>
> Mats
>
> Mats Karlsson, PhD
>
> Professor of Pharmacometrics
>
> Dept of Pharmaceutical Biosciences
>
> Uppsala University
>
> Box 591
>
> 751 24 Uppsala Sweden
>
> phone: +46 18 4714105
>
> fax: +46 18 471 4003
>
> *From:* [email protected] [ mailto: [email protected] ] *On Behalf Of *Nick Holford
>
> *Sent:* Sunday, August 23, 2009 11:02 PM
> *To:* Leonid Gibiansky
> *Cc:* nmusers
> *Subject:* Re: [NMusers] Linear VS LTBS
>
> Leonid,
>
> This is what I wanted to bring to the attention of nmusers:
>
> "Of course, I agree that overparameterisation could be a cause of convergence problems but I would not agree that this is often the reason. "
>
> If you can provide some evidence that over-paramerization is **often* *the cause of convergence problems then I will be happy to consider it.
>
> What kind of evidence did you have in mind?
>
> My experience with NM7 beta has not convinced me that the new methods are helpful compared to FOCE. They require much longer run times and currently mysterious tuning parameters to do anything useful.
>
> Truly exponential error is never the truth. This is a model that is wrong and IMHO not useful. You cannot get sensible optimal designs from models that do not have an additive error component.
>
> All models are wrong and I see no reason why the exponential error model would be different although I think it is better than the proportional error for most situations. It seems that you assume that whenever TBS is used, only an additive error (on the transformed scale) is used. Is that why you say it is wrong? Or is it because you believe in negative concentrations?
>
> Why would you not be able to get sensible information from models that don’t have an additive error component? (You can of course have a residual error magnitude that increases with decreasing concentrations without having to have an additive error; this regardless of whether you use the untransformed or transformed scale).
>
> Nick
>
> Leonid Gibiansky wrote:
>
> Hi Nick,
>
> You are once again ignoring the actual evidence that NONMEM VI will fail to converge or not complete the covariance step for over-parametrized problems :)
>
> Sure, there are cases when it doesn't converge even if the model is reasonable, but it does not mean that we should ignore these warning signs of possible ill-parameterization. I think that the group is already tired of our once-a-year discussions on the topic, so, let's just agree to disagree one more time :)
>
> Nonmem VII unlike earlier versions will provide you with the standard errors even for non-converging problems. Also, you will always be able to use Bayesian or SAEM, and never worry about convergence, just stop it at any point and do VPC to confirm that the model is good :)
>
> Yes, indeed, I observed that FOCEI with non-transformed variables was always or nearly always equivalent to FOCEI in log-transformed variables. Still, truly exponential error cannot be described in original variables, so I usually try both in the first several models, and then decide which of them to use for model development.
>
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
> Nick Holford wrote:
>
> Leonid,
>
> You are once again ignoring the actual evidence that NONMEM VI will fail to converge or not complete the covariance step more or less at random. If you bootstrap simulated data in which the model is known and not overparameterised it has been shown repeatedly that NONMEM VI will sometimes converge and do the covariance step and sometimes fail to converge.
>
> Of course, I agree that overparameterisation could be a cause of convergence problems but I would not agree that this is often the reason.
>
> Bob Bauer has made efforts in NONMEM 7 to try to fix the random termination behaviour and covariance step problems by providing additional control over numerical tolerances. It remains to be seen by direct experiment if NONMEM 7 is indeed less random than NONMEM VI.
>
> BTW in this discussion about LTBS I think it is important to point out that the only systematic study I know of comparing LTBS with untransformed models was the one you reported at the 2008 PAGE meeting (www.page-meeting.org/?abstract=1268). My understanding of your results was that there was no clear advantage of LTBS if INTER was used with non-transformed data: "Models with exponential residual error presented in the log-transformed variables performed similar to the ones fitted in original variables with INTER option. For problems with residual variability exceeding 40%, use of INTER option or log-transformation was necessary to obtain unbiased estimates of inter- and intra-subject variability."
>
> Do you know of any other systematic studies comparing LTBS with no transformation?
>
> Nick
>
> Leonid Gibiansky wrote:
>
> Neil
>
> Large RSE, inability to converge, and failure of the covariance step are often caused by over-parametrization of the model. If you already have a bootstrap, look at the scatter-plot matrix of parameters versus parameters (THETA1 vs THETA2, ..., THETA1 vs OMEGA1, ...); these are very informative plots. If you have over-parametrization at the population level, it will be seen in these plots as strong correlations of the parameter estimates.
>
> Also, look at plots of ETAs vs ETAs. If you see strong correlation (close to 1) there, it may indicate over-parametrization at the individual level (too many ETAs in the model).
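The scatter-plot matrix check for correlated bootstrap estimates can also be automated. A minimal sketch in Python, using a simulated bootstrap table (the parameter names, threshold, and data are illustrative, not NONMEM output):

```python
import numpy as np

def flag_correlated(samples, names, threshold=0.9):
    """Flag parameter pairs whose bootstrap estimates are strongly
    correlated -- a possible sign of over-parametrization."""
    r = np.corrcoef(samples, rowvar=False)  # parameters in columns
    flags = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) > threshold:
                flags.append((names[i], names[j], r[i, j]))
    return flags

# Toy bootstrap table: THETA2 is nearly a linear function of THETA1
rng = np.random.default_rng(0)
th1 = rng.normal(10.0, 1.0, 500)
th2 = 2.0 * th1 + rng.normal(0.0, 0.05, 500)   # almost perfectly correlated
om1 = rng.normal(0.3, 0.03, 500)               # independent of the thetas
table = np.column_stack([th1, th2, om1])
print(flag_correlated(table, ["THETA1", "THETA2", "OMEGA1"]))
```

Only the THETA1/THETA2 pair should be flagged here; in a real workflow the table would come from the bootstrap output file.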
>
> For a random effect with a very large RSE on the variance, I would try to remove it and see what happens with the model: often, a high RSE is an indication that the random effect is not needed.
>
> Also, try combined error model (on log-transformed variables):
>
> W1=SQRT(THETA(...)/IPRED**2+THETA(...))
> Y = LOG(IPRED) + W1*EPS(1)
>
> $SIGMA
> 1 FIXED
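On the log scale, that W1 is the delta-method standard deviation of log(C) for a combined additive-plus-proportional error on the original scale, with THETA(...) holding an additive and a proportional variance. A quick Monte-Carlo check in Python (the variance values are illustrative, not from this thread):

```python
import numpy as np

# Delta-method SD of log(C) when C = F*(1 + eps_prop) + eps_add,
# i.e. the W1 = SQRT(THETA(...)/IPRED**2 + THETA(...)) form, with the
# first THETA an additive variance and the second a proportional variance.
def w1(ipred, var_add, var_prop):
    return np.sqrt(var_add / ipred**2 + var_prop)

rng = np.random.default_rng(1)
ipred, var_add, var_prop = 50.0, 4.0, 0.0225
c = ipred * (1 + rng.normal(0, np.sqrt(var_prop), 200_000)) \
    + rng.normal(0, np.sqrt(var_add), 200_000)
print(np.log(c).std())                 # empirical SD of log-concentration
print(w1(ipred, var_add, var_prop))    # delta-method approximation, ~0.155
```

The empirical SD of the log-concentrations agrees with W1 to within a few percent at this (moderate) error magnitude; the approximation degrades as the total CV grows.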
>
> Why were concentrations at LOQ? Was it because BQL values were inserted as LOQ? If so, this is not a good idea.
>
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
> Indranil Bhattacharya wrote:
>
> Hi Joachim, thanks for your suggestions/comments.
>
> When using LTBS I had used a different error model and the error block is shown below
>
> $ERROR
> IPRED = -5
> IF (F.GT.0) IPRED = LOG(F) ;log transforming prediction
> IRES=DV-IPRED
> W=1
> IWRES=IRES/W ;Uniform Weighting
> Y = IPRED + ERR(1)
>
> I also performed a bootstrap on both the LTBS and non-LTBS models; the non-LTBS CIs were much tighter and the precision was greater than with LTBS. I think the problem plausibly lies in the fact that when fitting the non-transformed data I used the proportional + additive model, while with LTBS the exponential model (which becomes additive on the log scale) was used. The extra additive component may also be more important in the non-LTBS model, as for some subjects the concentrations were right at LOQ.
>
> I tried the dual error model for LTBS but it does not provide a CV%. So I am currently running a bootstrap to get the CI when using the dual error model with LTBS.
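For a purely exponential (log-additive) error, a CV% can be recovered exactly from the log-scale variance: CV = sqrt(exp(sigma^2) - 1), which reduces to sigma itself when sigma is small. With a combined error model the variance depends on IPRED, so no single CV% exists; one option is to evaluate W1 at a reference concentration. A sketch of the exact conversion (the sigma^2 values are illustrative):

```python
import math

def cv_from_log_variance(sigma2):
    """Exact CV of a log-normal residual with log-scale variance sigma2."""
    return math.sqrt(math.exp(sigma2) - 1.0)

for sigma2 in (0.01, 0.1089, 0.26):   # log-scale SD of 10%, 33%, ~51%
    sigma = math.sqrt(sigma2)
    print(f"sigma={sigma:.3f}  CV={cv_from_log_variance(sigma2):.3f}")
```

Note how the exact CV pulls away from sigma as the variance grows: a log-scale SD of 0.51 corresponds to a CV closer to 54% than 51%.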
>
> Neil
>
>
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[email protected] tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
Mats
Just a comment on your comments below:
"All models are wrong and I see no reason why the exponential error model would
be different although I think it is better than the proportional error for most
situations. "
"Why would you not be able to get sensible information from models that don't
have an additive error component?"
I agree that for estimation purposes a purely proportional or exponential error
model often seems to work well and under the principles of "all models are
wrong" it may well be appropriately justified. This is probably because
estimation processes that we use in standard software are fairly robust to
trivial solutions. The theory of optimal design is less forgiving in this light: if you stated that your error was proportional to the observation, then it would conclude that there is no error when the observation is zero (which we know is not true due to LOD issues). All designs are optimal when there is zero error, since the information matrix would be infinite. Practically, the smallest observation will have the least error and hence be, in some sense, close to optimal.
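The degeneracy is easy to see for a one-compartment bolus with C(t) = exp(-k*t) (dose scaled to 1): under proportional-only error, the Fisher information about k from a single sample grows as t^2 without bound, whereas any additive floor restores a finite optimal sampling time. A hypothetical sketch (parameter values are made up for illustration):

```python
import numpy as np

# Information about k from a single sample at time t for C(t) = exp(-k*t),
# under proportional-only vs combined residual error.
def info(t, k=0.1, cv=0.2, add=0.0):
    c = np.exp(-k * t)
    dc_dk = -t * c                  # sensitivity of C to k
    var = (cv * c)**2 + add**2      # residual variance at this concentration
    return dc_dk**2 / var

t = np.linspace(1, 200, 400)
print(t[np.argmax(info(t, add=0.0))])    # proportional only: latest time wins
print(t[np.argmax(info(t, add=0.001))])  # additive floor: finite optimum
```

With add=0 the information is t^2/cv^2 and the "optimum" is simply the edge of whatever time window is allowed; with even a tiny additive component the optimum lands at a finite time.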
So, a proportional or exponential only error model should be used with caution
in anything other than estimation and not used for the purposes of optimal
design.
Steve
--
Hi Steve,
I think you're missing an important point. As I wrote to Nick, you will
never get all concentrations reported regardless of their value. At some point,
you will only get the information that the concentration is below a limit
(LOQ, LOD, LO?). This you should take into account in your design. Error
models for concentrations below LO? are not entirely unimportant, but will
not have the properties you mention below.
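One standard way to take that censoring into account is Beal's M3 method: replace the density for a BQL record with the probability that the observation falls below the limit. A hedged sketch of the per-observation log-likelihood under a log-normal residual (the function names and values are mine, purely illustrative):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglik_obs(pred, sigma, conc=None, loq=None):
    """Log-likelihood of one record: a log-normal density for a quantified
    concentration, or P(C < LOQ) for a below-limit ("BQL") record (M3)."""
    mu = math.log(pred)
    if conc is not None:                      # quantified observation
        z = (math.log(conc) - mu) / sigma
        return -0.5 * z * z - math.log(sigma * conc * math.sqrt(2 * math.pi))
    return math.log(norm_cdf((math.log(loq) - mu) / sigma))  # BQL record

print(loglik_obs(5.0, 0.3, conc=4.2))   # measured point
print(loglik_obs(0.05, 0.3, loq=0.1))   # BQL: prediction well below LOQ
```

A BQL record whose prediction sits well below the limit contributes a probability near 1 (log-likelihood near 0), while a prediction above the limit is penalized, which is exactly the design-relevant information Mats describes.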
Best regards,
Mats
Mats Karlsson, PhD
Professor of Pharmacometrics
Dept of Pharmaceutical Biosciences
Uppsala University
Box 591
751 24 Uppsala Sweden
phone: +46 18 4714105
fax: +46 18 471 4003
--
Hi, my approach has been to use both LTBS and un-transformed data and then
see which one characterizes the data better. Then change initial estimates
and see how the model predicts.
My previous experience was that when using untransformed data with INTER the
model was not always able to converge, especially when the PK is multiphasic.
However, when using LTBS (exponential only) the models converged and predicted
the data (Phase 1, 2 or 3) quite well.
The current data set I am working with did not follow the exact trend and
that is why I had posed the original question.
Also when performing a preliminary bootstrap with 500 subjects I noticed
that LTBS showed bi-modality in Vc and Ka but not when using the
untransformed data.
Neil
On Sun, Aug 23, 2009 at 11:46 PM, Stephen Duffull <
[email protected]> wrote:
> Mats
>
> > I think you're missing an important point. As I wrote to Nick, you will
> > never get concentrations reported regardless of their value. At some
> > point,
> > you will only get the information that concentration is below a limit
> > (LOQ,LOD,LO?). This you should take into account in your design. Error
> > models for concentrations below LO? are not entirely unimportant, but
> > will
> > not have the properties you mention below.
>
> I am happy with either accounting for censoring, or including an additive
> error model or both for optimal design use with proportional error models.
> I don't think that proportional only error models in the absence of the
> above is good. So I believe we agree here.
>
> Steve
> --
>
>
--
Indranil Bhattacharya
Sorry to back up a few days on this thread, but I could not resist piling yet another comment on the traditional covariance success/convergence debate.
From a purely algorithmic perspective, the relationship between convergence behavior and success of the covariance step is extremely tenuous.
NONMEM uses a version of a BFGS quasi-Newton method to drive the ELS objective function optimization. At any given stage, the descent direction is of the form -(H_BFGS)**-1 * g, where g is the gradient of the objective function and H_BFGS is a positive definite matrix that captures accumulated curvature information recovered from the entire sequence of previously evaluated gradients at various points. Note that the 'true' Newton direction is -H**-1 * g, where H is the Hessian evaluated at the current point. H also captures curvature information localized to the current point, but is difficult and expensive to evaluate, so BFGS methods use the much more easily computed quasi-Newton direction. With accurate gradients, the quasi-Newton direction is necessarily a descent direction (since H_BFGS is guaranteed to be positive definite), while the true Newton direction need not be. Note that H and H_BFGS do not bear any necessary relation to each other, although H_BFGS is often thought of as a surrogate or approximant for H.
At convergence (usually recognized by the algorithm as the gradient becoming 'sufficiently' small in magnitude), the standard errors in principle are computed from the square roots of the diagonal elements of H**(-1), assuming H is positive definite (which is almost always the case if at least a local minimum has been found - there are some degenerate exceptions, called non-Morse points, where the Hessian is only positive semidefinite at the local optimum, but these are rarely encountered). But in practice the true Hessian H is hard to compute, and NM uses a numerical approximation to the Hessian (I believe forward difference, meaning for example that in one dimension the second derivative at x is approximated by [f(x+2*eps) - 2*f(x+eps) + f(x)]/eps**2). Even if an optimal step size eps is used and f can be computed to full 15-digit double precision accuracy (which is way too optimistic if the model is defined by ODEs which are solved numerically), the best you can do numerically with forward differences is about 5 significant digits of agreement between the numerical Hessian and the true Hessian. Central differences, which are much more expensive, do a little bit better.
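The "about 5 significant digits" figure is easy to reproduce for the forward scheme quoted above: truncation error shrinks with eps while rounding error grows, and the best trade-off for a smooth f lands near 1e-5 relative error. A small numerical experiment (the choice of f = exp is mine, purely illustrative):

```python
import numpy as np

# Forward-difference second derivative: [f(x+2h) - 2 f(x+h) + f(x)] / h**2.
def fwd_d2(f, x, h):
    return (f(x + 2*h) - 2*f(x + h) + f(x)) / h**2

x, true = 1.0, np.exp(1.0)          # f = exp, so f''(1) = e exactly
hs = np.logspace(-9, -1, 200)       # scan step sizes
rel_err = np.array([abs(fwd_d2(np.exp, x, h) - true) / true for h in hs])
best = rel_err.min()
print(best)                         # best achievable relative error
```

Too large a step is dominated by truncation error, too small a step by cancellation in the numerator; even at the sweet spot only roughly five digits survive.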
But for the Hessian inverse, if the numerical Hessian or the true Hessian has a condition number greater than 10^5 (a condition number of 10^5 is actually quite benign for most matrix computations), then there are no significant digits of agreement between the standard errors computed from the numerical Hessian and the standard errors computed from the true Hessian. Moreover, in this case a perfectly respectable positive definite matrix H can easily become indefinite when approximated numerically, which will cause the covariance step to fail. Thus failure of the covariance step means only that the numerical Hessian at the converged point is not positive definite; very little can be concluded from this regarding the true H, due to the inherent numerical difficulties and imprecision in numerical Hessian computations. Moreover, the numerical Hessian plays no role whatever in the convergence of the algorithm. Conversely, success of the covariance step simply means the numerical Hessian is positive definite.
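The point that a positive definite matrix can go indefinite under differencing noise takes only a few lines to demonstrate: start from a genuinely positive definite Hessian with a tiny smallest eigenvalue and add a symmetric perturbation at the level of finite-difference error (the numbers are contrived for illustration):

```python
import numpy as np

# A positive definite "true" Hessian with condition number 1e8.
H = np.diag([1.0, 1e-8])

# A fixed symmetric perturbation standing in for finite-difference noise.
E = 1e-7 * np.array([[0.3, -0.5],
                     [-0.5, -0.4]])
H_num = H + E                       # the "numerical" Hessian

print(np.linalg.eigvalsh(H))        # both eigenvalues positive
print(np.linalg.eigvalsh(H_num))    # smallest eigenvalue is now negative
```

The true H is perfectly respectable, yet the perturbed version is indefinite, so a covariance step built on it would fail even though nothing is wrong with the model at that point.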
What is well known is that most gradient-based unconstrained numerical optimization methods (such as simple gradient descent, Newton, quasi-Newton, and conjugate gradient) typically work best (speed, accuracy, and reliability) when the true Hessian H in the vicinity of the optimum has a fairly narrow range of eigenvalues - e.g. when the condition number max eigenvalue / min eigenvalue is relatively small. (In the case of conjugate gradient methods, it is possible to obtain theoretical bounds on rates of convergence in terms of the condition number, with the rates being inversely related to the condition number.) Of course, a good condition number near the optimum but not near the starting point does not preclude convergence problems occurring away from the neighborhood of the optimum. But all other things being equal, well-conditioned true Hessians (small condition numbers) in the neighborhood of the optimum are generally favorable for convergence behavior. In terms of parameterization, each additional parameter usually increases the condition number at the optimum (this can be shown to be strictly true in the case of ordinary linear regression). So in this sense, overparameterization 'typically' adversely affects convergence behavior. But unfortunately it is not clear how to turn this into a practical numerical overparameterization criterion, particularly given the inherent unreliability of information derived from numerical Hessians.
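The ordinary linear regression case can be checked directly: by eigenvalue interlacing, the condition number of X'X cannot decrease as columns are added, and it grows quickly when the new predictors are correlated with existing ones. A toy illustration with a simulated design matrix (dimensions and correlation structure are my choices):

```python
import numpy as np

# Condition number of X'X as correlated predictors are added one by one.
rng = np.random.default_rng(7)
n = 200
base = rng.standard_normal(n)
cols = [np.ones(n), base]           # intercept + one predictor

conds = []
for _ in range(4):
    # each new predictor shares most of its variance with 'base'
    cols.append(base + 0.3 * rng.standard_normal(n))
    X = np.column_stack(cols)
    conds.append(np.linalg.cond(X.T @ X))
print(conds)                        # grows as columns are added
```

Each near-collinear column inflates the largest eigenvalue of X'X while leaving the smallest near the noise scale, so the condition number, and with it the difficulty of both optimization and numerical Hessian inversion, climbs with every added parameter.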
Robert H. Leary, PhD
Fellow
Pharsight - A Certara(tm) Company
5625 Dillard Dr., Suite 205
Cary, NC 27511
Phone/Voice Mail: (919) 852-4625, Fax: (919) 859-6871
Email: [email protected]