RE: Constrain PD values using a logistic transformation
Dear Dr. Gillespie,
Your insight is greatly appreciated. I have 3 follow-up questions:
1.
Is there a typo in the equation Y = 100 * LOG(IPRE/(100-IPRE))+ERR(1)?
I guess it would be fine to write IPRE=LOG(A(1)/(100-A(1))) and Y=IPRE+ERR(1), so that
I can compare IPRE vs. DV in the diagnostics.
2.
By extended logit, do you mean IPRE=LOG((A(1)+0.5)/(100.5-A(1)))? I
guess Yobs would also have to be computed as LOG((Xobs+0.5)/(100.5-Xobs)).
3.
I have read some of your elegant work on the AD progression model.
However, I am not sure how to implement the beta distribution in NONMEM. It
appeared to me that the ratio of ADAS-cog to its maximum of 70 was
computed and then put through a logit transform. Could you kindly describe
this method in a little more detail?
Interestingly, I haven't received any other responses from NMusers. I read a
couple of papers in the past few days, and all of them appear to ignore this
problem: the commonly used additive residual error model is applied even to
scores with bounded endpoints.
Thank you,
Mahesh
________________________________
From: Bill Gillespie [mailto:[email protected]]
Sent: Fri 7/2/2010 3:04 PM
To: Samtani, Mahesh [PRDUS]
Cc: [email protected]
Subject: Re: [NMusers] Constrain PD values using a logistic transformation
Hi Mahesh,
If you plan to use one of the approximate likelihood methods, e.g., FO or FOCE,
you may prefer to transform the data and use an additive model. In other words,
transform the data according to Yobs = LOG(Xobs/(100-Xobs)) and use Y = 100 *
LOG(IPRE/(100-IPRE))+ERR(1), where Xobs is the observed data on the restricted
range.
Since you have some data at the extremes, you may want to extend the range used
for the extended logit to (-0.5, 100.5) or something similar. Otherwise you'll
end up with under- or over-flows.
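As a quick numerical sanity check of the extended-range idea (sketched here in Python rather than NM-TRAN; the function names are my own, not part of any NONMEM code), the extended logit on (-0.5, 100.5) keeps the boundary scores 0 and 100 finite, whereas the plain logit of 0 or 100 is undefined:

```python
import math

LO, HI = -0.5, 100.5  # extended range suggested above; plain logit uses 0 and 100

def ext_logit(x):
    """Extended logit: maps (LO, HI) onto the whole real line."""
    return math.log((x - LO) / (HI - x))

def inv_ext_logit(z):
    """Back-transform: maps the real line onto (LO, HI)."""
    return LO + (HI - LO) / (1.0 + math.exp(-z))

# Boundary observations stay finite under the extended transform,
# unlike LOG(0/100) or LOG(100/0) under the plain logit:
lo_edge = ext_logit(0.0)    # finite (= log(0.5/100.5))
hi_edge = ext_logit(100.0)  # finite (= log(100.5/0.5))

# The round trip recovers the original score:
for x in (0.0, 12.5, 50.0, 100.0):
    assert abs(inv_ext_logit(ext_logit(x)) - x) < 1e-9
```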
Regarding other transformations, anything that transforms from a bounded
interval to the real line is potentially fair game. For example, you could use
probit or complementary log-log transformations extended to (0, 100). Another
approach would be to use a beta distribution extended to (0, 100) instead of
(0, 1) for the likelihood. Such an approach is described for a model of
ADAS-cog scores as a function of time (see the Alzheimer's disease progression
model at http://opendiseasemodels.org/, specifically the model used for the
"raw" scores).
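To illustrate the rescaled-beta idea only (this is a minimal Python sketch under my own assumptions, not the actual AD progression model; the shape parameters a and b are purely illustrative, and for ADAS-cog the upper bound would be 70 rather than 100):

```python
import math

def beta_logpdf(u, a, b):
    """Log density of a standard Beta(a, b) distribution on (0, 1)."""
    log_beta_fn = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1.0) * math.log(u) + (b - 1.0) * math.log(1.0 - u) - log_beta_fn

def scaled_beta_logpdf(x, a, b, lo=0.0, hi=100.0):
    """Log density of a beta distribution rescaled from (0, 1) to (lo, hi).

    The score is mapped to the unit interval, and the Jacobian term
    -log(hi - lo) accounts for the change of scale.
    """
    u = (x - lo) / (hi - lo)
    return beta_logpdf(u, a, b) - math.log(hi - lo)

# Sanity check: Beta(2, 2) has density 6*u*(1-u), so at the midpoint of
# (0, 100) the rescaled density is 1.5/100.
assert abs(math.exp(scaled_beta_logpdf(50.0, 2.0, 2.0)) - 1.5 / 100.0) < 1e-12
```

In a NONMEM implementation this log density would go into a user-supplied likelihood (F_FLAG=1 style), but the details depend on the specific model.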
Cheers,
Bill Gillespie
On Jul 1, 2010, at 3:47 PM, Samtani, Mahesh [PRDUS] wrote:
Dear NMusers,
I am trying to model some PD data, which has a lower bound of zero and
an upper bound of 100. I was wondering how to implement this restriction and if
it was possible to use the general logistic transformation in the $ERROR block
shown below:
$ERROR
IPRE=A(1)                       ; individual prediction on the (0, 100) scale
LT=LOG(IPRE/(100-IPRE))+ERR(1)  ; logit of the prediction plus additive error
Y=(100*EXP(LT))/(1+EXP(LT))     ; back-transform so Y stays within (0, 100)
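As a small numerical check (written in Python for illustration, mirroring the back-transform in the $ERROR code above), the inverse logistic confines Y to the open interval (0, 100) regardless of the value the logit-scale term LT takes:

```python
import math

def y_from_lt(lt):
    """Back-transform of the logit-scale quantity LT to the (0, 100) scale,
    matching Y = (100*EXP(LT))/(1+EXP(LT)) above.

    Note: math.exp overflows for lt above roughly 709, so keep LT moderate
    in this illustration.
    """
    return 100.0 * math.exp(lt) / (1.0 + math.exp(lt))

# Even fairly extreme logit-scale values map back strictly inside (0, 100):
for lt in (-30.0, -1.0, 0.0, 1.0, 30.0):
    y = y_from_lt(lt)
    assert 0.0 < y < 100.0
```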
If this is appropriate, do I understand correctly that this is NOT a
transform-both-sides approach, i.e., DV stays in its original or natural form?
Finally, the logistic transformation extends from -infinity to +infinity. However,
the dataset does have a small number of values that are exactly zero or 100 (five
zeros and a couple of 100s in a dataset of about 700 observations). Do these
few extreme values cause problems when the LT term is
back-transformed as above?
Any other method and references for papers that use these types of
constraints would be greatly appreciated.
Warm regards and thanks in advance...MNS