Re: Error model
Navin,
Another model that can be applied in the log-transformed domain is documented in:
http://huxley.phor.com/nonmem/nm/99apr242002.html
and
http://huxley.phor.com/nonmem/nm/99jan071999.html
It has properties similar to ADD+PROP in the log domain. Concentrations that are low are weighted less; in fact, since it is in the log domain, concentrations that are high are weighted less as well, meaning the middling concentrations carry the highest weight. It is mentioned in:
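For reference, the combined (additive + proportional) error model on the log scale that the above is being compared to is often coded along these lines; this is only a sketch, and the THETA numbering is my own illustrative assumption, not from the posts linked above:

```
$ERROR
  ; combined error on the log scale (sketch; THETA numbering illustrative)
  ; on the log scale a proportional error becomes additive, while an
  ; additive error on the original scale behaves approximately like EPS/F
  IPRED = LOG(F)
  W     = SQRT(THETA(3)**2 + (THETA(4)/F)**2)  ; proportional + additive parts
  Y     = IPRED + W*EPS(1)

$SIGMA 1 FIX  ; variance carried by W, so EPS(1) is fixed to unit variance
```

As F becomes small, the THETA(4)/F term inflates W, so low concentrations get less weight, which is the behavior described above.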
SL Beal. /Ways to Fit a PK Model with Some Data Below the Quantification Limit/ J. Pharmacokinet. Pharmacodyn. 28:481-504, 2001.
It is given in Equation 11. He states:
"Logarithmically transformed ... observations whose pharmacokinetic predictions become theoretically small, but both their central tendency and variance seem to remain constant and above certain levels (assuming that the assay is accurate, this can only happen when the kinetics are misspecified), in which case another useful model for the logarithmically transformed observations is ... (Eq. 11 here) ... where m is an extra positively constrained parameter."
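As I read Eq. 11, the extra parameter m goes inside the log, so that as the prediction F approaches zero the modeled central tendency levels off at log(m) with constant variance, matching the behavior Beal describes. A hedged NM-TRAN sketch of that reading (the THETA index and initial estimate are my own assumptions; please check against the paper):

```
$ERROR
  ; Beal Eq. 11 (as I read it): log(y) = log(f + m) + eps
  ; m = THETA(5) > 0 keeps the central tendency and variance of the
  ; log-transformed observation from collapsing as F becomes very small
  IPRED = LOG(F + THETA(5))
  Y     = IPRED + EPS(1)

$THETA (0, 0.01)  ; m, positively constrained (initial estimate illustrative)
```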
Just FYI.
Matthew Fidler
[EMAIL PROTECTED]
navin goyal wrote:
> Dear Nonmem users,
>
> I am analysing POPPK data with sparse sampling.
>
> The dosing is an IV infusion over one hour, and we have data at time points 0 (predose), 1 (end of infusion), and 2 (one hour post-infusion). The drug has a half-life of approximately 4 hours, and the dose is given once every fourth day (almost 20 half-lives). When I ran my control stream and looked at the output table, I got some IPREDs at predose time points where the DV was 0.
>
> The event ID (EVID) for these time points was 4 (reset).
>
> I was wondering why NONMEM predicted concentrations at these time points? There were a couple of time points like this.
>
> I started with untransformed data and fitted my model,
> but after bootstrapping the errors on the etas and sigma were very high.
>
> I log-transformed the data, which improved the etas, but the sigma shot up to more than 100%. (Is it because the data is very sparse, or do I need to use a better error model?) Are there any other error models that could be used with log-transformed data, apart from
>
> Y=Log(f)+EPS(1)
>
> Any suggestions would be appreciated.
>
> --
>
> --Navin