Re: Model building algorithm
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: Model building algorithm
Date: Mon, 24 Apr 2000 13:54:44 -0700
Nick Holford wrote:
>
> LSheiner wrote:
> >
> > I think the injunctions you have heard about building the structural model
> > first were not stated as they should have been: the idea is to create the
> > structural model in a context of a flexible inter-individual variance model, so
> > Bill's idea of putting etas on everything goes along with that philosophy.
> > Indeed, I think we have all seen cases where doing what Bill says, but
> > limiting OMEGA to be strictly diagonal, has led to problems in model building.
>
> My 2 cents -- a flexible between subject variability (BSV) model means
> to me having a full block OMEGA structure so that any parameter which
> has BSV is by default assumed to be correlated with any other parameter.
> The rationale for this is that NOT including the covariance implies a
> specific assumption that the covariance is zero. This assumption might
> be reasonable for the covariance between a set of PK parameters and a
> set of PD parameters in the same model but I would not think it
> reasonable a priori to have a zero covariance within the set of PK
> parameters or within the set of PD parameters.
>
> Note that the same idea applies to within subject variability (WSV). The
> commonly known example of WSV uses occasion as a covariate to identify
> between occasion variability (BOV) (aka IOV). An OMEGA BLOCK should be
> considered the default for parameters with BOV e.g. BOV in a PK model
> may often be due to differences in bioavailability which will be
> reflected in the covariance between CL/F and V/F. Failure to use an
> OMEGA BLOCK for BOV is the same as saying bioavailability does not
> influence CL/F and V/F (assuming there is not an explicit OMEGA for F
> in the model).
I agree -- when I said "limiting OMEGA to be strictly diagonal has led to problems in model building" I thought I was saying that one was well advised to use a full OMEGA - i.e., one with non-zero off-diagonal elements.
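[Editor's sketch: the full-OMEGA and BOV suggestions above might look like the following in a control stream. All parameter names, ETA assignments, and initial estimates are purely illustrative, not from the original posts.]

```
$PK
  ; Hypothetical one-compartment sketch with BSV and BOV (IOV).
  ; OCC is assumed to be an occasion indicator (1 or 2) in the data set.
  BOVCL = ETA(3)
  BOVV  = ETA(4)
  IF (OCC.EQ.2) THEN
    BOVCL = ETA(5)
    BOVV  = ETA(6)
  ENDIF
  CL = THETA(1)*EXP(ETA(1) + BOVCL)
  V  = THETA(2)*EXP(ETA(2) + BOVV)

$OMEGA BLOCK(2)       ; full BSV block: the CL-V covariance is estimated,
  0.1                 ; not fixed to zero by a diagonal OMEGA
  0.05 0.1
$OMEGA BLOCK(2)       ; BOV block for occasion 1, also with covariance
  0.04
  0.02 0.04
$OMEGA BLOCK(2) SAME  ; occasion 2 shares the same BOV block
```

A diagonal $OMEGA in place of the first block would assert zero CL-V covariance, which is exactly the implicit assumption being warned against.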
>
> > I generally nowadays use a 2 x 2 OMEGA while building my
> > regression model: one eta scales Y
> > (i.e., Y = F*(1+eta) or F*EXP(eta)) and one eta scales X (usually
> > time ... This is implemented as TSCALE = exp(eta), where TSCALE is
> > allowed). Effectively, then, the generic variability model is
> > F = fn(time*exp(eta1))*exp(eta2).
>
> Can you explain why you would use this rather than the more common model
> for variability in Y e.g.
> F = fn(time*exp(eta1))*exp(eps1) ?
Again, perhaps I was not clear - I was talking about inter-individual variability. In NONMEM-speak, F models the prediction (the regression function) and Y models the observation. So, to be complete, I was actually suggesting:
F = fn(time*exp(eta1))*exp(eta2)
Y = F*exp(eps1) + eps2
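[Editor's sketch: in control-stream form this pair of equations might be written as follows. The structural model and all names are illustrative; only the variability structure is taken from the text above.]

```
$PK
  TSCALE = EXP(ETA(1))   ; eta on time: rescales each individual's time axis,
                         ; so fn is evaluated at time*exp(eta1)
  CL = THETA(1)
  V  = THETA(2)

$ERROR
  ; ETA(2) scales the prediction F; the residual model is combined
  ; proportional (EPS(1)) and additive (EPS(2)).
  IPRED = F*EXP(ETA(2))
  Y = IPRED*EXP(EPS(1)) + EPS2
```

(Here EPS2 should read EPS(2); TSCALE is the PREDPP reserved variable that rescales time, which is why the approach only works "where TSCALE is allowed".)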
>
> How do you interpret the eta on Time? If you had used an additive model
> e.g. TSCALE=eta then I would think of this as reflecting random error in
> measurement of time but I have difficulty understanding the subject
> specific magnitude. I find it even harder to interpret a proportional
> model where the error in time gets bigger the longer one waits.
It's not an error, but a source of variation. The eta on time simply rescales the time axis for each individual: things run "slower" for some and "faster" for others. It covers, in a non-parametric way, differences between individuals in the "time" dimension, much as the eta multiplying F covers differences among individuals in the spatial dimension. The time-scaling works, for example, for inter-species scaling, and corresponds to the idea of normalizing time to species lifetimes, so that all lifetimes are unity after transformation.
--
_/ _/ _/_/ _/_/_/ _/_/_/ Lewis B Sheiner, MD (lewis@c255.ucsf.edu)
_/ _/ _/ _/_ _/_/ Professor: Lab. Med., Bioph. Sci., Med.
_/ _/ _/ _/ _/ Box 0626, UCSF, SF, CA, 94143-0626
_/_/ _/_/ _/_/_/ _/ 415-476-1965 (v), 415-476-2796 (fax)