RE: Describing variability
From:"Kowalski, Ken"
Subject: RE: [NMusers] Describing variability
Date: Tue, 1 Apr 2003 12:15:00 -0500
Bill,
My comments are embedded below.
Ken
-----Original Message-----
From: Bachman, William [mailto:bachmanw@globomax.com]
Sent: Tuesday, April 01, 2003 11:23 AM
To: 'Kowalski, Ken'; Bachman, William; 'Diane R Mould'; VPIOTROV@PRDBE.jnj.com;
n.holford@auckland.ac.nz; nmusers@globomaxnm.com
Subject: RE: [NMusers] Describing variability
Ken,
While those are all certainly good suggestions (and I highly recommend them), there are still some relatively
simple models (read: not over-parameterized) where you won't get a successful $COV (e.g., when sampling is
limited and there is just no way you're going to get any more or better data, such as in pediatric studies).
[Kowalski, Ken] An over-parameterized model arises when the data cannot support estimating all of
the parameters, regardless of the reason. In your example above, the over-parameterization is a result of
the limitations of the design. An over-parameterized model may be considered a simple model with the right
set of data, but with a limited set of data it can be overly complex. For example, a dose-response might be
correctly described by a simple Emax model; however, if we only test doses in a narrow range, say in the
linear range of the dose-response, there may be an infinite number of combinations of Emax and ED50
estimates that provide a good fit to the dose-response. Certainly, as a descriptive summary of the
dose-response, the over-parameterized model fit may be fine, but I would be extremely cautious in using
these estimates to guide dose selection for a future study, particularly if I were planning to extrapolate
to higher doses.
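To make the identifiability problem concrete: at doses well below ED50, E = Emax*D/(ED50 + D) is
approximately (Emax/ED50)*D, so the data only determine the ratio Emax/ED50, not the two parameters
separately. A short illustrative Python sketch, with made-up doses and parameter values, shows two very
different (Emax, ED50) pairs fitting a narrow low-dose range essentially identically while extrapolating
to very different maximum effects:

import numpy as np

def emax_model(dose, emax, ed50):
    # Standard Emax dose-response: E = Emax * D / (ED50 + D)
    return emax * dose / (ed50 + dose)

# Hypothetical doses confined to the near-linear (low-dose) range
doses = np.array([1.0, 2.0, 4.0, 8.0])

# Hypothetical "observed" response generated with Emax = 100, ED50 = 200
obs = emax_model(doses, emax=100.0, ed50=200.0)

# Two very different parameter sets sharing the same ratio Emax/ED50 = 0.5
for em, ed in [(100.0, 200.0), (500.0, 1000.0)]:
    pred = emax_model(doses, em, ed)
    sse = float(np.sum((pred - obs) ** 2))
    print(f"Emax={em:6.1f}  ED50={ed:6.1f}  SSE={sse:.5f}")

# Both parameter sets fit these doses nearly identically, yet they imply
# maximum effects of 100 versus 500 when extrapolated to high doses.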
Should you not use the model for any purpose? I don't think so. It may still be adequate for descriptive purposes
[Kowalski, Ken] Agreed, see comment above.
or planning of further studies.
[Kowalski, Ken] Using over-parameterized models for planning further studies should be done cautiously,
recognizing the limitations of the parameter estimates and the problems in using the model to extrapolate.
$COV is a bonus in that it gives you added confidence that you have not found a local minimum (as well as
estimates of the standard errors, etc.). If the situation warrants, certainly take a Bayesian approach or do
extensive simulation studies, but I don't think that's ALWAYS necessary, do you? You have implied that I
don't think successful convergence or $COV is ever needed or desired. The point I'm trying to make is that
some sort of balanced approach can be taken and sometimes you have to "go with what you got."
[Kowalski, Ken] I wouldn't put it that way. "A bonus" makes it sound like we don't need to strive to obtain
stable models. To the contrary, that should be the norm. I do recognize that desperate times may call for
desperate measures; I'm just concerned that we're sending the wrong message, one that trivializes the
importance of convergence and the $COV step.
Bill