Re: Probabilistic model
From: "Leonid Gibiansky"
Subject: Re: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 8:40 pm
Let me add an example in support of Nick's suggestion:
In the project (real data, first PK, then PK/PD) that motivated my small example, we noticed that the expected score
ESC = SUM(SCORE_i * P_i),
defined as the sum over score levels of (level * probability of the score at that level), described the observed data with very good accuracy. That motivated two continuous models. In the first, we fitted ESC as defined above to the observed DV (score). The second modeled ESC as an EMAX function of concentration. Individual predictions of these two continuous models were as good as the individual predictions of the probabilistic model. We tried predictive check simulations and
found that all three models over-estimated the frequency of the highest scores (those with the strongest effect); the probabilistic model was slightly better than the continuous models in this regard. The continuous models took much less time (many hours instead of many days) and effort to converge (e.g., initial parameter values were obtained by FO, and FOCEI then converged starting from the FO final estimates); this was much simpler than guessing initial conditions for the probabilistic model. Both
types of models predicted a very similar covariate PD effect (requiring about a 25-30% dose adjustment for a subgroup of patients). The continuous models were also more stable and actually converged (i.e., runs started from different initial conditions led to similar solutions), while the probabilistic model exhibited the behavior described in the original example that started this discussion.
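For readers less familiar with the expected-score construction, here is a minimal, self-contained sketch in Python/NumPy rather than NONMEM. All parameter values, the cutpoints, and the proportional-odds form of the probability model are hypothetical, chosen only to illustrate ESC = SUM(SCORE_i * P_i) with an EMAX drug effect and a predictive-check-style comparison; none of them are the values from the project described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-level ordinal score (0..3); all numbers below are
# illustrative only.
levels = np.array([0, 1, 2, 3])

def level_probs(conc, e0=0.2, emax=2.0, ec50=5.0):
    """Toy probabilistic model: an EMAX drug effect shifts the cumulative
    logits of a proportional-odds score model, giving P_i for each level."""
    effect = e0 + emax * conc / (ec50 + conc)          # EMAX function of concentration
    cutpoints = np.array([-1.0, 0.5, 2.0])             # hypothetical cutpoints
    cum = 1.0 / (1.0 + np.exp(-(cutpoints - effect)))  # P(score <= i), i = 0, 1, 2
    cum = np.concatenate([cum, [1.0]])                 # P(score <= 3) = 1
    return np.diff(cum, prepend=0.0)                   # per-level probabilities P_i

def expected_score(conc):
    """ESC = SUM(SCORE_i * P_i): the continuous summary fitted to the DV."""
    return float(np.sum(levels * level_probs(conc)))

# Predictive-check-style comparison: simulate scores from the model and
# compare the simulated frequency of the highest score with its model
# probability (with real data, one would compare against the observed DV).
p = level_probs(10.0)
sim = rng.choice(levels, size=10_000, p=p)
print("ESC at conc=10:", round(expected_score(10.0), 3))
print("P(highest score):", round(float(p[3]), 3),
      "simulated frequency:", round(float(np.mean(sim == 3)), 3))
```

Because the per-level probabilities sum to one by construction, ESC is a proper expectation and rises smoothly with concentration, which is what makes it usable as the dependent variable of a continuous (e.g., EMAX) model.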
Based on this example, it would be hard to recommend either approach over the other: each has its own advantages and problems.
Leonid