PopED and SSE comparison
Hi,
I have been working on a fairly complex differential-equation-based model, with
the objective of optimizing the study design, particularly the number of
subjects in an experiment. The PK model is a linear mixed-effects model
relating dose and AUC, and the PKPD model consists of a placebo component and a
drug-effect component, with the PK parameters fixed to the values from the PK
model (both models were developed in NONMEM). Design optimization is run using
PopED.
My interest lies particularly in the drug-effect parameters of the model (Emax
and EAUC50). I have log-transformed these parameters as part of the MU model (I
am using SAEM in NM 7.2), and I calculated NONMEM %RSEs for the untransformed
parameters as SE(log-transformed) * 100, which came to roughly 11% and 25% RSE.
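To be concrete, the conversion I am using can be sketched as follows (the SE values below are placeholders for illustration, not my actual NONMEM output):

```python
import math

# Placeholder SEs of the log-transformed Emax and EAUC50
# (illustrative only, not the actual NONMEM output).
se_log = {"Emax": 0.11, "EAUC50": 0.25}

for name, se in se_log.items():
    # Delta-method approximation: RSE(theta) ~= SE(log theta) * 100
    rse_approx = se * 100
    # Exact %CV if theta is lognormally distributed:
    # CV = sqrt(exp(SE^2) - 1)
    cv_lognormal = math.sqrt(math.exp(se**2) - 1) * 100
    print(f"{name}: approx {rse_approx:.1f}% RSE, "
          f"lognormal CV {cv_lognormal:.1f}%")
```

For SEs of this size the two quantities are nearly identical, so I do not think the approximation itself explains the discrepancy.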
When the same design was set up in PopED and evaluated using FO with the
reduced-FIM option, the %RSEs from the expected FIM (calculated as
sqrt(expected parameter variance) * 100) were considerably larger for the
drug-effect parameters, particularly EAUC50 (~18% and ~75%).
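The PopED side of the comparison boils down to the following (using a made-up 2x2 FIM for the two log-transformed drug-effect parameters, not the actual matrix PopED produced):

```python
import numpy as np

# Toy expected FIM for the two log-transformed drug-effect
# parameters (made-up numbers, not the actual PopED output).
fim = np.array([[100.0, 10.0],
                [ 10.0,  5.0]])

# Expected parameter variances are the diagonal of the inverse FIM.
cov = np.linalg.inv(fim)
se_log = np.sqrt(np.diag(cov))

# Because the parameters are log-transformed, SE(log theta) * 100
# is directly the approximate %RSE of the untransformed parameter.
rse = se_log * 100
print(rse)  # one value per parameter
```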
When I ran an SSE with the original design (N = 200, given the complexity of
the model and its long run times, despite using parallelization), the %RSEs
(calculated as sd * 100 / sqrt(200) from sse_results.csv) showed much smaller
imprecision than what NONMEM provided (< 2% RSE). I also evaluated the
precision of other designs using PopED and ran the SSE for a few of them, with
a similar observation: the PopED precisions were much larger than those from
the SSE runs.
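For reference, the SSE-side calculation I am doing looks like this (the replicate estimates below are made up and truncated; the real values come from sse_results.csv, whose column names depend on the PsN version). I have also noted in the comments the distinction between sd/mean and sd/sqrt(N), since I may be conflating the two:

```python
import statistics

# Made-up (and truncated) final estimates of EAUC50 from SSE
# replicates; real values would come from sse_results.csv.
estimates = [0.95, 1.08, 1.02, 0.88, 1.10, 0.97, 1.05, 0.99]

n = len(estimates)
mean = statistics.mean(estimates)
sd = statistics.stdev(estimates)

# Empirical %RSE of a single replicate's estimate:
rse_empirical = sd / mean * 100

# sd / sqrt(N) is instead the Monte-Carlo standard error of the
# *mean* across replicates; it shrinks as N grows.
mc_se_of_mean = sd / n**0.5

print(f"empirical %RSE = {rse_empirical:.2f}%")
```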
I have the following questions:
1. Am I missing something in the calculation of %RSE for log-transformed
parameters that would explain such discrepant results across the three
approaches? Is there a better way to compare results across approaches when the
parameters are log-transformed (e.g., using CIs of the log-transformed
parameters)?
2. Does the estimation method (SAEM in NONMEM vs. FO/FOCE in PopED) play a
role in such differences?
3. Should SSE be considered the gold standard? How should I interpret the
results if I see bias in the model parameters from the SSE?
4. As you are aware, we can fix some of the parameters in PopED and do an
evaluation. To compare such results with an SSE, should I fix the same
parameters in the SSE that were fixed in PopED?
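Regarding the CI idea in question 1, what I had in mind is something like the following (all numbers made up): build the interval on the log scale, where the SE lives, and back-transform, so that NONMEM, PopED, and SSE results can be compared on a common untransformed scale.

```python
import math

# Made-up point estimate and SE for, say, EAUC50 on the log scale.
log_theta = math.log(2.5)
se_log = 0.25

# 95% CI built on the log scale and back-transformed.
lo = math.exp(log_theta - 1.96 * se_log)
hi = math.exp(log_theta + 1.96 * se_log)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```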
I would like to hear your thoughts on the best way to identify a future design
in such a situation. I appreciate your timely help!
Thanks,
Pavan.