Re: PopDesign: PopED and SSE comparison
Hi Pavan
I, too, am confused by how you compute the relative standard errors (RSE) of
your parameters and don't understand your logic in computing the RSE of the
back-transformed value. Can you explain how you get to these approximations?
My initial suggestion would be to use the same model in estimation and in
your FIM calculation (PopED), meaning you should log-transform the parameters
in the PopED model as well. Then you can use the same calculation for both the
SSE and the FIM: RSE = sd/param * 100%.
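To make "the same calculation" concrete, below is a minimal sketch (in
Python rather than in PopED itself, with a made-up FIM and simulated
stand-ins for the sse_results.csv estimates) of computing both RSEs the
same way on the log scale; the key point is that for a log-transformed
parameter, the SE on the log scale is already, approximately, the relative
SE of the back-transformed parameter:

    import numpy as np

    # Hypothetical expected FIM for (log Emax, log EAUC50); the numbers
    # are made up for illustration, not from any real PopED run.
    fim = np.array([[85.0, -12.0],
                    [-12.0,  6.5]])

    # RSE from the FIM: sqrt of the diagonal of the inverse FIM is the
    # expected SE on the log scale, which for a log-transformed
    # parameter is (approximately) the relative SE -- multiply by 100.
    rse_fim = np.sqrt(np.diag(np.linalg.inv(fim))) * 100

    # RSE from SSE: the SD of the estimates across replicates IS the
    # empirical SE of a single estimate -- do not divide by sqrt(N).
    # 'estimates' stands in for the parameter columns of sse_results.csv.
    rng = np.random.default_rng(1)
    estimates = rng.lognormal(np.log([100.0, 50.0]), 0.1, size=(200, 2))
    rse_sse = np.std(np.log(estimates), axis=0, ddof=1) * 100

    print("RSE from FIM (%):", rse_fim)
    print("RSE from SSE (%):", rse_sse)

Both numbers are then %RSEs of the back-transformed parameters and can be
compared directly.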
As for your other questions:
2. The estimation method can play a role, but France Mentré and colleagues
have shown for a number of examples that the reduced FIM does well in
predicting SAEM results.
3. I would consider SSE the gold standard for predicting RSEs (but it is
clearly much slower when evaluating many designs). The fact that you can take
bias into account is a bonus, enabling you to better evaluate your design.
4. If you FIX a parameter in PopED you are assuming that you will FIX that
parameter in estimation as well, so if you want to compare RSEs between an SSE
and an FIM calculation then you should have the parameter fixed in both
settings. However, fixing a parameter in an optimal design calculation can
also be used to force a design to focus on other parameters in your model; for
predictions of RSE values one should always match how the estimation model
will do things (a small sketch of the effect follows below).
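As a sketch of why point 4 matters (the 3x3 FIM below is made up, not from
your model): fixing a parameter amounts to deleting its row and column from
the FIM before inversion, which typically tightens the SEs of the remaining
parameters, so an SSE that still estimates that parameter will not be
comparable:

    import numpy as np

    # Made-up 3x3 expected FIM; the third parameter is the one we fix.
    fim = np.array([[85.0, -12.0,  4.0],
                    [-12.0,  6.5, -1.5],
                    [  4.0,  -1.5,  2.0]])

    # All parameters estimated: invert the full FIM.
    se_all = np.sqrt(np.diag(np.linalg.inv(fim)))

    # Third parameter fixed: delete its row and column BEFORE inverting,
    # mirroring an estimation run in which that parameter is fixed.
    keep = np.ix_([0, 1], [0, 1])
    se_fixed = np.sqrt(np.diag(np.linalg.inv(fim[keep])))

    print("SEs, all parameters estimated:", se_all)
    print("SEs, third parameter fixed:  ", se_fixed)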
Best regards
Andy
Andrew Hooker, Ph.D.
Associate Professor of Pharmacometrics
Dept. of Pharmaceutical Biosciences
Uppsala University
Box 591, 751 24, Uppsala, Sweden
Phone: +46 18 471 4355
Mobile: +46 768 000 725
www.farmbio.uu.se/research/researchgroups/pharmacometrics/
On Oct 8, 2013, at 18:54, pavan kumar <[email protected]> wrote:
> Hi Leonid,
>
> The model parameters were log transformed and I was calculating the SE of
> the back-transformed parameter. That should be ~ SE(log-transformed
> parameter)*100, since
> SE(log-transformed parameter) ~ SE(back-transformed parameter) /
> Popmean(back-transformed parameter).
> In the case of SSE, that is equivalent to sd*100/sqrt(N).
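> In delta-method terms, that approximation reads (with \theta the
> log-transformed parameter and P = e^{\theta} the back-transformed one):
>
>     \mathrm{SE}(P) \approx \left|\frac{dP}{d\theta}\right| \mathrm{SE}(\theta)
>       = e^{\theta}\,\mathrm{SE}(\theta)
>     \quad\Longrightarrow\quad
>     \mathrm{SE}(\theta) \approx \frac{\mathrm{SE}(P)}{P}
>
> so SE(\theta)*100 is (approximately) the %RSE of P.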
>
> Thanks,
> Pavan.
>
>
> From: Leonid Gibiansky <[email protected]>
> To: pavan kumar <[email protected]>
> Cc: "[email protected]" <[email protected]>; nmusers
> <[email protected]>
> Sent: Tuesday, 8 October 2013 11:48 AM
> Subject: Re: [NMusers] PopED and SSE comparison
>
> Was this a typo:
> "When I ran an SSE, the %RSE (calculated as the sd *100/sqrt(200)" ?
>
> I think this should be divided by the mean of the parameter values to
> get %RSE.
>
> Same for
> "the precision of the parameters from the expected FIM (calculated as
> sqrt(expected parameter variances) * 100)",
> should it be divided by the parameter value?
>
> Leonid
>
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
> On 10/8/2013 10:50 AM, pavan kumar wrote:
> > Hi,
> > I have been working on a fairly complex differential-equation-based
> > model with the objective of optimizing the study design, particularly the
> > number of subjects in an experiment. The PK model is a linear mixed-effects
> > (lme) model between dose and AUC, and the PKPD model consists of a placebo
> > component and a drug effect component, with the PK parameters fixed to the
> > estimates from the PK model (both developed in NONMEM). Design optimization
> > is run using PopED.
> > My interest lies particularly in the drug effect parameters of the model
> > (Emax and EAUC50). I have log-transformed the parameters as part of the MU
> > model (I am using SAEM in NM7.2), and I calculated the NONMEM %RSEs for
> > the untransformed parameters as SE(log_transformed)*100, which were
> > around 11 and 25 %RSE.
> > When the same design was set up in PopED and evaluated using FO and the
> > reduced FIM option, the %RSEs of the parameters from the expected FIM
> > (calculated as sqrt(expected parameter variances) * 100) were
> > overpredicted for the drug effect parameters, particularly EAUC50 (~18%
> > and 75%).
> > When I ran an SSE with the original design (N=200, given the complexity
> > of the model and the long run times associated with it, in spite of using
> > parallelization), the %RSEs (calculated as sd*100/sqrt(200) from
> > sse_results.csv) showed much smaller imprecision than what NONMEM
> > provided (< 2 %RSE). I evaluated the precision of other designs using
> > PopED and for a few of those designs ran an SSE as well, with a similar
> > observation: the PopED %RSEs were much larger than those from the SSE
> > runs.
> > I have the following questions:
> > 1. Am I missing something in the calculation of %RSE involving
> > log-transformed parameters, such that I am seeing such odd results from
> > the three approaches? Is there a better way to compare these results
> > across the approaches in such a case of log-transformed parameters (e.g.,
> > using CIs of the log-transformed parameters)?
> > 2. Does the estimation method (SAEM in NONMEM vs FO/FOCE in PopED) play
> > a role in such differences?
> > 3. Should SSE be considered the gold standard? How should I interpret
> > the results if I see bias in the model parameters from the SSE?
> > 4. As you are aware, we can fix some of the parameters in PopED and do
> > an evaluation. To compare such results with SSE, should I fix the same
> > parameters that were fixed in PopED and run an SSE?
> > I would like to hear your thoughts on the best way to identify a future
> > design in such a situation. I appreciate your timely help!
> > Thanks,
> > Pavan.