Dear NMusers,
I have two questions regarding the statistical model when performing
external validation. I have a dataset and would like to validate a
published model with POSTHOC method i.e. $EST METHOD=0 POSTHOC MAXEVAL=0.
1. The model adds etas in a proportional way, i.e. Para = THETA * (1+ETA),
and this made the posthoc estimation fail due to negative individual
parameter estimates in some subjects. I constrained the parameter to be
positive by adding the ABS function, i.e. Para = THETA * ABS(1+ETA), and the
estimation then runs successfully. I was wondering if there is a better
workaround?
2. The OMEGA value influences individual ETAs in POSTHOC estimation. Should
we assign $SIGMA the model value, or the assay error value from the lab where
the external data were determined? If we use the model value, it is
understandable that $SIGMA contains unexplained variability and is thus part
of the model. However, I could also read it as the model value containing the
unexplained variability of the original data (on which the model was built)
but not of the external data. I am a little confused about this. Can someone
help me out?
I would appreciate any response! Many thanks in advance!
Yours sincerely,
Tingjie Guo
ETAs & SIGMA in external validation (10 messages, 5 people; latest: Apr 13, 2018)
Dear Tingjie,
If I understand your description correctly, you would like to evaluate the
published model (and point estimates of population parameters) using GoF plots
(residual-error and eta-plots), rather than via simulation (e.g. VPC or PPC)?
At least for the latter it would be necessary to keep individual parameters
out of zero and negative space (for parameters which must be positive).
The solution you initially implemented will bias the parameter distribution
severely, since only values greater than or equal to the typical parameter
value are allowed.
For estimation you can add NOABORT to the $ESTIMATION line. Right after the
individual parameter has been assigned its value you can check that it is
positive:
PARA = TVPARA * (1+ETA(1))
IF (PARA.LE.0) EXIT 1 23
In simulation mode, you can instead draw a new eta in subjects that have a
negative parameter value.
I have written some code for you below, but please check for any typos :>)
Also, notice this is an example which avoids negative parameter values for a
single parameter, but you can implement the same solution with multiple
individual parameters in the DO WHILE block.
Also, before you go ahead and try to fix anything related to etas in
estimation: Check that the code and data you have put in place is reasonable.
The first subject that fails with a negative parameter value: Can you find
anything particular in your dataset for this individual?
For example, you may have included zero DV values in your data set, or you may
have coded a missing covariate as -99.
Finally, the residual error is usually much larger than the assay error,
inflated by e.g. adherence and errors in sample collection, imperfect model,
etc.
Many things may change from one study to the next. A well controlled study (or
in some cases a better assay) could result in lower residual error.
More commonly, changes in population or inclusion criteria may change IIV in
parameters (as well as typical or population values).
However, as a starting point for your external evaluation, it may be good to
assume that all population parameters are the same as in the published model,
both fixed and random effects.
Best wishes
Jakob
TVPARA = THETA(1)
PARA = TVPARA * (1+ETA(1))
;Sampling etas until the new subject has Para>0
IF(ICALL.EQ.4.AND.NEWIND.NE.2) THEN
DO WHILE (PARA.LE.0)
CALL SIMETA(ETA)
PARA = TVPARA * (1+ETA(1))
ENDDO
ENDIF
;Etas that do not need resampling should be assigned after the above DO WHILE block
[…]
; SIMETA requires an additional seed number, see nmhelp for more info
$SIMULATION (123456 NEW) (7891011 UNIFORM) ONLYSIMULATION […]
Jakob Ribbing, Ph.D.
Senior Consultant, Pharmetheus AB
Cell/Mobile: +46 (0)70 514 33 77
[email protected]
www.pharmetheus.com
Phone, Office: +46 (0)18 513 328
Uppsala Science Park, Dag Hammarskjölds väg 52B
SE-752 37 Uppsala, Sweden
It would be better to use
$EST METHOD=1 INTERACTION MAXEVAL=0
(at least if the original model was fit with INTERACTION option and residual error model is not additive).
One option is to use Para = THETA * EXP(ETA)
You would be changing the model, but the model is not too good anyway if you need to restrict Para > 0 artificially.
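As a quick numerical sketch of why the EXP(ETA) form cannot go negative while the (1+ETA) form can (all numbers hypothetical, chosen only to make the effect visible):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 10.0                        # typical parameter value (hypothetical)
eta = rng.normal(0.0, 0.6, 10_000)  # deliberately large IIV so the problem shows

prop = theta * (1 + eta)            # proportional IIV: goes negative when ETA < -1
logn = theta * np.exp(eta)          # log-normal IIV: strictly positive for any ETA

print((prop <= 0).any())            # True: some simulated subjects go negative
print((logn <= 0).any())            # False
```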
SIGMA should be taken from the model.
Leonid
Quoted reply history
Hi Tingjie,
It does not: Sigma squared is the sum of all error variances, and assay error
in most cases is only a small contribution to this sum.
There are exceptions, but when applying a previous model to new data it is
rarely the first modification that comes to my mind.
Given your objectives with the model, maybe the best evaluation would be to
obtain individual parameters based on subject’s first visit (IGNORE later time
points in $DATA), and then see how well these etas predict the DV at subsequent
visits?
Splitting by visit is only an example, obviously: if your project is in
anesthesia it may be too late for dose adjustment after the visit is over, and
in other cases observations over several days or weeks may be more relevant
for predicting even later points in time.
Best regards
Jakob
Quoted reply history
> On 6 Apr 2018, at 21:40, Tingjie Guo <[email protected]> wrote:
>
> Small correction in question 2: SIGMA (instead of OMEGA) value influences
> individual ETAs...
>
>
> @Leonid, @Jakob, Thank you both for your input.
>
> @Jakob, You are right, I'm interested in individual ETAs. The idea is to
> evaluate the predictive ability of the model in particular subjects (external
> data) in order to guide clinical care for these subjects. Does this purpose
> alter your opinion on SIGMA choice?
>
>
> Yours sincerely,
> Tingjie Guo
Hi Tingjie,
A lot of great tips and explanations already. Just wanted to add my two cents.
POSTHOC will estimate the most likely ETA for each individual, taking into
account the known population parameters THETA and OMEGA. “Most likely” means:
1. An individual parameter as close as possible to the typical value (i.e.
ETA=0).
The likelihood of an ETA is evaluated through the probability density function
of a normal distribution with mean 0 and variance OMEGA.
2. A model prediction as close as possible to the observed values.
The likelihood of a model prediction is evaluated through the probability
density function of the residual error model.
We find the most likely ETA by maximum likelihood estimation (I do not know
the exact algorithm NONMEM uses, but I use Nelder-Mead in my own software and
it produces the same results as NONMEM).
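The two-part objective above can be sketched numerically. The toy Python below (hypothetical one-parameter model and made-up values; a brute-force search stands in for Nelder-Mead) shows the EBE being pulled between ETA = 0 and the value that best fits the data:

```python
import numpy as np

# Toy POSTHOC/MAP sketch: one subject, one ETA, model y = THETA*exp(ETA)
# with proportional residual error. All values are hypothetical.
theta, omega2, sigma2 = 5.0, 0.09, 0.04    # population estimates (assumed known)
y = np.array([5.8, 6.1, 5.9])              # this subject's observations (made up)

def neg2ll(eta):
    pred = theta * np.exp(eta)
    # part 2: how well the prediction matches the observations
    data_fit = np.sum((y - pred) ** 2 / (sigma2 * pred ** 2)
                      + np.log(sigma2 * pred ** 2))
    # part 1: how plausible this ETA is under N(0, OMEGA)
    prior = eta ** 2 / omega2
    return data_fit + prior

grid = np.linspace(-2, 2, 4001)            # brute force instead of Nelder-Mead
eta_hat = grid[np.argmin([neg2ll(e) for e in grid])]
# eta_hat lies between 0 (pure shrinkage) and ~0.17 (the pure data fit)
print(round(float(eta_hat), 3))
```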
You have two questions:
1. Can I constrain the ETA search space so only realistic ETAs are found?
You can, but that would change your original model, and would require
re-estimating THETA and OMEGA. For some parameters (e.g. disease progression,
or LOG(BASELINE) ), an absolute inter-individual variability on a parameter may
make sense.
You may want to re-evaluate (as suggested previously) whether this is valid
for all parameters; in other words, whether the parameter IIV truly follows a
symmetric normal distribution.
In any case, posthoc estimations are linked to the original model. If
close-to-zero parameter values are unlikely to appear in the training dataset,
then OMEGA should be small, and therefore negative values of a parameter will
probably not be estimated anyway (part 1 of our maximum likelihood estimation
explained above). And if the model does not make any sense with negative
parameter values, the model predictions will be very far off from the observed
values as well (part 2 of our maximum likelihood estimation).
I suggest you re-evaluate the ETA distributions of your original model, and
consider using a lognormal IIV instead.
You could also explore graphically the input data for subjects with negative
ETA values. Possibly the observed input data can only be explained through
negative parameter values?
@Jakob: Could you explain how “The solution you initially implemented will bias
the parameter distribution severely, since only values greater than or equal to
the typical parameter value is allowed.” ? In case of an IIV of e.g. 20% CV,
(1+ETA) would require 5 standard deviations on ETA before it becomes negative.
2. Which error model should I use? Should I only use the assay error?
Residual error comes from many sources. Assay error is only one of these.
Others include model misspecification, dosing errors, true dose deviations
(e.g. use of generics, or inaccuracies in preparing an infusion), bad recording
of sample times, etc. Unless there is a good reason to assume your new data was
not subjected to the same errors as the training dataset, you should keep the
same residual error model.
I myself am still struggling with this question:
“Should we again sample residual error when we simulate from EBE estimates? Or
should we estimate individual parameter uncertainty from the OFIM and use only
that?”
Best regards,
Ruben Faelens
Scientist at SGS Exprimo
PhD Student at KULeuven
Quoted reply history
From: [email protected] [mailto:[email protected]] On
Behalf Of Tingjie Guo
Sent: vrijdag 6 april 2018 18:32
To: [email protected]
Subject: [NMusers] ETAs & SIGMA in external validation
Dear NMusers,
I have two questions regarding the statistical model when performing external
validation. I have a dataset and would like to validate a published model with
POSTHOC method i.e. $EST METHOD=0 POSTHOC MAXEVAL=0.
1. The model added etas in proportional way, i.e. Para = THETA * (1+ETA) and
this made the posthoc estimation fail due to the negative individual parameter
estimate in some subjects. I constrained it to be positive by adding ABS
function i.e. Para = THETA * ABS(1+ETA), and the estimation can be successfully
running. I was wondering if there is better workaround?
2. OMEGA value influences individual ETAs in POSTHOC estimation. Should we
assign $SIGMA with model value or lab (where external data was determined)
assay error value? If we use model value, it's understandable that $SIGMA
contains unexplained variability and thus it is a part of the model. However, I
may also understand it as that model value contains the unexplained variability
for original data (in which the model was created) but not for external data.
I'm a little confused about it. Can someone help me out?
I would appreciate any response! Many thanks in advance!
Your sincerely,
Tingjie Guo
Information in this email and any attachments is confidential and intended
solely for the use of the individual(s) to whom it is addressed or otherwise
directed. Please note that any views or opinions presented in this email are
solely those of the author and do not necessarily represent those of the
Company. Finally, the recipient should check this email and any attachments for
the presence of viruses. The Company accepts no liability for any damage caused
by any virus transmitted by this email. All SGS services are rendered in
accordance with the applicable SGS conditions of service available on request
and accessible at http://www.sgs.com/en/Terms-and-Conditions.aspx
Hi Ruben,
A quick response to your comment about simulations and residual error:
“Should we again sample residual error when we simulate from EBE estimates?
Or should we estimate individual parameter uncertainty from the OFIM and
use only that?”
This depends on what question you want your simulation to answer.
If your question is “what is my best guess about the underlying physical
processes of a model” then you should simulate without residual error.
On the other hand, if your question is “what kind of observations am I likely
to see if I performed an experiment” then you should include residual error.
I would think any time you want to simulate randomly selected individuals you
should use individual parameter uncertainty.
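The distinction above can be sketched as follows (a hypothetical one-compartment bolus model, all numbers made up):

```python
import numpy as np

# "Best guess about the underlying process": simulate WITHOUT residual error.
# "What observations would I likely see":  add residual error on top.
rng = np.random.default_rng(3)
t = np.linspace(0.5, 12.0, 8)              # sampling times (hypothetical)
dose, cl_i, v_i = 100.0, 4.0, 30.0         # dose and EBE-based parameters (made up)

ipred = (dose / v_i) * np.exp(-(cl_i / v_i) * t)      # the process itself

sigma = 0.15                               # proportional residual SD (made up)
yobs = ipred * (1 + sigma * rng.normal(size=t.size))  # one plausible experiment

print(np.all(np.diff(ipred) < 0))          # True: noiseless profile declines smoothly
print(yobs)                                # noisy profile need not be monotone
```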
Warm regards,
Douglas Eleveld
Quoted reply history
From: [email protected] [mailto:[email protected]] On
Behalf Of Faelens, Ruben (Belgium)
Sent: maandag 9 april 2018 23:52
To: Tingjie Guo; [email protected]
Subject: RE: [NMusers] ETAs & SIGMA in external validation
Hi Tingjie,
A lot of great tips and explanations already. Just wanted to add my two cents.
POSTHOC will estimate the most likely ETA for each individual, taking into
account the known population parameters THETA and OMEGA. “Most likely” means:
1. An individual parameter as close as possible to the typical value (i.e.
ETA=0).
The likelihood of an ETA is evaluated through the probability density function
of a normal distribution with mean 0 and variance OMEGA.
2. A model prediction as close as possible to the observed values.
The likelihood of a model prediction is evaluated through the probability
density function of the residual error model.
We find the most likely ETA by using maximum likelihood estimation (I do not
know the exact algorithm, but I use Nelder-Mead in my own software and that
produces the same results as nonmem).
You have two questions:
1. Can I constrict the ETA search space so only realistic ETA’s are found?
You can, but that would change your original model, and would require
re-estimating THETA and OMEGA. For some parameters (e.g. disease progression,
or LOG(BASELINE) ), an absolute inter-individual variability on a parameter may
make sense.
You may want to re-evaluate (as suggested previously) whether this is valid for
all parameters. In other words: whether the parameter IIV is truly symmetric
normal distributed.
In any case, posthoc estimations are linked to the original model. If
close-to-zero parameter values are unlikely to appear in the training dataset,
then OMEGA should be small, and therefore negative values of a parameter will
probably not be estimated anyway (part 1 of our maximum likelihood estimation
explained above). And if the model does not make any sense with negative
parameter values, the model predictions will be very far off from the observed
values as well (part 2 of our maximum likelihood estimation).
I suggest you re-evaluate the ETA distributions of your original model, and
consider using a lognormal IIV instead.
You could also explore graphically the input data for subjects with negative
ETA values. Possibly the observed input data can only be explained through
negative parameter values?
@Jakob: Could you explain how “The solution you initially implemented will bias
the parameter distribution severely, since only values greater than or equal to
the typical parameter value is allowed.” ? In case of an IIV of e.g. 20% CV,
(1+ETA) would require 5 standard deviations on ETA before it becomes negative.
1. Which error model should I use? Should I only use the assay error?
Residual error comes from many sources. Assay error is only one of these.
Others include model misspecification, dosing errors, true dose deviations
(e.g. use of generics, or inaccuracies in preparing an infusion), bad recording
of sample times, etc. Unless there is a good reason to assume your new data was
not subjected to the same errors as the training dataset, you should keep the
same residual error model.
I myself am still struggling with this question:
“Should we again sample residual error when we simulate from EBE estimates? Or
should we estimate individual parameter uncertainty from the OFIM and use only
that?”
Best regards,
Ruben Faelens
Scientist at SGS Exprimo
PhD Student at KULeuven
From: [email protected]<mailto:[email protected]>
[mailto:[email protected]] On Behalf Of Tingjie Guo
Sent: vrijdag 6 april 2018 18:32
To: [email protected]<mailto:[email protected]>
Subject: [NMusers] ETAs & SIGMA in external validation
Dear NMusers,
I have two questions regarding the statistical model when performing external
validation. I have a dataset and would like to validate a published model with
POSTHOC method i.e. $EST METHOD=0 POSTHOC MAXEVAL=0.
1. The model added etas in proportional way, i.e. Para = THETA * (1+ETA) and
this made the posthoc estimation fail due to the negative individual parameter
estimate in some subjects. I constrained it to be positive by adding ABS
function i.e. Para = THETA * ABS(1+ETA), and the estimation can be successfully
running. I was wondering if there is better workaround?
2. OMEGA value influences individual ETAs in POSTHOC estimation. Should we
assign $SIGMA with model value or lab (where external data was determined)
assay error value? If we use model value, it's understandable that $SIGMA
contains unexplained variability and thus it is a part of the model. However, I
may also understand it as that model value contains the unexplained variability
for original data (in which the model was created) but not for external data.
I'm a little confused about it. Can someone help me out?
I would appreciate any response! Many thanks in advance!
Your sincerely,
Tingjie Guo
Information in this email and any attachments is confidential and intended
solely for the use of the individual(s) to whom it is addressed or otherwise
directed. Please note that any views or opinions presented in this email are
solely those of the author and do not necessarily represent those of the
Company. Finally, the recipient should check this email and any attachments for
the presence of viruses. The Company accepts no liability for any damage caused
by any virus transmitted by this email. All SGS services are rendered in
accordance with the applicable SGS conditions of service available on request
and accessible at http://www.sgs.com/en/Terms-and-Conditions.aspx
________________________________
Hi Ruben,
I think I misread Tingjie's original posting as taking ABS(ETA), whereas his
initial attempt was actually ABS(1+ETA), which is less problematic.
The latter would not bias simulations much if IIV is e.g. 30% CV, agreed.
However, as Tingjie is mainly interested in estimation, I believe that
without the ABS correction no subject will have an EBE at ETA <= -1 for a
parameter that cannot be <= 0.
Unless possibly in a subject which is a) uninformative on that parameter and b)
where the eta is also part of an omega-block - a scenario which seems unlikely
to me, but may occur in theory.
With the ABS correction, ETA=-1.2 would give the same solution (parameter
value) as ETA=-0.8, but at a higher OFV for that subject.
It seems to me, if negative parameter values are only a problem in the eta
search for the EBE, whereas the EBE for individual parameters are always
positive, then it should be more straightforward to use FOCE, with the addition
e.g.:
IF(PARA.LT.0.001) PARA=0.001
Probably, no subject will have such a low individual parameter value, when
looking into the table output?
If there are any such subjects I would look for errors in the data set and
nonmem code (as outlined in my initial reply).
The above concerns estimation.
In simulation (unless the %CV is low), we may get a fraction of subjects with
PARA=0.001, which may be an unreasonably low parameter value.
Whether that is acceptable or not depends on the objectives and in this case
there was no need for simulations even for model evaluation (?), so I will not
elaborate further.
Cheers
Jakob
@Ruben @Jakob Very worthwhile discussion! I would like to raise an extended
question: if the model contains a covariate whose values in the external data
make parameters negative, what would be the optimal solution for this?
@Ruben Out of curiosity, why did you use the Nelder-Mead method instead of
others in your software? And what do you mean by OFIM?
Kind regards,
Tingjie Guo
Quoted reply history
Hi Tingjie,
Assuming (zero and) negative parameter values are not allowed, you could
change from e.g. a linear covariate model to a power model that is as close as
possible to the linear model over the range of covariate values in the
original publication.
If the publication lists e.g. the median, mean and 95% CI of the covariate
values (maybe this is hoping for too much?), then you can generate e.g. a
normal or log-normal distribution of covariate values that reflects these
statistics as closely as possible.
Then you can optimize the power model to resemble the linear model as closely
as possible on these covariate-parameter data.
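A sketch of this matching procedure (all summary statistics hypothetical; least squares on the log scale stands in for whatever optimizer you prefer):

```python
import numpy as np

# Replace a published linear covariate model, which can go negative for
# extreme covariate values, by a power model matched over plausible covariates.
rng = np.random.default_rng(7)
med = 70.0                                    # published median covariate (made up)
cov = rng.lognormal(np.log(med), 0.2, 5000)   # reconstructed covariate distribution

tv, slope = 10.0, 0.02                        # published linear model (made up)
p_lin = tv * (1 + slope * (cov - med))        # linear: P <= 0 if cov is low enough

# Fit power model P = tv2 * (cov/med)**pw by least squares on the log scale
pw, log_tv2 = np.polyfit(np.log(cov / med), np.log(p_lin), 1)
tv2 = np.exp(log_tv2)
p_pow = tv2 * (cov / med) ** pw               # strictly positive for any cov > 0

print(round(float(tv2), 2), round(float(pw), 2))
```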
Best wishes
Jakob
Hi Tingjie,
I used Nelder-Mead because it is the default method in R optim(). No other
reasoning.
With regards to the OFIM: the Hessian of the negative log-likelihood at the
optimum ETA is the Observed Fisher Information Matrix; its inverse is an
estimate of the variance (squared standard error) of the ETA estimate.
If you will forgive me the childish language, this can be explained
intuitively: the second derivative describes how 'pointy' the OFV is. It shows
how much the objective function changes when you 'jiggle' around the ETA
parameters.
A very pointy OFV means a large change in OFV for different estimates, and
therefore high certainty and a low standard error.
An almost flat OFV means different estimates give similar OFV (are equally
likely), and therefore low certainty and a high standard error.
Subjects with no information will have ETA = 0 as the maximum likelihood
estimate (shrinkage), but the uncertainty will be equal to the population IIV.
I forget the exact formulas, though; you can find them in the literature on
D-optimality.
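For the limiting case of a subject with no observations, the OFIM idea can be checked in a few lines: the curvature of -2*logL then comes from the prior alone, and the implied variance recovers the population OMEGA. A sketch with a hypothetical OMEGA value:

```python
# Curvature of -2*log-likelihood at the ETA optimum -> uncertainty of the EBE.
# With no observations, only the prior ETA**2/OMEGA contributes, so the
# implied variance should equal OMEGA itself (shrinkage: ETA=0, SE**2=IIV).
omega2 = 0.09                        # population IIV variance (hypothetical)

def neg2ll(eta):
    return eta ** 2 / omega2         # -2*logL, prior term only (no data)

h = 1e-4                             # central finite-difference step
curv = (neg2ll(h) - 2.0 * neg2ll(0.0) + neg2ll(-h)) / h ** 2
var_eta = 2.0 / curv                 # info = curv/2 for -2*logL; var = 1/info
print(round(var_eta, 6))             # 0.09, i.e. exactly the population OMEGA
```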
In my view, taking the uncertainty of posthoc estimates into account is an
elegant solution for sparse profiles, but I have rarely seen it applied in
practice. I am also not entirely certain whether the asymptotic behaviour of
the OFIM-based standard errors holds for ETA estimates, especially in the case
of sparse sampling. That is why I asked the list for feedback.
Anyway, the above is largely of academic interest. Good luck with your
project!
Please excuse my brevity, this was sent from a mobile device
Quoted reply history