From:"Bachman, William"
Subject:RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 08:50:21 -0500
A couple of possibilities:
1. interindividual variability in V2 may actually be large
2. you might not have the information in your data to determine the interindividual variability
in V2 (interindividual variability in the peripheral parameters is often poorly
determined; you may want to omit the ETAs on V2 and Q)
3. with dense data and high variability, use of FOCE method is recommended (METHOD=1)
4. also, use INTERACTION with METHOD=1 if possible
5. unless you have observations in the peripheral compartment, you don't need S3=V3 (but leaving
it in won't affect your model).
William J. Bachman, Ph.D.
GloboMax LLC
7250 Parkway Drive, Suite 430
Hanover, MD 21076
410-782-2212
bachmanw@globomax.com
From:"Farrell, Colm"
Subject:RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 14:10:22 -0000
The control stream shows an additive error structure for IIV on V2, whereas the
calculation of the associated CV assumes a proportional/exponential error structure.
Colm Farrell
GloboMax LLC
From:"Kowalski, Ken"
Subject:RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 09:12:46 -0500
Luciane,
FO often estimates large CVs with dense data. You should try FOCE, and since you are using
a constant-CV residual error model, you should also use the interaction option
(i.e., use METHOD=1 INTERACTION on the $EST statement).
Good luck.
Ken
From:"Howard Lee"
Subject: RE: [NMusers] large CV
Date: Wed, 5 Mar 2003 09:23:15 -0500
Dear Luciane,
You modeled 'additive' interindividual variability for V2 as follows:
TV2=THETA(2)
V2=TV2+ETA(2)
Therefore, SQRT(OMEGA(V2)) is an SD, and your CV (%) is [SD/THETA(2)]*100 = [SQRT(OMEGA(V2))/THETA(2)]*100,
which I think will give you a smaller CV for V2.
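As a sketch of the arithmetic (the numbers below are made up for illustration, not from the original post):

```python
import math

# Hypothetical estimates for illustration only:
theta2 = 50.0    # THETA(2), the typical value of V2
omega22 = 100.0  # OMEGA estimate for ETA(2) under the additive model

# Additive model, V2 = TV2 + ETA(2):
# SQRT(OMEGA) is an SD in the units of V2, so CV% = [SD/THETA(2)]*100.
cv_additive = math.sqrt(omega22) / theta2 * 100

# Exponential model, V2 = TV2*EXP(ETA(2)):
# OMEGA is unitless, and the common approximation is CV% = SQRT(OMEGA)*100.
omega22_exp = 0.09  # hypothetical OMEGA on the exponential scale
cv_exponential = math.sqrt(omega22_exp) * 100

print(f"additive CV%:    {cv_additive:.1f}")    # 20.0 for these numbers
print(f"exponential CV%: {cv_exponential:.1f}") # 30.0 for these numbers
```

The point is only that the two parameterizations require different CV formulas; applying the exponential-scale formula to an additive OMEGA gives a number whose magnitude depends on the units of V2.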
Hope this helps.
Thank you.
Howard
Howard Lee, MD, PhD
Assistant Professor
Center for Drug Development Science
Department of Pharmacology, Georgetown University
School of Medicine, Box 571441
Washington, DC 20057-1441
USA
Tel: 202-687-8198
Fax: 202-687-0193
From:"Sam Liao"
Subject: RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 09:25:42 -0500
Dear Luciane:
Two more possibilities:
1) You don't have initial estimates in theta and omega for KA.
2) It may well have converged to a local minimum. What do the diagnostic plots look like?
In my experience, the initial estimates for a two-compartment model are very critical.
Best regards,
Sam Liao, Ph.D.
PharMax Research
PO Box 1809,
20 Second Street,
Jersey City, NJ 07302
phone: 201-7983202
efax: 1-720-2946783
From:Iñaki Fernández de Trocóniz
Subject:Re: [NMusers] large CV
Date: Wed, 05 Mar 2003 15:30:27 +0100
Dear Luciane,
V2 has been coded as TV2+ETA(2) instead of TV2*EXP(ETA(2)), so the expression you are
using to compute the CV is not appropriate; in your case the units of the variance
depend on the units of the parameter.
In addition, since you have non-sparse data, you might consider using the FOCE with
INTERACTION estimation method. The estimates of inter-subject variability may then be
more realistic.
Best regards,
Iñaki.
From:Chuanpu Hu
Subject: RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 09:44:25 -0500
Luciane,
You used a lognormal distribution for every parameter except V2, where you used an additive normal. In that case,
SQRT(OMEGA(V2))*100
depends on the unit of V2 and is not the CV of V2. You may want to rethink that model.
Chuanpu
--------------------------------------------------------------------------
Chuanpu Hu, Ph.D.
Research Modeling and Simulation
Clinical Pharmacology Discovery Medicine
GlaxoSmithKline
P.O. Box 13398
Five Moore Drive
Research Triangle Park, NC 27709
Tel: 919-483-8205
Fax: 919-483-6380
From:"Bachman, William"
Subject: RE: [NMusers] large CV
Date: Wed, 5 Mar 2003 10:37:02 -0500
Your covariance step simply aborted for the reason stated (see Manual V, p. 145). What would I do?
The first thing I would do is look at the omega estimates (the interindividual variance estimates);
if any are very small (approaching zero, e.g. E-05) or very large, I would remove them from the model
because they are poorly estimated.
William J. Bachman, Ph.D.
GloboMax LLC
7250 Parkway Drive, Suite 430
Hanover, MD 21076
410-782-2212
bachmanw@globomax.com
From:"Venkatesh Atul Bhattaram"
Subject: Re: [NMusers] large CV
Date:Wed, 5 Mar 2003 11:08:40 -0500
Hello Luciane,
Clearly some of the eta estimates are poor. You might want to fix them to zero, or to a previously
reported value if the subjects belong to the same population and not to any special population. I
would also explore plots of the different etas and try to reduce the "dimensionality" of the model.
When you plot the etas against each other and project onto each axis, most of the variability will
be explained by one of the two parameters, and you will also notice the scale differences. So it
looks like simplifying the model would be a good solution, either by fixing parameters or by
introducing correlations, for which you might need some explanation in terms of covariate information.
Venkatesh Atul Bhattaram
Post-doctoral Fellow
University of Florida
Gainesville-32610
From:"atul"
Subject:Re: [NMusers] large CV
Date: Wed, 5 Mar 2003 09:26:09 -0800
Hello Luciane,
Try FOCE. It will result in much better estimates. Also look at the correlations between
different estimates and include them in the model.
Venkatesh Atul Bhattaram
Post-doctoral Fellow
University of Florida
Gainesville-32610
From:"Alice I Nichols"
Subject: RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 10:34:26 -0800
Dear Lucianne,
Your model may be overspecified given the available data. I was interested in adding that
an essential step in this process is to check the standard error for this parameter
(eta2) to see if the estimate you obtain is meaningful. Please look over the SEs you get for
all your parameters. If you have a parameter with a SE that is very large and the
estimated parameter is having minimal impact on model you may need to drop this
parameter from your model.
Alice
Alice Nichols, PhD
Hawthorne Research and Consulting, INC
132 Hawthorne Rd
King of Prussia, PA 19406
PH:610-878-9112 / FX:610-878-9113
nichols@bellatlantic.net
From:VPIOTROV@PRDBE.jnj.com
Subject:RE: [NMusers] large CV
Date:Thu, 6 Mar 2003 11:20:59 +0100
When selecting ETAs to exclude, I would not recommend relying on omega estimates; that way you
could exclude a random effect that is essential. A better way (although not ideal either) is to run
a series of reduced models sequentially, with each ETA excluded one at a time, and then compare MOF
values. ETAs associated with an insignificant increase in MOF can be safely excluded.
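The comparison above is a likelihood-ratio-style test: removing one ETA drops one parameter, so the MOF increase is judged against a chi-square critical value with 1 degree of freedom. A minimal sketch, with hypothetical MOF values chosen purely for illustration:

```python
# Hypothetical MOF (minimum objective function) values for illustration:
mof_full = 1234.5     # full model, ETA included
mof_reduced = 1236.8  # reduced model, that ETA fixed to zero

# One fewer parameter -> compare the MOF increase to the chi-square
# critical value with df=1 (3.84 at alpha = 0.05).
delta_mof = mof_reduced - mof_full
CHI2_CRIT_1DF_P05 = 3.84

if delta_mof < CHI2_CRIT_1DF_P05:
    verdict = "exclude ETA (insignificant MOF increase)"
else:
    verdict = "keep ETA (significant MOF increase)"

print(f"delta MOF = {delta_mof:.2f}: {verdict}")
```

With these made-up numbers the MOF rises by only 2.3 points, below the 3.84 threshold, so this ETA would be a candidate for exclusion.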
Best regards,
Vladimir
_______________________________________________________