Probabilistic model

16 messages, 7 people. Latest: May 19, 2005

Probabilistic model

From: Leonid Gibiansky Date: May 13, 2005

From: "Leonid Gibiansky" leonidg@metrumrg.com
Subject: [NMusers] Probabilistic model
Date: Fri, May 13, 2005 12:54 pm

Dear All,

I recently worked with a PK/PD model with categorical 6-level score PD data. An example of the code is given below, to avoid a lengthy description of the model. The run time was about an hour, so I was able to experiment with the model. I used the following procedure:

1. Ran the model with some initial conditions.
2. Looked at the final estimates and updated the initial conditions according to the following rule: new initial parameter value = final estimate * exp(mean eta value). The mean eta value varied between 0 and 0.3 (exponential eta model) during the first 2-3 iterations and between 0 and 0.15 later on.
3. Ran the model with the new initial conditions, etc.

After 10 or so iterations, the objective function decreased by about 100 points (with some visible improvement of the fit), and the variances of the random effects decreased 2- to 3-fold. It looks like the OF surface is very shallow, with a lot of local minima that attract the solution.

My question is whether you have seen the same behavior in similar problems, and if yes: how can we improve convergence (modify the model?), and how can we make sure that the final result is valid? Should such fine-tuning after convergence be a necessary step of any similar model? Have you developed any rules/scripts to automate the process?

Thanks,
Leonid

;Model Desc: PK/PD model
$PROB RUN# 005M
$INPUT C=DROP ID TIME AMT DV EVID MDV WT
$DATA data.csv IGNORE=C
$SUBROUTINES ADVAN7 TRANS1
$MODEL NCOMPS=3
 COMP=COMP1 ;CENTRAL
 COMP=COMP2 ;PERIPH
 COMP=COMP3 ;EFFECT
$PK
;PK
K12=0.0526
K21=0.0241
V1=0.73*WT
K10=0.0151
;EFFECT COMPARTMENT
KE0=THETA(1)*EXP(ETA(1))
K13=0.001*K10
K31=KE0
V3=K13*V1/K31
;PD
SLOP=THETA(2)*EXP(ETA(2))
EC50=THETA(8)*EXP(ETA(4))
;Baseline odds
B0=THETA(3)*EXP(ETA(3))
B1=B0+THETA(4)
B2=B1+THETA(5)
B3=B2+THETA(6)
B4=B3+THETA(7)
$ERROR
;Drug effect
CE=A(3)/V3
EFF=SLOP*CE/(EC50+CE)
;LOGITS FOR Y<=0, Y<=1, Y<=2, Y<=3, Y<=4
C0=EXP(B0+EFF)
C1=EXP(B1+EFF)
C2=EXP(B2+EFF)
C3=EXP(B3+EFF)
C4=EXP(B4+EFF)
;CUMULATIVE PROBABILITIES
P0=C0/(1+C0)
P1=C1/(1+C1)
P2=C2/(1+C2)
P3=C3/(1+C3)
P4=C4/(1+C4)
;P(Y=M)
PR5=(1-P4)
PR4=(P4-P3)
PR3=(P3-P2)
PR2=(P2-P1)
PR1=(P1-P0)
PR0=P0
IF (DV.LT.0.5) Y=PR0
IF (DV.GE.0.5.AND.DV.LT.1.5) Y=PR1
IF (DV.GE.1.5.AND.DV.LT.2.5) Y=PR2
IF (DV.GE.2.5.AND.DV.LT.3.5) Y=PR3
IF (DV.GE.3.5.AND.DV.LT.4.5) Y=PR4
IF (DV.GE.4.5) Y=PR5
$THETA ...
$OMEGA .....
$EST MAXEVAL=9999 SIGDIG=4 METHOD=1 LIKE LAPLACE NUMERICAL NOABORT
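For readers less familiar with the model, here is a minimal Python sketch (not NONMEM) of two things from the post above: the cumulative-logit probability calculation in the $ERROR block, and the restart rule "new initial value = final estimate * exp(mean eta)". All numeric values are illustrative placeholders; the post does not give the actual THETA/OMEGA estimates.

```python
import math

def category_probs(b, eff):
    """Per-category probabilities for a 6-level score under the
    cumulative-logit model of the $ERROR block: P(Y<=k) = expit(Bk + EFF)."""
    # cumulative probabilities P(Y<=0) .. P(Y<=4), one per cutpoint logit
    p = [math.exp(bk + eff) / (1.0 + math.exp(bk + eff)) for bk in b]
    # per-category probabilities PR0..PR5, exactly as in the control stream
    return [p[0]] + [p[k] - p[k - 1] for k in range(1, 5)] + [1.0 - p[4]]

def restart_value(final_estimate, eta_mean):
    """Leonid's perturbed-restart rule for an exponential eta model:
    new initial value = final estimate * exp(mean eta)."""
    return final_estimate * math.exp(eta_mean)

# illustrative baseline logits B0 < B1 < ... < B4 (B0 around -15 per the thread)
b = [-15.0, -12.0, -9.0, -6.0, -3.0]
pr = category_probs(b, eff=8.0)
assert abs(sum(pr) - 1.0) < 1e-9    # PR0..PR5 sum to 1
assert all(x >= 0.0 for x in pr)    # valid only while the Bk stay ordered

new_init = restart_value(10.0, 0.3)  # perturbs by exp(0.3), about +35%
```

Note the ordering assumption: the per-category probabilities are non-negative only if the cutpoint logits B0..B4 are increasing, which the control stream enforces by adding THETA(4)..THETA(7) increments (assuming those THETAs are constrained positive).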

RE: Probabilistic model

From: Kenneth Kowalski Date: May 13, 2005

From: "Kowalski, Ken" Ken.Kowalski@pfizer.com
Subject: RE: [NMusers] Probabilistic model
Date: Fri, May 13, 2005 1:44 pm

Leonid,

I've never had much success in fitting ordered categorical models with more than a single eta on the baseline in the logit domain, i.e., logit(p) = base + eff + eta. Also, since the logit model transforms the (0,1) probability domain to the real number domain (-inf, inf), the values of theta3 through theta7 could be positive or negative, depending on whether the baseline proportions are < or > p=0.5. Thus, I'm not sure why you want to use an exp(eta) on the baseline response.

Ken
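Ken's remark about the logit transform can be illustrated in a few lines of Python (an illustrative sketch, not part of the NONMEM model):

```python
import math

def logit(p):
    # the logit maps the (0, 1) probability domain onto the
    # whole real line (-inf, inf)
    return math.log(p / (1.0 - p))

# the sign of a baseline logit simply reflects whether the corresponding
# cumulative proportion is below or above 0.5 -- hence Ken's point that
# an exp(eta) (which forces one sign) is an odd choice on this scale
assert logit(0.2) < 0           # proportion < 0.5 -> negative logit
assert logit(0.8) > 0           # proportion > 0.5 -> positive logit
assert abs(logit(0.5)) < 1e-12  # proportion = 0.5 -> logit of zero
```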

Re: Probabilistic model

From: Leonid Gibiansky Date: May 13, 2005

From: "Leonid Gibiansky" leonidg@metrumrg.com
Subject: Re: [NMusers] Probabilistic model
Date: Fri, May 13, 2005 2:05 pm

Ken,

I have very detailed (once-a-minute) PD measurements, so I thought that I could determine individual KE0, EMAX, EC50 (the parameters with random effects). The probabilistic part has only one (baseline logit) eta. The B0 estimate is around -15, so it should not be positive for any subject (the probability of the event without drug is nearly zero, much smaller than 0.5). If so, additive and proportional eta models should be similar; this is not the source of the problem, but I agree that an additive eta may be adequate in this case.

Thanks for the reply,
Leonid

RE: Probabilistic model

From: Vladimir Piotrovskij Date: May 17, 2005

From: "Piotrovskij, Vladimir [PRDBE]" VPIOTROV@PRDBE.jnj.com
Subject: RE: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 9:01 am

Leonid,

Each time I deal with ordered categorical responses with more than 3-4 categories, I observe the same behavior as you describe. What I recommend is to combine some categories to make the objective function surface less flat.

Best regards,
Vladimir

--------------------------------------------------------------------
Vladimir Piotrovsky, PhD
Research Fellow
Global Clinical Pharmacokinetics & Clinical Pharmacology
J&J Pharmaceutical Research and Development
Beerse, Belgium

Re: Probabilistic model

From: Nick Holford Date: May 17, 2005

From: "Nick Holford" n.holford@auckland.ac.nz
Subject: Re: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 9:25 am

Vladimir,

If the only way to get a solution is to throw away information by combining categories, then what relevance does the solution have? An alternative is to treat the categories as continuous variables, which NONMEM is perhaps able to handle more robustly, and certainly the resulting parameters are more readily interpretable.

Nick

Re: Probabilistic model

From: Chuanpu Hu Date: May 17, 2005

From: Chuanpu.Hu@sanofi-aventis.com
Subject: Re: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 10:29 am

While treating categorical variables as continuous allows better estimation, it is often improper because the categories are not homogeneous. For example, the difference between category 1 and 2 is usually not comparable to that between category 2 and 3. Combining categories will make sense if one does not care whether the category is 1 or 2, and the main objective is to assess the probability of, say, <=2 vs >2. However, if that is not clear a priori, then combining categories would indeed lose information. Based on the phenomenon Leonid describes, I suspect that the problem still lies in over-parameterization, e.g., the number of ETAs.

Chuanpu

-------------------------------------------------------------------
Chuanpu Hu, Ph.D.
Biostatistics
sanofi-aventis
9 Great Valley Parkway
Malvern, PA 19355-1304
Tel: (610) 889-6774
Fax: (610) 889-6932
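Chuanpu's dichotomization point can be made concrete with a small sketch: collapsing categories just sums their probabilities, so P(<=2) vs >2 is preserved while the split within each side is lost. The probabilities below are illustrative, not estimates from the thread.

```python
# illustrative per-category probabilities P(Y=0) .. P(Y=5)
pr = [0.05, 0.10, 0.25, 0.30, 0.20, 0.10]

# dichotomizing the 6-level score at <=2 vs >2
p_le2 = sum(pr[:3])   # keeps P(Y<=2) ...
p_gt2 = sum(pr[3:])   # ... and P(Y>2), but discards whether a "low"
                      # response was a 0, 1 or 2 (and likewise above)

assert abs(p_le2 - 0.40) < 1e-9
assert abs(p_le2 + p_gt2 - 1.0) < 1e-9
```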

RE: Probabilistic model

From: Vladimir Piotrovskij Date: May 17, 2005

From: "Piotrovskij, Vladimir [PRDBE]"
Subject: RE: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 10:46 am

Nick,

I think it's a bit risky to treat 6 categories as a continuous variable. I usually do this with 10 categories or so. You are right in saying that combining categories is not an ideal solution, since some information is lost. However, modeling is always a way to compress information: instead of, say, 1000 observations you get just a few parameters.

Best regards,
Vladimir

Re: Probabilistic model

From: Nick Holford Date: May 17, 2005

From: "Nick Holford"
Subject: Re: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 7:41 pm

Chuanpu et al.,

Thanks for taking the bait :-) I knew you and your statistical colleagues would take me to task for even thinking of treating categorical variables as continuous. While I have heard this caution many times before, I have never seen a numerical example which illustrates the practical consequences in some realistic setting. There are other examples of statistical 'knowledge' that have not been borne out when examined by experiment (e.g., the distribution of delta OBJ under the null, or the meaningfulness of getting the covariance step to run when assessing parameter reliability), so I wonder if anyone has done any work with NONMEM in this area?

Many uses of categorical variables in drug development reflect naive attempts by investigators to capture what is really a continuous scale variable, e.g., pain, neutropenia. IMHO such categorical scales are interpreted by those who look at the results as if they were indeed a continuous scale variable. It seems quite reasonable, if you have even a 5-point categorical scale, to consider it as continuous. Depending on the a priori knowledge of the system, you might choose to fix the residual error on each category to some reasonable value; e.g., a pain score on a 5-point scale might have a residual SD of 0.5 units (i.e., 3 is usually clearly different from 2 or 4).

Nick

Re: Probabilistic model

From: Leonid Gibiansky Date: May 17, 2005

From: "Leonid Gibiansky"
Subject: Re: [NMusers] Probabilistic model
Date: Tue, May 17, 2005 8:40 pm

Let me add an example in support of Nick's suggestion. In the project (real data; consecutive PK, then PK/PD) that motivated my small example, we noticed that the expected score ESC = SUM(SCORE_i * P_i), defined as the sum of (level * probability of the score at that level), described the observed data with very good accuracy. That motivated two continuous models. In one, we fitted ESC as defined above to the observed DV (score). The second model was a model for ESC as an EMAX function of concentration. Individual predictions of these two continuous models were as good as the individual predictions of the probabilistic model. We tried predictive-check simulations and found that all three models over-estimated the frequency of the highest scores (those with the strongest effect). The probabilistic model was slightly better than the continuous ones in this regard.

Continuous models took much less time (many hours instead of many days) and effort to converge (e.g., initial values of the parameters were obtained by FO; then FOCEI converged starting from the FO final estimates): this was much simpler than guessing initial conditions for the probabilistic model. Both types of models predicted a very similar covariate PD effect (requiring about 25-30% dose adjustment for a subgroup of patients). Continuous models were more stable and actually converged (i.e., starting from different initial conditions led to similar solutions), while the probabilistic model exhibited the behavior described in the original example that started this discussion. Based on this example, it would be hard to recommend either approach over the other: each has its own advantages and problems.

Leonid
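Leonid's expected-score definition ESC = SUM(SCORE_i * P_i) can be sketched as follows. The probabilities are illustrative, and `expected_score` is a hypothetical helper, not code from the project.

```python
def expected_score(pr):
    """Expected score ESC = sum over levels of (level * P(level)),
    for per-category probabilities pr[0..5] of a 0-5 score."""
    return sum(k * p for k, p in enumerate(pr))

# illustrative per-category probabilities P(Y=0) .. P(Y=5)
pr = [0.05, 0.10, 0.25, 0.30, 0.20, 0.10]
esc = expected_score(pr)

# ESC is a continuous quantity inside the score range, so it can be
# fitted to the observed scores or modeled directly, e.g. as an EMAX
# function of concentration, as described in the post
assert 0.0 <= esc <= 5.0
```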

RE: Probabilistic model

From: Mats Karlsson Date: May 18, 2005

From: "Mats Karlsson"
Subject: RE: [NMusers] Probabilistic model
Date: Wed, May 18, 2005 3:03 am

Hi Nick,

The primary endpoints of sildenafil were six-category scales. The statistical analysis plan said that these were to be treated as continuous endpoints. Therefore, in the PKPD analysis we (Peter Milligan, Scott Marshall and I) analysed them as such, but also as categorical endpoints. I have to say that model development as categorical data was considerably simpler than that as continuous. However, in the end there were no differences in conclusions between the two approaches when based on simulations from the two models. This was presented at PAGE in 1999. I can probably find you the presentation if you're interested.

Best regards,
Mats

--
Mats Karlsson, PhD
Professor of Pharmacometrics
Div. of Pharmacokinetics and Drug Therapy
Dept. of Pharmaceutical Biosciences
Faculty of Pharmacy
Uppsala University
Box 591
SE-751 24 Uppsala
Sweden
phone +46 18 471 4105
fax +46 18 471 4003
mats.karlsson@farmbio.uu.se

Re: Probabilistic model

From: Nick Holford Date: May 18, 2005

From: "Nick Holford"
Subject: Re: [NMusers] Probabilistic model
Date: Wed, May 18, 2005 6:07 am

Mats,

Leonid's experience comparing continuous and categorical models was in line with my own intuition -- continuous models ran more quickly and were easier to develop (in part because the parameters were more meaningful). Yet you seem to have had the opposite experience. Can you explain in more detail why you say "model development as categorical data was considerably simpler than that as continuous"?

Nick

Re: Probabilistic model

From: Jeffrey A Wald Date: May 18, 2005

From: jeffrey.a.wald@gsk.com
Subject: Re: [NMusers] Probabilistic model
Date: Wed, May 18, 2005 8:33 am

You cannot throw away information you do not possess. If you have a 6-category scale but a few of the categories are not populated with a sufficient number of observations, then combining them is perfectly valid and will add stability to the final solution.

A bigger danger in my mind is to assume that you can extrapolate, on the basis of arbitrarily converting categories to continuous responses, to non-observed responses. This is not necessarily a function of the number of categories. Take an 11-point pain scale. You might have very robust (and apparently continuous) data in the high to middle range of the scale. Now treat patients with a mildly effective drug. Absent a large placebo response, you are just not going to see enough of the 0's, 1's and 2's to resolve individual probabilities for these scores. As a friend and erstwhile mentor would say, "there is no substitute for no data". (I am still trying to figure that one out :-)

Jeff

Jeff Wald, PhD
jeffrey.a.wald@gsk.com
Clinical Pharmacokinetics/Modeling and Simulation
Neurology and GI
RTP, NC

Re: Probabilistic model

From: Chuanpu Hu Date: May 18, 2005

From: Chuanpu.Hu@sanofi-aventis.com
Subject: Re: [NMusers] Probabilistic model
Date: Wed, May 18, 2005 12:00 pm

Nick,

I knew you were up to something. :-) Still, I think your comments came from situations where the interest was in some kind of "averaged" response. I agree that in many instances you can get reasonable conclusions. I just prefer not to deal with the potential objections/difficulties. However, in an instance of my work with Steve Shafer (COST B1 1997 - details never published), categorical analysis did show insights/knowledge that we would not have gotten if the data had been treated as continuous. If that would be of value, maybe I should try to convince Steve that it is worthwhile to finish the paper. ;-)

Chuanpu

RE: Probabilistic model

From: Mats Karlsson Date: May 19, 2005

From: "Mats Karlsson"
Subject: RE: [NMusers] Probabilistic model
Date: Thu, May 19, 2005 2:46 am

Hi Nick,

For a 6-grade scale (0-5), predictions outside the range have even less meaning than those in between the categories. Therefore, we needed to put quite some effort into appropriate constraints so that the model could simulate well. Also, maybe not surprisingly, it is difficult to create a residual error model that appropriately describes the residual error when the observed data are constrained to 6 levels and a considerable amount lies at the edges of the prediction range.

Best regards,
Mats

--
Mats Karlsson, PhD
Professor of Pharmacometrics
Div. of Pharmacokinetics and Drug Therapy
Dept. of Pharmaceutical Biosciences
Faculty of Pharmacy
Uppsala University
Box 591
SE-751 24 Uppsala
Sweden
phone +46 18 471 4105
fax +46 18 471 4003
mats.karlsson@farmbio.uu.se

Re: Probabilistic model

From: Nick Holford Date: May 19, 2005

From: "Nick Holford"
Subject: Re: [NMusers] Probabilistic model
Date: Thu, May 19, 2005 5:44 am

Mats,

I guess it must be hard with sildenafil whichever model you choose? ;-)

Nick

Re: Probabilistic model

From: Nick Holford Date: May 19, 2005

From: "Nick Holford" n.holford@auckland.ac.nz
Subject: Re: [NMusers] Probabilistic model
Date: Thu, May 19, 2005 10:47 pm

Jeff,

jeffrey.a.wald@gsk.com wrote:
> You cannot throw away information you do not possess.

But if you have information and merge it with other information without keeping track of the original state, then information must be lost. You CAN throw away information that you do possess --- but it isn't a good idea. This is why I do not like the idea of combining categories.

> If you have a 6 category scale but a few of the categories are not populated with a sufficient number of observations, then combining them is perfectly valid and will add stability to the final solution.
>
> A bigger danger in my mind is to assume that you can extrapolate, on the basis of arbitrarily converting categories to continuous responses, to nonobserved responses. This is not necessarily a function of the number of categories. Take an 11-point pain scale. You might have very robust (and apparently continuous) data in the high to middle range of the scale. Now treat patients with a mildly effective drug. Absent a large placebo response, you are just not going to see enough of the 0's, 1's and 2's to resolve individual probabilities for these scores.

This is a different issue. The design may indeed make it hard to identify certain levels of response, but this is a problem for continuous as well as categorical analysis.

Nick

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/