From: "KOWALSKI, KENNETH G. [PHR/1825]" <kenneth.g.kowalski@pharmacia.com>
Subject: RE: Parameterization!!!
Date: Thu, 10 May 2001 09:43:54 -0500
Leonid and all NMUSERs,
Good question. This topic came up at the Mufpada meeting last week. I take the following steps when I build population models:
1) Determine the structural model (e.g., 1 vs 2 comp)
2) Determine the variance structure for Omega and Sigma
3) Develop the covariate models
In step 3 I like to develop a full model with all the covariates included simultaneously. There is a lot of information that can be obtained from a full model fit, and the issue of whether one should use a diagonal or block Omega during covariate model building goes away if you start with the full model and work backwards. Of course, a full model can have convergence problems when the covariates are highly collinear. For example, I have a researcher who always wants to investigate both BSA and WT on V and CL. Because BSA and WT are so highly correlated, it doesn't make sense to build a full model that includes both covariates. I usually include WT and then verify that any trends in the etas vs. BSA are accounted for once WT is in the model. If we choose our covariates judiciously, I think we can have more success in fitting full models.
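As an aside, a quick collinearity screen before assembling the full model can catch pairs like WT and BSA early. Here is a minimal sketch in Python (the function name and the 0.9 threshold are my own choices for illustration, not part of any NONMEM tooling):

```python
import numpy as np

def flag_collinear(covariates, names, threshold=0.9):
    """Flag covariate pairs too correlated to enter a full model together.

    covariates: 2D array, one column per candidate covariate
    names     : column labels (same order as the columns)
    Returns a list of (name_i, name_j, r) for pairs with |r| > threshold.
    """
    r = np.corrcoef(np.asarray(covariates, float), rowvar=False)
    flagged = []
    k = len(names)
    for i in range(k):
        for j in range(i + 1, k):
            if abs(r[i, j]) > threshold:
                flagged.append((names[i], names[j], r[i, j]))
    return flagged
```

A flagged pair (e.g., WT and BSA) signals that only one of the two should go into the full model, with the other checked afterwards against the eta plots.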
In my experience, when the correlation between CL and V is estimated to be quite high in a base model fit, I have yet to encounter a covariate, or set of covariates, in the full or final model with a strong enough signal to drive that correlation to zero. Thus, I always work with the fullest block Omega I can estimate from my base model, and I use it in fitting the full model and the subsequent covariate models run in search of the most parsimonious final model. For the same reason, even when I employ forward or forward/backward stepwise algorithms to build the covariate model, I tend to use the fullest Omega I can estimate from my base model.
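To track that CL-V eta correlation across base, full, and reduced fits, one small illustration (a hypothetical helper, not something produced by NONMEM itself) is to convert each estimated Omega block to a correlation matrix and compare the off-diagonal element:

```python
import numpy as np

def omega_correlation(omega):
    """Convert an estimated OMEGA block (variance-covariance of the
    etas) into a correlation matrix, so the CL-V eta correlation can
    be compared between the base model and covariate-model fits."""
    omega = np.asarray(omega, float)
    sd = np.sqrt(np.diag(omega))  # eta standard deviations
    return omega / np.outer(sd, sd)
```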
I will take this opportunity to make a plug for some research that my colleague, Matt Hutmacher, and I have been working on. Although stepwise procedures often find good-fitting models, they have their deficiencies (particularly when there is high collinearity among the covariates), and there is no guarantee that they will find the best model among the 2^k possible models (all combinations of presence or absence of k covariate parameters).

We have developed an algorithm that uses the estimates of the thetas, and the corresponding covariance matrix of those estimates, from a full model fit to approximate the likelihood ratio test statistic (the difference in the objective function between a restricted model and the full model) for all 2^k - 1 restricted models without having to run each of them in NONMEM. We use the algorithm (called WAM, for Wald's Approximation Method) to rank all 2^k possible models and then fit the top 10-15 models in NONMEM before deciding on a final model. Whereas a stepwise procedure finds a single good-fitting model, the WAM algorithm gives a sense of the competing models that also fit well.

I gave a presentation at Mufpada last week comparing the WAM algorithm with stepwise procedures on several data sets. In most cases the WAM algorithm performed as well as, if not better than, the stepwise procedures in finding a parsimonious model, and with substantially fewer NONMEM runs. Moreover, the NONMEM runs for the models that the WAM algorithm identifies as the "best models" are considerably more informative than the totality of the runs determined by the stepwise procedures. The key to using this algorithm is the fitting of a full model. We have a paper describing the methodology coming out in the June 2001 issue of JPP. We have SAS and S+ implementations that we will make available to anyone who is interested in trying it out.
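For those who want a feel for the mechanics, here is a rough sketch of the ranking step in Python. This is a simplified illustration, not our SAS/S+ code: the Schwarz-type penalty and all function names are for exposition only. The idea is that setting a subset S of covariate thetas to zero raises the objective function by approximately the Wald statistic W_S = theta_S' inv(C_SS) theta_S, so every restricted model can be scored without refitting.

```python
import numpy as np
from itertools import combinations

def wam_rank(theta, cov, n_obs, ofv_full=0.0, top=10):
    """Rank all 2^k restricted models by an approximate Schwarz-type score.

    theta   : full-model estimates of the k covariate-effect parameters
    cov     : their k x k estimated covariance matrix
    n_obs   : number of observations (used in the Schwarz penalty)
    ofv_full: objective function value of the full model

    For each subset of parameters set to zero, the Wald statistic
    approximates the rise in the objective function (the LRT statistic)
    without rerunning the model. Returns the 'top' scored models as
    (score, wald_stat, dropped_indices), best first.
    """
    theta = np.asarray(theta, float)
    cov = np.asarray(cov, float)
    k = len(theta)
    results = []
    for r in range(k + 1):
        for drop in combinations(range(k), r):
            if drop:
                idx = list(drop)
                sub = cov[np.ix_(idx, idx)]
                t = theta[idx]
                w = float(t @ np.linalg.solve(sub, t))  # Wald statistic
            else:
                w = 0.0  # full model: nothing removed
            retained = k - r
            score = ofv_full + w + retained * np.log(n_obs)
            results.append((score, w, drop))
    results.sort(key=lambda x: x[0])
    return results[:top]
```

In practice one would then refit only the top-ranked handful of models in NONMEM, as described above, rather than trusting the approximation outright.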
Best regards,
Ken