Sample size/power in NONMEM

From: Mark Sale | Date: February 07, 2007 | Source: mail-archive.com
Dear Colleagues,

In response to the recent discussion in this forum about sample size and power in NONMEM, Next Level Solutions has developed tools for formal "power" assessment in NONMEM using simulation (as described by Serge Guzy in this discussion).

Essentially, one or more (preferably more) models are provided to Next Level Solutions by the sponsor, along with proposed sampling schemes. Additional optional information includes dropout rates, sampling-time errors (reported and unreported), missing-sample probability, trial design (crossover, parallel), LLQ, and dosing-time inaccuracies (mean and SD). Data sets are then simulated under the proposed sampling schemes (with the described random effects) and analyzed in NONMEM by bootstrap, using multiple sets of initial estimates to reduce the problem of local minima.

The generated report will include:
- Probability of model convergence
- Probability of a successful covariance step
- Probability that the absolute value of every off-diagonal element of the correlation matrix of the estimates is < 0.95
- Mean and SEE for all parameters (computed by bootstrap, not from NONMEM's estimates of the SEE)
- Probability of detecting a hierarchical effect (e.g., an effect of age on clearance vs. no effect of age on clearance), given the effect size described by the model(s) and a log-likelihood ratio test

Turnaround time depends on the run times of the models, but is typically a few days to a week, most of which is computer run time (1000 simulated data sets and NONMEM analyses for each sampling scheme, or twice that if a reduced vs. full model hypothesis is to be tested).

Thanks,
Mark
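The power calculation described above (simulate many replicate trials, fit a full and a reduced model to each, and count how often the log-likelihood ratio test detects the covariate effect) can be illustrated with a toy sketch. This is not the NONMEM workflow itself: it replaces the population PK model with a simple log-linear model of clearance versus age, and all names, parameter values, and sample sizes below are illustrative assumptions, not figures from the post.

```python
import numpy as np

CHI2_CUTOFF_1DF = 3.841  # chi-square critical value, 1 df, alpha = 0.05


def simulate_trial(n_subjects, theta_age, rng):
    """Simulate one trial: clearance depends on age with exponent theta_age
    plus log-normal between-subject variability (illustrative values)."""
    age = rng.uniform(20, 80, n_subjects)
    eta = rng.normal(0.0, 0.3, n_subjects)          # between-subject variability
    cl = 5.0 * (age / 50.0) ** theta_age * np.exp(eta)
    return age, np.log(cl)


def neg2_log_likelihood(resid):
    """-2 log likelihood of normal residuals at the ML variance estimate."""
    n = len(resid)
    s2 = resid.var()  # ddof=0 -> maximum-likelihood variance
    return n * np.log(2.0 * np.pi * s2) + n


def lrt_delta(age, log_cl):
    """Delta objective function: reduced (no age effect) minus full model."""
    x = np.log(age / 50.0)
    resid_reduced = log_cl - log_cl.mean()            # intercept only
    coefs = np.polyfit(x, log_cl, 1)                  # intercept + age slope
    resid_full = log_cl - np.polyval(coefs, x)
    return neg2_log_likelihood(resid_reduced) - neg2_log_likelihood(resid_full)


def estimate_power(n_subjects=40, theta_age=0.75, n_reps=500, seed=0):
    """Fraction of simulated trials in which the LRT detects the age effect."""
    rng = np.random.default_rng(seed)
    hits = sum(
        lrt_delta(*simulate_trial(n_subjects, theta_age, rng)) > CHI2_CUTOFF_1DF
        for _ in range(n_reps)
    )
    return hits / n_reps
```

With a true age effect the detection rate approaches 1, while with `theta_age=0` it falls back to roughly the nominal alpha of 0.05, which is the type-I-error check implicit in the full-vs-reduced comparison. In the service described above, each "fit" would instead be a NONMEM run on a simulated data set, roughly doubling the number of runs when both models must be estimated.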