algorithm limits

7 messages 4 people Latest: Jul 22, 2008

algorithm limits

From: Mark Sale Date: July 19, 2008 technical
General question: What are practical limits on the magnitude of OMEGA that is compatible with the FO and FOCE/I method? I seem to recall Stuart at one time suggesting that a CV of 0.5 (exponential OMEGA of 0.5) was about the limit at which the Taylor expansion can be considered a reasonable approximation of the real distribution. What about FOCE-I? I'm asking because I have a model that has an OMEGA of 13, exponential (and sometime 100) FOCE-I, and it seems to be very poorly behaved in spite of overall, reasoable looking data (i.e., the structural model traces a line that looks like the data, but some people are WAY above the line and some are WAY below, and some rise MUCH faster, and some rise MUCH later, by way I mean >10,000 fold, but residual error looks not too bad). Looking at the raw data, I believe that the the variability is at least this large. Can I beleive that NONMEM FOCE (FO?) will behave reasonably? thanks Mark <<attachment: left.letterhead>>

Re: algorithm limits

From: Leonid Gibiansky Date: July 19, 2008 technical
Hi Mark,

If you really have 10,000-fold differences in, say, volume or bioavailability, a population model does not make any sense: the individual parameters have uninformative priors; they are defined by the individual data only, and no meaningful predictions can be made for the next patient. So, if you need data description, you can directly see whether the method provides you with the correct line, but you cannot count on predictions: they can be anywhere.

For the estimation procedure, my understanding is that large OMEGAs will discount the population model's influence on the individual fit, and in this respect the method will give you the correct answer (individual parameters controlled by the individual data only). This is how you trick NONMEM into an individual model fit: assign huge OMEGAs. Whether your true OMEGA value is 50 or 150 is more or less irrelevant: both values are huge and do not provide informative priors for the individual parameters.

Sometimes you get huge OMEGAs when there is a strong correlation between parameters, so that a combination of ETAs is finite while each of them individually can be anywhere. Removal of some random effects can help in this case. Sometimes large OMEGAs are indicative of multimodal distributions (or strong categorical covariate effects): this will be seen in ETA distribution histograms or ETA vs. covariate plots.

Overall, I think you have problems with the model or data rather than a failure of the estimation method.

Thanks
Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
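Leonid's point that huge OMEGAs reduce the fit to a per-individual fit can be seen in a toy linear analogue of the empirical Bayes step. This is not NONMEM's actual objective function, just the same precision-weighting idea in its simplest conjugate-normal form:

```python
import numpy as np

def map_estimate(y, tv, omega_sq, sigma_sq):
    """Posterior mode for one subject under the toy model
    y_j = P_i + eps, eps ~ N(0, sigma_sq), prior P_i ~ N(tv, omega_sq):
    a precision-weighted average of the subject's data mean and the prior mean."""
    w_data = len(y) / sigma_sq   # precision contributed by this subject's data
    w_prior = 1.0 / omega_sq     # precision contributed by the population prior
    return (w_data * np.mean(y) + w_prior * tv) / (w_data + w_prior)

rng = np.random.default_rng(0)
y = 8.0 + rng.normal(0.0, 1.0, size=4)   # true individual value 8, prior mean 2
for omega_sq in [0.1, 1.0, 10.0, 1e4]:
    print(f"omega^2 = {omega_sq:8.1f} -> P_hat = "
          f"{map_estimate(y, tv=2.0, omega_sq=omega_sq, sigma_sq=1.0):.3f}")
```

As omega_sq grows, the prior weight vanishes and the estimate converges to the subject's own data mean, which is why an OMEGA of 50 versus 150 makes essentially no difference.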

Re: algorithm limits

From: Leonid Gibiansky Date: July 20, 2008 technical
Mark,

The description that you gave confirms that the population model has limited value unless the four parameters (baseline, percent change, time to drop, and time to recovery) correlate somehow. If not, your data tell you that the biomarker may start from very small or very large values, decrease to zero or not decrease at all, and recover in a week or in a year. Moreover, as I understood it, there is no central tendency: any baseline, drop, time to decrease, and time to recovery are independent and equally probable (otherwise, you would have reasonable OMEGAs with a bell-shaped rather than flat distribution of random effects). Sparse sampling will not work in this case, and if you have dense sampling, you may just use a two-stage approach to describe the observed (uniform?) distribution of individual parameters (and correlations, if there are any).

Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566

Mark Sale - Next Level Solutions wrote:
> Leonid,
> This isn't PK, and the model shows basically the right shape, and the data suggest reasonable residual error (the biological marker falls from a value between 5 and 310000 to somewhere between 0 and no change from baseline, over a course of a couple of hours to a couple of weeks, then recovers somewhere between 100 hours and 9000 hours later). I.e., it starts at a highly variable level, falls by some highly variable fraction over some variable length of time, and recovers somewhere between about a week and about a year.
> But, within those limits, it appears pretty well behaved.
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com
> 919-846-9185
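If dense sampling makes a two-stage analysis feasible, a naive version is easy to sketch. The functional form and parameter names below are assumptions for illustration only (a generic fall-and-recovery curve, not Mark's actual model):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr

def biomarker(t, base, frac, t_drop, t_rec):
    """Hypothetical shape: starts at 'base', dips by roughly 'frac' of
    baseline (time constant t_drop), then recovers (time constant t_rec)."""
    return base * (1.0 - frac * (np.exp(-t / t_rec) - np.exp(-t / t_drop)))

def two_stage(subjects):
    """Naive two-stage: fit each subject separately, then summarize the
    empirical distribution of the individual estimates."""
    fits = []
    for t, y in subjects:                          # each subject: (times, observations)
        p0 = [max(y[0], 1e-3), 0.5, 10.0, 500.0]   # crude starting values
        p, _ = curve_fit(biomarker, t, y, p0=p0, maxfev=10000)
        fits.append(p)
    fits = np.array(fits)
    for j, name in enumerate(["base", "frac", "t_drop", "t_rec"]):
        print(f"{name:6s}: median {np.median(fits[:, j]):.3g}, "
              f"range {fits[:, j].min():.3g} .. {fits[:, j].max():.3g}")
    rho, _ = spearmanr(fits)   # Leonid's question: do the parameters correlate?
    print("Spearman correlation matrix:\n", np.round(rho, 2))
    return fits
```

The point of the summary step is exactly Leonid's: if the histograms look flat and the correlations are near zero, the population distribution carries no information beyond its range.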

RE: algorithm limits

From: Mark Sale Date: July 20, 2008 technical
Thanks Leonid, I believe what you tell me, and I understand that FOCE doesn't solve the problem with the approximation that FO makes, only reduces it (and possibly expands the range over which the approximation is useful?). Anyone out there with insight into what a practical limit is for FOCE, and/or whether there are any diagnostics that are helpful when you're close to it? Is it really 0.5 for FO?

Mark

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185

Re: algorithm limits

From: Saik Urien Svp Date: July 21, 2008 technical
Mark, Leonid,

I suspect that OMEGA values above 2 or 3 units are very doubtful. As Leonid pointed out, such variability levels do not tell us anything about the priors. Another point to discuss is the standard errors associated with these OMEGA estimates. What is their extent? Finally, with such results I would subject the model to a bootstrap evaluation, to check the true confidence intervals of the model estimates.

Regards
Saïk
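Saik's bootstrap suggestion, in schematic form: resample subjects with replacement and refit each replicate. The fit_model callable is a placeholder here (in practice it would assemble a dataset from the sampled subjects and run the NONMEM estimation; tools such as PsN automate this loop):

```python
import numpy as np

def bootstrap_ci(subjects, fit_model, n_boot=500, alpha=0.05, seed=1):
    """Nonparametric bootstrap over subjects: resample individuals with
    replacement, refit, and return percentile confidence intervals.
    fit_model: callable mapping a list of subjects to a parameter vector."""
    rng = np.random.default_rng(seed)
    n = len(subjects)
    boot = []
    for _ in range(n_boot):
        sample = [subjects[i] for i in rng.integers(0, n, size=n)]
        boot.append(fit_model(sample))
    boot = np.array(boot)
    lo = np.percentile(boot, 100.0 * alpha / 2.0, axis=0)
    hi = np.percentile(boot, 100.0 * (1.0 - alpha / 2.0), axis=0)
    return lo, hi
```

If the percentile intervals on the OMEGAs are enormous, that is direct evidence that the data do not support the variance estimates, whatever the asymptotic standard errors say.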

RE: algorithm limits

From: Mark Sale Date: July 21, 2008 technical
Saik, Thanks for your views on this - are there some simulation results that you've seen, or some other basis for the limit of 2 to 3?

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185

RE: algorithm limits

From: James G Wright Date: July 22, 2008 technical
Hi Mark, This is a good question. I am not aware of any public-domain simulation work in extreme-variability scenarios, so my comments are based on theory.

The fundamental problem with the standard NONMEM algorithm, where the fixed effects and random effects are estimated simultaneously by joint maximum likelihood, is that the size of the variance parameters can bias the mean, sometimes substantially (hence generalized least squares remains the standard algorithm in the statistical community). If the variance model is even slightly misspecified (which it nearly always is), this can be very damaging to your population mean estimate. Often this leads to overestimates of the mean (so the variance can be smaller), but in some circumstances you can get an excessively high CV% because the mean is underestimated. The other common cause is that you have parameter values close to zero in a subset of subjects, which on a log scale is minus infinity. Given that you are getting such a high CV%, the lognormal may not be the best approach. Switching to additive intersubject variability would remove this dependence between mean and variance, and I would definitely give it a try as an exploratory step. In WinBUGS or a nonparametric package, you could explore other distributions; in NONMEM, your only options are subsetting the data manually or using a mixture model, each of which brings new problems.

Linearization is a slightly different issue, as this affects how the random effects impact the fit. FOCE linearization will probably give you good individual fits if your individual data contain information about all parameters (i.e., you could almost get away with a two-stage approach), but this is not the same as having reliable population parameter estimates. From your description of the model, it sounds like you have variability parallel to the time axis, and this is the toughest to linearize; it pushes you away from classic NONMEM as a software choice if the problem lies in a parameter that shifts the predicted curve horizontally in time (as a lag time does).

As a rule of thumb, I would definitely be cynical about a CV over 300%, and would be extremely cautious about using such a model for prediction. My eyebrows start to rise at around 130%. If you decide to simulate, good luck, and I would love to know your findings.

Best regards,

James G Wright PhD
Scientist, Wright Dose Ltd
Tel: 44 (0) 772 5636914
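James's caution about linearization can be made quantitative for the simplest case. Under P = TV * exp(eta), a first-order method effectively works with the expansion exp(eta) ≈ 1 + eta; comparing exact lognormal moments against that first-order version shows where the approximation degrades (a back-of-envelope sketch, not a substitute for the simulation study suggested above):

```python
import numpy as np

# Exact moments of exp(eta), eta ~ N(0, omega^2), versus the
# first-order (FO-style) linearization exp(eta) ~ 1 + eta.
for omega in [0.1, 0.3, 0.5, 0.7, 1.0, 2.0, 3.6]:
    w2 = omega ** 2
    exact_mean = np.exp(w2 / 2.0)            # E[exp(eta)]
    exact_cv = np.sqrt(np.exp(w2) - 1.0)     # CV of exp(eta)
    fo_mean, fo_cv = 1.0, omega              # E[1 + eta], CV of 1 + eta
    print(f"omega = {omega:4.2f}: mean {exact_mean:7.3f} vs FO {fo_mean:.1f}; "
          f"CV {exact_cv:8.3f} vs FO {fo_cv:4.2f}")
```

The exact and first-order CVs agree to within a few percent up to roughly omega = 0.5 (CV about 50%, matching the folklore limit quoted for FO), drift apart noticeably around omega = 1 (exact CV about 130%), and are in different universes by omega = 3.6, i.e. an OMEGA variance of 13.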