OMEGA selection

9 messages 7 people Latest: Apr 23, 2009

OMEGA selection

From: Ethan Wu Date: April 15, 2009 technical
Dear all,

I am fitting a PD response, and the equation is:

total response = baseline + f(placebo response) + f(drug response)

First, I tried a full OMEGA block; the model was able to converge, but the $COV step failed. To me, this indicates that there are too many parameters in the model. The structural model is a rather simple one, so I think there are probably too many ETAs.

I wonder whether there is a good principle of ETA reduction that I could apply here. Any good references?
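The structure described above can be sketched in code. This is a purely illustrative reconstruction, not the poster's actual model: the Emax-type placebo and drug functions, the parameter values, and the exponential ETA placement are all assumptions.

```python
import math

def total_response(t, conc, theta, eta=(0.0, 0.0, 0.0)):
    """Illustrative PD response for one subject:
    total = baseline + f(placebo response) + f(drug response).

    theta: (typical baseline, placebo Emax, drug Emax, placebo ET50, drug EC50)
    eta:   subject-level random effects, entering exponentially (an assumption).
    """
    base_t, pmax_t, dmax_t, et50, ec50 = theta
    baseline = base_t * math.exp(eta[0])                          # ETA(1) on baseline
    placebo = pmax_t * math.exp(eta[1]) * t / (et50 + t)          # ETA(2) on placebo Emax
    drug = dmax_t * math.exp(eta[2]) * conc / (ec50 + conc)       # ETA(3) on drug Emax
    return baseline + placebo + drug

# With all ETAs at zero we recover the typical-individual prediction:
typ = total_response(t=24.0, conc=50.0, theta=(10.0, 5.0, 20.0, 12.0, 25.0))
```

A full OMEGA block on the three ETAs here already means six variance/covariance parameters, which is the over-parameterisation concern raised in the question.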

RE: OMEGA selection

From: William Bachman Date: April 15, 2009 technical
Well, the first thing that I would do is look at the magnitude of the eta estimates. I would eliminate those etas that are poorly estimated (essentially the ones with very large values or those approaching zero).

Re: OMEGA selection

From: Nick Holford Date: April 15, 2009 technical
Ethan,

Do not pay any attention to whether or not the $COV step runs, or even whether the run is 'SUCCESSFUL', to conclude anything about your model. Your opinion is not supported experimentally; e.g. see http://www.mail-archive.com/[email protected]/msg00454.html for discussion and references.

NONMEM has no idea whether the parameters make sense and will happily converge with models that are overparameterised. You cannot rely on a failed $COV step or a MINIMIZATION TERMINATED message to conclude that the model is not a good one. You need to use your brains (NONMEM does not have a brain) and your common sense to decide whether your model makes sense or is perhaps overparameterised.

Nick

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
[email protected] tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +33 64 271-6369 (Apr 6-Jul 17 2009)
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
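Nick's point that an optimiser can "happily converge" on an overparameterised model is easy to demonstrate outside NONMEM. In this toy sketch (all names and values are illustrative), y = theta1 * theta2 * x is fit by least squares; only the product theta1 * theta2 is identifiable, yet the optimiser reports success, while the curvature matrix (the analogue of NONMEM's R matrix) is singular, which is what makes a covariance step fail.

```python
import numpy as np
from scipy.optimize import least_squares

# Overparameterised toy model: y = theta1 * theta2 * x.
# Only the product theta1*theta2 is identifiable from the data.
x = np.linspace(1.0, 10.0, 20)
y = 2.0 * x  # true product theta1*theta2 = 2, no noise

res = least_squares(lambda th: th[0] * th[1] * x - y, x0=[1.0, 1.0])
t1, t2 = res.x

# Minimisation "succeeds" and the identifiable product is recovered...
product_ok = abs(t1 * t2 - 2.0) < 1e-4

# ...but the individual parameters are arbitrary, and J'J (the
# curvature matrix) is singular: the analogue of a failed $COV step.
JtJ = res.jac.T @ res.jac
condition_number = np.linalg.cond(JtJ)
```

A successful minimisation here says nothing about whether theta1 and theta2 individually mean anything, which is exactly why convergence alone cannot validate a model.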

RE: OMEGA selection

From: Mark Sale Date: April 15, 2009 technical
Nick et al.,

At the risk of starting a discussion that probably has little mileage left in it: first, I agree with Nick on covariance; it probably doesn't matter. But I'd like to point out what may be an error in our logic.

We contend that we have demonstrated that covariance doesn't matter. Our evidence is that, when bootstrapping, the parameters for the samples that have a successful covariance step are not different from those that failed. So we conclude that the results are the same regardless of covariance outcome across sampled data sets; the independent variable in this test is the data set, and the model is fixed. In model selection/building, we have a fixed data set and the independent variable is the model structure. Whether covariance success is a useful predictor across different models with a fixed data set is a different question from whether covariance success is a useful predictor across data sets with a fixed model.

But, in the end, I do agree that biological plausibility, diagnostic plots, reasonable parameters, and some suggestion of numerical stability/identifiability (such as bootstrap CIs) are more important than a successful covariance step.

Mark

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185

RE: OMEGA selection

From: William Bachman Date: April 15, 2009 technical
In my opinion, I would not remove those in the 10-90% range. I would be suspicious of anything over 100%; even with noisy data, those etas are being poorly estimated.
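Whether an ETA magnitude like the "2 or 3" mentioned in this thread is large depends on the parameterisation, but Bill's 100% CV rule of thumb can be made concrete. Assuming the common log-normal parameterisation P = THETA * exp(ETA) with ETA ~ N(0, omega^2), the coefficient of variation of P is sqrt(exp(omega^2) - 1); the numbers below are just that formula evaluated at two illustrative omega values.

```python
import math

def cv_percent(omega_sd):
    """Apparent CV (%) of P = THETA*exp(ETA), ETA ~ N(0, omega_sd^2)."""
    return 100.0 * math.sqrt(math.exp(omega_sd ** 2) - 1.0)

cv_small = cv_percent(0.3)  # omega SD 0.3 -> ~31% CV, a plausible IIV
cv_large = cv_percent(2.0)  # omega SD 2   -> ~730% CV, extreme IIV
```

On this scale, an ETA standard deviation of 2 implies several-hundred-percent variability, well past the "suspect of anything over 100%" threshold.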
Quoted reply history
From: Ethan Wu Sent: Wednesday, April 15, 2009 1:15 PM
Some etas are estimated to be around 2 or 3, but since I am fitting quite noisy PD data, I think they are actually reasonable. No etas are close to 0. cov% is estimated in the range of 10-90%. Should those small ones, like 10%, be taken out?

RE: OMEGA selection

From: Mats Karlsson Date: April 16, 2009 technical
Hi Ethan,

I think you have given too little information to diagnose your problem properly. We don't even know whether the ETAs enter additively, proportionally, in logit expressions, or otherwise (so values of 2 or 3 don't indicate the scale). Also, I think you mentioned 10-90% as values for correlations, whereas Bill interpreted it as CVs for IIV; there was just not enough information to make the distinction. If the model is so simple, why not show the whole model?

Mats

Mats Karlsson, PhD
Professor of Pharmacometrics
Dept of Pharmaceutical Biosciences
Uppsala University
Box 591, 751 24 Uppsala, Sweden
phone: +46 18 471 4105 fax: +46 18 471 4003
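Mats's point about scale can be made numeric. The sketch below (all baseline values and parameterisations are illustrative assumptions) shows that the same ETA standard deviation of 2 means very different things depending on how ETA enters the model.

```python
import math

omega_sd = 2.0  # the ETA standard deviation under discussion

# Additive on a baseline of 100 units: +/-1 SD spans 98 to 102 -- modest.
additive_span = (100.0 - omega_sd, 100.0 + omega_sd)

# Exponential (log-normal): +/-1 SD is a ~7.4-fold change each way -- extreme.
fold_change = math.exp(omega_sd)

# Logit on a typical probability of 0.5: +/-1 SD spans ~0.12 to ~0.88.
def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

p_low, p_high = inv_logit(-omega_sd), inv_logit(omega_sd)
```

So without knowing where the ETA sits, "2 or 3" could be anywhere from negligible to implausibly large, which is why the model itself is needed to judge it.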

RE: OMEGA selection

From: Kenneth Kowalski Date: April 21, 2009 technical
NMusers,

My apologies for entering this discussion a bit late, as I was on vacation last week. Rather than rehash previous debates about $COV, I thought I would list some of the ways I use the $COV step output to assist my model building and clinical trial simulation efforts. Before I do so, let me preface my comments by saying that, for me, the real diagnostic value of the $COV step lies in the output it reports, not simply in whether or not $COV runs successfully. I strive for a successful $COV step because I find diagnostic value in the $COV output to guide my model-building efforts.

There are three basic ways I use the $COV step output:

1) Inspection of the standard errors, the pairwise correlations among the parameter estimates, and the eigenvalue analysis of the correlation matrix helps me understand the limitations of the design/data via the model.

2) I find full covariate models much easier to build by first ensuring that I have a stable base model through inspection of the $COV step output. I tend to use the full model to make inference about the covariate parameter estimates (e.g., CIs), as they will not suffer from the model selection bias that occurs with stepwise procedures.

3) Based on asymptotic statistical theory for maximum likelihood estimation, I will often assume that the parameter estimates follow a multivariate normal distribution with mean vector set to the population parameter estimates and covariance matrix set to the covariance matrix of the parameter estimates for THETA, OMEGA, and SIGMA reported in the $COV output. This assumption allows me to easily generate random sets of population parameters reflecting parameter uncertainty when conducting clinical trial simulations. One could do non-parametric bootstrapping to accomplish this as well, but it is easier and faster to use the multivariate normal distribution when it is reasonable to assume that the asymptotics hold.
Below are examples that illustrate some of the ways I use the $COV output:

• Identify the largest standard errors relative to the point estimates and rationalize the limitations of the data/design that would give rise to these large SEs (e.g., a standard error for ka may be large if few sample times are observed prior to Tmax).

• Screen for high pairwise correlations. For example, a high correlation between the population parameter estimates for CL/F and V/F may result when fitting a base model to steady-state PK data. This would suggest that the same information in the data is being used to estimate both parameters. This can be problematic for building full covariate models where one or more covariates may have effects on both parameters. In this setting I may use clinical judgment as to whether a particular covariate effect is more likely to be on CL/F or V/F if the limitations of the design/data preclude estimating it on both.

• The covariance matrix of the estimates from a full model run is helpful in determining a subset of potential parsimonious final models using the WAM algorithm (see Kowalski & Hutmacher, JPP 2001;28:253-275).

• I use SAS (or S-Plus) to generate random sets of population parameters from the multivariate normal distribution, using the population parameter estimates and the covariance matrix of the estimates from the $COV output, in clinical trial simulations. This lets me quantify operating characteristics such as probability of success (probability of a Go decision) and probability of a correct decision, in contrast to power calculations, which assume a fixed effect size. Power is a conditional probability (conditioning on an assumed effect magnitude), whereas POS (probability of success) is an unconditional probability that takes into account the uncertainty in achieving a given effect magnitude. Power is a performance characteristic of the design, whereas POS is a performance characteristic of both the design and the compound (dose or treatment).
Kind regards, Ken
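Ken's third use of the $COV output, sampling parameter vectors from a multivariate normal for trial simulation, can be sketched briefly. He mentions SAS or S-Plus; the equivalent in Python is shown below, with made-up placeholder values for the final estimates and their covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(2009)

# Placeholders standing in for the final estimates and the covariance
# matrix of the estimates reported by the $COV step (illustrative only;
# the covariance matrix must be symmetric positive semi-definite).
theta_hat = np.array([10.0, 5.0, 20.0])
cov_hat = np.array([[1.00, 0.10, 0.05],
                    [0.10, 0.25, 0.02],
                    [0.05, 0.02, 4.00]])

# One population parameter vector per simulated trial replicate:
n_replicates = 10_000
draws = rng.multivariate_normal(theta_hat, cov_hat, size=n_replicates)

# Each row of `draws` parameterises one clinical trial simulation; the
# spread across rows propagates parameter uncertainty into quantities
# like probability of success.
mean_draws = draws.mean(axis=0)
```

Averaging a Go/No-Go outcome over the replicates gives the unconditional probability of success Ken contrasts with power, which would condition on a single fixed parameter vector.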
Quoted reply history
From: Nick Holford Sent: Wednesday, April 15, 2009 2:49 PM
Mark, I agree with your logic. In the meantime I will ignore the $COV step (it rarely happens for me) and wait for some empirical evidence that the $COV step is of demonstrable value for model building. Perhaps your grid computing system could take on that challenge by comparing the results of automated model building with and without $COV or convergence? Nick

RE: OMEGA selection

From: Yaming Hang Date: April 23, 2009 technical
Hi Mark,

Very interesting point. In general, your logic about why the covariance step doesn't matter in the bootstrapping case makes sense to me. However, I have some questions about how that conclusion was reached:

1. How many data sets were bootstrapped?
2. Among them, what was the frequency of failed vs. successful covariance steps?
3. Are the parameter estimates themselves similar across the different bootstrap samples?
4. Are there any major differences between the data sets leading to successful and failed covariance steps?

I am imagining an example: with an Emax model, I generate two data sets, one with a good distribution with regard to the X variable (say, concentration) and the other with a poor distribution, so that the first data set gives me a successful run, including the $COV step, with reasonable estimates for Emax and EC50, while the second data set leads to a total failure in estimation, where even estimates for Emax and EC50 cannot be obtained. I guess I cannot use this as a basis to conclude that even the $ESTIMATE step is not reliable, since both data sets come from the same population, right? I'd love to hear your thoughts on this one.

Thanks,
Yaming
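Yaming's thought experiment can be run numerically. In the sketch below (all values are illustrative: true Emax = 100, EC50 = 10), the same Emax model is fit to a well-spread concentration design and to an ill-conditioned one where every concentration sits far above EC50, so the response is essentially saturated.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

def fit(conc):
    """Simulate noisy observations (duplicates at each level) and fit."""
    c = np.repeat(conc, 2)
    e = emax_model(c, 100.0, 10.0) + rng.normal(0.0, 1.0, c.size)
    popt, pcov = curve_fit(emax_model, c, e, p0=[90.0, 5.0], maxfev=20000)
    return popt, np.sqrt(np.diag(pcov))  # estimates and their SEs

# Good design brackets EC50; ill design saturates the response.
popt_good, se_good = fit(np.array([0.5, 1, 2, 5, 10, 20, 50, 100]))
popt_ill, se_ill = fit(np.array([200, 300, 400, 500, 600, 700, 800]))
```

Both fits converge, but the ill-conditioned design leaves EC50 (index 1) poorly determined, with a much larger standard error; this is the sort of design-driven difference between "successful" and "shaky" runs that Yaming's questions probe.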

RE: OMEGA selection

From: Mark Sale Date: April 23, 2009 technical
Yaming,

For details, I'd refer you to the abstracts; I've never published this. But whenever I do a bootstrap, I look at whether the samples that had a successful covariance step are different (in mean or variability), just for my own interest. They never have been; I'd guess I've looked at six or so bootstraps. I have no records of what fraction of samples had a successful covariance step.

I'd also refer you to any number of good references on how to decide whether a model is "good" (plots, biological plausibility, reasonable parameters, various metrics of "goodness", etc.). I'd suggest that if your parameters are poorly defined by the data (e.g., all concentrations near Emax, unable to define EC50), you'll invariably find that other metrics suggest a lack of model goodness. Whether and how successful covariance or minimization fits into this will have to wait until we have a universally accepted metric of model "goodness". I would list CIs (based on bootstrap, not $COV) among my metrics of model goodness. I'd even list a successful covariance step among them, but pretty far down the list (everything else being equal, I'd prefer a model that has a successful covariance step; of course, everything else is never equal).

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
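The check Mark describes, comparing bootstrap parameter estimates between covariance-successful and covariance-failed samples, is simple to sketch. The estimates and success flags below are simulated placeholders standing in for a real bootstrap results table, with the flag independent of the estimates by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder bootstrap results: one parameter (e.g. CL/F) per sample,
# plus a flag for whether the $COV step succeeded on that sample.
n_boot = 1000
cl_estimates = rng.normal(5.0, 0.5, n_boot)
cov_success = rng.random(n_boot) < 0.7  # ~70% successful $COV steps

ok = cl_estimates[cov_success]
failed = cl_estimates[~cov_success]

# Compare location and spread between the two groups.
mean_diff = ok.mean() - failed.mean()
sd_ratio = ok.std(ddof=1) / failed.std(ddof=1)
```

A small mean difference and an SD ratio near 1 would be the pattern Mark reports seeing in practice: covariance success carrying little information about the estimates themselves.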