Using MCP-MOD in dose finding for Phase 3

8 messages 6 people Latest: Mar 27, 2015

Using MCP-MOD in dose finding for Phase 3

From: Nele Mueller-Plock Date: March 20, 2015 technical
Dear all, I am writing to you as we are currently discussing the implementation of the MCP-MOD approach for dose finding based on Phase 2B results, and I would like to hear your opinion on this approach. It would be good to get feedback from both statisticians and classical modelers. Having thought about the approach, I have some trouble seeing its advantage over full population PK/PD modeling. As I understand it, MCP-MOD:

· Only uses trial endpoints, i.e. it ignores the time course of the treatment effect. This concerns me because there might be noise in the endpoint (e.g. if the effect has reached a plateau), which could lead to selection of the wrong model structure. Including the time course, as in PK/PD modeling approaches, would reveal that the deviation is just noise, and thus probably identify the right model structure despite it.

· Uses dose-response models instead of exposure-response models.

· Pre-specifies the model structure. While I understand that prespecification is crucial for pivotal trials, I would assume that Phase 2 is performed to allow exploration of the data, so that we come up with the best model given the data we have. What happens if the true model is not among the tested ones? What if new physiological insights tell us about the model structure after we have seen the data? Do we then ignore what we know, fit all the bad models, and, if none gives a good description, do model averaging of bad models?

· If we include a model with many parameters in the prespecified candidate set and only have a few dose strengths, wouldn't the model with more parameters be more likely to give a good fit (e.g. when comparing Emax to logistic), with the consequence that a wrong dose might be selected?
Colleagues from statistics recommend covering all potential model shapes in the candidate set to avoid bias in dose selection; they argue that post-hoc model fitting leads to data dredging and over-fitting, does not account for model uncertainty, and gives overly optimistic results. I wonder, however, what difference the approach makes if ALL potential models are considered anyway (which can lead to over-fitting as well). Might a good solution be to combine PK/PD modeling with MCP-MOD? Your opinions will be highly appreciated, and I am looking forward to receiving comments both in favour of and against the approach :-) Best Nele
______________________________________________________________
Dr. Nele Mueller-Plock, CAPM
Associate Scientific Director Pharmacometrics, Global Pharmacometrics Translational Medicine
Takeda Pharmaceuticals International GmbH
Thurgauerstrasse 130, 8152 Glattpark-Opfikon (Zürich), Switzerland
Phone: (+41) 44 / 55 51 404 Mobile: (+41) 79 / 654 33 99
[email protected]
http://www.takeda.com/
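To make the last bullet concrete, here is a minimal sketch (not from the original posts; every parameter value is invented for illustration) of typical candidate dose-response shapes a candidate set might pre-specify, showing how different model families can give similar predicted means at a handful of dose strengths:

```python
import numpy as np

# Illustrative candidate dose-response shapes; all parameter values are made up.
def emax(dose, e0, emax_, ed50):
    # Hyperbolic Emax model (derivable from the law of mass action).
    return e0 + emax_ * dose / (ed50 + dose)

def sig_emax(dose, e0, emax_, ed50, h):
    # Sigmoid Emax with Hill coefficient h (one extra parameter).
    return e0 + emax_ * dose**h / (ed50**h + dose**h)

def logistic(dose, e0, emax_, ed50, delta):
    # Logistic shape, another common candidate.
    return e0 + emax_ / (1.0 + np.exp((ed50 - dose) / delta))

doses = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
# With only five dose strengths, different families can produce very
# similar predicted means at the study doses -- the identifiability
# concern raised in the bullet above.
print(np.round(emax(doses, 0.0, 1.0, 25.0), 3))
print(np.round(sig_emax(doses, 0.0, 1.0, 25.0, 1.5), 3))
```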

RE: Using MCP-MOD in dose finding for Phase 3

From: Magnus Åstrand Date: March 20, 2015 technical
Dear Nele, here are some thoughts. The idea of MCP-MOD is twofold: a) provide a procedure for testing for a treatment effect that incorporates all doses studied while still maintaining control of the type I error; b) if a) is significant, continue with a framework for estimating the dose-response, either by model selection or by model averaging among the significant candidate models.

I think you could use the principles of MCP-MOD even with a longitudinal model that includes a time course of the treatment effect. You could, for example, use the same time profile for the treatment effect at all doses, but estimate a different magnitude for each dose (an indirect response model with effect on kin, one level per dose). The estimated magnitudes would then replace the mean effect per dose in the standard MCP-MOD application.

The theory of MCP-MOD builds on the existence of an optimal contrast for a given true effect profile across your set of doses. Potentially there is a way to derive optimal tests based instead on an assumed distribution of exposures across all the doses included, combined with an assumed true dose-response curve. An interesting thought that I may actually explore! (I think the output would be a weight function w(exposure), so that you would get a test based on the sum of w(exposure)*observed_effect across all your data.)

There is no limit on how many candidate models you can use, so I don't see that as a problem. Planning your analysis across a wide range of potential dose-response functions, to make sure you have good power whatever the true dose-response is, is recommended (and a smart choice of candidate set can actually improve the power). You can include several Emax models with different parameter sets, combined with other types of functions such as sigmoid Emax. On your last bullet, a good way around it is to use model averaging instead of model selection.
If the model with more parameters only marginally improves the fit, the weight for that model will not be high. My experience is that model averaging generally performs better than model selection. A big advantage is also that if you end up with two equally good models, instead of presenting two results to your project, you combine them into one. Kind regards Magnus
_____________________________________________________________________________________________
Magnus Åstrand
Senior Clinical Pharmacometrician, Ph.D.
AstraZeneca Innovative Medicines | Quantitative Clinical Pharmacology
SE-431 83 Mölndal, Sweden
T: +46 (0)31 776 23 41 Mob: +46 (0)708 467 667
[email protected]
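The two ingredients above — an optimal contrast for the MCP step and weights for the model-averaging step — can be sketched as follows. This is a simplified illustration, not the full MCP-MOD machinery: the contrast formula assumes a balanced design with homoscedastic errors, and the weights shown are ordinary Akaike weights; all numeric values are invented.

```python
import numpy as np

def optimal_contrast(mu0):
    # Power-optimal contrast for a candidate mean profile mu0, under the
    # simplifying assumption of a balanced design with equal variances:
    # centre the profile, then scale to unit length.
    c = mu0 - mu0.mean()
    return c / np.linalg.norm(c)

def akaike_weights(aics):
    # Model-averaging weights from AIC values: exp(-0.5 * delta_AIC),
    # normalised to sum to one.
    delta = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Guessed Emax-shaped mean profile at doses 0, 25, 50, 100 (hypothetical).
mu0 = np.array([0.0, 0.5, 0.667, 0.8])
c = optimal_contrast(mu0)            # sums to ~0, has unit length

# A richer model that fits only marginally better (AIC 101.5 vs 100.0)
# receives little extra weight, which is the point about over-parameterised
# candidates not dominating under model averaging.
w = akaike_weights([100.0, 101.5, 110.0])
```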


RE: Using MCP-MOD in dose finding for Phase 3

From: Joseph Standing Date: March 23, 2015 technical
Dear Nele, one advantage of the biologically blind multiple-model approach in Phase 2B is that it shifts the blame from the pharmacologist to the statistician when you get the wrong dose in Phase 3. I agree that you should be concerned when people want to use dose rather than PK (which is not symmetrically distributed) when you can get it, and anyone suggesting they are not interested in biomarker or response time courses clearly needs to get out more.

My advice is to keep in mind that PK models are parameterised specifically to link to physiological processes: e.g. CL scales with organ function, and the Emax model is not just some function that a crazy pharmacologist dreamt up but can be derived from the law of mass action. Biomarker trajectories over time, predicted using turnover models linked to Emax models, will be more useful than an empirical function. Mechanistic pharmacometric models try to go beyond describing observed data, and may be used for extrapolating to places where we don't have data. I therefore have two general concerns about the "any old model" approach:

1. It is highly data reliant and assumes you have covered the whole dose-response range - this picks up on your point about adding in physiological knowledge. Perhaps some Bayesians might comment on how to sensibly incorporate prior information from earlier phases?

2. It is difficult to see how the resulting models can be used to extrapolate to special populations where we will never have Phase 2B-type data.

For pharma company strategic decision makers, what is needed is a systematic comparison of mechanistic PK/PD dose recommendations versus MCP-MOD across a large range of compounds. Perhaps this has already been done (I don't know the literature on this), but if not, then before putting all eggs in one basket it would seem sensible to perform such a comparison.
Such a comparison also needs to account for the fact that good mechanistic Phase 2B PK/PD models will help support paediatric and other special-population development, and dose individualisation (personalised medicine), which will become increasingly important as we leave the blockbuster era. Joe
Joseph F Standing
MRC Fellow, UCL Institute of Child Health
Antimicrobial Pharmacist, Great Ormond Street Hospital
Tel: +44(0)207 905 2370 Mobile: +44(0)7970 572435
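The turnover-plus-Emax structure mentioned in this thread (an indirect response model in which drug concentration stimulates the production rate kin) can be sketched numerically. This is a hypothetical example: the mono-exponential concentration profile and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def turnover_rhs(t, y, kin, kout, smax, sc50, conc):
    # Indirect-response (turnover) model: the drug stimulates production
    # (kin) through an Emax-type function of concentration, while the
    # response is eliminated at first-order rate kout.
    c = conc(t)
    return [kin * (1.0 + smax * c / (sc50 + c)) - kout * y[0]]

def conc(t, dose=100.0, v=10.0, ke=0.1):
    # Hypothetical one-compartment, mono-exponential concentration profile.
    return (dose / v) * np.exp(-ke * t)

kin, kout = 1.0, 0.1                       # baseline response = kin/kout = 10
sol = solve_ivp(turnover_rhs, (0.0, 96.0), [kin / kout],
                args=(kin, kout, 2.0, 5.0, conc), max_step=0.5)

# The response rises above baseline while drug is present, then returns
# toward baseline as concentration washes out -- the biomarker trajectory
# over time that a trial-endpoint-only analysis would not see.
peak = sol.y[0].max()
```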

RE: Using MCP-MOD in dose finding for Phase 3

From: Joseph Standing Date: March 23, 2015 technical
Dear Nele, One advantage of the biologically blind multiple model approach in Phase 2B is that it shifts the blame from the pharmacologist to the statistician when you get the wrong dose in Phase 3. Agree that you should be concerned when people want to use use dose rather than PK (not symmetrically distributed) when you can get it, and anyone suggesting they are not interested in biomarker or response time courses clearly needs get out more. My advice is to keep in mind that PK models are parameterised specifically to link to physiological processes: e.g. CL scales with organ function, and the Emax model is not just some function that a crazy pharmacologist dreamt up but can be derived from the law of mass action. Biomarker trajectories with time predicted using turnover models linked to Emax models will be more useful than an empirical function. Mechanistic pharmacometric models are trying to go beyond describing observed data, and may be used for extrapolating to places where we don't have data. I therefore have two general concerns about the "any old model" approach: 1. Highly data reliant and assumes you have covered the whole dose-response range - picks up on your point about adding in physiological knowledge. Perhaps some Bayesians might comment on how to sensibly incorporate prior information from earlier phases? 2. Difficult to see how resulting models can be used to extrapolate to special populations where we will never have Phase 2B type data. For pharma company strategic decision makers, what is needed is a systematic comparison of mechanistic PKPD dose recommendations versus MCPmod across a large range of compounds. Perhaps this has already been done (I don't know the literature on this), but if not and before putting all eggs in one basket then it would seem sensible to try to perform such a comparison. 
Also this needs to account for the fact that good mechanistic Phase 2B PKPD models will help support paediatric and other special population development, and in dose individualisation (personalised medicine) which as we leave the blockbuster era will become increasingly important. Joe Joseph F Standing MRC Fellow, UCL Institute of Child Health Antimicrobial Pharmacist, Great Ormond Street Hospital Tel: +44(0)207 905 2370 Mobile: +44(0)7970 572435
Quoted reply history
________________________________________ From: [email protected] [[email protected]] On Behalf Of Åstrand, Magnus [[email protected]] Sent: 20 March 2015 17:47 To: Mueller-Plock, Nele; [email protected] Subject: [NMusers] RE: Using MCP-MOD in dose finding for Phase 3 Dear Nele, here are some thoughts: The idea with the MCPmod is twofold, a) provide a procedure for testing for a treatment effect and in that test incorporate all doses studies and still maintain control of type I error. b) If significance in a) continue with framework for estimating the dose response either by model selection or model averaging among the significant candidate models. I think you could use the principles of MCPmod even if you use a longitudinal model with a time course of your treatment effect. You could for example use the same time profile for the treatment effect in all doses, but estimate different magnitude for each dose. (indirect response model with effect on kin, one level for each dose) The estimated magnitudes would then replace the mean effect in each dose in the standard MCPmod application. The theory of MCPmod builds on the existence of a optimal contrast for a given true effect profile across your set of doses. Potentially there is a way to derive optimal tests but instead base that on a assumed distribution of the exposure across all your doses included, combined with a assumed true dose response curve. An interesting thought that I actually may explore! (I think the output would be a weight function w(exposure) so that you would get a test based on w(exposure)*observed_effect, sum across all your data. There is no limit on how many candidate models you can use, so I don’t see that as a problem. Planning of your analysis across a wide range of potential DR functions to make sure you have good power whatever the true DR is recommended. 
(And actually, selecting a smart set of candidate models can improve the power.) You can include several Emax models with different parameter sets, and combine them with other types of functions, e.g. sigmoidal Emax.

On your last bullet: a good way around it is to use model averaging instead of model selection. If your model with more parameters only marginally improves the fit, the weight for that model will not be very high. My experience is that model averaging generally performs better than model selection. A big advantage is also that if you end up with two equally good models, instead of presenting two results to your project, you combine them into one.

Kind regards
Magnus Åstrand
Senior Clinical Pharmacometrician, Ph.D.
AstraZeneca Innovative Medicines | Quantitative Clinical Pharmacology
SE-431 83 Mölndal, Sweden
T: +46 (0)31 776 23 41 Mob: +46 (0)708 467 667 [email protected]
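Magnus's recommendation of model averaging is commonly implemented with AIC-based (Akaike) weights. The sketch below (the AIC values are invented) shows why a model that only marginally improves the fit does not dominate the average, while a clearly worse model gets little weight.

```python
import numpy as np

def akaike_weights(aics):
    """Turn per-model AIC values into normalised model-averaging weights."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()          # AIC difference versus the best model
    w = np.exp(-0.5 * delta)           # standard Akaike-weight kernel
    return w / w.sum()                 # normalise to sum to one

# Hypothetical AICs: the extra-parameter model is only marginally better.
aic = {"emax": 210.3, "sigmoid_emax": 209.8, "linear": 215.1}
w = akaike_weights(list(aic.values()))
for name, wi in zip(aic, w):
    print(f"{name}: weight = {wi:.3f}")
```

The model-averaged dose-response curve is then the weight-sum of the individual fitted curves, which is what lets two "equally good" models be combined into a single result.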
Dear Nele, Dear all, Interleaved below in Nele's e-mail (her text quoted with ">") you will find the input of Bjoern Bornkamp, a statistician from the Novartis Stats/Methods group, to whom I forwarded your mail. Bjoern was involved in the qualification discussion with EMA together with Jose Pinheiro and Frank Bretz. He is one of the implementers of the MCP-Mod methodology within Novartis and applies it routinely in Phase 2 studies. I am sure that Bjoern's answers will help. Bye Jean-Louis Steimer
+++++++++++++++++++++++
> Dear all, I am writing to you as we are currently discussing the implementation of the MCP-MOD approach for dose finding based on Phase 2B results and would like to hear your opinion on this approach.

Original MCP-Mod is not intended to be used in Phase III; special adaptations are necessary (closed testing).

> It would be good to get feedback from both statisticians and classical modelers. I have thought about the approach, and have a few problems seeing the advantage of the approach over complete population-PK/PD modeling.

These are two different approaches that complement each other. MCP-Mod is not intended to replace population-PK/PD modeling (the idea is to replace ANOVA-type models). I can see benefits in doing both a simple cross-sectional dose-response analysis (like MCP-Mod) and a complete dose-exposure-response characterization. If results are consistent between both approaches, one has more confidence overall in the analysis results than from either analysis alone. If results are not consistent, one needs to dig a bit deeper, but this is also useful information.

> MCP-MOD only uses trial endpoints, i.e. it ignores the time course of the treatment effect. I have a problem with this because there might be noise in the endpoint (e.g. if the effect has reached a plateau), which might potentially lead to the selection of the wrong model structure. Including the time course, as in PKPD modeling approaches, would detect that the deviation is just noise, and thus probably identify the right model structure despite this.

MCP-Mod can handle longitudinal data; see Pinheiro et al. (2014), Stat. Med. 33, 1646-61 for one example, which is also available in the DoseFinding R package.

> · It uses dose-response models instead of exposure-response models.

Correct. Again, MCP-Mod is not intended to replace population-PK/PD modeling. We have started thinking about how to extend the key ideas of MCP-Mod to exposure-response models and encourage the community to look into this.

> · It pre-specifies the model structure. While I understand that for pivotal trials prespecification is crucial, I would assume that Phase 2 is performed to allow exploration of the data to come up with the best model given the data we have. What happens if the true model is not part of the tested ones? What if we have new physiological insights that tell us about the model structure after we have seen the data? Do we then ignore what we know and fit all bad models, and if none gives a good description we do model averaging of bad models?

Excellent questions. Candidate models for MCP-Mod should always be selected based on the entire team's input, and operating characteristics should be evaluated upfront. More specifically, our experience shows that MCP-Mod is relatively robust if the true model is not part of the tested ones; see for example Pinheiro et al. (2006), J. Biopharm. Statist. 16, 639-656. This is also something that can be evaluated to some extent upfront (at the design stage) by simulations. Among other things, one advantage of pre-specification is that it makes the modelling more transparent/credible for externals (e.g. health authorities) if one specifies before seeing the data what will be done. But of course there is a trade-off: I am not sure it is possible to pre-specify a full population PK/PD analysis.

> · If we include a model with many parameters in the prespecification and only have a few dose strengths, wouldn't the model with more parameters be more likely to give a good fit (e.g. when comparing Emax to logistic), with the consequence that a wrong dose might be selected?

I am not sure I fully understand this question. Of course the model-selection/averaging step of MCP-Mod takes model complexity into account by using AIC/BIC (not only looking at model fit). Again, operating characteristics need to be evaluated in advance, which includes precision of target-dose estimation and also possible convergence problems if the number of parameters is too large.

> Colleagues from statistics recommend covering all potential models with different shapes in the candidate set to avoid potential bias in dose selection, but they argue that post-hoc model fitting leads to data-dredging and over-fitting, does not account for model uncertainty, and gives overly-optimistic results. I am wondering, however, what the difference in approach is if ALL potential models are considered anyway (which can lead to overfitting as well)?

There is a penalty for using many models in MCP-Mod: in the MCP step the multiplicity adjustment becomes larger when more models are included (in particular if they are very different), and in the Mod step the variance of the dose-response curve increases with the number of models, so one faces the usual variance/bias trade-off.

> Might a good solution be to combine PKPD modeling with MCP-Mod?

Yes, see above.
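The Mod step Bjoern refers to, after a candidate model is selected, amounts to fitting the dose-response curve and inverting it for a target dose. The single-model sketch below uses a generic least-squares fitter; the group means, starting values, and the `target_dose` helper are invented for illustration and are not MCP-Mod API.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(d, e0, emax, ed50):
    """Three-parameter Emax dose-response model."""
    return e0 + emax * d / (ed50 + d)

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
ybar  = np.array([0.12, 0.55, 0.78, 0.92, 1.01])   # hypothetical group means

# Generic least-squares fit (in practice one would weight by group SEs).
(e0, em, ed50), _ = curve_fit(emax_model, doses, ybar, p0=[0.0, 1.0, 0.5])

def target_dose(frac):
    """Smallest dose reaching `frac` of the fitted maximal effect:
    solve emax*d/(ed50+d) = frac*emax  =>  d = frac*ed50/(1-frac)."""
    return frac * ed50 / (1.0 - frac)

print(f"ED50 = {ed50:.2f}; dose for 80% of max effect = {target_dose(0.8):.2f}")
```

With model averaging, the same inversion would be applied to the weighted average of the fitted candidate curves rather than to one model.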

RE: Using MCP-MOD in dose finding for Phase 3

From: Mike K Smith Date: March 23, 2015 technical
Dear Nele, The EMA Scientific Advice on MCP-Mod is really worth reading here: http://www.ema.europa.eu/docs/en_GB/document_library/Regulatory_and_procedural_guideline/2014/02/WC500161027.pdf All materials from the qualification: http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/document_listing/document_listing_000319.jsp#section3

Some key sentences from the CHMP qualification opinion: "The MCP-Mod approach is efficient in the sense that it uses the available data better than the commonly applied pairwise comparisons. It is fully appreciated that certain benefits that may be derived from an MCP-Mod approach would also be derived from other model-based approaches and that modelling approaches are not restricted to those based on dose-response. MCP-Mod represents one tool in the toolbox of the well-informed drug developer. In that sense, this opinion does not preclude any other statistical methodology for model-based design and analysis of exploratory dose finding studies from being used."

In other words: many dose-response analyses seen by EMA use pairwise comparisons between doses, despite ICH-E4 saying (>20 years ago) "Study designs usually should emphasize elucidation of the dose-response function, not individual pairwise comparisons." So MCP-Mod meets that criterion ("one tool in the toolbox"), as do the other methods you discuss. Many other approaches could (and should) be used to properly characterise and learn about Dose-Exposure-Response. You can go to town using a "fit all models" Bayesian model averaging technique, but in the end "ALL models are wrong", and if you use that technique then I'd guess the majority would be sub-optimal. The question is: how best to learn what's going on for your drug in this population, so that you can then successfully pick a dose and confirm efficacy? I'm not sure there's a "best" or one-size-fits-all solution.
Prespecification means that you can easily write a protocol stats section and SAP, hand off the analysis to a third party, and expect a result within 3 days of database unblinding. Learning fully about the disease progression and pharmacology, and characterising benefit-risk, takes a little more work, time and careful consideration, however... Mike

Mike K. Smith Pharmacometrics Pfizer, Sandwich Tel: +44 (0)1304 643561

Re: Using MCP-MOD in dose finding for Phase 3

From: Alan Maloney Date: March 27, 2015 technical
Hi Nele/All, I wanted to (belatedly) add a few comments about MCP-MOD, the comments you have received, and Phase 2 design/analysis in general. In short, I agree with most of the observations made by yourself and others, and I would not want to use MCP-MOD (see my comments at http://www.ema.europa.eu/docs/en_GB/document_library/Other/2014/02/WC500161028.pdf and other comments, in particular those of Qing Liu). That said, I fully agree with Björn that it is clearly better than pairwise comparisons. Like Mike, I find it incredible that Phase 2 Dose-Exposure-Response (D-E-R) studies are still being designed WITHOUT planned D-E-R analyses... the D-E-R is the purpose of the study! Hopefully MCP-MOD will continue to generate the types of discussions you are having, which is great.

We can consider 5 aspects of Phase 2 design, which overlap with different aspects of MCP-MOD:
a) The models being considered (the "model space")
b) The design of the study (the "design/data space", e.g. minimum dose, maximum dose, dose spacing, N, observation schedule etc.)
c) The metric of design performance (e.g. the expected accuracy and precision of our potential D-E-R models under alternative study designs)
d) The ability of the design/data to detect a D-E-R relationship
e) The presentation of multiple credible models, and possible model averaging

a) The models being considered (the "model space")
I think that D-E-R models should normally be based around (longitudinal) sigmoidal Emax type models. The sigmoidal Emax is special! (1) That is, I do not wish to design or analyse my study using a linear (or log-linear, umbrella) model, so my candidate set of models clearly differs from MCP-MOD's. See Neil Thomas's work looking at the appropriateness of this model in drug development (http://www.ema.europa.eu/docs/en_GB/document_library/Presentation/2015/01/WC500179795.pdf).
Clearly there are multiple options around the longitudinal sigmoidal Emax model, including the formulation of the longitudinal component, treatment of missing data, location/parameterisation of random effects, correlation structure between timepoints, use of dose/exposure/concentration (and what PK model?), covariate effects etc. This is my "model space". In addition, we should put the study data into the framework of external analyses (e.g. a Model Based Meta Analysis (MBMA)), where we often have a good idea of parameters associated with the expected changes over time, the maximal effect for that class of compounds, the effect for a comparator arm etc. Thus we can think of both models where we use only the internal data AND models where we utilise information from external data. For example, if we wished to determine the precision of the D-E-R versus an active comparator, we could use the internal arm in the study as the reference (say an effect (95% CI) of 1 (0.3, 1.7)). However, an MBMA may put the comparator effect at 1.2 (0.9, 1.5). Clearly both references are credible, and the two results are not inconsistent with each other. Surely it makes sense to determine the doses to take forward after reviewing both sets of results. There are numerous examples (correlations between endpoints, turnover parameters, "system" components etc.) where we should look to leverage external data and understanding to augment our interpretation of the (relatively weak) data from a single study.

b) The design of the study (the "design/data space") and c) metrics of design performance
The design is critical, and we should always assess the performance of the combination of "model(s) + potential true parameter sets + design" for both efficacy and safety endpoints. This is always enlightening, and often reveals why Phase 2 studies do need to be large... getting high levels of precision on the D-E-R is hard, even when we use all the data!
To minimise N (and maximise information), the design should be adaptive. We should learn as we go, and target those dose levels which teach us most about the D-E-R for both efficacy and safety endpoints as we accrue data during the study. Safety D-E-R should not be a post-hoc (and weak) secondary analysis. A good example using multiple efficacy and safety parameters to adaptively find doses with potentially maximum utility in Phase 2 is worth reading (2), even though we may not love the models and subsequent dose selection methods therein for the Phase 3 part. Adaptive design is one example where having an initially simpler D-R model will be helpful for the dose adaptations, since it may be logistically challenging to get PK information in real time. A criticism of MCP-MOD is that if you wish to entertain models like the linear model at the analysis stage, you should also design/optimise your study around these models. Since I have no desire to fit a linear model, I can happily ignore it at the design stage and focus on designs which will do well over a set of plausible sigmoidal Emax type models.

d) The ability of the design/data to detect a D-E-R relationship
The MCP part of MCP-MOD is concerned with being able to reject the "no D-E-R" null hypothesis. Like the power to detect a given treatment effect, we can indeed discuss the power to detect a given D-E-R, but it is often quite pointless. Crudely, we could say we have detected a D-E-R if the 95% CI for Emax does not include zero, but this result is wholly useless from a prediction perspective, since our D-E-R predictions will range from a lot to near zero. That is, standard "powered" Phase 2 studies do not ensure useful predictions can be made. Thus the N required to obtain reasonably high precision on the D-E-R is MUCH higher than that needed to detect the D-E-R.
In short, if we are still trying to work out whether the D-E-R is flat at the final analysis stage, the design was probably flawed (or we should have stopped for futility a while ago).

e) The presentation of multiple credible models, and possible model averaging
Whilst I am not against using Bayesian model averaging per se, I think the individual results for each credible model should be presented simultaneously, to see if any key decisions (e.g. dose selections for Phase 3) depend on the choice of model (think of a forest plot). Clearly we hope they do not, but when they do, we need to know, since we may wish to dig further and/or make decisions that reflect our uncertainty (rather than simply presenting an "average" effect). Of note, the model set being combined is clearly key: e.g. if it is a set of PKPD models which differ only in covariate effects, then the results may all be very similar, whilst structurally different models from separate modelling groups (e.g. pharmacometrics, stats, systems pharmacologists) may yield a much wider distribution of predictions.

I'll stop there. Nele... if you feel any of your original questions remain unanswered, feel free to give me a call. Kind regards, Al

(1) See https://www.youtube.com/watch?v=E713BehI2fE
(2) See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3570871/ and related papers

Al Maloney Consultant Pharmacometrician Phone: +46 35 10 39 78 E-mail: [email protected]
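Alan's point in (d), that detecting a D-E-R needs far fewer subjects than estimating it precisely, can be illustrated with a small simulation. All design numbers below are invented, and the "detection" rule is a crude top-dose-vs-placebo z-test rather than a proper MCP contrast test.

```python
import numpy as np

rng = np.random.default_rng(1)
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
true_mean = doses / (0.5 + doses)    # true Emax curve: Emax = 1, ED50 = 0.5
sigma = 1.5                          # residual SD per subject

def one_trial(n_per_arm):
    """Simulate observed group means; return the top-dose-vs-placebo z statistic
    and the estimated top-dose effect."""
    ybar = true_mean + rng.normal(0.0, sigma / np.sqrt(n_per_arm), size=doses.size)
    diff = ybar[-1] - ybar[0]
    return diff / (sigma * np.sqrt(2.0 / n_per_arm)), diff

results = {}
for n in (50, 200):
    zs, effects = zip(*(one_trial(n) for _ in range(2000)))
    results[n] = (float(np.mean(np.array(zs) > 1.96)),  # crude "detection" rate
                  float(np.std(effects)))               # SD of the effect estimate
    print(f"n/arm={n}: detection rate ~{results[n][0]:.2f}, "
          f"SD of top-dose effect ~{results[n][1]:.2f}")
```

With the larger sample the trial "detects" the D-E-R essentially every time, yet the top-dose effect estimate still carries a substantial standard deviation relative to the true effect of about 0.89, which is Alan's gap between detection and useful prediction.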