Leonid,

Very similar to my experience running large jobs (24+ hours on one core) on a cluster of nine 6-core machines: typically about 80% efficiency, by exactly the same metric. I think that is the spectrum: ~90% on a single machine, ~80% on a cluster, and it looks like maybe 50-60% on the cloud. The only hardware configuration missing is machines connected over a VPN; I'd guess its efficiency would fall between cluster and cloud (and it might be a very good business model for small users to share resources).
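To make the comparison concrete, here is a rough sketch of what those efficiencies mean for a 24-hour single-core job; the core counts are illustrative assumptions, not measurements:

# Rough illustration only: core counts below are assumed for the example.
def wall_clock_hours(single_core_hours, n_cores, efficiency):
    # efficiency is the metric discussed below: speedup / number of cores
    return single_core_hours / (n_cores * efficiency)

job = 24.0  # hours on one core
for label, cores, eff in [("single machine", 12, 0.90),
                          ("local cluster", 54, 0.80),   # e.g. nine 6-core machines
                          ("cloud", 96, 0.55)]:
    print(f"{label:14s}: {wall_clock_hours(job, cores, eff):5.1f} h on {cores} cores")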
Your mileage may vary.
Mark Sale MD
President, Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185
A carbon-neutral company
See our real time solar energy production at: http://enlighten.enphaseenergy.com/public/systems/aSDz2458
-------- Original Message --------
Subject: Re: [NMusers] CycleCloud BigScience Challenge giving away
~8-hours on 30000 core cluster for research
From: Leonid Gibiansky <
To: Bill Knebel <
"... on the same computer improved the speed proportionally to the number of
processors, with the efficiency (for a 12-processor run) in the range of 85
to 95% for all methods except BAYES, which had parallelization
efficiency of about 70%." Efficiency was defined as
100% * (single-CPU run time) / (multiple-CPU run time) / (number of processors).
Roughly, the model run on 11 processors was 10 times faster than the
single-processor run of the same model.
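As a quick check of that arithmetic, here is a minimal sketch; the run times are hypothetical, chosen only to reproduce the ~10x speedup on 11 processors mentioned above:

# Hypothetical run times, chosen only to match the ~10x speedup on 11 processors.
single_cpu_hours = 22.0   # assumed single-processor run time
parallel_hours = 2.2      # assumed run time on 11 processors
n_processors = 11

speedup = single_cpu_hours / parallel_hours      # 10.0
efficiency = 100.0 * speedup / n_processors      # ~91%, within the 85-95% range quoted
print(f"speedup {speedup:.1f}x, efficiency {efficiency:.0f}%")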
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
On 11/14/2011 9:54 AM, Bill Knebel wrote:
> David,
>
> Some limited benchmarking results are listed at the bottom of this
> email. It is also important to remember that there are ways that
> cloud computing helps beyond parallel NONMEM. Cloud computing provides
> on-demand, user-specific clusters that can grow and shrink with user
> requirements. This allows large bootstrap runs (500 - 1000 jobs) to
> complete in roughly the time it takes to run one non-parallel job. Users
> can also evaluate model variants simultaneously, with cluster size
> increased or decreased as needed, and they do not have to compete for
> resources (compute cores) with other users because the clusters are
> user- and/or project-specific. The performance gains are evident in
> parallel NONMEM jobs and single modeling projects, but it is important
> to look beyond the speed-up of individual jobs or projects and towards
> the impact of cloud computing on the entire portfolio of modeling and
> simulation projects in a given group or company.
>
> Bill
>
>
> Model 1 - ADVAN6, 1000 subjects, dual linear and non-linear elimination
> Cores    Runtime (hr)
> 1        80.7
> 8        16.8
> 16       8.9
> 24       6.2
> 48       3.8
> 96       2.5
>
> Model 2 - ADVAN6, 70 subjects, PKPD model
> Cores    Runtime (hr)
> 1        4
> 8        1.1
> 16       0.66
> 24       0.4
> 48       0.31
>
> Cores = number of compute cores (value of the "NODES=" argument in the
> NONMEM .pnm parafile)
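Applying the same efficiency metric to the Model 1 runtimes above (a minimal sketch; the runtimes are as transcribed here, so re-check them against the original post):

# Speedup and parallelization efficiency from the Model 1 runtimes quoted above.
runtimes_hr = {1: 80.7, 8: 16.8, 16: 8.9, 24: 6.2, 48: 3.8, 96: 2.5}  # cores -> hours

t1 = runtimes_hr[1]
for cores, hours in sorted(runtimes_hr.items()):
    speedup = t1 / hours
    efficiency = 100.0 * speedup / cores
    print(f"{cores:3d} cores: {speedup:5.1f}x speedup, {efficiency:3.0f}% efficiency")

For the 8-24 core runs this lands roughly in the 50-60% range Mark mentions above for cloud runs.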
>
> On Nov 9, 2011, at 9:28 PM, David Foster wrote:
>
>> I agree with Julia,
>>
>> Thanks for this Bill, but quantitative benchmarking would be very much
>> appreciated.
>>
>> Regards,
>>
>> David
>>
>>
>> On 10/11/11 9:55 AM, "Ivashina, Julia" <
>> Bill,
>>
>> It is nice to hear about the speed improvements you obtained with
>> NONMEM.
>>
>> Could you please describe the gains in performance in a qualitative
>> manner, with the model examples you used? I think everyone will
>> benefit from such NONMEM 7.2 benchmarking.
>>
>> I posted a similar question in March but not many responded.
>>
>> Thanks,
>> Julia
>>
>> -----Original Message-----
>> From: owner-nmusers
>> >
>> > William J. Bachman, Ph.D.
>> > Director, Pharmacometrics R&D
>> > Icon Development Solutions
>> > 6031 University Blvd., Suite 300
>> > Ellicott City, MD 21043
>> > Office 215-616-8699
>> > >
>> >
>> >
>> >
>> > From:
>> > Sent: Friday, November 04, 2011 2:12 PM
>> > To: Nick Holford
>> > Cc:
>> >
>> > http://bit.ly/BigScience
>> >
>> > The application process is simply answering 4 questions (takes less
>> > than half an hour): state who you are, what your research is, why it
>> > is important, and how you currently run computation. The application
>> > is available here:
>> > http://cyclecomputing.com/big-science-challenge/overview
>> >
>> > So far, response has been great, and Inside HPC covered descriptions
>> > of some of the recent applications we've received:
>> > http://insidehpc.com/2011/10/27/24209/
>> > http://blog.cyclecomputing.com/
>> >
>> > Submissions are due by November 7th, so submit early and we hope to
>> > help some of you get some BigScience done quickly.
>> >
>> > Best,
>> > Jason
>> >
>> > --
>> >
>> >
>> > =================
>> > Jason A. Stowe
>> > cell: 607.227.9686
>> > main: 888.292.5320
>> >
>> > http://twitter.com/jasonastowe/
>> > http://twitter.com/cyclecomputing/
>> >
>> > Cycle Computing, LLC
>> > Leader in Open Compute Solutions for Clouds, Servers, and Desktops
>> > Enterprise Condor Support and Management Tools
>> >
>> > http://www.cyclecomputing.com
>> > http://www.cyclecloud.com
>> >
>> >
>> > --
>> > Nick Holford, Professor Clinical Pharmacology
>> > Dept Pharmacology & Clinical Pharmacology
>> > University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
>> > tel:+64(9)923-6730 fax:+64(9)373-7090 mobile:+64(21)46 23 53
>> > email: n.holford ; http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
>> >
>> >
>> >