NONMEM/PsN benchmark for SGE expansion
Dear all,
We would like to benchmark our new SGE cluster, and would appreciate hearing
from anyone who has performed a similar task and can share their findings.
We use NONMEM 7.1.2 with PsN 3.2.12 in two cluster environments.
Our older environment consists of 9 quad-core machines (about 40 work nodes,
counting the head node); the newer one has over 2000 work nodes with 512 CPUs each.
These are the questions we'd like to answer:
* What is a reasonable time one should expect to shave off by moving
PK/PD analysis from the smaller cluster to the bigger one?
* What type of analysis is the most sensitive to an increase in number of
work nodes?
* What should be the expected gain from increasing the -threads value
50-fold?
* What parts of NONMEM/PsN are the most optimized for parallel execution?
* What are the scenarios where gain from parallelization is the biggest?
The initial bootstrap test we ran showed some speedup, although the model
we chose did not run 50 times faster (2000/40 = 50).
Some of the reasons: pre-processing (creation of the bootstrap samples), Fortran
compilation, and collation of the results are not spread across work nodes.
Since the compute time for each job was small (5-10 seconds), the
overhead of job submission was more significant.
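A rough way to see why the observed speedup falls short of 50x is a simple Amdahl-style model: serial pre/post-processing and a fixed per-job scheduler overhead cap the gain from adding nodes. The sketch below uses illustrative numbers (they are assumptions, not measurements from our cluster):

```python
# Rough speedup model for an embarrassingly parallel PsN run (e.g. bootstrap):
# total wall-clock = serial setup/collection time + parallel "waves" of jobs,
# where each submitted job also pays a fixed scheduler overhead.
def run_time(samples, nodes, job_secs, overhead_secs, serial_secs):
    """Estimated wall-clock time for `samples` jobs spread over `nodes` nodes."""
    waves = -(-samples // nodes)  # ceiling division: how many batches of jobs
    return serial_secs + waves * (job_secs + overhead_secs)

# Illustrative numbers: 1000 bootstrap samples, 7.5 s per model fit,
# 5 s SGE submission overhead per job, 60 s of serial pre/post-processing.
small = run_time(1000, 40, 7.5, 5.0, 60.0)    # 40-node cluster
big = run_time(1000, 2000, 7.5, 5.0, 60.0)    # 2000-node cluster
print(small, big, small / big)
```

With these assumed numbers the large cluster is only about 5x faster, not 50x: once every sample runs in its own wave, the serial part and the per-job overhead dominate, which matches what we observed with 5-10 second jobs.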
We also use the vpc, npc, cdd, llp, sse and scm tools, so we would like to get
some ideas on the parallelization capability of these functions as well. Any
benchmarking results or ideas that you can share are very much appreciated.
Thank you,
Julia