Dynamically Sized MPI NONMEM Runs?



From: Bill Denney
Date: September 19, 2016
Tags: technical
Hi,

I've been setting up Docker for NONMEM with NMQual and PsN ( https://github.com/billdenney/Pharmacometrics-Docker ). The Docker containers work with MPI in a static fashion: for example, I can start a run with 4 cores, but I can't add or remove cores from the run. MPI appears to support dynamically resizing jobs [1], but I don't see a way to do this within NONMEM.

I'd like to maximize my NONMEM license use by always keeping all the licensed cores busy. I recognize that I could just run jobs sequentially using 4 cores each, but if one model takes significantly longer than the others, I'd like the others to go ahead and finish while the long-running job continues. Examples:

Example 1: I start 1 NONMEM job with 4 parallel threads, and all threads are in use through completion of the job. (This currently works.)

Example 2: I start 4 NONMEM jobs with 1 thread each, and each job runs to completion. (This currently works.)

Example 3: I start 4 NONMEM jobs with up to 4 threads each, but to stay within my license, I want each job to use only 1 core until some of the jobs finish. Job 1 completes first, so I'd like Job 2 to expand to 2 cores. Job 2 completes, so I'd like Jobs 3 and 4 to expand to 2 cores each. Job 3 completes, so I'd like Job 4 to expand to 4 cores. (This doesn't work right now.)

Example 4: I start 3 NONMEM jobs with up to 4 threads each, but to stay within my license, I want Job 1 to start with 2 cores and Jobs 2 and 3 to start with 1 core each. As jobs finish, they expand as in Example 3.

[1] http://www.netlib.org/utk/people/JackDongarra/WEB-PAGES/SPRING-2012/Lect05-dynamicprocesses.pdf

Thanks,

Bill

------------------------------------------------------------------------
*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC < http://www.humanpredictions.com >
+1-617-899-8123
[email protected]
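To make the desired allocation policy in Examples 3 and 4 concrete, here is a hypothetical sketch (not part of NONMEM, PsN, or MPI) that simulates only the license arithmetic: a fixed pool of licensed cores is split as evenly as possible across the active jobs, and freed cores are handed back to the remaining jobs as each one finishes. All names (`reallocate`, the `jobN` labels) are illustrative.

```python
# Hypothetical sketch: simulate the core-reallocation policy from Example 3 --
# 4 jobs, a 4-core license, 1 core each to start, and freed cores redistributed
# to the remaining jobs as each job finishes. Not tied to NONMEM/PsN/MPI.

def reallocate(active_jobs, license_cores):
    """Split license_cores as evenly as possible across the active jobs."""
    n = len(active_jobs)
    if n == 0:
        return {}
    base, extra = divmod(license_cores, n)
    # Earlier-started jobs get any remainder core first.
    return {job: base + (1 if i < extra else 0)
            for i, job in enumerate(active_jobs)}

# Walk through Example 3: jobs happen to finish in order 1, 2, 3, 4.
active = ["job1", "job2", "job3", "job4"]
history = []
while active:
    alloc = reallocate(active, 4)   # 4 licensed cores in total
    history.append(alloc)
    active.pop(0)                   # the first remaining job finishes

for step in history:
    print(step)
```

Running this reproduces the progression in Example 3: all four jobs start with 1 core, then {job2: 2, job3: 1, job4: 1}, then {job3: 2, job4: 2}, and finally job4 alone with all 4 cores. The hard part, of course, is not this arithmetic but getting a running NONMEM/MPI job to accept the new core count.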