Question
I have a 64-node cluster, running PBS Pro. If I submit many hundreds of jobs, I can get 64 running at once. This is great, except when all 64 jobs happen to be nearly I/O bound, and are reading/writing to the same disk. In such cases, I'd like to be able to still submit all the jobs, but have a max of (say) 10 jobs running at a given time. Is there an incantation to qsub that will allow me to do such, without having administrative access to the cluster's PBS server?
Answer 1:
In TORQUE you can do this by setting a slot limit on a job array, as long as you can arrange the jobs as an array:
qsub -t 0-99%10 script.sh
would queue 100 array members but limit them to 10 running at once. If PBS Pro has an equivalent to this, you can use that.
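For concreteness, here is a minimal sketch of what such an array job might look like under TORQUE (the script name array.sh, the data paths, and the process binary are hypothetical; $PBS_ARRAYID is the index TORQUE exposes to each array member):

#!/bin/bash
#PBS -N io-bound-array
#PBS -l walltime=01:00:00
# Each array member works on its own slice of the shared disk,
# selected by the TORQUE array index.
INPUT=/shared/data/input.${PBS_ARRAYID}
OUTPUT=/shared/data/output.${PBS_ARRAYID}
./process "$INPUT" > "$OUTPUT"

Submitting it with qsub -t 0-99%10 array.sh queues 100 array members while keeping at most 10 of them running at any one time.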
Answer 2:
You could make the jobs dependent on each other, or schedule them to start at different times.
Otherwise, your admin can reduce the number of jobs you are allowed to run simultaneously.
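The dependency idea can be scripted without admin help: submit the first 10 jobs freely, then make each later job wait (via -W depend=afterany:<jobid>) on the job submitted 10 positions earlier, so roughly 10 jobs are eligible to run at any time. A sketch, assuming bash, jobs named job1.sh through job100.sh (hypothetical names), and that qsub prints the new job ID on stdout:

#!/bin/bash
WIDTH=10          # desired cap on concurrently running jobs
ids=()
for i in $(seq 1 100); do
    if [ ${#ids[@]} -lt "$WIDTH" ]; then
        # The first WIDTH jobs are submitted with no dependency.
        id=$(qsub "job${i}.sh")
    else
        # Job i may only start once job (i - WIDTH) has finished.
        prev=${ids[$((i - WIDTH - 1))]}
        id=$(qsub -W depend=afterany:"${prev}" "job${i}.sh")
    fi
    ids+=("$id")
done

The trade-off is that jobs finish in roughly submission order, and one unusually slow or held job near the front can delay everything chained behind it, even when idle nodes are available.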
Source: https://stackoverflow.com/questions/2053281/how-to-limit-number-of-concurrently-running-pbs-jobs