starting slurm array job with a specified number of nodes

Submitted by 北战南征 on 2019-12-25 04:12:12

Question


I’m trying to align 168 sequence files on our HPC cluster using Slurm version 14.03.0. I’m only allowed to use a maximum of 9 compute nodes at once, to keep some nodes open for other people.

I changed the file names so I could use the array function in sbatch. The sequence files look like this: Sequence1.fastq.gz, Sequence2.fastq.gz, … Sequence168.fastq.gz

I can’t seem to figure out how to tell it to run all 168 files, 9 at a time. I can get it to run all 168 files, but it uses all the available nodes, which will get me in trouble since this is going to run for a few days.

I’ve found that I should be able to use “--array=1-168%9” to specify how many tasks run at once, but that throttle was implemented in a newer version of Slurm than the one on our cluster. Is there an alternate way to get this functionality? I've been trying things and pulling my hair out for a couple of weeks.
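For reference, on a newer Slurm release the throttle is written directly into the array directive; this is the syntax that 14.03 does not accept:

#SBATCH --array=1-168%9   # at most 9 array tasks running at any one time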

The way I’m trying to run it is:

#!/bin/bash
#SBATCH --job-name=McSeqs
#SBATCH --nodes=1        # one node per array task
#SBATCH --array=1-168    # one array task per sequence file

srun alignmentProgramHere Sequence${SLURM_ARRAY_TASK_ID}.fastq.gz -o outputdirectory/

Thanks! Matt


Answer 1:


So I think I figured out a way to make it work. The trick is that the sbatch options all get passed to each array task, so I used the --exclude option to keep every task off all but 9 of the compute nodes. Since each task takes a whole node, at most 9 of my files run at once, leaving the excluded nodes open for other people.

#!/bin/bash
#SBATCH --job-name=McSeqs
#SBATCH --nodes=1                  # one node per array task
#SBATCH --array=1-168              # one array task per sequence file
#SBATCH --exclude=cluster[10-20]   # every task avoids these nodes, leaving 9 usable

srun alignmentProgramHere Sequence${SLURM_ARRAY_TASK_ID}.fastq.gz -o outputdirectory/
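As a quick sanity check (assuming the standard Slurm client tools are available), you can list the node names to pick the exclude range, then confirm the array never spreads beyond the 9 remaining nodes:

sinfo -N -o "%N %T"          # list node names and states to choose the --exclude range
squeue -u $USER -t RUNNING   # confirm no more than 9 tasks are running at once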


Source: https://stackoverflow.com/questions/28420817/starting-slurm-array-job-with-a-specified-number-of-nodes
