qsub

Why doesn't Torque qsub create an output file?

好久不见. Submitted on 2019-12-11 04:17:30

Question: I am trying to start a task on a cluster via Torque PBS with the command qsub -o a.txt a.sh. The file a.sh contains a single line: hostname. After running qsub, the qstat command gives the following output:

Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
302937.voms               a.sh             user            00:00:00 E long

After 5 seconds, qstat returns empty output (no jobs in the queue). The command qsub --version reports "version: 2.5.13", and which qsub reports /usr/bin/qsub.
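One common cause (an assumption, not confirmed by the question) is a relative -o path: PBS only copies stdout back when the job finishes (the E state in qstat means "exiting"), and a relative path may be resolved somewhere unexpected on the execution host. A minimal sketch using an absolute path, reusing the question's a.sh and a.txt names:

```shell
# Build the submission command with an absolute output path, so the file
# lands in the current directory once the job exits and output is copied back.
out="$PWD/a.txt"
cmd="qsub -o $out a.sh"
echo "$cmd"
```

On a real cluster you would run the command itself rather than echoing it; checking `qstat -f <jobid>` while the job is still queued shows where the output is due to be delivered.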

Can a snakemake input rule be defined with different paths/wildcards

江枫思渺然 Submitted on 2019-12-11 04:08:54

Question: I want to know if one can define an input rule that has dependencies on different wildcards. To elaborate, I am running this Snakemake pipeline on different fastq files, using qsub to submit each job to a different node:

1. fastqc on the original fastq; no downstream dependency on other jobs
2. adapter/quality trimming to generate a trimmed fastq
3. fastqc_after on the trimmed fastq (output from step 2); no downstream dependency
4. STAR-RSEM pipeline on the trimmed fastq (output from step 2 above)
5. rsem and
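Independent of how the rule wiring is resolved, a pipeline like the one above is typically dispatched to PBS by wrapping each rule's job in qsub via Snakemake's cluster mode. A sketch (the resource string and job cap are assumptions, not taken from the question):

```shell
# Each Snakemake rule invocation is submitted as its own qsub job;
# --jobs caps how many jobs are in flight at once.
cmd='snakemake --cluster "qsub -l walltime=1:00:00" --jobs 32'
echo "$cmd"
```

Snakemake's own dependency graph (fastqc, trimming, fastqc_after, STAR-RSEM) then determines submission order, so the per-step dependencies above do not need to be encoded in qsub itself.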

how to limit number of concurrently running PBS jobs

放肆的年华 Submitted on 2019-12-10 12:48:20

Question: I have a 64-node cluster running PBS Pro. If I submit many hundreds of jobs, I can get 64 running at once. This is great, except when all 64 jobs happen to be nearly I/O-bound and are reading from and writing to the same disk. In such cases, I'd still like to submit all the jobs, but have a maximum of (say) 10 running at any given time. Is there an incantation to qsub that will allow me to do this, without having administrative access to the cluster's PBS server?

Answer 1: In TORQUE you can do
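The truncated TORQUE answer most likely refers to job-array slot limits, where a percent suffix caps concurrency. A sketch (the script name and counts are placeholders; this is TORQUE's -t syntax, not PBS Pro's -J):

```shell
# TORQUE job array: 100 tasks in total, but the %10 slot limit keeps
# at most 10 of them running simultaneously, however many nodes are free.
cmd="qsub -t 1-100%10 job.sh"
echo "$cmd"
```

This only works if the jobs can be expressed as a single array; unrelated scripts would need to be folded into one wrapper script that dispatches on the array index.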

PBSPro qsub output error file directed to path with jobid in name

狂风中的少年 Submitted on 2019-12-10 11:35:33

Question: I'm using PBSPro and am trying to use the qsub command line to submit a job, but I can't seem to get the output and error files named how I want them. Currently using:

qsub -N ${subjobname_short} \
     -o ${path}.o${PBS_JOBID} -e ${path}.e${PBS_JOBID} ... submission_script.sc

where $path=fulljobname (i.e. more than 15 characters). I'm aware that $PBS_JOBID won't be set until after the job is submitted... Any ideas? Thanks

Answer 1: The solution I came up with was to follow the qsub command with a qalter command, like so:

jobid=$(qsub -N ${subjobname_short} submission_script.sc)
qalter -o ${path}.o${jobid} -e ${path}.e${jobid}
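The qalter approach from the answer can be sketched end to end. Here qsub is stubbed so the sketch runs without a PBS server; the fake job id and the job names are placeholders:

```shell
# Stub: a real qsub prints the new job's id on stdout, which is what we capture.
qsub() { echo "302937.voms"; }

path=fulljobname
# Submit first, then rename the output/error paths once the job id is known.
jobid=$(qsub -N shortname submission_script.sc)
cmd="qalter -o ${path}.o${jobid} -e ${path}.e${jobid} ${jobid}"
echo "$cmd"
```

On a real cluster the qalter must run before the job starts writing output, so this works best for jobs that sit in the queue at least briefly.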

How to set qsub to run job2 five seconds (or any desired delay) after job1 finishes?

别等时光非礼了梦想. Submitted on 2019-12-08 03:57:57

Question: Currently what I do is estimate when job1 will be finished, then run qsub for job2 with the "#PBS -a [myEstimatedTime+5]" directive. But I'm not happy with this approach, since the estimate is sometimes too high or too low. Is there any better solution?

Answer 1: Add a time-killing job that runs for 5 minutes between job1 and job2. The cluster's running order will be job1 -> job (waiting 5 minutes) -> job2.

Answer 2: The best way to do this is through job dependencies. You can submit the jobs: job1id=`qsub
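The dependency approach from Answer 2 can be sketched as follows. qsub is stubbed so the sketch runs without a cluster; handling the extra 5 seconds with a sleep at the top of job2's script is my assumption about how you would add the delay:

```shell
# Stub: a real qsub prints the id of the job it just submitted.
qsub() { echo "1001.server"; }

job1id=$(qsub job1.sh)
# afterok: job2 becomes eligible only once job1 has exited with status 0.
# For the extra 5-second gap, begin job2's script with: sleep 5
cmd="qsub -W depend=afterok:${job1id} job2.sh"
echo "$cmd"
```

Unlike the -a estimate, this never starts job2 early and never wastes more than the scheduler's own dispatch latency.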

Syntax for submitting a qsub job without an actual job file?

孤者浪人 Submitted on 2019-12-08 01:32:05

Question: I would like to submit qsub jobs on the fly without creating discrete job files. Say I have a Python script called "get_time.py" that simply reports the time. Instead of making a submission script like this:

cat > job.sub <<eof
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR
get_time.py
eof

...and then submitting the job:

qsub job.sub

I would like to be able to bypass the file-creation step, and I'd imagine the construct would be something like this:

qsub -d . -e get_time.py

where -e is my imaginary parameter that tells qsub that the following is code to be sent to the scheduler, instead of
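qsub can read the job script from stdin when no script file is given, which removes the temporary file entirely. A sketch, with qsub stubbed (it just echoes what it receives) so the sketch runs without a PBS server:

```shell
# Stub: a real qsub would queue this script body instead of printing it back.
qsub() { cat; }

body='cd $PBS_O_WORKDIR
get_time.py'
# Directives like walltime move onto the command line; the body arrives on stdin.
printf '%s\n' "$body" | qsub -l walltime=1:00:00
```

The same pattern works with a heredoc piped straight into qsub, so the #PBS directive lines can stay in the stream if preferred.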

Exclude certain nodes when submitting jobs with qsub / torque?

∥☆過路亽.° Submitted on 2019-12-07 01:02:35

Question: When submitting batch jobs with qsub, is there a way to exclude a certain node (by hostname)? Something like:

# this is just a pseudo-command:
qsub myscript.sh --exclude computer01

Answer 1: Depending on how many nodes you would like available, there are a couple of options. You could specify by name a specific set of acceptable nodes:

qsub -l nodes=n006+n007

To exclude, say, one node out of a group, I would ask the administrator to assign a dummy property to all nodes but the one you want excluded
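The answer's two options as concrete commands (the node names come from the answer; the property name okhost is a placeholder the administrator would choose):

```shell
# Option 1: whitelist acceptable nodes by name.
cmd1="qsub -l nodes=n006+n007 myscript.sh"
# Option 2: the admin tags every node except computer01 with a dummy
# property (here 'okhost'); jobs then request that property.
cmd2="qsub -l nodes=1:okhost myscript.sh"
echo "$cmd1"
echo "$cmd2"
```

Option 2 scales better on large clusters, since the whitelist in option 1 must enumerate every allowed node.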

Wait for all jobs of a user to finish before submitting subsequent jobs to a PBS cluster

谁说胖子不能爱 Submitted on 2019-12-06 02:44:48

Question: I am trying to adjust some bash scripts to make them run on a (PBS) cluster. The individual tasks are performed by several scripts that are started by a main script. So far, this main script starts multiple scripts in the background (by appending &), making them run in parallel on one multi-core machine. I want to substitute these calls with qsubs to distribute the load across the cluster nodes. However, some jobs depend on others being finished before they can start. So far, this was achieved by
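One way to express "start only after all previously submitted jobs finish" is to collect the job ids as the main script submits them and chain them into a single afterok dependency list. A sketch with a stubbed qsub so it runs without a cluster (task names and ids are placeholders):

```shell
# Stub: prints a distinct fake job id per call, as a real qsub would.
qsub() { echo "${n}.server"; }

ids=""
for n in 1001 1002 1003; do
  # Append each submitted job's id to the colon-separated dependency list.
  ids="${ids}:$(qsub task.sh)"
done
# followup.sh becomes eligible only after every listed job exits successfully.
cmd="qsub -W depend=afterok${ids} followup.sh"
echo "$cmd"
```

This replaces the `&` + wait pattern of the original scripts: the scheduler, not the main script, enforces the ordering, so the main script can exit immediately after submission.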