job-scheduling

PBS programming

Submitted by 醉酒当歌 on 2020-01-05 14:06:29
问题 (Question): Some short and probably stupid questions about PBS: 1. I submit jobs using qsub job_file; is it possible to submit a (sub)job inside a job file? 2. I have the following script: qsub job_a; qsub job_b. Before launching job_b, it would be great to have the results of job_a finished. Is it possible to put some kind of barrier or some other workaround in place so that job_b is not launched until job_a has finished? Thanks. 回答1 (Answer 1): Answer to the first question: typically you're only allowed to submit jobs from the…
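The answer above is cut off at the submission-host point; for the second question, common PBS flavors (Torque, PBS Pro) support job dependencies via qsub -W depend=afterok:<jobid>, so job_b only starts after job_a exits successfully. A minimal sketch of building such a submission command; the job-script names come from the question, while the job id shown is hypothetical:

```python
import shlex

def qsub_with_dependency(job_script, parent_job_id=None):
    """Build a qsub command line; with parent_job_id set, the job
    starts only after that job exits successfully (afterok)."""
    cmd = ["qsub"]
    if parent_job_id is not None:
        cmd += ["-W", f"depend=afterok:{parent_job_id}"]
    cmd.append(job_script)
    return " ".join(shlex.quote(part) for part in cmd)

first = qsub_with_dependency("job_a")
second = qsub_with_dependency("job_b", parent_job_id="12345.pbsserver")
```

In a real script you would capture the job id printed by the first qsub and feed it into the second submission.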

How to optimize multithreaded program for use in LSF?

Submitted by 岁酱吖の on 2020-01-02 10:00:24
问题 (Question): I am working on a multithreaded number-crunching app; let's call it myprogram. I plan to run myprogram on IBM's LSF grid. LSF allows a job to be scheduled on CPUs from different machines. For example, bsub -n 3 ... myprogram ... can allocate two CPUs from node1 and one CPU from node2. I know that I can ask LSF to allocate all 3 cores on the same node, but I am interested in the case where my job is scheduled onto different nodes. How does LSF manage this? Will myprogram be run in two different…
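For context on what LSF actually does with a multi-host allocation: a plain multithreaded binary is only started on the first execution host unless it is launched across hosts with blaunch or MPI, and requesting bsub -n 3 -R "span[hosts=1]" forces all slots onto one node. LSF exposes the allocation to the job in the LSB_MCPU_HOSTS environment variable ("hostA 2 hostB 1"); a sketch of inspecting it, with hostnames taken from the question's example:

```python
def parse_lsb_mcpu_hosts(value):
    """Parse LSF's LSB_MCPU_HOSTS ("hostA 2 hostB 1") into {host: n_cpus}."""
    tokens = value.split()
    return {tokens[i]: int(tokens[i + 1]) for i in range(0, len(tokens), 2)}

alloc = parse_lsb_mcpu_hosts("node1 2 node2 1")  # as in the question
```

In a real job you would read the variable with os.environ["LSB_MCPU_HOSTS"] and size the thread pool to the CPUs granted on the local host.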

multiple spark application submission on standalone mode

Submitted by 。_饼干妹妹 on 2019-12-31 05:31:06
问题 (Question): I have 4 Spark applications (finding the word count of a text file) written in 4 different languages (R, Python, Java, Scala): ./wordcount.R ./wordcount.py ./wordcount.java ./wordcount.scala. Spark works in standalone mode with: 1. 4 worker nodes; 2. 1 core for each worker node; 3. 1 GB memory for each node; 4. core_max set to 1. ./conf/spark-env.sh: export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=1" export SPARK_WORKER_OPTS="-Dspark.deploy.defaultCores=1" export SPARK_WORKER_CORES=1 export SPARK_WORKER…
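In standalone mode a Spark application grabs all available cores by default, so a second submission sits in WAITING until the first finishes; capping each application at 1 core (which the defaultCores setting above attempts) leaves cores free for the other three apps. The same cap can be set per submission with the --total-executor-cores flag (the CLI form of spark.cores.max). A sketch of building such a command; the master URL is a placeholder:

```python
def spark_submit_cmd(app, master="spark://master:7077", cores=1):
    """spark-submit line capping the cores one standalone application
    may take (--total-executor-cores mirrors spark.cores.max)."""
    return (f"spark-submit --master {master} "
            f"--total-executor-cores {cores} {app}")

cmd = spark_submit_cmd("wordcount.py")
```

With each of the four apps capped at one core on a 4-core cluster, all four can hold resources concurrently.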

C# - Windows Service with awareness of System Time

Submitted by 我的未来我决定 on 2019-12-30 06:18:12
问题 (Question): I am thinking about writing a Windows service that will turn certain functionality on or off at times specified by the user (using a configuration utility I will provide). Basically, the user would specify certain times when the PC should go into "work-only" mode (blocking Facebook and other distracting sites), and when those times are up the PC will return to normal mode. I have already come up with a few ways to create the "work-only" mode, but what I am struggling with is how to know…
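The question is cut off, but the usual approach for "react at user-specified times" in a service is to compare the system clock against the configured windows on a timer tick (and re-check when the clock changes, e.g. via SystemEvents.TimeChanged in .NET). The window test itself is simple; sketched here in Python for brevity, with illustrative window times, and it ports directly to C# DateTime/TimeSpan comparisons:

```python
from datetime import time

def in_work_only_mode(now, windows):
    """True when `now` falls inside any (start, end) window; end is exclusive."""
    return any(start <= now < end for start, end in windows)

# Illustrative configuration: work-only 09:00-12:00 and 13:00-17:00.
windows = [(time(9, 0), time(12, 0)), (time(13, 0), time(17, 0))]
```

Polling the mode once a minute and toggling only on a state change keeps the service robust against missed ticks and manual clock adjustments.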

Azure Batch Job Scheduling: Task doesn't run recurrently

Submitted by 随声附和 on 2019-12-25 12:13:53
问题 (Question): My objective is to schedule an Azure Batch task to run every 5 minutes from the moment it has been added, and I use the Python SDK to create/manage my Azure resources. I tried creating a job schedule, and it automatically created a new job under the specified pool. job_spec = batch.models.JobSpecification( pool_info=batch.models.PoolInformation(pool_id=pool_id) ) schedule = batch.models.Schedule( start_window=datetime.timedelta(hours=1), recurrence_interval=datetime.timedelta(minutes=5) )…
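A commonly reported cause of a schedule like this firing only once (hedged, as the snippet is truncated): Azure Batch does not create the next recurring job while the previous one is still active, and a job does not complete on its own; configuring the JobSpecification so each job terminates when its tasks finish (the on_all_tasks_complete option) lets the 5-minute recurrence proceed. Independent of the SDK, the intended cadence can be sketched as:

```python
from datetime import datetime, timedelta

def recurrence_times(start, interval, count):
    """First `count` fire times of a fixed-interval recurrence."""
    return [start + i * interval for i in range(count)]

times = recurrence_times(datetime(2020, 1, 1, 0, 0), timedelta(minutes=5), 3)
```

Each fire time is when the next job should be created, provided the prior job has reached the completed state.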

Expressing setup time with cumulatives

Submitted by 主宰稳场 on 2019-12-23 09:23:27
问题 (Question): There are many families of scheduling problems. I'm looking into a problem where I have families of jobs/tasks, and the transition from one family to another requires reconfiguring the machine (setup time). I'm using cumulatives[2/3] to solve this problem, but I am unsure how the setup time could be expressed. In this small example I have 10 tasks belonging to 3 different families. Any task can run on any machine, but a switch from one task in one family to another task in another…
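One common modeling workaround, independent of any particular solver's cumulatives constraint, is to make the setup explicit in the sequence: whenever two consecutive tasks on the same machine belong to different families, the reconfiguration time is inserted between them. A single-machine sketch of that cost model; the task data below is illustrative, not the question's 10-task instance:

```python
def schedule_with_setup(order, duration, family, setup):
    """Finish time of running `order` back-to-back on one machine,
    paying `setup` whenever the family changes between tasks."""
    t, prev_fam = 0, None
    for task in order:
        if prev_fam is not None and family[task] != prev_fam:
            t += setup  # machine reconfiguration between families
        t += duration[task]
        prev_fam = family[task]
    return t

duration = {"a": 2, "b": 3, "c": 1}
family = {"a": 1, "b": 1, "c": 2}
makespan = schedule_with_setup(["a", "b", "c"], duration, family, setup=4)
```

Grouping same-family tasks (order a, b, c: one switch, makespan 10) beats interleaving them (order a, c, b: two switches, makespan 14), which is exactly the trade-off the constraint model needs to capture.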

What exactly is meant by 'DisallowConcurrentExecution' in Quartz.NET?

Submitted by 送分小仙女□ on 2019-12-23 07:38:47
问题 (Question): I have a Quartz.NET job with the following definition: [PersistJobDataAfterExecution] [DisallowConcurrentExecution] public class AdItemsJob : IJob, IInterruptableJob { public void Execute(IJobExecutionContext context) { // Job execution logic } } I have decorated the job with the DisallowConcurrentExecution attribute. What I know about this attribute is that we can't run multiple instances of the same job at the same time. What is meant by multiple instances here? Do two jobs of AdItemsJob with…

Quartz properties does not trigger Quartz Job

Submitted by 两盒软妹~` on 2019-12-23 03:45:44
问题 (Question): I'm using Quartz 2.1.3. My quartz.properties: # Configure the Job Initialization Plugin: org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin org.quartz.plugin.jobInitializer.fileNames = quartz-jobs.xml org.quartz.plugin.jobInitializer.failOnFileNotFound = true org.quartz.plugin.jobInitializer.scanInterval = 10 org.quartz.plugin.jobInitializer…
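The properties above point the XMLSchedulingDataProcessorPlugin at quartz-jobs.xml, so the usual failure modes are the file not being found on the classpath/working directory or not matching the Quartz 2.x schema. A minimal quartz-jobs.xml sketch for comparison; the job name, trigger name, job class, and cron expression are all placeholders, and the schema details should be checked against the job_scheduling_data_2_0 XSD shipped with your Quartz version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data
    xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
    version="2.0">
  <schedule>
    <job>
      <name>myJob</name>
      <job-class>com.example.MyJob</job-class>
    </job>
    <trigger>
      <cron>
        <name>myTrigger</name>
        <job-name>myJob</job-name>
        <cron-expression>0 0/5 * * * ?</cron-expression>
      </cron>
    </trigger>
  </schedule>
</job-scheduling-data>
```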

How to call a stored procedure in Oracle from a daily scheduled job?

Submitted by 爱⌒轻易说出口 on 2019-12-22 08:48:57
问题 (Question): I am new to Oracle job scripts. I wrote a purge procedure to clean out all the old data and retain the last 3 months of data. The procedure is executed successfully, and it also works when I call it manually. The procedure is as follows: CREATE OR REPLACE PROCEDURE Archive IS v_query varchar2(2048); v_tablename VARCHAR2(50); v_condition varchar2(50); TYPE cur_typ IS REF CURSOR; c cur_typ; BEGIN OPEN c for 'select tablename,columnname from pseb.purge_tables'; FETCH c INTO v_tablename,v_condition;…
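Since the procedure already works when called manually, the remaining piece is the daily schedule itself; on Oracle 10g and later the standard mechanism is DBMS_SCHEDULER (DBMS_JOB on older releases). A hedged sketch, assuming the procedure compiles as ARCHIVE in the submitting schema; the job name and run hour are placeholders:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_ARCHIVE_JOB',     -- placeholder name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ARCHIVE',               -- the procedure above
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',  -- every day at 02:00
    enabled         => TRUE);
END;
/
```

The repeat_interval string uses the scheduler's calendaring syntax, so the cadence can be adjusted (e.g. FREQ=WEEKLY) without changing the procedure.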

SLURM: submit multiple tasks per node?

Submitted by 房东的猫 on 2019-12-22 06:16:15
问题 (Question): I found some very similar questions which helped me arrive at a script that seems to work; however, I'm still unsure whether I fully understand why, hence this question. My problem (example): on 3 nodes, I want to run 12 tasks on each node (so 36 tasks in total). Each task uses OpenMP and should use 2 CPUs. In my case a node has 24 CPUs and 64 GB memory. My script would be: #SBATCH --nodes=3 #SBATCH --ntasks=36 #SBATCH --cpus-per-task=2 #SBATCH --mem-per-cpu=2000 export OMP_NUM_THREADS=2 for i…
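The directives above are internally consistent, which is worth checking explicitly: 36 tasks over 3 nodes is 12 tasks per node, and 12 tasks at 2 CPUs each exactly fills a 24-CPU node; adding #SBATCH --ntasks-per-node=12 would make that placement explicit rather than implied by the other numbers. The arithmetic, using only the values from the question, as a sketch:

```python
nodes, ntasks, cpus_per_task = 3, 36, 2
cpus_per_node, mem_per_cpu_mb = 24, 2000

tasks_per_node = ntasks // nodes                # 12 tasks on each node
cpus_used = tasks_per_node * cpus_per_task      # 24: node fully packed
mem_used_mb = cpus_used * mem_per_cpu_mb        # 48000 MB of the 64 GB

assert cpus_used <= cpus_per_node
assert mem_used_mb <= 64 * 1024
```

Because the CPU request fills each node exactly, Slurm has no freedom to spread tasks unevenly; if the numbers did not divide evenly, --ntasks-per-node would be needed to force the 12-per-node layout.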