job-scheduling

Will MessageBox.Show cause a timeout issue on the server side?

邮差的信 submitted on 2019-12-06 07:04:43
I have a scheduled SSIS package with a script task in SQL Server Agent on our server. I set a timeout on the SQL connection, and some of the code inside the Try block throws that timeout error; the Catch block contains MessageBox.Show calls. If I leave the code as it is, the job fails, but if I comment out those MessageBox.Show calls and leave the Catch block empty just for testing purposes, the job runs successfully. Does anybody know whether MessageBox.Show affects the connection timeout on the server side, or what exactly causes this different result after disabling the MessageBox.Show?

How to optimize multithreaded program for use in LSF?

ⅰ亾dé卋堺 submitted on 2019-12-06 04:17:44
I am working on a multithreaded number-crunching app, let's call it myprogram. I plan to run myprogram on IBM's LSF grid. LSF allows a job to be scheduled on CPUs from different machines. For example, bsub -n 3 ... myprogram ... can allocate two CPUs from node1 and one CPU from node2. I know that I can ask LSF to allocate all 3 cores on the same node, but I am interested in the case where my job is scheduled onto different nodes. How does LSF manage this? Will myprogram be run as two different processes on node1 and node2? Does LSF automatically manage data transfer between node1 and node2?
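
LSF will not spread the threads of a single process across hosts; if the allocation spans nodes, you end up with more than one process, and moving data between them is your job. A minimal sketch of the usual hybrid pattern, assuming mpi4py is available on the cluster (crunch() and the thread count are placeholders for the real work):

from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def crunch(x):
    # stand-in for the real number-crunching kernel
    return x * x

# Threads only ever use CPUs on the node this rank was placed on;
# anything that crosses nodes has to go through MPI explicitly.
with ThreadPoolExecutor(max_workers=2) as pool:
    local = list(pool.map(crunch, range(rank * 10, (rank + 1) * 10)))

results = comm.gather(local, root=0)   # explicit inter-node data transfer
if rank == 0:
    print(results)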

Apache Spark - How does the internal job scheduler in Spark define what users and pools are?

时间秒杀一切 submitted on 2019-12-06 03:51:22
Question: I am sorry for being a little general here, but I am a little confused about how job scheduling works internally in Spark. From the documentation here I gather that it is some sort of implementation of the Hadoop Fair Scheduler. I cannot work out who exactly the users are here (are they the Linux users, Hadoop users, Spark clients?). I am also unable to understand how the pools are defined here. For example, in my Hadoop cluster I have given resource allocation to two different
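
As a point of reference for the pool part of the question, here is a minimal PySpark sketch, assuming fair scheduling is enabled; the app name and the pool name "production" are placeholders, and per-pool weights would normally be declared in the XML file pointed to by spark.scheduler.allocation.file:

from pyspark.sql import SparkSession

# Enable the fair scheduler for jobs submitted through this SparkContext.
spark = (SparkSession.builder
         .appName("fair-scheduling-demo")          # placeholder app name
         .config("spark.scheduler.mode", "FAIR")
         .getOrCreate())

# Jobs submitted from this thread are directed into the "production" pool;
# other threads (or other clients sharing the same context) can use other pools.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "production")
spark.range(1_000_000).count()   # this job is scheduled within that pool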

Exe with a name like update.exe blocked by UAC in a scheduled task

可紊 submitted on 2019-12-05 23:31:50
Question: I have a problem with Windows UAC, scheduled tasks, and an executable named "dbupdate.exe". I have full power over the source code, manifests and so on, but not over the users' systems (in short: a normal software engineer ;-). The language is Delphi, but that shouldn't matter, I think. I have a problem using an application in the Task Scheduler on Windows Vista and Windows 7. The program is named dbUpdate.exe. It has a built-in XP/Vista manifest, which specifies that the program should be started "asInvoker". Now,

nodejs job server (multi-purpose)

孤街醉人 submitted on 2019-12-05 21:51:26
I'm fairly new to node.js and just getting to know it (my background is as a PHP developer). I've seen some Node.js examples and the video on the node.js website. Currently I'm running a video site, and a lot of tasks have to be executed in the background. At the moment this is done by cron jobs that call PHP scripts. The downside of this approach is that when another process is started while the previous one is still running, you get a high load on the servers, etc. The jobs that need to be done on the server are the following: scrape feeds from websites and insert them into a MySQL database; fetch data from websites (scraping)

Submit job with python code (mpi4py) on HPC cluster

a 夏天 submitted on 2019-12-05 12:11:29
I am working on a Python code with MPI (mpi4py) and I want to run my code across many nodes (each node has 16 processors) through a queue on an HPC cluster. My code is structured as below:

from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

count = 0
for i in range(1, size):
    if rank == i:
        for j in range(5):
            res = some_function(some_argument)
            comm.send(res, dest=0, tag=count)

I am able to run this code perfectly fine on the head node of the cluster using the command

$ mpirun -np 48 python codename.py

Here "code" is the name of the Python script and in the
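
The excerpt only shows the sending side; purely as a hedged sketch, this is what the matching receive on rank 0 could look like, with some_function replaced by a trivial stand-in so it runs on its own:

from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

def some_function(some_argument):
    # stand-in for the real computation in the question
    return some_argument * rank

if rank == 0:
    # collect the 5 results each of the other ranks sends
    for src in range(1, size):
        for tag in range(5):
            res = comm.recv(source=src, tag=tag)
            print(f"rank {src}, message {tag}: {res}")
else:
    for tag in range(5):
        comm.send(some_function(tag), dest=0, tag=tag)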

SLURM Submit multiple tasks per node?

你说的曾经没有我的故事 submitted on 2019-12-05 11:16:00
I found some very similar questions which helped me arrive at a script that seems to work; however, I'm still unsure whether I fully understand why, hence this question. My problem (example): on 3 nodes, I want to run 12 tasks on each node (so 36 tasks in total). Each task also uses OpenMP and should use 2 CPUs. In my case a node has 24 CPUs and 64GB memory. My script would be:

#SBATCH --nodes=3
#SBATCH --ntasks=36
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=2000

export OMP_NUM_THREADS=2

for i in {1..36}; do
    srun -N 1 -n 1 ./program input${i} >& out${i} &
done
wait

This seems to work as I

Open Source Job Scheduler with REST API

折月煮酒 submitted on 2019-12-05 04:42:39
Are there any open-source job schedulers with a REST API, usable commercially, that support features like: tree-like job dependencies, hold & release, rerunning failed steps, and parallelism? Help would be appreciated :) NOTE: we are looking for an open-source alternative to TWS, Control-M, AutoSys. JobScheduler would seem to meet your requirements: Open Source, see: Open Source and Commercial Licenses; REST API, see: Web Service Integration; Parallelism, see: Organisation of Jobs and Job Chains. I think that these areas are also covered (I downloaded and trialled the application): See here Tree like Job
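
Purely for illustration of the kind of REST interaction the question asks about, a small Python sketch using the requests library; the base URL, endpoint paths, and job/step names below are hypothetical placeholders, not the actual API of JobScheduler or any other product:

import requests

BASE = "http://scheduler.example.com/api"   # placeholder base URL

# Hypothetical calls: hold a job, release it again, rerun a failed step.
requests.post(f"{BASE}/jobs/nightly-load/hold", timeout=10).raise_for_status()
requests.post(f"{BASE}/jobs/nightly-load/release", timeout=10).raise_for_status()
requests.post(f"{BASE}/jobs/nightly-load/steps/extract/rerun", timeout=10).raise_for_status()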

Android JobScheduler onStartJob called multiple times

ε祈祈猫儿з submitted on 2019-12-05 00:55:53
The JobScheduler calls onStartJob() multiple times, although the job has finished. Everything works fine if I schedule one single job and wait until it has finished. However, if I schedule two or more jobs with different IDs at the same time, then onStartJob() is called again after invoking jobFinished(). For example, if I schedule job 1 and job 2 with exactly the same parameters except the ID, then the order is: onStartJob() for job 1 and job 2; both jobs finish, so jobFinished() is invoked for both of them; after that onStartJob() is called again for both jobs with the same IDs. My job is very basic

How to tell Condor to dispatch jobs only to machines on the cluster that have "numpy" installed on them?

谁都会走 submitted on 2019-12-05 00:42:34
I just figured out how to send jobs to be processed on machines on the cluster using Condor. Since we have a lot of machines and not all of them are configured the same, I was wondering: is it possible to tell Condor to dispatch my jobs (Python scripts) only to machines that have numpy installed on them, since my script depends on this package? Like any other machine attribute, you just need to advertise it in the machine ClassAd and then have your jobs require it. To advertise it in the machine ClassAd, you can either hard-code it into each machine's condor config file by
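
The tail of that answer is about advertising a custom attribute; purely as an illustration of the probing half, here is a small Python helper whose exit code an admin could use when deciding whether to advertise such an attribute on a node (the script and an attribute name like HAS_NUMPY are assumptions for this sketch, not part of HTCondor itself):

# Hypothetical helper: exit 0 if numpy is importable on this machine, 1 otherwise.
# An admin could run it per node and advertise the result (e.g. HAS_NUMPY = True)
# in that machine's ClassAd; jobs would then add a matching requirement.
import importlib.util
import sys

sys.exit(0 if importlib.util.find_spec("numpy") is not None else 1)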