kubernetes-cronjob

Cron Jobs in Kubernetes - connect to existing Pod, execute script

走远了吗. Submitted on 2020-07-16 18:32:44
Question: I'm certain I'm missing something obvious. I have looked through the documentation for ScheduledJobs / CronJobs on Kubernetes, but I cannot find a way to do the following on a schedule: connect to an existing Pod, execute a script, disconnect. I have alternative methods of doing this, but they don't feel right: schedule a cron task for

kubectl exec -it $(kubectl get pods --selector=some-selector | head -1) /path/to/script

or create one Deployment that has a "Cron Pod" which also houses the
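One commonly suggested approach, rather than exec-ing from outside the cluster, is to let a CronJob run kubectl itself on the schedule. The sketch below is only an illustration: the schedule, image, and ServiceAccount name are assumptions, the selector and script path are the placeholders from the question, and the ServiceAccount would need RBAC rights for pods/get, pods/list, and pods/exec. It uses batch/v1beta1 to match the manifests in the questions below (batch/v1 on current clusters).

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: exec-script-on-schedule      # hypothetical name
spec:
  schedule: "0 * * * *"              # assumed: hourly
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-exec-sa   # assumed SA bound to a Role allowing pods/get, pods/list, pods/exec
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest   # any image that ships kubectl
            command:
            - /bin/sh
            - -c
            - kubectl exec $(kubectl get pods --selector=some-selector -o name | head -1) -- /path/to/script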

How do I make sure my cronjob job does NOT retry on failure?

拈花ヽ惹草 Submitted on 2020-05-17 06:39:26
Question: I have a Kubernetes CronJob that runs on GKE and runs Cucumber JVM tests. If a Step fails due to an assertion failure, some resource being unavailable, etc., Cucumber rightly throws an exception, which causes the CronJob's Job to fail and the Kubernetes pod's status to change to ERROR. This leads to the creation of a new pod that tries to run the same Cucumber tests again, which fails again and retries again. I don't want any of these retries to happen. If a CronJob's Job fails, I want it to remain in
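The standard way to suppress retries is a backoffLimit of 0 on the Job template combined with restartPolicy: Never on the pod, so a single failure marks the Job failed rather than spawning replacement pods. A minimal sketch with an assumed schedule and image:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cucumber-tests               # hypothetical name
spec:
  schedule: "0 * * * *"              # assumed schedule
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 0                # do not create replacement pods after a failure
      template:
        spec:
          restartPolicy: Never       # do not restart the failed container in place
          containers:
          - name: cucumber
            image: my-cucumber-tests:latest   # placeholder image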

Pods stuck in PodInitializing state indefinitely

扶醉桌前 Submitted on 2020-01-16 00:45:29
Question: I've got a k8s CronJob that consists of an init container and one app container. If the init container fails, the main container never gets started and the Pod stays in "PodInitializing" indefinitely. My intent is for the Job to fail if the init container fails.

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-name
  namespace: default
  labels:
    run: job-name
spec:
  schedule: "15 23 * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: "Forbid"
  successfulJobsHistoryLimit: 30
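If the goal is for the Job to fail rather than hang, one hedged option is to bound the Job with activeDeadlineSeconds and backoffLimit and to use restartPolicy: Never, so a failing or stuck initialization eventually marks the Job failed instead of leaving the pod in PodInitializing forever. The init and main containers below are stand-ins, since the question's manifest is cut off before them:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-name
  namespace: default
spec:
  schedule: "15 23 * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: "Forbid"
  successfulJobsHistoryLimit: 30
  jobTemplate:
    spec:
      activeDeadlineSeconds: 600     # assumed ceiling: mark the Job failed if it hangs this long
      backoffLimit: 1                # assumed: retry the pod at most once
      template:
        spec:
          restartPolicy: Never       # a failed init container then fails the pod instead of restarting it
          initContainers:
          - name: init               # stand-in for the question's init container
            image: busybox
            command: ["sh", "-c", "exit 1"]
          containers:
          - name: main               # stand-in for the question's main container
            image: busybox
            command: ["sh", "-c", "echo main ran"]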

Access logs in Kubernetes cron jobs

巧了我就是萌 Submitted on 2019-12-23 08:16:29
Question: I'm running a cron job in Kubernetes. The jobs complete successfully and I write output to a log file inside the container (path: storage/logs), but I cannot access that file because the container is in the Completed state. Here is my job YAML:

apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    labels:
      chart: cronjobs-0.1.0
    name: cron-cronjob1
    namespace: default
  spec:
    concurrencyPolicy: Forbid
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              app: cron
              cron: cronjob1
          spec:
            containers:
            -
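A common workaround is to send the log output to stdout as well as (or instead of) the file, because container logs remain readable after the pod completes for as long as the Job's pods are retained by the history limits. Assuming the labels from the manifest above, the logs of the completed pods can then be read with something like:

# logs of the completed pods created by this CronJob, selected by the labels above
kubectl logs -l app=cron,cron=cronjob1 -n default --tail=-1

# or for one specific run; the Job name shown here is a hypothetical generated name
kubectl logs job/cron-cronjob1-1577088000 -n default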

Scheduled restart of Kubernetes pod without downtime

☆樱花仙子☆ Submitted on 2019-12-08 07:08:49
Question: I have 6 replicas of a pod running which I would like to restart/recreate every 5 minutes. This needs to be a rolling update, so that they are not all terminated at once and there is no downtime. How do I achieve this? I tried using a cron job, but it does not seem to be working:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-pods-recreate
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: ja-engine
            image: app-image
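A rolling, zero-downtime restart of all replicas is exactly what `kubectl rollout restart deployment/<name>` does (kubectl 1.15+), so one hedged way to schedule it is a CronJob that runs that command. The Deployment name, image, and ServiceAccount below are assumptions; the ServiceAccount needs RBAC permission to get and patch Deployments:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-rollout-restart    # hypothetical name
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: rollout-restarter   # assumed SA allowed to get/patch deployments
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest         # any image that ships kubectl
            command:
            - /bin/sh
            - -c
            - kubectl rollout restart deployment/ja-engine   # assumed Deployment name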

What exactly does the Kubernetes CronJob setting `startingDeadlineSeconds` mean?

佐手、 Submitted on 2019-12-03 12:15:45
Question: For Kubernetes CronJobs, it is stated in the limitations section that Jobs may fail to run if the CronJob controller is not running or is broken for a span of time from before the start time of the CronJob to the start time plus startingDeadlineSeconds, or if the span covers multiple start times and concurrencyPolicy does not allow concurrency. What I understand from this is that if startingDeadlineSeconds is set to 10 and the cronjob couldn't start for some reason at its scheduled time, then it

How to automatically remove completed Kubernetes Jobs created by a CronJob?

一曲冷凌霜 Submitted on 2019-12-03 08:06:19
Question: Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs? The K8s Job documentation states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted. I am running thousands of Jobs a day via CronJobs, and I don't want to keep completed Jobs around. Answer 1: You can now set history limits, or disable history altogether, so that failed or successful Jobs are not kept around indefinitely. See
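The history limits referred to in the answer are the successfulJobsHistoryLimit and failedJobsHistoryLimit fields on the CronJob spec; setting them low (or to 0) keeps finished Jobs from piling up. A minimal sketch with an assumed schedule and container:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: tidy-cron                    # hypothetical name
spec:
  schedule: "*/10 * * * *"           # assumed schedule
  successfulJobsHistoryLimit: 0      # keep no completed Jobs around
  failedJobsHistoryLimit: 1          # keep the most recent failure for debugging
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: busybox           # placeholder image
            command: ["sh", "-c", "echo done"]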

What exactly does the Kubernetes CronJob setting `startingDeadlineSeconds` mean?

南楼画角 Submitted on 2019-12-03 03:36:48
For Kubernetes CronJobs, it is stated in the limitations section that Jobs may fail to run if the CronJob controller is not running or is broken for a span of time from before the start time of the CronJob to the start time plus startingDeadlineSeconds, or if the span covers multiple start times and concurrencyPolicy does not allow concurrency. What I understand from this is that if startingDeadlineSeconds is set to 10 and the cronjob couldn't start for some reason at its scheduled time, then it can still be attempted again as long as those 10 seconds haven't passed; however, after the
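Concretely, startingDeadlineSeconds sits on the CronJob spec next to schedule and concurrencyPolicy, and it is the window, measured from each scheduled start time, within which a missed run may still be started. A minimal sketch with assumed values, mirroring the 10-second example in the question:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deadline-demo                # hypothetical name
spec:
  schedule: "*/5 * * * *"            # assumed schedule
  startingDeadlineSeconds: 10        # a missed run may still be started up to 10s after its scheduled time
  concurrencyPolicy: Forbid          # skip a run rather than overlap with one still in progress
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: demo
            image: busybox           # placeholder image
            command: ["sh", "-c", "date"]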

How to schedule a cronjob which executes a kubectl command?

和自甴很熟 Submitted on 2019-12-01 00:43:59
How to schedule a cronjob which executes a kubectl command? I would like to run the following kubectl command every 5 minutes:

kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test

For this, I have created a cronjob as below:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":
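The usual reasons this setup fails are that the busybox image does not contain kubectl and that the pod's default ServiceAccount is not allowed to patch the Deployment. A hedged rework under those assumptions, using an image that ships kubectl and a dedicated ServiceAccount (both names are placeholders; the RBAC objects are sketched after the next entry):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: jp-test                 # assumed: run the Job in the same namespace as the Deployment
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-patcher   # placeholder SA, RBAC sketched after the next entry
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: bitnami/kubectl:latest          # an image that actually contains kubectl
            command:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -n jp-test -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'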

How to schedule a cronjob which executes a kubectl command?

大兔子大兔子 Submitted on 2019-11-28 01:59:02
Question: How to schedule a cronjob which executes a kubectl command? I would like to run the following kubectl command every 5 minutes:

kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test

For this, I have created a cronjob as below:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
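Since this is the same question, the remaining piece is the RBAC that the CronJob sketch above assumes: a ServiceAccount bound to a Role that may get and patch Deployments in the jp-test namespace. The object names are placeholders:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-patcher           # placeholder, referenced by the CronJob sketch above
  namespace: jp-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: patch-runners                # placeholder name
  namespace: jp-test
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: patch-runners                # placeholder name
  namespace: jp-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: patch-runners
subjects:
- kind: ServiceAccount
  name: deployment-patcher
  namespace: jp-test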