Monitoring a Kubernetes job

Submitted by 一曲冷凌霜 on 2019-12-12 14:00:18

Question


I have Kubernetes jobs that take a variable amount of time to complete, between 4 and 8 minutes. Is there any way I can know when a job has completed, rather than waiting 8 minutes and assuming the worst case? I have a test case that does the following:

1) Submits the Kubernetes job.
2) Waits for its completion.
3) Checks whether the job has had the expected effect.

The problem is that my Java test, which submits the deployment job to Kubernetes, waits for 8 minutes even if the job takes less time than that to complete, because I don't have a way to monitor the status of the job from the Java test.


Answer 1:


<kube master>/apis/batch/v1/namespaces/default/jobs 

This endpoint lists the status of the jobs. I parsed this JSON and retrieved the name of the latest running job that starts with "deploy...".

Then we can hit

<kube master>/apis/batch/v1/namespaces/default/jobs/<job name retrieved above>

and monitor the status field, which looks like the following when the job succeeds:

"status": {
    "conditions": [
      {
        "type": "Complete",
        "status": "True",
        "lastProbeTime": "2016-09-22T13:59:03Z",
        "lastTransitionTime": "2016-09-22T13:59:03Z"
      }
    ],
    "startTime": "2016-09-22T13:56:42Z",
    "completionTime": "2016-09-22T13:59:03Z",
    "succeeded": 1
  }

So we keep polling this endpoint until the job completes. Hope this helps someone.
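
A minimal Java sketch of this polling approach (my addition, not from the original answer): it assumes kubectl proxy is running on localhost:8001 so no authentication headers are needed, and that the job is named deploy-job in the default namespace; both names are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JobPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical job name and proxy address; adjust to your cluster.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8001/apis/batch/v1/namespaces/default/jobs/deploy-job"))
                .GET()
                .build();

        while (true) {
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            // Strip whitespace so the check works for both compact and pretty-printed JSON.
            String compact = body.replaceAll("\\s", "");
            // Crude substring check for the Complete condition; a real test would
            // parse the JSON and inspect status.conditions properly.
            if (compact.contains("\"type\":\"Complete\"") && compact.contains("\"status\":\"True\"")) {
                System.out.println("Job completed");
                break;
            }
            Thread.sleep(10_000); // poll every 10 seconds
        }
    }
}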




Answer 2:


$ kubectl wait --for=condition=complete --timeout=600s job/myjob
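
If the wait has to happen from a Java test, one option (a sketch of my own, not part of the original answer) is to shell out to kubectl wait and check its exit code. It assumes kubectl is on the PATH and that the job is called myjob, both placeholders.

import java.util.concurrent.TimeUnit;

public class WaitForJob {
    public static void main(String[] args) throws Exception {
        // Runs `kubectl wait` and fails if the job does not reach the Complete
        // condition within the timeout (kubectl exits non-zero in that case).
        Process p = new ProcessBuilder(
                "kubectl", "wait", "--for=condition=complete", "--timeout=600s", "job/myjob")
                .inheritIO()
                .start();
        boolean finished = p.waitFor(11, TimeUnit.MINUTES); // a bit longer than kubectl's own timeout
        if (!finished || p.exitValue() != 0) {
            throw new IllegalStateException("Job did not complete in time");
        }
    }
}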



Answer 3:


Since you said Java: you can use the fabric8 Kubernetes Java client to start the job and add a watcher:

KubernetesClient k = ...
k.extensions().jobs().load(yaml).watch(new Watcher<Job>() {

  @Override
  public void onClose(KubernetesClientException e) {}

  @Override
  public void eventReceived(Action a, Job j) {
    // getSucceeded()/getFailed() are null until at least one pod has finished,
    // so guard against null before unboxing.
    JobStatus status = j.getStatus();
    if (status.getSucceeded() != null && status.getSucceeded() > 0)
      System.out.println("At least one job attempt succeeded");
    if (status.getFailed() != null && status.getFailed() > 0)
      System.out.println("At least one job attempt failed");
  }
});
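
To make the test itself block until the watcher fires, one possibility (my sketch, not part of the original answer) is to pair the watcher with a CountDownLatch and wait on it with a timeout:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Created in the test before registering the watcher above.
final CountDownLatch done = new CountDownLatch(1);

// Inside eventReceived(...), once a success or failure has been detected:
//     done.countDown();

// Back in the test method, block until the watcher fires or time runs out.
if (!done.await(10, TimeUnit.MINUTES)) {
    throw new IllegalStateException("Timed out waiting for the job to finish");
}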



Answer 4:


I found that the JobStatus does not get updated while polling with job.getStatus(), even if the status has changed when checking from the command line with kubectl.

To get around this, I reload the job handle:

    client.extensions().jobs()
                       .inNamespace(myJob.getMetadata().getNamespace())
                       .withName(myJob.getMetadata().getName())
                       .get();

My loop to check the job status looks like this:

    KubernetesClient client = new DefaultKubernetesClient(config);
    Job myJob = client.extensions().jobs()
                      .load(new FileInputStream("/path/x.yaml"))
                      .create();
    boolean jobActive = true;
    while (jobActive) {
        // Re-fetch the job on every iteration; the handle returned by create()
        // does not refresh its status on its own.
        myJob = client.extensions().jobs()
                .inNamespace(myJob.getMetadata().getNamespace())
                .withName(myJob.getMetadata().getName())
                .get();
        JobStatus myJobStatus = myJob.getStatus();
        System.out.println("==================");
        System.out.println(myJobStatus.toString());

        if (myJob.getStatus().getActive() == null) {
            // No active pods left, so the job has finished.
            jobActive = false;
        } else {
            System.out.println(myJob.getStatus().getActive());
            System.out.println("Sleeping for a minute before polling again!!");
            Thread.sleep(60000);
        }
    }

    System.out.println(myJob.getStatus().toString());

Hope this helps.




Answer 5:


You can use the NewSharedInformer method to watch the jobs' statuses. I'm not sure how to write it in Java; here's a Go example that keeps an up-to-date job list in the background:

import (
    "context"
    "sync"
    "time"

    batchv1 "k8s.io/api/batch/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/labels"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

type ClientImpl struct {
    clients *kubernetes.Clientset
}

type JobListFunc func() ([]batchv1.Job, error)

var (
    jobsSelector = labels.SelectorFromSet(labels.Set(map[string]string{"job_label": "my_label"})).String()
)


func (c *ClientImpl) NewJobSharedInformer(resyncPeriod time.Duration) JobListFunc {
    var once sync.Once
    var jobListFunc JobListFunc

    once.Do(
        func() {
            restClient := c.clients.BatchV1().RESTClient()
            optionsModifier := func(options *metav1.ListOptions) {
                options.LabelSelector = jobsSelector
            }
            watchList := cache.NewFilteredListWatchFromClient(restClient, "jobs", metav1.NamespaceAll, optionsModifier)
            informer := cache.NewSharedInformer(watchList, &batchv1.Job{}, resyncPeriod)

            go informer.Run(context.Background().Done())

            jobListFunc = JobListFunc(func() (jobs []batchv1.Job, err error) {
                for _, c := range informer.GetStore().List() {
                    jobs = append(jobs, *(c.(*batchv1.Job)))
                }
                return jobs, nil
            })
        })

    return jobListFunc
}

Then, in your monitor, you can check the status by ranging over the job list:

func syncJobStatus() {
    jobs, err := jobListFunc()
    if err != nil {
        log.Errorf("Failed to list jobs: %v", err)
        return
    }

    // TODO: other code

    for _, job := range jobs {
        name := job.Name
        // check status...
    }
}



Answer 6:


You did not mention what is actually checking for job completion, but instead of waiting blindly and hoping for the best, you should keep polling the job status in a loop until its condition becomes "Complete".




Answer 7:


I don't know what kind of tasks you are talking about, but let's assume you are running some pods.

you can do

watch 'kubectl get pods | grep <name of the pod>'

or

kubectl get pods -w

It will not be the full name, of course, since most of the time pods get random names. If you are running an nginx replica set or deployment, your pods will end up with something like nginx-1696122428-ftjvy, so you will want to do

watch 'kubectl get pods | grep nginx'

You can replace pods with whatever resource you are working with, e.g. rc, svc, deployments, and so on.



Source: https://stackoverflow.com/questions/39146436/monitoring-a-kubernetes-job
