Jenkins kubernetes plugin not working

Submitted by 你离开我真会死。 on 2019-12-29 06:23:17

Question


I am trying to setup Jenkins Dynamic slaves creation using jenkins-kubernetes plugin.

My jenkins is running outside K8s Cluster.

Link: https://github.com/jenkinsci/kubernetes-plugin

My jenkins version is 2.60.2 and Kubernetes plugin version is 1.1.2

I followed the steps mentioned in the README and successfully set up the connection.

My settings look like this:

And connection is successful.

Then I created a job with a pod template:
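
The job screenshot is not reproduced here; a minimal scripted-pipeline sketch of the kind of job described (an assumption on my part: the label defaultlabel is taken from the console output below, and the maven container image is only a placeholder) would look roughly like this:

podTemplate(label: 'defaultlabel', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5-jdk-8', ttyEnabled: true, command: 'cat')
]) {
    node('defaultlabel') {
        stage('Build') {
            container('maven') {
                // Runs inside the 'maven' container of the dynamically provisioned pod
                sh 'mvn -version'
            }
        }
    }
}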

Here the problems start: 1. When I run this job initially, it runs, but the Jenkins slave container inside my pod is not able to connect and throws:

I have enabled the JNLP port (50000), though I am not sure if it is the right port; I even tested with the "random" option in Jenkins, and nothing worked.
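
For reference, the fixed JNLP agent port can also be pinned from the Jenkins script console; a rough sketch using the standard Jenkins.instance API (an illustration only, not part of the original question):

import jenkins.model.Jenkins

// Pin the TCP port used by JNLP agents to 50000 instead of "random".
Jenkins.instance.setSlaveAgentPort(50000)
Jenkins.instance.save()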

2. Now I discarded this Jenkins job and re-ran it; it says:

 Started by user Vaibhav Jain
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
Jenkins doesn’t have label defaultlabel

and no pod is getting started in Kubernetes. This is weird.

I am not sure what I am doing wrong. Need help!


Answer 1:


Instead of using certificates, I suggest you use Kubernetes credentials by creating a ServiceAccount:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins

and deploying Jenkins using that ServiceAccount:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:           
      serviceAccountName: jenkins 
....

Here are my screenshots for the Kubernetes plugin (note the Jenkins tunnel for the JNLP port; 'jenkins' is the name of my Kubernetes service):

For credentials:

Then fill in the fields (the ID will be autogenerated; the description will be shown in the credentials listbox), but be sure to have created the ServiceAccount in Kubernetes as described above:

My instructions are for a Jenkins master running inside Kubernetes. If you want it outside the cluster (but with slaves inside), I think you have to use simple login/password credentials.

As for your last error, it seems to be a host resolution problem: the slave cannot resolve your host.

I hope it helps you.




Answer 2:


OK! I found the issue: I had set the container cap to 10 (in the default namespace), which is too low for my cluster. I have a 15-worker-node cluster, and when the K8s master tries to start a pod it starts multiple pods at once (though it terminates the rest after one is scheduled), which eventually exceeds the container cap limit (which was 10). I changed the cap to 100 and now things are working as expected.
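
For reference, the same change can be scripted from the Jenkins script console; a rough sketch, assuming the configured cloud is a KubernetesCloud and that setContainerCapStr is available in this plugin version (an illustration only, not part of the original answer):

import jenkins.model.Jenkins
import org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud

// Raise the Container Cap of every configured Kubernetes cloud to 100.
Jenkins.instance.clouds.each { cloud ->
    if (cloud instanceof KubernetesCloud) {
        cloud.setContainerCapStr('100')
    }
}
Jenkins.instance.save()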

One thing I noticed with the Jenkins Kubernetes plugin: it does not clean up errored containers itself, which increases the container count and leads to this problem.



Source: https://stackoverflow.com/questions/47870961/jenkins-kubernetes-plugin-not-working
