I am trying to set up dynamic Jenkins slave creation using the jenkins-kubernetes plugin. My Jenkins master is running outside the K8s cluster.
Link: https
Instead of using certificates, I suggest you use credentials in Kubernetes by creating a ServiceAccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
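The token behind this ServiceAccount is what the Kubernetes plugin ultimately authenticates with, so if you need to hand it to a Jenkins master as a credential you can ask Kubernetes to mint a token Secret for it explicitly. This is only a minimal sketch; the Secret name jenkins-token is a placeholder I made up:

---
# Hypothetical token Secret bound to the "jenkins" ServiceAccount above.
# Kubernetes fills in the "token" and "ca.crt" data keys automatically.
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-token
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token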
and deploying Jenkins using that ServiceAccount:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      ....
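The elided part of that pod spec is the Jenkins container itself. Purely as an illustration (the image and port numbers below are assumptions, not taken from my actual deployment), it would look roughly like:

      # Hypothetical container section; jenkins/jenkins:lts and the ports
      # (8080 for the web UI, 50000 for JNLP agents) are assumptions.
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        - containerPort: 50000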
Here are my screenshots for the Kubernetes plugin configuration (note the Jenkins tunnel for the JNLP port; 'jenkins' is the name of my Kubernetes Service):
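For reference, a Service along these lines gives the slaves stable endpoints for both the web UI and the JNLP tunnel. This is only a sketch, assuming the usual ports 8080 for HTTP and 50000 for JNLP:

---
# Hypothetical Service exposing the Jenkins web UI and the JNLP agent port.
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: jnlp
    port: 50000
    targetPort: 50000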
For credentials:
Then fill in the fields (the ID will be auto-generated, and the description will be shown in the credentials listbox), but be sure you have created the ServiceAccount in Kubernetes as described above:
My instructions are for a Jenkins master running inside Kubernetes. If you want it outside the cluster (with only the slaves inside), I think you have to use simple login/password credentials.
As for your last error, it looks like a host resolution problem: the slave cannot resolve your host.
I hope this helps.
OK, I found the issue: I had set the container cap to 10 (in the default namespace), which is too low for my cluster. I have a 15-worker-node cluster, and when the K8s master tries to start a pod it launches several pods at once (terminating the rest after one is scheduled), which quickly exceeds the container cap of 10. I changed the cap to 100 and now things are working as expected.
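If you manage the plugin through Jenkins Configuration as Code instead of the UI, the cap is set on the cloud definition. This is only a rough sketch: the cloud name, URLs, and credentials ID are placeholders, and the exact keys depend on your plugin version:

# Hypothetical JCasC snippet; all values are placeholders for illustration.
jenkins:
  clouds:
  - kubernetes:
      name: "kubernetes"
      serverUrl: "https://my-k8s-apiserver:6443"
      namespace: "default"
      credentialsId: "jenkins-sa-token"
      jenkinsUrl: "http://jenkins:8080"
      jenkinsTunnel: "jenkins:50000"
      containerCapStr: "100"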
One thing I noticed with the Kubernetes Jenkins plugin: it does not clean up errored containers by itself, which inflates the container count and leads to this problem.