kubernetes-pod

Kubernetes node Device port (USB) mapping to POD? Or Swarm service --device mapping

Submitted by 爱⌒轻易说出口 on 2020-01-21 12:26:06
Question: Is it possible to map the device port (USB port) of a worker node to a pod, similar to docker create --device=/dev/ttyACM0:/dev/ttyACM0? I checked the reference documentation but could not find anything. In a Docker Swarm service, is it possible to map a --device port to the service container (if I am running only one container)?

Answer 1: You can actually get this to work. You need to run the container privileged and use a hostPath, like this:

containers:
- name: acm
  securityContext:
    privileged: true
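The answer snippet above is truncated; below is a minimal sketch of how the rest of that pod spec might look, assuming the device appears as /dev/ttyACM0 on the worker node (the pod name, image, and volume name are hypothetical, not taken from the answer):

apiVersion: v1
kind: Pod
metadata:
  name: acm-device-pod              # hypothetical name
spec:
  containers:
  - name: acm
    image: my-serial-reader:latest  # hypothetical image
    securityContext:
      privileged: true              # required so the container may access the host device node
    volumeMounts:
    - name: ttyacm
      mountPath: /dev/ttyACM0       # path seen inside the container
  volumes:
  - name: ttyacm
    hostPath:
      path: /dev/ttyACM0            # USB serial device on the worker node
      type: CharDevice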

Why does the Kubernetes scheduler ignore nodeAffinity?

Submitted by 丶灬走出姿态 on 2020-01-16 19:04:09
Question: I have a Kubernetes cluster, version 1.12, deployed to AWS with kops. The cluster has several nodes marked with a label 'example.com/myLabel' that takes the values a, b, c, d. For example:

Node name    example.com/myLabel
instance1    a
instance2    b
instance3    c
instance4    d

And there is a test deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-scheduler
spec:
  replicas: 6
  selector:
    matchLabels:
      app: test-scheduler
  template:
    metadata:
      labels:
        app: test-scheduler
    spec:
      tolerations:
      - key:
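The manifest is cut off at the tolerations section, before the affinity rules the title refers to. For orientation, a minimal sketch of how nodeAffinity can be attached to such a pod template, assuming the intent is to require (rather than merely prefer) nodes carrying the label; the values are taken from the table above, everything else is illustrative:

      affinity:
        nodeAffinity:
          # "required..." is a hard constraint; "preferredDuringScheduling..."
          # is only a weighted hint that the scheduler is free to ignore.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.com/myLabel
                operator: In
                values: ["a", "b"]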

Pods stuck in PodInitializing state indefinitely

Submitted by 扶醉桌前 on 2020-01-16 00:45:29
Question: I've got a k8s CronJob that consists of an init container and one app container. If the init container fails, the main container never gets started, and the Pod stays in "PodInitializing" indefinitely. My intent is for the job to fail if the init container fails.

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-name
  namespace: default
  labels:
    run: job-name
spec:
  schedule: "15 23 * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: "Forbid"
  successfulJobsHistoryLimit: 30
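The manifest is truncated and no answer is included in this excerpt. One commonly suggested remedy is to make the pod non-restartable and cap Job retries, so a failing init container fails the Pod (and hence the Job) instead of looping in PodInitializing. A minimal sketch of the jobTemplate portion that would follow the fields above, under that assumption (images and commands are hypothetical stand-ins):

  jobTemplate:
    spec:
      backoffLimit: 0              # fail the Job after the first failed pod instead of retrying
      template:
        spec:
          restartPolicy: Never     # with Never, a failed init container marks the whole Pod as Failed
          initContainers:
          - name: init
            image: busybox         # hypothetical init image
            command: ["sh", "-c", "exit 1"]        # stands in for the real init work
          containers:
          - name: main
            image: busybox         # hypothetical main image
            command: ["sh", "-c", "echo main work"]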

Kubernetes service with clustered PODs in active/standby

Submitted by 前提是你 on 2020-01-05 04:34:07
Question: Apologies for not keeping this short, as any such attempt would make me miss out on some important details of my problem. I have a legacy Java application which works in an active/standby mode in a clustered environment to expose certain RESTful WebServices via a predefined port. If there are two nodes in my app cluster, at any point in time only one would be in Active mode and the other in Passive mode, and the requests are always served by the node whose app is running in Active mode. 'Active'
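The question is cut off here. One common way to let a single Kubernetes Service route traffic only to the active instance is to have each pod report readiness only while it is in Active mode; only ready pods become Service endpoints, so the standby pod receives no requests until it takes over. A minimal sketch of such a probe on the app container, under that assumption (the /health/active endpoint and port 8080 are hypothetical):

    readinessProbe:
      httpGet:
        path: /health/active   # hypothetical endpoint that returns 200 only on the Active node
        port: 8080
      periodSeconds: 5
      failureThreshold: 2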

How to store logs of all pods in kubernetes at one place on Node?

Submitted by 自古美人都是妖i on 2020-01-01 07:10:13
Question: I want to store the logs of pods in Kubernetes in one place, i.e. the output of kubectl logs podname. I referred to the question "Kubernetes basic pod logging", which successfully gives me logs for the counter... How do I modify the args attribute in the spec so that the output of kubectl logs podname is stored in a file in one place? Here is the pod.yaml I created, but I am not able to see any file at location /tmp/logs/:

apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
  - name: spring-boot-pricing
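The manifest above shows only the Service and is truncated before the pod spec in question. A minimal sketch of a pod whose args also write the output to a file on a hostPath volume, assuming /tmp/logs on the node is the desired location (the image and file name are illustrative, not taken from the question):

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    command: ["/bin/sh", "-c"]
    # tee keeps the output on stdout (so kubectl logs still works) and
    # also appends it to a file on the mounted hostPath volume.
    args:
    - i=0; while true; do echo "$i: $(date)" | tee -a /var/log/app/counter.log; i=$((i+1)); sleep 1; done
    volumeMounts:
    - name: node-logs
      mountPath: /var/log/app
  volumes:
  - name: node-logs
    hostPath:
      path: /tmp/logs            # directory on the node where the file appears
      type: DirectoryOrCreate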

Kubernetes pod unable to connect to RabbitMQ instance running locally

Submitted by 青春壹個敷衍的年華 on 2019-12-29 09:22:22
Question: I am moving my application from Docker to Kubernetes/Helm, and so far I have been successful except for setting up incoming/outgoing connections. One particular issue I am facing is that I am unable to connect to the RabbitMQ instance running locally on my machine in another Docker container. app-deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jks
  labels:
    app: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name:
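No answer is included in this excerpt. On Docker Desktop, one commonly suggested approach is to expose the host machine to the cluster through a Service that resolves to host.docker.internal, and point the application at that Service name. A minimal sketch under that assumption (the Service name is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-host                  # hypothetical name the app would use as its broker host
spec:
  type: ExternalName
  externalName: host.docker.internal   # DNS name Docker Desktop provides for the host machine
# The app would then connect to amqp://rabbitmq-host:5672, provided the local
# RabbitMQ container publishes port 5672 on the host.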

How to restart the kubernetes kube-scheduler of a k8s cluster built by kubeadm

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-25 17:44:29
Question: I have created a Kubernetes cluster with kubeadm, following this official tutorial. Each of the control plane components (apiserver, controller manager, kube-scheduler) is a running pod. I learned that kube-scheduler will be using some default scheduling policies (defined here) when it is created by kubeadm. These default policies are a subset of all available policies (listed here). How can I restart the kube-scheduler pod with a new configuration (a different policy list)? Answer 1: The kube-scheduler is
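The answer is cut off after "The kube-scheduler is". On a kubeadm cluster it runs as a static pod, so one commonly described approach is to edit its manifest on the control-plane node and let the kubelet recreate it. A sketch of what the edited manifest might look like, assuming a policy file placed at /etc/kubernetes/scheduler-policy.json (the path and file name are hypothetical; verify the flags against the scheduler version in use, since policy files were later deprecated):

# Excerpt of /etc/kubernetes/manifests/kube-scheduler.yaml on the control-plane node;
# the kubelet watches this directory and recreates the static pod when the file changes.
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --use-legacy-policy-config=true
    - --policy-config-file=/etc/kubernetes/scheduler-policy.json  # custom policy list; the file
      # must also be made available inside the container, e.g. via a hostPath volume mount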

Not able to access kubernetes api from inside a pod container

Submitted by 感情迁移 on 2019-12-25 01:49:00
Question: I have created a HashiCorp Vault deployment and configured Kubernetes auth. The Vault container calls the Kubernetes API internally from the pod to do k8s authentication, and that call is failing with a 500 error code (connection refused). I am using Docker for Windows Kubernetes. I added the config below to Vault for the Kubernetes auth mechanism.

payload.json
{
  "kubernetes_host": "http://kubernetes",
  "kubernetes_ca_cert": <k8s service account token>
}

curl --header "X-Vault-Token: <vault root token>"
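The curl command is cut off and no answer is included. Two things in the payload look suspicious: the API is addressed over plain HTTP (the in-cluster kubernetes Service serves HTTPS on port 443, so port 80 is refused), and the CA-cert field holds a service account token. A sketch of what the fields are normally expected to contain, written as YAML for readability (the values are placeholders, not taken from the source):

kubernetes_host: "https://kubernetes.default.svc:443"   # in-cluster API endpoint over HTTPS
kubernetes_ca_cert: "<contents of /var/run/secrets/kubernetes.io/serviceaccount/ca.crt>"  # cluster CA, not the token
token_reviewer_jwt: "<contents of /var/run/secrets/kubernetes.io/serviceaccount/token>"   # the service account token goes here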

Why is my PodSecurityPolicy applied even if I don't have access?

Submitted by 最后都变了- on 2019-12-24 21:34:55
Question: I have two PodSecurityPolicies:
000-privileged (only kube-system service accounts and admin users)
100-restricted (everything else)
I have a problem with their assignment to pods. First policy binding:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:privileged
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - 000-privileged
  verbs:
  - use
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:privileged
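The bindings are cut off here. For reference, the counterpart binding for the restricted policy typically grants use of it cluster-wide to service accounts; this matters because pods created through controllers are admitted with the pod's service account credentials, and among the policies a request is authorized to use the admission controller prefers non-mutating policies and otherwise picks the alphabetically first name. A minimal sketch of such a restricted binding (not taken from the question, mirroring the apiGroup used above):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["100-restricted"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
roleRef:
  kind: ClusterRole
  name: psp:restricted
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts        # every service account in the cluster
  apiGroup: rbac.authorization.k8s.io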