persistent-volumes

Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict

六眼飞鱼酱① submitted on 2019-12-29 04:36:11
Question: I am trying to set up a Kubernetes cluster. I have a PersistentVolume, a PersistentVolumeClaim, and a StorageClass all set up and running, but when I want to create a pod from a deployment, the pod is created but hangs in the Pending state. After running kubectl describe I get only this warning: "1 node(s) had volume node affinity conflict." Can somebody tell me what I am missing in my volume configuration?

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: mariadb-pv0
      name: mariadb…
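This warning typically means the PV's nodeAffinity (or the zone of its backing disk) matches no node the scheduler can place the pod on. Below is a minimal sketch of a correctly pinned local PV; the node name worker-1 and the path are hypothetical and must match a real node label (check with kubectl get nodes --show-labels):

    # Sketch only: pins the PV to a node named "worker-1" (an assumption).
    # If kubernetes.io/hostname matches no schedulable node, every pod
    # using this PV reports "volume node affinity conflict".
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mariadb-pv0
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      local:
        path: /mnt/disks/mariadb      # hypothetical directory on that node
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-1        # must be an actual node name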

GKE Kubernetes Persistent Volume

时光怂恿深爱的人放手 submitted on 2019-12-25 01:53:12
Question: I am trying to use a persistent volume for my RethinkDB server, but I get this error:

    Unable to mount volumes for pod "rethinkdb-server-deployment-6866f5b459-25fjb_default(efd90244-7d02-11e8-bffa-42010a8400b9)": timeout expired waiting for volumes to attach/mount for pod "default"/"rethinkdb-server-deployment-
    Multi-Attach error for volume "pvc-f115c85e-7c42-11e8-bffa-42010a8400b9"
    Volume is already used by pod(s) rethinkdb-server-deployment-58f68c8464-4hn9x

I think that Kubernetes deploys a new…
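A ReadWriteOnce disk can only be attached to one pod at a time, and during a rolling update the old replica still holds the volume while the new one starts, which produces exactly this Multi-Attach error. A minimal sketch of the common fix, assuming the Deployment named in the error (image tag, labels, and claim name are placeholders):

    # Sketch only: "Recreate" deletes the old pod before starting the new
    # one, so the ReadWriteOnce volume is never claimed by two pods at once.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rethinkdb-server-deployment
    spec:
      replicas: 1
      strategy:
        type: Recreate              # instead of the default RollingUpdate
      selector:
        matchLabels:
          app: rethinkdb            # hypothetical label
      template:
        metadata:
          labels:
            app: rethinkdb
        spec:
          containers:
            - name: rethinkdb
              image: rethinkdb:2.3  # hypothetical image tag
              volumeMounts:
                - name: data
                  mountPath: /data
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: rethinkdb-pvc   # hypothetical claim name

A StatefulSet is the other usual answer when the data must follow exactly one replica.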

WSO2 loses APIs after changes in Docker container

纵饮孤独 submitted on 2019-12-23 21:54:31
Question: I'm having another problem using WSO2 API Manager 2.0.0: I have installed it in Docker using three containers (one for APIM, one for Analytics, and one for MySQL), and I replace some configuration files with my custom versions (e.g. DB, server name, gateway setup...). Both APIM and Analytics are configured to save data in the MySQL container, and I am able to see changes in the DB. The issue is that I cannot find my APIs in either the publisher or the store after the container has been…
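A likely cause, worth checking: API Manager persists the published API gateway artifacts as Synapse XML files on the container filesystem (under repository/deployment/server), not only in the database, so recreating the container discards them unless that directory lives on a volume. A sketch under that assumption; the image name and install path inside the container are guesses that must be adapted:

    # Sketch only: keep the deployment artifacts on a named volume so
    # published APIs survive container recreation. The path is an
    # assumption based on the default APIM 2.0.0 layout.
    docker run -d --name apim \
      -p 9443:9443 \
      -v apim-deployment:/home/wso2carbon/wso2am-2.0.0/repository/deployment/server \
      my-wso2am:2.0.0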

Using Windows SMB shares from Kubernetes deployment app

大城市里の小女人 submitted on 2019-12-23 10:28:16
Question: We are migrating legacy Java and .NET applications from on-premises VMs to an on-premises Kubernetes cluster. Many of these applications use Windows file shares to transfer files to and from other existing systems. Re-engineering all the solutions to avoid Samba shares is a lower priority than the migration itself, so if we want to migrate we will have to find a way of keeping many things as they are. We have set up a 3-node cluster on 3 CentOS 7 machines using kubeadm and Canal…
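Kubernetes had no built-in SMB/CIFS volume type at the time, so a common workaround is to mount the share on every node through fstab and expose the mount point to pods with a hostPath volume. A sketch under those assumptions; the share path, credentials file, and mount point are hypothetical:

    # Sketch only. On each CentOS node:
    #   yum install -y cifs-utils
    #   echo "//fileserver/share /mnt/winshare cifs credentials=/etc/smb-credentials,_netdev 0 0" >> /etc/fstab
    #   mount -a
    # Then expose the node-local mount to pods (access modes on hostPath
    # are advisory, not enforced):
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: winshare-pv
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteMany
      hostPath:
        path: /mnt/winshare

A CIFS FlexVolume driver is the other route people took before CSI drivers existed.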

MountVolume.SetUp failed for volume “nfs” : mount failed: exit status 32

岁酱吖の submitted on 2019-12-22 05:13:08
Question: This is a second question, following my first question at "PersistentVolumeClaim is not bound: 'nfs-pv-provisioning-demo'". I am setting up a single-node Kubernetes lab and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs Based on feedback provided by 'helmbert', I modified the content of https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs…
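"exit status 32" is mount(8)'s generic failure code, and on a freshly installed node it most often means the NFS client utilities are missing. A short sketch of the usual first checks (the server IP is a placeholder):

    # Sketch only: install the NFS client tools on every node that may
    # schedule the pod, then confirm the export is visible from the node.
    sudo yum install -y nfs-utils        # CentOS/RHEL
    sudo apt-get install -y nfs-common   # Debian/Ubuntu
    showmount -e 10.0.0.10               # placeholder NFS server address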

PersistentVolumeClaim is not bound: “nfs-pv-provisioning-demo”

北城以北 submitted on 2019-12-21 22:54:47
Question: I am setting up a single-node Kubernetes lab and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs Trying the first section, the NFS server part, I executed three commands:

    $ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
    $ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
    $ kubectl create -f examples/volumes/nfs/nfs-server-service…
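The first manifest in that example creates a PersistentVolumeClaim that expects a dynamic provisioner (it was written for GCE persistent disks); on a bare single-node lab nothing provisions a volume, so the claim stays unbound. A sketch of a static hostPath PV that would let the claim bind, assuming the claim requests ReadWriteOnce storage (directory and size are placeholders; the capacity must be at least the claim's request):

    # Sketch only: a static PV to satisfy the example's claim on a lab
    # node with no cloud provisioner.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv-provisioning-demo
    spec:
      capacity:
        storage: 200Gi               # must cover the claim's request
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /data/nfs-backing      # hypothetical directory on the node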

Kubernetes - how to download a PersistentVolume's content

早过忘川 submitted on 2019-12-21 11:07:11
Question: I have a test-executor Pod in a K8s cluster, created through Helm, which asks for a dynamically created PersistentVolume where it stores the test results. Now I would like to get the contents of this volume. It seems quite a natural thing to do. I would expect something like kubectl download pv <id>, but I can't google up anything. How can I get the contents of a PersistentVolume? I am in AWS EKS, so the AWS API is also an option. Also I can access ECR, so perhaps I could somehow store it as an image and…
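There is no kubectl download pv; the usual route is to copy out of whichever pod mounts the volume with kubectl cp, or kubectl exec plus tar. A sketch, where the pod name, namespace, and mount path are placeholders:

    # Sketch only: pull files out of the pod that mounts the volume.
    kubectl cp default/test-executor:/results ./results
    # Equivalent with exec + tar, which avoids kubectl cp's tar quirks:
    kubectl exec -n default test-executor -- tar cf - /results | tar xf - -C .

If the original pod is gone, a throwaway pod that mounts the same PVC and just sleeps can serve as the copy source.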

Kubernetes PVC with ReadWriteMany on AWS

送分小仙女□ submitted on 2019-12-21 09:39:03
Question: I want to set up a PVC on AWS, where I need ReadWriteMany as the access mode. Unfortunately, EBS only supports ReadWriteOnce. How could I solve this? I have seen that there is a beta provider for AWS EFS which supports ReadWriteMany, but as said, this is still beta, and its installation looks somewhat flaky. I could use node affinity to force all pods that rely on the EBS volume onto a single node and stay with ReadWriteOnce, but this limits scalability. Are there any other ways of how to solve…
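EFS speaks NFSv4, so it can also be consumed without the beta provisioner through a static nfs PersistentVolume pointing at the filesystem's DNS name, which gives ReadWriteMany. A sketch, assuming an existing EFS filesystem (the filesystem ID and region are placeholders):

    # Sketch only: mount EFS as plain NFS; no beta provisioner needed.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 100Gi             # required by the API, not enforced by NFS
      accessModes:
        - ReadWriteMany
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com
        path: /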

Kubernetes: Can't delete PersistentVolumeClaim (pvc)

本秂侑毒 submitted on 2019-12-20 11:49:50
Question: I created the following PersistentVolume and PersistentVolumeClaim by calling kubectl create -f nameOfTheFileContainingTheFollowingContent.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-monitoring-static-content
    spec:
      capacity:
        storage: 100Mi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/some/path"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pv-monitoring-static-content-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ""
      resources:
        requests:
          storage: 100Mi

After this I…
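A PVC that refuses to die usually sits in Terminating because the kubernetes.io/pvc-protection finalizer blocks deletion while some pod still mounts it. A sketch of the usual sequence (removing the finalizer is a last resort):

    # Sketch only: find what still uses the claim, then delete that first.
    kubectl describe pvc pv-monitoring-static-content-claim   # check "Mounted By"
    # If nothing uses it and deletion still hangs, clear the finalizer:
    kubectl patch pvc pv-monitoring-static-content-claim \
      -p '{"metadata":{"finalizers":null}}'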