persistent-volumes

Kubernetes / Rancher 2, mongo-replicaset with Local Storage Volume deployment

血红的双手。 Submitted on 2019-12-20 01:45:07
Question: I try and try, but Rancher 2.1 fails to deploy the "mongo-replicaset" Catalog App with Local Persistent Volumes configured. How do I correctly deploy a mongo-replicaset with a Local Storage Volume? Any debugging techniques are appreciated, since I am new to Rancher 2. I follow the four steps (A, B, C, D) below, but the first pod deployment never completes. What is wrong with it? Logs and result screens are at the end. Detailed configuration can be found here. Note: deployment without Local Persistent Volumes succeeds.
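A chart like this typically needs a matching StorageClass and pre-created local PersistentVolumes before its claims can bind. The following is only a minimal sketch of what those objects could look like; the names, path, and node hostname are hypothetical, not taken from the question:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                  # hypothetical name; must match the storage class the chart requests
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-local-pv-0               # hypothetical name; one PV is needed per replica
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/mongo-0           # directory that must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1        # hypothetical node name

With WaitForFirstConsumer, a claim stays Pending until its pod is scheduled, so a first pod that never finishes deploying is often a sign that no local PV with matching storage class and node affinity exists.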

K8S Unable to mount AWS EBS as a persistent volume for pod

五迷三道 Submitted on 2019-12-13 03:30:58
Question: Please suggest the cause of the error that prevents an AWS EBS volume from being mounted in a pod. From journalctl -b -f -u kubelet:

1480 kubelet.go:1625] Unable to mount volumes for pod "nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)": timeout expired waiting for volumes to attach/mount for pod "default"/"nginx". list of unattached/unmounted volumes=[ebs]; skipping pod
1480 pod_workers.go:186] Error syncing pod ddc938ee-edda-11e7-ae06-06bb783bb15c ("nginx_default(ddc938ee-edda-11e7-ae06
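With the in-tree EBS integration, this timeout usually points at attachment rather than mounting: the EBS volume and the node must be in the same availability zone and the cluster must run with the AWS cloud provider enabled. A pre-created PV referencing an EBS volume might look roughly like this sketch; the PV name, size, and volume ID are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-ebs-pv                   # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce                    # an EBS volume can be attached to only one node at a time
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0    # hypothetical EBS volume ID; must be in the node's availability zone
    fsType: ext4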

Container keeps crashing for Pod in minikube after the creation of PV and PVC

我是研究僧i Submitted on 2019-12-11 19:11:57
Question: I have a REST application integrated with Kubernetes for testing REST queries. When I execute a POST query on the client side, the status of the Job that is created automatically remains PENDING indefinitely. The same happens with the Pod, which is also created automatically. When I looked deeper into the events in the dashboard, it attaches the volume but is unable to mount it and gives this error: Unable to mount volumes for pod "ingestion-88dhg_default(4a8dd589-e3d3-4424-bc11
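In minikube this symptom is often a claim that never binds because the PV and PVC disagree on storage class, access mode, or size. A minimal hostPath PV/PVC pair that does bind could look like the sketch below; all names, the path, and the size are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ingestion-pv                   # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /data/ingestion              # path inside the minikube VM
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ingestion-pvc                  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual             # must match the PV so the claim binds to it
  resources:
    requests:
      storage: 1Gi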

Can I rely on volumeClaimTemplates naming convention?

≡放荡痞女 Submitted on 2019-12-09 06:33:30
Question: I want to set up a pre-defined PostgreSQL cluster on bare-metal Kubernetes 1.7 with local PVs enabled. I have three worker nodes. I create a local PV on each node and deploy the StatefulSet successfully (with a fairly complex script to set up Postgres replication). However, I noticed that there is a kind of naming convention between the volumeClaimTemplates and the PersistentVolumeClaims. For example: apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: postgres volumeClaimTemplates: - metadata: name:
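The convention in question is that the StatefulSet controller names each claim <template-name>-<statefulset-name>-<ordinal>, and that name is stable across pod restarts. A sketch of how that plays out, using a hypothetical template name pgdata (the excerpt above is truncated before the real name):

# For a StatefulSet named "postgres" with a volumeClaimTemplate named "pgdata",
# the controller creates claims pgdata-postgres-0, pgdata-postgres-1, pgdata-postgres-2.
apiVersion: apps/v1beta1               # API version used by Kubernetes 1.7
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6          # hypothetical image
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata                   # hypothetical template name
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi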

Kubernetes Persistent Volume Access Modes: ReadWriteOnce vs ReadOnlyMany vs ReadWriteMany

送分小仙女□ Submitted on 2019-12-07 20:44:55
Question: As per this official document, Kubernetes Persistent Volumes support three types of access modes: ReadOnlyMany, ReadWriteOnce, and ReadWriteMany. The definitions given in the document are very high-level. It would be great if someone could explain them in a little more detail, along with some examples of different use cases where we should use one versus the other. Answer 1: You should use ReadWriteX when you plan to have Pods that need to write to the volume, not only read data from it. You should use XMany when you want Pods to be able to access the given volume while those workloads are running on different nodes.
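For context, the access mode describes how many nodes may mount the volume at once, and it is declared on the PV and requested by the PVC. A tiny hypothetical claim asking for a shared, multi-node read-write volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                    # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                    # many nodes may mount the volume read-write;
                                       # ReadWriteOnce allows one node, ReadOnlyMany allows many read-only nodes
  resources:
    requests:
      storage: 1Gi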

kubeadm/kubectl/kube-apiserver turn on feature gate

爱⌒轻易说出口 Submitted on 2019-12-07 01:52:00
Question: I'm trying to test local persistent volumes in Kubernetes v1.9.2. From what I gather (and I may be wrong!), I cannot use kubeadm to add these feature gates:

$ sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubeadm init --help
... --feature-gates string A set of key=value
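One route kubeadm of that era did expose is passing extra arguments to the control-plane components through a config file. The sketch below assumes the v1alpha1 MasterConfiguration format and the alpha gate names used for local volumes in 1.9 (PersistentLocalVolumes, VolumeScheduling); treat it as an illustration under those assumptions rather than a verified recipe:

# kubeadm-config.yaml (hypothetical file name); used as: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.9.2
apiServerExtraArgs:
  feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true"
controllerManagerExtraArgs:
  feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true"
schedulerExtraArgs:
  feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true"
# The kubelet's own gates still have to be set separately, for example via
# KUBELET_EXTRA_ARGS="--feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true"
# in the kubelet's systemd drop-in.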

Kubernetes: Is it possible to mount volumes to a container running as a CronJob?

浪子不回头ぞ Submitted on 2019-12-06 23:17:34
Question: I'm attempting to create a Kubernetes CronJob that runs an application every minute. A prerequisite is that I need to get my application code onto the container that runs within the CronJob. I figure the best way to do this is to use a persistent volume and a PersistentVolumeClaim, then define the volume and mount it into the container. I've done this successfully with containers running within a Pod, but it appears to be impossible within a CronJob? Here's my attempted configuration: apiVersion: batch
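It is possible: a CronJob embeds an ordinary pod template, so volumes and volumeMounts go under spec.jobTemplate.spec.template.spec exactly as they would in a Pod. A sketch, with hypothetical name, image, command, and claim:

apiVersion: batch/v1beta1              # CronJob API group/version of that Kubernetes era
kind: CronJob
metadata:
  name: app-runner                     # hypothetical name
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: app
              image: busybox           # hypothetical image
              command: ["sh", "-c", "ls /mnt/code"]
              volumeMounts:
                - name: code
                  mountPath: /mnt/code
          volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code-pvc    # hypothetical pre-existing claim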

Kubernetes - how to download a PersistentVolume's content

只愿长相守 Submitted on 2019-12-04 07:09:16
I have a test-executor Pod in a K8s cluster, created through Helm, which asks for a dynamically created PersistentVolume where it stores the test results. Now I would like to get the contents of this volume. It seems like quite a natural thing to do. I would expect something like kubectl download pv <id>, but I can't find anything by googling. How can I get the contents of a PersistentVolume? I am on AWS EKS, so the AWS API is also an option. I can also access ECR, so perhaps I could somehow store it as an image and download that? Or, in general, I am looking for a way to transfer a directory, even as an archive. But It
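One common workaround is to run a throwaway pod that mounts the same claim and then copy the files out with kubectl cp or kubectl exec. A sketch under that assumption; the pod name, image, claim name, and path are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pv-reader                      # hypothetical helper pod
spec:
  restartPolicy: Never
  containers:
    - name: reader
      image: busybox                   # hypothetical image
      command: ["sleep", "3600"]       # keep the pod alive long enough to copy files out
      volumeMounts:
        - name: results
          mountPath: /results
  volumes:
    - name: results
      persistentVolumeClaim:
        claimName: test-results-pvc    # hypothetical claim created by the test executor
# Then copy the directory:   kubectl cp default/pv-reader:/results ./results
# Or stream it as an archive: kubectl exec pv-reader -- tar cf - /results > results.tar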

Kubernetes PVC with ReadWriteMany on AWS

落花浮王杯 Submitted on 2019-12-04 03:38:27
I want to set up a PVC on AWS where I need ReadWriteMany as the access mode. Unfortunately, EBS only supports ReadWriteOnce. How could I solve this? I have seen that there is a beta provisioner for AWS EFS which supports ReadWriteMany, but as said, it is still beta and its installation looks somewhat flaky. I could use node affinity to force all pods that rely on the EBS volume onto a single node and stay with ReadWriteOnce, but this limits scalability. Are there any other ways to solve this? Basically, what I need is a way to store data persistently and share it across pods that
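One way around the EBS limitation, without the beta EFS provisioner, is to mount an EFS file system as plain NFS and expose it through a statically created PV with ReadWriteMany. A sketch; the file-system DNS name, sizes, and object names are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-shared                     # hypothetical name
spec:
  capacity:
    storage: 5Gi                       # EFS is elastic; the value only has to satisfy the claim
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.eu-west-1.amazonaws.com   # hypothetical EFS mount target
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-shared-claim               # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                 # bind to the pre-created PV rather than a dynamic class
  resources:
    requests:
      storage: 5Gi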