persistent-volumes

kubernetes timescaledb statefulset: Changes lost on pod recreation

我的梦境 submitted on 2020-06-17 15:09:46
Question: I have a TimescaleDB server running as a StatefulSet in AKS. It appears that when I delete and recreate the timescaledb pod, the changes are lost, even though the pod is associated with the initially bound PV (persistent volume). Any help is appreciated. Below is the PV/PVC config of the statefulset, extracted by running kubectl get statefulset timescaledb -o yaml: template: metadata: creationTimestamp: null labels: app: timescaledb spec: containers: - args: - -c - config_file=/etc/postgresql/postgresql
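The excerpt is cut off before the volume mounts, but a common cause of this symptom is that the PVC is mounted somewhere other than the actual PostgreSQL data directory, so PGDATA lands on the pod's ephemeral filesystem. A minimal sketch of a StatefulSet that keeps PGDATA on the PV — the names, image tag, and sizes are illustrative, not taken from the question:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: timescaledb
spec:
  serviceName: timescaledb
  selector:
    matchLabels:
      app: timescaledb
  template:
    metadata:
      labels:
        app: timescaledb
    spec:
      containers:
      - name: timescaledb
        image: timescale/timescaledb:latest-pg12
        env:
        # Point PGDATA below the mount point so initdb does not
        # refuse the lost+found directory on a fresh filesystem.
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: timescaledb-data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: timescaledb-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

With volumeClaimTemplates, each replica gets a stable claim (timescaledb-data-timescaledb-0, and so on) that survives pod deletion and recreation.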

Cannot mount Config directory in Nextcloud Docker container

痴心易碎 submitted on 2020-05-12 08:07:26
Question: I'm trying to create a custom Nextcloud config locally, then mount it into the appropriate folder using volumes as defined here: https://github.com/nextcloud/docker#persistent-data. All the volume mounts work except for the config mount... Why is that one being treated differently here? Steps to reproduce: 0) Enter a new/empty directory (containing no sub-directories or additional files). 1) Create a docker-compose.yml file containing only the below contents: version: "3.4"
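The compose file in the question is truncated, but the pattern from the linked Nextcloud docs can be sketched as below. Mixing a named volume for /var/www/html with bind mounts of its subdirectories is the part that commonly misbehaves: on first run the entrypoint populates the bind-mounted ./config, and if it stays empty the usual culprits are host-path ownership or the directory pre-existing with files the entrypoint won't overwrite. Paths here are illustrative:

```yaml
version: "3.4"
services:
  app:
    image: nextcloud
    ports:
      - "8080:80"
    volumes:
      # Named volume for the main tree, bind mounts for the
      # persistent-data subdirectories from the Nextcloud docs.
      - nextcloud:/var/www/html
      - ./config:/var/www/html/config
      - ./data:/var/www/html/data
volumes:
  nextcloud:
```

If the config bind mount stays empty, checking that the host directory is writable by the container's www-data user is a reasonable first step.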

How to isolate data of one persistent volume claim from another

放肆的年华 submitted on 2020-04-16 03:28:26
Question: I created a persistent volume using the following YAML: apiVersion: v1 kind: PersistentVolume metadata: name: dq-tools-volume labels: name: dq-tools-volume spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Recycle storageClassName: volume-class nfs: server: 192.168.215.83 path: "/var/nfsshare" After creating this, I created two persistent volume claims using the following YAMLs. PVC1: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-volume-1
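A key constraint here: PV-to-PVC binding is one-to-one, so two claims cannot each bind dq-tools-volume, and data on a single NFS export is not isolated between consumers anyway. The usual approach is one PV per claim, each pointing at its own NFS subdirectory. A sketch reusing the question's server and storage class — the per-claim path is an assumption:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-volume-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: volume-class
  nfs:
    server: 192.168.215.83
    path: "/var/nfsshare/jenkins-1"   # a separate export (or subdirectory) per PV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-1
spec:
  storageClassName: volume-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Repeating this pair with a different path for the second claim keeps each workload's data on its own directory tree.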

Add persistent volume in kubernetes statefulset

て烟熏妆下的殇ゞ submitted on 2020-01-22 22:56:22
Question: I'm new to Kubernetes and I'm trying to add a PVC to my StatefulSet. The PV and PVC are shown here: NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE neo4j-backups 5Gi RWO Retain Bound default/backups-claim manual 1h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE backups-claim Bound neo4j-backups 5Gi RWO manual 51m Basically I want all pods of the StatefulSet to see the contents of that volume, as backup files are stored there. The StatefulSet used can be found
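To mount one existing claim into every pod of a StatefulSet (rather than a per-pod volumeClaimTemplates entry), the claim is referenced from the pod template. A sketch of the relevant fragment — the container name, image, and mount path are placeholders:

```yaml
# Inside the StatefulSet's spec.template.spec:
containers:
- name: neo4j
  image: neo4j
  volumeMounts:
  - name: backups
    mountPath: /backups
volumes:
- name: backups
  persistentVolumeClaim:
    claimName: backups-claim
```

One caveat for this particular PV: with the RWO (ReadWriteOnce) access mode shown in the kubectl output, the volume can only be attached to a single node, so all replicas must be scheduled together; sharing across nodes needs a ReadWriteMany-capable volume such as NFS or Azure Files.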

Kubernetes NFS Persistent Volumes - multiple claims on same volume? Claim stuck in pending?

拟墨画扇 submitted on 2020-01-22 17:07:29
Question: Use case: I have an NFS directory available and I want to use it to persist data for multiple deployments and pods. I have created a PersistentVolume: apiVersion: v1 kind: PersistentVolume metadata: name: nfs-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteMany nfs: server: http://mynfs.com path: /server/mount/point I want multiple deployments to be able to use this PersistentVolume, so my understanding of what is needed is that I need to create multiple PersistentVolumeClaims which
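Two things stand out in this setup. First, a PV binds to exactly one PVC, so "multiple claims on the same volume" is why the second claim sticks in Pending; the usual pattern is a single ReadWriteMany claim that many pods and deployments mount simultaneously. Second, the nfs.server field expects a hostname or IP, not a URL. A sketch of the shared-claim approach, with placeholder names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the pre-created PV, not a dynamically provisioned one
  resources:
    requests:
      storage: 10Gi
---
# Then, in each deployment's pod template:
# spec:
#   volumes:
#   - name: shared-data
#     persistentVolumeClaim:
#       claimName: nfs-pvc
```

The PV itself should use server: mynfs.com (no http:// scheme); the commented fragment shows how every deployment can reference the same claimName.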

Kubernetes persistent volume on Docker Desktop (Windows)

瘦欲@ submitted on 2020-01-15 12:07:35
Question: I'm using Docker Desktop on Windows 10. For the purposes of development, I want to expose a local folder to a container. When running the container in Docker, I do this by specifying the volume flag (-v). How do I achieve the same when running the container in Kubernetes? Answer 1: You should use the hostPath volume type in your pod's spec to mount a file or directory from the host node's filesystem, where the hostPath.path field should be of the following format to accept Windows-like paths: /W/fooapp
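The answer's advice can be sketched as a full pod manifest. Here C:\fooapp is written in the slash form the answer describes; the pod name, image, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fooapp
spec:
  containers:
  - name: fooapp
    image: nginx
    volumeMounts:
    - name: local-folder
      mountPath: /data
  volumes:
  - name: local-folder
    hostPath:
      # C:\fooapp on the Windows host, written as a
      # drive-letter-prefixed POSIX path for Docker Desktop.
      path: /C/fooapp
      type: Directory
```

This is the Kubernetes analogue of docker run -v C:\fooapp:/data; note that hostPath mounts the node's filesystem, which happens to be the local machine only because Docker Desktop runs a single-node cluster.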

How to configure a manually provisioned Azure Managed Disk to use as a Kubernetes persistent volume?

旧街凉风 submitted on 2020-01-14 10:45:27
Question: I'm trying to run the Jenkins Helm chart. As part of this setup, I'd like to pass in a persistent volume that I provisioned ahead of time (or perhaps exported from another cluster during a migration). I'm trying to get my persistent volume (PV) and persistent volume claim (PVC) set up in such a way that when Jenkins starts, it uses my predefined PV and PVC. I think the problem originates from the persistent storage definition for the Azure disk pointing to a VHD in my storage account. Is there
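For a manually provisioned Azure Managed Disk, the PV references the disk's resource ID rather than a VHD blob in a storage account (the VHD form is for legacy unmanaged disks). A sketch of a static PV pinned to a specific claim — all names, the subscription/resource-group placeholders, and sizes are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # claimRef pre-binds the PV so no other PVC can grab it.
  claimRef:
    namespace: default
    name: jenkins-home-claim
  azureDisk:
    kind: Managed
    diskName: jenkins-disk
    diskURI: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/jenkins-disk
```

The Helm chart would then be pointed at the existing claim (for the Jenkins chart, typically via its existingClaim-style persistence value) so it skips dynamic provisioning.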

Azure ACS AzureFile Dynamic Persistent Volume Claim

♀尐吖头ヾ submitted on 2020-01-06 06:05:34
Question: I am trying to dynamically provision storage using a StorageClass I've defined with type azure-file. I've tried setting both of the StorageClass parameters, storageAccount and skuName. Here is my example with storageAccount set: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azuretestfilestorage namespace: kube-system provisioner: kubernetes.io/azure-file parameters: storageAccount: <storage_account_name> The StorageClass is created successfully, however when I try to
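A common variant that avoids naming a specific account is to set only skuName and let the provisioner create or reuse a matching storage account in the cluster's resource group (when storageAccount is set explicitly, that account must already exist where the provisioner looks for it). A sketch with a claim to exercise the class — the claim name and size are illustrative:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuretestfilestorage
provisioner: kubernetes.io/azure-file
parameters:
  # With only skuName set, the provisioner manages the
  # storage account itself instead of requiring one by name.
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azuretestfilestorage
  resources:
    requests:
      storage: 5Gi
```

If provisioning still fails, kubectl describe pvc azurefile-claim surfaces the provisioner's error events, which usually name the missing account or permission.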

How to mount a PostgreSQL volume using AWS EBS in Kubernetes

耗尽温柔 submitted on 2020-01-01 01:15:11
Question: I created the persistent volume (EBS, 10G) and the corresponding persistent volume claim first. But when I try to deploy the PostgreSQL pods as below (yaml file): test-postgresql.yaml I receive these errors from the pod: initdb: directory "/var/lib/postgresql/data" exists but is not empty It contains a lost+found directory, perhaps due to it being a mount point. Using a mount point directly as the data directory is not recommended. Create a subdirectory under the mount point. Why can't the pod use
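The lost+found directory is created by the ext filesystem at the root of the freshly formatted EBS mount, and initdb refuses any non-empty data directory. The standard fix, as the error message itself suggests, is to point PGDATA at a subdirectory of the mount. A sketch of the relevant container fragment — the container name, image tag, and subdirectory name are illustrative:

```yaml
# Inside the deployment's pod spec:
containers:
- name: postgres
  image: postgres:11
  env:
  # Data directory one level below the mount point,
  # so lost+found at the mount root is not in the way.
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata
  volumeMounts:
  - name: postgres-data
    mountPath: /var/lib/postgresql/data
```

An equivalent alternative is to keep PGDATA at its default and add subPath: pgdata to the volumeMount, which mounts only that subdirectory of the volume into the container.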