kubernetes-statefulset

Error “pod has unbound immediate PersistentVolumeClaim” during statefulset deployment

痴心易碎 submitted on 2021-02-16 20:26:33
Question: I am deploying stolon via a StatefulSet (the default from the stolon repo). In the StatefulSet config I have defined:

```yaml
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: stolon-local-storage
    resources:
      requests:
        storage: 1Gi
```

and here is my StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stolon-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

The StatefulSet was created fine, but the pod has
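This error usually means no PersistentVolume exists that can satisfy the generated claim: the kubernetes.io/no-provisioner provisioner does no dynamic provisioning, so each replica needs a manually created PV whose storageClassName, access mode, and size match the claim template. A minimal sketch of such a PV (the host path and node name are assumptions for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: stolon-data-0
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: stolon-local-storage   # must match the claim template
  local:
    path: /mnt/disks/stolon-0              # assumed host path
  nodeAffinity:                            # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node-1"]               # assumed node name
```

With WaitForFirstConsumer the claim stays Pending until the pod is scheduled, so one such PV is needed per replica, on a node the pod can schedule to.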

Kubernetes statefulset with NFS persistent volume

梦想与她 submitted on 2021-02-05 08:46:08
Question: I have a Kubernetes cluster and a simple Deployment for mongodb with an NFS persistent volume, and it works fine. Since resources like databases are stateful, I thought of using a StatefulSet for mongodb instead. But when I go through the documentation, a StatefulSet has volumeClaimTemplates instead of the volumes field used in Deployments. In a Deployment the chain is: PersistentVolume -> PersistentVolumeClaim -> Deployment. But how can we do this in
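With a StatefulSet the chain becomes PersistentVolume -> volumeClaimTemplates (one PVC is generated per replica) -> StatefulSet: you still pre-create the NFS PVs, and each generated claim binds to one of them by matching storageClassName, access mode, and size. A sketch under assumed names (the server address, export paths, and class name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-0
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs-manual        # assumed class name
  nfs:
    server: 10.0.0.10                 # assumed NFS server address
    path: /exports/mongo-0            # assumed export path; one PV per replica
---
# In the StatefulSet spec, the matching claim template would look like:
# volumeClaimTemplates:
# - metadata:
#     name: data
#   spec:
#     accessModes: ["ReadWriteOnce"]
#     storageClassName: nfs-manual
#     resources:
#       requests:
#         storage: 5Gi
```

Each pod then gets a stable claim named data-<statefulset>-<ordinal> that survives pod rescheduling.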

Kubernetes: Cassandra(stateful set) deployment on GCP

本小妞迷上赌 submitted on 2021-01-28 12:20:38
Question: Has anyone tried deploying Cassandra (POC) on GCP using Kubernetes (not GKE)? If so, can you please share info on how to get it working?

Answer 1: I have implemented Cassandra on Kubernetes. Please find my deployment and service YAML files:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName:
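Two notes on this answer: the apps/v1beta2 API group was removed in Kubernetes 1.16, so on current clusters the StatefulSet should use apps/v1; and the headless Service (clusterIP: None) is what gives each Cassandra pod a stable DNS name for peer discovery. A sketch of the apps/v1 form (replica count and image tag are assumptions, not from the answer):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra       # must reference the headless Service above
  replicas: 3                  # assumed replica count
  selector:                    # selector is required in apps/v1
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11  # assumed image tag
        ports:
        - containerPort: 9042
```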

Kubectl rollout restart for statefulset

早过忘川 submitted on 2021-01-27 05:46:51
Question: As per the kubectl docs, kubectl rollout restart is applicable to Deployments, DaemonSets, and StatefulSets. It works as expected for Deployments, but for the StatefulSet it restarts only one of the two pods.

```
✗ k rollout restart statefulset alertmanager-main   (playground-fdp/monitoring)
statefulset.apps/alertmanager-main restarted
✗ k rollout status statefulset alertmanager-main    (playground-fdp/monitoring)
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling
```
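This behavior is inherent to the StatefulSet RollingUpdate strategy: pods are replaced one at a time, highest ordinal first, and the controller waits for each restarted pod to become Ready before moving to the next. If a restarted pod never reaches Ready, the rollout stalls after that one pod. It is also worth checking the updateStrategy in the spec, since two settings silently prevent a full restart (values below are the defaults, shown for illustration):

```yaml
# In the StatefulSet spec:
updateStrategy:
  type: RollingUpdate    # with OnDelete, pods are only replaced when deleted manually
  rollingUpdate:
    partition: 0         # any value > 0 leaves pods with lower ordinals untouched
```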

How to deploy logstash with persistent volume on kubernetes?

此生再无相见时 submitted on 2021-01-07 06:31:18
Question: Using GKE to deploy logstash as a StatefulSet with a PVC. I also need to install an output plugin. When I don't use while true; do sleep 1000; done; in the container's command args, it can't deploy with the PVC successfully: the pod goes into a CrashLoopBackOff error.

```
Normal   Created  13s (x2 over 14s)  kubelet  Created container logstash
Normal   Started  13s (x2 over 13s)  kubelet  Started container logstash
Warning  BackOff  11s (x2 over 12s)  kubelet  Back-off restarting failed container
```

From here I found it can
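CrashLoopBackOff here typically means the container's main process exits shortly after starting; the sleep loop masks that by keeping a long-running process alive, but then logstash itself never runs. A common pattern is to install the plugin first and then exec logstash so it becomes the container's main process. A sketch, assuming the image tag and plugin name (both are placeholders, not from the question):

```yaml
containers:
- name: logstash
  image: docker.elastic.co/logstash/logstash:7.10.0   # assumed image tag
  command: ["/bin/sh", "-c"]
  args:
    # Install the plugin, then replace the shell with logstash so it stays
    # the container's main process instead of the shell exiting.
    - bin/logstash-plugin install logstash-output-example && exec bin/logstash
```

An initContainer writing to a shared emptyDir is an alternative when the plugin install is slow and should not delay every restart.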