kubectl

Container logs not working after cluster update on GKE

杀马特。学长 韩版系。学妹 Submitted on 2020-06-01 01:40:53
Question: I recently upgraded my cluster, which runs multiple containers for Java microservices (using Spring Boot's default log4j2 configuration). Since then, the container logs have stopped updating. The kubectl logs command works fine and shows all the recent logs, but the logs that should appear in the GKE dashboard simply stopped showing up. I checked Google's Logging API and it is enabled. Does anyone know what's the…
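
A likely cause after a GKE upgrade (an assumption, since the question is truncated) is that the cluster is no longer wired to Cloud Logging, or is still on the legacy logging agent. A minimal check-and-fix sketch with gcloud; the cluster name and zone are placeholders:

    # Check which logging service the cluster reports
    gcloud container clusters describe my-cluster --zone us-central1-a \
        --format="value(loggingService)"
    # Point the cluster at the GKE-native Cloud Logging integration
    gcloud container clusters update my-cluster --zone us-central1-a \
        --logging-service=logging.googleapis.com/kubernetes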

Kubernetes - Granting RBAC access to anonymous users in kube-dns

心不动则不痛 Submitted on 2020-05-30 09:56:39
Question: I have a Kubernetes cluster set up with a master and a worker node. kubectl cluster-info shows kubernetes-master as well as kube-dns running successfully. I am trying to access the URL below; since it is internal to my organization, it is not visible to the external world. https://10.118.3.22:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy But when I access it, I get the error below: { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message":…
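
Granting unauthenticated access is generally discouraged, but a minimal RBAC sketch for this URL would bind the system:anonymous user to a Role allowing GET on the kube-dns service proxy. All names below are illustrative; the second resourceNames entry assumes the port-qualified form used in the URL, and the apiserver must still permit anonymous requests (--anonymous-auth=true, the default):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kube-dns-proxy            # illustrative name
      namespace: kube-system
    rules:
    - apiGroups: [""]
      resources: ["services/proxy"]
      resourceNames: ["kube-dns", "kube-dns:dns"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kube-dns-proxy-anonymous  # illustrative name
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kube-dns-proxy
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: system:anonymous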

kubectl wait --for=condition=complete --timeout=30s

坚强是说给别人听的谎言 Submitted on 2020-05-29 03:52:19
Question: I am trying to check the status of a pod using the kubectl wait command, following this documentation. This is the command I am running: kubectl wait --for=condition=complete --timeout=30s -n d1 job/test-job1-oo-9j9kj This is the error I am getting: Kubectl error: status.conditions accessor error: Failure is of the type string, expected map[string]interface{} My kubectl -o json output can be accessed via this github link. Can someone help me fix the issue? Answer 1: This totally…
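
Note that the complete condition only exists on Jobs; Pods expose conditions such as Ready. Older kubectl clients were also known to mis-parse Job conditions, so upgrading the client is worth trying (an assumption here, since the answer is truncated). A sketch of both forms, reusing the names from the question (the pod name is a placeholder):

    # Wait for a Job to finish
    kubectl wait --for=condition=complete --timeout=30s -n d1 job/test-job1-oo-9j9kj
    # For a Pod, wait on the Ready condition instead
    kubectl wait --for=condition=Ready --timeout=30s -n d1 pod/my-pod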

kubectl exec into pod resulting in Unable to use a TTY error every time if run through automation

。_饼干妹妹 Submitted on 2020-05-17 09:24:32
Question: I have a simple automation that execs into a Kubernetes pod, but it always results in the error below: kubectl exec -it my-pod -c my-contaner -n my-namespace /bin/bash Unable to use a TTY - input is not a terminal or the right kind of file I am trying to run a simple shell script from Jenkins to exec into a pod and run ls -las in the root directory, but it won't exec into the pod automatically. The same thing works fine if I do it manually in a terminal on the Linux server. Can someone…
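
The -t flag asks kubectl for a TTY, and Jenkins jobs don't run with one attached. A minimal fix is to drop -t (and -i, unless something is piped to stdin) and pass the command after --:

    # No TTY in CI: drop -t; keep -i only if you pipe data on stdin
    kubectl exec my-pod -c my-contaner -n my-namespace -- ls -las /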

Limit the number of pods per node

泄露秘密 Submitted on 2020-05-15 05:28:25
Question: I'm trying to limit the number of pods on each node of my cluster. I managed to add a global per-node limit via kubeadm init with a config file:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: <subnet>
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10

This is not ideal, because the limit is applied even on the master node (where multiple kube-system pods are running, and the number of pods there may grow beyond 10). I…
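
One way to keep the master at its default while capping the workers is to set maxPods in the node-local kubelet config on each worker and restart the kubelet. A sketch, assuming a kubeadm-provisioned node where the kubelet config lives at /var/lib/kubelet/config.yaml and maxPods is not already set there:

    # Run on each worker node only; the master keeps its own setting
    echo "maxPods: 10" | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet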

How do I delete clusters and contexts from kubectl config?

旧巷老猫 Submitted on 2020-05-09 17:42:04
Question: kubectl config view shows contexts and clusters corresponding to clusters that I have deleted. How can I remove those entries? The command kubectl config unset clusters appears to delete all clusters. Is there a way to selectively delete cluster entries? What about contexts? Answer 1: kubectl config unset takes a dot-delimited path. You can delete cluster/context/user entries by name, e.g.:

kubectl config unset users.gke_project_zone_name
kubectl config unset contexts.aws_cluster1-kubernetes
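
kubectl config also has dedicated subcommands for removing clusters and contexts by name, which read more clearly than the dotted paths:

    kubectl config delete-cluster gke_project_zone_name
    kubectl config delete-context aws_cluster1-kubernetes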

Kubernetes: share a directory from your local system to a Kubernetes container

99封情书 Submitted on 2020-04-29 09:36:11
Question: Is there any way to share a directory or files from your local system with a Kubernetes container? I have a deployment YAML file, and I want to share the directory without using kubectl cp. I tried a ConfigMap, but later learned that a ConfigMap cannot hold a whole directory, only individual files. If anyone has any idea, please share. Please note: I do not want to host the files inside minikube; I want to push the directory directly to the container. Answer 1: I found a way. We can specify the…
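
The answer is cut off, but one common approach it may be describing is a hostPath volume, which mounts a directory from the node into the container. All names and paths below are illustrative; on minikube you would first bridge a local directory onto the node with minikube mount /local/path:/node/path:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: local-dir
          mountPath: /data              # where the directory appears in-container
      volumes:
      - name: local-dir
        hostPath:
          path: /home/user/shared       # directory on the node (illustrative)
          type: Directory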
