kops

Multiple Certification Authority certificates?

Submitted by 蓝咒 on 2020-01-03 03:26:04
Question: I have created a Kubernetes cluster on AWS using kops. Unless I am wrong, the ca.crt and ca.key files are in the following locations, as indicated by this very helpful answer:

- s3://<BUCKET_NAME>/<CLUSTER_NAME>/pki/private/ca/*.key
- s3://<BUCKET_NAME>/<CLUSTER_NAME>/pki/issued/ca/*.crt

However, I couldn't help noticing that in my ~/.kube/config file (which was created automatically by kops), I have an entry named certificate-authority-data whose contents are different from both of the …

How to Add Users to Kubernetes (kubectl)?

Submitted by 烂漫一生 on 2019-12-29 10:16:06
Question: I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine. I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: REDACTED
        server: https://api.{CLUSTER_NAME}
      name: {CLUSTER_NAME}
    contexts:
    - context:
        cluster: {CLUSTER_NAME}
        user: {CLUSTER_NAME}
      name: {CLUSTER_NAME}
    current-context: {CLUSTER_NAME} …
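One common way to answer this (a sketch, not the only approach; the user name "jane", group "devs", and cluster name "my-cluster" are hypothetical) is to issue a client certificate signed by the cluster CA that kops keeps in the S3 state store, then register the credentials and a context in the kubeconfig:

```shell
# Generate a key and CSR for the new user; CN becomes the username,
# O becomes the group for RBAC purposes.
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=devs"

# Sign with the cluster CA (the ca.crt/ca.key pair from the kops
# state store).
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out jane.crt -days 365

# Register the credentials and a context in ~/.kube/config.
kubectl config set-credentials jane \
  --client-certificate=jane.crt --client-key=jane.key --embed-certs=true
kubectl config set-context jane@my-cluster --cluster=my-cluster --user=jane
```

Authorization is separate: the user still needs RBAC bindings (e.g. a RoleBinding referencing the "devs" group) before they can do anything.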

Even after adding an additional Kubernetes node, the new node stays unused and I get the error "No nodes are available that match all of the predicates"

Submitted by 不羁岁月 on 2019-12-25 02:26:15
Question: We tried to add one more deployment with 2 pods to an existing mix of pods scheduled over a cluster of 4 worker nodes and 1 master node. We are getting the following error: No nodes are available that match all of the predicates: Insufficient cpu (4), Insufficient memory (1), PodToleratesNodeTaints (2). Looking at other threads and the documentation, this would be the case when existing nodes exceed CPU capacity (on 4 nodes) and memory capacity (on 1 node)... To solve the resource issue, we added another …
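For context on why a freshly added node can still be rejected: the scheduler compares the sum of container resource *requests* against each node's allocatable capacity, and PodToleratesNodeTaints fails when the candidate node carries a taint (a master node, typically) that the pod does not tolerate. A minimal sketch of a pod spec addressing both, with illustrative request values and the standard master taint assumed:

```yaml
# Fragment of a Deployment pod template. Requests must fit within a
# node's allocatable resources or the pod stays Pending; the
# toleration is only needed if you intend the pod to land on a
# tainted (e.g. master) node.
spec:
  containers:
    - name: app
      image: my-app:latest        # placeholder image
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```

`kubectl describe node <NODE>` shows each node's allocatable resources and current requests, which usually makes the mismatch obvious.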

Automatic spot pricing for kops deployment

Submitted by 给你一囗甜甜゛ on 2019-12-23 17:37:53
Question: I can already do spot deployments with kops, but it requires manually editing the instance groups (nodes):

    $ kops edit ig --name=test.dev.test.com nodes

    machineType: t2.medium        machineType: t1.nano
    maxSize: 2             =>     maxSize: 1
    minSize: 2                    minSize: 1

I need to look into a way of doing this automatically with the average spot price + 10%. I would also like to have at least 1 master and 1 node running on normal (on-demand) instances, to survive a complete spot-overbid shutdown, with the rest at spot price. Can …
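For the spot part itself, the kops InstanceGroup spec has a `maxPrice` field that turns the group's auto-scaling group into a spot request at the given bid. A sketch of such a manifest (cluster name taken from the question, the bid value is illustrative); the "average spot price + 10%" piece is not built into kops and would need an external script, e.g. one polling `aws ec2 describe-spot-price-history` and re-applying the manifest:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
  labels:
    kops.k8s.io/cluster: test.dev.test.com
spec:
  role: Node
  machineType: t2.medium
  maxSize: 2
  minSize: 2
  maxPrice: "0.02"   # spot bid in USD; e.g. average spot price + 10%
```

Keeping at least one master and one node on-demand falls out naturally: only instance groups that carry `maxPrice` bid for spot capacity, so a separate IG without the field stays on normal instances.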

(kops) Kubernetes Service mapped to DNS names in AWS Route53?

Submitted by 梦想与她 on 2019-12-23 00:52:19
Question: I am new to kops and somewhat new to Kubernetes as well. I managed to create a cluster with kops, and run a deployment and a service on it. Everything went well: an ELB was created for me and I could access the application via the ELB endpoint. My question is: how can I map my subdomain (e.g. my-sub.example.com) to the generated ELB endpoint? I believe this should somehow be done automatically by Kubernetes, and I should not hardcode the ELB endpoint inside my code. I tried something that has to do …
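One widely used way to get exactly this automation (a sketch; it assumes the external-dns controller is deployed in the cluster with IAM access to the Route53 hosted zone for example.com, and the service/app names are placeholders) is to annotate the Service and let the controller create the Route53 record pointing at the ELB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # external-dns watches for this annotation and creates/updates a
    # Route53 record aliased to the service's ELB hostname.
    external-dns.alpha.kubernetes.io/hostname: my-sub.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

The application code then only ever references my-sub.example.com; if the service (and therefore the ELB) is recreated, the controller updates the record.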

kops / kubectl - how do I import state created on another server?

Submitted by 核能气质少年 on 2019-12-20 14:21:53
Question: I set up my Kubernetes cluster using kops from my local machine, so my .kube directory is stored locally, but I configured kops to keep its state in S3. I'm in the process of setting up my CI server now, and I want to run my kubectl commands from that box. How do I go about importing the existing state to that server?

Answer 1: To run kubectl commands, you will need the cluster's apiServer URL and related credentials for authentication. Those data are by convention stored in ~/ …
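Since the S3 bucket is the source of truth, kops can regenerate a kubeconfig on any machine that can reach it. A sketch of the steps on the CI box (bucket and cluster names are placeholders):

```shell
# Point kops at the same state store used when the cluster was created.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Regenerate the kubeconfig entry (endpoint + credentials) for the
# cluster; this writes into ~/.kube/config by default.
kops export kubecfg test.dev.example.com

# Verify that kubectl can now talk to the cluster.
kubectl get nodes
```

The CI box also needs AWS credentials with read access to the state bucket, via its instance profile or the usual AWS environment variables.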

How to add a node to my kops cluster? (the node here is an external instance)

Submitted by a 夏天 on 2019-12-19 05:44:56
Question: I have created a Kubernetes cluster on AWS by following the instructions below; all my master and worker nodes run Ubuntu. https://jee-appy.blogspot.in/2017/10/setup-kubernetes-cluster-kops-aws.html I know how to increase or decrease the number of nodes in my cluster using cluster updates, where kops spins up a new node for us. However, I was wondering: is it possible to attach an external AWS instance (e.g. an instance with the same OS, Ubuntu) to my existing kops cluster?

“Failed create pod sandbox” pod error in AWS Kubernetes cluster

Submitted by 隐身守侯 on 2019-12-13 16:35:04
Question: Summary of the issue: we have observed on several occasions that our cluster runs into a problem where one or more pods on one or more nodes do not start (the container or containers within the pod are not starting). The pods show a "Failed create pod sandbox" error. Restarting docker or the kubelet on the affected nodes does not fix the problem. Terminating and recreating the affected EC2 instances does not solve the issue either. If a pod (both ones that failed to start and "healthy" ones) is …
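A typical triage path for sandbox-creation failures, sketched below (pod and node names are placeholders; the docker unit name assumes the Docker runtime mentioned in the question): the kubelet and runtime logs on the affected node usually name the real cause, commonly CNI plugin failures or, on AWS, exhausted ENI/IP capacity per instance type.

```shell
# Event trail for the failing pod, including the sandbox error detail.
kubectl describe pod <POD_NAME>

# Node conditions and allocatable resources for the affected node.
kubectl describe node <NODE_NAME>

# On the affected node itself: kubelet and container runtime logs.
journalctl -u kubelet --since "1 hour ago"
journalctl -u docker  --since "1 hour ago"
```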

Kubernetes 1.8.10 kube-apiserver priorityclasses error

Submitted by 独自空忆成欢 on 2019-12-13 03:25:51
Question: New cluster, 1.8.10, spun up with kops. K8s 1.8 has a new feature, Pod Priority and Preemption. More information: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#how-to-use-priority-and-preemption kube-apiserver is logging errors:

    I0321 16:27:50.922589 7 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (140.067µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
    I0321 16:27:51.257756 7 wrap.go …
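Reading the log line: the apiserver is polling an alpha API group (admissionregistration.k8s.io/v1alpha1) that is not enabled, hence the 404s. One way to make the polled groups available, sketched as a fragment of the output of `kops edit cluster` (whether you actually want alpha features enabled in this cluster is a separate decision; this is an assumption about the fix, not a confirmed one):

```yaml
spec:
  kubeAPIServer:
    runtimeConfig:
      admissionregistration.k8s.io/v1alpha1: "true"
      scheduling.k8s.io/v1alpha1: "true"
```

After editing, `kops update cluster --yes` followed by a rolling update applies the new apiserver flags.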

Securing connections from ingress to services in Kubernetes with TLS

Submitted by 佐手、 on 2019-12-12 04:03:08
Question: I am working on securing my Kubernetes cluster with a TLS connection configured in the ingress rule, which essentially terminates the SSL connection at the load balancer. So far so good. A question came up about whether it would make sense to also secure the connections from the load balancer to each of the services running in the Kubernetes cluster. My understanding of how Kubernetes works is that services can go up and come down dynamically, with no guarantee that the private IPs remain …
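On the IP-stability worry: pods come and go, but a Service's in-cluster DNS name (my-app.my-namespace.svc) is stable, so backend certificates can use that name as a SAN regardless of pod churn. A sketch of one way to keep the internal hop encrypted, assuming the nginx ingress controller (the backend-protocol annotation is its syntax; other controllers differ) and a backend that itself serves HTTPS:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Re-encrypt from the ingress controller to the service, so the
    # hop inside the cluster is also TLS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts: [my-sub.example.com]
      secretName: my-app-tls      # cert/key for the external hostname
  rules:
    - host: my-sub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 443     # backend service's HTTPS port
```

An alternative worth mentioning: a service mesh (e.g. Istio) can provide mutual TLS between services transparently, without each application managing its own certificates.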