kops

kubernetes: CA file when deploying via kops

Submitted by 限于喜欢 on 2019-12-10 15:40:11
Question: I have created a cluster on AWS using kops. However, I am unable to find the file used as the certificate authority for spawning off client certs. Does kops create such a thing by default? If so, what is the recommended process for creating client certs? The kops documentation is not very clear about this. Answer 1: I've done it like this in the past: Download the kops-generated CA certificate and signing key from S3: s3://<BUCKET_NAME>/<CLUSTER_NAME>/pki/private/ca/*.key s3://<BUCKET_NAME>/
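
The answer is truncated above, but given the downloaded CA material, a client certificate can be issued with openssl. A minimal sketch; the file names (ca.crt, ca.key) and the subject values are assumptions for illustration, not from the original answer:

    # Generate a key and CSR for the client, then sign it with the kops CA.
    # CN becomes the Kubernetes username, O the group (here: cluster admins).
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -out client.csr \
        -subj "/CN=myuser/O=system:masters"
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out client.crt -days 365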

"There was a problem authenticating with your cluster" when integrating GitLab with a k8s cluster

Submitted by 隐身守侯 on 2019-12-10 11:39:04
Question: I created a k8s cluster on AWS using kops. I entered the Kubernetes cluster name test.fuzes.io and the API URL https://api.test.fuzes.io/api/v1, filled the CA Certificate field with the result of kubectl get secret {secret_name} -o jsonpath="{['data']['ca\.crt']}" | base64 --decode, and finally filled the Service Token field with the result of kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}'). But when I save the changes, I get the message There was a
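
For context, the gitlab-admin token referenced above is normally created ahead of time as a dedicated service account bound to cluster-admin. A minimal sketch following that convention; the account name matches the one grepped for in the question:

    # Create the service account and give it cluster-admin rights;
    # its token is what goes into GitLab's Service Token field.
    kubectl create serviceaccount gitlab-admin -n kube-system
    kubectl create clusterrolebinding gitlab-admin \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:gitlab-admin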

(Kops) Kubernetes Service mapped to DNS names in AWS Route53?

Submitted by 只愿长相守 on 2019-12-08 17:48:25
I am new to Kops and somewhat new to Kubernetes as well. I managed to create a cluster with Kops and run a deployment and a service on it. Everything went well: an ELB was created for me and I could access the application via this ELB endpoint. My question is: how can I map my subdomain (e.g. my-sub.example.com) to the generated ELB endpoint? I believe this should somehow be done automatically by Kubernetes, and I should not hardcode the ELB endpoint inside my code. I tried something involving the annotation -> DomainName, but it did not work (see the Kubernetes YAML file below). apiVersion: v1
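
The excerpt cuts off at the start of the YAML. One widely used mechanism for this (a different approach from the DomainName annotation the asker tried) is the external-dns controller, which watches Services and creates matching Route53 records. A sketch, assuming external-dns is deployed in the cluster; the service name and selector are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        # external-dns reads this hostname and creates the Route53 record
        # pointing at the ELB that Kubernetes provisions for the Service.
        external-dns.alpha.kubernetes.io/hostname: my-sub.example.com
    spec:
      type: LoadBalancer
      ports:
        - port: 80
      selector:
        app: my-app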

Kubernetes kops change basic auth password

Submitted by 耗尽温柔 on 2019-12-06 16:41:28
I have successfully configured a Kubernetes cluster using Kops. However, I cannot find where to change the auto-generated admin password. How can I do this? At the moment there is no easy way to do that, since there is no way via the Kops API to create a secret of type "Secret" (quite confusing, I know). The only workaround is to change the credentials, in this case your password, directly in your S3 configuration as explained here: https://github.com/kubernetes/kops/blob/master/docs/secrets.md#workaround-for-changing-secrets-with-type-secret and force a rolling update of your cluster by
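
The sentence is cut off, but the rolling update it refers to is presumably forced with the standard kops command. A sketch; the bucket and cluster names are placeholders:

    # After editing the secret in the S3 state store per the linked doc,
    # force a rolling update so the new credentials are picked up.
    export KOPS_STATE_STORE=s3://<BUCKET_NAME>
    kops rolling-update cluster --name <CLUSTER_NAME> --force --yes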

Need help on a volume mount issue with Kubernetes

Submitted by 落花浮王杯 on 2019-12-05 16:49:16
I have an RBAC-enabled Kubernetes cluster created using kops version 1.8.0-beta.1. I am trying to run an nginx pod which should attach a pre-created EBS volume and then start, but I am getting a "not authorized" error even though I am an admin user. Any help would be highly appreciated. kubectl version: Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-09T07:27:47Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"8",
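
The pod manifest itself is not shown in the excerpt. For reference, a minimal pod spec that mounts a pre-created EBS volume via the in-tree awsElasticBlockStore plugin looks roughly like this; the volume ID is a placeholder, and the volume must be in the same availability zone as the node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-ebs
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          awsElasticBlockStore:
            volumeID: vol-0123456789abcdef0  # replace with the real EBS volume ID
            fsType: ext4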

aws s3api create-bucket --bucket throws an exception

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-04 02:36:48
I am trying to create an S3 bucket using aws s3api create-bucket --bucket kubernetes-aws-wthamira-io. It gives this error: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to. I set the region to eu-west-1 using aws configure (Default region name [eu-west-1]:) but it gives the same error. How do I solve this? I use the osx terminal to connect to AWS. Try this: aws s3api create-bucket --bucket kubernetes-aws-wthamira-io --create-bucket-configuration
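
The answer is truncated at the flag. For regions other than us-east-1, the S3 API requires an explicit location constraint, so the full command presumably reads:

    aws s3api create-bucket \
        --bucket kubernetes-aws-wthamira-io \
        --region eu-west-1 \
        --create-bucket-configuration LocationConstraint=eu-west-1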

kops / kubectl - how do I import state created on another server?

Submitted by *爱你&永不变心* on 2019-12-03 02:51:30
I set up my Kubernetes cluster using kops, and I did so from my local machine, so my .kube directory is stored there, but kops keeps its state in S3. I'm in the process of setting up my CI server now, and I want to run my kubectl commands from that box. How do I go about importing the existing state to that server? To run kubectl commands you will need the cluster's apiServer URL and related credentials for authentication. By convention that data is stored in the ~/.kube/config file; you can also view it via the kubectl config view command. In order to run kubectl on your CI
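
The answer breaks off here. One likely continuation, sketched under the assumption that the CI box has kops installed and AWS credentials that can read the state bucket:

    # Point kops at the same S3 state store used when the cluster was created,
    # then regenerate a kubeconfig locally and verify access.
    export KOPS_STATE_STORE=s3://<BUCKET_NAME>
    kops export kubecfg <CLUSTER_NAME>
    kubectl get nodes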

How to restore a Kubernetes cluster using kops?

Submitted by 浪子不回头ぞ on 2019-12-02 08:17:10
Question: How do I restore a Kubernetes cluster using kops? I have the Kubernetes state files in my S3 bucket. Is there a way to restore the cluster using kops? Answer 1: As you mention, kops stores the state of the cluster in an S3 bucket. If you run kops create cluster with the same state file, it will recreate the cluster as it was before, with the same instance groups and master configuration. This assumes the cluster has been deleted; if not, you'll need to use the kops update cluster command, which
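
A sketch of the flow the answer describes, assuming the S3 bucket still holds the saved cluster spec; the names are placeholders:

    # Confirm the stored cluster spec exists, then apply it so kops
    # (re)creates the corresponding cloud resources.
    export KOPS_STATE_STORE=s3://<BUCKET_NAME>
    kops get clusters
    kops update cluster --name <CLUSTER_NAME> --yes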