aws-eks

Initializing a MySQL database deployed in an AWS EKS cluster

Submitted by 吃可爱长大的小学妹 on 2019-12-20 07:36:53
Question: I have a pod in my AWS EKS cluster that runs MySQL 5.7. I have an SQL file that initializes and populates the data for it. With plain docker-compose I would use mount points for this in my docker-compose file:

volumes:
  - ./<path-to-my-config-directory>:/etc/mysql/conf.d
  - ./<path-to-my-persistence-directory>:/var/lib/mysql
  - ./<path-to-my-init.sql>:/docker-entrypoint-initdb.d/init.sql

In EKS, I can create a storage class in which to save MySQL data. How can I use my init.sql file (about 8GB) …
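For reference, a minimal sketch of how those three mounts translate to a Kubernetes pod spec. All names here (mysql-data-pvc, mysql-init-pvc) are assumptions, and note that an 8GB file is far too large for a ConfigMap (capped at roughly 1MiB), so the init script itself has to live on a volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql               # persistence, as in docker-compose
            - name: init
              mountPath: /docker-entrypoint-initdb.d  # the image entrypoint runs any *.sql found here on first init
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data-pvc   # hypothetical PVC for the data directory
        - name: init
          persistentVolumeClaim:
            claimName: mysql-init-pvc   # hypothetical PVC pre-loaded with init.sql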

k8s - IP and DNS for postgres with service

Submitted by 霸气de小男生 on 2019-12-11 17:57:50
Question: I have created a stateful service backed by a Postgres deployment on k8s. The setup is 3 public subnets|AZs and 3 private subnets|AZs. The Postgres deployment creates 1 replica, and the Service is defined with clusterIP: None. But every time I delete and recreate the Service, its IP changes, and I have been reading about DNS resolution. I want to access the DB from a Java client deployed in another pod on the same network; here I am unable to get a static IP. Can I create a service with clusterIP: …
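With a headless Service (clusterIP: None), the stable handle is not an IP at all but the Service's DNS name, which survives pod and Service re-creation. A minimal sketch, assuming the Postgres pods carry the label app: postgres in the default namespace:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None        # headless: DNS resolves straight to the pod IP(s)
  selector:
    app: postgres
  ports:
    - port: 5432

The Java client would then connect to postgres.default.svc.cluster.local:5432 (database name and credentials being whatever your deployment uses) instead of hard-coding any IP.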

Kubernetes Load balancer without Label Selector

Submitted by 孤者浪人 on 2019-12-11 17:26:45
Question: Trying to create a Load Balancer resource with Kubernetes (for an EKS cluster). It works normally with a label selector, but we want to have only one LB per cluster and then let an ingress direct traffic to services. Here is what I currently have:

kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  ports:
    - port: 80
  type: LoadBalancer

This creates an LB and gives it an internal DNS name, but instances never get …
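Kubernetes does support Services without selectors; endpoints are then not managed for you, and you pair the Service with an Endpoints object of the same name that you maintain yourself. A sketch under assumed names and addresses:

kind: Service
apiVersion: v1
metadata:
  name: cluster-lb
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
---
kind: Endpoints
apiVersion: v1
metadata:
  name: cluster-lb        # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.1.10     # hypothetical backend IP
    ports:
      - port: 80

In practice, though, the "one LB per cluster" goal is usually met by giving the selector (pointing at an ingress controller's pods) back to the LoadBalancer Service and letting Ingress resources do the per-service routing, rather than maintaining Endpoints by hand.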

Allow kubernetes application to access other AWS resources?

Submitted by 主宰稳场 on 2019-12-11 17:14:06
Question: I want to deploy an application to AWS EKS using Kubernetes. My application needs to access SQS and S3, and I am not sure how to grant the Kubernetes application that access. I looked into RBAC, but I gather RBAC only controls access to manage the cluster, namespaces, or pods. I am currently passing the access key and secret key as Secrets into environment variables to grant the permissions, but I don't think this is a good idea. Is there another way, like creating an IAM role …
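Passing long-lived keys through Secrets works but is indeed discouraged; on EKS the usual route (available as of late 2019) is IAM Roles for Service Accounts (IRSA), where pods assume an IAM role via the cluster's OIDC provider. A sketch with a placeholder account ID and role name; the role's trust policy must reference the cluster's OIDC provider and the role carries the SQS/S3 policies:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/app-sqs-s3-role  # hypothetical role

Pods then set spec.serviceAccountName: app-sa, and the AWS SDK picks up the injected web-identity credentials automatically, so no access keys are stored in the cluster at all.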

Amazon Kubernetes AWS-EKS is not getting created properly or not synched with kubectl

Submitted by 走远了吗. on 2019-12-10 17:19:21
Question: Following this document step by step: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?shortFooter=true I created an EKS cluster using the AWS CLI instead of the UI, and got the following output:

proxy-kube$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   18h

But when I follow the same getting-started guide to associate worker nodes with the cluster, I get:

proxy-kube$ kubectl get nodes
No resources found.

I can see 3 EC2 …
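"No resources found" while the EC2 instances are running usually means the nodes cannot authenticate to the cluster; the step in that guide which handles this is applying the aws-auth ConfigMap with the node group's instance role. A sketch with a placeholder role ARN:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-instance-role  # your node instance role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

After kubectl apply -f aws-auth-cm.yaml, the nodes should appear within a minute or two under kubectl get nodes --watch.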

Kubernetes pod distribution amongst nodes with preferred mode

Submitted by ▼魔方 西西 on 2019-12-10 11:09:08
Question: I am working on migrating my applications to Kubernetes, using EKS. I want to distribute my pods across different nodes to avoid having a single point of failure. I read about pod affinity and anti-affinity and the required and preferred modes, and this answer gives a very nice way to accomplish this. But my doubt is: say I have 3 nodes, of which 2 are already full (resource-wise). If I use requiredDuringSchedulingIgnoredDuringExecution, will k8s spin up new nodes and distribute the …
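Note that requiredDuringSchedulingIgnoredDuringExecution by itself never creates nodes: the scheduler only filters the nodes that exist, and new capacity appears only if something like the cluster autoscaler reacts to the resulting Pending pods. For best-effort spreading, the preferred variant is the one to reach for; the scheduler scores nodes to spread replicas but still places a pod on an occupied node when nothing else fits. A sketch of the pod-template fragment, with an assumed app: my-app label:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname   # spread across distinct nodes
            labelSelector:
              matchLabels:
                app: my-app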

Terraform local-exec provisioner on an EC2 instance fails with “Permission denied”

Submitted by 那年仲夏 on 2019-12-04 05:47:39
Question: Trying to provision an EKS cluster with Terraform. terraform apply fails with:

module.eks_node.null_resource.export_rendered_template: Provisioning with 'local-exec'...
module.eks_node.null_resource.export_rendered_template (local-exec): Executing: ["/bin/sh" "-c" "cat > /data_output.sh <<EOL\n#!/bin/bash -xe\n\nCA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki\nCA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt\nmkdir -p $CA_CERTIFICATE_DIRECTORY\necho \ …
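The command in that log writes to /data_output.sh, i.e. the root of the local filesystem, which fails with "Permission denied" for any non-root user running Terraform. A sketch of the presumed fix, redirecting the output to a writable location; the resource name is taken from the log, and the template data source name is an assumption:

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    # Write under the module directory instead of the filesystem root
    command = "cat > ${path.module}/data_output.sh <<EOL\n${data.template_file.user_data.rendered}\nEOL"
  }
}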

Kubernetes pod pending when a new volume is attached (EKS)

Submitted by 醉酒当歌 on 2019-12-04 02:48:56
Let me describe my scenario:

TL;DR: When I create a deployment on Kubernetes with 1 attached volume, everything works perfectly. When I create the same deployment with a second volume attached (2 volumes in total), the pod gets stuck in "Pending" with these errors:

pod has unbound PersistentVolumeClaims (repeated 2 times)
0/2 nodes are available: 2 node(s) had no available volume zone.

I have already checked that the volumes are created in the correct availability zones.

Detailed description: I have a cluster set up using Amazon EKS, with 2 nodes. I have the following default storage class:

kind: …
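The standard remedy for "node(s) had no available volume zone" is to delay volume binding so that every EBS volume for a pod is provisioned in the zone the pod is actually scheduled into. A sketch of such a storage class (not the asker's own, which is cut off above):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-delayed       # assumed name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer   # bind PVCs only once a pod is scheduled

With the default Immediate mode, two PVCs can be bound to volumes in different AZs before the pod is scheduled, leaving no single node that can attach both.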

AWS VPC - k8s - load balancing

Submitted by 风流意气都作罢 on 2019-11-29 17:57:18
Sorry for the newbie question; I am new to the k8s world. The current way of deploying is to run the app on EC2; the new way I am trying is to deploy the containerized app into a VPC. In the old way, AWS routes traffic for aaa.bbb.com to the ELB at vpc-ip:443, which forwards it to the ASG on the private subnet at :443, and the app works fine. With k8s in the picture, what does the traffic flow look like? I am trying to figure out whether I could use multiple ports on the ELB, each with its own DNS name, and route traffic to a particular port on the worker nodes, i.e.:

xxx.yyy.com -> vpc-ip:443/  -> ec2:443/
aaa.bbb.com -> vpc-ip:9000/ …
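With Kubernetes, the per-hostname fan-out typically moves off the ELB and into the cluster: one Service of type LoadBalancer (one ELB) fronts an ingress controller, and an Ingress resource routes by host to the backend Services. A sketch with assumed names, using the Ingress API version current at the time these questions were asked:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: multi-host
spec:
  rules:
    - host: xxx.yyy.com
      http:
        paths:
          - backend:
              serviceName: app-one     # hypothetical Service for xxx.yyy.com
              servicePort: 443
    - host: aaa.bbb.com
      http:
        paths:
          - backend:
              serviceName: app-two     # hypothetical Service for aaa.bbb.com
              servicePort: 9000

The flow then becomes: DNS (both hostnames) -> ELB -> worker node -> ingress controller -> Service -> pod.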