coreos

Using rsync on Windows with Vagrant running a CoreOS VM

天涯浪子 submitted on 2020-01-02 04:36:25
Question: I am using a Windows 8.1 Pro PC running Vagrant and Cygwin's rsync. I am configuring the synced folder as follows:

config.vm.synced_folder "../sharedFolder", "/vagrant_data", type: "rsync"

When I execute vagrant up I get the following error:

C:\dev\vagrantBoxes\coreOS>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'yungsang/coreos' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network
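
A Vagrantfile sketch of a commonly suggested setup, not taken from the original post: keep the rsync synced folder but run vagrant up from a Cygwin shell so Cygwin's rsync.exe and ssh.exe are on PATH. The exclude list is an assumption.

Vagrant.configure("2") do |config|
  config.vm.box = "yungsang/coreos"
  # type "rsync" needs an rsync binary on the Windows host (Cygwin's works)
  config.vm.synced_folder "../sharedFolder", "/vagrant_data",
    type: "rsync",
    rsync__exclude: [".git/", ".vagrant/"]   # skip noisy metadata directories
end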

How to retry an image pull in a Kubernetes pod?

ε祈祈猫儿з submitted on 2019-12-31 07:58:07
Question: I am new to Kubernetes and have an issue with my pods. When I run the command kubectl get pods, the result is:

NAME                   READY   STATUS             RESTARTS   AGE
mysql-apim-db-1viwg    1/1     Running            1          20h
mysql-govdb-qioee      1/1     Running            1          20h
mysql-userdb-l8q8c     1/1     Running            0          20h
wso2am-default-813fy   0/1     ImagePullBackOff   0          20h

Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions?

Answer 1: Usually in the case of "ImagePullBackOff" the pull is retried after a few seconds/minutes. In case you want to try again
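
A minimal sketch of the usual manual retry, assuming the pod is managed by a replication controller (as the generated-suffix names above suggest): delete the failing pod and let the controller recreate it, which starts a fresh image pull.

# The controller notices the missing pod and recreates it,
# restarting the image pull without the accumulated backoff:
kubectl delete pod wso2am-default-813fy

# Watch the replacement pod being scheduled and pulled:
kubectl get pods -w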

Pulling images from a private registry in Kubernetes

烂漫一生 submitted on 2019-12-28 05:07:05
Question: I have built a 4-node Kubernetes cluster running multi-container pods, all on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically. I have tried running docker login on each server and putting the .dockercfg file in /root and /core. I have also done the above with .docker/config.json. I have added a secret to the kube master and
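
A sketch of the cluster-level alternative to per-node docker login: store the registry credentials in a Kubernetes secret and reference it from each pod, so the kubelet authenticates the pull on whichever node the pod lands. The registry address and credentials below are placeholders.

# Create the credential secret once:
kubectl create secret docker-registry myregistrykey \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=me@example.com

# Then reference it from the pod spec:
#   spec:
#     imagePullSecrets:
#     - name: myregistrykey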

calico-policy-controller requests etcd2 certificates of a different CoreOS server

时光毁灭记忆、已成空白 submitted on 2019-12-24 08:30:10
Question: I have two CoreOS stable servers; each one runs an etcd2 server and they share the same discovery URL. Each generated a different certificate for its etcd2 daemon. I installed the Kubernetes controller on one (coreos-2.tux-in.com) and a worker on coreos-3.tux-in.com. Calico is configured to use the etcd2 certificates for coreos-2.tux-in.com, but it seems that Kubernetes started the calico-policy-controller on coreos-3.tux-in.com, so it can't find the etcd2 certificates. coreos-2
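
One way around this class of scheduling problem, sketched here rather than taken from the question: copy the same etcd client certificate/key pair to an identical path on every node and point the policy controller at it through environment variables, so the pod works wherever the scheduler places it. All paths and the endpoint below are assumptions.

# Fragment of a calico-policy-controller container spec:
env:
- name: ETCD_ENDPOINTS
  value: "https://coreos-2.tux-in.com:2379"
- name: ETCD_CA_CERT_FILE
  value: "/etc/ssl/etcd/ca.pem"
- name: ETCD_CERT_FILE
  value: "/etc/ssl/etcd/client.pem"
- name: ETCD_KEY_FILE
  value: "/etc/ssl/etcd/client-key.pem"
# plus a hostPath volume mounting /etc/ssl/etcd, which must exist
# with the same contents on every schedulable node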

journalctl: add the _SYSTEMD_UNIT field to the log printout

倖福魔咒の submitted on 2019-12-23 19:20:15
Question: I am using the command:

/usr/bin/journalctl -o short -f | ncat {some-ip} {some port}

to forward journal output to a remote log-tracking app. The problem is that the systemd unit / service name is missing from the printout, making it hard to tell which service produced which log line. For example, this is an nginx line:

Jun 25 07:51:09 localhost bash[497] : 10.23.132.98 - - [25/Jun/2014:07:51:09 +0000] "GET /page.html HTTP/1.1" 200 321 "https://{ip}" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537
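
One common fix, sketched under the assumption that jq is available on the host: switch journalctl to JSON output, which carries _SYSTEMD_UNIT with every record, and rebuild each line before shipping it.

# -o json emits one JSON object per record, including _SYSTEMD_UNIT;
# jq prefixes each message with its unit ("-" for records without one).
journalctl -f -o json \
  | jq --unbuffered -r '"\(._SYSTEMD_UNIT // "-"): \(.MESSAGE)"' \
  | ncat {some-ip} {some port}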

.pgpass for PostgreSQL replication in a Dockerized environment

家住魔仙堡 submitted on 2019-12-23 14:54:14
Question: I am trying to set up a PostgreSQL slave using Docker and a bash script (I use CoreOS). I have not found any way to supply a valid .pgpass. I know I could create a PGPASSWORD environment variable, but I do not wish to do so for security reasons (as stated at http://www.postgresql.org/docs/current/static/libpq-envars.html), and because this password should be accessible every time the recovery.conf file is used (for the primary_conninfo variable).

Dockerfile

# ...
# apt-get installs and other
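
A minimal sketch of one approach, not from the original post: have the container entrypoint write .pgpass at startup from an injected value, lock the permissions down, and drop the variable before starting PostgreSQL. The user, primary host, paths, and the use of gosu (as in the official postgres image) are assumptions.

#!/bin/bash
# entrypoint sketch: the password never lives in the image or in the
# long-running server's environment, only in the 0600 .pgpass file.
set -e
PGPASS=/var/lib/postgresql/.pgpass
echo "primary.example.com:5432:replication:replicator:${REPL_PASSWORD}" > "$PGPASS"
chmod 0600 "$PGPASS"        # libpq ignores a .pgpass with looser permissions
chown postgres:postgres "$PGPASS"
unset REPL_PASSWORD         # keep it out of the postgres process environment
exec gosu postgres postgres # hand off to the real server process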

How to write a Kubernetes pod configuration that starts two containers

跟風遠走 submitted on 2019-12-23 12:26:20
Question: I would like to create a Kubernetes pod that contains two containers with different images, so I can start both containers together. Currently I have tried the following configuration:

{
  "id": "podId",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "podId",
      "containers": [
        { "name": "type1", "image": "local/image" },
        { "name": "type2", "image": "local/secondary" }
      ]
    }
  },
  "labels": { "name": "imageTest" }
}

However, when I execute kubecfg -c app.json create /pods I get the
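
For reference, a sketch of the same two-container pod in the long-stable v1 API (names and images carried over from the question; kubecfg has since been replaced by kubectl):

apiVersion: v1
kind: Pod
metadata:
  name: podid              # pod names must be lowercase DNS labels
  labels:
    name: imageTest
spec:
  containers:
  - name: type1
    image: local/image
  - name: type2
    image: local/secondary

# created with: kubectl create -f pod.yaml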

CoreOS Vagrant VirtualBox SSH password

女生的网名这么多〃 submitted on 2019-12-21 21:34:38
Question: I'm trying to SSH into a CoreOS VirtualBox VM using PuTTY. I know the username appears in the output when I run vagrant up, but I don't know what the password is. I've also tried overriding it with config.ssh.password settings in the Vagrantfile, but when I run vagrant up again it comes up with an authentication failure warning and retries endlessly. How do I use PuTTY to log into this box instance?

Answer 1: By default there is no password set for the core user, only key-based authentication. If you'd like to
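
A sketch of the usual PuTTY workflow with a stock Vagrant install: locate the machine's private key, convert it to PuTTY's format, and authenticate with the key instead of a password.

# 1. Ask Vagrant where the key and the forwarded SSH port are:
vagrant ssh-config
#      HostName 127.0.0.1
#      Port 2222
#      IdentityFile .../.vagrant/machines/default/virtualbox/private_key

# 2. Load that IdentityFile in PuTTYgen and "Save private key" as core.ppk.
# 3. In PuTTY: host core@127.0.0.1, port 2222,
#    Connection > SSH > Auth > Private key file: core.ppk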

Kubernetes dashboard stuck in Pending with message: no endpoints available for service “kubernetes-dashboard”

北慕城南 submitted on 2019-12-21 20:48:09
Question: Hey all, I need some help with getting the dashboard to work. My dashboard pod has status "Pending", and if I make a curl call to http://127.0.0.1:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard I get this result:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
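
A Pending pod was never scheduled, so the service has nothing behind it, which is exactly what the 503 says. A sketch of the usual first diagnostics (the label selector assumes the standard dashboard manifest):

# The Events section normally names the scheduling problem
# (no ready nodes, insufficient resources, unsatisfiable selector, ...):
kubectl describe pods -n kube-system -l k8s-app=kubernetes-dashboard

# Confirm the service really has no endpoints behind it:
kubectl get endpoints kubernetes-dashboard -n kube-system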

Kubernetes simple authentication

好久不见. submitted on 2019-12-21 20:39:03
Question: I am using Kubernetes on a CoreOS cluster hosted on DigitalOcean, and using this repo to set it up. I start the apiserver with the following line:

/opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \
  --insecure-bind-address=0.0.0.0 --insecure-port=8080 \
  --secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \
  --logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \
  --service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0

The problem is that it accepts
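
The insecure port bypasses authentication entirely by design, so binding it to 0.0.0.0 exposes an unauthenticated API to the network. A sketch of the usual hardening for apiservers of this vintage (the basic-auth file path and its contents are assumptions):

# Bind the unauthenticated port to localhost only and enable basic auth
# on the secure port; basic_auth.csv holds "password,user,uid" lines.
/opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \
  --insecure-bind-address=127.0.0.1 --insecure-port=8080 \
  --secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \
  --basic-auth-file=/etc/kubernetes/basic_auth.csv \
  --logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \
  --service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0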