openshift-3

Unable to redeploy the certificates post-expiry in openshift 3.11

拥有回忆 · Submitted on 2021-01-27 13:20:46

Question: I have deployed OpenShift (OKD) 3.11 using https://github.com/openshift/openshift-ansible/tree/release-3.11. I want to produce a scenario where the certificates expire, so that I can test how certificate renewal is done. To make the certificates expire quickly, I set the following variables in the inventory to 1 day: openshift_hosted_registry_cert_expire_days=1, openshift_ca_cert_expire_days=1, openshift_master_cert_expire_days=1, etcd_ca_default_days=1. As expected, after 1 day the oc commands…
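A minimal sketch of how the inventory might look once the certificates have been restored to longer lifetimes, before re-running the redeploy playbooks that the openshift-ansible repository ships (e.g. playbooks/redeploy-certificates.yml in the release-3.11 branch). The expiry values below are illustrative assumptions, not recommendations:

```ini
# Hypothetical inventory fragment: restore sane certificate lifetimes
# before redeploying, after the 1-day test expiry has triggered.
[OSEv3:vars]
openshift_ca_cert_expire_days=1825
openshift_master_cert_expire_days=730
openshift_hosted_registry_cert_expire_days=730
etcd_ca_default_days=1825
```

With the inventory updated, the certificate redeploy playbook would then be run against it with ansible-playbook.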

Openshift retrieve branch name in jenkinsfile

天大地大妈咪最大 · Submitted on 2020-06-28 14:24:07

Question: I have configured a webhook on Bitbucket Server that points to OpenShift. I want to get the Git repo URL, branch, etc. from the webhook payload in my inline Jenkinsfile, but I don't know how to retrieve them (the webhook does trigger the build, though). Is it possible? Here is my BuildConfig: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: application: spring-demo template: openjdk18-web-basic-s2i name: spring-demo spec: output: to: kind: ImageStreamTag name: 'spring--demo:latest' runPolicy…
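One common workaround, sketched below, is to pin the branch in the BuildConfig's source section rather than extracting it from the webhook payload: a generic webhook trigger starts the build, and the build always checks out the ref configured here. The repository URL, ref, and secret name are assumptions for illustration:

```yaml
# Hypothetical BuildConfig fragment: the branch is fixed in spec.source,
# and a webhook trigger secret lets Bitbucket start the build.
spec:
  source:
    type: Git
    git:
      uri: 'https://bitbucket.example.com/scm/demo/spring-demo.git'  # assumed URL
      ref: 'develop'                                                 # branch to build
  triggers:
    - type: Generic
      generic:
        secret: my-webhook-secret                                    # assumed secret name
```

A separate BuildConfig per branch is a common pattern when multiple branches must build from the same repository.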

Openshift Route is not load balancing from Service pods

北城余情 · Submitted on 2020-05-27 12:21:03

Question: I have tried this before on OpenShift Origin 3.9 and Online. I deployed a simple hello-world PHP app on OpenShift, with a Service and a Route. When I call the route, I get the expected output with "Hello world" and the pod IP; let's call this pod IP 1.1.1.1. I then deployed the same app with a small text change, under the same label and the same Service; let's call this pod IP 2.2.2.2. I can see both pods running behind the single Service, but when I call the route, it always shows pod IP 1.1.1.1. My route…
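The sticky behavior described here is typically caused by the router's session-affinity cookie, which keeps one client pinned to one pod. A sketch of a Route that disables the cookie and requests round-robin balancing via the HAProxy router annotations (the route and service names below are assumptions):

```yaml
# Sketch: disable sticky sessions so the router round-robins across endpoints.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-world            # assumed route name
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: 'true'
spec:
  to:
    kind: Service
    name: hello-world          # assumed service name
```

Testing with curl rather than a browser also helps, since browsers reuse keep-alive connections that land on the same pod.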

how to get kaa deployed on openshift

北城余情 · Submitted on 2020-01-06 07:00:52

Question: The Kaa IoT cloud platform is prebuilt to run on Amazon AWS or in a VirtualBox sandbox. Is it immediately deployable to OpenShift, especially the free Starter plan? If not, what would it take to get it to work? I have looked at Python on OpenShift, which uses S2I to dockerize a Software Collections version of Python, e.g. 2.7. I'm wondering how these projects or technologies would work together to make Kaa run on multiple platforms, or to make more versions/flavors/variants…

How to open an internal port in Openshift 3 Online?

假如想象 · Submitted on 2019-12-13 02:47:23

Question: Say I want to open two ports: one for the public at 8080, and another that processes some public requests forwarded on from the 8080 port, like so: const http = require('http'); const publicServer = http.createServer(...).listen(8080); const privateServer = http.createServer(...).listen(9999); publicServer.on('connect', (req, cltSocket, head) => { ... if (...) { // let srvSocket = net.connect('9999', 'localhost', () => { let srvSocket = net.connect('9999', '127.0.0.1', () => { cltSocket…
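On OpenShift, only the routed port needs external exposure; a second port can still be reached inside the cluster through the Service. A sketch of such a Service, where the app and port names are assumptions (the Route would then target only the "public" port, while 9999 remains reachable cluster-internally at <service>.<namespace>.svc:9999):

```yaml
# Sketch: expose both ports on the Service, but route only 8080 publicly.
apiVersion: v1
kind: Service
metadata:
  name: myapp                  # assumed service name
spec:
  selector:
    app: myapp                 # assumed pod label
  ports:
    - name: public
      port: 8080
      targetPort: 8080
    - name: private
      port: 9999
      targetPort: 9999
```

If both ports must be reachable from the same process inside the pod, connecting to 127.0.0.1:9999 within the container also works without any Service change, since both servers share the pod's network namespace.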

OKD 3.11 Installation failed “Control plane pods didn't come up” “network plugin is not ready: cni config uninitialized”

拟墨画扇 · Submitted on 2019-12-11 09:02:42

Question: OKD 3.11 installation failed with "Control plane pods didn't come up". Environment: CentOS Linux release 7.6.1810 (Core), Ansible 2.6.16, OKD 3.11, Docker version 1.13.1 (build b2f74b2/1.13.1). Ansible inventory file: # Create an OSEv3 group that contains the masters, nodes, and etcd groups [OSEv3:children] masters nodes etcd # host group for masters [masters] SBSTJVMLX605 openshift_ip=192.168.62.95 # host group for etcd [etcd] SBSTJVMLX605 openshift_ip=192.168.62.95 # host group…
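The "network plugin is not ready: cni config uninitialized" symptom often traces back to nodes that lack a node-group assignment, so the SDN pods never get scheduled on them. A hedged inventory sketch of the relevant entries, reusing the host from the excerpt (the node group and SDN plugin chosen here are common defaults, not values confirmed by the question):

```ini
# Hypothetical fix: give every host in [nodes] an openshift_node_group_name
# so the SDN/CNI daemonset can land on it.
[nodes]
SBSTJVMLX605 openshift_ip=192.168.62.95 openshift_node_group_name='node-config-master-infra'

[OSEv3:vars]
openshift_deployment_type=origin
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
```

After correcting the inventory, re-running prerequisites.yml and deploy_cluster.yml is the usual next step.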

how to run celery with django on openshift 3

只谈情不闲聊 · Submitted on 2019-12-02 18:24:20

Question: What is the easiest way to launch a Celery beat and worker process in my Django pod? I'm migrating my OpenShift v2 Django app to OpenShift v3, on the Pro subscription. I'm really a noob with OpenShift v3, Docker, containers, and Kubernetes. I used this tutorial, https://blog.openshift.com/migrating-django-applications-openshift-3/, to migrate my app (which works pretty well), but I'm now struggling with how to start Celery. On OpenShift 2 I just used an action hook post_start: source…
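Since v3 has no action hooks, a common approach is to run the worker as its own DeploymentConfig that reuses the Django image but overrides the container command. A minimal sketch, in which the image reference, the Celery app module ("mysite"), and all names are assumptions; the -B flag runs an embedded beat scheduler inside the worker, which suits a single-replica setup:

```yaml
# Sketch: a separate DeploymentConfig running the Celery worker (+ beat via -B),
# built from the same image as the web deployment.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: celery-worker
spec:
  replicas: 1
  selector:
    app: celery-worker
  template:
    metadata:
      labels:
        app: celery-worker
    spec:
      containers:
        - name: worker
          image: 'django-app:latest'   # assumed: same ImageStream as the web pod
          command: ['celery', '-A', 'mysite', 'worker', '-B', '--loglevel=info']
```

Keeping the worker in its own deployment also lets it scale and restart independently of the web pods; with more than one worker replica, beat should be split into its own single-replica deployment to avoid duplicate scheduled tasks.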