autoscaling

How to scale a Slack bot to 1000s of teams

旧巷老猫 submitted on 2019-12-04 09:19:15
Question: To implement a Slack bot, I need to use Slack's Real Time Messaging API. It is a WebSocket-based API that allows you to receive events from Slack in real time and send messages as a user. More info: https://api.slack.com/rtm To create a bot for only one team, I need to open one WebSocket connection and listen on it for events. To make the bot available to another team, I need to open a new WebSocket connection. So: 1 team => 1 WebSocket connection, 2 teams => 2 WebSocket …
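A minimal sketch of one way to hold many RTM connections in a single process with asyncio; the third-party websockets library, the URL list and the handler are assumptions for illustration, not part of the question:

import asyncio
import websockets  # assumed dependency: pip install websockets

# Hypothetical per-team RTM WebSocket URLs, e.g. the url returned for each team's bot token.
TEAM_WS_URLS = ["wss://example.invalid/team-a", "wss://example.invalid/team-b"]

async def listen_to_team(ws_url: str) -> None:
    # One coroutine per team; a single event loop can keep thousands of mostly idle sockets open.
    async with websockets.connect(ws_url) as ws:
        async for raw_event in ws:
            print("event:", raw_event)  # dispatch to the bot's real logic here

async def main() -> None:
    await asyncio.gather(*(listen_to_team(url) for url in TEAM_WS_URLS))

asyncio.run(main())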

Y-axis autoscaling with x-range sliders in plotly

我的梦境 submitted on 2019-12-04 08:30:41
AFAIK, the y-axis can't be made to auto-scale when using x-range sliders. The y range is chosen with respect to the y values of the whole x range and does not change after zooming in. This is especially annoying with candlestick charts in volatile periods: when you zoom in using the x-range slider, you essentially get flat candlesticks, as their fluctuations only cover a very small part of the initial range. After doing some research, it seems that some progress has been made here: https://github.com/plotly/plotly.js/pull/2364 . Does anyone know if there is a working solution for plotly.py? Thanks for your time.
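For reference, a rough workaround sketch in plotly.py: pick the x-window yourself and set the y-axis range to fit only the data inside it. The data, window bounds and layout are made up, and this does not hook into the rangeslider's own zoom events:

import plotly.graph_objects as go

# Made-up OHLC data standing in for a real price series.
x = list(range(100))
open_ = [100 + 0.1 * i for i in x]
close = [o + 0.5 for o in open_]
high = [c + 0.5 for c in close]
low = [o - 0.5 for o in open_]

fig = go.Figure(go.Candlestick(x=x, open=open_, high=high, low=low, close=close))

# Hand-picked zoom window; recompute the y range from only the visible candles.
x0, x1 = 40, 60
y_min = min(v for xi, v in zip(x, low) if x0 <= xi <= x1)
y_max = max(v for xi, v in zip(x, high) if x0 <= xi <= x1)

fig.update_layout(
    xaxis=dict(range=[x0, x1], rangeslider=dict(visible=True)),
    yaxis=dict(range=[y_min, y_max]),
)
fig.show()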

Stopping AWS EC2 instance leads to autocreation of another instance of the stopped one

可紊 submitted on 2019-12-04 07:19:54
Question: I had to stop my m3.medium EC2 instance from the AWS console in order to resize it to m3.large. However, after it stopped, a new instance was automatically created. Any idea why this is happening? It caused some big trouble for me. Answer 1: Your Auto Scaling group with minimum size = 1 spun up a new instance because there were no instances in the 'running' state available to respond to requests, particularly health checks. Your instance was deemed 'unhealthy' and replaced by the ASG. If your instance …
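A minimal boto3 sketch of one way to avoid this, assuming a hypothetical ASG name 'my-asg': suspend the ASG's health-check and replacement processes before stopping the instance, then resume them once it is resized and running again.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

# Stop the ASG from marking the stopped instance unhealthy and replacing it.
autoscaling.suspend_processes(
    AutoScalingGroupName="my-asg",  # hypothetical ASG name
    ScalingProcesses=["HealthCheck", "ReplaceUnhealthy"],
)

# ... stop the instance, change the instance type, start it again ...

autoscaling.resume_processes(
    AutoScalingGroupName="my-asg",
    ScalingProcesses=["HealthCheck", "ReplaceUnhealthy"],
)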

Running Kubernetes autoscaler

北城余情 submitted on 2019-12-04 04:31:41
Question: I have a replication controller running with the following spec:

apiVersion: v1
kind: ReplicationController
metadata:
  name: owncloud-controller
spec:
  replicas: 1
  selector:
    app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
      - name: owncloud
        image: adimania/owncloud9-centos7
        ports:
        - containerPort: 80
        volumeMounts:
        - name: userdata
          mountPath: /var/www/html/owncloud/data
        resources:
          requests:
            cpu: 400m
      volumes:
      - name: userdata
        hostPath:
          path: /opt/data

Now I run an HPA …
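For reference, a minimal sketch of attaching a CPU-based HPA to that replication controller with the official kubernetes Python client; the HPA name, namespace and replica bounds are assumptions:

from kubernetes import client, config

config.load_kube_config()
autoscaling_api = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="owncloud-hpa"),  # hypothetical name
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="v1",
            kind="ReplicationController",
            name="owncloud-controller",
        ),
        min_replicas=1,
        max_replicas=5,
        target_cpu_utilization_percentage=50,
    ),
)

autoscaling_api.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)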

AWS autoscale ELB status checks grace period

旧城冷巷雨未停 submitted on 2019-12-04 03:32:58
I'm running servers in an AWS Auto Scaling group. The running servers are behind a load balancer, and I'm using the ELB to manage the Auto Scaling group's health checks. When servers are started and join the Auto Scaling group, they currently join the load balancer immediately. How long (i.e. what health check grace period) do I need to wait before letting them join the load balancer? Should it be only after the servers are in the 'running' state? Should it be only after the servers have passed the system and instance status checks? There are two types of health check available for Auto …
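A minimal boto3 sketch of the usual approach, with made-up names and timings: use ELB health checks on the ASG and set a grace period long enough to cover boot plus application start-up, so new instances are not marked unhealthy while still initialising.

import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Use the load balancer's health check for the ASG and give new instances
# time to boot and start the application before that check can fail them.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",   # hypothetical ASG name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,      # seconds; tune to your measured start-up time
)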

List out auto scaling group names with a specific application tag using boto3

自古美人都是妖i submitted on 2019-12-03 21:09:45
I was trying to fetch the Auto Scaling groups whose Application tag value is 'CCC'. The list is as below:

gweb  prd-dcc-eap-w2
gweb  prd-dcc-emc
gweb  prd-dcc-ems
CCC   dev-ccc-wer
CCC   dev-ccc-gbg
CCC   dev-ccc-wer

The script I coded below gives output which includes one ASG without the CCC tag.

#!/usr/bin/python
import boto3

client = boto3.client('autoscaling', region_name='us-west-2')
response = client.describe_auto_scaling_groups()

ccc_asg = []
all_asg = response['AutoScalingGroups']
for i in range(len(all_asg)):
    all_tags = all_asg[i]['Tags']
    for j in range(len(all_tags)):
        if all_tags[j]['Key'] == 'Name': …
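A sketch of the filtering the question seems to be after, matching on the Application tag rather than the Name tag; the paginator and the exact tag key are assumptions:

import boto3

client = boto3.client('autoscaling', region_name='us-west-2')

ccc_asg = []
# Paginate in case there are more ASGs than a single describe call returns.
paginator = client.get_paginator('describe_auto_scaling_groups')
for page in paginator.paginate():
    for asg in page['AutoScalingGroups']:
        for tag in asg['Tags']:
            if tag['Key'] == 'Application' and tag['Value'] == 'CCC':
                ccc_asg.append(asg['AutoScalingGroupName'])

print(ccc_asg)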

How does pod replica scaling down work in Kubernetes Horizontal Pod Autoscaler?

前提是你 submitted on 2019-12-03 14:10:26
My understanding is that in Kubernetes, when using the Horizontal Pod Autoscaler, if the targetCPUUtilizationPercentage field is set to 50% and the average CPU utilization across all the pod's replicas is above that value, the HPA will create more replicas. Once the average CPU drops below 50% for some time, it will lower the number of replicas. Here is the part I am not sure about: what if the CPU utilization on a pod is 10%, not 0%? Will the HPA still terminate the replica? 10% CPU isn't much, but since it's not 0%, some task is currently running on that pod. If it's a long-lasting task …
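For context, the HPA sizes the workload from a ratio of observed to target utilization, roughly desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization), rather than checking whether individual pods are idle. A tiny illustration with made-up numbers:

import math

def desired_replicas(current_replicas: int, current_utilization: float, target_utilization: float) -> int:
    # Simplified form of the HPA scaling rule; the real controller also applies a
    # tolerance band, stabilisation windows and readiness checks before acting.
    return max(1, math.ceil(current_replicas * current_utilization / target_utilization))

# 4 replicas averaging 10% CPU against a 50% target scale down to 1 replica,
# even though each pod is doing a little work rather than none.
print(desired_replicas(4, 10, 50))  # -> 1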

CloudFormation AutoScalingGroup not waiting for signal on update/scale-up

青春壹個敷衍的年華 submitted on 2019-12-03 12:47:38
Question: I'm working with a CloudFormation template that brings up as many instances as I request, and I want to wait for them to finish initialising (via UserData) before the stack creation/update is considered complete. The expectation: creating or updating the stack should wait for signals from all newly created instances, so as to ensure that their initialisation is complete. I don't want the stack creation or update to be considered successful if any of the created instances fails to initialise. The …
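For reference, a sketch of where the signal-related attributes live on an AutoScalingGroup resource, built here as a plain Python dict and printed as template JSON; the counts, timeouts and resource names are assumptions, and each instance's UserData still has to call cfn-signal at the end of its initialisation:

import json

# Hypothetical AutoScalingGroup fragment; merge the attributes into your own template.
asg_resource = {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {
        "MinSize": "3",
        "MaxSize": "3",
        "DesiredCapacity": "3",
        # Launch configuration, subnets etc. omitted. The UserData there would end with
        # something like: /opt/aws/bin/cfn-signal -e $? --stack <stack> --resource WebASG --region <region>
    },
    # Stack creation waits for this many cfn-signal calls before the ASG is complete.
    "CreationPolicy": {
        "ResourceSignal": {"Count": 3, "Timeout": "PT15M"}
    },
    # Rolling updates replace instances in batches and wait for their signals too.
    "UpdatePolicy": {
        "AutoScalingRollingUpdate": {
            "MinInstancesInService": "1",
            "MaxBatchSize": "1",
            "PauseTime": "PT15M",
            "WaitOnResourceSignals": True,
        }
    },
}

print(json.dumps({"Resources": {"WebASG": asg_resource}}, indent=2))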

Solr AutoScaling - Add replicas on new nodes

吃可爱长大的小学妹 submitted on 2019-12-03 08:04:11
Using Solr version 7.3.1. Starting with 3 nodes, I have created a collection like this: wget "localhost:8983/solr/admin/collections?action=CREATE&autoAddReplicas=true&collection.configName=my_col_config&maxShardsPerNode=1&name=my_col&numShards=1&replicationFactor=3&router.name=compositeId&wt=json" -O /dev/null In this way I have a replica on each node. GOAL: each shard should add a replica to new nodes joining the cluster. When a node is shut down, it should just go away. Only one replica for each shard on each node. I know that it should be possible with the new AutoScaling API but I am …
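A minimal sketch of one way to experiment with this via the Solr 7.x autoscaling API: a cluster policy capping replicas per node plus a nodeAdded trigger that computes and executes a plan when nodes join. The trigger name, waitFor and the exact policy are assumptions to adapt, not a verified recipe:

import requests

AUTOSCALING_URL = "http://localhost:8983/solr/admin/autoscaling"

# At most one replica of each shard on any node.
requests.post(AUTOSCALING_URL, json={
    "set-cluster-policy": [
        {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
    ]
}).raise_for_status()

# When a node joins, compute a plan against the policy and execute it.
requests.post(AUTOSCALING_URL, json={
    "set-trigger": {
        "name": "node_added_trigger",
        "event": "nodeAdded",
        "waitFor": "10s",
        "enabled": True,
        "actions": [
            {"name": "compute_plan", "class": "solr.ComputePlanAction"},
            {"name": "execute_plan", "class": "solr.ExecutePlanAction"},
        ],
    }
}).raise_for_status()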