high-availability

NameNode HA when using hdfs:// URI

橙三吉。 Submitted on 2019-12-12 07:56:06
Question: With the HDFS or HFTP URI scheme (e.g. hdfs://namenode/path/to/file) I can access HDFS clusters without requiring their XML configuration files. This is very handy when running shell commands like hdfs dfs -get or hadoop distcp, or when reading files from Spark with sc.hadoopFile(), because I don't have to copy and manage XML files for every relevant HDFS cluster to every node where that code might run. One drawback of this approach is that I have to use the active NameNode's hostname,
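One way to keep the "no local XML files" convenience with an HA cluster, as a minimal sketch: the HA nameservice can be described entirely on the command line with -D generic options, so the client fails over between NameNodes without an hdfs-site.xml. The nameservice name (hacluster) and the namenode hostnames below are placeholders, not values from the question; ConfiguredFailoverProxyProvider is the standard HDFS client failover class:

    hdfs dfs \
      -Ddfs.nameservices=hacluster \
      -Ddfs.ha.namenodes.hacluster=nn1,nn2 \
      -Ddfs.namenode.rpc-address.hacluster.nn1=namenode1:8020 \
      -Ddfs.namenode.rpc-address.hacluster.nn2=namenode2:8020 \
      -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
      -get hdfs://hacluster/path/to/file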

How to connect to a high availability SQL Server from Python + SQL Alchemy

若如初见. Submitted on 2019-12-11 12:12:34
Question: Our infrastructure group has asked us to "add MultiSubnetFailover=True to all application connection strings" so that we can take advantage of a new SQL Server HA setup involving Availability Groups. I am stuck, though, since we have some Python programs that connect (read+write) to the database via SQLAlchemy. I have been searching and I don't see anything about this MultiSubnetFailover feature being available as an option in SQLAlchemy or any other Python driver. Is it possible to connect
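A minimal sketch of one plausible route, assuming the mssql+pyodbc dialect and Microsoft's ODBC driver: the keyword does not need SQLAlchemy support, because it can be passed straight through in the raw ODBC connection string, where MultiSubnetFailover=Yes is a documented driver keyword. Server, database, and credential values here are placeholders:

    import urllib.parse
    from sqlalchemy import create_engine

    # Raw ODBC connection string; MultiSubnetFailover is interpreted by the
    # ODBC driver itself, not by SQLAlchemy.
    odbc_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=ag-listener.example.com;"   # Availability Group listener (placeholder)
        "DATABASE=appdb;"
        "UID=app_user;PWD=secret;"
        "MultiSubnetFailover=Yes;"
    )
    engine = create_engine(
        "mssql+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(odbc_str)
    )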

Error executing hdfs zkfc command

醉酒当歌 Submitted on 2019-12-11 10:29:37
Question: I am new to Hadoop and HDFS. I have done the following steps. I started ZooKeeper on the three namenodes:

    vagrant@172:~$ zkServer.sh start

I can see the status:

    vagrant@172:~$ zkServer.sh status
    Result Status: JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Mode: follower

With the jps command, only Jps appears, and sometimes QuorumPeerMain appears too:

    vagrant@172:~$ jps
    2237 Jps

When I run the next command:

    vagrant@172:~$ hdfs zkfc -formatZK
    16/01/07 16:10:09 INFO zookeeper
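A prerequisite check, sketched under the assumption that this is the common failure mode (zkfc cannot reach a ZooKeeper quorum): jps listing only Jps means the QuorumPeerMain process is not actually running on that node, and core-site.xml must point zkfc at the ensemble via ha.zookeeper.quorum. Hostnames below are placeholders:

    <property>
      <name>ha.zookeeper.quorum</name>
      <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
    </property>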

tomcat webapp failover

烈酒焚心 Submitted on 2019-12-11 09:33:26
Question: I am working on the high-availability aspect of a webapp deployed in Tomcat. I require a failover mechanism that is not apparent to the webapp user, and am looking at Tomcat clustering as a solution. If I am looking only at failover and not at load balancing (not required at this point), how should I configure the Tomcat cluster? EDIT: I am aware of the mechanism but am looking at the configuration aspect. Answer 1: Looking for this myself, I finally found
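For session-replication-based failover, a minimal sketch using standard Tomcat elements (the configuration the original answer arrived at is not shown in this excerpt): enable the default cluster implementation in server.xml and mark the webapp's sessions as replicable in web.xml; session attributes must be serializable.

    <!-- conf/server.xml, inside the <Engine> or <Host> element -->
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

    <!-- WEB-INF/web.xml of the webapp -->
    <distributable/>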

How to configure apache with active passive setup

和自甴很熟 Submitted on 2019-12-11 08:51:45
Question: I have two servers, Server1 and Server2, both running Apache httpd with identical configurations. I want to create an active/passive setup for these servers. Server1 (lbserver.my.com) IP: 192.168.10.88 (Active). Server2 (lbserver.my.com) IP: 192.168.10.89 (Passive). Server1 should respond to HTTP requests; if Server1 goes down, Server2 should become the active server and respond to HTTP requests. Can anyone suggest how to achieve this? I tried this with keepalived configured on both the servers
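A common shape for this with keepalived, sketched with an assumed floating virtual IP (192.168.10.90 is a placeholder, not from the question): both servers run VRRP, clients are pointed only at the virtual address, and the higher-priority node holds it while alive. On Server1, keepalived.conf would look roughly like:

    vrrp_instance VI_1 {
        state MASTER            # use state BACKUP on Server2
        interface eth0          # adjust to the actual NIC name
        virtual_router_id 51
        priority 100            # e.g. 90 on Server2
        virtual_ipaddress {
            192.168.10.90       # assumed floating IP shared by both servers
        }
    }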

Nginx retry same end point on http_502 in Docker service Discovery

大城市里の小女人 Submitted on 2019-12-11 06:12:53
Question: We use Docker Swarm with service discovery for a backend REST application. The services in the swarm are configured with endpoint_mode: vip and run in global mode. Nginx proxy-passes to the service-discovery aliases. When we update the backend services, nginx sometimes throws a 502 because service discovery may point at the service being updated. In such a case we want to retry the same endpoint again. How can we achieve this? According to this we added an upstream with the host's private IP and used proxy
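One plausible sketch with standard nginx directives (service name and port are placeholders): proxy_next_upstream controls which failures trigger a retry, and because endpoint_mode: vip means the alias resolves to a single VIP, listing the same alias twice in the upstream gives nginx a second peer to try that is effectively the same endpoint:

    upstream backend {
        server backend-service:8080;   # service-discovery alias (placeholder)
        server backend-service:8080;   # same endpoint listed again so a 502 can be retried
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_next_upstream error timeout http_502;
            proxy_next_upstream_tries 2;
        }
    }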

Failover on MySQL JDBC connections?

梦想的初衷 Submitted on 2019-12-11 03:13:22
Question: I am trying to determine how I could implement a high-availability solution using the MySQL JDBC driver; it seems there is a failover property that I can set. But I am wondering what people tend to use when implementing a simple failover mechanism with MySQL and JDBC? We are planning to have two front Tomcat servers connected to two MySQL servers. Answer 1: Even though you're asking about JDBC, I hope this helps you understand all available options... I typically handle failover by using a load
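For the driver-level option the question alludes to: MySQL Connector/J accepts a comma-separated host list in the JDBC URL and fails over between the hosts; setting failOverReadOnly=false keeps the connection writable after a failover. A sketch with placeholder hostnames and database name:

    jdbc:mysql://db1.example.com:3306,db2.example.com:3306/appdb?failOverReadOnly=false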

How to bring up the new node

流过昼夜 Submitted on 2019-12-11 02:32:23
Question: This is a follow-up question to High Availability in Cassandra. 1) Let's say we have three nodes N1, N2 and N3, with RF = 3, WC = 3 and RC = 1, which means I cannot handle any node failure in the case of a write. 2) Let's say N3 (imagine it holds the data) went down; for now we will not be able to write the data with a consistency of 3. Question 1: Now if I bring a new node N4 up and attach it to the cluster, I will still not be able to write to the cluster with consistency 3,
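For N4 to count toward the replication factor it has to take over the dead node's token ranges rather than simply join as an extra member; the usual mechanism is to start it as a replacement node. A sketch, where 10.0.0.3 stands in for N3's address and is a placeholder:

    # On the new node N4, before its first start (e.g. in cassandra-env.sh):
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.3"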

Hadoop HA. Auto failover configured but Standby NN doesn't become active until NN is started again

房东的猫 Submitted on 2019-12-10 20:27:12
Question: I am using Hadoop 2.6.0-cdh5.6.0 and have configured HA. I have an active NameNode (NN1) and a standby NameNode (NN2) being displayed. Now when I issue a kill signal to the active NameNode (NN1), the standby NameNode (NN2) does not become active until I start NN1 back up again. After starting NN1 again, it takes the standby state and NN2 takes the active state. I haven't configured the "ha.zookeeper.session-timeout.ms" parameter, so I assume it defaults to 5 seconds. I am waiting for the time
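A frequent cause of exactly this symptom is fencing that cannot complete: the standby is only promoted once the old active has been fenced, and an sshfence attempt against a dead process can stall the transition. A hedged sketch of the common hdfs-site.xml workaround (whether it applies depends on the actual fencing setup), where shell(/bin/true) acts as a fallback that always succeeds:

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence
    shell(/bin/true)</value>
    </property>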

hadoop namenode port in use

早过忘川 Submitted on 2019-12-10 12:57:25
Question: This is actually a standby HA NameNode. It was configured with the same settings as the primary, and hdfs namenode -bootstrapStandby was successfully run. It begins coming up on the standard HTTP port 50070 as defined in the config file:

    <property>
      <name>dfs.namenode.http-address.ha-hadoop.namenode2</name>
      <value>namenode2:50070</value>
    </property>

The startup begins OK, then hits:

    15/02/02 08:06:17 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:50070
    15/02/02 08:06:17 INFO
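A "port in use" at startup usually means another process is already bound to 50070 on that host; before digging into the HA configuration, a quick check with standard tools (nothing here is from the question) can confirm what holds the port:

    # See which process currently holds 50070 on the standby host:
    sudo netstat -tlnp | grep :50070
    # or, on systems with ss:
    sudo ss -tlnp | grep :50070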