failover

NameNode HA failover time

大兔子大兔子 submitted on 2019-12-05 22:42:03
NameNode HA (NFS, QJM) is available in Hadoop 2.x (HDFS-1623). It provides fast failover for the NameNode, but I can't find any description of how long it takes to recover from a failure. Can anyone tell me? Thanks for your answer. As a matter of fact, I want to know the time for the transition between the two nodes (active NameNode and standby NameNode). Can you tell me how long? Here are some qualified examples of failover times with a standby NameNode: a 60-node cluster with 6 million blocks using 300 TB raw storage and 100K files: 30 seconds. Hence total failover time ranges from 1-3

FOSS ASP.Net Session Replication Solution?

跟風遠走 submitted on 2019-12-05 08:04:38
I've been searching (with little success) for a free/open-source session clustering and replication solution for ASP.NET. I've run across the usual suspects (Indexus SharedCache, memcached); however, each has some limitations. Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though. Memcached - Little replication/failover support without going to a DB backend. Several SF.net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial. Microsoft Velocity - Not

How do I get pcp to automatically attach nodes to postgres pgpool?

冷暖自知 submitted on 2019-12-04 19:22:05
I'm using Postgres 9.4.9 and pgpool 3.5.4 on CentOS 6.8. I'm having a major hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky-dory. So I figured that until I could properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like pcp_attach_node 10 localhost

CTDB Samba failover not highly available

萝らか妹 submitted on 2019-12-04 17:07:27
My setup: 3 nodes running Ceph + CephFS; 2 of these nodes running CTDB & Samba; 1 client (not one of the 3 servers). It is a lab setup, so there is only one NIC per server=node and one subnet, as well as all Ceph components plus Samba on the same servers. I'm aware that this is not the way to go. The problem: I want to host a clustered Samba file share on top of Ceph with CTDB. I followed the CTDB documentation ( https://wiki.samba.org/index.php/CTDB_and_Clustered_Samba#Configuring_Clusters_with_CTDB ) and parts of this: https://wiki.samba.org/index.php/Samba_CTDB_GPFS_Cluster_HowTo . The cluster is running

Adding a generic service to cluster from powershell

倖福魔咒の submitted on 2019-12-04 16:53:10
I'm a newbie at clustering and I'm trying to add a generic service to a cluster using PowerShell. I can add it without any issues using the GUI, but for some reason I cannot add it from PowerShell. Following the first example from the documentation for Add-ClusterGenericServiceRole , I've tried the following command: Add-ClusterGenericServiceRole -ServiceName "MyService" This throws the following error: Static network [network range] was not configured. Please use -StaticAddress to use this network or -IgnoreNetwork to ignore it. What's the connection between the network and my service?

WebLogic load balancing

我的梦境 submitted on 2019-12-04 14:01:10
I'm currently developing a project supported on a WebLogic clustered environment. I've successfully set up the cluster, but now I want a load-balancing solution (currently, only for testing purposes, I'm using WebLogic's HttpClusterServlet with round-robin load balancing). Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load balancing for WebLogic? These are the main topics I want to cover: performance (normal and on failover);

Java outgoing TCP connection failover based on multiple DNS results

我与影子孤独终老i submitted on 2019-12-04 11:35:39
If I make a connection using new Socket("unit.domain.com", 100) and the unit.domain.com DNS record has multiple IP addresses in the A record: in the event of a failed connection, does Java automatically connect to one of the other addresses in the list, like a browser does? Or does that have to be implemented manually? bestsss answered: No! Creating a socket via new Socket(String, int) results in a resolution like this: addr = InetAddress.getByName(hostname); which is a shortcut for return InetAddress.getAllByName(host)[0]; The address resolution is performed in the Socket c-tor. If you have to
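The manual fallback the answer points toward can be sketched as below: resolve every address with InetAddress.getAllByName and try each one until a connection succeeds. This is an illustrative sketch, not the full answer from the thread; the class name, timeout value, and the use of localhost in main are my own choices:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class MultiAddressConnect {
    // new Socket(host, port) only ever uses the first A record, so to get
    // browser-like failover we loop over all resolved addresses ourselves.
    static Socket connectAny(String host, int port, int timeoutMs) throws IOException {
        IOException last = null;
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            Socket s = new Socket();
            try {
                s.connect(new InetSocketAddress(addr, port), timeoutMs);
                return s; // first address that accepts the connection wins
            } catch (IOException e) {
                last = e; // remember the failure and move on to the next address
                s.close();
            }
        }
        throw last != null ? last : new IOException("no addresses for " + host);
    }

    public static void main(String[] args) {
        // "unit.domain.com" from the question is a placeholder, so this demo
        // uses localhost; the outcome depends on what is listening on port 80.
        try (Socket s = connectAny("localhost", 80, 2000)) {
            System.out.println("connected to " + s.getInetAddress());
        } catch (IOException e) {
            System.out.println("all addresses failed: " + e.getMessage());
        }
    }
}
```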

Using SignalR with Redis messagebus failover using BookSleeve's ConnectionUtils.Connect()

孤者浪人 submitted on 2019-12-04 07:48:38
I am trying to create a Redis message bus failover scenario with a SignalR app. At first, we tried a simple hardware load-balancer failover that simply monitored two Redis servers. The SignalR application pointed to the single HLB endpoint. I then failed one server, but was unable to successfully get any messages through on the second Redis server without recycling the SignalR app pool. Presumably this is because it needs to issue the setup commands to the new Redis message bus. As of

Log4j2's FailoverAppender Error: appender Failover has no parameter that matches element Failovers

泄露秘密 submitted on 2019-12-04 06:53:34
When I compile my Spring 3.2.9 web application using log4j 2.1, this error appears in the console: 2015-02-02 12:08:25,213 ERROR appender Failover has no parameter that matches element Failovers What I understand is that the element "Failovers" does not exist inside the element "Failover", right? Why would this happen? I don't see what's wrong, since I have the same configuration as the log4j2 manual. I have this configuration in my log4j2.xml: <?xml version="1.0" encoding="UTF-8"?> <Configuration name="vcr-log4j2-config" status="debug"> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT">
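The quoted XML is cut off before the Failover appender itself, so for comparison here is a minimal configuration in the shape the log4j2 manual documents. The Failovers element is valid, but it must contain AppenderRef children pointing at defined appenders, and the primary appender needs ignoreExceptions="false" so its failures actually reach the Failover appender. Appender names and the app.log file are illustrative, not taken from the question:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %p %c - %m%n"/>
    </Console>
    <!-- Primary must not swallow exceptions, or failover never triggers -->
    <File name="PrimaryFile" fileName="app.log" ignoreExceptions="false">
      <PatternLayout pattern="%d %p %c - %m%n"/>
    </File>
    <!-- Delegates to the primary; falls back to the Failovers list on error -->
    <Failover name="Failover" primary="PrimaryFile">
      <Failovers>
        <AppenderRef ref="STDOUT"/>
      </Failovers>
    </Failover>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Failover"/>
    </Root>
  </Loggers>
</Configuration>
```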

Kafka consumer fails to consume if first broker is down

你说的曾经没有我的故事 submitted on 2019-12-04 03:38:00
I'm using the latest version of Kafka (kafka_2.12-1.0.0.tgz). I have set up a simple cluster with 3 brokers (just changed broker.id=1 and listeners=PLAINTEXT://:9092 in the properties file for each instance). After the cluster was up, I created a topic with the following command: ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 13 --topic demo then started the Kafka consumer and producers with the following commands: ./kafka-console-producer.sh --topic demo --broker-list localhost
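The commands above are cut off, but the symptom described (the consumer failing when the first broker dies) is typically caused by listing only one broker as the bootstrap address: the bootstrap list is used for the initial metadata fetch, so if the only listed broker is down the client never discovers the rest of the cluster. A hedged sketch of consumer properties listing all three brokers follows; ports 9093 and 9094 and the group id are assumptions, since the question only shows :9092:

```java
import java.util.Properties;

public class KafkaBootstrapConfig {
    // Build consumer properties that name every broker, so the initial
    // metadata fetch can succeed even when one broker is unreachable.
    static Properties consumerProps() {
        Properties props = new Properties();
        // All three brokers, not just the first one (ports assumed)
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("bootstrap.servers"));
    }
}
```

The same idea applies to the console tools: pass a comma-separated list of all brokers instead of a single host:port.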