elasticsearch python client - work with many nodes - how to work with sniffer

Submitted by 大城市里の小女人 on 2019-12-10 23:21:03

Question


I have one cluster with 2 nodes.

I am trying to understand the best practice for connecting to the nodes, and to check failover when one node is down.

From the documentation:

es = Elasticsearch(
    ['esnode1', 'esnode2'],
    # sniff before doing anything
    sniff_on_start=True,
    # refresh nodes after a node fails to respond
    sniff_on_connection_fail=True,
    # and also every 60 seconds
    sniffer_timeout=60
)

So I tried to connect to my nodes like this:

client = Elasticsearch([ip1, ip2], sniff_on_start=True, sniffer_timeout=10, sniff_on_connection_fail=True)

where ip1/ip2 are the machines' IPs (for example 10.0.0.1, 10.0.0.2).

In order to test it, I terminated ip2 (or put a non-existent IP). Now, when I try to connect, I always get:

TransportError: TransportError(N/A, 'Unable to sniff hosts - no viable hosts found.') 

even though ip1 exists and is up.

If I try to connect like this:

es = Elasticsearch([ip1, ip2])

then I can see in the log that if the client does not get a response from ip2, it moves on to ip1 and returns a valid response.

So am I missing something here? I thought that with sniffing the client won't throw any exception if one of the nodes is down, and will continue working with the active nodes (until the next sniff).
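To make the failure mode above concrete, here is a simplified, stdlib-only model of what the sniffing step does (this is an illustrative sketch, not the real elasticsearch-py code; `sniff_hosts` and `probe` are made-up names): each seed host is polled with a per-request timeout, hosts that answer in time become the new connection pool, and an empty result raises an error, like the `TransportError` in the traceback.

```python
# Simplified model of sniffing: poll each seed host with a timeout;
# keep only hosts that answer in time, and raise if none do.

def sniff_hosts(seed_hosts, probe, sniff_timeout):
    """Return the hosts that respond within sniff_timeout seconds.

    probe(host) -> response latency in seconds, or raises on a dead host.
    """
    viable = []
    for host in seed_hosts:
        try:
            latency = probe(host)
        except ConnectionError:
            continue  # dead node: skip it and try the next seed
        if latency <= sniff_timeout:
            viable.append(host)
    if not viable:
        raise RuntimeError("Unable to sniff hosts - no viable hosts found.")
    return viable


def probe(host):
    # Pretend 10.0.0.2 is down and 10.0.0.1 answers in 0.5 s.
    latencies = {"10.0.0.1": 0.5}
    if host not in latencies:
        raise ConnectionError(host)
    return latencies[host]


# With a very short timeout even the live node is "not viable":
# sniff_hosts(["10.0.0.1", "10.0.0.2"], probe, 0.1)  -> raises
# With a generous timeout the live node is kept:
# sniff_hosts(["10.0.0.1", "10.0.0.2"], probe, 10)   -> ["10.0.0.1"]
```

In this model, a too-short sniff timeout makes the whole cluster look dead even when one node is healthy, which matches the behaviour described in the question.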

Update: I get this behaviour whenever I set sniffing to True:

----> 1 client = Elasticsearch([ip1, ip2],sniff_on_start=True)

/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.pyc in __init__(self, hosts, transport_class, **kwargs)
    148             :class:`~elasticsearch.Connection` instances.
    149         """
--> 150         self.transport = transport_class(_normalize_hosts(hosts), **kwargs)
    151 
    152         # namespaced clients for compatibility with API names

/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.pyc in __init__(self, hosts, connection_class, connection_pool_class, host_info_callback, sniff_on_start, sniffer_timeout, sniff_timeout, sniff_on_connection_fail, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, send_get_body_as, **kwargs)
    128 
    129         if sniff_on_start:
--> 130             self.sniff_hosts(True)
    131 
    132     def add_connection(self, host):

/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.pyc in sniff_hosts(self, initial)
    235         # transport_schema or host_info_callback blocked all - raise error.
    236         if not hosts:
--> 237             raise TransportError("N/A", "Unable to sniff hosts - no viable hosts found.")
    238 
    239         self.set_connections(hosts)

Answer 1:


You need to set sniff_timeout to a higher value than the default (which is 0.1 seconds, if memory serves).

Try it like this:

es = Elasticsearch(
    ['esnode1', 'esnode2'],
    # sniff before doing anything
    sniff_on_start=True,
    # refresh nodes after a node fails to respond
    sniff_on_connection_fail=True,
    # and also every 60 seconds
    sniffer_timeout=60,
    # set sniffing request timeout to 10 seconds
    sniff_timeout=10
)
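If sniff_on_start still fails, a quick pre-flight check can tell you which seed nodes even accept TCP connections before you hand them to the client. This is not part of the original answer, just a hypothetical stdlib-only sketch; the hostnames are the example names from the question and the `reachable` helper is made up:

```python
import socket

def reachable(host, port=9200, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, timed out, or unresolvable
        return False

seeds = ["esnode1", "esnode2"]  # example hostnames from the question
live = [h for h in seeds if reachable(h, timeout=1.0)]
print(live)  # pass only the live nodes as seeds while debugging
```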


Source: https://stackoverflow.com/questions/39640200/elasticsearch-python-client-work-with-many-nodes-how-to-work-with-sniffer
