pika

Pitfalls of pika

Submitted by 霸气de小男生 on 2019-11-30 05:08:42
I had previously only used Celery; this time I tried pika, following the Python version of the official RabbitMQ tutorial, https://www.rabbitmq.com/tutorials/tutorial-one-python.html, and ran into all kinds of pitfalls. If the RabbitMQ site deliberately glosses over details to ease newcomers in, then pika's official documentation really deserves criticism: it is poor, far behind Celery's.

1 Stream connection lost: BrokenPipeError(32, 'Broken pipe')

Using pika's BlockingConnection, the publishing (producer) side drops offline shortly after starting:

raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: BrokenPipeError(32, 'Broken pipe')

According to https://www.cnblogs.com/zhaof/p/9774390.html, the fix is to set the heartbeat to 0 when connecting, so the connection is not automatically closed on timeout; otherwise the RabbitMQ server pushes its default value of 580.

#--------------rabbitMQ------------------
import pika
connection = pika
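A minimal sketch of the heartbeat workaround described above (the excerpt's own code is cut off); host, port, and queue name are placeholders, and it assumes pika 1.x, where the parameter is named heartbeat (older 0.x releases call it heartbeat_interval):

import pika

# Disabling heartbeats (heartbeat=0) stops the broker from timing the client
# out, at the cost of losing dead-connection detection.
params = pika.ConnectionParameters(host='localhost', port=5672, heartbeat=0)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
connection.close()

Note that heartbeat=0 only masks the symptom; keeping heartbeats enabled and making sure the client processes I/O regularly is usually the safer choice.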

No handlers could be found for logger “pika.adapters.blocking_connection”

Submitted by 我的梦境 on 2019-11-30 01:11:11
Question: Similar questions all seem to revolve around using a custom logger; I'm happy to just use the default, or none at all. My pika Python app runs and receives messages, but after a few seconds it crashes with "No handlers could be found for logger 'pika.adapters.blocking_connection'". Any ideas?

import pika
credentials = pika.PlainCredentials('xxx_apphb.com', 'xxx')
parameters = pika.ConnectionParameters('bunny.cloudamqp.com', 5672, 'xxx_apphb.com', credentials)
connection = pika.BlockingConnection
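One common fix, not mentioned in the excerpt above, is simply to configure the root logger before creating the connection; the "No handlers could be found" message is Python 2's logging module complaining that pika's logger has no handler, and once a handler exists the real connection error becomes visible. A minimal sketch with placeholder credentials:

import logging
import pika

# Give pika's internal loggers a handler so errors are printed instead of the
# bare "No handlers could be found" warning.
logging.basicConfig(level=logging.INFO)

credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters('localhost', 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)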

Pika frequently asked questions (FAQ)

Submitted by 坚强是说给别人听的谎言 on 2019-11-29 19:38:54
1 Building and installation

Q1: Which systems are supported?
A1: Currently only Linux environments are supported, including CentOS and Ubuntu; Windows and Mac are not supported.

Q2: How do I build and install it?
A2: See the build-and-install wiki.

Q3: On Ubuntu the build occasionally fails with "isnan isinf was not declared"?
A3: Some older versions of Pika don't handle the Ubuntu environment well, and this can happen in certain cases. As a workaround, edit the code to use std::isnan and std::isinf in place of isnan and isinf, and include the cmath header (#include <cmath>). We will make newer releases compatible with this.

2 Design and implementation

Q1: Why open so many threads? For purge, for example, wouldn't a scheduled task be enough? Surely the programming framework supports timers?
A1: Pika has some fairly time-consuming tasks, such as deleting binlogs, scanning keys, taking backups, and syncing data files. To avoid affecting normal user requests, these are all run in the background, and whatever can run in parallel is put on separate threads to speed up the background work as much as possible. Is the framework you mean pink? Pink does support timers: as long as the user defines a cron handle and a frequency, each worker thread runs it on schedule, but while it runs the worker is tied up and cannot respond to user requests, so time-consuming tasks are still better done on dedicated threads; Redis's bio exists for the same reason.

Q2: Couldn't the heartbeat just be handled by the sender?

Large-capacity Redis-like storage -- an introduction to Pika

Submitted by 依然范特西╮ on 2019-11-29 19:21:03
About the speaker: Hello everyone. Let me introduce myself first: I am Song Zhao from the infrastructure group of 360's web platform, responsible for developing Pika, a large-capacity Redis-like store, and Bada, a distributed store. Here are my GitHub and blog; corrections and discussion are always welcome ^^

My GitHub: https://github.com/KernelMaker
My blog: http://kernelmaker.github.io
Pika's GitHub, feel free to follow it: https://github.com/Qihoo360/pika

Pika introduction

Pika is a Redis-like storage system jointly developed by 360's DBAs and the infrastructure group. It speaks the Redis protocol and is compatible with the vast majority of Redis commands (String, Hash, List, ZSet, Set), so users can migrate a service to Pika without changing any code. Pika mainly uses persistent storage to solve the problems Redis runs into once memory usage exceeds 50 GB or 80 GB, such as long startup/recovery times, expensive master-slave synchronization, and high hardware cost, while keeping external usage as close to Redis as possible, so users are essentially unaware of whether the backend is Redis or Pika.

Since Pika aims to be Redis-compatible and to solve Redis's problems at large capacity, the first question it has to face is how to migrate from Redis to Pika. After all, Redis is extremely widely used, and if migrating from Redis to Pika were painful, not many people would use it. How many steps does a Redis-to-Pika migration take?

A simple pika producer/consumer class

Submitted by 时光毁灭记忆、已成空白 on 2019-11-29 08:22:25
# -*- coding: utf-8 -*-
# by dl
import pika


class MessageQueue:
    def __init__(self, host='localhost', queueName='TestQueue', exchange='',
                 body='Hello World', consumer_tag=''):
        self.host = host
        self.queueName = queueName
        self.exchange = exchange
        self.body = body
        self.consumer_tag = consumer_tag

    def SPsend(self):
        # Simple producer: declare the queue, publish one message, then close.
        connection = pika.BlockingConnection(pika.ConnectionParameters(self.host))
        channel = connection.channel()
        channel.queue_declare(queue=self.queueName)
        channel.basic_publish(exchange=self.exchange,
                              routing_key=self.queueName,
                              body=self.body)
        print("[x] Sent 'Hello World!'")
        connection.close()  # the excerpt breaks off at "connection"; closing it is the obvious completion
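The excerpt shows only the producer side; a consumer counterpart in the same style, sketched here as a hypothetical class and method (the names and defaults are assumptions, and the pika 1.x basic_consume signature is assumed), might look like this:

import pika


class MessageQueueConsumer:
    def __init__(self, host='localhost', queueName='TestQueue'):
        self.host = host
        self.queueName = queueName

    def SPreceive(self):
        connection = pika.BlockingConnection(pika.ConnectionParameters(self.host))
        channel = connection.channel()
        channel.queue_declare(queue=self.queueName)

        def callback(ch, method, properties, body):
            # Process the message, then acknowledge it.
            print("[x] Received %r" % body)
            ch.basic_ack(delivery_tag=method.delivery_tag)

        channel.basic_consume(queue=self.queueName, on_message_callback=callback)
        channel.start_consuming()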

Is it possible to move / merge messages between RabbitMQ queues?

Submitted by 喜你入骨 on 2019-11-29 07:57:37
I'd like to know whether it is possible to move or merge messages from one queue into another. For example: main-queue contains messages ['cat-1','cat-2','cat-3','cat-4','dog-1','dog-2','cat-5'] and dog-queue contains messages ['dog-1', 'dog-2', 'dog-3', 'dog-4']. So the question is (assuming both queues are on the same cluster and vhost): is it possible to move messages from dog-queue to main-queue using rabbitmqctl? In the end I'm looking to get something like: Ideally: main-queue : ['cat-1','cat-2','cat-3','cat-4','dog-1','dog-2','cat-5', dog-3, dog-4] But this is ok too: main-queue : ['cat-1','cat-2','cat-3',
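For context (not part of the original question): rabbitmqctl itself has no command for moving messages; the usual options are the shovel plugin or a small script that drains one queue and republishes into the other. A sketch of the script approach with pika, assuming a localhost broker and the queue names from the question:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Drain dog-queue one message at a time and republish each onto main-queue
# through the default exchange, acknowledging only after the republish.
while True:
    method, properties, body = channel.basic_get(queue='dog-queue')
    if method is None:  # queue is empty
        break
    channel.basic_publish(exchange='', routing_key='main-queue',
                          body=body, properties=properties)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()

Note that this appends the moved messages to the end of main-queue rather than interleaving them with existing ones.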

Message queues: RabbitMQ

Submitted by 谁说胖子不能爱 on 2019-11-28 14:09:51
What is RabbitMQ? RabbitMQ is message middleware based on the AMQP protocol that provides reliable message delivery between applications. It performs well in terms of ease of use, scalability, and high availability. Using message middleware also helps decouple applications: the producer (client) does not need to know that the consumer (server) exists, and the two ends can be written in different languages, which greatly increases flexibility.

Installing RabbitMQ on CentOS 7, reference: https://www.cnblogs.com/liaojie970/p/6138278.html

After RabbitMQ is installed, connecting to the RabbitMQ server remotely requires setting up permissions (a pika connection sketch follows the list):

1. First create a user on the RabbitMQ server:
   [root@rabbitmq ~]# rabbitmqctl add_user admin 123456
2. Also configure permissions so it can be accessed from outside:
   [root@rabbitmq ~]# rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
3. Give the user the administrator tag (optional):
   [root@rabbitmq ~]# rabbitmqctl set_user_tags admin administrator
4. List all users:
   [root@rabbitmq ~]# rabbitmqctl list_users
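A minimal pika sketch of connecting remotely with the user created above; the broker hostname is a placeholder, and the admin/123456 credentials simply mirror the rabbitmqctl commands in the list (use real credentials in practice):

import pika

credentials = pika.PlainCredentials('admin', '123456')
parameters = pika.ConnectionParameters(host='rabbitmq.example.com',
                                       port=5672,
                                       virtual_host='/',
                                       credentials=credentials)
connection = pika.BlockingConnection(parameters)
print("Connected:", connection.is_open)
connection.close()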

Authenticating rabbitmq using ExternalCredentials

Submitted by 橙三吉。 on 2019-11-28 11:15:08
Question: I have a RabbitMQ server and use the pika library with Python to produce/consume messages. For development purposes, I was simply using credentials = pika.PlainCredentials(<user-name>, <password>). I want to change that to use pika.ExternalCredentials or TLS. I have set up my RabbitMQ server to listen for TLS on port 5671 and have configured it correctly. I am able to communicate with rabbitmq from localhost, but the moment I try to communicate with it from outside the localhost it doesn't
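A connection sketch for the TLS + EXTERNAL setup described above (not from the original post). It assumes pika 1.x's SSLOptions API, placeholder certificate paths and hostname, and a broker with the rabbitmq_auth_mechanism_ssl plugin enabled so the client certificate's name maps to a RabbitMQ user:

import ssl
import pika

# Trust the CA that signed the broker's certificate and present a client
# certificate; all file paths are placeholders.
context = ssl.create_default_context(cafile='/path/to/ca_certificate.pem')
context.load_cert_chain('/path/to/client_certificate.pem',
                        '/path/to/client_key.pem')

parameters = pika.ConnectionParameters(
    host='rabbitmq.example.com',   # must match the server certificate
    port=5671,                     # the TLS listener
    credentials=pika.credentials.ExternalCredentials(),  # identity comes from the client cert
    ssl_options=pika.SSLOptions(context, 'rabbitmq.example.com'),
)
connection = pika.BlockingConnection(parameters)
print("Connected over TLS:", connection.is_open)
connection.close()

A frequent cause of "works on localhost, fails remotely" is a server certificate whose common name or subjectAltName only covers localhost, so hostname verification fails from other machines.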

RabbitMQ change queue parameters on a production system

Submitted by 北战南征 on 2019-11-28 08:26:55
I'm using RabbitMQ as a message queue in a service-oriented architecture, where many separate web services publish messages bound for RabbitMQ queues. Those queues are in turn subscribed to by various consumers, which perform background work; a pretty vanilla use case for RabbitMQ. Now I'd like to change some of the queue parameters (specifically, I'd like to bind queues to a new dead-letter exchange with a certain routing key). My problem is that making this change in place on a production system is problematic for a couple of reasons. What's the best way for me to transition to these new queues
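For context (not from the original question): dead-letter settings are normally supplied as queue arguments at declaration time, which is exactly why they cannot be changed in place on an existing queue; either the queue is redeclared under a new name with the new arguments, or a broker policy applies the setting without touching declarations. A sketch of the declaration route, with assumed exchange and queue names:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare the dead-letter exchange and a new queue that routes rejected or
# expired messages to it with a fixed routing key.
channel.exchange_declare(exchange='dlx', exchange_type='direct', durable=True)
channel.queue_declare(queue='work-queue-v2',
                      durable=True,
                      arguments={'x-dead-letter-exchange': 'dlx',
                                 'x-dead-letter-routing-key': 'failed'})
connection.close()

An alternative that avoids redeclaring queues at all is a broker policy (rabbitmqctl set_policy) that applies the dead-letter settings to every queue matching a pattern.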

Handling long running tasks in pika / RabbitMQ

Submitted by ╄→гoц情女王★ on 2019-11-28 03:33:40
We're trying to set up a basic directed queue system where a producer generates several tasks and one or more consumers grab a task at a time, process it, and acknowledge the message. The problem is that the processing can take 10-20 minutes, during which we're not responding to messages, causing the server to disconnect us. Here's some pseudo code for our consumer:

#!/usr/bin/env python
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
print
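One common pattern for this situation (not from the original post) is to run the slow work on a separate thread while the connection's own thread keeps consuming, so heartbeats continue to be answered, and to schedule the acknowledgement back onto the connection thread with add_callback_threadsafe. A sketch assuming pika 1.x and a placeholder do_work() function:

#!/usr/bin/env python
import threading
import time
import pika


def do_work(body):
    time.sleep(600)  # stands in for the real 10-20 minute task


def on_message(channel, method, properties, body, connection):
    def worker():
        do_work(body)
        # Channel methods are not thread-safe; schedule the ack back onto the
        # connection's thread.
        connection.add_callback_threadsafe(
            lambda: channel.basic_ack(delivery_tag=method.delivery_tag))
    threading.Thread(target=worker, daemon=True).start()


connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
channel.basic_qos(prefetch_count=1)
channel.basic_consume(
    queue='task_queue',
    on_message_callback=lambda ch, m, p, b: on_message(ch, m, p, b, connection))
# start_consuming() keeps servicing the socket, so heartbeats are answered
# while the worker thread does the long processing.
channel.start_consuming()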