ipc

Is anyone using netlink for IPC?

冷暖自知 submitted on 2020-02-27 05:42:33
Question: I am planning to use netlink for communication between two userland processes. Part of the reason for being so picky about netlink is that most of the processing for one of the processes will eventually move into kernel space, and netlink-based communication could then (hopefully) be used as-is. The approach I am taking is to define a new Generic Netlink family (at the moment it appears I will have to write a kernel module just to support that family). That is fine; I was looking at some example code,
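For plain userland-to-userland messaging (before any Generic Netlink family or kernel module enters the picture), NETLINK_USERSOCK sockets already support unicast between two processes. A minimal sketch in Python, Linux only; the two sockets below stand in for the two cooperating processes, and the kernel autobinds each port id:

```python
import socket
import struct

NETLINK_USERSOCK = getattr(socket, "NETLINK_USERSOCK", 2)
NLMSG_DONE = 3  # standard netlink type for a self-contained message


def nl_socket():
    """Create a NETLINK_USERSOCK socket; the kernel autobinds a unique port id."""
    s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_USERSOCK)
    s.bind((0, 0))  # (port id 0, no multicast groups) -> kernel picks the id
    return s


def nl_send(sock, dst_port, payload):
    # struct nlmsghdr: len (u32), type (u16), flags (u16), seq (u32), port (u32)
    hdr = struct.pack("=IHHII", 16 + len(payload), NLMSG_DONE, 0, 0,
                      sock.getsockname()[0])
    sock.sendto(hdr + payload, (dst_port, 0))


def nl_recv(sock):
    data, _addr = sock.recvfrom(65536)
    return data[16:]  # strip the 16-byte nlmsghdr


a, b = nl_socket(), nl_socket()
nl_send(a, b.getsockname()[0], b"hello")
received = nl_recv(b)
```

Switching to a Generic Netlink family later keeps the same socket type; only the message payload gains the genlmsghdr and attribute (nlattr) framing that the kernel-side family expects.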

Permission control in Binder IPC

可紊 submitted on 2020-02-26 09:41:19
Copied from: http://gityuan.com/2016/03/05/binder-clearCallingIdentity/ Based on the Android 6.0 source code, this analyzes the principle and uses of clearCallingIdentity and restoreCallingIdentity, the permission-control methods of Binder IPC. frameworks/base/core/java/android/os/Binder.java frameworks/base/core/jni/android_util_Binder.cpp frameworks/native/libs/binder/IPCThreadState.cpp 1. Overview The Binder series explored the Binder IPC mechanism of Android M in depth across ten articles. Anyone who has read the Android system source code will have come across Binder.clearCallingIdentity() and Binder.restoreCallingIdentity(), both defined in Binder.java: // clears the remote caller's uid and pid, replacing them with the current local process's uid and pid public static final native long clearCallingIdentity(); // restores the remote caller's uid and pid information, which is exactly

Fail-safe message broadcasting to be consumed by a specific recipient using redis and python

耗尽温柔 submitted on 2020-02-25 04:02:50
Question: So redis 5.0 freshly introduced a new feature called Streams. They seem to be perfect for distributing messages for inter-process communication: they surpass PUB/SUB event messaging in terms of reliability, since PUB/SUB is fire-and-forget and there's no guarantee a recipient will receive the message; redis lists are somewhat low-level but could still be used. Streams, however, are optimized for performance and for exactly the use case described above. However, since this feature is quite
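The reliability gain over PUB/SUB comes from the consumer-group protocol: XREADGROUP hands each new entry to exactly one consumer and records it in a pending-entries list until the consumer calls XACK, so a crashed consumer's messages survive and can be claimed by another. A toy in-memory model of that flow (not the real redis-py API, just the semantics) might look like:

```python
class ToyStream:
    """In-memory sketch of Redis Streams consumer-group semantics."""

    def __init__(self):
        self.entries = []    # (id, payload) pairs, append-only like XADD
        self.delivered = 0   # the group's next-undelivered cursor
        self.pending = {}    # id -> payload, like the pending-entries list

    def xadd(self, payload):
        entry_id = len(self.entries)
        self.entries.append((entry_id, payload))
        return entry_id

    def xreadgroup(self, count=1):
        """Deliver up to `count` new entries; they stay pending until acked."""
        batch = self.entries[self.delivered:self.delivered + count]
        self.delivered += len(batch)
        for entry_id, payload in batch:
            self.pending[entry_id] = payload
        return batch

    def xack(self, entry_id):
        """Acknowledge an entry so it is no longer considered pending."""
        return self.pending.pop(entry_id, None) is not None


s = ToyStream()
s.xadd("job-1")
batch = s.xreadgroup()
mid, payload = batch[0]
# Had the consumer crashed here, "job-1" would remain in s.pending and
# could be taken over by another consumer (XCLAIM in real Redis).
acked = s.xack(mid)
```

With real Redis the same steps map to `XADD`, `XREADGROUP GROUP g consumer >`, and `XACK`.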

Can I capture STDOUT write events from a process in perl?

点点圈 submitted on 2020-02-24 10:44:29
Question: I need (would like?) to spawn a slow process from a web app using a Minion queue. The process, a GLPK solver, can run for a long time but generates progress output. I'd like to capture that output as it happens and write it somewhere (a database? a log file?) so that it can be played back to the user as a status update inside the web app. Is that possible? I have no idea (hence no code). I was exploring Capture::Tiny; its simplicity is nice, but I can't tell whether it can track write
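The question is Perl-specific, but the underlying pattern is language-agnostic: read the child's stdout line by line as it is produced and persist each line somewhere the web app can poll. A hedged sketch of that pattern in Python, with a short-lived child standing in for the GLPK solver:

```python
import subprocess
import sys


def run_and_log(cmd, sink):
    """Spawn cmd and forward each stdout line to sink() as soon as it appears."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True, bufsize=1)
    for line in proc.stdout:       # yields lines as the child flushes them
        sink(line.rstrip("\n"))    # e.g. INSERT into a table or append to a log
    proc.stdout.close()
    return proc.wait()


progress = []
# A stand-in for the solver: prints three progress lines, unbuffered (-u).
exit_code = run_and_log(
    [sys.executable, "-u", "-c", "print('10%'); print('50%'); print('100%')"],
    progress.append,
)
```

The key detail, in any language, is that the child must flush its output (or be run unbuffered); otherwise the parent only sees progress in large buffered chunks.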

The underlying technologies of Docker containers

半腔热情 submitted on 2020-02-22 13:43:07
cgroup (resource limiting) cgroup is short for control group. Through cgroups, the Linux operating system can limit a process's use of CPU, memory, and IO resources; options such as --cpu-shares, -m, and --device-write-bps are in fact configuring cgroups. Under the /sys/fs/cgroup/cpu/docker directory, Linux creates a cgroup directory for each container: the directory named after the container's long ID contains all the CPU-related cgroup settings, and the file cpu.shares stores the --cpu-shares configuration. Likewise, /sys/fs/cgroup/memory/docker and /sys/fs/cgroup/blkio/docker hold the memory and block-IO cgroup configuration. namespace (resource isolation) Every container has resources such as a filesystem and network interfaces, and these resources all appear to be the container's own. Taking the network as an example, each container believes it has an independent network card, even if the host has only one physical NIC. This works very well: it makes a container feel more like an independent computer. The Linux technology behind this is the namespace. Namespaces manage resources that are globally unique on the host and can make each container believe that it alone is using them; in other words, namespaces isolate resources between containers. Linux uses six kinds of namespaces, corresponding to six kinds of resources: mount, uts, ipc, pid
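Given the cgroup v1 layout described above, the path to any per-container setting can be composed mechanically. A small sketch (the container ID below is illustrative, and the helper returns None on hosts without that layout):

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"  # cgroup v1 mount point, as assumed above


def docker_cgroup_path(subsystem, container_long_id, setting):
    """Where Docker keeps one cgroup setting for one container (v1 layout)."""
    return os.path.join(CGROUP_ROOT, subsystem, "docker",
                        container_long_id, setting)


def read_setting(subsystem, container_long_id, setting):
    """Return the setting's current value, or None if the path does not exist
    (container gone, or the host uses the unified cgroup v2 hierarchy)."""
    try:
        path = docker_cgroup_path(subsystem, container_long_id, setting)
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None


cid = "f" * 64  # hypothetical long container ID
path = docker_cgroup_path("cpu", cid, "cpu.shares")
```

The same scheme gives, for example, `memory/docker/<id>/memory.limit_in_bytes` for `-m` and `blkio/docker/<id>/...` for `--device-write-bps`.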

Celery: Interact/Communicate with a running task

家住魔仙堡 submitted on 2020-02-22 08:25:45
Question: A related (albeit not identical) question appears here: Interact with celery ongoing task. It's easy to start a task and get its unique ID: async_result = my_task.delay() task_id = async_result.task_id It's easy to broadcast a message that will reach a custom command in the worker: my_celery_app.control.broadcast('custom_command', arguments={'id': task_id}) The problem is that the worker is started as a small process tree formed of one supervisor and a number of children. The
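One common way around the process-tree issue, sketched here outside Celery itself (all names are hypothetical), is a per-process registry that both the running task and the control-command handler can see: the broadcast reaches every worker process, but only the child actually running the task finds its entry and reacts, while the others report a miss:

```python
import threading

# Hypothetical per-process registry, shared by the running task and the
# control-command handler inside the same child process.
_events = {}


def task_started(task_id):
    """Called at the top of the task body; returns the event it should watch."""
    ev = threading.Event()
    _events[task_id] = ev
    return ev


def custom_command(task_id):
    """What a broadcast handler could do when the message reaches this process."""
    ev = _events.get(task_id)
    if ev is None:
        return {"handled": False}  # the task runs in a sibling child, not here
    ev.set()
    return {"handled": True}


ev = task_started("abc123")           # inside the task body
reply = custom_command("abc123")      # triggered by the broadcast
```

A long-running task would then periodically check `ev.is_set()` between work units; note this sketch only covers the prefork pool, where each child is a separate process and only one of them holds the registry entry.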