low-latency

IoT request response protocol

Submitted by 别来无恙 on 2019-12-02 20:58:58
We need to build a server that can communicate with some embedded devices running a variant of Android. We need to be able to send commands to the device, and receive a response. A simple command might be asking the device for its status. We won't have HTTP, so we need to have the client/device establish a connection with the server. We were considering using MQTT as it has a lot of nice properties (QoS, lightweight, built for IoT), but it doesn't natively support a request/response workflow. We have considered building RPC on top of MQTT, but before we do I just wanted people's thoughts on…
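A minimal sketch of request/response on top of MQTT 3.1.1, assuming the Eclipse Paho Java client; the topic names (devices/{id}/requests, devices/{id}/responses) and the correlation-ID-in-payload convention are illustrative, not part of the question. (MQTT 5 later added native request/response via response-topic and correlation-data properties, which removes the need for a hand-rolled scheme like this.)

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Server side: sends a command to a device and waits for the matching response. */
public class MqttRequester {
    private final MqttClient client;
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    public MqttRequester(String brokerUri) throws MqttException {
        client = new MqttClient(brokerUri, "server-requester");
        client.connect();
        // All device responses arrive on one topic filter; the correlation ID
        // prefixed to the payload ties a response back to its request.
        client.subscribe("devices/+/responses", 1, (topic, msg) -> {
            String body = new String(msg.getPayload(), StandardCharsets.UTF_8);
            int sep = body.indexOf('|');                       // payload format: "<correlationId>|<result>"
            CompletableFuture<String> f = pending.remove(body.substring(0, sep));
            if (f != null) f.complete(body.substring(sep + 1));
        });
    }

    public CompletableFuture<String> request(String deviceId, String command) throws MqttException {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        MqttMessage msg = new MqttMessage((correlationId + "|" + command).getBytes(StandardCharsets.UTF_8));
        msg.setQos(1);                                         // at-least-once delivery
        client.publish("devices/" + deviceId + "/requests", msg);
        return future;                                         // completes when the device replies
    }
}
```

The device side would subscribe to its own devices/{id}/requests topic, execute the command, and publish the result with the same correlation ID to devices/{id}/responses.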

Why is hibernate batching / order_inserts / order_updates disabled by default?

Submitted by 烈酒焚心 on 2019-12-02 20:48:46
Are there any reasons why Hibernate batching / hibernate.order_updates / hibernate.order_inserts are disabled by default? Is there any disadvantage when you enable a batch size of 50? Same for the order_updates / order_inserts parameters. Is there a use case where you shouldn't enable these features? Are there any performance impacts when using these features? I can only see that these settings help a lot when I need to reduce my query count, which is necessary especially in a cloud environment with high latencies between my application and database server. Generally, setting batch size to…
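For reference, these are plain Hibernate configuration properties; a minimal sketch of enabling them programmatically (the batch size of 50 is just the figure from the question, and hibernate.jdbc.batch_versioned_data is an extra, commonly paired setting rather than something the question mentions):

```java
import java.util.Properties;
import org.hibernate.cfg.Configuration;

public class BatchingConfig {
    public static Configuration withBatching() {
        Properties props = new Properties();
        // Group up to 50 inserts/updates into a single JDBC batch round trip.
        props.setProperty("hibernate.jdbc.batch_size", "50");
        // Order statements by entity type so consecutive statements can share a batch.
        props.setProperty("hibernate.order_inserts", "true");
        props.setProperty("hibernate.order_updates", "true");
        // Allow batching for versioned (optimistically locked) entities as well.
        props.setProperty("hibernate.jdbc.batch_versioned_data", "true");
        return new Configuration().addProperties(props);
    }
}
```

One known caveat: insert batching is silently disabled for entities using the IDENTITY id generator, because Hibernate must execute each insert immediately to obtain the generated key.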

Low latency programming

Submitted by 吃可爱长大的小学妹 on 2019-12-02 13:49:54
I've been reading a lot about low latency financial systems (especially since the famous case of corporate espionage) and the idea of low latency systems has been on my mind ever since. There are a million applications that could use what these guys are doing, so I would like to learn more about the topic. The thing is, I cannot find anything valuable on the topic. Can anybody recommend books, sites, or examples of low latency systems? I work for a financial company that produces low latency software for communication directly with exchanges (for submitting trades and streaming prices). We…

Lowest Latency small size data Internet transfer protocol? c#

Submitted by 只愿长相守 on 2019-12-02 05:47:10
I am doing an Internet gaming project which involves repeatedly sending small amounts of data (between 1K and 50K) over the Internet between two normal home PCs. The key thing I care about is latency. I understand TCP and UDP are the popular options. TCP is reliable but slower than UDP, while UDP is not safe and I have to implement my own fault-handling code. I am just wondering whether there are any other protocols I can use to send/receive small data between two normal home PCs. By normal home PCs,…
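The question targets C#, but the UDP option it describes looks the same in any language; below is a minimal sketch in Java (C#'s UdpClient follows the same send/receive pattern), with made-up host and port values:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpEcho {
    // Receiver: bind a port and read datagrams as they arrive.
    public static void receive(int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(port)) {
            byte[] buf = new byte[64 * 1024];              // a single datagram can carry up to ~64 KB
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                    // blocks until a datagram arrives
                String msg = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                System.out.println("got " + packet.getLength() + " bytes: " + msg);
            }
        }
    }

    // Sender: fire one datagram; no handshake, no retransmission, no ordering.
    public static void send(String host, int port, String payload) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = payload.getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(data, data.length, InetAddress.getByName(host), port));
        }
    }
}
```

Note that a 50K datagram is fragmented across typical 1500-byte MTU links, so losing any one fragment drops the whole message; payloads that large usually argue for either TCP or a reliable-UDP layer on top.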

Java Netty load testing issues

Submitted by 心不动则不痛 on 2019-12-01 22:30:56
I wrote a server that accepts connections and bombards them with messages (~100 bytes) using a text protocol, and my implementation is able to send about 400K msg/sec over loopback with a 3rd-party client. I picked Netty for this task, SUSE 11 RealTime, JRockit RTS. But when I started developing my own client based on Netty I faced a drastic throughput reduction (down from 400K to 1.3K msg/sec). The code of the client is pretty straightforward. Could you please give advice or show examples of how to write a much more effective client? Actually, I care more about latency, but I started with throughput…
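The original client code isn't shown in the excerpt, so the sketch below is not a reconstruction of it; it is a minimal Netty 4-style text client illustrating the two settings that most often explain this kind of gap: TCP_NODELAY (Nagle's algorithm off) and batching many write() calls per flush() instead of flushing every message. Host, port, and frame size are assumptions.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class TextClient {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.TCP_NODELAY, true)         // disable Nagle: don't delay small writes
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(
                            new LineBasedFrameDecoder(1024),     // split inbound bytes on '\n'
                            new StringDecoder(),
                            new StringEncoder(),
                            new SimpleChannelInboundHandler<String>() {
                                @Override protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                    // handle ~100-byte text messages here
                                }
                            });
                    }
                });
            Channel ch = b.connect("localhost", 9000).sync().channel();
            // Batch writes, flushing once per burst instead of once per message.
            for (int i = 0; i < 1000; i++) {
                ch.write("msg-" + i + "\n");
            }
            ch.flush();
            ch.closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```

Pipelining writes like this trades a little per-message latency for throughput; for a pure latency test you would writeAndFlush each message and measure round trips instead.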

How can I prefetch infrequently used code?

Submitted by 拥有回忆 on 2019-11-30 23:12:45
I want to prefetch some code into the instruction cache. The code path is used infrequently but I need it to be in the instruction cache, or at least in L2, for the rare cases that it is used. I have some advance notice of these rare cases. Does _mm_prefetch work for code? Is there a way to get this infrequently used code into cache? For this problem I don't care about portability, so even asm would do. The answer depends on your CPU architecture. That said, if you are using gcc or clang, you can use the __builtin_prefetch built-in to try to generate a prefetch instruction. On Pentium 3 and…

How fast is state of the art HFT trading systems today?

Submitted by 爱⌒轻易说出口 on 2019-11-29 19:40:34
All the time you hear about high frequency trading (HFT) and how damn fast the algorithms are. But I'm wondering - what is fast these days? Update: I'm not thinking about the latency caused by the physical distance between an exchange and the server running a trading application, but the latency introduced by the program itself. To be more specific: what is the time from events arriving on the wire at an application until that application puts an order/price on the wire? I.e. the tick-to-trade time. Are we talking sub-millisecond? Or sub-microsecond? How do people achieve these latencies? Coding in…

Node.js, Socket.io, Redis pub/sub high volume, low latency difficulties

Submitted by 本秂侑毒 on 2019-11-29 18:35:00
When conjoining socket.io/node.js and redis pub/sub in an attempt to create a real-time web broadcast system driven by server events that can handle multiple transports, there seem to be three approaches: 'createClient' a redis connection and subscribe to channel(s). On socket.io client connection, join the client into a socket.io room. In the redis.on("message", ...) event, call io.sockets.in(room).emit("event", data) to distribute to all clients in the relevant room. Like How to reuse redis connection in socket.io? 'createClient' a redis connection. On socket.io client connection, join the…

Low latency serial communication on Linux

Submitted by 人走茶凉 on 2019-11-28 17:58:24
I'm implementing a protocol over serial ports on Linux. The protocol is based on a request-answer scheme, so the throughput is limited by the time it takes to send a packet to a device and get an answer. The devices are mostly ARM based and run Linux >= 3.0. I'm having trouble reducing the round trip time below 10ms (115200 baud, 8 data bits, no parity, 7 bytes per message). Which IO interfaces will give me the lowest latency: select, poll, epoll, or polling by hand with ioctl? Does blocking or non-blocking IO impact latency? I tried setting the low_latency flag with setserial. But it seemed like…

Why does the JVM require warmup?

Submitted by 守給你的承諾、 on 2019-11-28 03:56:14
I understand that in the Java virtual machine (JVM), warmup is potentially required, as Java loads classes using a lazy loading process and as such you want to ensure that the objects are initialized before you start the main transactions. I am a C++ developer and have not had to deal with similar requirements. However, the parts I am not able to understand are the following: Which parts of the code should you warm up? Even if I warm up some parts of the code, how long does it remain warm (assuming this term only means how long your class objects remain in memory)? How does it help if I have…
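Warmup in practice is less about class loading than about the JIT: HotSpot interprets a method first and only compiles it to optimized machine code after it has been invoked on the order of thousands of times, so the first requests through a cold path run slowly. A minimal sketch of a hand-rolled warmup loop (the hotPath method and the iteration count are made up for illustration):

```java
public class WarmupExample {
    // The latency-critical code path; in a real system this would be the
    // message-handling / order-submission logic.
    static long hotPath(long x) {
        return (x * 31) ^ (x >>> 7);
    }

    public static void main(String[] args) {
        // Warm up: invoke the hot path enough times that the JIT compiles it
        // (and the classes/branches it touches are loaded and profiled)
        // before real traffic arrives. The iteration count is illustrative.
        long sink = 0;
        for (int i = 0; i < 100_000; i++) {
            sink = hotPath(sink + i);
        }
        // Use the result so the JIT cannot dead-code-eliminate the loop.
        System.out.println("warmup checksum: " + sink);

        long start = System.nanoTime();
        long result = hotPath(42);
        System.out.printf("first 'real' call took %d ns (result=%d)%n", System.nanoTime() - start, result);
    }
}
```

Whether the intended methods actually got compiled can be checked by running with -XX:+PrintCompilation.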