latency

Low-latency read of UDP port

偶尔善良 submitted on 2019-12-20 10:49:08
Question: I am reading a single data item from a UDP port. It's essential that this read have the lowest possible latency. At present I'm reading via the boost::asio library's async_receive_from method. Does anyone know what kind of latency I will experience between the packet arriving at the network card and the callback being invoked in my user code? Boost is a very good library, but quite generic; is there a lower-latency alternative? All opinions on writing low-latency UDP network programs are […]
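For comparison with the asio callback path, a plain blocking recvfrom() is about the simplest possible user-space read: no event loop, no callback dispatch, just one syscall per datagram. The sketch below is a minimal Python illustration of that pattern (the port is chosen ephemerally for the self-test; kernel-bypass stacks or busy-polling would go lower still, and this is not boost::asio-equivalent code):

```python
import socket

HOST = "127.0.0.1"

def read_one_datagram(sock, bufsize=2048):
    """Block until a single datagram arrives and return (data, sender).

    A blocking recvfrom() wakes the reader directly out of the syscall,
    avoiding the extra scheduling hop of an async callback framework.
    """
    return sock.recvfrom(bufsize)

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind((HOST, 0))                      # ephemeral port for the self-test
    port = rx.getsockname()[1]
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"tick", (HOST, port))        # send ourselves one packet
    data, addr = read_one_datagram(rx)
    print(data)                             # b'tick'
    rx.close(); tx.close()
```

The remaining latency between NIC and this call is dominated by the kernel's interrupt handling and socket-buffer delivery, which no user-space library can remove.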

Why is request_time much larger than upstream_response_time in nginx access.log?

蹲街弑〆低调 submitted on 2019-12-20 10:37:14
Question: I am trying to improve the performance of a web app. Profiling the app itself, I found its response times quite acceptable (100ms-200ms), but when I use ApacheBench to test the app, the response time sometimes exceeds 1 second. Looking closely at the logs, I found an occasional big discrepancy between request_time and upstream_response_time:

"GET /wsq/p/12 HTTP/1.0" 200 114081 "-" "ApacheBench/2.3" 0.940 0.286
"GET /wsq/p/31 HTTP/1.0" 200 114081 "-" "ApacheBench/2.3" 0.200 0.086

The […]
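The two numbers measure different spans: $request_time runs from the first byte nginx reads of the request to the last byte it writes to the client, so it includes time spent feeding a slow or congested client, while $upstream_response_time covers only the exchange with the upstream. A log format like the one apparently used above can be declared as follows (names and paths are illustrative, not taken from the question):

```nginx
log_format timed '$remote_addr "$request" $status $body_bytes_sent '
                 '"$http_user_agent" $request_time $upstream_response_time';
access_log /var/log/nginx/access.log timed;
```

A large gap between the two fields therefore usually points at client-side transfer time or queueing in front of nginx rather than at the app itself.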

Ruby GC execution exceeding ~250-320ms per request

蓝咒 submitted on 2019-12-20 10:36:32
Question: I have a Ruby on Rails application. I am investigating an Apdex decline in my New Relic portal, and I'm seeing that on average 250-320ms per request is being spent on GC execution. This is a highly disturbing number. I've included a screenshot below. My Ruby version is: ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]. Any suggestions for tuning this would be ideal; this number should be substantially lower. Answer 1: You're spending so much time in GC because you're running your GC so often.
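For MRI 1.9.3, GC frequency can be reduced through environment variables read at interpreter start-up. The values below are illustrative assumptions, not recommendations for this particular app; they should be tuned against the app's own heap profile:

```shell
# MRI 1.9.3 GC tunables -- illustrative values only.
export RUBY_HEAP_MIN_SLOTS=1000000    # start with a larger heap so fewer early GC runs are needed
export RUBY_GC_MALLOC_LIMIT=90000000  # raise the malloc threshold (bytes) that triggers a GC run
export RUBY_FREE_MIN=500000           # keep more free slots after GC instead of shrinking the heap
```

Raising these trades memory footprint for fewer, and therefore less frequent, GC pauses.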

preconnect vs dns-prefetch resource hints

我与影子孤独终老i submitted on 2019-12-20 08:49:07
Question: https://www.w3.org/TR/resource-hints/ If I understand correctly, both are used to initiate an early connection so resources load faster later; preconnect just does "more". Apart from better browser support, is there any reason to use dns-prefetch over preconnect? I've also seen websites using both rel values on the same link tag, to use preconnect if possible and fall back to dns-prefetch if not: <head> <link rel="dns-prefetch preconnect" href="https://fonts.gstatic.com" […]
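Written as two separate tags, the difference between the hints is easier to see (fonts.gstatic.com is the host from the question; the comments state the general behavior, not anything specific to that host):

```html
<head>
  <!-- dns-prefetch: resolve the hostname only; very cheap, widely supported -->
  <link rel="dns-prefetch" href="https://fonts.gstatic.com">
  <!-- preconnect: DNS + TCP handshake + TLS; saves more time on first use,
       but speculatively holds a socket open -->
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
</head>
```

Because preconnect ties up a connection the browser may never use, dns-prefetch can still be the better hint for hosts that are only probably needed.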

Determine asymmetric latencies in a network

流过昼夜 submitted on 2019-12-18 03:42:39
Question: Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes, but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this […]

What is the correct definition of interrupt latency in RTOS?

こ雲淡風輕ζ submitted on 2019-12-13 18:04:15
Question: I have read two different definitions of 'interrupt latency' in RTOS contexts. "In computing, interrupt latency is the time that elapses from when an interrupt is generated to when the source of the interrupt is serviced" (source: https://en.wikipedia.org/wiki/Interrupt_latency). "The ability to guarantee a maximum latency between an external interrupt and the start of the interrupt handler." (source: What makes a kernel/OS real-time?) Now, my question is: what is the correct definition of 'interrupt […]

one two-directed tcp socket OR two one-directed? (linux, high volume, low latency)

和自甴很熟 submitted on 2019-12-12 23:08:09
Question: I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between two machines. The network is rather fast (e.g. 1Gbit or even 2G+), and the OS is Linux. Is it faster to use one TCP socket (for both send and recv) or to use two uni-directional TCP sockets? The test for this task is much like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is […]
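The NetPIPE-style measurement described above (ping-pong each power-of-two size, keep the best of several repetitions) can be sketched over a single full-duplex TCP socket as follows. This is an illustrative Python harness, not the asker's benchmark; TCP_NODELAY is set because Nagle's algorithm would otherwise dominate small-message latency:

```python
import socket
import threading
import time

def echo_server(listener):
    """Echo every message back on the same (full-duplex) TCP connection."""
    while True:
        conn, _ = listener.accept()
        with conn:
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                conn.sendall(data)

def measure_rtt(host, port, size, reps=3):
    """Return the best ping-pong round-trip time for a payload of `size` bytes."""
    payload = b"x" * size
    best = float("inf")
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
        for _ in range(reps):
            t0 = time.perf_counter()
            s.sendall(payload)
            got = 0
            while got < size:          # read until the full echo is back
                got += len(s.recv(65536))
            best = min(best, time.perf_counter() - t0)
    return best

if __name__ == "__main__":
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=echo_server, args=(listener,), daemon=True).start()
    for exp in range(1, 14):           # 2^1 .. 2^13 bytes, as in the question
        rtt = measure_rtt("127.0.0.1", port, 2 ** exp)
        print(f"{2 ** exp:5d} B  {rtt * 1e6:8.1f} us")
```

To compare the two designs, the same harness would be run once with this echo pattern and once with two simplex connections (one per direction), keeping everything else identical.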

How to measure network throughput during runtime

谁说胖子不能爱 submitted on 2019-12-12 10:15:54
Question: I'm wondering how best to measure network throughput at runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client. I would like to adjust the compression level used by the server to match the network quality, so I would like to measure the time a big chunk of data (say 500kb) takes to completely reach the client, including all delays in between. Tools like Iperf don't seem to be an option […]
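One way to measure this in-band, without synchronized clocks, is to length-prefix a probe chunk and time it entirely at the receiver: the clock starts at the first data byte and stops at the last, so throughput = bytes / elapsed. The sketch below illustrates the idea in Python (the question's app is Java, and the 500 kB size is taken from the question); note that timing only at the receiver misses the initial one-way propagation delay:

```python
import socket
import struct
import threading
import time

CHUNK = 500 * 1024  # 500 kB probe, as in the question

def recv_exact(conn, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        data = conn.recv(n - len(buf))
        if not data:
            raise ConnectionError("sender closed early")
        buf += data
    return buf

def send_probe(conn):
    """Length-prefix the probe so the receiver knows when it is complete."""
    payload = b"\0" * CHUNK
    conn.sendall(struct.pack("!I", len(payload)) + payload)

def receive_and_measure(conn):
    """Time the probe from its first data byte to its last; return bytes/sec."""
    (length,) = struct.unpack("!I", recv_exact(conn, 4))
    t0 = time.perf_counter()
    got = 0
    while got < length:
        data = conn.recv(65536)
        if not data:
            raise ConnectionError("sender closed early")
        got += len(data)
    elapsed = time.perf_counter() - t0
    return got / elapsed if elapsed > 0 else float("inf")

if __name__ == "__main__":
    a, b = socket.socketpair()             # in-process stand-in for the network
    sender = threading.Thread(target=send_probe, args=(a,))
    sender.start()
    bps = receive_and_measure(b)
    sender.join()
    print(f"throughput: {bps / 1e6:.1f} MB/s")
```

Since the media messages themselves already flow over the socket, the same timing can be piggybacked on real traffic instead of a dedicated probe, which avoids adding load just to measure it.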

Multiplying just one column from each of the 2 input DataFrames together

安稳与你 submitted on 2019-12-12 01:27:55
Question: I have two DataFrames of exactly the same dimensions, and I would like to multiply just one specific column from each of them together. My first DataFrame is:

In [834]: patched_benchmark_df_sim
Out[834]:
     build_number      name  cycles
0             390     adpcm   21598
1             390       aes    5441
2             390  blowfish     NaN
3             390     dfadd     463
....
284           413      jpeg  766742
285           413      mips    4263
286           413     mpeg2    2021
287           413       sha  348417

[288 rows x 3 columns]

My second DataFrame is:

In [835]: patched_benchmark_df_syn
Out[835]: build_number […]
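Since the two frames have identical shape and (per the excerpt) identical row order, the element-wise product of the two columns is just Series multiplication, which pandas aligns by index. A minimal sketch with tiny stand-in frames (the values below are invented; only the column names come from the question):

```python
import pandas as pd

# Tiny stand-ins for patched_benchmark_df_sim / patched_benchmark_df_syn;
# the real frames are 288 rows with the same structure.
sim = pd.DataFrame({"build_number": [390, 390],
                    "name": ["adpcm", "aes"],
                    "cycles": [21598.0, 5441.0]})
syn = pd.DataFrame({"build_number": [390, 390],
                    "name": ["adpcm", "aes"],
                    "cycles": [2.0, 3.0]})

# Element-wise product of just the two `cycles` columns; alignment is by
# index, so identical row order/index is assumed here.
product = sim["cycles"] * syn["cycles"]
print(product.tolist())  # [43196.0, 16323.0]
```

If the rows might be ordered differently, merging on the key columns (build_number, name) first and multiplying the two resulting cycles columns would be the safer variant; NaN rows propagate NaN through the product either way.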

Single big cache-able CSS vs page-specific small CSS snippets

陌路散爱 submitted on 2019-12-11 05:57:52
Question: Right now my CSS files look like this: Then I have a PHP snippet that looks like this:

<?php $Compress->folder(Configuration::get('Public').'/style/', '.css', Configuration::get('Public').'/style.css'); ?>

This minifies all the CSS files stored in the directory public_html/style/ (shown in the picture) and creates a file called style.css in the directory /public_html/. It is run only when needed (though in development, always). Then I just include the big file: <link rel="stylesheet" href= […]