race-condition

connect-redis - how to protect the session object against race conditions

梦想的初衷 submitted on 2019-12-21 05:26:06
Question: I'm using Node.js with connect-redis to store session data. I save user data in the session and use it throughout the session's lifetime. I've noticed that it's possible to have a race condition between two requests that change the session data. I've tried using redis-lock to lock the session, but it's a bit problematic for me: I don't want to lock the entire session, only a specific session variable. I found that to be impossible, and I thought about a direction to solve it:
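One possible direction, as a minimal sketch (the key layout, field name, and the node-redis v4 client are assumptions, not part of the original post): keep the contested value outside the serialized session object and change it with a single atomic Redis command, so only that field is effectively locked rather than the whole session.

    // Sketch: update one contested value atomically instead of locking the
    // whole session.  The key/field names are hypothetical; sessionId would
    // come from express-session (req.sessionID).
    const { createClient } = require('redis');

    const client = createClient();

    async function bumpCounter(sessionId) {
      // HINCRBY executes atomically inside Redis, so two overlapping requests
      // cannot lose an update the way read-modify-write on req.session can.
      return client.hIncrBy(`sess-data:${sessionId}`, 'counter', 1);
    }

    (async () => {
      await client.connect();
      console.log(await bumpCounter('example-session-id'));
      await client.quit();
    })();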

Atomic UPDATE to increment integer in Postgresql

孤街浪徒 submitted on 2019-12-21 04:56:35
Question: I'm trying to figure out whether the query below is safe to use for the following scenario: I need to generate sequential numbers, without gaps. As I need to track many of them, I have a table holding sequence records, with a sequence integer column. To get the next sequence, I fire off the SQL statement below.

    WITH updated AS (
      UPDATE sequences SET sequence = sequence + ?
      WHERE sequence_id = ?
      RETURNING sequence
    )
    SELECT * FROM updated;

My question is: is this query safe when multiple
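As a sketch of the concurrent behaviour under PostgreSQL's default READ COMMITTED isolation (the row id, increment, and starting value below are made up):

    -- Session A: sequences row 42 currently holds 100
    BEGIN;
    WITH updated AS (
      UPDATE sequences SET sequence = sequence + 1
      WHERE sequence_id = 42
      RETURNING sequence
    )
    SELECT * FROM updated;   -- returns 101 and keeps the row locked

    -- Session B now runs the same statement: its UPDATE blocks on A's row
    -- lock; once A commits, it re-reads the committed value and returns 102.
    -- No update is lost and no gap appears, but callers serialize on this row.
    COMMIT;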

Cache consistency when using memcached and an RDBMS like MySQL

匆匆过客 submitted on 2019-12-21 04:13:17
Question: I have taken a database class this semester and we are studying how to maintain cache consistency between an RDBMS and a cache server such as memcached. Consistency issues arise when there are race conditions. For example: suppose I do a get(key) from the cache and there is a cache miss. Because of the cache miss, I fetch the data from the database and then do a put(key, value) into the cache. But a race condition might happen, where some other user might delete the data I fetched
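The usual building block for racing writers on a single memcached key is the gets/cas (check-and-set) pair; a minimal sketch follows. The pymemcache client and the key handling are assumptions for illustration, and gets/cas alone does not close the particular miss-then-stale-put window described above (that typically needs leases or versioned keys).

    # Sketch of memcached's check-and-set idiom: the write succeeds only if
    # nobody changed the key since we read it.  Client library and names are
    # assumptions, not from the original question.
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def increment_counter(key):
        while True:
            value, token = client.gets(key)
            if value is None:
                # Miss: create the key only if nobody else has (noreply=False
                # so we actually see whether the add was stored).
                if client.add(key, b"1", noreply=False):
                    return 1
                continue                      # lost the race; re-read
            new_value = str(int(value) + 1).encode()
            if client.cas(key, new_value, token, noreply=False):
                return int(new_value)
            # cas failed: the key changed after our gets, so retry.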

How can race conditions be useful?

半世苍凉 submitted on 2019-12-20 12:38:09
Question: One of the answers to the question of what race conditions are mentions low-level algorithms deliberately using race conditions. How can race conditions be beneficial? EDIT: Concurrency and queues are a good example of deliberately not caring about the ordering of things, as long as nothing is lost. Any ideas on how "really hairy low-level algorithms do this on purpose"? Answer 1: Not all races are equally bad. The worst kind of race you can get is reading partial results. This is what Herb
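One classic deliberately-racy pattern, as an illustration in the spirit of java.lang.String's hash caching rather than something taken from this thread: let every thread compute and publish the same idempotent value, so it does not matter who wins. Formally this is still a data race under the Java memory model; it is tolerated because int reads and writes never tear and every racer writes an identical result.

    // Racy-but-idempotent caching: the worst that happens is redundant work.
    final class CachedHash {
        private final byte[] data;
        private int hash;                 // 0 means "not computed yet"

        CachedHash(byte[] data) { this.data = data.clone(); }

        int cachedHashCode() {
            int h = hash;                 // may observe a stale 0
            if (h == 0) {
                for (byte b : data) h = 31 * h + b;
                hash = h;                 // racy write, but always the same value
            }
            // Like String, a computed hash that happens to be 0 is simply
            // recomputed on every call.
            return h;
        }
    }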

nhibernate race condition when loading entity

为君一笑 submitted on 2019-12-20 12:16:41
Question: I have a problem with an NHibernate race condition in my web app. I am aware of this happening with older versions of log4net (it should be fixed in 1.2.10), although I have also experienced it myself. Because of this we have disabled log4net for now, since the race condition crashes IIS and it's unacceptable for that to happen in production. It happened when loading an entity (see the stack trace below). Besides this, a similar problem seems to have occurred in RavenDB, see this link, and an

Race condition when using dup2

人盡茶涼 submitted on 2019-12-20 10:47:09
Question: This manpage for the dup2 system call says: EBUSY (Linux only) This may be returned by dup2() or dup3() during a race condition with open(2) and dup(). What race condition is it talking about, and what should I do if dup2 gives an EBUSY error? Should I retry, as in the case of EINTR? Answer 1: There is an explanation in fs/file.c, do_dup2():

    /*
     * We need to detect attempts to do dup2() over allocated but still
     * not finished descriptor.  NB: OpenBSD avoids that at the price of
     * extra work in their
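If retrying is acceptable for the situation (the man page does not prescribe this; it is just one defensive pattern), the loop can treat EBUSY like the familiar EINTR case:

    /* Sketch: retry dup2() on the transient EBUSY/EINTR errors.
     * Whether a retry is the right response depends on why the target
     * descriptor was busy in the first place. */
    #include <errno.h>
    #include <unistd.h>

    static int dup2_retry(int oldfd, int newfd)
    {
        int rc;
        do {
            rc = dup2(oldfd, newfd);
        } while (rc == -1 && (errno == EBUSY || errno == EINTR));
        return rc;
    }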

Data Races in JavaScript?

别说谁变了你拦得住时间么 submitted on 2019-12-20 04:05:29
Question: Let's assume I run this piece of code.

    var score = 0;
    for (var i = 0; i < arbitrary_length; i++) {
      async_task(i, function() { score++; });  // increment callback function
    }

In theory I understand that this presents a data race, and two threads trying to increment at the same time may result in a single increment; however, Node.js (and JavaScript) are known to be single threaded. Am I guaranteed that the final value of score will be equal to arbitrary_length? Answer 1: Node uses an event loop. You can
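A minimal sketch of that guarantee (the task and length below are stand-ins for the poster's): each callback runs to completion on the single event-loop thread, so the increments never interleave; the only thing left to arrange is knowing when the last callback has fired.

    var arbitrary_length = 10;                        // stand-in value
    function async_task(i, cb) { setTimeout(cb, 0); } // stand-in for the real task

    var score = 0;
    var pending = arbitrary_length;

    for (var i = 0; i < arbitrary_length; i++) {
      async_task(i, function () {
        score++;                 // callbacks run one at a time, so no lost updates
        if (--pending === 0) {
          console.log(score);    // === arbitrary_length once all callbacks are done
        }
      });
    }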

getting argument exception in concurrent dictionary when sorting and displaying as it is being updated

▼魔方 西西 submitted on 2019-12-19 08:55:19
Question: I am getting a hard-to-reproduce error in the following program, in which a number of threads update a concurrent dictionary in parallel while the main thread displays the state of the dictionary in sorted order at fixed time intervals, until all updating threads complete.

    public void Function(IEnumerable<ICharacterReader> characterReaders, IOutputter outputter)
    {
        ConcurrentDictionary<string, int> wordFrequencies = new ConcurrentDictionary<string, int>();
        Thread t = new Thread(() =>
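One common explanation for that exception (not confirmed as this poster's exact root cause) is that LINQ sorting enumerates the live dictionary while its count is changing; sorting a point-in-time snapshot avoids it. A minimal sketch, reusing the wordFrequencies name from the question:

    // Sketch: take a consistent snapshot first, then sort the snapshot.
    // ConcurrentDictionary.ToArray() acquires the internal locks and returns
    // a moment-in-time copy, so OrderBy never sees the collection mutate.
    using System;
    using System.Collections.Generic;
    using System.Collections.Concurrent;
    using System.Linq;

    static class FrequencyDisplay
    {
        public static void PrintSorted(ConcurrentDictionary<string, int> wordFrequencies)
        {
            KeyValuePair<string, int>[] snapshot = wordFrequencies.ToArray();

            foreach (var pair in snapshot.OrderBy(p => p.Key))
                Console.WriteLine($"{pair.Key}: {pair.Value}");
        }
    }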

OpenCL float sum reduction

▼魔方 西西 submitted on 2019-12-19 07:36:27
Question: I would like to apply a reduction to this piece of my kernel code (1-dimensional data):

    __local float sum = 0;
    int i;
    for (i = 0; i < length; i++)
        sum += // some operation depending on i here;

Instead of having just one thread perform this operation, I would like to have n threads (with n = length) and at the end have one thread make the total sum. In pseudo code, I would like to be able to write something like this:

    int i = get_global_id(0);
    __local float sum = 0;
    sum += // some operation
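The usual pattern, as a minimal sketch (it assumes a power-of-two work-group size and lets the host, or a second kernel, add up the per-group partial sums afterwards):

    // Sketch of a work-group tree reduction.  Each group reduces its slice
    // in local memory and writes one partial sum; the host finishes the job.
    __kernel void partial_sum(__global const float *in,
                              __global float *group_sums,
                              __local float *scratch,
                              const int length)
    {
        int gid = get_global_id(0);
        int lid = get_local_id(0);

        // Each work-item loads one element (or 0 past the end).
        scratch[lid] = (gid < length) ? in[gid] : 0.0f;
        barrier(CLK_LOCAL_MEM_FENCE);

        // Halve the number of active work-items each step.
        for (int offset = (int)(get_local_size(0) / 2); offset > 0; offset >>= 1) {
            if (lid < offset)
                scratch[lid] += scratch[lid + offset];
            barrier(CLK_LOCAL_MEM_FENCE);
        }

        // One work-item per group writes the group's partial sum.
        if (lid == 0)
            group_sums[get_group_id(0)] = scratch[0];
    }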
