mutex

Should I dispose a Mutex?

微笑、不失礼 submitted on 2019-12-17 19:33:47
Question: I'm working on two Windows Services that share a common database, which I want to lock (cross-process) with a system Mutex. Now I'm wondering whether it's OK to just call WaitOne() and ReleaseMutex() in a try-finally block, or whether I should also dispose the Mutex (e.g. in a using block). If so, I guess I should always catch the AbandonedMutexException on the WaitOne() call, or am I wrong here?

Answer 1: A mutex is a Windows kernel object (here wrapped in a .NET object). As such, it is an unmanaged resource
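
The answer's point is that the .NET Mutex wraps a Windows kernel handle, which is the unmanaged resource Dispose() releases. As a rough C++ sketch of the raw Win32 calls behind that wrapper (the mutex name here is made up for the example):

    #include <windows.h>

    int main()
    {
        // The kernel handle is the unmanaged resource the .NET wrapper owns;
        // CloseHandle() below is what Dispose() ultimately boils down to.
        HANDLE h = CreateMutexW(NULL, FALSE, L"Global\\MyServicePairMutex"); // illustrative name
        if (h == NULL) return 1;

        DWORD wait = WaitForSingleObject(h, INFINITE);
        if (wait == WAIT_OBJECT_0 || wait == WAIT_ABANDONED) {
            // WAIT_ABANDONED is the Win32 counterpart of AbandonedMutexException:
            // the previous owner exited without releasing, but we now own the mutex.
            // ... touch the shared database here ...
            ReleaseMutex(h);   // mirrors ReleaseMutex() in the finally block
        }

        CloseHandle(h);        // mirrors Dispose(): returns the kernel handle to the OS
        return 0;
    }

In .NET terms, ReleaseMutex() belongs in the finally block of each acquisition, while disposing the Mutex (the using block) corresponds to closing the handle once the service no longer needs the object at all.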

one instance of app per Computer, how?

喜你入骨 submitted on 2019-12-17 19:02:00
Question: I am trying to make my app run only once on a computer. My app needs to communicate with a web service, so it is bad to let it run more than once. Currently I'm using a Mutex like this:

    MyMsg := RegisterWindowMessage('My_Unique_App_Message_Name');
    Mutex := CreateMutex(nil, True, 'My_Unique_Application_Mutex_Name');
    if (Mutex = 0) OR (GetLastError = ERROR_ALREADY_EXISTS) then exit;

Currently this works to limit the application to one instance per user, but my app is being used on a Windows Server
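
The excerpt is cut off before any answer, but a common explanation for a per-user or per-session limit on a terminal server is that an unqualified mutex name lives in the session-local namespace. A hedged C++ sketch of creating the same named mutex in the machine-wide Global\ namespace instead (the name is reused from the question, with the prefix added for illustration):

    #include <windows.h>

    int main()
    {
        // "Global\" places the named mutex in the machine-wide namespace, so every
        // logon / RDP session sees the same object rather than a per-session copy.
        HANDLE h = CreateMutexW(NULL, TRUE, L"Global\\My_Unique_Application_Mutex_Name");
        if (h == NULL || GetLastError() == ERROR_ALREADY_EXISTS) {
            // Another instance, in any session on the machine, already owns the name.
            if (h) CloseHandle(h);
            return 1;
        }
        // ... run the application ...
        ReleaseMutex(h);
        CloseHandle(h);
        return 0;
    }

Note that creating objects in the Global\ namespace can require additional privileges in some restricted contexts.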

When do you embed mutex in struct in Go?

孤街浪徒 submitted on 2019-12-17 18:28:13
Question: NOTE: I realize the word 'embed' in the title was a bad choice, but I will keep it. I see a lot of code like this:

    type A struct {
        mu sync.Mutex
        ...
    }

used like this:

    a := &A{}
    a.mu.Lock()
    defer a.mu.Unlock()
    a.Something()

Is that better than a local or global mutex?

    a := &A{}
    var mu sync.Mutex
    mu.Lock()
    defer mu.Unlock()
    a.Something()

When should I use the former, and when the latter?

Answer 1: It's good practice to keep the mutex close to the data it is destined to protect. If a mutex ought to protect
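
The answer's advice, keeping the mutex right next to the data it guards, is not specific to Go. As a rough C++ analogue of the same pattern (the type and its fields are invented purely for illustration):

    #include <mutex>
    #include <vector>

    // The mutex lives inside the type, beside the data it protects, so every
    // instance carries its own lock and callers never have to guess which
    // mutex guards which piece of state.
    class Counter {
    public:
        void Add(int n) {
            std::lock_guard<std::mutex> lock(mu_);
            values_.push_back(n);
        }
    private:
        std::mutex mu_;               // guards values_
        std::vector<int> values_;
    };

    int main() {
        Counter c;
        c.Add(42);   // each Counter instance is independently lockable
    }

A package-level or global mutex, by contrast, couples unrelated instances to a single lock and obscures which data it actually protects.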

CUDA, mutex and atomicCAS()

偶尔善良 submitted on 2019-12-17 18:27:26
Question: I recently started developing on CUDA and ran into a problem with atomicCAS(). To do some manipulations with memory in device code I have to create a mutex, so that only one thread can work with memory in a critical section of the code. The device code below runs on one block and several threads.

    __global__ void cudaKernelGenerateRandomGraph(..., int* mutex)
    {
        int i = threadIdx.x;
        ...
        do
        {
            atomicCAS(mutex, 0, 1 + i);
        } while (*mutex != i + 1);
        // critical section
        // do some manipulations with
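
The excerpt stops before any answer, but the loop above is a compare-and-swap spin lock. For reference, here is the same handshake written with std::atomic on the host side (a sketch only, not the CUDA answer); on the GPU the extra complication usually cited is that all threads of a warp execute the spin loop in lockstep, which is what tends to break this kind of per-thread mutex.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    std::atomic<int> lock_word{0};   // 0 = unlocked, nonzero = locked by (id + 1)
    int counter = 0;                 // data protected by the hand-rolled lock

    void worker(int id)
    {
        int expected = 0;
        // Spin until we atomically swing the word from 0 to id + 1, the same
        // handshake as atomicCAS(mutex, 0, 1 + i) in the kernel above.
        while (!lock_word.compare_exchange_weak(expected, id + 1)) {
            expected = 0;            // compare_exchange overwrites 'expected' on failure
        }
        ++counter;                   // critical section
        lock_word.store(0);          // unlock
    }

    int main()
    {
        std::vector<std::thread> threads;
        for (int i = 0; i < 8; ++i) threads.emplace_back(worker, i);
        for (auto &t : threads) t.join();
        std::printf("counter = %d\n", counter);
    }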

How to avoid race condition when using a lock-file to avoid two instances of a script running simultaneously?

北城余情 submitted on 2019-12-17 17:38:43
Question: A typical approach to avoid two instances of the same script running simultaneously looks like this:

    [ -f ".lock" ] && exit 1
    touch .lock
    # do something
    rm .lock

Is there a better way to lock on files from a shell script, avoiding a race condition? Must directories be used instead?

Answer 1: Yes, there is indeed a race condition in the sample script. You can use bash's noclobber option in order to get a failure in case of a race, when a different script sneaks in between the -f test and the touch
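
The fix the answer starts to describe relies on the creation step being atomic, which is what noclobber gives the shell. The same idea expressed in C++ with POSIX open(2) and O_CREAT | O_EXCL, as a sketch:

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        // O_CREAT | O_EXCL makes creation atomic: exactly one process can succeed,
        // closing the window between "does the file exist?" and "create it".
        int fd = open(".lock", O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd == -1) {
            std::fprintf(stderr, "another instance holds the lock\n");
            return 1;
        }
        // ... do the work ...
        close(fd);
        unlink(".lock");   // release the lock
        return 0;
    }

mkdir is the traditional pure-shell alternative, since directory creation is likewise atomic, which is presumably why the question asks about directories.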

Multiple-readers, single-writer locks in Boost

心已入冬 submitted on 2019-12-17 16:11:00
Question: I'm trying to implement the following pattern in a multithreading scenario:

    Get shared access to the mutex
    Read the data structure
    If necessary:
        Get exclusive access to the mutex
        Update the data structure
        Release the exclusive lock
    Release the shared lock

Boost.Thread has a shared_mutex class which was designed for a multiple-readers, single-writer model. There are several Stack Overflow questions regarding this class. However, I'm not sure it fits the scenario above, where any reader may become a writer. The
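
One way this read-then-maybe-write pattern can be expressed with Boost is upgrade ownership, which coexists with shared readers but can be promoted to exclusive access. A sketch under that assumption (the protected container is invented for illustration, and this is not necessarily the thread's accepted answer):

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>
    #include <map>
    #include <string>

    boost::shared_mutex mtx;
    std::map<std::string, int> table;   // the shared data structure (illustrative)

    void readAndMaybeUpdate(const std::string& key)
    {
        // Upgrade ownership coexists with plain shared_locks, but only one thread
        // may hold it at a time, so promotion to exclusive cannot deadlock
        // against another would-be upgrader.
        boost::upgrade_lock<boost::shared_mutex> readLock(mtx);

        if (table.find(key) == table.end()) {       // read under shared access
            // Promote to exclusive access only when an update is actually needed.
            boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(readLock);
            table[key] = 0;                         // update the data structure
        }   // writeLock released here, readLock at end of scope
    }

The trade-off is that would-be writers serialize with each other even while they are still only reading, because upgrade ownership is exclusive among upgraders.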

C++ Thread, shared data

时光怂恿深爱的人放手 submitted on 2019-12-17 15:32:49
Question: I have an application where two threads are running... Is there any certainty that when I change a global variable from one thread, the other will notice the change? I don't have any synchronization or mutual-exclusion mechanism in place... but should this code work all the time (imagine a global bool named dataUpdated)?

    Thread 1:
    while(1) {
        if (dataUpdated)
            updateScreen();
        doSomethingElse();
    }

    Thread 2:
    while(1) {
        if (doSomething())
            dataUpdated = TRUE;
    }

Does a compiler like gcc optimize this
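
Without any synchronization this is a data race: the compiler may keep dataUpdated in a register and the hardware may reorder or delay the store, so there is no guarantee the other thread ever observes the change. A minimal C++11 sketch that makes the flag well-defined with std::atomic:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> dataUpdated{false};   // visible and ordered across threads

    void thread1()
    {
        for (int i = 0; i < 5; ++i) {
            if (dataUpdated.exchange(false))    // consume the flag if it was set
                std::puts("updateScreen()");
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

    void thread2()
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(150));
        dataUpdated.store(true);                // publish the update
    }

    int main()
    {
        std::thread t1(thread1), t2(thread2);
        t1.join();
        t2.join();
    }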

C++11: why does std::condition_variable use std::unique_lock?

£可爱£侵袭症+ submitted on 2019-12-17 15:13:27
Question: I am a bit confused about the role of std::unique_lock when working with std::condition_variable. As far as I understood the documentation, std::unique_lock is basically a bloated lock guard, with the possibility to swap the state between two locks. So far I've used pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) for this purpose (I guess that's what the STL uses on POSIX). It takes a mutex, not a lock. What's the difference here? Is the fact that std::condition_variable
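
The usual short answer: condition_variable::wait() must unlock the mutex while the thread sleeps and re-lock it before returning, so it needs a lock object it is allowed to unlock and re-acquire, which std::unique_lock permits and std::lock_guard does not; pthread_cond_wait achieves the same by taking the raw mutex and doing the unlock/relock itself. A minimal sketch:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;

    void consumer()
    {
        // wait() unlocks 'lock' while sleeping and re-locks it on wake-up,
        // which is exactly the flexibility unique_lock has over lock_guard.
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return ready; });
        std::puts("ready");
    }

    void producer()
    {
        {
            std::lock_guard<std::mutex> lock(m);
            ready = true;
        }
        cv.notify_one();
    }

    int main()
    {
        std::thread c(consumer), p(producer);
        c.join();
        p.join();
    }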

Pthread Mutex lock unlock by different threads

自作多情 submitted on 2019-12-17 10:47:22
Question: A naive question... I have read the statement "a mutex has to be unlocked only by the thread that locked it." But I have written a program where THREAD1 locks mutexVar and goes to sleep, and THREAD2 can then directly unlock mutexVar, do some operations, and return. I know everyone will ask why I am doing this, but my question is: is this the correct behaviour of a mutex?

Adding the sample code:

    void *functionC()
    {
        pthread_mutex_lock( &mutex1 );
        counter++;
        sleep(10);
        printf("Thread01: Counter value
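
For the default mutex type, POSIX leaves an unlock by a thread that does not own the mutex undefined, which is why such a program can appear to work. A small sketch (pthreads, buildable as C++) using an error-checking mutex, which turns the foreign unlock into a detectable EPERM instead:

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t mutex1;

    void *locker(void *arg)
    {
        pthread_mutex_lock(&mutex1);   // this thread becomes the owner
        return NULL;                   // ... and exits while still holding the lock
    }

    int main(void)
    {
        // PTHREAD_MUTEX_ERRORCHECK makes ownership violations reportable
        // instead of undefined behaviour.
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&mutex1, &attr);

        pthread_t t;
        pthread_create(&t, NULL, locker, NULL);
        pthread_join(t, NULL);

        // Unlocking from a thread that does not own the mutex now fails with EPERM
        // rather than silently "working" as it may with the default mutex type.
        int rc = pthread_mutex_unlock(&mutex1);
        printf("unlock by non-owner: %s\n", rc == EPERM ? "EPERM" : "succeeded");
        return 0;
    }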

How efficient is locking an unlocked mutex? What is the cost of a mutex?

不羁岁月 submitted on 2019-12-17 06:59:09
Question: In a low-level language (C, C++ or whatever): I have the choice between having a bunch of mutexes (like what pthread gives me, or whatever the native system library provides) or a single one for an object. How efficient is it to lock a mutex? That is, how many assembler instructions are likely involved and how much time do they take (in the case that the mutex is unlocked)? How much does a mutex cost? Is it a problem to have a really large number of mutexes? Or can I just throw in as many mutex
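
The honest answer is platform-dependent, so it is worth measuring: on the uncontended fast path most implementations stay in user space and spend roughly one atomic read-modify-write per lock and per unlock, with no system call. A rough C++ micro-benchmark sketch (the numbers it prints vary by machine and are not definitive):

    #include <chrono>
    #include <cstdio>
    #include <mutex>

    int main()
    {
        std::mutex m;
        constexpr int kIters = 10'000'000;

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kIters; ++i) {
            m.lock();     // uncontended: typically a single atomic operation
            m.unlock();   // fast path stays in user space, no kernel transition
        }
        auto stop = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(stop - start).count() / kIters;
        std::printf("uncontended lock+unlock: about %.1f ns per pair\n", ns);
        return 0;
    }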