concurrency

Can you specify a non-static lifetime for threads? [duplicate]

天大地大妈咪最大 submitted on 2021-02-05 09:14:07
Question: This question already has answers here: How can I pass a reference to a stack variable to a thread? (1 answer); Thread references require static lifetime? (1 answer); How do I use static lifetimes with threads? (2 answers); How can I send non-static data to a thread in Rust and is it needed in this example? (1 answer). Closed 1 year ago. Here's a toy example of my problem: use std::sync::{Arc, Mutex}; fn operate_in_chunks(vec: &mut Vec<f32>) { let chunk_size = 10; let mutex_vec: Arc<Mutex<&mut …
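
The excerpt cuts off mid-snippet, but the underlying problem (spawning threads that borrow a stack-local Vec) is usually solved with scoped threads, which guarantee that every spawned thread is joined before the borrow ends. A minimal sketch of that approach, assuming Rust 1.63 or later for std::thread::scope (the duplicates linked above also cover crossbeam's scoped threads); the per-element doubling is placeholder work, not code from the question:

```rust
use std::thread;

fn operate_in_chunks(vec: &mut Vec<f32>) {
    let chunk_size = 10;
    // Scoped threads may borrow from the enclosing stack frame because the
    // scope joins every spawned thread before it returns, so no 'static
    // bound (and no Arc<Mutex<...>>) is needed.
    thread::scope(|s| {
        for chunk in vec.chunks_mut(chunk_size) {
            s.spawn(move || {
                for x in chunk.iter_mut() {
                    *x *= 2.0; // placeholder work on each element
                }
            });
        }
    });
}

fn main() {
    let mut v: Vec<f32> = (0..25).map(|i| i as f32).collect();
    operate_in_chunks(&mut v);
    println!("{:?}", v);
}
```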

Race conditions when adding to, but not reading from, a List<T> (or Stack or Queue) - what happens?

邮差的信 submitted on 2021-02-05 09:13:25
Question: Note: I'm asking this question relative to the 3.5 framework, so I'm not including any of the newer multithreaded constructs in 4.0 (which I'm still learning). I've been trying to settle an argument I've been having, but I haven't found a conclusive description of what would or could happen in the following scenario. Say you have an app with multiple threads that are all generating objects, with each thread generating a unique …
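
List<T> makes no thread-safety guarantees for concurrent writers, so in the scenario described (several threads calling Add, nobody reading yet) the internal array growth and count update can race, silently losing items, throwing, or corrupting the list. A minimal .NET 3.5-era sketch of the usual fix, locking around every Add; the thread count and the values being added are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Program
{
    static readonly List<int> items = new List<int>();
    static readonly object sync = new object();

    static void Main()
    {
        // Hypothetical repro: four threads appending 10,000 items each.
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.Length; t++)
        {
            int id = t; // capture a copy, not the loop variable
            threads[t] = new Thread(() =>
            {
                for (int i = 0; i < 10000; i++)
                {
                    // List<T>.Add is not thread-safe: concurrent calls race on
                    // the internal array resize and element count. The lock
                    // serializes the writers.
                    lock (sync)
                    {
                        items.Add(id * 10000 + i);
                    }
                }
            });
            threads[t].Start();
        }

        foreach (Thread th in threads)
        {
            th.Join();
        }

        Console.WriteLine(items.Count); // 40000 with the lock; unpredictable without it
    }
}
```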

Standard experimental latch and barrier use ptrdiff_t

独自空忆成欢 submitted on 2021-02-05 07:37:51
Question: I was looking at the C++ experimental extensions for concurrency and noticed the new synchronization classes latch, barrier, and flex_barrier. They all implement a standard barrier, either single-use or reusable. The current documentation states the following signatures for their constructors: explicit latch( ptrdiff_t value ); explicit barrier( std::ptrdiff_t num_threads ); explicit flex_barrier( std::ptrdiff_t num_threads ); with the following explanation for the value or num_threads …
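
For context, the same signed-count constructors survived into C++20, where std::latch and std::barrier became standard. A small sketch of the latch in use, assuming a C++20 compiler (it relies on std::latch and std::jthread rather than the experimental classes the excerpt refers to):

```cpp
#include <cstddef>
#include <iostream>
#include <latch>
#include <thread>
#include <vector>

int main() {
    const std::ptrdiff_t num_workers = 4;   // signed count, matching the constructor
    std::latch done{num_workers};

    std::vector<std::jthread> workers;
    for (std::ptrdiff_t i = 0; i < num_workers; ++i) {
        workers.emplace_back([&done] {
            // ... per-thread work would go here ...
            done.count_down();              // each worker decrements the latch once
        });
    }

    done.wait();                            // blocks until the count reaches zero
    std::cout << "all workers finished\n";
}
```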

How to merge multiple observables with order preservation and maximum concurrency?

旧街凉风 submitted on 2021-02-05 06:39:25
Question: I searched for a duplicate and didn't find any. What I have is a nested observable IObservable<IObservable<T>>, and I want to flatten it to an IObservable<T>. I don't want to use the Concat operator, because it delays the subscription to each inner observable until the previous observable completes. This is a problem because the inner observables are cold, and I want them to start emitting T values immediately after they are emitted by the outer observable. I also don't want to use …
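
As a point of reference for the trade-off the question describes, here is a small Rx.NET sketch (the nested stream and its timings are hypothetical) contrasting the two operators the asker rules out: Concat preserves order but subscribes lazily, while Merge(maxConcurrent) subscribes eagerly to a bounded number of inners but does not preserve order. Neither alone satisfies both requirements, which is what makes the question non-trivial.

```csharp
using System;
using System.Reactive.Linq;

class Demo
{
    static void Main()
    {
        // Hypothetical nested stream: three cold inner sequences of three values
        // each, with an artificial 100 ms delay so subscription timing matters.
        IObservable<IObservable<int>> nested = Observable.Range(0, 3)
            .Select(i => Observable.Range(i * 10, 3).Delay(TimeSpan.FromMilliseconds(100)));

        // Concat preserves the outer order, but it subscribes to each inner only
        // after the previous one completes, so cold inners start their work late.
        foreach (int x in nested.Concat().ToEnumerable())
            Console.WriteLine("Concat:   " + x);

        // Merge(2) subscribes to at most two inners at a time, so they start
        // emitting as soon as possible, but their values may interleave and the
        // overall order is no longer guaranteed.
        foreach (int x in nested.Merge(2).ToEnumerable())
            Console.WriteLine("Merge(2): " + x);
    }
}
```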

Do store instructions block subsequent instructions on a cache miss?

ⅰ亾dé卋堺 submitted on 2021-02-05 05:10:24
Question: Let's say we have a processor with two cores (C0 and C1) and a cache line starting at address k that is owned by C0 initially. If C1 issues a store instruction on an 8-byte slot at line k, will that affect the throughput of the following instructions that are being executed on C1? The Intel optimization manual has the following paragraph: When an instruction writes data to a memory location [...], the processor ensures that the line containing this memory location is in its L1d cache [...]

mysqldump concurrency

*爱你&永不变心* submitted on 2021-02-04 19:40:08
Question: If I start mysqldump on a database and then create a new table with new data, will that table be dumped? What's the concurrency behavior here? Answer 1: Well, that is not certain. From the MySQL manual: --single-transaction This option sends a START TRANSACTION SQL statement to the server before dumping data. It is useful only with transactional tables such as InnoDB and BDB, because then it dumps the consistent state of the database at the time when BEGIN was issued, without blocking any applications …
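
For reference, the kind of invocation the answer is describing (the database name mydb is hypothetical; only the --single-transaction option comes from the quoted manual text):

```sh
# Dump the data within a single consistent snapshot (only transactional
# tables such as InnoDB actually benefit from this).
mysqldump --single-transaction mydb > mydb.sql
```

The snapshot only makes the data of transactional tables consistent; the mysqldump documentation also warns against running DDL such as CREATE TABLE or ALTER TABLE while a --single-transaction dump is in progress, which is exactly the situation the question asks about.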

predicate for condition variable

心已入冬 submitted on 2021-02-04 19:10:10
Question: I am new to multithreading. While writing multithreaded code in C++11 using a condition variable, I use the following construct: while(predicate) { cond_var.wait(&lock); } However, I have been reading Deitel's third-edition book on operating systems (chapter 6), where the following construct is used instead: if(predicate) { cond_var.wait(&lock); } So, what's the difference? Why isn't the book using while? Isn't a spurious wakeup an issue? Answer 1: Spurious wakeup is always a potential issue. For example, look …
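
The answer's point about spurious wakeups is exactly why the loop form (or, equivalently, the predicate overload of wait, which loops internally) is the safe pattern. A minimal C++11 sketch; the ready flag and the producer/consumer split are illustrative, not taken from the book or the question:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;          // shared state the predicate tests, protected by m

void consumer() {
    std::unique_lock<std::mutex> lock(m);
    // The loop re-checks the condition after every wakeup, so a spurious wakeup
    // (or a notification "stolen" by another waiter) simply results in waiting again.
    while (!ready) {
        cv.wait(lock);       // note: takes the lock by reference, not by pointer
    }
    // Equivalent shorthand that performs the same loop internally:
    //   cv.wait(lock, [] { return ready; });
    std::cout << "ready\n";
}

void producer() {
    {
        std::lock_guard<std::mutex> guard(m);
        ready = true;
    }
    cv.notify_one();
}

int main() {
    std::thread c(consumer);
    std::thread p(producer);
    c.join();
    p.join();
}
```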
