atomic

How to correctly wake up a process inside an interrupt handler

雨燕双飞 submitted on 2021-02-10 07:11:41
Question: Briefly, in a read method I check whether a variable is 0, and if it is, I put the current process to sleep:

    static ssize_t soc2e_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *ppos)
    {
        ...
        struct soc2e_dev *soc2e = (struct soc2e_dev *)filp->private_data;

        if (soc2e->bytes == 0) {
            if (wait_event_interruptible(soc2e->wlist, (soc2e->bytes > 0)))
                return -ERESTARTSYS;
        }
        ...
    }

I must wake up the process in an interrupt handler:

    static irqreturn_t soc2e_irq_handler(int irq, void *dev)
    {
        ...

[JDK8] Performance optimization: replacing AtomicLong with LongAdder

心不动则不痛 submitted on 2021-02-09 05:26:38
If you were asked to implement a counter, anyone with a bit of experience would quickly think of a simple wrapper around AtomicInteger or AtomicLong. Counter operations involve memory visibility and contention between threads, and the Atomic* implementations hide those technical details completely: we only need to call the corresponding methods to meet the business requirement. But convenient as the Atomic* classes are, their performance problems get amplified under heavy concurrency. Let's first look at the implementation of getAndIncrement:

    public final long getAndIncrement() {
        return unsafe.getAndAddLong(this, valueOffset, 1L);
    }

    // implementation in the Unsafe class
    public final long getAndAddLong(Object var1, long var2, long var4) {
        long var6;
        do {
            var6 = this.getLongVolatile(var1, var2);
        } while (!this.compareAndSwapLong(var1, var2, var6, var6 + var4));
        return var6;
    }

Clearly, for getAndAddLong to perform the addition correctly, the CPU ends up spending a large amount of time retrying failed CAS attempts when concurrency is high.
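LongAdder sidesteps this by splitting the counter into several cells that different threads update independently, only summing them when the value is read. As a rough illustration of the same idea, here is a minimal C++ sketch of a striped counter (an analogy, not the JDK implementation; the stripe count and thread-id hashing are arbitrary illustrative choices):

    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <thread>

    class StripedCounter {
        static constexpr int kStripes = 16;
        // Each cell sits on its own cache line to avoid false sharing.
        struct alignas(64) Cell { std::atomic<std::int64_t> value{0}; };
        Cell cells_[kStripes];

    public:
        void increment() {
            // Hash the thread id onto a stripe so threads usually touch different cells.
            std::size_t idx =
                std::hash<std::thread::id>{}(std::this_thread::get_id()) % kStripes;
            cells_[idx].value.fetch_add(1, std::memory_order_relaxed);
        }

        std::int64_t sum() const {
            std::int64_t total = 0;
            for (const Cell& c : cells_) total += c.value.load(std::memory_order_relaxed);
            return total;
        }
    };

Updates mostly hit different cache lines, so CAS retries become rare; the price is that sum() is only an approximate, moment-in-time total under concurrent updates, which matches LongAdder's documented behaviour.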

C++11 multithreading tutorial for beginners (Part 1)

二次信任 submitted on 2021-02-09 03:39:50
Original author: aircraft. Original link: https://www.cnblogs.com/DOMLX/p/10945309.html

This introductory network-programming series is published in installments; if you are interested, see the other posts on my blog (an extremely detailed beginner's tutorial for a C++ network programming course project -- table of contents). I've recently been looking for a C++ server-side development internship (any recommendations? QAQ...), and I happened to write up some C++11 multithreading material, so I'm jotting down these notes for my future self in case I forget. They're not written specifically for readers, so I'll just write things down as they come to mind.

C++11 is simply an upgraded version of C++ (C++20 is about to come out), and it adds many classes that support multithreading, making multithreaded programming much simpler. Enough chatter; let's build a simple multithreading example and see just how easy it is to create threads in C++11.

1. Creating a simple multithreaded example:
First include #include <thread>, used to create threads.
Then include #include <chrono>, used for delays and getting the time.
Defining a thread object t1 automatically creates a thread; the constructor argument is the function you want the thread to execute, and t1 is just an arbitrary variable name: std::thread t1(func);
The following yields a millisecond-level duration value, here an interval of 10 ms: std::chrono::milliseconds(10)
this_thread::sleep_for() puts the current thread to sleep
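Putting those pieces together, a minimal, self-contained version of such an example might look like the sketch below (written to match the steps described above, not the original post's exact code; the function name func and the loop are illustrative):

    #include <thread>    // std::thread
    #include <chrono>    // std::chrono::milliseconds
    #include <iostream>

    void func() {
        for (int i = 0; i < 5; ++i) {
            std::cout << "worker iteration " << i << "\n";
            // Sleep the current (worker) thread for 10 milliseconds.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }

    int main() {
        std::thread t1(func);   // creating the object starts the thread
        t1.join();              // wait for it to finish before main exits
        return 0;
    }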

C++11 atomics: does it make sense, or is it even possible, to use them with memory mapped I/O?

坚强是说给别人听的谎言 submitted on 2021-02-08 23:32:51
Question: As I understand it, C volatile and optionally inline asm for memory fences have been used for implementing a device driver on top of memory-mapped I/O. Several examples can be found in the Linux kernel. If we forget about the risk of uncaught exceptions (if any), does it make sense to replace them with C++11 atomics? Or is it possible at all?

Answer 1: In general, you can replace memory fences with atomics, but not volatile, except where it is used together with a fence exclusively for inter-thread
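For context, a device-register access of the kind the question describes typically combines volatile (for the MMIO access itself) with an explicit fence (for ordering). A hedged sketch, where the register address and the choice of an acquire fence are purely illustrative assumptions:

    #include <atomic>
    #include <cstdint>

    // Hypothetical memory-mapped status register; the address is made up for illustration.
    volatile std::uint32_t* const STATUS_REG =
        reinterpret_cast<volatile std::uint32_t*>(0x40000000);

    std::uint32_t read_status() {
        // volatile keeps the compiler from eliding or merging the device access itself.
        std::uint32_t v = *STATUS_REG;
        // The acquire fence keeps later memory operations from being reordered
        // before this device read.
        std::atomic_thread_fence(std::memory_order_acquire);
        return v;
    }

This is why the answer distinguishes the two roles: the fences can be expressed with C++11 atomics, while the volatile qualifier on the register access itself cannot be replaced by them.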

C++ undefined reference to `__atomic_load_16'

僤鯓⒐⒋嵵緔 submitted on 2021-02-08 15:47:31
Question: I get linking errors when trying to do an atomic load of a 16-byte block. I have the following code:

    #include <atomic>

    struct MyStruct {
        long x;
        long y;
    };

    struct X {
        std::atomic<MyStruct> myStruct;
    };

    int main() {
        X x;
        MyStruct s = atomic_load(&x.myStruct);
    }

When I compile this with g++ version 5.3.1:

    g++ --std=c++11 test.cpp

I get the error

    /tmp/ccrvzLMq.o: In function `std::atomic<MyStruct>::load(std::memory_order) const':
    test.cpp:(.text._ZNKSt6atomicI8MyStructE4loadESt12memory_order[
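The undefined reference to __atomic_load_16 appears because GCC emits calls into its libatomic support library for 16-byte atomic operations rather than inlining them. Assuming libatomic is available on the system, adding it to the link line usually resolves the error (note that the resulting 16-byte atomic may be implemented with an internal lock rather than being lock-free):

    g++ --std=c++11 test.cpp -latomic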

Implementation of a lock free vector

*爱你&永不变心* submitted on 2021-02-08 12:05:42
Question: After several searches, I cannot find a lock-free vector implementation. There is a paper that discusses one, but nothing concrete (in any case I have not found it): http://pirkelbauer.com/papers/opodis06.pdf There are currently two threads dealing with the arrays, and there may be more after a while: one thread updates the different vectors, and another thread accesses a vector to do calculations, etc. Each thread accesses the arrays a large number of times per second. I implemented a
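One common workaround for this writer/reader split (a sketch of one possible approach, not the algorithm from the linked paper) is to publish immutable snapshots of the vector through an atomically swapped shared_ptr, using the C++11 atomic free functions for shared_ptr, so the reader never observes a vector that is being modified:

    #include <atomic>
    #include <memory>
    #include <vector>

    class SnapshotVector {
        std::shared_ptr<const std::vector<double>> data_ =
            std::make_shared<const std::vector<double>>();

    public:
        // Writer thread: build the new version, then atomically publish it.
        void update(std::vector<double> new_values) {
            auto next = std::make_shared<const std::vector<double>>(std::move(new_values));
            std::atomic_store(&data_, std::shared_ptr<const std::vector<double>>(next));
        }

        // Reader thread: grab a consistent snapshot; it stays valid even if the
        // writer publishes a newer version while we are still using it.
        std::shared_ptr<const std::vector<double>> snapshot() const {
            return std::atomic_load(&data_);
        }
    };

The cost is a full copy of the vector on every update, so this only pays off when reads greatly outnumber writes or the vectors are small; in C++20 std::atomic<std::shared_ptr<...>> replaces the free functions.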

Does rm actually free a deleted file's disk space?

帅比萌擦擦* submitted on 2021-02-07 20:31:41
On Linux, have you ever naively assumed that once you delete a file with rm, the space it occupies is released? Things don't always work out that way.

Generating a file of a given size with random content

First, let's look at the current usage of each mounted filesystem:

    $ df -h
    /dev/sda11    454M  280M  147M  66%  /boot

I've picked just one line of the output to show here (you can choose any mount point); next we'll create a file under /boot.

First we generate a 50 MB file:

    $ dd if=/dev/urandom of=/boot/test.txt bs=50M count=1

Now that we have a 50 MB file, look at /boot again:

    $ df -h
    /dev/sda11    454M  312M  115M  74%  /boot

Don't worry about exactly how much it grew; just note that the usage under /boot has increased.

Test program:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        FILE *fp = NULL;

        fp = fopen("/boot/test.txt", "rw+");
        if (NULL == fp) {
            perror("open file failed");
            return -1;
        }
        while (1) {
            // do nothing
            sleep(1);
        }
        fclose(fp);
        return 0;
    }

As for the program itself
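The underlying mechanism is that the filesystem only frees a file's blocks once the last open file descriptor referring to it is closed; rm (i.e. unlink) only removes the name. A small standalone sketch of the same behaviour (the path, file size, and sleep duration are arbitrary; a POSIX system is assumed), built with a C++ compiler:

    // Space held by an unlinked file is only released when the last descriptor closes.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("/tmp/demo.txt", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        // Write some data so the file actually occupies blocks (~4 MB total).
        char buf[4096] = {0};
        for (int i = 0; i < 1024; ++i) {
            if (write(fd, buf, sizeof(buf)) < 0) { perror("write"); break; }
        }

        unlink("/tmp/demo.txt");  // same effect as rm: the directory entry is gone...
        // ...but the blocks stay allocated while this descriptor is open,
        // so `df` will not report the space as freed during the sleep below.
        sleep(60);

        close(fd);                // now the space is actually released
        return 0;
    }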

Why would 'deleting' nodes in this lock-free stack class cause a race condition?

佐手、 submitted on 2021-02-07 18:18:21
Question: In the book titled "C++ Concurrency in Action" by Anthony Williams, in Section 7.2.1, a lock-free stack implementation is listed:

    template <typename T>
    class lock_free_stack {
        struct node {
            shared_ptr<T> data_;
            node* next_;
            node(const T& data) : data_(make_shared(data)) {}
        };
        atomic<node*> head_;
    public:
        void push(const T& data) {
            node* new_node = new node(data);
            new_node->next_ = head_.load();
            while(!head.compare_exchange_weak(new_node->next_, new_node));
        }
        shared_ptr<T> pop() {
            node* old
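The excerpt cuts off just inside pop(), which is where the question's race lives. Below is a hedged reconstruction of a naive pop() that deletes the popped node (a sketch in the spirit of the book's discussion, not the book's listing; names follow the excerpt where possible), with the racy step marked:

    #include <atomic>
    #include <memory>

    // Minimal sketch showing why a naive delete in pop() races.
    template <typename T>
    class naive_lock_free_stack {
        struct node {
            std::shared_ptr<T> data_;
            node* next_;
            explicit node(const T& data) : data_(std::make_shared<T>(data)), next_(nullptr) {}
        };
        std::atomic<node*> head_{nullptr};

    public:
        void push(const T& data) {
            node* new_node = new node(data);
            new_node->next_ = head_.load();
            while (!head_.compare_exchange_weak(new_node->next_, new_node)) {}
        }

        std::shared_ptr<T> pop() {
            node* old_head = head_.load();
            // Another thread may load the same old_head here and dereference it later.
            while (old_head &&
                   !head_.compare_exchange_weak(old_head, old_head->next_)) {}
            if (!old_head) return std::shared_ptr<T>();
            std::shared_ptr<T> res = old_head->data_;
            delete old_head;  // RACE: a thread that loaded the same old_head may still
                              // evaluate old_head->next_ after this delete (use-after-free).
            return res;
        }
    };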

Is a single row INSERT atomic? E.g. on a table with 1M columns?

自作多情 submitted on 2021-02-07 17:32:04
Question: Is a single-row INSERT atomic (for an external reader)? Imagine it happens on a table with 1M columns. While executing a single INSERT statement (namely, the "single row" kind), is it possible for a read operation (maybe using the 'Read uncommitted' isolation level) occurring at the same time to read only some of the values (columns)? I'm particularly interested in MS SQL Server's behaviour, although I assume this is similar for all major vendors. Bonus cred points for a link to official