shared-memory

What has changed in the memory model in .NET 4.5?

南笙酒味 submitted on 2020-01-01 05:42:15
Question: I just read this puzzling line in Peter Ritchie's blog and I need help understanding what it means: "Prior to .NET 4.5 you really programmed to the .NET memory model": http://msmvps.com/blogs/peterritchie/archive/2012/09/09/thread-synchronization-of-atomic-invariants-in-net-4-5.aspx Has the 'usual' .NET memory model (such as the one discussed in Jeffrey Richter's book CLR via C#, editions 1 and 2; I haven't read the 3rd) changed in .NET 4.5? Is there an article with a concise explanation?

Answer 1: The proper …

How Do I Store and Retrieve a Struct into a Shared Memory Area in C

核能气质少年 submitted on 2020-01-01 05:23:06
Question: For a uni assignment I need to create a circular list of up to 10 file names and store it in a shared memory area, so that two child processes can read/write the list (using a semaphore to control access). The trouble is that I am a total C novice, and I feel lost because this is totally out of my depth. I need some help "filling in the holes" in my knowledge. Right now I am focusing on one problem at a time, and at present I am only trying to get my circular list …
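A minimal sketch of the storage part, leaving the semaphore out (the key choice, sizes, and struct layout are my assumptions, not the assignment's). The one rule that matters is that the struct must hold its data inline, as fixed-size char arrays rather than pointers, because the segment can be attached at different addresses in different processes:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define MAX_NAMES 10
    #define NAME_LEN  64

    struct name_ring {
        int  head;                        /* next slot to read  */
        int  tail;                        /* next slot to write */
        char names[MAX_NAMES][NAME_LEN];  /* inline storage, no pointers */
    };

    int main(void)
    {
        /* IPC_PRIVATE: the shmid is inherited by children after fork(). */
        int shmid = shmget(IPC_PRIVATE, sizeof(struct name_ring),
                           IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        struct name_ring *ring = shmat(shmid, NULL, 0);
        if (ring == (void *) -1) { perror("shmat"); return 1; }

        ring->head = ring->tail = 0;
        strncpy(ring->names[ring->tail], "file1.txt", NAME_LEN - 1);
        ring->tail = (ring->tail + 1) % MAX_NAMES;

        printf("stored: %s\n", ring->names[ring->head]);

        shmdt(ring);                      /* detach from this process */
        shmctl(shmid, IPC_RMID, NULL);    /* mark segment for deletion */
        return 0;
    }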

Why are multiprocessing.sharedctypes assignments so slow?

亡梦爱人 submitted on 2020-01-01 05:11:25
Question: Here's a little benchmarking code to illustrate my question:

    import numpy as np
    import multiprocessing as mp

    # allocate memory
    %time temp = mp.RawArray(np.ctypeslib.ctypes.c_uint16, int(1e8))
    Wall time: 46.8 ms

    # assign memory, very slow
    %time temp[:] = np.arange(1e8, dtype = np.uint16)
    Wall time: 10.3 s

    # equivalent numpy assignment, 100X faster
    %time a = np.arange(1e8, dtype = np.uint16)
    Wall time: 111 ms

Basically I want a numpy array to be shared between multiple processes because it's …
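The likely culprit for the slow line is per-element access through the ctypes proxy; a common workaround (a sketch, not necessarily the thread's accepted answer) is to view the RawArray as a numpy array, so the assignment becomes one bulk copy:

    import numpy as np
    import multiprocessing as mp

    n = int(1e8)
    raw = mp.RawArray(np.ctypeslib.ctypes.c_uint16, n)

    # frombuffer views the shared memory without copying; the slice
    # assignment is then a single bulk copy instead of 1e8 ctypes calls.
    arr = np.frombuffer(raw, dtype=np.uint16)
    arr[:] = np.arange(n, dtype=np.uint16)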

Better way to share memory for multiprocessing in Python?

点点圈 submitted on 2020-01-01 02:51:05
Question: I have been tackling this problem for a week now, and it's getting pretty frustrating, because every time I implement a simpler but similarly scaled example of what I need to do, it turns out multiprocessing fudges it up. The way it handles shared memory baffles me, because it is so limited that it can become useless quite rapidly. The basic description of my problem is that I need to create a process that is passed some parameters to open an image and create about 20K patches of size …
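One common way around this, sketched below with assumed sizes and a stand-in for the real patch extraction: allocate one shared buffer up front and let pool workers write into slices of it, so no patch data is pickled back through a queue.

    import numpy as np
    import multiprocessing as mp

    PATCH, N_PATCHES = 64, 1000            # illustrative sizes, not the OP's
    shared = mp.RawArray('f', N_PATCHES * PATCH * PATCH)

    def init(buf):
        # Give each worker a numpy view of the same shared buffer.
        global patches
        patches = np.frombuffer(buf, dtype=np.float32)
        patches = patches.reshape(N_PATCHES, PATCH, PATCH)

    def work(i):
        patches[i] = i                     # stand-in for real patch extraction

    if __name__ == '__main__':
        with mp.Pool(4, initializer=init, initargs=(shared,)) as pool:
            pool.map(work, range(N_PATCHES))
        init(shared)                       # view the buffer in the parent too
        print(patches[3, 0, 0])            # 3.0, written by a worker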

Shared memory and copy on write or rvalue references and move semantics?

梦想的初衷 submitted on 2019-12-30 17:51:10
Question: Is a shared-memory/copy-on-write implementation for general containers (like that found in Qt's containers) superseded by C++11 move semantics and rvalue references? Where does one fail and the other succeed? Or are they complementary rather than alternatives?

Answer 1: Both copy-on-write and move semantics have been used to optimize the value semantics of objects that hold their data on the heap. std::string, for example, has been implemented both as a copy-on-write object and as a move-enabled …
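A minimal sketch of the move side of that comparison: a move transfers ownership of the heap buffer in O(1), which handles the expiring-source copies that COW also optimized; unlike COW, though, it does nothing for two long-lived copies that are only ever read.

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Returning by value moves (or elides) the vector: no deep copy.
    std::vector<std::string> make_names() {
        return {"a.txt", "b.txt"};
    }

    int main() {
        std::vector<std::string> v = make_names();
        std::vector<std::string> w = std::move(v);  // steals v's buffer, O(1)
        std::cout << w.size() << '\n';              // 2; v is left valid but
                                                    // unspecified
    }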

Deleting shared memory with ipcrm in Linux

ⅰ亾dé卋堺 submitted on 2019-12-30 16:22:52
Question: I am working with a shared memory application, and to delete the segments I use the following command (0x0000162e is the key):

    ipcrm -M 0x0000162e

But I do not know if I am doing the right thing, because when I run ipcs I still see the same segment, only with the key 0x00000000. So is the memory segment really deleted? When I run my application several times I see different memory segments with the key 0x00000000, like this:

    key        shmid  owner  perms  bytes  nattch  status
    0x00000000 65538  me     666    27     2       dest
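For reference, the same deletion can be done programmatically; like ipcrm, shmctl(IPC_RMID) only marks the segment for destruction, which is why ipcs keeps showing it, with key 0x00000000 and status "dest", until the last attached process detaches (a minimal sketch, using the key from the question):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        key_t key = 0x0000162e;            /* the key from the question */
        int shmid = shmget(key, 0, 0);     /* look up the existing segment */
        if (shmid == -1) { perror("shmget"); return 1; }

        /* Marks the segment for destruction; it is freed once nattch == 0. */
        if (shmctl(shmid, IPC_RMID, NULL) == -1) { perror("shmctl"); return 1; }
        printf("segment %d marked for deletion\n", shmid);
        return 0;
    }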

Can address space be recycled for multiple calls to MapViewOfFileEx without chance of failure?

廉价感情. submitted on 2019-12-30 10:26:41
Question: Consider a complex, memory-hungry, multi-threaded application running within a 32-bit address space on Windows XP. Certain operations require n large buffers of fixed size, of which only one needs to be accessed at a time. The application uses a pattern where address space the size of one buffer is reserved early and is used to contain the currently needed buffer. This follows the sequence: (initial run) VirtualAlloc -> VirtualFree -> MapViewOfFileEx; (buffer changes) UnmapViewOfFile -> …
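A hedged sketch of that sequence as I read it (pagefile-backed mapping, illustrative size, error handling trimmed). The fragile step is the gap between VirtualFree and MapViewOfFileEx: nothing stops another thread's allocation from landing in the released range, so the mapping call can fail even though the address was free a moment earlier.

    #include <stdio.h>
    #include <windows.h>

    #define BUF_SIZE (64 * 1024 * 1024)   /* illustrative buffer size */

    int main(void)
    {
        /* Backing store for the buffers (pagefile-backed here). */
        HANDLE mapping = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                           PAGE_READWRITE, 0, BUF_SIZE, NULL);

        /* Reserve address space to pick a base address, then release it
           so MapViewOfFileEx may reuse the range. */
        void *base = VirtualAlloc(NULL, BUF_SIZE, MEM_RESERVE, PAGE_NOACCESS);
        VirtualFree(base, 0, MEM_RELEASE);

        /* RACE: another thread can allocate inside [base, base + BUF_SIZE)
           right here, in which case MapViewOfFileEx returns NULL. */
        void *view = MapViewOfFileEx(mapping, FILE_MAP_WRITE, 0, 0,
                                     BUF_SIZE, base);
        printf("mapped at %p (requested %p)\n", view, base);

        /* On a buffer change: unmap, then map the next buffer at base
           (the same race applies on every switch). */
        UnmapViewOfFile(view);
        view = MapViewOfFileEx(mapping, FILE_MAP_WRITE, 0, 0, BUF_SIZE, base);

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        return 0;
    }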

When is padding for shared memory really required?

你。 submitted on 2019-12-30 03:34:07
Question: I am confused by two documents from NVIDIA. "CUDA Best Practices" describes how shared memory is organized in banks and that, in general, in 32-bit mode each 4 bytes is a bank (that is how I understood it). However, "Parallel Prefix Sum (Scan) with CUDA" goes into detail about how padding should be added to the scan algorithm because of bank conflicts. The problem for me is that the basic type for this algorithm as presented is float, and its size is 4 bytes; thus each float is a bank and there is no bank conflict. …
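For context, a hedged sketch of when padding genuinely matters: conflicts come from the access pattern across a warp, not from the element size. A float does occupy exactly one 4-byte bank slot, but when the 32 threads of a warp access shared memory with a stride of 32 words (as the scan's tree traversal does, and as the column reads in the classic transpose tile below do), every access lands in the same bank. The sketch assumes blockDim = (32, 32) and n a multiple of TILE:

    #define TILE 32

    __global__ void transpose(const float *in, float *out, int n)
    {
        __shared__ float tile[TILE][TILE + 1];   // +1 column of padding

        int x = blockIdx.x * TILE + threadIdx.x;
        int y = blockIdx.y * TILE + threadIdx.y;
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];  // row access: no conflict
        __syncthreads();

        // Without the pad, the 32 threads of a warp would read words
        // TILE (= 32) apart here: all in one bank, a 32-way conflict.
        // With TILE + 1 the stride is 33 words, spread across all banks.
        x = blockIdx.y * TILE + threadIdx.x;
        y = blockIdx.x * TILE + threadIdx.y;
        out[y * n + x] = tile[threadIdx.x][threadIdx.y];
    }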

Linux shared memory: shmget() vs mmap()?

三世轮回 submitted on 2019-12-28 07:39:14
Question: In this thread the OP is advised to use mmap() instead of shmget() to get shared memory in Linux. I visited this page and this page to get some documentation, but the second one gives an obscure example regarding mmap(). Being almost a newbie, and needing to share some information (in text form) between two processes, should I use the shmget() method or mmap()? And why?

Answer 1: Both methods are viable. The mmap method is a little bit more restrictive than shmget, but easier to use. shmget is …
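To make the mmap side concrete, a minimal sketch for related processes (an anonymous shared mapping inherited across fork; unrelated processes would use shm_open plus mmap instead):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* One page of memory shared between parent and child. */
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        if (fork() == 0) {                 /* child writes... */
            strcpy(buf, "hello from the child");
            _exit(0);
        }
        wait(NULL);                        /* ...parent reads after it exits */
        printf("%s\n", buf);
        munmap(buf, 4096);
        return 0;
    }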