memory-mapped-files

Read access violation for memory mapped vector in debug mode

馋奶兔 submitted on 2019-12-04 20:45:38
While attempting to use boost::interprocess to store a std::vector in a memory-mapped file, I get the exception Exception thrown: read access violation. when I push back on a loaded vector, but only in debug mode. This minimal example (written by @sehe) is taken from https://stackoverflow.com/a/29602884/2741329 , and it crashes on MSVC14 in debug mode when executed more than once: #include <boost/interprocess/managed_mapped_file.hpp> namespace bi = boost::interprocess; int main() { std::string vecFile = "vector.dat"; bi::managed_mapped_file file_vec(bi::open_or_create
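For what it's worth, the usual explanation for this class of crash is that MSVC's debug-mode iterator bookkeeping keeps raw pointers that are no longer valid when the file is mapped again on a later run; release builds skip that bookkeeping. Boost specifics aside, the underlying pattern (a mapped file that accumulates elements across program runs) can be sketched with Python's stdlib mmap; the file name and on-disk layout here are invented for illustration:

```python
import mmap
import os
import struct

PATH = "vector.dat"   # same name as in the question, purely illustrative

def push_back_run(path, size=4096):
    """One 'program run': open or create the mapped file and append one int."""
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\x00" * size)          # open_or_create-style backing file
    with open(path, "r+b") as f:
        mm = mmap.mmap(f.fileno(), size)
        count = struct.unpack_from("<Q", mm, 0)[0]             # element-count header
        struct.pack_into("<i", mm, 8 + 4 * count, count * 10)  # "push_back" one value
        struct.pack_into("<Q", mm, 0, count + 1)
        mm.flush()
        values = [struct.unpack_from("<i", mm, 8 + 4 * i)[0]
                  for i in range(count + 1)]
        mm.close()
    return values

if os.path.exists(PATH):
    os.remove(PATH)              # start clean for the demo
first = push_back_run(PATH)      # first run appends one element
second = push_back_run(PATH)     # second run sees the first element and appends another
```

Unlike the Boost version, nothing here stores absolute pointers in the file, so re-running is safe regardless of where the OS maps it.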

Memory Mapped Files, Managed Mapped File and Offset Pointer

落爺英雄遲暮 submitted on 2019-12-04 13:49:06
I'm a little confused about the terminology of the Boost library (for Windows). What I'm trying to do is simple: create a file on disk (a big file, >50 GB), then do mapping for write and read operations separately. For example, first map a 1 GB portion for writing, flush it to the hard drive, take a new portion, and so on, while the reader applications map different parts of the file and do their reading without changing anything (no edits). I'm reading the Boost documentation (version 1.47.0, since we are allowed to use only that one) and I don't understand exactly when to use Memory Mapped
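The windowed write/flush/remap cycle described above can be sketched with the stdlib mmap module (file name and sizes are placeholders; a real run would use GB-sized windows). Note that each window's offset must be a multiple of the platform's allocation granularity:

```python
import mmap
import os

PATH = "big.dat"                            # placeholder; the real file is >50 GB
CHUNK = mmap.ALLOCATIONGRANULARITY          # map offsets must be multiples of this
TOTAL = CHUNK * 4

with open(PATH, "wb") as f:
    f.truncate(TOTAL)                       # create the full-size file up front

# Writer: map one window at a time, fill it, flush, move to the next.
with open(PATH, "r+b") as f:
    for i in range(4):
        mm = mmap.mmap(f.fileno(), CHUNK, offset=i * CHUNK)
        mm[:] = bytes([i]) * CHUNK
        mm.flush()                          # push this window to disk
        mm.close()

# Reader: independently map just the third window, read-only.
with open(PATH, "rb") as f:
    view = mmap.mmap(f.fileno(), CHUNK, offset=2 * CHUNK,
                     access=mmap.ACCESS_READ)
    third = view[:8]
    view.close()
```

The reader never holds the whole file in memory; it maps only the region it needs, which is the core of the scheme the question describes.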

Disadvantages of using memory mapped files

六月ゝ 毕业季﹏ submitted on 2019-12-04 11:18:28
My web service writes several thousand transactions per minute, and we save them to the hard disk. I have been testing different ways to save these files, and I ran some tests comparing standard IO with memory-mapped files. In my results, writing files (20k text files) with memory-mapped files is about 4x faster than standard IO, and I was not able to find any disadvantages. As I don't have much experience with this technology, do you think I may face any problems using them, or do you see no disadvantages? Thanks! EDIT 1, here is the source: namespace FileWritingTests.Writers { public class
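As a rough sketch of such a comparison (stdlib Python rather than .NET, and not a rigorous benchmark), the two strategies look like this. Known trade-offs of the mapped variant include having to fix the file size before mapping and less predictable durability on a crash, since dirty pages are flushed at the OS's discretion:

```python
import mmap
import time

DATA = b"x" * (20 * 1024)      # ~20 KB payload per file, invented for the demo
N = 50

def standard_io(prefix):
    """Plain buffered writes, one file per transaction."""
    for i in range(N):
        with open(f"{prefix}.{i}", "wb") as f:
            f.write(DATA)

def mapped_io(prefix):
    """Memory-mapped writes: size the file first, then copy into the mapping."""
    for i in range(N):
        with open(f"{prefix}.{i}", "wb") as f:
            f.truncate(len(DATA))          # size must be fixed before mapping
        with open(f"{prefix}.{i}", "r+b") as f:
            mm = mmap.mmap(f.fileno(), len(DATA))
            mm[:] = DATA
            mm.close()                     # data reaches disk on unmap/flush

t0 = time.perf_counter()
standard_io("std")
t1 = time.perf_counter()
mapped_io("map")
t2 = time.perf_counter()
# (t1 - t0) vs (t2 - t1): results vary by OS, disk, and file size.
```

Whether the 4x figure holds depends heavily on file size and OS caching; both paths should at least produce identical files.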

Java MemoryMapping big files

╄→尐↘猪︶ㄣ submitted on 2019-12-04 11:13:14
The Java limitation of MappedByteBuffer to 2 GiB makes it tricky to use for mapping big files. The usual recommended approach is to use an array of MappedByteBuffers and index it through: long PAGE_SIZE = Integer.MAX_VALUE; MappedByteBuffer[] buffers; private int getPage(long offset) { return (int) (offset / PAGE_SIZE); } private int getIndex(long offset) { return (int) (offset % PAGE_SIZE); } public byte get(long offset) { return buffers[getPage(offset)].get(getIndex(offset)); } This works for single bytes, but requires rewriting a lot of code if you want to handle reads/writes that are
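The page/index arithmetic above translates directly; here is a sketch with a helper that splits a read straddling a page boundary across buffers, which is the part the question says requires rewriting a lot of code:

```python
PAGE_SIZE = 2**31 - 1   # Integer.MAX_VALUE, as in the question

def get_page(offset, page_size=PAGE_SIZE):
    return offset // page_size

def get_index(offset, page_size=PAGE_SIZE):
    return offset % page_size

def read_range(buffers, offset, length, page_size=PAGE_SIZE):
    """Read `length` bytes starting at `offset`, splitting reads that
    straddle a page boundary across the underlying buffers."""
    out = bytearray()
    while length > 0:
        page = get_page(offset, page_size)
        idx = get_index(offset, page_size)
        n = min(length, page_size - idx)   # bytes remaining in this page
        out += buffers[page][idx:idx + n]
        offset += n
        length -= n
    return bytes(out)

# Two tiny 4-byte "pages" stand in for 2 GiB MappedByteBuffers:
pages = [b"abcd", b"efgh"]
chunk = read_range(pages, 2, 4, page_size=4)   # spans both pages
```

The same loop shape works for multi-byte writes; only the slice assignment direction changes.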

numpy.memmap for an array of strings?

我的梦境 submitted on 2019-12-04 09:29:45
Is it possible to use numpy.memmap to map a large disk-based array of strings into memory? I know it can be done for floats and the like, but this question is specifically about strings. I am interested in solutions for both fixed-length and variable-length strings; the solution is free to dictate any reasonable file format. If all the strings have the same length, as the term "array" suggests, this is easily possible: a = numpy.memmap("data", dtype="S10") would be an example for strings of length 10. Edit : Since apparently the strings don't have the same length, you need to index the
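A minimal stdlib sketch of the fixed-length case, using the same layout numpy.memmap with dtype="S10" implies (record i lives at byte offset i * 10, padded to width):

```python
import mmap

WIDTH = 10                        # fixed record width, as with dtype "S10"
words = [b"alpha", b"beta", b"gamma"]

with open("strings.dat", "wb") as f:
    for w in words:
        f.write(w.ljust(WIDTH, b"\x00"))   # pad each string to WIDTH bytes

with open("strings.dat", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    n_records = len(mm) // WIDTH
    # Random access to record i is just a slice at i * WIDTH:
    second = mm[1 * WIDTH:2 * WIDTH].rstrip(b"\x00")
    mm.close()
```

For variable-length strings, a common approach is a second mapped array of fixed-size integer offsets into a flat byte file, indexed the same way; the offsets array then plays the role the fixed width plays here.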

How to order array in lexicographical order with mapped file vb.net

≡放荡痞女 submitted on 2019-12-04 06:53:04
This is kind of complicated for me to understand: Dim test() As Byte = New Byte() {50, 40, 30, 10, 10} Dim answer() As UInteger = SortLexicoGraphicallyArrayMappedFile(test) The answer is each rotation sorted from lowest array value to highest array value. Rotation 0 = 50, 40, 30, 10, 10 Rotation 1 = 10, 50, 40, 30, 10 Rotation 2 = 10, 10, 50, 40, 30 Rotation 3 = 30, 10, 10, 50, 40 Rotation 4 = 40, 30, 10, 10, 50 When I sort this array above by hand, I should get Rotation 2 = 10, 10, 50, 40,
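If it helps to see the intended result concretely, here is a naive sketch in Python (it materializes every rotation, so it only suits small inputs; for mapped-file-sized data a suffix-array-style construction would be needed instead):

```python
def sort_rotations(data):
    """Return rotation indices ordered so that the corresponding
    right-rotations of `data` are in lexicographic order."""
    n = len(data)
    # Right-rotation k: element i comes from data[(i - k) % n],
    # matching the Rotation 0..4 listing in the question.
    rotations = [bytes(data[(i - k) % n] for i in range(n)) for k in range(n)]
    return sorted(range(n), key=lambda k: rotations[k])

test = bytes([50, 40, 30, 10, 10])
order = sort_rotations(test)   # index of the smallest rotation comes first
```

Consistent with the question, the lexicographically smallest rotation is Rotation 2 = 10, 10, 50, 40, 30, so it sorts first.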

Read memory mapped file or know its size to read it correctly

[亡魂溺海] submitted on 2019-12-04 05:31:35
In this question, Read all contents of memory mapped file or Memory Mapped View Accessor without knowing the size of it, there is a problem: (int)stream.Length is not giving me the correct length; rather, it gives the size of the internal buffer used! I need to refresh this question because it is very pressing. The main question was: I need something similar to ReadToEnd or ReadAllBytes to read all of the contents of the MemoryMappedFile using the MappedViewAccessor if I don't know the size
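One common workaround is to take the length from the underlying file itself rather than from the view or stream, since views may be rounded up to a page or buffer size. A stdlib-Python sketch of the same idea (file name and contents invented):

```python
import mmap
import os

with open("payload.bin", "wb") as f:        # hypothetical file
    f.write(b"hello mapped world")

with open("payload.bin", "rb") as f:
    # Ask the filesystem for the true size, not the view/buffer:
    real_size = os.fstat(f.fileno()).st_size
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)  # 0 = whole file
    contents = mm[:real_size]               # the "ReadAllBytes" equivalent
    mm.close()
```

In .NET the analogous move would be reading the length from the file (e.g. FileInfo) before creating the view, then slicing the accessor to that length.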

Memory-Mapped File is Faster on Huge Sequential Read? Why?

荒凉一梦 submitted on 2019-12-04 03:26:41
I used the code below to measure the performance difference between large, sequential reads of a memory-mapped file and just calling ReadFile: HANDLE hFile = CreateFile(_T("D:\\LARGE_ENOUGH_FILE"), FILE_READ_DATA, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL); __try { const size_t TO_READ = 32 * 1024 * 1024; char sum = 0; #if TEST_READ_FILE DWORD start = GetTickCount(); char* p = (char*)malloc(TO_READ); DWORD nw; ReadFile(hFile, p,
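A small stdlib sketch of the same experiment (sizes scaled down, Python rather than Win32, and the timings are indicative only; the checksum keeps the two paths honest by forcing each to touch the same bytes):

```python
import mmap
import os
import time

PATH = "seq.bin"
SIZE = 8 * 1024 * 1024        # 8 MiB stand-in for the question's 32 MiB read
with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

# Plain read(): one explicit copy of the whole file into user space.
t0 = time.perf_counter()
with open(PATH, "rb") as f:
    data = f.read()
plain_sum = sum(data[::4096]) & 0xFF       # touch one byte per 4 KiB page
t_read = time.perf_counter() - t0

# mmap: pages are faulted in on first touch; no explicit copy call.
t0 = time.perf_counter()
with open(PATH, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    map_sum = sum(mm[i] for i in range(0, SIZE, 4096)) & 0xFF
    mm.close()
t_map = time.perf_counter() - t0
# Which path wins depends on OS read-ahead and whether the page cache is warm.
```

Unlike the question's FILE_FLAG_NO_BUFFERING run, both paths here go through the page cache, so repeat runs measure cache behavior rather than the disk.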

Truncate memory mapped file

时光总嘲笑我的痴心妄想 submitted on 2019-12-04 01:12:38
I am using memory-mapped IO for an index file, but the problem is that I'm not able to resize the file if it is mostly empty. Somewhere before: MappedByteBuffer map = raf.getChannel().map(MapMode.READ_WRITE, 0, 1 << 30); raf.close(); // use map map.force(); map = null; Resize: for (int c = 0; c < 100; c++) { RandomAccessFile raf = new RandomAccessFile(indexFile, "rw"); try { raf.setLength(newLen); if (c > 0) LOG.warn("used " + c + " iterations to close mapped byte buffer"); return; } catch (Exception e) { System.gc(); Thread.sleep(10); System.runFinalization(); Thread.sleep(10); } finally {
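The retry loop exists because Java has no explicit unmap: the mapping only goes away when the buffer is garbage-collected, and on Windows the file cannot be shrunk while a mapping is live. A language with an explicit unmap avoids the loop entirely; a stdlib-Python sketch (sizes are placeholders) showing the ordering the GC loop is trying to force, namely release the mapping first, then shrink the file:

```python
import mmap
import os

PATH = "index.dat"
with open(PATH, "wb") as f:
    f.truncate(1 << 16)          # 64 KiB stand-in for the 1 GiB index

with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 1 << 16)
    mm[0:4] = b"data"
    mm.flush()
    mm.close()                   # explicitly release the mapping first ...

os.truncate(PATH, 1024)          # ... so the OS will allow shrinking the file
new_len = os.path.getsize(PATH)
```

Truncation keeps the file's prefix, so data written through the mapping before the shrink point survives.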

Python: handling a large set of data. Scipy or Rpy? And how?

半世苍凉 submitted on 2019-12-03 09:57:26
In my Python environment, the Rpy and Scipy packages are already installed. The problem I want to tackle is this: 1) A huge set of financial data is stored in a text file; loading it into Excel is not possible. 2) I need to sum certain fields and get the totals. 3) I need to show the top 10 rows based on the totals. Which package (Scipy or Rpy) is best suited for this task? Could you provide some pointers (e.g. documentation or an online example) that can help me implement a solution
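Neither package is strictly required for this: the streaming part fits in the standard library. A hedged sketch, assuming a simple account,amount CSV layout (invented for illustration): read row by row so the whole file never sits in memory, accumulate totals in a dict, and take the top 10 with heapq:

```python
import csv
import heapq
import io
from collections import defaultdict

# Hypothetical layout: account,amount -- one transaction per line.
# A real run would pass an open file instead of this in-memory sample.
sample = io.StringIO(
    "acme,100\nacme,250\nglobex,400\ninitech,50\nglobex,10\n"
)

totals = defaultdict(float)
for account, amount in csv.reader(sample):   # streams one row at a time
    totals[account] += float(amount)         # never loads the whole file

# Top 10 rows by total, largest first:
top10 = heapq.nlargest(10, totals.items(), key=lambda kv: kv[1])
```

Scipy (or pandas with chunked reads) becomes worthwhile once you need more than sums and a top-N, but for exactly these three steps, the stdlib is enough.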