mmap

mmap reading stale array of integers from a file

Submitted by 江枫思渺然 on 2020-01-17 07:41:54
Question: I'm trying to read a matrix of integers from a file using mmap. If I treat the result of mmap as a char pointer, everything looks correct, but if I use an int pointer, I see stale data. The problem with the char pointer is that I then have to parse the whole string using strtok or something similar and get the integers one by one. My matrix is going to be 4k * 4k, so making that many calls to sscanf and strtok is not efficient. Please look at the program and output: #define INTS 3 * 3 int main() { FILE*
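
The int pointer looks stale because mmap exposes the file's raw bytes: a text file of numbers maps as ASCII characters, so an int* view shows character bytes packed into ints, not parsed values. A minimal single-pass sketch that avoids per-token sscanf calls by walking the mapped char buffer with strtol (matrix.txt is a made-up name; the sketch also assumes the file size is not an exact page multiple, so the zero-filled tail of the last page terminates the buffer):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("matrix.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

    char *buf = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* strtol skips whitespace and advances 'next' past each number,
     * so one linear pass parses the whole file without strtok. */
    char *p = buf, *end = buf + st.st_size;
    while (p < end) {
        char *next;
        long v = strtol(p, &next, 10);
        if (next == p) break;   /* no more digits */
        printf("%ld\n", v);
        p = next;
    }

    munmap(buf, st.st_size);
    close(fd);
    return 0;
}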

C/UNIX mmap array of int

Submitted by 泪湿孤枕 on 2020-01-16 11:22:21
Question: Is it possible to mmap a file full of integers as an integer array? I mean something like this (which doesn't work): given the file tmp.in containing 1 2 15 1258 and code similar to this: int fd; if ((fd = open("tmp.in", O_RDWR, 0666)) == -1) { err(1, "open"); } size_t size = 4 * sizeof(int); int * map = (int *) mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); I'd like to be able to call printf("%d\n", map[2]+1); with the expected result 16. I found char mapping using sscanf to parse the integers, but I need to have
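
This only works if the file stores native binary ints; the text file tmp.in maps as the ASCII bytes of "1 2 15 1258". A minimal sketch under that assumption, writing a binary version first (tmp.bin is a made-up name) so the int* view behaves as desired:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int vals[4] = {1, 2, 15, 1258};

    /* Write the integers in native binary layout. */
    int fd = open("tmp.bin", O_RDWR | O_CREAT | O_TRUNC, 0666);
    if (fd == -1) { perror("open"); return 1; }
    if (write(fd, vals, sizeof vals) != sizeof vals) { perror("write"); return 1; }

    /* Now the file's bytes really are an int[4], so mapping as int* works. */
    size_t size = sizeof vals;
    int *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    printf("%d\n", map[2] + 1);   /* prints 16 */

    munmap(map, size);
    close(fd);
    return 0;
}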

How to implement shared-memory communication in C++

Submitted by 萝らか妹 on 2020-01-15 04:36:52
How to implement shared-memory communication in C++. Contents: preface; the mmap mechanism (corresponding to PosixSegment in Cyber's shared-memory transport) with a summary; System V shared memory (corresponding to Cyber's XsiSegment) with a summary; references. Preface: Many performance-critical projects now support shared memory as an inter-process communication (IPC) mechanism. Using Baidu's Apollo autonomous-driving project as an example, this article shows two ways to implement shared-memory communication in C++, corresponding to two different mechanisms in Linux. Shared memory simply means two unrelated processes accessing the same block of logical memory, so an additional synchronization mechanism is of course needed to keep reads and writes correct. The obvious benefit of shared-memory communication is efficiency: processes read and write the memory directly, with no data copying at all. Mechanisms such as pipes and message queues require four copies between kernel space and user space, whereas shared memory needs only two: one from the input file into the shared region, and one from the shared region to the output file. In practice, communicating processes do not unmap the region after each small exchange and re-create it for the next one; they keep the shared region until communication is finished, so the data stays in shared memory and is not written back to the file. The contents of shared memory are typically written back to the file only when the mapping is removed. Shared-memory communication is therefore very efficient. The mmap mechanism (PosixSegment in Cyber's shared-memory transport): memory mapping via mmap is a POSIX-standard system call, mmap(
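
A minimal sketch of the POSIX mmap mechanism described above, assuming a made-up object name /demo_shm and a fixed size; production code such as Cyber's PosixSegment adds synchronization around reads and writes (link with -lrt on older glibc):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Create (or open) a named shared-memory object and size it. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

    /* Map it; another process mapping the same name sees the same bytes. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from process A");   /* written directly, no copy through the kernel */

    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_shm");   /* remove the name when done */
    return 0;
}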

overhead of reserving address space using mmap

Submitted by 两盒软妹~` on 2020-01-15 04:00:27
Question: I have a program that routinely uses massive arrays, where the memory is allocated using mmap. Does anyone know the typical overhead of allocating address space in large amounts before the memory is committed, either when allocating with MAP_NORESERVE or when backing the space with a sparse file? It strikes me that mmap can't be free, since it must make page-table entries for the allocated space. I want to have some idea of this overhead before implementing an algorithm I'm considering. Obviously the
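
For reference, Linux populates page tables lazily: an untouched MAP_NORESERVE mapping costs roughly one VMA record in the kernel, and page-table entries appear only when pages are first faulted in. A small sketch to observe this, assuming Linux with overcommit permitted (the 64 GiB figure is arbitrary):

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1UL << 36;   /* 64 GiB of address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Only now does the kernel fault in a page and create its PTE. */
    p[0] = 1;
    p[1UL << 30] = 1;   /* second touched page, 1 GiB away */

    printf("reserved %zu bytes at %p; compare RSS vs VSZ in /proc/self/status\n",
           len, (void *)p);
    munmap(p, len);
    return 0;
}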

How do I implement dynamic shared memory resizing?

Submitted by 故事扮演 on 2020-01-14 09:56:10
Question: Currently I use shm_open to get a file descriptor and then use ftruncate and mmap whenever I want to add a new buffer to the shared memory. Each buffer is used individually for its own purposes. Now what I need to do is resize buffers arbitrarily, and also munmap buffers and reuse the freed space later. The only solution I can come up with for the first problem is: ftruncate(file_size + old_buffer_size + extra_size), mmap, copy the data across into the new buffer, and then munmap the original
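
One way to avoid the mmap-copy-munmap dance, sketched below under Linux-only assumptions (the name /demo_shm and the sizes are illustrative): grow the object with ftruncate, then let mremap with MREMAP_MAYMOVE relocate the mapping while preserving its contents:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t old_size = 4096, new_size = 8 * 4096;

    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, old_size) == -1) { perror("ftruncate"); return 1; }

    void *buf = mmap(NULL, old_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Grow the underlying object, then the mapping; the kernel moves the
     * pages for us, so existing contents survive without an explicit copy. */
    if (ftruncate(fd, new_size) == -1) { perror("ftruncate"); return 1; }
    buf = mremap(buf, old_size, new_size, MREMAP_MAYMOVE);
    if (buf == MAP_FAILED) { perror("mremap"); return 1; }

    printf("resized mapping now at %p\n", buf);
    munmap(buf, new_size);
    close(fd);
    shm_unlink("/demo_shm");
    return 0;
}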

Linux kernel huge pages: code analysis and usage

Submitted by 混江龙づ霸主 on 2020-01-13 23:22:39
Preface: The huge-page implementation involves two modules, hugetlb and hugetlbfs. hugetlb is effectively the huge-page manager: allocation and freeing of pages are its responsibility. hugetlbfs gives users a filesystem-based interface for working with huge pages; its underlying functionality relies on hugetlb. Contents: 1. the hugetlb module; 2. the hugetlbfs module; 3. using huge pages (3.1 via mmap, 3.2 via shared memory). 1. The hugetlb module: struct hstate hstates[HUGE_MAX_HSTATE]; defines an array of hstate elements, each of which is a huge-page pool. Different pools have different huge-page sizes, for example 2 MB, 4 MB, or 1 GB; a system can have several pools, each with its own page size. max_hstate records how many hstates currently exist, i.e. how many leading elements of the array are valid. By default, hugetlb_init creates one hstate whose page size is the default huge-page size, and that becomes the default hstate. If hugetlb.c is compiled into the kernel and the kernel command line contains a hugepagesz= option, setup_hugepagesz is called to handle it; setup_hugepagesz should run before hugetlb_init. setup_hugepagesz calls hugetlb_add
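
A minimal sketch of the mmap usage path from section 3.1, assuming pages have been reserved in the default huge-page pool (for example via /proc/sys/vm/nr_hugepages); MAP_HUGETLB routes the allocation through the hugetlb module described above:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 2 * 1024 * 1024;   /* one 2 MB huge page (default hstate on x86) */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }

    p[0] = 42;   /* faults in one huge page from the pool */
    printf("huge-page mapping at %p\n", (void *)p);

    munmap(p, len);
    return 0;
}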

How do I mmap a _particular_ region in memory?

Submitted by 故事扮演 on 2020-01-13 18:54:47
Question: I have a program, and I want it to be able to mmap a particular region of memory across different runs. I have the source code of the program (C/C++), and I control how it is compiled (gcc), how it is linked (gcc), and how it is run (Linux). I just want this particular region of memory, say 0xabcdabcd to 0xdeadbeef, to be mmapped to a particular file. Is there any way to guarantee this? (I have to somehow make sure that other things aren't loaded into this
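
One commonly suggested direction, sketched here with assumptions the post does not state: request an exact, page-aligned address (0xabcdabcd is not page-aligned, so an illustrative 64-bit aligned address is used) via MAP_FIXED_NOREPLACE on Linux 4.17+, which fails with EEXIST instead of silently clobbering an existing mapping the way plain MAP_FIXED would:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    void *want = (void *)0x7f0000000000;   /* illustrative page-aligned address */
    size_t len = 4096;

    int fd = open("backing.dat", O_RDWR | O_CREAT, 0666);
    if (fd == -1) { perror("open"); return 1; }
    if (ftruncate(fd, len) == -1) { perror("ftruncate"); return 1; }

    void *p = mmap(want, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED_NOREPLACE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }   /* EEXIST if occupied */

    /* Older kernels ignore unknown flags and may place the mapping elsewhere,
     * so verify we actually got the requested address. */
    if (p != want) { fprintf(stderr, "kernel placed mapping elsewhere\n"); return 1; }

    printf("mapped at %p as requested\n", p);
    munmap(p, len);
    close(fd);
    return 0;
}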

In Linux, how to create a file descriptor for a memory region

Submitted by Deadly on 2020-01-12 06:50:07
Question: I have a program that handles data either in a file or in a memory buffer, and I want a uniform way to handle both cases. I could either 1) mmap the file, so both cases are handled uniformly as a memory buffer, or 2) create a FILE* using fopen and fmemopen, so both are accessed uniformly as a FILE*. However, I can't use either approach: I need to handle both as file descriptors, because one of the libraries I use only takes a file descriptor, and it calls mmap on that descriptor. So my
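
A sketch of one Linux-specific possibility, assuming kernel 3.17+ and glibc 2.27+: memfd_create returns a genuine file descriptor backed by anonymous memory, so the in-memory case can be handed to the fd-only library (which may then mmap it) just like the file case; fd_for_buffer is a made-up helper name:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int fd_for_buffer(const void *buf, size_t len) {
    int fd = memfd_create("inmem", MFD_CLOEXEC);
    if (fd == -1) return -1;
    if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
    lseek(fd, 0, SEEK_SET);   /* rewind so the consumer reads from the start */
    return fd;
}

int main(void) {
    const char data[] = "payload that only exists in memory";
    int fd = fd_for_buffer(data, sizeof data);
    if (fd == -1) { perror("memfd_create/write"); return 1; }
    printf("fd %d now behaves like a regular file of %zu bytes\n", fd, sizeof data);
    close(fd);
    return 0;
}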