mmap

How would I design and implement a non-blocking memory mapping module for node.js

眉间皱痕 submitted on 2020-02-20 07:46:47
Question: There is an mmap module for node.js: https://github.com/bnoordhuis/node-mmap/ As its author Ben Noordhuis notes, accessing mapped memory can block, which is why he no longer recommends the module and has discontinued it. So how would I design a non-blocking memory-mapping module for node.js? Threading, fibers, something else? Obviously this immediately raises the question of whether threading in node.js would just make the blocking happen somewhere other than the request handler.

Answer 1: When talking about implementing some native
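The answer above is cut off, so here is a purely illustrative sketch of the "threading" option raised in the question: a node.js addon can push blocking work onto libuv's thread pool with uv_queue_work, so that page faults on the mapped region stall a worker thread rather than the event loop. Everything here (read_mapped_async, fault_in_range, the fixed 4096-byte page stride) is my invention for illustration, not the design of node-mmap or any real module:

```c
#include <uv.h>
#include <stdlib.h>

typedef struct {
    uv_work_t req;     /* libuv work request; must outlive the job */
    char     *mapped;  /* base of an already mmap'ed region */
    size_t    length;  /* number of bytes to touch */
    void    (*done)(char *data, size_t len);  /* completion callback */
} map_read_job_t;

/* Runs on a thread-pool thread: touching the pages may fault and wait
 * on disk, but only this worker sleeps, never the event loop. */
static void fault_in_range(uv_work_t *req) {
    map_read_job_t *job = (map_read_job_t *) req->data;
    volatile char sink = 0;
    for (size_t off = 0; off < job->length; off += 4096)
        sink ^= job->mapped[off];  /* force each page into RAM */
    (void) sink;
}

/* Runs back on the event-loop thread once the pages are resident. */
static void after_fault_in(uv_work_t *req, int status) {
    map_read_job_t *job = (map_read_job_t *) req->data;
    if (status == 0)
        job->done(job->mapped, job->length);
    free(job);
}

/* Queue the potentially blocking work and return immediately. */
int read_mapped_async(uv_loop_t *loop, char *mapped, size_t length,
                      void (*done)(char *, size_t)) {
    map_read_job_t *job = malloc(sizeof *job);
    if (!job) return -1;
    job->mapped = mapped;
    job->length = length;
    job->done = done;
    job->req.data = job;
    return uv_queue_work(loop, &job->req, fault_in_range, after_fault_in);
}
```

Fibers would not help by themselves: a page fault blocks the whole OS thread, so some form of real threading (or an I/O hint such as madvise(MADV_WILLNEED) followed by polling mincore) is needed to keep the event loop responsive.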

Study Notes on "Fun Talk about the Linux Operating System" - Memory Management (Lecture 25): Memory Mapping, Part 1

时光总嘲笑我的痴心妄想 submitted on 2020-02-16 13:49:06
How mmap works: every process keeps a list of virtual memory areas, vm_area_struct:

```c
struct mm_struct {
    struct vm_area_struct *mmap;  /* list of VMAs */
    ......
};

struct vm_area_struct {
    /*
     * For areas with an address space and backing store,
     * linkage into the address_space->i_mmap interval tree.
     */
    struct {
        struct rb_node rb;
        unsigned long rb_subtree_last;
    } shared;

    /*
     * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
     * list, after a COW of one of the file pages. A MAP_SHARED vma
     * can only be in the i_mmap tree. An anonymous MAP_PRIVATE, stack
     * or brk vma (with NULL file) can only be in an anon_vma list.
     */
    ......
};
```
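To connect this to user space, here is a minimal sketch (my addition, not part of the original notes): every successful mmap() call creates one of these vm_area_struct entries, and /proc/self/maps prints exactly one line per VMA, so the effect is easy to observe:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    /* Anonymous, private mapping: the kernel records it as a new
     * vm_area_struct on this process's VMA list. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped one page at %p\n", p);
    /* One line per VMA; the address above appears as the start
     * of one of them. */
    system("cat /proc/self/maps");

    munmap(p, 4096);
    return 0;
}
```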

Notes on the Growth Direction of the Heap and the Stack

↘锁芯ラ submitted on 2020-02-14 23:28:28
Author: RednaxelaFX. Link: https://www.zhihu.com/question/36103513/answer/66101372 Source: Zhihu. Copyright belongs to the author; for commercial reprints contact the author for authorization, for non-commercial reprints credit the source.

1. The heap has no growth direction to speak of; heap allocations are scattered.
2. Neither the heap nor the stack is inherently at the higher address; that depends on the operating system's implementation.
3. Indexing into an array always moves toward higher addresses, regardless of whether the array lives on the heap or the stack.

Short answer: the layout of a process's address space depends on the operating system, and the direction of stack growth depends on the combination of operating system and CPU. Do not assume other operating systems' implementations carry over to Windows. The stack directly supported by x86 hardware does indeed grow "downward": the push instruction decrements sp by one slot, and the pop instruction increments sp by one slot. Other hardware has its own conventions.

==========================================

Stack growth direction and stack frame layout

The "stack" in this context is the function call stack, organized in units of stack frames. Each function call allocates a new frame on the stack, and that space is released when the call returns. The position of the callee's frame relative to the caller's frame reveals the growth direction: if the callee's frame sits at a lower address than the caller's, the stack grows downward; otherwise it grows upward. Within a single frame, how local variables are laid out (the so-called stack frame layout, stack frame
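A quick way to see the direction on a given OS/CPU combination is to compare the address of a caller's local with a callee's. This is my illustration, not the answer's: comparing addresses across frames is not guaranteed by the C standard, and the compiler must not inline the call (build with -O0), so treat it as an observation tool rather than portable code:

```c
#include <stdio.h>

/* The callee's frame is allocated after the caller's, so the relative
 * order of their locals reflects the growth direction. */
static void callee(char *caller_local) {
    char callee_local;
    if (&callee_local < caller_local)
        puts("stack grows downward on this OS/CPU combination");
    else
        puts("stack grows upward on this OS/CPU combination");
}

int main(void) {
    char caller_local;
    callee(&caller_local);
    return 0;
}
```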

Shared-Memory Communication in Python

佐手、 submitted on 2020-02-06 14:49:51
Python script that creates and writes the shared memory:

```python
import mmap
import contextlib
import time

# tagname is Windows-only: mmap(-1, ...) maps anonymous shared memory
# that other processes can open by the same tag.
with contextlib.closing(mmap.mmap(-1, 100, tagname='SASU',
                                  access=mmap.ACCESS_WRITE)) as m:
    for i in range(1, 10001):
        m.seek(0)
        m.write(str(i).encode())
        m.flush()
        time.sleep(1)
```

Python script that reads the shared memory:

```python
import mmap
import contextlib
import time

while True:
    with contextlib.closing(mmap.mmap(-1, 100, tagname='SASU',
                                      access=mmap.ACCESS_READ)) as m:
        s = m.read()
        print(s)
    time.sleep(1)  # poll once per second, matching the writer
```

Running these two scripts together gives simple shared-memory communication. In the same environment this can also interoperate with C# over the same shared memory; tested and working. Source: https://www.cnblogs.com/ming-4/p/12268359.html

/usr/local/lib/libz.a: could not read symbols: Bad value (64-bit Linux)

徘徊边缘 submitted on 2020-02-05 07:44:22
```
/usr/bin/ld: /usr/local/lib/libz.a(crc32.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libz.a: could not read symbols: Bad value
```

This generally appears only on 64-bit machines: the static libz.a was built without position-independent code, so it cannot be linked into a shared object. The fix is to rebuild zlib with -fPIC:

```
cd zlib-1.2.3                      # enter the zlib source directory
CFLAGS="-O3 -fPIC" ./configure     # build position-independent code
make
make install
make clean
```

A sample run of the above:

```
[root@unix-server1 zlib-1.2.3]# CFLAGS="-O3 -fPIC" ./configure --prefix=/usr/local/zlib/
Checking for gcc...
Building static library libz.a version 1.2.3 with gcc.
Checking for unistd.h... Yes.
Checking whether to use vs[n]printf() or s[n]printf()
```

SIGBUS while doing memcpy from mmap'ed buffer which is in RAM as identified by mincore

寵の児 submitted on 2020-02-02 09:59:16
Question: I am mmapping a block as:

```c
mapAddr = mmap((void*) 0, curMapSize, PROT_NONE, MAP_LOCKED|MAP_SHARED, fd, curMapOffset);
```

If this does not fail (mapAddr != MAP_FAILED), I query mincore as:

```c
err = mincore((char*) mapAddr, pageSize, &mincoreRet);
```

to find out whether it is in RAM. In case it is in RAM (err == 0 && mincoreRet & 0x01), I mmap it again for reading as:

```c
copyAddr = mmap((void*) 0, curMapSize, PROT_READ, MAP_LOCKED|MAP_SHARED, fd, curMapOffset);
```

and then I try to copy it out to my buffer as:

1039 Course List for Student (25)

余生颓废 submitted on 2020-01-31 12:37:51
Zhejiang University has 40000 students and provides 2500 courses. Now given the student name lists of all the courses, you are supposed to output the registered course list for each student who comes for a query. Input Specification: Each input file contains one test case. For each case, the first line contains 2 positive integers: N (<=40000), the number of students who look for their course lists, and K (<=2500), the total number of courses. Then the student name lists are given for the courses (numbered from 1 to K) in the following format: for each course i, first the course index i and

Unexpected exec permission from mmap when assembly files are included in the project

烂漫一生 submitted on 2020-01-30 14:11:58
Question: I am banging my head against the wall with this. In my project, when I allocate memory with mmap, the mapping (/proc/self/maps) shows a readable and executable region even though I requested only readable memory. After looking into strace (which looked fine) and other debugging, I was able to identify the only thing that seems to avoid this strange problem: removing the assembly files from the project and leaving only pure C. (What?!) So here is my strange example; I am working on
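For context while the question is cut off (this note is my addition): the commonly cited cause of exactly this symptom is a hand-written assembly file that lacks a .note.GNU-stack section, which makes the linker mark the binary as needing an executable stack; the kernel then applies READ_IMPLIES_EXEC to the whole process, so every readable mapping becomes executable. Adding `.section .note.GNU-stack,"",@progbits` to each .s file, or linking with `-z noexecstack`, is the usual fix. A minimal sketch for observing the effect:

```c
/* Request a read-only mapping, then print its line from /proc/self/maps
 * to see whether the kernel silently added exec permission. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    void *p = mmap(NULL, 4096, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    char line[256], addr[32];
    snprintf(addr, sizeof addr, "%lx-", (unsigned long) p);
    FILE *maps = fopen("/proc/self/maps", "r");
    while (maps && fgets(line, sizeof line, maps))
        if (strncmp(line, addr, strlen(addr)) == 0)
            fputs(line, stdout);  /* "r--p" is expected; "r-xp" means
                                     READ_IMPLIES_EXEC is in effect */
    if (maps) fclose(maps);
    return 0;
}
```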

An Analysis of malloc's Heap Allocation Algorithms

别来无恙 submitted on 2020-01-30 05:15:17
You cannot talk about C without talking about memory management, and memory management requires understanding malloc. Today I took a deep dive into how malloc's heap allocation algorithms work; my notes are organized below.

What is a heap allocation algorithm?

A program asks the operating system for a suitably sized chunk of heap space and then manages that space itself. Concretely, what manages heap allocation is the runtime library, that is, the code wrapped up behind the malloc function. The runtime library in effect buys a large block of heap space "wholesale" from the operating system and "retails" it to the program; when it is sold out, or the program suddenly needs a lot of memory, it restocks from the operating system as required. While retailing heap space, the runtime must keep track of what it has wholesaled so that it never sells the same region twice and causes address conflicts. The algorithm the runtime uses for this bookkeeping is the heap allocation algorithm.

Three allocation algorithms are common in malloc implementations: the free list, the bitmap, and the object pool.

1. Free list

The free-list approach links the free blocks on the heap into a linked list. When the user requests space, the allocator walks the list until it finds a block of suitable size and splits it; when the user frees space, adjacent free blocks are merged.

Structure of a free-list entry: header + free block. The header records the addresses of the previous [prev] and next [next] free blocks.

How is space allocated with this structure? First search the free list for a free block large enough for the request, then split that block into two parts: one part is the space the program requested, and the other is the leftover free space.

When freeing, the allocator receives only a pointer and cannot tell the block's size on its own, so how does it free the block? When the user requests k bytes of space
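The notes cut off here, so the following is my own minimal free-list sketch, not the original author's code. It keeps a size header in front of every returned pointer, which is the usual answer to the question above of how free() recovers a block's size from a bare pointer, and does a first-fit search with naive splitting; coalescing on free and alignment handling are omitted to keep it short:

```c
#include <stddef.h>
#include <stdint.h>

/* Every block, free or in use, starts with this header; free() can
 * recover the block size by stepping back sizeof(header_t) bytes. */
typedef struct header {
    size_t         size;  /* payload bytes that follow this header */
    struct header *next;  /* next free block (meaningful when free) */
} header_t;

static uint8_t   arena[1 << 16];  /* 64 KiB "wholesaled" once from the OS */
static header_t *free_list;
static int       initialized;

void *my_malloc(size_t k) {
    if (!initialized) {  /* lazily make the arena one big free block */
        free_list = (header_t *) arena;
        free_list->size = sizeof(arena) - sizeof(header_t);
        free_list->next = NULL;
        initialized = 1;
    }
    for (header_t **pp = &free_list; *pp; pp = &(*pp)->next) {
        header_t *blk = *pp;
        if (blk->size < k)
            continue;  /* first fit: keep walking the list */
        if (blk->size >= k + sizeof(header_t) + 8) {
            /* Split: the tail of this block becomes a new free block. */
            header_t *rest = (header_t *) ((uint8_t *) (blk + 1) + k);
            rest->size = blk->size - k - sizeof(header_t);
            rest->next = blk->next;
            blk->size = k;
            *pp = rest;
        } else {
            *pp = blk->next;  /* too small to split: hand out whole block */
        }
        return blk + 1;  /* payload starts right after the header */
    }
    return NULL;  /* arena exhausted */
}

void my_free(void *p) {
    if (!p) return;
    header_t *blk = (header_t *) p - 1;  /* step back to the header */
    blk->next = free_list;  /* push onto the list; no coalescing here */
    free_list = blk;
}
```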