mmu

How are MTRR registers implemented? [closed]

情到浓时终转凉″ submitted on 2019-11-30 07:37:21
x86/x86-64 exposes MTRRs (Memory Type Range Registers) that can be used to designate different portions of the physical address space for different usages (e.g., cacheable, uncacheable, write-combining). My question is: does anybody know how these constraints on the physical address space, as defined by the MTRRs, are enforced in hardware? On each memory access, does the hardware check whether the physical address falls in a given range before the processor decides whether it should look up the cache, look up the write-combining buffer, or send the access to the memory controller directly? Thanks. Wikipedia says in

Do multi-core CPUs share the MMU and page tables?

偶尔善良 submitted on 2019-11-30 06:30:05
Question: On a single-core computer, one thread executes at a time. On each context switch the scheduler checks whether the new thread to schedule belongs to the same process as the previous one. If so, nothing needs to be done regarding the MMU (page table). Otherwise, the page table needs to be replaced with the new process's page table. I am wondering how things happen on a multi-core computer. I guess there is a dedicated MMU on each core, and if two threads of the same process are running

Describing Linux Memory: High Memory (Linux Memory Management, Part 5)

那年仲夏 submitted on 2019-11-30 05:52:49
Server architectures and shared-memory architectures. Date: 2016-06-14. Kernel version: Linux-4.7. Architecture: X86 & arm. Author: gatieme. GitHub: LinuxDeviceDrivers. CSDN: Linux内存管理 http://blog.csdn.net/vanbreaker/article/details/7579941 #1 Recap: earlier we covered server architectures (SMP, NUMA, MPP) and shared-memory architectures (UMA and NUMA). #1.1 The UMA and NUMA models. Shared-memory multiprocessors come in two models: the Uniform Memory Access (UMA) model and the Non-Uniform Memory Access (NUMA) model. UMA model: physical memory is shared uniformly by all processors, and every processor has the same access time to every memory word, which is why it is called uniform memory access. Each processor may have a private cache, and peripherals are also shared in some form. NUMA model: under NUMA, the processors are divided into "nodes", and each node is assigned its own local memory. Processors in every node can access all of the system's physical memory, but accessing memory inside the local node takes much less time than accessing memory in some remote node. ##1.2 How Linux describes physical memory. Linux manages physical memory in three levels: Level | Description

How does the kernel know which pages in the virtual address space correspond to a swapped-out physical page frame?

感情迁移 submitted on 2019-11-29 20:10:40
Consider the following situation: the kernel has exhausted physical RAM and needs to swap out a page. It picks the least recently used page frame, wants to swap its contents out to disk, and allocate the frame to another process. What bothers me is that this page frame may already be mapped to, generally speaking, several (identical) pages of several processes. The kernel has to somehow find all of those processes and mark the page as swapped out. How does it carry that out? Thank you. EDIT: Illustrations for the question: before the swapping, processes 1 and 2 had a shared Page 1, which

Do multi-core CPUs share the MMU and page tables?

情到浓时终转凉″ submitted on 2019-11-28 18:50:52
On a single-core computer, one thread executes at a time. On each context switch the scheduler checks whether the new thread to schedule belongs to the same process as the previous one. If so, nothing needs to be done regarding the MMU (page table). Otherwise, the page table needs to be replaced with the new process's page table. I am wondering how things happen on a multi-core computer. I guess there is a dedicated MMU on each core, and if two threads of the same process are running simultaneously on 2 cores, each core's MMU simply refers to the same page table. Is this true? Can

Page table in Linux kernel space during boot

家住魔仙堡 submitted on 2019-11-28 16:26:15
I feel confused about page table management in the Linux kernel. In kernel space, before paging is turned on, the kernel runs with a 1:1 (identity) mapping. After paging is turned on, the kernel has to consult the page tables to translate a virtual address into a physical memory address. Questions: at this point, after turning on paging, is kernel space still 1 GB (from 0xC0000000 - 0xFFFFFFFF)? And in the page tables of a kernel process, are only the page table entries (PTEs) in the range 0xC0000000 - 0xFFFFFFFF mapped? PTEs outside this range will not be mapped because

How does the kernel know which pages in the virtual address space correspond to a swapped-out physical page frame?

江枫思渺然 submitted on 2019-11-28 15:59:21
Question: This question was migrated from Unix & Linux Stack Exchange because it can be answered on Stack Overflow. Migrated 6 years ago. Consider the following situation: the kernel has exhausted physical RAM and needs to swap out a page. It picks the least recently used page frame, wants to swap its contents out to disk, and allocate the frame to another process. What bothers me is that this page frame may already be mapped to, generally speaking, several (identical) pages of several processes.

VIPT Cache: Connection between TLB & Cache?

瘦欲@ submitted on 2019-11-28 08:20:15
Question: I just want to clarify the concept and couldn't find detailed enough answers that throw some light on how everything actually works out in the hardware. Please provide any relevant details. In the case of VIPT caches, the memory request is sent in parallel to both the TLB and the cache. From the TLB we get the translated physical address. From the cache indexing we get a list of tags (e.g., from all the cache lines belonging to a set). Then the translated TLB address is matched against the list of

How many memory pages do C compilers on desktop OSes use to detect stack overflows?

帅比萌擦擦* submitted on 2019-11-28 07:33:25
Question: This question is related to, but different from, this one about variable-length arrays in C99. The answers point out that one danger of allocating variable-length arrays (or just large arrays of a fixed size) on the stack is that the allocation may fail silently, as opposed to, say, calling malloc, which explicitly tells the caller whether the allocation succeeded. Modern non-embedded compilation platforms use an invalid memory zone to detect some stack overflows at no additional cost (the

Difference between logical addresses and physical addresses?

﹥>﹥吖頭↗ submitted on 2019-11-28 03:12:06
I am reading Operating System Concepts and I am on the 8th chapter! However, I could use some clarification, or reassurance that my understanding is correct. Logical addresses: logical addresses are generated by the CPU, according to the book. What exactly does this mean? (In an execution-time address-binding system..) I assume that when code is compiled for a program, the program has no idea where the code will be loaded in memory. All the compiler does is set up a general sketch of the program layout and how the image should be laid out, but it doesn't assign any real addresses. When the program is