What is the right way to update the MMU translation table?

artless noise

Note: I assume TTB_BASE is the primary ARM L1 page table. If it is some shadow copy, you need to show more code, as per unixsmurf. Here is my best guess...

Your page_dir is functioning as both the primary L1 table and as the L2 fine page table. TTB_BASE should only contain sections, super-sections, or pointers to L2 page tables. You need to allocate additional physical memory for the L2 page tables.
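To make that concrete, here is a minimal sketch of the split L1/L2 layout. It assumes the classic ARMv5 short-descriptor bit layout (ARMv6+ adds XN/TEX/APX/NS bits that are omitted), identity-mapped table memory (virtual == physical for the tables themselves), and a hypothetical alloc_aligned() helper; it is not your code, just an illustration of keeping the L2 tables in their own memory.

#include <stdint.h>
#include <string.h>

extern void *alloc_aligned(size_t size, size_t align);   /* hypothetical allocator */

/* L1 table: 4096 entries * 4 bytes = 16 KB, 16 KB aligned (what TTB_BASE points at).
 * L2 coarse table: 256 entries * 4 bytes = 1 KB, 1 KB aligned, one per mapped MB.  */
void map_small_page(uint32_t *l1, uint32_t va, uint32_t pa)
{
    uint32_t *l2;

    /* Assumes the L1 slot is either empty (fault) or already a coarse-table pointer. */
    if ((l1[va >> 20] & 0x3u) != 0x1u) {
        l2 = alloc_aligned(1024, 1024);               /* separate memory for the L2 table */
        memset(l2, 0, 1024);                          /* all entries start as faults      */
        /* Coarse-table descriptor: L2 base[31:10] | domain 0 | type 0b01 */
        l1[va >> 20] = ((uint32_t)(uintptr_t)l2 & 0xFFFFFC00u) | 0x1u;
    } else {
        l2 = (uint32_t *)(uintptr_t)(l1[va >> 20] & 0xFFFFFC00u);
    }
    /* Small-page descriptor: base[31:12] | AP3..AP0 = full access | type 0b10 (uncached) */
    l2[(va >> 12) & 0xFFu] = (pa & 0xFFFFF000u) | 0xFF0u | 0x2u;
}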

I guess that page_dir[i+0], page_dir[i+1], etc. are overwriting other L1 section entries, and your invalidate_tlb() makes this visible to the CPU. You should be using the L2 page_tbl pointer to set/index the small/fine pages. Perhaps your kernel code is in the 3G-4G space and you are over-writing some critical L1 mappings there; any number of strange things could happen. The current use of page_tbl is unclear.

You cannot double-use the primary L1 table entries, as their layout has meaning to the hardware. I guess this is just a mistake?

There are only three types of entry in the primary L1 page table:

  1. Sections - a 1MB memory mapping straight virt to phys, with no 2nd page table.
  2. Super-sections - a 16MB memory mapping as per sections, but minimizing TLB pressure.
  3. A pointer to a 2nd-level page table. L2 pages can be 1k (tiny), 4k (small), or 64k (large). Fine page tables are 4k in table size and are not used in modern ARM designs; each entry in a fine page table maps 1k of address space. A sketch of the three L1 descriptor encodings follows this list.
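For reference, a hedged sketch of those three encodings, using the classic short-descriptor layout. The AP/domain values are only examples; ARMv4/v5 parts also want bit 4 set in section descriptors, ARMv6+ redefines that bit as XN, and the C/B cacheability bits are omitted, so check your core's manual for the exact fields.

#define AP_RW  0x3u                                   /* full access           */

/* 1 MB section: base[31:20] | AP[11:10] | domain[8:5] | type 0b10            */
#define L1_SECTION(pa, ap, dom) \
    (((pa) & 0xFFF00000u) | ((ap) << 10) | ((dom) << 5) | 0x2u)

/* 16 MB super-section (ARMv6+): base[31:24] | bit 18 set | AP | type 0b10    */
#define L1_SUPERSECTION(pa, ap) \
    (((pa) & 0xFF000000u) | (1u << 18) | ((ap) << 10) | 0x2u)

/* Pointer to a 1 KB coarse L2 table: base[31:10] | domain[8:5] | type 0b01   */
#define L1_COARSE(l2_pa, dom) \
    (((l2_pa) & 0xFFFFFC00u) | ((dom) << 5) | 0x1u)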

The primary page table must be at a physical address that is 16k aligned, which is also its size: 16k / (4 bytes per entry) = 4096 entries of 1MB each, covering the full 4GB address range of the L1 table. So each entry in the primary L1 page table always describes 1MB of virtual address space. Coarse page tables are the only L2 option on newer ARMs, and each one is a 1K table.

1K L2 table / (4 bytes per entry) = 256 entries; 256 entries * 4K small pages = 1MB of address space.
The same 1K table holds 16 large pages (each 64K, replicated over 16 entries); 16 * 64K = 1MB of address space.
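The same arithmetic written out as compile-time checks (C11 static_assert), just to make the sizes explicit:

#include <assert.h>

/* L1: 16 KB / 4 bytes per entry = 4096 entries, 1 MB each -> 4 GB.        */
static_assert((16u * 1024u / 4u) * (1ull << 20) == (1ull << 32),
              "L1 table covers the 4 GB address space");
/* L2 coarse: 1 KB / 4 bytes per entry = 256 small pages * 4 KB -> 1 MB.   */
static_assert((1024u / 4u) * (4u * 1024u) == (1u << 20),
              "small pages cover 1 MB per coarse table");
/* 64 KB large pages use 16 replicated entries: 16 pages * 64 KB -> 1 MB.  */
static_assert((1024u / 4u / 16u) * (64u * 1024u) == (1u << 20),
              "large pages cover 1 MB per coarse table");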

A super-section is written as 16 identical consecutive entries in the primary L1 table (16 x 1MB). Likewise, a 64k large page is written as 16 identical consecutive entries in the L2 table. If you are using super-sections, you do not have L2 entries at all. A sketch of this replication follows below.
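A minimal sketch of that 16-way replication, again assuming the ARMv5-style short-descriptor encodings with full-access AP bits and uncached C/B settings; the helper names are hypothetical.

#include <stdint.h>

void map_supersection(uint32_t *l1, uint32_t va, uint32_t pa)
{
    /* 16 MB super-section: base[31:24] | bit 18 | AP = full access | type 0b10 */
    uint32_t d = (pa & 0xFF000000u) | (1u << 18) | (0x3u << 10) | 0x2u;
    for (int i = 0; i < 16; i++)        /* 16 consecutive 1 MB L1 slots */
        l1[(va >> 20) + i] = d;         /* va, pa must be 16 MB aligned */
}

void map_large_page(uint32_t *l2, uint32_t va, uint32_t pa)
{
    /* 64 KB large page: base[31:16] | AP3..AP0 = full access | type 0b01 */
    uint32_t d = (pa & 0xFFFF0000u) | 0xFF0u | 0x1u;
    uint32_t idx = (va >> 12) & 0xF0u;  /* start of a 16-entry group    */
    for (int i = 0; i < 16; i++)        /* 16 consecutive 4 KB L2 slots */
        l2[idx + i] = d;                /* va, pa must be 64 KB aligned */
}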

I think you may have super-sections and large pages mixed up. What is certain is that improperly formatted page tables will only manifest after a TLB invalidate, when the MMU mappings are re-fetched from the tables via a table walk.

Finally, you should clean the D cache and drain the write buffer so the table updates are actually pushed out to memory where the table walker can see them. You may wish to add a memory barrier as well.

static inline void dcache_clean(void)
{
    const int zero = 0;
    asm volatile ("" ::: "memory"); /* compiler barrier */
    /* Clean entire D cache -> push dirty lines to external memory
       (test-and-clean loop, as on ARM926-class cores). */
    asm volatile ("1: mrc p15, 0, r15, c7, c10, 3\n"
                  "   bne 1b\n" ::: "cc");
    /* Drain the write buffer. */
    asm volatile ("mcr p15, 0, %0, c7, c10, 4" :: "r" (zero));
}
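On ARMv7-class cores the CP15 drain-write-buffer operation is deprecated in favour of the barrier instructions; a minimal sketch of the equivalent step (requires an ARMv7 target):

static inline void barrier_v7(void)
{
    asm volatile ("dsb\n\t"      /* ensure the table writes have completed */
                  "isb"          /* flush the pipeline                     */
                  ::: "memory");
}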

There are also co-processor commands to invalidate single TLB entries, which fits here since you are only changing a small portion of the vaddr/paddr mappings in the data fault handler. For example:
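A hedged sketch of a single-entry invalidate by modified virtual address (MVA), using the standard CP15 c8 encodings; on ARMv6+ cores with ASIDs the low bits of the register also carry the ASID, and on ARMv7 you would normally follow this with dsb/isb.

#include <stdint.h>

static inline void tlb_invalidate_mva(uint32_t va)
{
    /* Invalidate unified TLB entry by MVA: MCR p15, 0, Rd, c8, c7, 1.
       Cores with split TLBs use c8, c6 (data) and c8, c5 (instruction). */
    asm volatile ("mcr p15, 0, %0, c8, c7, 1" : : "r" (va & ~0xFFFu) : "memory");
}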

See also: ARM MMU tutorial, Virtual Memory structures, and your ARM Architecture Reference Manual.
