huge-pages

Java periodically hangs at futex and very low IO output

我与影子孤独终老i submitted on 2019-12-04 01:34:58
Currently my application periodically blocks on I/O, and the output rate is very low. I used a few commands to trace the process. Using jstack I found that the app is hanging in FileOutputStream.writeBytes. Using strace -f -c -p pid to collect syscall statistics, I found that in the normal situation there are both futex and write syscalls, but when it goes abnormal there are only futex syscalls. The app keeps calling futex, and every call fails with ETIMEDOUT, like this: <futex resumed> = -1 ETIMEDOUT (Connection timed out) futex(0x7f823, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7f824, FUTEX_WAIT_BITSET

How do I allocate a DMA buffer backed by 1GB HugePages in a linux kernel module?

主宰稳场 submitted on 2019-11-30 19:32:58
I'm trying to allocate a DMA buffer for an HPC workload. It requires 64 GB of buffer space; in between computations, some data is offloaded to a PCIe card. Rather than copying data into a bunch of dinky 4 MB buffers given by pci_alloc_consistent, I would like to just create 64 1 GB buffers, backed by 1 GB HugePages. Some background info: kernel version: CentOS 6.4 / 2.6.32-358.el6.x86_64; kernel boot options: hugepagesz=1g hugepages=64 default_hugepagesz=1g; relevant portion of /proc/meminfo: AnonHugePages: 0 kB, HugePages_Total: 64, HugePages_Free: 64, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize:
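The excerpt is cut off before any code, so here is a minimal user-space sketch (not the in-kernel allocation the question ultimately asks about), assuming the boot options quoted above so that default_hugepagesz=1g makes a plain MAP_HUGETLB mapping hand out 1 GB pages; a kernel module would still have to pin and translate such pages (e.g. via get_user_pages) before programming DMA.

```c
/* Sketch: reserve a few 1 GB huge pages from user space with mmap(MAP_HUGETLB).
 * Relies on default_hugepagesz=1g from the boot options above; on newer
 * kernels the page size can instead be selected with MAP_HUGE_SHIFT flags. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define ONE_GB (1UL << 30)

int main(void)
{
    size_t len = 4 * ONE_GB;              /* four 1 GB huge pages */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {              /* fails if the huge page pool is too small */
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }
    memset(buf, 0, len);                  /* fault the huge pages in */
    printf("mapped %zu bytes of huge pages at %p\n", len, buf);
    munmap(buf, len);
    return 0;
}
```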

How to release hugepages from the crashed application

﹥>﹥吖頭↗ submitted on 2019-11-30 09:10:36
I have an application that uses hugepages, and it suddenly crashed due to some bug. After the crash, since the application did not release the hugepages properly, the free hugepage count is not restored in the sys filesystem: $ sudo cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages returns 0, while $ sudo cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages returns 1024. Is there a way to release the hugepages by force? HugeTLB can either be used for shared memory (and Mark J. Bobak's answer would deal with that), or the app mmaps files created in a hugetlb filesystem. If the app crashes
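The answer excerpt is truncated; below is a hedged sketch of the second case it mentions, files created on a hugetlb filesystem. The mount point /mnt/huge is an assumption for illustration, not from the question: deleting a crashed process's leftover files on such a mount returns their huge pages to the free pool, whereas SysV shared-memory segments would instead be removed with ipcrm.

```c
/* Sketch: unlink stale files on a (hypothetical) hugetlbfs mount so their
 * huge pages go back to the free pool after the owning process crashed. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *mnt = "/mnt/huge";        /* assumed hugetlbfs mount point */
    DIR *d = opendir(mnt);
    if (!d) {
        perror("opendir");
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        char path[4096];
        snprintf(path, sizeof(path), "%s/%s", mnt, e->d_name);
        if (unlink(path) == 0)            /* removing the file frees its huge pages */
            printf("removed %s\n", path);
        else
            perror(path);
    }
    closedir(d);
    return 0;
}
```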

Using mmap and madvise for huge pages

旧城冷巷雨未停 submitted on 2019-11-29 17:15:44
Question: I want to allocate memory backed by huge pages on a Linux machine. I see that there are two ways to do this: using mmap and using madvise. That is, using the MAP_HUGETLB flag with the mmap call - base_ptr_ = mmap(NULL, memory_size_, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0); and the MADV_HUGEPAGE flag with the madvise call - madvise(base_ptr_, memory_size_, MADV_HUGEPAGE); Could someone explain the difference between the two? Answer 1: Both functions perform
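The answer is cut off after "Both functions perform", so the sketch below only restates the question's two snippets side by side in runnable form; the names memory_size_ and base_ptr_ follow the question, and the 64 MB size is an arbitrary choice for the example (for the MAP_HUGETLB case it must be a multiple of the huge page size).

```c
/* Sketch contrasting the question's two approaches to huge pages. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t memory_size_ = 64UL << 20;     /* 64 MB, arbitrary example size */

    /* 1) Explicit huge pages from the pre-reserved hugetlbfs pool:
     *    fails with ENOMEM if the pool (nr_hugepages) is too small. */
    void *base_ptr_ = mmap(NULL, memory_size_, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (base_ptr_ == MAP_FAILED)
        perror("mmap(MAP_HUGETLB)");
    else
        munmap(base_ptr_, memory_size_);

    /* 2) Transparent huge pages: map ordinary anonymous memory and merely
     *    hint that the kernel may back it with huge pages when possible. */
    void *thp_ptr = mmap(NULL, memory_size_, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (thp_ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (madvise(thp_ptr, memory_size_, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");
    munmap(thp_ptr, memory_size_);
    return 0;
}
```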
