Question
I'm using a Xilinx Zynq 7000 ARM-based SoC. I'm struggling with DMA buffers (see: Need help mapping pre-reserved **cacheable** DMA buffer on Xilinx/ARM SoC (Zynq 7000)), so one thing I pursued was a faster memcpy.
I've been looking at writing a faster memcpy for ARM using Neon instructions and inline asm. Whatever glibc has is terrible, especially if we're copying from an uncached DMA buffer.
I've put together my own copy function from various sources, including:
- Fast ARM NEON memcpy
- arm Inline assembly in gcc
- http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka13544.html
The main difference for me is that I'm trying to copy from an uncached buffer because it's a DMA buffer, and ARM support for cached DMA buffers is nonexistent.
So here's what I wrote:
```c
void my_copy(volatile unsigned char *dst, volatile unsigned char *src, int sz)
{
    /* Round sz up to the next multiple of 64 bytes; the loop copies
     * 64 bytes per iteration, so a partial tail is overcopied. */
    if (sz & 63) {
        sz = (sz & -64) + 64;
    }

    asm volatile (
        "NEONCopyPLD:                  \n"
        "    VLDM %[src]!, {d0-d7}     \n"   /* load 64 bytes, post-increment src */
        "    VSTM %[dst]!, {d0-d7}     \n"   /* store 64 bytes, post-increment dst */
        "    SUBS %[sz], %[sz], #0x40  \n"   /* sz -= 64, set flags */
        "    BGT  NEONCopyPLD          \n"   /* loop while sz > 0 */
        : [dst]"+r"(dst), [src]"+r"(src), [sz]"+r"(sz)
        :
        : "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "cc", "memory");
}
```
The main thing I did was leave out the prefetch instruction because I figured it would be worthless on uncached memory.
Doing this resulted in a speedup of 4.7x over the glibc memcpy. Speed went from about 70MB/sec to about 330MB/sec.
Unfortunately, this isn't nearly as fast as memcpy from cached memory, which runs at around 720MB/sec for the system memcpy and 620MB/sec for the Neon version (probably slower because my memcpy doesn't do prefetching).
Can anyone help me figure out what I can do to make up for this performance gap?
I've tried a number of things, like copying more at once (two loads followed by two stores). I could try prefetch just to prove that it's useless. Any other ideas?
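For context, throughput numbers like the ones above can be obtained with a simple timing loop along these lines (a sketch only; the buffer size and iteration count are placeholders, not the exact values behind the figures quoted here):

```c
/* Sketch only: measure copy throughput in MB/s using my_copy() from
 * above. BUF_SIZE and ITERS are placeholders. */
#include <time.h>

#define BUF_SIZE (4 * 1024 * 1024)   /* placeholder: 4 MiB per copy */
#define ITERS    100

double copy_mb_per_sec(volatile unsigned char *dst,
                       volatile unsigned char *src)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        my_copy(dst, src, BUF_SIZE);      /* or memcpy() for the baseline */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) +
                  (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)BUF_SIZE * ITERS / (1024.0 * 1024.0) / secs;
}
```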
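If I do get around to trying prefetch, a minimal sketch of the same loop with a PLD added would look something like this (the 192-byte prefetch distance is an arbitrary guess, and I expect the PLD to be useless on an uncached mapping):

```c
/* Sketch only: same copy loop as above, with a PLD hint ahead of the load.
 * The 192-byte prefetch distance is a guess, not a tuned value. */
void my_copy_pld(volatile unsigned char *dst, volatile unsigned char *src, int sz)
{
    if (sz & 63) {
        sz = (sz & -64) + 64;
    }

    asm volatile (
        "NEONCopyWithPLD:              \n"
        "    PLD  [%[src], #192]       \n"   /* hint: prefetch a few iterations ahead */
        "    VLDM %[src]!, {d0-d7}     \n"
        "    VSTM %[dst]!, {d0-d7}     \n"
        "    SUBS %[sz], %[sz], #0x40  \n"
        "    BGT  NEONCopyWithPLD      \n"
        : [dst]"+r"(dst), [src]"+r"(src), [sz]"+r"(sz)
        :
        : "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "cc", "memory");
}
```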
Answer 1:
You can try to use buffered memory rather than non-cached memory.
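For example (a sketch only, not from the original answer): on ARM, "buffered" means Normal, non-cacheable, bufferable memory, i.e. write-combining, and in a Linux driver such a buffer can be requested roughly as below. `my_dev`, `BUF_SIZE`, `my_alloc_buffer` and `my_mmap` are placeholders, and the helper is named `dma_alloc_writecombine()` on older kernels (`dma_alloc_wc()` on newer ones).

```c
/* Sketch only: allocate the DMA buffer as write-combining ("buffered")
 * memory instead of a plain uncached/strongly-ordered mapping. */
#include <linux/dma-mapping.h>
#include <linux/mm.h>

#define BUF_SIZE (1 << 20)            /* placeholder: 1 MiB */

static void *cpu_addr;                /* kernel virtual address of the buffer */
static dma_addr_t dma_handle;         /* bus address handed to the DMA engine */

static int my_alloc_buffer(struct device *my_dev)
{
    /* Normal, non-cacheable, bufferable: stores can be merged in the
     * write buffer, unlike with a strongly-ordered mapping. */
    cpu_addr = dma_alloc_writecombine(my_dev, BUF_SIZE,
                                      &dma_handle, GFP_KERNEL);
    return cpu_addr ? 0 : -ENOMEM;
}

/* If the buffer is exported to user space, keep the same attributes: */
static int my_mmap(struct device *my_dev, struct vm_area_struct *vma)
{
    return dma_mmap_writecombine(my_dev, vma, cpu_addr, dma_handle,
                                 BUF_SIZE);
}
```

Write-combining keeps the buffer coherent with the DMA engine (no cache to flush or invalidate) while letting the CPU's write buffer merge the Neon stores into larger bursts, which is where most of the uncached-copy penalty comes from.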
Source: https://stackoverflow.com/questions/34888683/arm-neon-memcpy-optimized-for-uncached-memory