x86

Would having the call stack grow upward make buffer overruns safer?

情到浓时终转凉″ submitted on 2021-02-05 04:54:06

Question: Each thread has its own stack to store local variables, but stacks are also used to store return addresses when calling a function. In x86 assembly, esp points to the most-recently allocated end of the stack. On most CPUs today, the stack grows downward (toward lower addresses). This behavior enables arbitrary code execution: overflow a buffer and overwrite the saved return address. If the stack were to grow upward, such attacks would not be feasible. Is it safer to have the call stack grow upwards? Why …
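The premise can be checked empirically. A minimal sketch (GCC/Clang-specific `noinline` attribute, typical x86 assumptions): compare the address of a local variable in a callee with one in its caller. On a downward-growing stack the callee's frame sits at a lower address, which is why writing past the end of a local buffer moves up toward the caller's saved return address.

```c
#include <stdint.h>

/* On a downward-growing stack, a callee's locals live at LOWER addresses
 * than its caller's, so a buffer overflow walks toward the saved return
 * address. This is a sketch, not a guaranteed-portable test. */
__attribute__((noinline))
static uintptr_t callee_local_addr(void) {
    volatile int x = 0;           /* volatile: keep it on the stack */
    return (uintptr_t)&x;
}

int stack_grows_down(void) {
    volatile int y = 0;
    return callee_local_addr() < (uintptr_t)&y;
}
```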

Difference between JS and JL x86 instructions

强颜欢笑 submitted on 2021-02-04 20:53:17

Question: It seems both JS and JL can implement the comparison in the code snippet below (var >= 0), so what's the difference between using these two to implement if/else? By the way, the EFLAGS they test differ slightly, so I am also wondering why different flags are tested for a similar statement. int var; if (var >= 0) { ... } else { ... } Answer 1: JS jumps if the sign flag is set (SF=1), while JL jumps if the sign flag doesn't equal the overflow flag (SF != OF). There are situations where one of these …
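The difference between the two conditions can be made concrete by recomputing the flags in C. A sketch, assuming the standard x86 semantics of `cmp a, b` (it computes a - b and sets flags): JL (SF != OF) is true exactly when a < b as signed values, while JS (SF = 1) only looks at the result's sign bit, which is misleading when the subtraction overflows.

```c
#include <stdint.h>

/* Recompute SF and OF the way `cmp a, b` sets them, then evaluate
 * each jump's condition. */
typedef struct { int sf, of; } flags_t;

static flags_t cmp_flags(int32_t a, int32_t b) {
    int32_t r = (int32_t)((uint32_t)a - (uint32_t)b); /* wrap like hardware */
    flags_t f;
    f.sf = (r < 0);
    /* Signed overflow: operands had different signs and the result's
     * sign differs from a's. */
    f.of = ((a < 0) != (b < 0)) && ((r < 0) != (a < 0));
    return f;
}

/* JL: jump if SF != OF  (true signed a < b) */
int jl_taken(int32_t a, int32_t b) { flags_t f = cmp_flags(a, b); return f.sf != f.of; }
/* JS: jump if SF == 1   (result's sign bit, ignoring overflow) */
int js_taken(int32_t a, int32_t b) { flags_t f = cmp_flags(a, b); return f.sf; }
```

For comparing against the constant 0 the two happen to agree (subtracting 0 cannot overflow), but they diverge as soon as the subtraction wraps, e.g. `cmp INT32_MIN, 1`.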

Very large address copied as negative value

你。 submitted on 2021-02-04 19:51:10

Question: I was going through a binary file corresponding to a C program. I have a very large address stored in %eax. When I tried to view the value via gdb, it printed a negative value (reason here). Now when mov %eax, 0x4c(%esp) is performed, the resulting value in 0x4c(%esp) is sometimes positive and sometimes negative. This affects the cmp $0, 0x4c(%esp) instruction that follows! Can someone please explain this behavior? If this helps: core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), SVR4-style …
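The "negative address" is just two's-complement reinterpretation: when the top bit of a 32-bit address is set (>= 0x80000000), a signed view of the same bits is negative. A small sketch that reproduces the reinterpretation portably:

```c
#include <stdint.h>

/* What a signed view of a 32-bit pattern shows: values with the top bit
 * set map to negatives. The bits in the register never change; only the
 * interpretation (signed vs unsigned) does. */
int64_t as_signed32(uint32_t u) {
    return (u >= 0x80000000u) ? (int64_t)u - 0x100000000LL : (int64_t)u;
}
```

So a high address such as 0xFFFFD000 displays as -12288, and a signed `cmp $0` sees it as below zero even though, as an unsigned address, it is very large.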

Why is execution time of a process shorter when another process shares the same HT core

血红的双手。 submitted on 2021-02-04 19:00:49

Question: I have an Intel CPU with 4 HT cores (8 logical CPUs) and I built two simple processes. The first one: int main() { for(int i=0;i<1000000;++i) for(int j=0;j<100000;++j); } The second one: int main() { while(1); } Both are compiled with gcc without special options (i.e. with the default of -O0: no optimization, debug mode, keeping variables in memory instead of registers). When I run the first one on the first logical CPU (CPU0), and the other logical CPUs have a load near 0%, the …
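Reproducing measurements like this requires controlling which logical CPU each process runs on, so you decide whether the two share a hyperthreaded physical core (often sibling pairs like CPU0/CPU4 on a 4C/8T part, though the numbering is machine-specific). A Linux-specific sketch using `sched_setaffinity`:

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process to one logical CPU (Linux only). */
int pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* pid 0 = the calling process/thread */
    return sched_setaffinity(0, sizeof(set), &set);
}
```

From a shell, `taskset -c 0 ./prog` achieves the same pinning without code changes.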

x86 jnz after xor?

纵饮孤独 submitted on 2021-02-04 17:43:59

Question: After using IDA Pro to disassemble an x86 dll, I found this code (comments added by me in pseudo-C; I hope they're correct): test ebx, ebx ; if (ebx == false) jz short loc_6385A34B ; jump to 0x6385a34b mov eax, [ebx+84h] ; eax = *(ebx+0x84) mov ecx, [esi+84h] ; ecx = *(esi+0x84) mov al, [eax+30h] ; al = *(*(ebx+0x84)+0x30) xor al, [ecx+30h] ; al = al XOR *(*(esi+0x84)+0x30) jnz loc_6385A453 Let's make it simpler for me to understand: mov eax, b3h xor eax, d6h jnz ... How does the …
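The idiom in that disassembly can be modeled directly: XOR sets the zero flag (ZF) exactly when its operands are equal, so `xor al, mem` followed by `jnz` is simply an inequality test on two bytes. A sketch:

```c
#include <stdint.h>

/* `xor al, [mem]` leaves ZF set only when the two bytes match;
 * `jnz` then jumps when they differ. */
int jnz_taken_after_xor(uint8_t a, uint8_t b) {
    uint8_t result = a ^ b;   /* like `xor al, [ecx+30h]` */
    return result != 0;       /* ZF clear -> jnz taken */
}
```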

Understanding stack alignment enforcement

北城余情 submitted on 2021-02-04 13:57:34

Question: Consider the following C code: #include <stdint.h> void func(void) { uint32_t var = 0; return; } The unoptimized (i.e. -O0) assembly code generated by GCC 4.7.2 for the code above is: func: pushl %ebp movl %esp, %ebp subl $16, %esp movl $0, -4(%ebp) nop leave ret According to the stack-alignment requirements of the System V ABI, the stack must be aligned to 16 bytes before every call instruction (the stack boundary is 16 bytes by default when not changed with the option -mpreferred …
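The prologue arithmetic can be traced step by step. A sketch with a hypothetical starting address: the ABI only requires 16-byte alignment at the point of each `call`; inside `func`, GCC still pads the frame for a single 4-byte local up to a full 16 bytes (`subl $16, %esp`), rounding frames to its preferred 16-byte boundary even for a function that makes no calls.

```c
#include <stdint.h>

/* Walk %esp through func's prologue, starting from an esp value that is
 * 16-byte aligned at the call site (hypothetical address for illustration). */
uint32_t esp_after_prologue(uint32_t esp_before_call) {
    uint32_t esp = esp_before_call; /* 16-byte aligned before `call func` */
    esp -= 4;   /* `call func` pushes the 4-byte return address */
    esp -= 4;   /* pushl %ebp */
    esp -= 16;  /* subl $16, %esp: frame rounded up to 16 bytes */
    return esp;
}
```

Note that esp itself need not be 16-byte aligned between calls; only call sites carry the alignment requirement.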

How does the system choose the right Page Table?

大憨熊 submitted on 2021-02-04 10:30:08

Question: Let's focus on uniprocessor computer systems. When a process gets created, as far as I know, a page table is set up that maps the virtual addresses to the physical memory address space. Each process gets its own page table, stored in the kernel address space. But how does the MMU choose the right page table for the process, since there is not only one process running and many context switches will happen? Any help is appreciated! Best, Simon Answer 1: Processors have a privileged …
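The mechanism can be sketched as a toy model (the names here are illustrative, not a real kernel API): each process record keeps the physical address of its page-table root, and on a context switch the kernel loads that root into the MMU's privileged base register — CR3 on x86 — after which the MMU walks only that process's table.

```c
#include <stdint.h>

typedef struct { int pid; uintptr_t page_table_root; } process_t;

static uintptr_t cr3; /* stands in for the MMU's page-table base register */

/* On a context switch, the kernel (ring 0 only) points the MMU at the
 * incoming process's page table; x86 does this with `mov %reg, %cr3`. */
void context_switch(const process_t *next) {
    cr3 = next->page_table_root;
}

uintptr_t mmu_current_root(void) {
    return cr3;
}
```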

x86 Assembly (AT&T): How do I dynamically allocate memory to a variable at runtime?

穿精又带淫゛_ submitted on 2021-02-04 08:36:07

Question: I am trying to allocate an amount of space to a variable at runtime. I know that I can allocate a constant amount of space to a variable at compile time, for instance: .data variable: # Allocate 100 bytes for data .space 100 However, how do I allocate a variable amount of space at runtime? For instance, allocating %eax bytes of space to the variable? Answer 1: You can't dynamically allocate static storage. You need to use the stack, or malloc / mmap / whatever (sometimes …
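The two runtime-allocation routes the answer names can be sketched in C. The stack route — what a `sub %eax, %esp` does in assembly — corresponds to a C99 variable-length array, whose space vanishes when the function returns; the heap route goes through malloc (itself backed by brk/mmap system calls) and must be freed explicitly.

```c
#include <stdlib.h>
#include <string.h>

/* Stack route: a VLA compiles to roughly `sub <n>, %esp`. */
unsigned sum_on_stack(const unsigned char *src, size_t n) {
    unsigned char buf[n];        /* n bytes allocated on the stack */
    memcpy(buf, src, n);
    unsigned s = 0;
    for (size_t i = 0; i < n; i++) s += buf[i];
    return s;                    /* buf is released on return */
}

/* Heap route: the caller owns the block and must free() it. */
unsigned char *alloc_on_heap(size_t n) {
    return malloc(n);
}
```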