Question
In order to better understand Rust's panic/exception mechanisms, I wrote the following piece of code:
#![feature(libc)]
extern crate libc;

fn main() {
    let mut x: i32;
    unsafe {
        x = libc::getchar();
    }
    let y = x - 65;
    println!("{}", x);
    let z = 1 / y;
    println!("{}", z);
}
I wanted to check how Rust deals with division by zero. Originally I assumed it either took an unhandled SIGFPE to the face and died, or installed a handler and rerouted it to a panic (which can be dealt with nowadays?).
The code is verbose because I wanted to make sure that Rust does not do anything "smart" when it knows at compile-time that something is zero, hence the user input. Just give it an 'A' and it should do the trick.
I found out that Rust actually produces code that checks for zero division every time before the division happens. I even looked at the assembly for once. :-)
Long story short: can I disable this behaviour? I imagine that for larger datasets this could have quite a performance impact. Why not use the CPU's ability to detect this for us? Can I set up my own signal handler and deal with the SIGFPE instead?
According to an issue on GitHub, the situation must have been different some time ago.
I think checking every division beforehand is far from "zero-cost". What do you think? Am I missing something obvious?
Answer 1:
Long story short: Can I disable this behaviour?
Yes, you can: std::intrinsics::unchecked_div(a, b). The same applies to the remainder (that's what Rust calls modulo): std::intrinsics::unchecked_rem(a, b). I checked the assembly output to compare it to C++.
In the documentation it states:
This is a nightly-only experimental API. (core_intrinsics)
intrinsics are unlikely to ever be stabilized, instead they should be used through stabilized interfaces in the rest of the standard library
So you have to use a nightly build, and it is unlikely ever to land in the standard library in stabilized form, for the reasons Matthieu M. already pointed out.
Answer 2:
I think checking every division beforehand is far from "zero-cost". What do you think?
What have you measured?
The number of instructions executed is a very poor proxy for performance; vectorized code is generally more verbose, yet faster.
So the real question is: what is the cost of this branch?
Since intentionally dividing by 0 is rather unlikely, and doing it by accident is only slightly more likely, the branch will always be predicted correctly except when a division by 0 occurs. But then, given the cost of a panic, a mispredicted branch is the least of your worries.
Thus, the cost is:
- a slightly fatter assembly,
- an occupied slot in the branch predictor.
The exact impact is hard to quantify, and for math-heavy code it might matter. Though I would remind you that an integer division is ~100 cycles¹ to start with, so math-heavy code will shy away from it as much as possible (it is maybe THE single most time-consuming instruction in your CPU).
¹ See Agner Fog's instruction tables: for example, on Intel Nehalem, DIV and IDIV on 64-bit integers have a latency of 28 to 90 cycles and 37 to 100 cycles respectively.
Beyond that, rustc is implemented on top of LLVM, to which it delegates actual code generation. Thus, rustc is at the mercy of LLVM for a number of cases, and this is one of them.
LLVM has two integer division instructions: udiv and sdiv.
Both have Undefined Behavior with a divisor of 0.
Rust aims at eliminating Undefined Behavior, so it has to prevent division by 0 from occurring, lest the optimizer mangle the emitted code beyond repair.
It uses a check, as recommended in the LLVM manual.
Source: https://stackoverflow.com/questions/42544491/can-i-disable-checking-for-zero-division-every-time-the-division-happens