I have the following code:
const N: usize = 10000;
const S: usize = 7000;

#[derive(Copy, Clone, Debug)]
struct T {
    a: f64,
    b: f64,
    f: f64,
}
fn main() {
    let mut t: [T; N] = [T { a: 0.0, b: 0.0, f: 0.0 }; N];
    for i in 0..N {
        t[i].a = 0.0;
        t[i].b = 1.0;
        t[i].f = i as f64 * 0.25;
    }
}
I'm unsure why the array t must be initialized that way.
Because Rust doesn't let you touch (entirely or partially) uninitialised values. The compiler isn't smart enough to prove that the loop will definitely initialise everything, so it just forbids it.
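For illustration, a minimal sketch (not your original code) of what gets rejected: declaring t without an initial value and filling it in the loop does not compile, because every write through t[i] counts as a use of a possibly-uninitialised variable (error E0381):

let mut t: [T; N];    // declared, but not initialised
for i in 0..N {
    // rejected: the compiler cannot prove `t` is initialised at this point
    t[i] = T { a: 0.0, b: 1.0, f: i as f64 * 0.25 };
}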
Now, the optimiser is a different story. That can notice that the initialisation is redundant and skip it... in theory. It doesn't appear to do so with that code and the current compiler. Such is optimisation.
I just want to know if there is a smarter way to handle the array and its initialization with the first for-loop.
The smart way is to just leave the code as it is: statistically speaking, it's unlikely to be a bottleneck. If profiling shows that it is one, then you can use std::mem::uninitialized(). However, note that doing so can lead to undefined behaviour if you use it wrong. Although this is not an exhaustive list, you should definitely avoid using it on any type that is not Copy.
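To sketch why non-Copy types are the dangerous case, here is a deliberately broken, hypothetical snippet, shown only as an illustration and not meant to be run: assigning into a slot drops the old value first, and running a destructor such as String's on uninitialised garbage is undefined behaviour.

// Deliberately wrong; do not use.
let mut v: [String; 4] = unsafe { std::mem::uninitialized() };
// The assignment below first drops the "old" String in v[0], but that
// String was never initialised, so its destructor reads garbage:
// undefined behaviour.
v[0] = String::from("boom");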
If you do need to use it, I strongly recommend also adjusting the first loop so that forgetting to initialise an element, or a field of the structure, becomes impossible:
let mut t: [T; N] = unsafe { ::std::mem::uninitialized() };

for (i, e) in t.iter_mut().enumerate() {
    // Each element is overwritten in full, so nothing stays uninitialised.
    *e = T {
        a: 0.0,
        b: 1.0,
        f: i as f64 * 0.25,
    };
}
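Note that newer compilers deprecate mem::uninitialized() in favour of MaybeUninit. A rough sketch of the same idea adapted for that API (my adaptation, not part of the code above) would look something like this:

use std::mem::{self, MaybeUninit};

// An array of MaybeUninit<T> is allowed to start out uninitialised.
let mut buf: [MaybeUninit<T>; N] = unsafe { MaybeUninit::uninit().assume_init() };
for (i, e) in buf.iter_mut().enumerate() {
    *e = MaybeUninit::new(T {
        a: 0.0,
        b: 1.0,
        f: i as f64 * 0.25,
    });
}
// Every element has been written, so reinterpreting the array as [T; N] is sound.
let t: [T; N] = unsafe { mem::transmute::<[MaybeUninit<T>; N], [T; N]>(buf) };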
You can use std::mem::uninitialized(). Note, however, that it is unsafe, so the call needs to be wrapped in an unsafe block:
let mut t: [T; N] = unsafe { std::mem::uninitialized() };
As stated in its documentation:
This is useful for FFI functions and initializing arrays sometimes, but should generally be avoided.