According to this post, an indeterminate value is:
3.17.2
1 indeterminate value
either an unspecified value or a trap representation
The C90 standard made it clear that reading from an indeterminate location was undefined behavior. More recent standards are not so clear any more (indeterminate memory is “either an unspecified value or a trap representation”), but compilers still optimize in a way that is only excusable if reading an indeterminate location is undefined behavior, for instance, multiplying the integer in an uninitialized variable by two can produce an odd result.
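For a feel of how that can happen in practice, here is a minimal sketch (nothing about it is guaranteed; whether an odd value is ever printed depends on the compiler, the optimization level and the target):

#include <stdio.h>

int main(void)
{
  unsigned int x;          /* intentionally left uninitialized */
  unsigned int y = x * 2;  /* for any single, stable value of x this would be even */
  if (y % 2 != 0)          /* but nothing forces y to be computed from a single stable x */
    printf("odd: %u\n", y);
  return 0;
}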
So, in short, no, you can't read whatever happens to occupy indeterminate memory.
Two successive reads of an indeterminate value can give two different values. Moreover, reading an indeterminate value invokes undefined behavior in the case of a trap representation.
In DR#260, the C Committee wrote:
An indeterminate value may be represented by any bit pattern. The C Standard lays down no requirement that two inspections of the bits representing a given value will observe the same bit-pattern only that the observed pattern on each occasion will be a valid representation of the value.
[...] In reaching our response we noted that requiring immutable bit patterns for indeterminate values would reduce optimization opportunities. For example, it would require tracking of the actual bit-patterns of indeterminate values if the memory containing them were paged out. That seems an unnecessary constraint on optimizers with no compensatory benefit to programmers.
The fact that it is indeterminate not only means that it is unpredictable at the first read, it also means that it is not guaranteed to be stable. This means that reading the same uninitialized variable twice is not guaranteed to produce the same value. For this reason you cannot really "determine" that value by reading it. (See DR#260 for the initial discussion on the subject from 2004 and DR#451 reaffirming that position in 2014.)
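A minimal sketch of that instability (again, no particular output is guaranteed, and on many builds both lines will happen to agree):

#include <stdio.h>

int main(void)
{
  unsigned char c;    /* never initialized; unsigned char has no trap representations */
  printf("%d\n", c);  /* may print one value...      */
  printf("%d\n", c);  /* ...and a different one here */
  return 0;
}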
For example, a variable a might be assigned to occupy a CPU register R1 within a certain timeframe (instead of a memory location). In order to establish the optimal variable-to-register assignment schedule, the language-level concept of "object lifetime" is not sufficiently precise. The CPU registers are managed by an optimizing compiler based on a much more precise concept of "value lifetime". Value lifetime begins when a variable gets assigned a determinate value. Value lifetime ends when the previously assigned determinate value is read for the last time. Value lifetime determines the timeframe during which a variable is associated with a specific CPU register. Outside of that timeframe, the same register R1 might be associated with a completely different variable b. Trying to read an uninitialized variable a outside its value lifetime might actually result in reading variable b, which might be actively changing.
In this code sample
{
  int i, j;
  for (i = 0; i < 10; ++i)
    printf("%d\n", j);
  for (j = 0; j < 10; ++j)
    printf("%d\n", 42);
}
the compiler can easily determine that even though the object lifetimes of i and j overlap, their value lifetimes do not overlap at all, meaning that both i and j can get assigned to the same CPU register. If something like that happens, you might easily discover that the first loop prints the constantly changing value of i on each iteration. This is perfectly consistent with the idea of the value of j being indeterminate.
Note that this optimization does not necessarily require CPU registers. For another example, a smart optimizing compiler concerned with preserving valuable stack space might analyze the value lifetimes in the above code sample and transform it into
{
  int i;
  for (i = 0; i < 10; ++i)
    printf("%d\n", <future location of j>);
}
{
  int j;
  for (j = 0; j < 10; ++j)
    printf("%d\n", 42);
}
with variables i and j occupying the same location in memory at different times. In this case the first loop might again end up printing the value of i on each iteration.
The authors of the Standard recognized that there are some cases where it might be expensive for an implementation to ensure that code reading an indeterminate value won't behave in ways that would be inconsistent with the Standard (e.g. the value read from an uninitialized uint16_t might not appear to be in the range 0..65535). While many implementations could cheaply offer useful behavioral guarantees about how indeterminate values behave in more cases than the Standard requires, variations among hardware platforms and application fields mean that no single set of guarantees would be optimal for all purposes. Consequently, the Standard simply punts the matter as a Quality of Implementation issue.
The Standard would certainly allow a garbage-quality-but-conforming implementation to treat almost any use of e.g. an uninitialized uint16_t as an invitation to release nasal demons. It says nothing about whether high-quality implementations that are suitable for various purposes can do likewise (and still be viewed as high-quality implementations suitable for those purposes). If one needs to accommodate implementations that are designed to trap on possible unintended data leakage, one may need to explicitly clear objects in some cases where their value will ultimately be ignored but where the implementation couldn't prove that it would never leak information. Likewise, if one needs to accommodate implementations whose "optimizers" are designed around what low-quality-but-conforming implementations are allowed to do, rather than what high-quality general-purpose implementations should do, such "optimizers" may make it necessary to add otherwise-unnecessary code to clear objects even when the code doesn't care about the value (thus reducing efficiency), in order to avoid having them treat the failure to do so as an invitation to behave nonsensically.
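As a sketch of the kind of otherwise-unnecessary clearing being described (the function, its parameters and its caller are invented for the example):

#include <string.h>

/* Hypothetical helper: only the first 'len' bytes of 'tmp' are meaningful,
   but the whole buffer is handed to 'emit', so the memset keeps indeterminate
   bytes from ever being read or leaked, at the cost of otherwise-unneeded work. */
void send_record(const char *payload, size_t len,
                 void (*emit)(const char *buf, size_t n))
{
  char tmp[64];
  memset(tmp, 0, sizeof tmp);    /* the otherwise-unnecessary clear */
  if (len > sizeof tmp)
    len = sizeof tmp;
  memcpy(tmp, payload, len);
  emit(tmp, sizeof tmp);         /* the entire buffer leaves the function */
}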
We cannot determine the value of an indeterminate value, even under operations that would normally lead to predictable values, such as multiplication by zero. The value is wobbly according to the new language proposed (see edit).
We can find the details for this in defect report #451: Instability of uninitialized automatic variables, which had a proposed resolution about a year after this question was asked.
This defect report covers very similar ground to your question. It addressed three questions and provided the following examples with further questions:
unsigned char x[1]; /* intentionally uninitialized */
printf("%d\n", x[0]);
printf("%d\n", x[0]);
Does the standard allow an implementation to let this code print two different values? And if so, if we insert any of the following three statements
x[0] = x[0];
x[0] += 0;
x[0] *= 0;
between the declaration and the printf statements, is this behavior still allowed? Or alternatively, can these printf statements exhibit undefined behavior instead of having to print a reasonable number?
The proposed resolution, which seems unlikely to change much, is:
Update to address edit
Part of the discussion includes this comment:
- Strong sentiment formed, in part based on prior experience in developing Annex L, that a new category of "wobbly" value is needed. The underlying issue is that modern compilers track value propagation, and uninitialized values synthesized for an initial use of an object may be discarded as inconsequential prior to synthesizing a different value for a subsequent use. To require otherwise defeats important compiler optimizations. All uses of "wobbly" values might be deemed undefined behavior.
So you will be able to determine a value, but the value could change at each evaluation.
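Putting the report's fragments together into one compilable sketch (with one of the three statements picked; as elsewhere, no particular output is guaranteed):

#include <stdio.h>

int main(void)
{
  unsigned char x[1];  /* intentionally uninitialized */
  x[0] *= 0;           /* does not pin the value down; it stays "wobbly" */
  printf("%d\n", x[0]);
  printf("%d\n", x[0]);
  return 0;
}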
When the standard introduces a term like indeterminate, it is a normative term: the standard's definition applies, and not a dictionary definition. This means that an indeterminate value is nothing more or less than an unspecified value, or a trap representation. Ordinary English meanings of indeterminate are not applicable.
Even terms that are not defined in the standard may be normative, via the inclusion of normative references. For instance, section 2 of the C99 standard normatively includes a document called ISO/IEC 2382−1:1993, Information technology — Vocabulary — Part 1: Fundamental terms.
This means that if a term is used in the standard and is not defined in the text (not introduced in italics and explained, and not given in the terms section), it might nevertheless be a word from the above vocabulary document; in that case, the definition from that standard applies.