This is awkward, but the bitwise AND operator is described in the C++ standard ([expr.bit.and]) as follows (emphasis mine):
The usual arithmetic conversions are performed; the result is the *bitwise AND function* of the operands. The operator applies only to integral or unscoped enumeration operands.
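As an aside, the "usual arithmetic conversions" part is easy to observe: both operands are promoted before the AND is applied. A minimal sketch, assuming a typical platform where int is wider than unsigned char, and using C++17 for std::is_same_v:

    #include <type_traits>

    int main() {
        unsigned char a = 0xF0;
        unsigned char b = 0x3C;

        // The usual arithmetic conversions promote both unsigned char
        // operands to int before & is applied, so the result type is int.
        static_assert(std::is_same_v<decltype(a & b), int>,
                      "operands are promoted before the bitwise AND");

        return (a & b) == 0x30 ? 0 : 1;  // 0xF0 & 0x3C == 0x30
    }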
[basic.fundamental]/3 defers to C 5.2.4.2.1. It seems reasonable that the bitwise operators in C++, being underspecified, should similarly defer to C, in this case 6.5.10/4:
The result of the binary & operator is the bitwise AND of the operands (that is, each bit in the result is set if and only if each of the corresponding bits in the converted operands is set).
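Taken at face value, that C wording gives the familiar per-bit behaviour. A minimal illustration for unsigned operands (binary literals need C++14 or later):

    #include <cassert>

    int main() {
        unsigned a = 0b1100u;
        unsigned b = 0b1010u;

        // Each bit of the result is set if and only if the corresponding
        // bits of both (converted) operands are set.
        unsigned r = a & b;
        assert(r == 0b1000u);
        return 0;
    }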
Note that C 6.5/4 has:
Some operators (the unary operator ~, and the binary operators <<, >>, &, ^, and |, collectively described as bitwise operators) are required to have operands that have integer type. These operators yield values that depend on the internal representations of integers, and have implementation-defined and undefined aspects for signed types.
The internal representations of the integers are of course described in 6.2.6.2/1, /2.
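To make the "implementation-defined and undefined aspects for signed types" concrete: the value of a bitwise AND on a negative operand depends on how negative values are represented. A hedged sketch (note that C++20 mandates two's complement, so on current implementations the answer is 3):

    #include <cstdio>

    int main() {
        int x = -1;

        // The bit pattern of -1 depends on the signed representation, so
        // pre-C++20 this expression could legitimately differ:
        //   two's complement:    -1 is all ones     -> x & 3 == 3
        //   ones' complement:    -1 is 1111...1110  -> x & 3 == 2
        //   sign-and-magnitude:  -1 is 1000...0001  -> x & 3 == 1
        std::printf("%d\n", x & 3);
        return 0;
    }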
This is underspecified. The issue of what the standard means when it refers to bitwise operations is the subject of a few defect reports.
For example, defect report 1857: Additional questions about bits:
The specification of the bitwise operations in 5.11 [expr.bit.and], 5.12 [expr.xor], and 5.13 [expr.or] uses the undefined term “bitwise” in describing the operations, without specifying whether it is the value or object representation that is in view.
Part of the resolution of this might be to define “bit” (which is otherwise currently undefined in C++) as a value of a given power of 2.
and the response was:
CWG decided to reformulate the description of the operations themselves to avoid references to bits, splitting off the larger questions of defining “bit” and the like to issue 1943 for further consideration.
and defect report 1943 says:
CWG decided at the 2014-06 (Rapperswil) meeting to address only a limited subset of the questions raised by issues 1857 and 1861. This issue is a placeholder for the remaining questions, such as defining a “bit” in terms of a value of 2^n, specifying whether a bit-field has a sign bit, etc.
We can see from defect report 1796: Is all-bits-zero for null characters a meaningful requirement? that this issue of what the standard means when it refers to bits affected, and still affects, other sections as well:
According to 2.3 [lex.charset] paragraph 3,
The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits.
It is not clear that a portable program can examine the bits of the representation; instead, it would appear to be limited to examining the bits of the numbers corresponding to the value representation (3.9.1 [basic.fundamental] paragraph 1). It might be more appropriate to require that the null character value compare equal to 0 or '\0' rather than specifying the bit pattern of the representation.
There is a similar issue for the definition of shift, bitwise and, and bitwise or operators: are those specifications constraints on the bit pattern of the representation or on the values resulting from the interpretation of those patterns as numbers?
In this case the resolution was to change:
representation has all zero bits
to:
value is 0.
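In code, the resolved wording means a portable program compares the null character's value against 0 rather than inspecting representation bits; a trivial sketch:

    #include <cassert>

    int main() {
        char c = '\0';

        // DR 1796's resolution talks about the *value* of the null
        // character, not its bit pattern, so this is the portable check.
        assert(c == 0);
        return 0;
    }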
Note that, as mentioned in ecatmur's answer, the draft C++ standard does defer to C standard section 5.2.4.2.1 in section 3.9.1 [basic.fundamental] paragraph 3, but it does not refer to section 6.5/4 from the C standard, which would at least tell us that the results are implementation-defined. I explain in my comment below that the C++ standard can only incorporate text from normative references explicitly.
The C++ Standard defines storage as a certain number of bits. The implementation may decide what meaning to attribute to a particular bit; that being said, binary AND is supposed to work on the conceptual 0s and 1s forming a particular type's representation.
3.9.1/7: (...) The representations of integral types shall define values by use of a pure binary numeration system. (...) (footnote 49)
3.9.1, footnote 49) A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position
That means that, whatever physical representation is used, binary AND acts according to the truth table for the AND function: for each bit position i, take bits A_i and B_i from the respective operands and produce a 1 for the result bit R_i only if both are 1, otherwise produce a 0. The resulting value is left for the implementation to interpret, but whatever is chosen, it has to be consistent with the expectations for the other binary operations such as OR and XOR.
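As a sketch of that reading, a per-bit AND built only from shifts, remainder, and the truth table agrees with the built-in operator for unsigned operands (illustrative only; binary literals need C++14 or later):

    #include <cassert>
    #include <climits>

    // Apply the AND truth table bit by bit: R_i is 1 only if both A_i
    // and B_i are 1, otherwise it is 0.
    unsigned bitwise_and_by_hand(unsigned a, unsigned b) {
        unsigned result = 0;
        for (unsigned i = 0; i < sizeof(unsigned) * CHAR_BIT; ++i) {
            unsigned ai = (a >> i) % 2u;
            unsigned bi = (b >> i) % 2u;
            if (ai == 1u && bi == 1u) {
                result |= 1u << i;
            }
        }
        return result;
    }

    int main() {
        assert(bitwise_and_by_hand(0b1100u, 0b1010u) == (0b1100u & 0b1010u));
        assert(bitwise_and_by_hand(0xFFu, 0x0Fu) == (0xFFu & 0x0Fu));
        return 0;
    }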
Legally, we could consider all bitwise operations to have undefined behaviour, as they are not actually defined.
More reasonably, we are expected to apply common sense and refer to the common meanings of these operations, applying them to the bits of the operands (hence the term "bitwise").
But nothing actually states that. Shame my answer can't be considered normative wording.