I recently came across this question on LeetCode and figured out a solution that I need some clarification on:
Given an array of integers, every element appears twice except for one. Find that single one.
The key intuitive aspect that distinguishes XOR from the other logical operators is that it is lossless, or non-lossy, meaning that, unlike AND and OR (and more similarly to NOT in this regard), it is deterministically reversible: you can exactly recover one of the input values given the rest of the computation history.
The following diagrams illustrate that AND and OR each have at least one case where the state of one of the inputs is irrecoverable, given a certain value of the other input. I indicate these as "lost" inputs.
For the XOR gate, there is no condition in which an input or output value cannot be recovered, given the rest of the computation history. In fact, there is a symmetry here: knowing any two values of the triple (in0, in1, out) allows you to recover the third. In other words, regardless of input or output, each of these three values is the XOR of the other two!
This picture suggests that another way to think of the XOR operation is as a controllable NOT gate. By toggling one of the inputs (the upper one in the example above), you can control whether or not the other (lower) one is negated.
Yet another equivalent view is that XOR implements the positive-logic not-equals (≠) function with respect to its two inputs, and thus also the equals (=) function under negative logic.
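A few lines of Python (my own illustration, not part of the original answer) make these three views concrete:

a, b = 0b1011, 0b0110
out = a ^ b

# Reversibility: each of (a, b, out) is the XOR of the other two.
assert a == b ^ out
assert b == a ^ out

# Controllable NOT: XOR against all-ones inverts, XOR against zero passes through.
assert a ^ 0b1111 == 0b0100   # inverted (within 4 bits)
assert a ^ 0b0000 == a        # unchanged

# Not-equals, bit by bit: a result bit is 1 exactly where the inputs differ.
assert out == 0b1101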
In accordance with its symmetry and information-preserving properties, XOR should come to mind for problems that require reversibility or perfect data recovery. The most obvious example is that XORing a dataset with a constant 'key' trivially obscures the data such that knowing the key (which might be kept "secret") allows for exact recovery.
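A minimal sketch of that idea in Python (the key and data values here are made up purely for illustration):

key = 0x5A5A5A5A                         # the constant 'key', perhaps kept secret
data = [0x12345678, 0x00C0FFEE]          # the dataset to obscure

obscured = [d ^ key for d in data]       # XOR with the key hides the values
recovered = [c ^ key for c in obscured]  # XOR with the same key restores them exactly

assert recovered == data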
Preserving all of the available information is also desirable in hashing. Because you want hash values that maximally discriminate amongst the source items, you want to make sure that as many of their distinguishing characteristics as possible are incorporated into the hash code, minimizing loss. For example, to hash a 64-bit value into 32 bits, you could use the programming language's XOR operator ^, because it is an easy way to guarantee that each of the 64 input bits has an opportunity to influence the output:
uint GetHashCode(ulong ul)
{
    return (uint)ul ^ (uint)(ul >> 32);
}
Note that in this example, information is lost even though XOR was used. (In fact, ‘strategic information loss’ is kind of the whole point of hashing.) The original value of ul is not recoverable from the hash code, because with that value alone you don't have two out of the three 32-bit values that were used in the internal computation. Recall from above that you need to retain any two out of the three values for perfect reversal. Out of the resulting hash code and the two values that were XORed, you may have saved the result, but typically you do not save either of the latter to use as a key value for obtaining the other.[1]
As an amusing aside, XOR was uniquely helpful in the days of bit-twiddling hacks. My contribution back then was a way to conditionally set or clear bits without branching in C/C++:
unsigned int v; // the value to modify
unsigned int m; // mask: the bits to set or clear
int f;          // condition: 1 to 'set' the bits in m, or 0 to 'clear' them

v ^= (-f ^ v) & m; // if (f) v |= m; else v &= ~m;
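A quick check of that identity, translated to Python just to verify the arithmetic (the whole point in C/C++ is, of course, that the left-hand form avoids a branch; the helper names are mine):

def branchless(v, m, f):
    return v ^ ((-f ^ v) & m)            # same expression as above

def branching(v, m, f):
    return (v | m) if f else (v & ~m)    # the commented reference behaviour

for v in range(16):
    for m in range(16):
        for f in (0, 1):
            assert branchless(v, m, f) == branching(v, m, f)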
On a more serious note, the fact that XOR is non-lossy has important information-theoretical implications for futuristic computing, due to an important relationship between information processing and the Second Law of Thermodynamics. As explained in an excellent and accessible book by Charles Seife, Decoding the Universe, it turns out that the loss of information during computation has an exact mathematical relationship with the black-body radiation emanating from a processing system. Indeed, the notion of entropy plays a central role in quantifying how information "loss" is (re-)expressed as heat (this also being the same prominent relation from Stephen Hawking's famous black-hole wager).
Such talk regarding XOR is not necessarily a stretch; Seife notes that modern CPU development currently faces fundamental tolerance limits on the watts/cm² of semiconducting materials, and that a solution would be to design reversible, or lossless, computing systems. In this speculative future generation of CPUs, XOR's ability to preserve information, and thus avoid shedding it as heat, would be invaluable for increasing computational density (i.e., MIPS/cm²) despite such material limitations.
[1] That is, you typically don't keep either 32-bit half around as a key for reconstructing the ulong value. Of course the original hashed 'data' itself, represented by ul here, likely is retained.
XOR is always defined in terms of binary digits (or some equivalent notions, like true or false statements). There is no special XOR for integers other than the XORing of corresponding bits of their binary representations.
Let A and B be two boolean variables, and let XOR be a boolean function that takes two boolean variables.
A ⊕ B = 1 if either (A = 0 and B = 1) or (A = 1 and B = 0), i.e., they are different;
A ⊕ B = 0 if either (A = 0 and B = 0) or (A = 1 and B = 1), i.e., they are the same.
Now consider your question: out of the given n elements of the vector, every element appears twice except one. The idea is that the binary representations of the duplicate numbers are the same, so their XOR results nullify each other, since 1 ⊕ 1 = 0 and 0 ⊕ 0 = 0.
For A = 5, the binary representation is 101, so A ⊕ A = (101) ⊕ (101) = 000, which is 0 in decimal.
Remember that it does not matter in which order the numbers appear, because ((A⊕B)⊕C) = (A⊕(B⊕C)). Eventually, what you get at the end after XORing every number is the number that occurs only once.
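For example, in Python (the array here is just an illustration):

from functools import reduce
from operator import xor

nums = [4, 1, 2, 1, 2]           # every value appears twice except 4
assert reduce(xor, nums) == 4    # 4 ^ 1 ^ 2 ^ 1 ^ 2 == (1^1) ^ (2^2) ^ 4 == 4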
To answer your question about when you need to use XOR operations to solve a problem: practice some bit-manipulation questions, and eventually you will be able to figure it out.
A hint: a question that asks you to find the one element with a property that sets it apart from the rest usually calls for bit manipulation.
Example: given an array where every element occurs three times, except one element which occurs only once, find the element that occurs once.
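One common way to attack that variant (a sketch in Python, not necessarily the intended solution; the function name is mine) is to count, for each bit position, how many elements have that bit set: the duplicates contribute in multiples of 3, so each count taken modulo 3 yields the corresponding bit of the unique element.

def single_number_thrice(nums):
    # Count each of the 32 bit positions; duplicates add 0 or 3 per position,
    # so count % 3 is the singleton's bit at that position.
    result = 0
    for bit in range(32):
        count = sum((n >> bit) & 1 for n in nums)
        if count % 3:
            result |= 1 << bit
    # Reinterpret bit 31 as the sign bit of a 32-bit signed integer.
    return result - (1 << 32) if result & (1 << 31) else result

assert single_number_thrice([2, 2, 3, 2]) == 3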
One simple operation that can be done on the array is to choose a property P and count the number of elements of the array that satisfy the property. For example, if P is the property of being divisible by 5, we can loop through the array and count the number of elements that are divisible by 5.
The precondition on the array allows us to gain information about the singleton element from such a count. If the number of elements satisfying P is odd, then the singleton has property P. If the number of elements satisfying P is even, then the singleton must not have property P. For example, if we count 3 elements divisible by 5, then the singleton must have been divisible by 5.
Therefore, if we can invent a collection of properties such that knowing whether each property is true or false will fully specify any element, we can get the answer by counting the number of elements with each property and checking the parity of the counts.
There are many different collections of properties that will work. We may as well be efficient, though. Given b different properties, there are (at most) 2**b different ways the tests could come out, so we can only distinguish among this many possible elements. So, we need at least b different properties to fully specify a b-bit number.
One such minimal collection of properties that is easy for a computer to test is the collection where the nth property is "Does the number have a 1 in the nth bit?". If we know the answer to this question for each n, then clearly we can reconstruct the number.
Another optimization is that we don't have to keep track of the total count of elements satisfying a property; we only need the parity. If we just keep track of the count modulo 2, then we only need one bit to keep track instead of a whole size_t.
Since we are keeping track of b different bits of information, each one corresponding to a particular place value n, we can assemble these bits together into a b-bit number.
This is exactly the XOR solution presented in the question.
To start out with, the running count of how many numbers have each bit is 0 for each bit (hence result is initialized to 0). Then, when we XOR an element from the array, we are actually adding one modulo 2 to those bits of result where the element had a 1. Finally, result does not require any decoding, as the number with 1 bits exactly where result has 1 bits is result itself.
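A literal rendering of this counting scheme in Python (the function name and the 32-bit assumption are mine) might look like the following; collapsing the per-bit bookkeeping into a single ^ per element gives exactly the one-line XOR loop described above.

def single_number_by_parity(nums):
    parity = [0] * 32                       # one parity bit per place value
    for n in nums:
        for bit in range(32):
            parity[bit] ^= (n >> bit) & 1   # keep the count modulo 2
    # Reassemble the b parity bits into a b-bit number (no decoding needed).
    result = sum(p << bit for bit, p in enumerate(parity))
    return result - (1 << 32) if result & (1 << 31) else result

assert single_number_by_parity([4, 1, 2, 1, 2]) == 4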
Note: Your algorithm should have a linear runtime complexity. Could you implement it without using extra memory?
Here is a solution using math (each distinct value is counted twice in sum(set(nums)) << 1, so the duplicate pairs cancel against sum(nums) and only the singleton survives), although it uses extra space:
class Solution:
    def singleNumber(self, nums):
        return (sum(set(nums)) << 1) - sum(nums)
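For example (the input is just an illustration):

assert Solution().singleNumber([4, 1, 2, 1, 2]) == 4   # 2*(4+1+2) - (4+1+2+1+2)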
Here are bitwise solutions:
// On LeetCode most headers are already included; they are listed here so
// the snippet stands alone.
#include <iostream>
#include <vector>

// The following block might slightly improve the execution time;
// it can be removed.
static const auto __optimize__ = []() {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr);
    std::cout.tie(nullptr);
    return 0;
}();

struct Solution {
    static int singleNumber(const std::vector<int>& nums) {
        int single_number = 0;
        for (std::size_t index = 0; index < std::size(nums); index++) {
            single_number ^= nums[index];
        }
        return single_number;
    }
};
public final class Solution {
    public static final int singleNumber(
        final int[] nums
    ) {
        int singleNumber = 0;
        for (int index = 0; index != nums.length; index++) {
            singleNumber ^= nums[index];
        }
        return singleNumber;
    }
}
As you are surely aware, integers are stored in memory as tuples of binary digits. One can view each digit as a number in the field of two elements, which is essentially the integers modulo 2. The ^ operator is component-wise XOR, and with this interpretation, XOR is simply addition. That is, we are adding the binary digits to each other.
In this field, 1 + 1 = 0, so one can say that two is zero. Since addition is commutative and associative we can combine all the numbers at once. Anything that is added an even number of times will add nothing, and only the number that is added a single time ends up in the result variable.
It might be interesting to know that the boolean operations can be represented this way (try it!): a xor b = a + b, a and b = ab, a or b = ab + a + b, not a = a + 1.
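A quick brute-force check of those identities in Python (everything is reduced modulo 2, since we are working in the two-element field):

for a in (0, 1):
    for b in (0, 1):
        assert (a ^ b) == (a + b) % 2          # xor is addition
        assert (a & b) == (a * b) % 2          # and is multiplication
        assert (a | b) == (a * b + a + b) % 2  # or is ab + a + b
    assert (1 - a) == (a + 1) % 2              # not is adding 1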
(1) A ^ 0 == A
(2) A ^ A == 0
(3) A ^ B == B ^ A
(4) (A ^ B) ^ C == A ^ (B ^ C)
(3) and (4) together mean that the order in which numbers are XORed doesn't matter.
Which means that, for example, A^B^X^C^B^A^C is equal to A^A ^ B^B ^ C^C ^ X.
Because of (2), that is equal to 0 ^ 0 ^ 0 ^ X.
Because of (1), that is equal to X.
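The same chain can be checked directly in Python (the values are arbitrary):

A, B, C, X = 10, 20, 30, 7
assert A ^ B ^ X ^ C ^ B ^ A ^ C == X   # the duplicated values cancel, leaving X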
I don't think there are any specific keywords that can help you identify such problems; you just have to know the above properties of XOR.