A number of answers appeal to the spec. In an unusual turn of events, the C# 4 spec does not actually justify the specific behaviour mentioned here: the comparison of two null literals. In fact, a strict reading of the spec says that "null == null" should give an ambiguity error! (This is due to an editing error made during the cleanup of the C# 2 specification in preparation for C# 3; it is not the intention of the spec authors to make this illegal.)
Read the spec carefully if you don't believe me. It says that there are equality operators defined on int, uint, long, ulong, bool, decimal, double, float, string, enums, delegates and object, plus the lifted-to-nullable versions of all the value type operators.
Now, immediately we have a problem; this set is infinitely large. In practice we do not form the infinite set of all operators on all possible delegate and enum types. The spec needs to be fixed up here to note that the only operators on enum and delegate types which are added to the candidate sets are those of enum or delegate types that are the types of either argument.
Let's therefore leave enum and delegate types out of it, since neither argument has a type.
We now have an overload resolution problem; we must first eliminate all the inapplicable operators, and then determine the best of the applicable operators.
Clearly the operators defined on all the non-nullable value types are inapplicable. That leaves the operators on the nullable value types, and string, and object.
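You can see the inapplicability concretely: there is no conversion from the null literal to a non-nullable value type, so none of those operators can accept a null operand. A minimal sketch (the exact error text may vary by compiler version):

```csharp
class Inapplicability
{
    static void Main()
    {
        // There is no conversion from the null literal to a
        // non-nullable value type, so this line does not compile:
        int i = null; // error CS0037: Cannot convert null to 'int'
                      // because it is a non-nullable value type

        // For the same reason, the predefined == operator on int
        // is inapplicable when both operands are the null literal.
    }
}
```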
We can now eliminate some for reasons of "betterness". The better operator is the one with the more specific types. int? is more specific than any of the other nullable numeric types, so all of those are eliminated. string is more specific than object, so object is eliminated.
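The same betterness rules are easy to observe with ordinary method overloads; here is a quick sketch using hypothetical methods M and N of my own invention:

```csharp
using System;

class Betterness
{
    static void M(object o) { Console.WriteLine("object"); }
    static void M(string s) { Console.WriteLine("string"); }

    static void N(long? x)  { Console.WriteLine("long?"); }
    static void N(int? x)   { Console.WriteLine("int?"); }

    static void Main()
    {
        // string is more specific than object, so M(string) wins:
        M(null); // prints "string"

        // int? implicitly converts to long? but not vice versa,
        // so int? is the more specific type and N(int?) wins:
        N(null); // prints "int?"
    }
}
```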
That leaves equality operators for string, int? and bool? as the applicable operators. Which one is the best? None of them is better than the others. Therefore this should be an ambiguity error.
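You cannot observe the operator ambiguity directly, since the compiler special-cases it, but the analogous situation with method overloads reproduces the error; again a hypothetical M:

```csharp
class Ambiguity
{
    static void M(string s) { }
    static void M(int? i)   { }
    static void M(bool? b)  { }

    static void Main()
    {
        // null converts to all three parameter types, and no type
        // is more specific than the others, so this is an error:
        M(null); // error CS0121: The call is ambiguous
    }
}
```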
For this behaviour to be justified by the spec we are going to have to emend the specification to note that "null == null" is defined as having the semantics of string equality, and that its result is the compile-time constant true.
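Under that emendation the compiler's actual behaviour makes sense. A sketch of what you see in practice:

```csharp
using System;

class NullEquality
{
    static void Main()
    {
        // Treated as string equality of two null references,
        // folded by the compiler to the constant true:
        Console.WriteLine(null == null); // True
        Console.WriteLine(null != null); // False
    }
}
```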
I actually just discovered this fact yesterday; how odd that you should ask about it.
To answer the questions posed in other answers about why "null >= null" gives a warning about comparisons to int?: well, apply the same analysis as I just did. The >= operators on non-nullable value types are inapplicable, and of the ones that are left, the operator on int? is the best. There is no ambiguity error for >= because there is no >= operator defined on bool? or string. The compiler is correctly analyzing the operator as a comparison of two nullable ints.
To answer the more general question about why operators on null values (as opposed to null literals) have this particular unusual behaviour, see my answer to the duplicate question. It clearly explains the design criteria that justify this decision. In short: operations on null should have the semantics of operations on "I don't know". Is a quantity you don't know greater than or equal to another quantity you don't know? The only sensible answer is "I don't know!" But we need to turn that into a bool, and the sensible bool is "false". When comparing for equality, though, most people expect that null should equal null, even though comparing two things you don't know for equality should also result in "I don't know". This design decision is the result of trading off many undesirable outcomes against one another to find the least bad one that makes the feature work; it does make the language somewhat inconsistent, I agree.
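The trade-off is easy to see with null values rather than literals; a small sketch:

```csharp
using System;

class DontKnow
{
    static void Main()
    {
        int? x = null; // "a quantity I don't know"
        int? y = null; // "another quantity I don't know"

        // Is an unknown quantity >= another unknown quantity?
        // "I don't know" -- which must collapse to false:
        Console.WriteLine(x >= y); // False

        // Equality is the deliberate inconsistency: most people
        // expect null to equal null, so it does:
        Console.WriteLine(x == y); // True
    }
}
```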