CompilationRepresentationFlags.UseNullAsTrueValue can be used to "permit the use of null as a representation for nullary discriminators in a discriminated union". Option.None is the most prominent example of this.

Why is this useful? How is a null check better than the traditional mechanism for checking union cases (the generated Tag property)?

It also leads to perhaps unexpected behavior:
Some(1).ToString() //"Some(1)"
None.ToString() //NullReferenceException
EDIT
I tested Jack's assertion that comparing to null is faster than comparing against a static readonly field.
[<CompilationRepresentation(CompilationRepresentationFlags.UseNullAsTrueValue)>]
type T<'T> =
    | Z
    | X of 'T

let t = Z
Using ILSpy, I can see that t compiles to null (as expected):
public static Test.T<a> t<a>()
{
    return null;
}
The test:
let mutable i = 0
for _ in 1 .. 10000000 do
    match t with
    | Z -> i <- i + 1
    | _ -> ()
The results:
Real: 00:00:00.036, CPU: 00:00:00.046, GC gen0: 0, gen1: 0, gen2: 0
If the CompilationRepresentation attribute is removed, t becomes a static readonly field:
public static Test.T<a> t<a>()
{
    return Test.T<a>.Z;
}

public static Test.T<T> Z
{
    [CompilationMapping(SourceConstructFlags.UnionCase, 0)]
    get
    {
        return Test.T<T>._unique_Z;
    }
}

internal static readonly Test.T<T> _unique_Z = new Test.T<T>._Z();
And the results are virtually the same:
Real: 00:00:00.036, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
The pattern match is compiled as t == null in the former case and t is Z in the latter.
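To see the null representation directly, here is a small self-contained sketch (it reuses the T<'T> type from the example above; the comments assume the standard compilation of UseNullAsTrueValue described in this thread):

```fsharp
[<CompilationRepresentation(CompilationRepresentationFlags.UseNullAsTrueValue)>]
type T<'T> =
    | Z
    | X of 'T

// Both the nullary case Z and the built-in None are physically null at runtime.
printfn "%b" (obj.ReferenceEquals((Z : T<int>), null))        // true
printfn "%b" (obj.ReferenceEquals((None : int option), null)) // true
```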
Jack's answer seems good but, to expand a little bit: at the IL level, the CLR provides a specific opcode for loading null values (ldnull) and efficient means of testing for them (ldnull followed by beq/bne.un/ceq/cgt.un). When JITted, these should be more efficient than dereferencing a Tag property and branching accordingly. While the per-call savings are probably small, option types are used frequently enough that the cumulative savings may be significant.
Of course, as you note, there is a tradeoff: methods inherited from obj may throw null reference exceptions. This is one good reason to use string x / hash x / x = y instead of x.ToString() / x.GetHashCode() / x.Equals(y) when dealing with F# values. Sadly, there is no possible equivalent of x.GetType() for values represented by null.
The F# compiler sometimes uses null as a representation for None because it's more efficient than actually creating an instance of FSharpOption<'T> and checking the Tag property.
Think about it -- if you have a normal F# type (like a record) that's not allowed to be null, then any pointer to an instance of that type (the pointer used internally by the CLR) will never be null. At the same time, if T is a type which can represent n states, then T option can represent n+1 states. So, using null as a representation for None simply takes advantage of the one extra state value made available by the fact that F# types aren't allowed to be null.
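A tiny illustration of the n versus n+1 states point (the Switch type is hypothetical):

```fsharp
// Switch can represent 2 states...
type Switch = On | Off

// ...so Switch option represents 3: Some On, Some Off, and None,
// where the extra None state is encoded as the otherwise-impossible null.
let states : Switch option list = [ Some On; Some Off; None ]
printfn "%d" states.Length   // 3
```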
If you want to try turning this behavior off (for normal F# types), you can apply [<AllowNullLiteral(true)>] to them.
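For reference, a sketch of AllowNullLiteral on a class type (the Config type is hypothetical; note the attribute applies to classes and interfaces, not to records or unions):

```fsharp
[<AllowNullLiteral>]
type Config() =
    member val Name = "" with get, set

// The null literal now type-checks for Config, which it would not by default.
let missing : Config = null
printfn "%b" (isNull missing)   // true
```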
Source: https://stackoverflow.com/questions/11959321/why-is-none-represented-as-null