We noticed that many of the bugs in our software developed in C# (or Java) cause a NullReferenceException.
Is there a reason why "null" has even been included in the language?
The question may be interpreted as "Is it better to have a default value for each reference type (like String.Empty) or null?". From this perspective I would prefer to have nulls, because:
One response mentioned that there are nulls in databases. That's true, but they are very different from nulls in C#.
In C#, nulls are markers for a reference that doesn't refer to anything.
In databases, nulls are markers for value cells that don't contain a value. By value cells, I generally mean the intersection of a row and a column in a table, but the concept of value cells could be extended beyond tables.
The difference between the two seems trivial at first glance, but it's not.
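To make the distinction concrete, here is a minimal C# sketch (the table and column names are made up for illustration) contrasting a reference that refers to nothing with a database cell that exists but holds no value; ADO.NET represents the latter with the DBNull.Value marker object rather than with a null reference:

using System;
using System.Data;

class NullKindsDemo
{
    static void Main()
    {
        // A C# null: the reference 'name' does not refer to any object at all.
        string name = null;
        Console.WriteLine(name == null);            // True
        // name.Length would throw a NullReferenceException here.

        // A database null: the cell exists, it just holds no value.
        var table = new DataTable("People");        // hypothetical table
        table.Columns.Add("Nickname", typeof(string));
        table.Rows.Add(DBNull.Value);

        object cell = table.Rows[0]["Nickname"];
        Console.WriteLine(cell is DBNull);          // True: a real marker object, not a dangling reference
    }
}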
If a framework allows the creation of an array of some type without specifying what should be done with the new items, that type must have some default value. For types which implement mutable reference semantics (*) there is in the general case no sensible default value. I consider it a weakness of the .NET framework that there is no way to specify that a non-virtual function call should suppress any null check. This would allow immutable types like String to behave as value types, by returning sensible values for properties like Length.
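A short C# sketch of the situation described above: a freshly created array of a reference type is filled with nulls, so touching a member of an unassigned element throws, even for an immutable type like String where a sensible default answer (an empty string, Length 0) would arguably exist:

using System;

class DefaultValueDemo
{
    static void Main()
    {
        // Value types get a usable default: every element starts as 0.
        int[] numbers = new int[3];
        Console.WriteLine(numbers[0]);              // 0

        // Reference types only get null: no instances are created for the elements.
        string[] names = new string[3];
        Console.WriteLine(names[0] == null);        // True

        try
        {
            // String is immutable, so treating a null as an empty string would be
            // a sensible convention, but the runtime null check fires first.
            Console.WriteLine(names[0].Length);
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("names[0].Length threw NullReferenceException");
        }
    }
}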
(*) Note that in VB.NET and C#, mutable reference semantics may be implemented by either class or struct types; a struct type would implement mutable reference semantics by acting as a proxy for a wrapped instance of a class object to which it holds an immutable reference.
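A hedged illustration of that footnote, with made-up type names: a struct that holds a single, never-reassigned reference to a class instance copies freely like any struct, yet every copy observes and performs the same mutations, because all copies share the one wrapped object:

// Sketch only: a struct acting as a proxy for a wrapped class instance.
class CounterBox
{
    public int Count;                       // the mutable state lives in the class
}

struct CounterHandle
{
    private readonly CounterBox box;        // immutable reference to the wrapped object
    public CounterHandle(CounterBox b) { box = b; }
    public void Increment() => box.Count++;
    public int Count => box.Count;
}

class ProxyDemo
{
    static void Main()
    {
        var handle = new CounterHandle(new CounterBox());
        var copy = handle;                  // struct copy: both handles share the same CounterBox
        copy.Increment();
        System.Console.WriteLine(handle.Count);     // 1: the mutation is visible through both copies
    }
}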
It would also be helpful if one could specify that a class should have non-nullable mutable value-type semantics, implying that, at minimum, instantiating a field of that type would create a new object instance using a default constructor, and that copying a field of that type would create a new instance by copying the old one (recursively handling any nested value-type classes).
It's unclear, however, exactly how much support should be built into the framework for this. Having the framework itself recognize the distinctions between mutable value types, mutable reference types, and immutable types would allow classes which themselves hold references to a mixture of mutable and immutable types from outside classes to efficiently avoid making unnecessary copies of deeply-immutable objects.
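Since the framework offers no such feature today, the closest approximation is a manual convention. A hedged sketch, with hypothetical types, of a class that provides the recursive-copy behaviour described above through explicit copy constructors:

// Sketch only: emulating non-nullable, mutable value-type semantics by hand.
class Point
{
    public int X, Y;
    public Point() { }
    public Point(Point other) { X = other.X; Y = other.Y; }    // copy constructor
}

class Shape
{
    public Point Origin = new Point();      // never null: created eagerly with a default constructor
    public Shape() { }
    public Shape(Shape other)
    {
        Origin = new Point(other.Origin);   // recursive copy of the nested "value-type" class
    }
}

class CopySemanticsDemo
{
    static void Main()
    {
        var a = new Shape();
        var b = new Shape(a);               // a copy, not a shared reference
        b.Origin.X = 42;
        System.Console.WriteLine(a.Origin.X);       // 0: mutating the copy leaves the original untouched
    }
}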
If you're getting a 'NullReferenceException', perhaps you keep referring to objects which no longer exist. This is not an issue with 'null'; it's an issue with your code pointing to non-existent addresses.
Null as it is available in C#/C++/Java/Ruby is best seen as an oddity of some obscure past (Algol) that somehow survived to this day.
You use it in two ways:
1) as the default for a reference that has been declared but not yet given a meaningful value;
2) as a marker meaning "no value here", i.e. to express that a value is optional.
As you guessed, 1) is what causes us endless trouble in common imperative languages and should have been banned long ago, while 2) is the truly essential feature.
There are languages out there that avoid 1) without preventing 2).
OCaml, for example, is such a language.
A simple function returning an ever incrementing integer starting from 1:
let counter = ref 0;;   (* a mutable reference cell; it must be given an initial value, so it can never hold "nothing" *)
let next_counter_value () = (counter := !counter + 1; !counter);;   (* increment, then return the new value *)
And regarding optionality:
type distributed_computation_result = NotYetAvailable | Result of float;;   (* absence is an explicit, named case of the type *)
let print_result r = match r with
| Result(f) -> Printf.printf "result is %f\n" f
| NotYetAvailable -> Printf.printf "result not yet available\n";;   (* the compiler forces both cases to be handled *)