The ordering of the `Keys` collection of a `Dictionary<TKey, TValue>` is not specified, so you don't know what value `First()` is going to return.
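For instance (a minimal sketch; the keys and values here are arbitrary):

using System;
using System.Collections.Generic;
using System.Linq;

var dict = new Dictionary<string, int> { { "b", 2 }, { "a", 1 } };

// Which key comes back depends on the dictionary's internal layout,
// not on insertion order or sort order - don't rely on it.
string someKey = dict.Keys.First();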
But there's a reason to use `First()` anyway - or, more specifically, to use `FirstOrDefault()`. If you have a method that takes an `IEnumerable<T>` argument, and you know `T` is a type whose default value is `null` (that is, a reference type), your method can use `FirstOrDefault()` to test whether the sequence is empty.
Why would you do this instead of using `Count()`? To take advantage of deferred execution. If you call `FirstOrDefault()` on a generator, the generator yields one result and stops. If you call `Count()` on a generator, the generator has to enumerate all the way to the end of the sequence.
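To see the difference, here's a quick sketch (the `Numbers` iterator is made up for illustration) that logs how far each call drives the enumeration:

using System;
using System.Collections.Generic;
using System.Linq;

static IEnumerable<string> Numbers()
{
    Console.WriteLine("yielding one");
    yield return "one";
    Console.WriteLine("yielding two");
    yield return "two";
}

Numbers().FirstOrDefault(); // prints "yielding one", then stops
Numbers().Count();          // prints both lines: it has to walk the whole sequence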
So you can write a function like this:
bool ListIsEmpty(IEnumerable<string> list)
{
    // Pulls at most one element; null means "no elements"
    // (assuming the sequence itself never contains a null - see the edit below).
    return list.FirstOrDefault() == null;
}
and use it like this:
if (!ListIsEmpty(dict.Keys))
{
    Console.WriteLine("Dictionary is not empty");
}

if (!ListIsEmpty(dict.Keys.Where(x => x.Contains("foo"))))
{
    Console.WriteLine("Dictionary has at least one key containing 'foo'.");
}
and know that the code is doing the bare minimum that it has to do in order to make those decisions.
Edit:
I should point out another assumption the code above makes: that the `IEnumerable<T>` doesn't have a `null` as its first item!
That's always guaranteed for the `Keys` collection of a dictionary, or for a `DataRowCollection` (my primary use case for LINQ), or for `Where()` when run on one of those collections.
But it's not guaranteed for a `List<string>` or a `List<DataRow>`. So there are definitely circumstances in which you'd want to think twice before using `FirstOrDefault()`.
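For example, given a hypothetical list that legitimately starts with a `null`:

var words = new List<string> { null, "still", "here" };

// ListIsEmpty (defined above) reports true even though the list has three items.
Console.WriteLine(ListIsEmpty(words)); // True

When the elements themselves can be `null`, `Any()` is the safer emptiness test: it only checks whether an element exists, never looks at its value, and is just as lazy as `FirstOrDefault()`.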