This is a consequence of the confluence of two characteristics of C#.
The first is that C# never "magics up" a type for you. If C# must determine a "best" type from a given set of types, it always picks one of the types you gave it. It never says "none of the types you gave me are the best type; since the choices you gave me are all bad, I'm going to pick some random thing that you did not give me to choose from."
The second is that C# reasons from inside to outside. We do not say "Oh, I see you are trying to assign the conditional operator result to an ILogger; let me make sure that both branches work." The opposite happens: C# says "let me determine the best type returned by both branches, and verify that the best type is convertible to the target type."
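To make that concrete with the types from your question (assuming consoleLogger and suppressLogger are declared as two different classes that each implement ILogger but have no conversion to each other), consider:

ILogger logger = b ? consoleLogger : suppressLogger; // error: no best type

The compiler asks "what is the best type of b ? consoleLogger : suppressLogger?" before it ever looks at the target type ILogger, and since neither branch type is convertible to the other, there is no best type; the compiler will not invent ILogger on your behalf, because you did not supply it as one of the choices.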
The second rule is sensible because the target type might itself be what we are trying to determine. When you say

D d = b ? c : a;

it is clear what the target type is. But suppose you were instead calling

M(b ? c : a);

There might be a hundred different overloads of M, each with a different type for the formal parameter! We have to determine the type of the argument, and then discard the overloads of M that are not applicable because the argument type is not compatible with the formal parameter type; we don't go the other way.
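As a sketch, suppose M had these two overloads (invented here purely for illustration):

void M(ILogger logger) { ... } // hypothetical overload
void M(string message) { ... } // hypothetical overload
...
M(b ? consoleLogger : suppressLogger);

The type of the conditional expression is computed from consoleLogger and suppressLogger alone; since there is no best type between them, the argument has no type at all, and neither overload can even be checked for applicability. The candidate parameter types ILogger and string play no part in typing the argument.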
Consider what would happen if we went the other way:
M1(b1 ? M2(b3 ? M4() : M5()) : M6(b7 ? M8() : M9()));
Suppose there are a hundred overloads each of M1, M2 and M6. What do you do? Do you say: OK, if this is M1(Foo) then the results of M2(...) and M6(...) must both be convertible to Foo. Are they? Let's find out. Which overload of M2 is it? There are a hundred possibilities. Let's see whether each of them can accept the return types of M4 and M5... OK, we've tried all those, so we've found an M2 that works. Now what about M6? What if the "best" M2 we find is not compatible with the "best" M6? Should we backtrack and keep re-trying all 100 x 100 possibilities until we find a compatible pair? The problem just gets worse and worse.
We do reason in this manner for lambdas, and as a result overload resolution involving lambdas is at least NP-HARD in C#. That is bad enough right there; we would rather not add more NP-HARD problems for the compiler to solve.
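For contrast, here is roughly what that outside-to-inside reasoning looks like for lambdas; the overloads of N are invented for illustration:

void N(Func<int, int> f) { ... } // hypothetical overload
void N(Func<string, int> f) { ... } // hypothetical overload
...
N(x => x.Length);

The lambda has no type of its own, so the compiler takes the parameter type from each candidate delegate type in turn, tries to bind the body with x as an int and then as a string, and keeps the overloads for which the body binds (here only the Func<string, int> one, since int has no Length). Nest lambdas inside other overloaded calls and those trial bindings multiply combinatorially; that is essentially where the NP-HARD behaviour comes from.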
You can see the first rule in action in other places in the language as well. For example, if you said:

ILogger[] loggers = new[] { consoleLogger, suppressLogger };

you'd get a similar error; the inferred array element type must be the best type amongst the types of the expressions given. If no best type can be determined from them, we don't try to find a type you did not give us.
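One fix is to state the element type yourself, so there is nothing to infer; each element then merely has to be convertible to ILogger:

ILogger[] loggers = new ILogger[] { consoleLogger, suppressLogger };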
The same thing goes for generic method type inference. If you said:
void M<T>(T t1, T t2) { ... }
...
M(consoleLogger, suppressLogger);
Then T would not be inferred to be ILogger; this would be an error. T is inferred to be the best type amongst the supplied argument types, and there is no best type amongst them.
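Again, supplying the type argument yourself removes the inference problem entirely; the compiler then only has to check that each argument is convertible to ILogger:

M<ILogger>(consoleLogger, suppressLogger);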
For more details on how this design decision influences the behaviour of the conditional operator, see my series of articles on that topic.
If you are interested in why overload resolution that works "from outside to inside" is NP-HARD, see this article.