I am having some trouble getting Scala to infer the right type from a type projection.
Consider the following:
trait Foo {
  type X
}
trait Bar extends Foo {
  type X = String
}
The program compiles when I bring this implicit conversion into scope:
implicit def f(x: Bar#X): Foo#X = x
As this implicit conversion is correct for any F <: Foo, I wonder why the compiler does not do that by itself.
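The general form of that conversion would presumably look something like the sketch below (my own guess, not code from my actual program; whether implicit search can even instantiate F here is exactly the kind of inference I am asking about):

import scala.language.implicitConversions

// Sketch: F#X conforms to Foo#X whenever F <: Foo, so the body is just the identity.
implicit def g[F <: Foo](x: F#X): Foo#X = x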
You can also:
trait Foo {
  type X
}
trait Bar extends Foo {
  type X = String
}
class BarImpl extends Bar {
  def getX: X = "hi"
}
def baz[F <: Foo, T <: F#X](clz: F, x: T): Unit = { println("baz worked!") }
val bi = new BarImpl
val x: Bar#X = bi.getX
baz(bi, x)
but:
def baz2[F <: Foo, T <: F#X](x: T): Unit = { println("baz2 failed!") }
baz2(x)
fails with:
test.scala:22: error: inferred type arguments [Nothing,java.lang.String] do not conform to method baz2's type parameter bounds [F <: this.Foo,T <: F#X]
baz2(x)
^
one error found
I think, basically, F <: Foo tells the compiler that F has to be a subtype of Foo, but when it is handed an X it has no way of knowing which class that particular X comes from. Your X is just a String, and it carries no information pointing back to Bar. That is also why baz works: its clz: F argument gives the compiler a value to infer F from, whereas baz2 has nothing to pin F down, so F gets inferred as Nothing (which is what the error message shows).
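One way to see this (my own check, not part of the question): if you spell out the type arguments yourself, no inference is needed and the call should go through:

baz2[Bar, Bar#X](x)   // F and T given explicitly; Bar#X is just String here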
Note that:
def baz3[F <: Foo](x: F#X) = { println("baz3 worked!") }
baz3[Bar]("hi")
Also works. The fact that you defined val x: Bar#X = ??? just means that ??? is restricted to whatever Bar#X happens to be at compile time. The compiler knows Bar#X is String, so the type of x is just String, no different from any other String.
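To make that concrete (a small snippet of my own, reusing bi and x from above):

val y: String = x             // fine: x is simply a String
val z: Bar#X = "any string"   // also fine: any String is a Bar#X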