Question
Let's imagine passing these two equivalent expressions to a Scala macro:
- with compiler-inferred implicit conversion:
  1 + "foo"
- with explicitly invoked implicit conversion:
  any2stringadd(1) + "foo"
Is there a way to distinguish between these two inside the macro?
Answer 1:
First of all, the 1 + "foo" case is going to be tricky, because there isn't actually any implicit conversion happening there: Int itself really, truly does have this + method (unfortunately).
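You can see this for yourself with runtime reflection (a sketch; the exact showRaw output varies between Scala versions): the typechecked tree for 1 + "foo" selects + directly on the Int literal, with no any2stringadd call wrapped around it.

```scala
import scala.reflect.runtime.universe._

object StringAddDemo extends App {
  // reify captures the typechecked tree of the expression.
  val tree = reify(1 + "foo").tree

  // The function part is a plain Select of + on the Int literal,
  // not an application of any2stringadd.
  println(showRaw(tree))
}
```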
So you're out of luck if that's your use case, but it is possible to do what you're describing more generally. I'll assume the following setup in my examples below:
case class Foo(i: Int)
case class Bar(s: String)
implicit def foo2bar(foo: Foo): Bar = Bar(foo.i.toString)
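As a reminder of what the macro is trying to distinguish, both forms below typecheck to the same value; in the first the compiler inserts the view, in the second we write the call ourselves (a small sketch using the setup above, wrapped in a hypothetical ConversionDemo object so it compiles as a standalone file):

```scala
import scala.language.implicitConversions

object ConversionDemo {
  case class Foo(i: Int)
  case class Bar(s: String)
  implicit def foo2bar(foo: Foo): Bar = Bar(foo.i.toString)

  val inserted: Bar = Foo(42)            // compiler inserts foo2bar here
  val explicit: Bar = foo2bar(Foo(42))   // the call written out by hand
  // Both evaluate to Bar("42"); only the trees at the call sites differ.
}
```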
First for the elegant approach:
object ConversionDetector {
  import scala.language.experimental.macros
  import scala.reflect.macros.Context

  def sniff[A](tree: _): Boolean = macro sniff_impl[A]

  def sniff_impl[A: c.WeakTypeTag](c: Context)(tree: c.Tree) = {
    // First we confirm that the code typechecks at all:
    c.typeCheck(tree, c.universe.weakTypeOf[A])

    // Now we try it again with implicit views disabled (the flags are:
    // silent, withImplicitViewsDisabled, withMacrosDisabled):
    c.literal(
      c.typeCheck(tree, c.universe.weakTypeOf[A], true, true, false).isEmpty
    )
  }
}
Which works as desired:
scala> ConversionDetector.sniff[Bar](Foo(42))
res1: Boolean = true
scala> ConversionDetector.sniff[Bar](foo2bar(Foo(42)))
res2: Boolean = false
Unfortunately this requires untyped macros, which are currently only available in Macro Paradise. You can get what you want with plain old def macros in 2.10, but it's a bit of a hack:
object ConversionDetector {
  import scala.language.experimental.macros
  import scala.reflect.macros.Context

  def sniff[A](a: A) = macro sniff_impl[A]

  def sniff_impl[A: c.WeakTypeTag](c: Context)(a: c.Expr[A]) = {
    import c.universe._

    c.literal(
      a.tree.exists {
        case app @ Apply(fun, _) => app.pos.column == fun.pos.column
        case _ => false
      }
    )
  }
}
And again:
scala> ConversionDetector.sniff[Bar](Foo(42))
res1: Boolean = true
scala> ConversionDetector.sniff[Bar](foo2bar(Foo(42)))
res2: Boolean = false
The trick is to look for places where we see function application in our abstract syntax tree, and then to check whether the positions of the Apply node and its fun child have the same column, which indicates that the method call isn't explicitly present in the source.
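Concretely, the two call sites above produce trees along these lines (a sketch; the exact position bookkeeping depends on the compiler version and position mode):

```
// foo2bar(Foo(42))  -- conversion written explicitly in the source:
//   the Ident for foo2bar carries its own position in the source,
//   so app.pos.column and fun.pos.column can differ.
//
// Foo(42) with expected type Bar  -- view inserted by the typechecker:
//   the synthetic node for foo2bar has no source text of its own and
//   borrows its position from the argument, so the columns coincide.
```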
Answer 2:
That's a hack, but it might help you:
import scala.reflect.macros.Context
import language.experimental.macros

object Macros {
  def impl(c: Context)(x: c.Expr[Int]) = {
    import c.universe._
    val hasInferredImplicitArgs = x.tree.isInstanceOf[scala.reflect.internal.Trees#ApplyToImplicitArgs]
    val isAnImplicitConversion = x.tree.isInstanceOf[scala.reflect.internal.Trees#ApplyImplicitView]
    println(s"x = ${x.tree}, args = $hasInferredImplicitArgs, view = $isAnImplicitConversion")
    c.literalUnit
  }

  def foo(x: Int) = macro impl
}
import language.implicitConversions
import scala.reflect.ClassTag

object Test extends App {
  def bar[T: ClassTag](x: T) = x
  implicit def foo(x: String): Int = augmentString(x).toInt

  Macros.foo(2)
  Macros.foo(bar(2))
  Macros.foo("2")
}
08:30 ~/Projects/210x/sandbox (2.10.x)$ ss
x = 2, args = false, view = false
x = Test.this.bar[Int](2)(ClassTag.Int), args = true, view = false
x = Test.this.foo("2"), args = false, view = true
Source: https://stackoverflow.com/questions/15508660/how-to-distinguish-compiler-inferred-implicit-conversion-from-explicitly-invoked