Question
Suppose I have code like the following:
class Foo {
    Y func(X x) { ... }

    void doSomethingWithAFunc(Function<X,Y> f) { ... }

    void hotFunction() {
        doSomethingWithAFunc(this::func);
    }
}
Suppose that hotFunction is called very often. Would it then be advisable to cache this::func, maybe like this:
class Foo {
    Function<X,Y> f = this::func;
    ...

    void hotFunction() {
        doSomethingWithAFunc(f);
    }
}
As far as my understanding of Java method references goes, the virtual machine creates an object of an anonymous class when a method reference is used. Thus, caching the reference would create that object only once, while the first approach creates it on each function call. Is this correct?
Should method references that appear at hot positions in the code be cached, or is the VM able to optimize this and make the caching superfluous? Is there a general best practice about this, or is it highly VM-implementation-specific whether such caching is of any use?
Answer 1:
You have to make a distinction between frequent executions of the same call-site, whether for stateless or stateful lambdas, and frequent uses of a method reference to the same method by different call-sites.
Look at the following examples:
Runnable r1=null;
for(int i=0; i<2; i++) {
    Runnable r2=System::gc;
    if(r1==null) r1=r2;
    else System.out.println(r1==r2? "shared": "unshared");
}
Here, the same call-site is executed two times, producing a stateless lambda, and the current implementation will print "shared".
Runnable r1=null;
for(int i=0; i<2; i++) {
    Runnable r2=Runtime.getRuntime()::gc;
    if(r1==null) r1=r2;
    else {
        System.out.println(r1==r2? "shared": "unshared");
        System.out.println(
            r1.getClass()==r2.getClass()? "shared class": "unshared class");
    }
}
In this second example, the same call-site is executed two times, producing a lambda containing a reference to a Runtime instance, and the current implementation will print "unshared" but "shared class".
Runnable r1=System::gc, r2=System::gc;
System.out.println(r1==r2? "shared": "unshared");
System.out.println(
    r1.getClass()==r2.getClass()? "shared class": "unshared class");
In contrast, the last example contains two different call-sites producing an equivalent method reference, but as of 1.8.0_05 it will print "unshared" and "unshared class".
For each lambda expression or method reference, the compiler will emit an invokedynamic instruction that refers to a JRE-provided bootstrap method in the class LambdaMetafactory, along with the static arguments necessary to produce the desired lambda implementation class. It is left to the actual JRE what the metafactory produces, but it is a specified behavior of the invokedynamic instruction to remember and re-use the CallSite instance created on the first invocation.
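To make that bootstrap step more concrete, here is a small sketch (not part of the original answer; the class name is made up) that calls LambdaMetafactory.metafactory directly, roughly mirroring what the invokedynamic instruction for Runnable r = System::gc does. The final "shared" output relies on the current implementation backing stateless lambdas with a constant call-site:

import java.lang.invoke.CallSite;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MetafactoryDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // The target of the method reference System::gc
        MethodHandle impl = lookup.findStatic(
                System.class, "gc", MethodType.methodType(void.class));
        // Bootstrap: what the invokedynamic instruction does once per call-site
        CallSite cs = LambdaMetafactory.metafactory(
                lookup,
                "run",                                  // functional interface method
                MethodType.methodType(Runnable.class),  // factory type: () -> Runnable, nothing captured
                MethodType.methodType(void.class),      // erased signature of Runnable.run
                impl,                                   // method to invoke
                MethodType.methodType(void.class));     // instantiated signature
        // The call-site's target is invoked on every evaluation of the expression
        Runnable r1 = (Runnable) cs.getTarget().invokeExact();
        Runnable r2 = (Runnable) cs.getTarget().invokeExact();
        // Currently a stateless lambda yields a constant call-site, so this prints "shared"
        System.out.println(r1 == r2 ? "shared" : "unshared");
    }
}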
The current JRE produces a ConstantCallSite containing a MethodHandle to a constant object for stateless lambdas (and there's no imaginable reason to do it differently). And method references to static methods are always stateless. So for stateless lambdas and single call-sites the answer must be: don't cache; the JVM will do it, and if it doesn't, it must have strong reasons that you shouldn't counteract.
For lambdas that capture values, and this::func is a lambda holding a reference to the this instance, things are a bit different. The JRE is allowed to cache them, but this would imply maintaining some sort of Map between the actual captured values and the resulting lambdas, which could be more costly than just creating that simply structured lambda instance again. The current JRE does not cache lambda instances that have state.
But this does not mean that the lambda class is created every time. It just means that the resolved call-site will behave like an ordinary object construction instantiating the lambda class that has been generated on the first invocation.
Similar things apply to method references to the same target method created by different call-sites. The JRE is allowed to share a single lambda instance between them but in the current version it doesn’t, most probably because it is not clear whether the cache maintenance will pay off. Here, even the generated classes might differ.
So caching like in your example might make your program do different things than it would without caching, but not necessarily more efficient things. A cached object is not always more efficient than a temporary object. Unless you really measure a performance impact caused by lambda creation, you should not add any caching.
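If you do decide to measure, a minimal JMH sketch along these lines (the class and method names are made up for illustration, and JMH is assumed to be available as a dependency) would compare the two variants from the question:

import java.util.function.Function;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class MethodRefBench {
    // cached once, as in the question's second variant
    Function<Integer, Integer> cached = this::func;

    Integer func(Integer x) {
        return x + 1;
    }

    Integer doSomethingWithAFunc(Function<Integer, Integer> f) {
        return f.apply(42);
    }

    @Benchmark
    public Integer freshReference() {
        // evaluates this::func (a stateful method reference) on every call
        return doSomethingWithAFunc(this::func);
    }

    @Benchmark
    public Integer cachedReference() {
        return doSomethingWithAFunc(cached);
    }
}

Returning the result from each benchmark method keeps the JIT from eliminating the call entirely.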
I think there are only some special cases where caching might be useful:
- we are talking about lots of different call-sites referring to the same method
- the lambda is created in the constructor/class initializer, because later on the use-site will
  - be called by multiple threads concurrently, or
  - suffer from the lower performance of the first invocation
Answer 2:
As far as I understand the language specification, it allows this kind of optimization even if it changes the observable behaviour. See the following quotes from JLS §15.13.3:
§15.13.3 Run-time Evaluation of Method References
At run time, evaluation of a method reference expression is similar to evaluation of a class instance creation expression, insofar as normal completion produces a reference to an object. [..]
[..] Either a new instance of a class with the properties below is allocated and initialized, or an existing instance of a class with the properties below is referenced.
A simple test shows that method references for static methods (can) result in the same reference for each evaluation. The following program prints three lines, of which the first two are identical:
public class Demo {
    public static void main(String... args) {
        foobar();
        foobar();
        System.out.println((Runnable) Demo::foobar);
    }

    public static void foobar() {
        System.out.println((Runnable) Demo::foobar);
    }
}
I can't reproduce the same effect for non-static methods. However, I haven't found anything in the language specification that inhibits this optimization.
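For comparison, a minimal counterpart of that test for an instance method (the class name here is made up) currently prints "different references" on the JREs described in Answer 1, which matches that observation:

public class InstanceDemo {
    void foobar() {
        // no-op target method
    }

    public static void main(String... args) {
        InstanceDemo demo = new InstanceDemo();
        // Two evaluations of a bound method reference capturing "demo"
        Runnable r1 = demo::foobar;
        Runnable r2 = demo::foobar;
        // The current implementation creates a fresh instance each time
        System.out.println(r1 == r2 ? "same reference" : "different references");
    }
}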
So, as long as there is no performance analysis to determine the value of this manual optimization, I strongly advise against it. The caching affects the readability of the code, and it's unclear whether it has any value. Premature optimization is the root of all evil.
Answer 3:
One situation where it is a good idea, unfortunately, is if the lambda is passed as a listener that you want to remove at some point in the future. The cached reference will be needed, because passing another this::method reference will not be seen as the same object by the removal, and the original listener won't be removed. For example:
import javafx.beans.property.SimpleStringProperty;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;

public class Example
{
    public static void main( String[] args )
    {
        new SingleChangeListenerFail().listenForASingleChange();
        SingleChangeListenerFail.observableValue.set( "Here be a change." );
        SingleChangeListenerFail.observableValue.set( "Here be another change that you probably don't want." );

        new SingleChangeListenerCorrect().listenForASingleChange();
        SingleChangeListenerCorrect.observableValue.set( "Here be a change." );
        SingleChangeListenerCorrect.observableValue.set( "Here be another change but you'll never know." );
    }

    static class SingleChangeListenerFail
    {
        static SimpleStringProperty observableValue = new SimpleStringProperty();

        public void listenForASingleChange()
        {
            observableValue.addListener(this::changed);
        }

        private <T> void changed( ObservableValue<? extends T> observable, T oldValue, T newValue )
        {
            System.out.println( "New Value: " + newValue );
            // this::changed evaluates to a different object than the one registered
            // above, so the listener is never actually removed
            observableValue.removeListener(this::changed);
        }
    }

    static class SingleChangeListenerCorrect
    {
        static SimpleStringProperty observableValue = new SimpleStringProperty();
        ChangeListener<String> lambdaRef = this::changed;

        public void listenForASingleChange()
        {
            observableValue.addListener(lambdaRef);
        }

        private <T> void changed( ObservableValue<? extends T> observable, T oldValue, T newValue )
        {
            System.out.println( "New Value: " + newValue );
            // removing the cached reference removes the listener that was registered
            observableValue.removeListener(lambdaRef);
        }
    }
}
It would have been nice not to need lambdaRef in this case.
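As an aside, a sketch not from the original answer (class name made up, mirroring the example above): an anonymous ChangeListener can refer to itself via this, so a self-removing listener is possible without the cached field, at the price of giving up the method reference:

import javafx.beans.property.SimpleStringProperty;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;

class SingleChangeListenerAnonymous
{
    static SimpleStringProperty observableValue = new SimpleStringProperty();

    public void listenForASingleChange()
    {
        observableValue.addListener(new ChangeListener<String>()
        {
            @Override
            public void changed( ObservableValue<? extends String> observable, String oldValue, String newValue )
            {
                System.out.println( "New Value: " + newValue );
                // unlike in a lambda, "this" here is the listener itself
                observableValue.removeListener(this);
            }
        });
    }
}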
Source: https://stackoverflow.com/questions/23983832/is-method-reference-caching-a-good-idea-in-java-8