Why is Func<> created from Expression<Func<>> slower than Func<> declared directly?


Why is a Func<> created from an Expression<Func<>> via .Compile() considerably slower than just using a Func<> declared directly?

6 Answers
  • 2020-12-12 17:57

    Ultimately, what it comes down to is that Expression<T> is not a precompiled delegate. It's only an expression tree. Calling Compile on a LambdaExpression (which is what Expression<T> actually is) generates IL code at runtime and creates something akin to a DynamicMethod for it.

    If you just use a Func<T> in code, it is precompiled just like any other delegate reference.

    So there are two sources of slowness here:

    1. The initial compilation time to compile the Expression<T> into a delegate. This is huge. If you're doing this for every invocation, definitely don't (but that isn't the case here, since you start your Stopwatch after you call Compile).

    2. After you call Compile, what you have is basically a DynamicMethod. DynamicMethods (even when invoked through strongly typed delegates) are in fact slower to execute than direct calls. A Func<T> resolved at compile time is a direct call. There are performance comparisons out there between dynamically emitted IL and compile-time emitted IL. Random URL: http://www.codeproject.com/KB/cs/dynamicmethoddelegates.aspx?msg=1160046

    Also, in your Stopwatch test for the Expression<T>, you should start your timer at i = 1, not 0. I believe the compiled lambda will not be JIT-compiled until the first invocation, so there will be a performance hit for that first call.
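
    A minimal sketch of a benchmark that keeps these costs apart: the Compile() call is timed separately, the delegate is invoked once before the main timer starts (so the first-call JIT hit stays out of the loop), and only then is the steady-state invocation cost measured. This is my own illustration, not code from the question; the Foo class below is an assumption about the shape of the question's type, and the loop count matches the one used elsewhere in this thread.

    using System;
    using System.Diagnostics;
    using System.Linq.Expressions;

    class Foo
    {
        public Foo(int value) { Value = value; }
        public int Value { get; private set; }
    }

    static class CompiledExpressionBenchmark
    {
        static void Main()
        {
            Expression<Func<int, Foo>> expression = x => new Foo(x * 2);

            var compileTimer = Stopwatch.StartNew();
            Func<int, Foo> compiled = expression.Compile();   // cost #1: building the delegate
            compileTimer.Stop();

            compiled(0);   // warm-up call: pay the first-invocation JIT cost outside the timed loop

            var invokeTimer = Stopwatch.StartNew();            // cost #2: steady-state invocation
            int counter = 0;
            for (int i = 0; i < 300000000; i++)
                counter += compiled(i).Value;
            invokeTimer.Stop();

            Console.WriteLine("Compile: {0}  Invoke: {1}  (counter: {2})",
                compileTimer.Elapsed, invokeTimer.Elapsed, counter);
        }
    }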

  • 2020-12-12 18:03

    I was interested in the answer by Michael B., so in each case I added an extra call before the stopwatch even started. In debug mode the compiled version (case 2) was nearly twice as fast (6 seconds vs. 10 seconds), and in release mode both versions were on par (the difference was about ~0.2 seconds).

    What is striking to me is that with the JIT taken out of the equation, I got the opposite result from Martin.

    Edit: Initially I missed the Foo class, so the results above are for a Foo with a field, not a property. With the original Foo the comparison comes out the same, only the times are bigger -- 15 seconds for the direct func, 12 seconds for the compiled version. Again, in release mode the times are similar; now the difference is about ~0.5 seconds.

    However, this indicates that if your expression is more complex, there will be a real difference even in release mode.

  • 2020-12-12 18:10

    It is most likely because the first invocation of the code had not been JIT-compiled yet. I decided to look at the IL, and the two versions are virtually identical.

    // Requires: using System; using System.Linq.Expressions;
    //           using System.Reflection; using System.Reflection.Emit;
    // Foo is the type from the question (a constructor taking an int).
    Func<int, Foo> func = x => new Foo(x * 2);
    Expression<Func<int, Foo>> exp = x => new Foo(x * 2);
    var func2 = exp.Compile();

    // IL bytes of the compiler-generated method behind the plain Func<>
    Array.ForEach(func.Method.GetMethodBody().GetILAsByteArray(), b => Console.WriteLine(b));

    // The compiled expression is backed by a DynamicMethod; digging it out goes
    // through internal CLR members ("m_owner", "BakeByteArray"), so this is tied
    // to the Microsoft framework's implementation details.
    var mtype = func2.Method.GetType();
    var fiOwner = mtype.GetField("m_owner", BindingFlags.Instance | BindingFlags.NonPublic);
    var dynMethod = fiOwner.GetValue(func2.Method) as DynamicMethod;
    var ilgen = dynMethod.GetILGenerator();

    // IL bytes of the dynamic method behind the compiled expression
    byte[] il = ilgen.GetType()
        .GetMethod("BakeByteArray", BindingFlags.NonPublic | BindingFlags.Instance)
        .Invoke(ilgen, null) as byte[];
    Console.WriteLine("Expression version");
    Array.ForEach(il, b => Console.WriteLine(b));
    

    This code gets us the byte arrays and prints them to the console. Here is the output on my machine:

    2
    24
    90
    115
    13
    0
    0
    6
    42
    Expression version
    3
    24
    90
    115
    2
    0
    0
    6
    42
    

    And here is Reflector's version of the first function:

    L_0000: ldarg.0
    L_0001: ldc.i4.2
    L_0002: mul
    L_0003: newobj instance void ConsoleApplication7.Foo::.ctor(int32)
    L_0008: ret
    

    There are only two bytes different in the entire method! The first is the opening opcode: the first method starts with ldarg.0 (load the first argument), while the second starts with ldarg.1 (load the second argument). The difference is that a delegate generated from an expression actually has a Closure object as its target, so its real argument is shifted by one slot. This can also factor in.

    The next opcode for both is ldc.i4.2 (24), which pushes the constant 2 onto the stack; then comes mul (90), and then newobj (115). The next 4 bytes are the metadata token for the Foo constructor, and they are the second difference: the two methods are hosted in different scopes, since the anonymous method is a compiler-generated method in the program's own assembly while the dynamic method carries its own internal token table. Unfortunately, I haven't quite gotten to the point of figuring out how to resolve these tokens. The final opcode is 42, which is ret; every CLI method must end with ret, even methods that don't return anything.
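
    For the compiler-emitted lambda, at least, the token can be resolved with Module.ResolveMethod. This is a sketch of that idea (my addition, not something the original answer did), and it assumes the byte layout printed above, with the four token bytes starting at offset 4; the DynamicMethod's token lives in an internal scope, so it cannot be resolved the same way.

    // Resolve the 4-byte metadata token of the compiler-emitted lambda method.
    // Layout assumed from the dump above: [ldarg][ldc.i4.2][mul][newobj][token x4][ret].
    Func<int, Foo> func = x => new Foo(x * 2);
    byte[] ilBytes = func.Method.GetMethodBody().GetILAsByteArray();
    int token = BitConverter.ToInt32(ilBytes, 4);               // token bytes are little-endian
    MethodBase ctor = func.Method.Module.ResolveMethod(token);
    Console.WriteLine(ctor);                                    // expected: the Foo constructor taking an int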

    There are a few possibilities: the closure object is somehow making things slower, which might be true (but seems unlikely); or the jitter hadn't compiled the method yet, and since you were firing calls in rapid succession it didn't have time to JIT that path, so a slower path was taken. The C# compiler in VS may also emit different calling conventions and MethodAttributes, which can act as hints to the jitter to perform different optimizations.
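
    A quick way to see the closure difference mentioned above is to inspect the two delegates directly. This is just an illustrative check of my own; the exact target types depend on the compiler and framework version.

    // Compare the delegates' backing methods and targets (same usings and Foo as above).
    Func<int, Foo> func = x => new Foo(x * 2);
    Expression<Func<int, Foo>> exp = x => new Foo(x * 2);
    Func<int, Foo> func2 = exp.Compile();

    // Directly declared lambda: one int parameter; the target is compiler-dependent
    // (null for a static method, or a cached display-class instance).
    Console.WriteLine("{0} parameter(s), target = {1}",
        func.Method.GetParameters().Length, func.Target ?? "null");

    // Compiled expression: an extra Closure parameter comes first, and the target
    // is expected to be a System.Runtime.CompilerServices.Closure instance.
    Console.WriteLine("{0} parameter(s), target = {1}",
        func2.Method.GetParameters().Length, func2.Target ?? "null");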

    Ultimately, I would not even remotely worry about this difference. If you really are invoking your function 3 billion times in the course of your application, and the difference being incurred is 5 whole seconds, you're probably going to be ok.

  • 2020-12-12 18:13

    (This is not a proper answer, but is material intended to help discover the answer.)

    Statistics gathered from Mono 2.6.7 - Debian Lenny - Linux 2.6.26 i686 - 2.80GHz single core:

          Func: 00:00:23.6062578
    Expression: 00:00:23.9766248
    

    So on Mono at least both mechanisms appear to generate equivalent IL.

    This is the IL generated by Mono's gmcs for the anonymous method:

    // method line 6
    .method private static  hidebysig
           default class Foo '<Main>m__0' (int32 x)  cil managed
    {
        .custom instance void class [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::'.ctor'() =  (01 00 00 00 ) // ....
    
        // Method begins at RVA 0x2204
        // Code size 9 (0x9)
        .maxstack 8
        IL_0000:  ldarg.0
        IL_0001:  ldc.i4.2
        IL_0002:  mul
        IL_0003:  newobj instance void class Foo::'.ctor'(int32)
        IL_0008:  ret
    } // end of method Default::<Main>m__0
    

    I will work on extracting the IL generated by the expression compiler.
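
    One way to do that without relying on runtime internals (a sketch of my own, not the poster's actual follow-up, and it assumes a framework that has Expression<TDelegate>.CompileToMethod, the same API used in a later answer here) is to compile the expression into a persisted assembly and then disassemble it, for example with monodis:

    // Persist the expression-compiled method so its IL can be disassembled.
    // Requires: using System; using System.Linq.Expressions;
    //           using System.Reflection; using System.Reflection.Emit;
    // Foo is the type from the question; the names below are illustrative.
    Expression<Func<int, Foo>> exp = x => new Foo(x * 2);

    var ab = AppDomain.CurrentDomain.DefineDynamicAssembly(
        new AssemblyName("ExprDump"), AssemblyBuilderAccess.Save);
    var mod = ab.DefineDynamicModule("ExprDump", "ExprDump.dll");
    var tb = mod.DefineType("ExprHolder", TypeAttributes.Public);
    var mb = tb.DefineMethod("Compiled",
        MethodAttributes.Public | MethodAttributes.Static);
    exp.CompileToMethod(mb);
    tb.CreateType();
    ab.Save("ExprDump.dll");
    // Then: monodis ExprDump.dll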

  • 2020-12-12 18:14

    Just for the record: I can reproduce the numbers with the code above.

    One thing to note is that both delegates create a new instance of Foo for every iteration. This could be more important than how the delegates are created. Not only does that lead to a lot of heap allocations, but GC may also affect the numbers here.

    If I change the code to

    Func<int, int> test1 = x => x * 2;
    

    and

    Expression<Func<int, int>> expression = x => x * 2;
    Func<int, int> test2 = expression.Compile();
    

    The performance numbers are virtually identical (actually result2 is a little better than result1). This supports the theory that the expensive part is heap allocations and/or collections and not how the delegate is constructed.
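
    For reference, here is a minimal version of that comparison (my own sketch; the variable names match the snippets above and the loop count matches the one used elsewhere in this thread):

    // Requires: using System; using System.Diagnostics; using System.Linq.Expressions;
    Func<int, int> test1 = x => x * 2;

    Expression<Func<int, int>> expression = x => x * 2;
    Func<int, int> test2 = expression.Compile();

    test1(0); test2(0);                                // warm up both delegates first

    int counter1 = 0, counter2 = 0;

    var s1 = Stopwatch.StartNew();
    for (int i = 0; i < 300000000; i++) counter1 += test1(i);
    s1.Stop();

    var s2 = Stopwatch.StartNew();
    for (int i = 0; i < 300000000; i++) counter2 += test2(i);
    s2.Stop();

    Console.WriteLine("result1 (direct):   {0}  ({1})", s1.Elapsed, counter1);
    Console.WriteLine("result2 (compiled): {0}  ({1})", s2.Elapsed, counter2);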

    UPDATE

    Following the comment from Gabe, I tried changing Foo to be a struct. Unfortunately this yields more or less the same numbers as the original code, so perhaps heap allocation/garbage collection is not the cause after all.

    However, I also verified the numbers for delegates of type Func<int, int>, and they are quite similar to each other and much lower than the numbers for the original code.

    I'll keep digging and look forward to seeing more/updated answers.

  • 2020-12-12 18:21

    As others have mentioned, the overhead of calling a dynamic delegate is causing your slowdown. On my computer that overhead is about 12ns with my CPU at 3GHz. The way to get around that is to load the method from a compiled assembly, like this:

    // Requires: using System; using System.Diagnostics;
    //           using System.Reflection; using System.Reflection.Emit;
    // "expression" is the Expression<Func<int, Foo>> from the question.
    var ab = AppDomain.CurrentDomain.DefineDynamicAssembly(
                 new AssemblyName("assembly"), AssemblyBuilderAccess.Run);
    var mod = ab.DefineDynamicModule("module");
    var tb = mod.DefineType("type", TypeAttributes.Public);
    var mb = tb.DefineMethod(
                 "test3", MethodAttributes.Public | MethodAttributes.Static);
    expression.CompileToMethod(mb);        // bake the expression into a real static method
    var t = tb.CreateType();
    var test3 = (Func<int, Foo>)Delegate.CreateDelegate(
                    typeof(Func<int, Foo>), t.GetMethod("test3"));

    int counter3 = 0;
    Stopwatch s3 = new Stopwatch();
    s3.Start();
    for (int i = 0; i < 300000000; i++)
    {
        counter3 += test3(i).Value;
    }
    s3.Stop();
    var result3 = s3.Elapsed;
    

    When I add the above code, result3 is always just a fraction of a second higher than result1, for about a 1ns overhead.

    So why even bother with a compiled lambda (test2) when you can have a faster delegate (test3)? Because creating the dynamic assembly adds much more overhead up front, and it only saves you 10-20ns on each invocation.
