Is the object initializer in C# 3.0 faster than the regular way?
Is this faster

    Object object = new Object
    {
        id = 1
    }

than this?

    Object object = new Object();
    object.id = 1;
OK, from the comment by [SLaks][2], and from testing it myself after reading it, it turns out the difference I describe here is only present in debug builds. If you compile for release, both forms compile to the same code. Learn something new every day :)

(So the rest of this answer assumes debug mode.)
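You can check this yourself by compiling the same source once for debug and once for release and comparing the disassembly. A rough sketch, assuming the command-line compiler `csc` and `ildasm` are on your path:

```shell
# Debug build: object initializers go through a compiler-generated temp local
csc /debug:full /optimize- /out:ProgramDebug.exe Program.cs
ildasm /text ProgramDebug.exe > debug.il

# Release build: both forms should produce identical IL
csc /optimize+ /out:ProgramRelease.exe Program.cs
ildasm /text ProgramRelease.exe > release.il
```

Then diff `debug.il` against `release.il` (or just compare the method bodies by eye).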
There is a difference in the IL produced, contrary to what others here have answered, but the difference is negligible, and shouldn't actually have any performance impact on your program at all.
The difference is that when you use an object initializer, like this:

    Object object = new Object
    {
        id = 1
    }

the code is actually compiled as though you had written this:
Object temp = new Object();
temp.id = 1;
Object object = temp;
(of course, barring the fact that Object doesn't have an Id field/property, and you can't actually name a variable "object" without using the verbatim identifier syntax "@object".)
Why would this matter? Well, one difference you could possibly notice is exception behavior: if any of the assignments throws (either the property setter itself, or the expression or function producing the value), then with the object initializer you won't see any object in the variable at all, whereas with your "manual" code, the object will be there, initialized right up to the point where the exception occurred.
A minor difference, which shouldn't make much of a performance difference, but might change the behavior of your program.
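Here's a small sketch of that exception difference. The `Widget` type is hypothetical (not from the question); it just has a setter that can throw:

```csharp
using System;

// Hypothetical type for illustration: a setter that throws on negative values.
class Widget
{
    private int id;

    public int Id
    {
        get { return id; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            id = value;
        }
    }

    public string Name { get; set; }
}

class Demo
{
    public static Widget TryManual()
    {
        Widget w = null;
        try
        {
            w = new Widget();
            w.Name = "a";
            w.Id = -1;   // setter throws here
        }
        catch (ArgumentOutOfRangeException) { }
        return w;        // non-null: the object exists, partially initialized
    }

    public static Widget TryInitializer()
    {
        Widget w = null;
        try
        {
            // the assignments run against a hidden temp; w is only
            // assigned after all of them succeed
            w = new Widget { Name = "a", Id = -1 };
        }
        catch (ArgumentOutOfRangeException) { }
        return w;        // null: the temp was never copied into w
    }

    static void Main()
    {
        Console.WriteLine(TryManual() != null);      // True
        Console.WriteLine(TryInitializer() == null); // True
    }
}
```

So after a failed initializer you have `null`; after a failed manual initialization you have a half-initialized object.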
This can be verified by looking at the IL. Take this C# program:
    using System;

    namespace ConsoleApplication3
    {
        class Test
        {
            public Int32 Id { get; set; }
        }

        class Program
        {
            static void Main(string[] args)
            {
                M1();
                M2();
            }

            static void M1()
            {
                Test t = new Test();
                t.Id = 1;
            }

            static void M2()
            {
                Test t = new Test { Id = 1 };
            }

            static void M3()
            {
                Test t;
                Test temp = new Test();
                temp.Id = 1;
                t = temp;
            }
        }
    }
and compile it, run it through Reflector and you'll get this for M1, M2 and M3:
    .method private hidebysig static void M1() cil managed
    {
        .maxstack 2
        .locals init (
            [0] class ConsoleApplication3.Test t)
        L_0000: nop
        L_0001: newobj instance void ConsoleApplication3.Test::.ctor()
        L_0006: stloc.0
        L_0007: ldloc.0
        L_0008: ldc.i4.1
        L_0009: callvirt instance void ConsoleApplication3.Test::set_Id(int32)
        L_000e: nop
        L_000f: ret
    }

    .method private hidebysig static void M2() cil managed
    {
        .maxstack 2
        .locals init (
            [0] class ConsoleApplication3.Test t,
            [1] class ConsoleApplication3.Test <>g__initLocal0)
        L_0000: nop
        L_0001: newobj instance void ConsoleApplication3.Test::.ctor()
        L_0006: stloc.1
        L_0007: ldloc.1
        L_0008: ldc.i4.1
        L_0009: callvirt instance void ConsoleApplication3.Test::set_Id(int32)
        L_000e: nop
        L_000f: ldloc.1
        L_0010: stloc.0
        L_0011: ret
    }

    .method private hidebysig static void M3() cil managed
    {
        .maxstack 2
        .locals init (
            [0] class ConsoleApplication3.Test t,
            [1] class ConsoleApplication3.Test temp)
        L_0000: nop
        L_0001: newobj instance void ConsoleApplication3.Test::.ctor()
        L_0006: stloc.1
        L_0007: ldloc.1
        L_0008: ldc.i4.1
        L_0009: callvirt instance void ConsoleApplication3.Test::set_Id(int32)
        L_000e: nop
        L_000f: ldloc.1
        L_0010: stloc.0
        L_0011: ret
    }
If you look at the code, the only thing that differs between M2 and M3 is the name of the second local (<>g__initLocal0 vs temp).
But as others have already answered, it's not a difference you will ever notice in terms of performance.
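If you want to convince yourself, a crude `Stopwatch` micro-benchmark along these lines will do (the `Item` type is a hypothetical stand-in for the question's class; exact numbers will vary by machine, and in a release build you should expect the two loops to time essentially the same):

```csharp
using System;
using System.Diagnostics;

// Hypothetical stand-in for the question's type.
class Item { public int Id { get; set; } }

class Benchmark
{
    public static void RunManual(int n)
    {
        for (int i = 0; i < n; i++)
        {
            Item t = new Item();
            t.Id = i;
        }
    }

    public static void RunInitializer(int n)
    {
        for (int i = 0; i < n; i++)
        {
            Item t = new Item { Id = i };
        }
    }

    static void Main()
    {
        const int N = 10000000;

        // warm up the JIT so neither loop pays first-call costs
        RunManual(1000);
        RunInitializer(1000);

        Stopwatch sw = Stopwatch.StartNew();
        RunManual(N);
        sw.Stop();
        Console.WriteLine("manual:      " + sw.ElapsedMilliseconds + " ms");

        sw.Reset();
        sw.Start();
        RunInitializer(N);
        sw.Stop();
        Console.WriteLine("initializer: " + sw.ElapsedMilliseconds + " ms");
    }
}
```

Keep in mind the JIT may optimize aggressively here (the objects are never used), so treat any numbers you get as a sanity check rather than a real measurement.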