(Excuse me for posting this today, but I was looking for a place to put this piece of code, and this question seemed perfect.)
As an extension of Gravell's article:
    using System;
    using System.Linq.Expressions;

    public static class Add<T>
    {
        // Built once per closed generic type by the static constructor,
        // then cached for all later calls.
        public static readonly Func<T, T, T> Do;

        static Add()
        {
            var par1 = Expression.Parameter(typeof(T));
            var par2 = Expression.Parameter(typeof(T));
            var add = Expression.Add(par1, par2);
            Do = Expression.Lambda<Func<T, T, T>>(add, par1, par2).Compile();
        }
    }
You use it like:

    int sum = Add<int>.Do(x, y);
The advantage is that we use the .NET type system to keep the various "variants" of Add, creating new ones only when necessary. So the first time you call Add<int>.Do(...) the Expression will be built, but on any later call Add<int> will already be fully initialized. In a simple benchmark it was about 2x slower than direct addition, which I think is very good. And it is compatible with types that overload operator +. Clearly, building the other operations is easy.
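For example, here is a minimal sketch of the same pattern for subtraction (a hypothetical Subtract<T> helper, not from the article; the remaining operators follow identically):

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical analogue of Add<T>: the compiled delegate is built
// once per closed type in the static constructor and then cached.
public static class Subtract<T>
{
    public static readonly Func<T, T, T> Do;

    static Subtract()
    {
        var par1 = Expression.Parameter(typeof(T));
        var par2 = Expression.Parameter(typeof(T));
        var body = Expression.Subtract(par1, par2);
        Do = Expression.Lambda<Func<T, T, T>>(body, par1, par2).Compile();
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(Subtract<int>.Do(7, 3));        // 4
        Console.WriteLine(Subtract<double>.Do(1.5, 0.5)); // 1
    }
}
```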
Addition from Meirion Hughes
The method can be extended with meta-coding so you can handle cases of T1 operation T2. For instance, if T1 is a number, it first needs to be converted to T2 == double before the operator *, and then converted back. Whereas when T1 is Foo and Foo has an operator to multiply with a T2 == double, you can omit the conversions. The try/catch is necessary because it is the easiest way to check whether T operator *(T, double) is present.
    using System;
    using System.Linq.Expressions;

    public static class Scale<T>
    {
        public static Func<T, double, T> Do { get; private set; }

        static Scale()
        {
            var par1 = Expression.Parameter(typeof(T));
            var par2 = Expression.Parameter(typeof(double));
            try
            {
                // Works when T directly supports T * double -> T
                // (e.g. T == double, or T overloads operator *).
                Do = Expression
                    .Lambda<Func<T, double, T>>(
                        Expression.Multiply(par1, par2),
                        par1, par2)
                    .Compile();
            }
            catch
            {
                // Fallback: convert T -> double, multiply,
                // then convert the result back to T.
                Do = Expression
                    .Lambda<Func<T, double, T>>(
                        Expression.Convert(
                            Expression.Multiply(
                                Expression.Convert(par1, typeof(double)),
                                par2),
                            typeof(T)),
                        par1, par2)
                    .Compile();
            }
        }
    }
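To see both paths in action, here is a self-contained sketch (a hypothetical Foo type with its own operator *, plus a condensed copy of the Scale<T> class above so the sample runs on its own):

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical type with its own operator *(Foo, double): for it the
// first branch of Scale<T> compiles directly, with no conversions.
public class Foo
{
    public double Value;
    public Foo(double value) { Value = value; }
    public static Foo operator *(Foo f, double factor) => new Foo(f.Value * factor);
}

// Condensed copy of the Scale<T> class above, for a runnable whole.
public static class Scale<T>
{
    public static Func<T, double, T> Do { get; private set; }

    static Scale()
    {
        var par1 = Expression.Parameter(typeof(T));
        var par2 = Expression.Parameter(typeof(double));
        try
        {
            // First branch: T * double -> T exists as-is.
            Do = Expression.Lambda<Func<T, double, T>>(
                Expression.Multiply(par1, par2), par1, par2).Compile();
        }
        catch
        {
            // Fallback: T -> double, multiply, explicit convert back to T.
            Do = Expression.Lambda<Func<T, double, T>>(
                Expression.Convert(
                    Expression.Multiply(Expression.Convert(par1, typeof(double)), par2),
                    typeof(T)),
                par1, par2).Compile();
        }
    }
}

public static class Program
{
    public static void Main()
    {
        // Foo has operator *(Foo, double), so no conversion is involved.
        Console.WriteLine(Scale<Foo>.Do(new Foo(3.0), 2.0).Value); // 6

        // int * double is not a valid binary expression, so the fallback
        // round-trips through double and truncates on the way back.
        Console.WriteLine(Scale<int>.Do(5, 0.5)); // 2
    }
}
```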