This article on MSDN states that you can use as many try/catch blocks as you want and not incur any performance cost, as long as no actual exception is thrown.
Since I always…
The try/catch/finally/fault block itself has essentially no overhead in an optimized release assembly. While there is often additional IL added for catch and finally blocks, when no exception is thrown there is little difference in behavior: rather than a simple ret, there is usually a leave to a later ret.
The true cost of try/catch/finally blocks occurs when handling an exception. In that case an exception must be created, stack crawl marks must be placed, and, if the exception is handled and its StackTrace property is accessed, a stack walk is incurred. The heaviest operation is the stack trace, which follows the previously set stack crawl marks to build up a StackTrace object that can be used to display where the error happened and the calls it bubbled up through.
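A minimal sketch of where that cost shows up (this is my illustration, not code from the answer; the exception type, message, and iteration count are arbitrary), comparing merely catching an exception with catching it and reading StackTrace:

using System;
using System.Diagnostics;

static class StackTraceCostSketch
{
    static void Main()
    {
        const int iterations = 10000;
        int traceLength = 0;

        // Catch only: the exception is created and handled, but no stack
        // trace string is ever built from the crawl marks.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { throw new InvalidOperationException("boom"); }
            catch (InvalidOperationException) { }
        }
        sw.Stop();
        Console.WriteLine("Catch only:         {0} ms", sw.Elapsed.TotalMilliseconds);

        // Catch and read StackTrace: this forces the stack walk described above.
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            try { throw new InvalidOperationException("boom"); }
            catch (InvalidOperationException ex) { traceLength += ex.StackTrace.Length; }
        }
        sw.Stop();
        Console.WriteLine("Catch + StackTrace: {0} ms ({1})", sw.Elapsed.TotalMilliseconds, traceLength);
    }
}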
If there is no behavior at all inside the try block, then the extra cost of 'leave to ret' versus a plain 'ret' will dominate, and there will obviously be a measurable difference. However, in any other situation, where the try clause contains some kind of behavior, the cost of the block itself will be completely swamped by that work.
See the discussion of try/catch implementation for details on how try/catch blocks work, and on how some implementations have high overhead and some have zero overhead when no exceptions occur.
A difference of just 34 milliseconds is smaller than the margin of error for a test like this.
As you've noticed, when you increase the duration of the test that difference just falls away and the performance of the two sets of code is effectively the same.
When doing this sort of benchmark I try to loop over each section of code for at least 20 seconds, preferably longer, and ideally for several hours.
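For example, a minimal harness along those lines (my sketch; the 20-second floor, the names, and the delegate-based shape are illustrative choices, not taken from the answer):

using System;
using System.Diagnostics;

static class LongRunningBenchmark
{
    // Runs the candidate code repeatedly until a minimum wall-clock duration
    // has elapsed, then reports throughput rather than a single tiny timing.
    static void Measure(string label, Action body, TimeSpan minimumDuration)
    {
        long iterations = 0;
        Stopwatch sw = Stopwatch.StartNew();
        while (sw.Elapsed < minimumDuration)
        {
            body();
            iterations++;
        }
        sw.Stop();
        Console.WriteLine("{0}: {1:N0} iterations/s", label, iterations / sw.Elapsed.TotalSeconds);
    }

    static void Main()
    {
        TimeSpan floor = TimeSpan.FromSeconds(20);   // at least 20 seconds per section
        int sink = 0;
        Measure("without try/catch", () => sink += 1, floor);
        Measure("with try/catch", () => { try { sink += 1; } catch { } }, floor);
        Console.WriteLine(sink);   // observe 'sink' so the work cannot be optimized away
    }
}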
The actual computation is so minimal that accurate measurements are very tricky. It looks to me like try/catch might add a very small fixed amount of extra time to the routine. I would hazard a guess, not knowing anything about how exceptions are implemented in C#, that this is mostly just initialization of the exception paths and perhaps a slight extra load on the JIT.
For any actual use, the time spent on the computation will so overwhelm the time spent fiddling with try-catch that the cost of try-catch can be taken as near zero.
The first problem is your test code: you used stopwatch.Elapsed.Milliseconds, which gives only the millisecond component of the elapsed time. Use TotalMilliseconds to get the whole duration.
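To see why that matters, a small illustration of the difference (the 1.994-second value is made up):

using System;

class ElapsedDemo
{
    static void Main()
    {
        // Stopwatch.Elapsed is a TimeSpan; pretend it came back as 1.994 seconds.
        TimeSpan elapsed = TimeSpan.FromSeconds(1.994);

        // Milliseconds is only the millisecond component (0-999) of the value.
        Console.WriteLine(elapsed.Milliseconds);      // 994
        // TotalMilliseconds is the whole duration expressed in milliseconds.
        Console.WriteLine(elapsed.TotalMilliseconds); // 1994
    }
}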
If no exception is thrown, the difference is minimal.
But the real question is: "Do I need to check for exceptions myself, or let C# handle the exception throwing?"
Clearly, checking alone wins. Try running this:
private void TryCatchPerformance()
{
    int iterations = 10000;
    textBox1.Text = "";

    Stopwatch stopwatch = Stopwatch.StartNew();
    int c = 0;
    for (int i = 0; i < iterations; i++)
    {
        try
        {
            c += i / (i % 50);
        }
        catch (Exception)
        {
        }
    }
    stopwatch.Stop();
    Debug.WriteLine(String.Format("With try catch: {0}", stopwatch.Elapsed.TotalSeconds));

    Stopwatch stopwatch2 = Stopwatch.StartNew();
    int c2 = 0;
    for (int i = 0; i < iterations; i++)
    {
        int iMod50 = (i % 50);
        if (iMod50 > 0)
            c2 += i / iMod50;
    }
    stopwatch2.Stop();
    Debug.WriteLine(String.Format("Without try catch: {0}", stopwatch2.Elapsed.TotalSeconds));
}
Output (OBSOLETE -- see the correction below):

With try catch: 1.9938401
Without try catch: 8.92E-05

Amazing: only 10,000 iterations, with 200 exceptions.

CORRECTION: I originally ran that in DEBUG, and Visual Studio wrote each exception to the Output window. These are the RELEASE results -- a lot less overhead, but still roughly a 7,500% improvement:

With try catch: 0.0546915
Checking alone: 0.0007294
With try catch, throwing my own (same) Exception object: 0.0265229
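The third figure presumably comes from a variant in which the loop throws a single, pre-constructed exception instance instead of letting the divide-by-zero create a new one each time. A sketch of what that loop might look like, mirroring the method above (this is my guess at the shape, not the answerer's actual code):

// Assumes the same usings and surrounding class as TryCatchPerformance above.
int iterations = 10000;
Exception reused = new Exception("manual failure");   // one instance, thrown repeatedly

Stopwatch stopwatch3 = Stopwatch.StartNew();
int c3 = 0;
for (int i = 0; i < iterations; i++)
{
    try
    {
        // Throw the pre-built exception on the iterations that would otherwise
        // divide by zero, so the runtime never has to construct a new one.
        if (i % 50 == 0) throw reused;
        c3 += i / (i % 50);
    }
    catch (Exception)
    {
    }
}
stopwatch3.Stop();
Debug.WriteLine(String.Format("With try catch, throwing my own exception: {0}", stopwatch3.Elapsed.TotalSeconds));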
Note that I only have Mono available:
// a.cs
public class x {
    static void Main() {
        int x = 0;
        x += 5;
        return;
    }
}

// b.cs
public class x {
    static void Main() {
        int x = 0;
        try {
            x += 5;
        } catch (System.Exception) {
            throw;
        }
        return;
    }
}
Disassembling these:
// a.cs
default void Main () cil managed
{
    // Method begins at RVA 0x20f4
    .entrypoint
    // Code size 7 (0x7)
    .maxstack 3
    .locals init (
        int32 V_0)
    IL_0000: ldc.i4.0
    IL_0001: stloc.0
    IL_0002: ldloc.0
    IL_0003: ldc.i4.5
    IL_0004: add
    IL_0005: stloc.0
    IL_0006: ret
} // end of method x::Main
and
// b.cs
default void Main () cil managed
{
    // Method begins at RVA 0x20f4
    .entrypoint
    // Code size 20 (0x14)
    .maxstack 3
    .locals init (
        int32 V_0)
    IL_0000: ldc.i4.0
    IL_0001: stloc.0
    .try { // 0
        IL_0002: ldloc.0
        IL_0003: ldc.i4.5
        IL_0004: add
        IL_0005: stloc.0
        IL_0006: leave IL_0013
    } // end .try 0
    catch class [mscorlib]System.Exception { // 0
        IL_000b: pop
        IL_000c: rethrow
        IL_000e: leave IL_0013
    } // end handler 0
    IL_0013: ret
} // end of method x::Main
The main difference I see is that a.cs goes straight to ret at IL_0006, whereas b.cs has to leave to IL_0013 at IL_0006. My best guess, from my example, is that the leave is a (relatively) expensive jump when compiled to machine code -- that may or may not be the case, especially in your for loop. That is to say, the try-catch has no inherent overhead, but jumping over the catch has a cost, like any conditional branch.
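To put a rough number on that guess, one could time a method whose body is only the addition against one that wraps the same addition in a try/catch that never fires. The sketch below is mine, with arbitrary names and counts, not code from the answer; NoInlining is used so each call really takes its own return path:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

public static class LeaveVersusRet
{
    // NoInlining keeps the JIT from folding these into the calling loop,
    // so each call executes its own ret (or leave-then-ret) epilogue.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Plain(int x) { return x + 5; }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Guarded(int x)
    {
        try { return x + 5; }
        catch (Exception) { throw; }   // never reached; no exception is thrown
    }

    public static void Main()
    {
        const int iterations = 100000000;
        int sink = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink = Plain(sink);
        sw.Stop();
        Console.WriteLine("plain:   {0} ms", sw.Elapsed.TotalMilliseconds);

        sink = 0;
        sw.Restart();
        for (int i = 0; i < iterations; i++) sink = Guarded(sink);
        sw.Stop();
        Console.WriteLine("guarded: {0} ms (sink={1})", sw.Elapsed.TotalMilliseconds, sink);
    }
}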