Note: I may have chosen the wrong word in the title; perhaps I'm really talking about polynomial growth here. See the benchmark.
Accessing a new type for the first time causes the runtime to compile it from IL to native code (x86, etc.). The runtime also optimizes the code, which produces different results for value types and reference types.
And List<int> clearly will be optimized differently than List<List<int>>.
Thus EmptyStack<int> and NonEmptyStack<int, EmptyStack<int>> and so on will also be handled as completely different types, and will all be 'recompiled' and optimized.
(As far as I know!)
With each further layer of nesting, the complexity of the resulting type grows and the optimization takes longer.
So adding one layer takes 1 step to recompile and optimize, the next layer takes 2 steps plus the first step (or so), the 3rd layer takes 1 + 2 + 3 steps, and so on; the total work grows roughly quadratically with the depth of nesting.
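To make that concrete, here is a minimal sketch of the kind of type-per-depth stack presumably under discussion (the original definitions aren't quoted in this answer, so these EmptyStack/NonEmptyStack shapes are reconstructed, not copied). Every Push returns a brand-new generic instantiation, so a stack of depth N drags in N distinct types for the runtime to compile:

public interface IStack<T> { }

public sealed class EmptyStack<T> : IStack<T>
{
    public NonEmptyStack<T, EmptyStack<T>> Push(T top)
    {
        return new NonEmptyStack<T, EmptyStack<T>>(top, this);
    }
}

public sealed class NonEmptyStack<T, TRest> : IStack<T> where TRest : IStack<T>
{
    public readonly T Top;
    public readonly TRest Rest;

    public NonEmptyStack(T top, TRest rest)
    {
        Top = top;
        Rest = rest;
    }

    // The return type nests one level deeper on every call, e.g. depth 3 is
    // NonEmptyStack<int, NonEmptyStack<int, NonEmptyStack<int, EmptyStack<int>>>>.
    public NonEmptyStack<T, NonEmptyStack<T, TRest>> Push(T top)
    {
        return new NonEmptyStack<T, NonEmptyStack<T, TRest>>(top, this);
    }
}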
If James and the other people are correct about types being created at runtime, then performance is limited by the speed of type creation. So why would type creation be exponentially slow? By definition, types are different from each other. Consequently, every new type causes a series of increasingly different memory allocation and deallocation patterns, and the speed is then limited by how efficiently the GC manages memory automatically. There are some aggressive sequences that will slow down any memory manager, no matter how good it is: the GC and the allocator spend more and more time looking for optimally sized pieces of free memory for each new allocation size.
Answer:
Because you found one very aggressive sequence, which fragments memory so badly, and so fast, that the GC gets hopelessly confused.
What one can learn from this is that really fast real-world apps (for example, algorithmic stock-trading apps) are very plain pieces of straight-line code with static data structures, allocated only once for the whole run of the application, as sketched below.
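As a hedged illustration of that style (the PriceWindow name and shape are mine, not from any real trading system): allocate one fixed buffer up front and overwrite it in place for the whole run, so the GC never has to hunt for free blocks.

public sealed class PriceWindow
{
    // Allocated once at startup; never resized, never abandoned to the GC.
    private readonly double[] _buffer;
    private int _next;

    public PriceWindow(int capacity)
    {
        _buffer = new double[capacity];
    }

    public void Record(double price)
    {
        _buffer[_next] = price;               // overwrite in place
        _next = (_next + 1) % _buffer.Length; // ring-buffer wraparound
    }
}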
In Java, computation time appears to be a little more than linear and far more efficient than what you're reporting in .NET. Using the testRandomPopper method from my answer, it takes ~4 seconds to run with N=10,000,000 and ~10 seconds with N=20,000,000.
Is there a desperate need to have a distinction between the empty stack and the non-empty stack?
From a practical point of view, you can't pop the value of an arbitrary stack without fully qualifying the type, and after adding 1,000 values that's an insanely long type name.
Why not just do this:
public interface IImmutableStack<T>
{
    T Top { get; }
    IImmutableStack<T> Pop { get; }
    IImmutableStack<T> Push(T x);
}

public class ImmutableStack<T> : IImmutableStack<T>
{
    private ImmutableStack(T top, IImmutableStack<T> pop)
    {
        this.Top = top;
        this.Pop = pop;
    }

    public T Top { get; private set; }
    public IImmutableStack<T> Pop { get; private set; }

    // Static factory: creates the bottom element; its Pop is null.
    public static IImmutableStack<T> Push(T x)
    {
        return new ImmutableStack<T>(x, null);
    }

    // Instance Push (explicit interface implementation): stacks a new
    // element on top of this one. The type stays IImmutableStack<T>
    // no matter how deep the stack gets.
    IImmutableStack<T> IImmutableStack<T>.Push(T x)
    {
        return new ImmutableStack<T>(x, this);
    }
}
You can pass around any IImmutableStack<T>, and you only need to check for Pop == null to know you've hit the end of the stack.
Otherwise this has the semantics you're trying to code without the performance penalty. I created a stack with 10,000,000 values in 1.873 seconds with this code.
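For illustration, a minimal usage sketch (the Demo wrapper and the value of n are mine; timings will obviously vary by machine):

using System;

public static class Demo
{
    public static void Main()
    {
        const int n = 10000000;

        // Start with the static factory (bottom element, Pop == null),
        // then push the rest through the interface.
        IImmutableStack<int> stack = ImmutableStack<int>.Push(0);
        for (int i = 1; i < n; i++)
            stack = stack.Push(i);

        // Walk back down; the bottom element is the one whose Pop is null.
        long sum = 0;
        for (IImmutableStack<int> s = stack; s != null; s = s.Pop)
            sum += s.Top;

        Console.WriteLine(sum); // sum of 0..n-1
    }
}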