I wrote some code using an Objective-C block, but the result confused me.
@interface MyTest : NSObject
@end
@implementation MyTest
- (void)test {
NSArray *
You seem to have found a case of type loss in relation to blocks which the compiler does not handle. But we need to start at the beginning...
The following relates to the use of blocks under ARC. Other scenarios (MRC, GC) are not considered.
That some blocks are created on the stack rather than the heap is an optimisation that could, technically, have been implemented in such a way that programmers never needed to be aware of it. However, when blocks were first introduced the decision was made that the optimisation would not be transparent to the user, hence the introduction of Block_copy(). Since that time both the specification and the compiler have evolved (and the compiler actually goes beyond the spec), so Block_copy() is not (by the specification) needed in places it used to be, and may not (as the compiler may exceed the spec) be needed in others.
Can the optimisation be implemented transparently?
Consider an assignment b = a, where a refers to a stack-allocated block.
The trivial answer is "yes" - move the block to the heap on every assignment. But that would negate the whole purpose of the optimisation - create a stack block, pass it to another method, which involves an assignment to the parameter...
The easy answer is "don't try" - introduce Block_copy() and let the programmer figure it out.
The better answer is "yes" - but do it smartly. In pseudo-code the cases are:
// stack allocated block in "a", consider assignment "b = a"
if ( b has a longer lifetime than a )
{
// case 1: assigning "up" the stack, to a global, into the heap
// a will die before b so we need to copy
b = heap copy of a;
}
else
{
if (b has a block type)
{
// case 2: assigning "down" the stack - the raison d'être for this optimisation
// b has a shorter (nested) lifetime and is explicitly typed as a block so
// can accept a stack allocated block (which will in turn be handled by this
// algorithm when it is used)
b = a;
}
else
{
// case 3: type loss - e.g. b has type id
// as the fact that the value is a block is being lost (in a static sense)
// the block must be moved to the heap
b = heap copy of a;
}
}
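The three cases of the pseudo-code can be sketched in Objective-C. This is an illustration only, under ARC; Demo, takesBlock: and takesId: are hypothetical names, and where the copies actually happen is up to the compiler:

```objectivec
#import <Foundation/Foundation.h>

typedef void (^Action)(void);

@interface Demo : NSObject
- (void)takesBlock:(Action)action; // parameter keeps the block type
- (void)takesId:(id)object;        // parameter loses the block type
@end

@implementation Demo
- (void)takesBlock:(Action)action { action(); }
- (void)takesId:(id)object { /* ... */ }

- (void)test
{
    int captured = 42;
    Action a = ^{ NSLog(@"%d", captured); }; // may be stack-allocated

    // Case 2: assigning "down" the stack to a block-typed parameter;
    // no copy is required, the stack block outlives the callee.
    [self takesBlock:a];

    // Case 3: type loss - the block is being passed as a plain id,
    // so it must be copied to the heap.
    [self takesId:a];

    // Case 1: assigning "up" to a variable with a longer lifetime;
    // the block must be copied to the heap before the frame dies.
    static Action stored;
    stored = a;
    stored();
}
@end
```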
At the introduction of blocks, cases 1 & 3 required the manual insertion of Block_copy(), and case 2 was where the optimisation paid off.
However, as explained in an earlier answer, the specification now covers case 1, while the compiler appeared to cover case 3 - though no documentation confirming that was known.
(BTW if you follow that link you will see it contains a link to an older question on this topic. The case described there is now handled automatically; it is an example of case 1 above.)
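As a concrete sketch of case 1 now being covered by the specification: returning a block "up" the stack once required an explicit Block_copy(), but under the current ARC specification the copy is inserted automatically. The makeAdder name here is hypothetical:

```objectivec
#import <Foundation/Foundation.h>

typedef NSInteger (^Adder)(NSInteger);

static Adder makeAdder(NSInteger amount)
{
    // The literal may be created on the stack; returning it is an
    // assignment "up" the stack (case 1), so ARC copies it to the
    // heap automatically - no Block_copy() needed.
    return ^(NSInteger value) { return value + amount; };
}

// Usage:
//   Adder addTen = makeAdder(10);
//   NSLog(@"%ld", (long)addTen(5)); // the returned block outlives makeAdder
```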
Phew, got all that? Let's get back to the examples in the question:
array1, array3 and array4 are all examples of case 3, where there is type loss. They are also the scenario tested in the previous question and found to be handled by the current compiler. That they work is not an accident or luck; the compiler inserts the required block copies explicitly. However, I don't know whether this is officially documented anywhere.
array2 is also an example of case 3, where there is type loss, but it is a variation not tested in the previous question - type loss by passing the block as part of a variable argument list. This case does not appear to be handled by the current compiler. So now we have a clue as to why the handling of case 3 is not documented - the handling is not complete.
Note that, as mentioned previously, it is possible to test what your compiler does - you can even incorporate some simple tests in your code that immediately abort the application if they fail. So you can, if you wish, write code based on what you know the compiler currently handles automatically (so far, everything considered except variadic functions), and that code will abort should you update the compiler and the replacement lack the support.
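Such a self-test might look like the following sketch. Note the assumption: it inspects the private class name "__NSMallocBlock__", which is an undocumented implementation detail and could change between releases, so treat this as a diagnostic aid rather than supported API:

```objectivec
#import <Foundation/Foundation.h>
#include <stdlib.h>

// Call early in start-up (e.g. from main) to verify the compiler still
// copies blocks to the heap on type loss (case 3).
static void checkBlockCopyOnTypeLoss(void)
{
    int captured = 1;
    id lost = ^{ return captured; }; // type loss: block assigned to an id

    // If the compiler inserted the copy, the block is now heap-allocated.
    // "__NSMallocBlock__" is a private class name - an assumption, not API.
    if (![NSStringFromClass([lost class]) isEqualToString:@"__NSMallocBlock__"])
    {
        NSLog(@"Compiler no longer copies blocks on type loss - aborting");
        abort();
    }
}
```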
Hope this was helpful and makes sense!