I recall getting a scolding for concatenating strings in Python once upon a time. I was told that it is more efficient to build up a list of strings and join them at the end.
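For illustration, here's a minimal sketch of the two patterns in Python (the row data is made up for the example; note that CPython can sometimes optimize in-place += on strings, but the list-and-join pattern is the one that's reliably linear):

```python
# Repeated concatenation: in the worst case each += copies the entire
# accumulated string, so total work grows quadratically with output size.
big = ""
for i in range(5000):
    big += f"row {i}\n"

# List-and-join: collect the pieces, then let join() compute the final
# size and fill the big string in a single pass.
pieces = [f"row {i}\n" for i in range(5000)]
big = "".join(pieces)
```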
" The problem is the pile of data as a whole. In his first situation, he had two types of data stockpiling: (1) a temporary string for each row in his CSV file, with fixed quotations and such things, and (2) the giant string containing everything. If each string is 1k and there are 5,000 rows...
Scenario One: build a big string from little strings
temporary strings: 5 megs (5,000k) massive string: 5 megs (5,000k) TOTAL: 10 megs (10,000k) Dave's improved script swapped the massive string for an array. He kept the temporary strings, but stored them in an array. The array will only end up costing 5000 * sizeof(VALUE) rather than the full size of each string. And generally, a VALUE is four bytes.
Scenario Two: storing strings in an array
strings: 5 megs (5,000k) massive array: 20k
Then, when we need to make a big string, we call join. Now we're up to ten megs and suddenly all those strings become temporary strings and they can all be released at once. It's a huge cost at the end, but it's a lot more efficient than a gradual crescendo that eats resources the whole time. "
http://viewsourcecode.org/why/hacking/theFullyUpturnedBin.html
^So it's actually better for memory/garbage-collection performance to delay the big concatenation until the very end, just like I was taught to do in Python. The reason being that you get one huge chunk of allocation at the end and then an instant release of all the temporary objects at once.
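As a rough sanity check on the article's arithmetic, here's a sketch in Python terms (the 1k-per-row and 5,000-row figures come from the quote; one caveat is that CPython list slots are pointer-sized, 8 bytes on a typical 64-bit build, rather than the 4-byte VALUE the article assumes, so the list overhead lands around 40k instead of 20k):

```python
import sys

# 5,000 row strings of ~1k each, as in the article's example
rows = ["x" * 1000 for _ in range(5000)]

strings_total = sum(sys.getsizeof(s) for s in rows)
list_overhead = sys.getsizeof(rows)

print(f"strings: ~{strings_total // 1000}k")  # ~5,200k (payload plus per-string header)
print(f"list:    ~{list_overhead // 1000}k")  # ~40k: just pointer-sized slots

# Joining at the end allocates the big string once...
big = "".join(rows)
print(f"joined:  ~{sys.getsizeof(big) // 1000}k")  # ~5,000k

# ...and dropping the list releases all the temporaries in one go.
del rows
```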