Given an array arr and an array of indices ind, I'd like to rearrange arr in-place to satisfy the given indices. For example ...
The index array defines a permutation. Every permutation consists of cycles, so we can rearrange the given array by following each cycle and replacing the array elements along the way.
The only problem is to follow each cycle exactly once. One possible way to do this is to process the array elements in order and, for each of them, inspect the cycle going through that element. If such a cycle touches at least one element with a lesser index, the elements along this cycle are already permuted. Otherwise we follow the cycle and reorder the elements.
function rearrange(values, indexes) {
    main_loop:
    for (var start = 0, len = indexes.length; start < len; start++) {
        // Inspect the cycle through `start`; if it touches a smaller index,
        // it has already been processed, so skip it.
        var next = indexes[start];
        for (; next != start; next = indexes[next])
            if (next < start) continue main_loop;

        // Follow the cycle, moving each value to its target position.
        next = start;
        var tmp = values[start];
        do {
            next = indexes[next];
            tmp = [values[next], values[next] = tmp][0]; // swap tmp and values[next]
        } while (next != start);
    }
    return values;
}
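For example, with the convention used here that values[i] moves to position indexes[i] (the permutation below is my own illustration, not taken from the original question):

rearrange(["a", "b", "c", "d"], [2, 3, 1, 0]); // returns ["d", "c", "a", "b"]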
This algorithm overwrites each element of the given array exactly once and does not mutate the index array (even temporarily). Its worst-case complexity is O(n²), but for random permutations its expected complexity is O(n log n) (as noted in the comments to a related answer).
This algorithm can be optimized a little. The most obvious optimization is to use a short bitset to keep information about several indexes ahead of the current position (whether they have already been processed or not). Using a single 32- or 64-bit word to implement this bitset should not violate the O(1) space requirement. Such an optimization gives a small but noticeable speed improvement, though it changes neither the worst-case nor the expected asymptotic complexity.
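As a rough sketch of this optimization (the name seenAhead and the fixed 32-position window are my own, not part of the original answer): bit j of a single word records whether position start + 1 + j has already been visited as part of an earlier cycle, so both the cycle-scanning loop and the outer loop can skip such positions without re-walking their cycles.

function rearrangeWithBitset(values, indexes) {
    var seenAhead = 0; // bit j set => position start + 1 + j was already visited
    main_loop:
    for (var start = 0, len = indexes.length; start < len; start++, seenAhead >>>= 1) {
        if (seenAhead & 1) continue; // the cycle through this position is already done
        var next = indexes[start];
        for (; next != start; next = indexes[next])
            if (next < start ||
                (next - start <= 32 && (seenAhead >>> (next - start - 1)) & 1))
                continue main_loop;
        next = start;
        var tmp = values[start];
        do {
            next = indexes[next];
            if (next > start && next - start <= 32) // remember positions just visited
                seenAhead |= 1 << (next - start - 1);
            tmp = [values[next], values[next] = tmp][0]; // swap
        } while (next != start);
    }
    return values;
}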
To optimize further, we could temporarily use the index array. If the elements of this array have at least one spare bit, we can use it to maintain a bitset that keeps track of all processed elements, which results in a simple linear-time algorithm. But I don't think this can be considered an O(1) space algorithm, so I will assume the index array has no spare bits.
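A minimal sketch of that linear-time variant, assuming every index fits into 31 bits so that bit 31 is spare (the name rearrangeLinear and the mask constant are illustrative); the marks are erased before returning, so the index array is only mutated temporarily:

function rearrangeLinear(values, indexes) {
    var MARK = 0x80000000; // assumed spare high-order bit of each index
    for (var start = 0, len = indexes.length; start < len; start++) {
        if (indexes[start] & MARK) continue; // element already placed by an earlier cycle
        var next = start;
        var tmp = values[start];
        do {
            var dest = indexes[next] & ~MARK; // strip the mark to get the real index
            indexes[next] |= MARK;            // mark this position as processed
            next = dest;
            tmp = [values[next], values[next] = tmp][0]; // place value, keep the evicted one
        } while (next != start);
    }
    for (var i = 0; i < indexes.length; i++) // restore the original index array
        indexes[i] &= ~MARK;
    return values;
}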
Still, the index array can give us some space (much larger than a single word) for a look-ahead bitset. Because this array defines a permutation, it contains much less information than an arbitrary array of the same size: Stirling's approximation for ln(n!) gives n ln n bits of information, while the array can store n log n bits. The difference between natural and binary logarithms (ln 2 ≈ 0.69) gives us about 30% of potential free space. We could also extract up to 1/64 ≈ 1.5% or 1/32 ≈ 3% free space if the size of the array is not exactly a power of two, or in other words, if the high-order bit is only partially used. (And these 1.5% could be much more valuable than the guaranteed 30%.)
The idea is to compress all indexes to the left of the current position (because they are never used by the algorithm), use part of the freed space between the compressed data and the current position to store a look-ahead bitset (to boost performance of the main algorithm), use the other part of the freed space to boost performance of the compression algorithm itself (otherwise we would need quadratic time for compression alone), and finally uncompress all the indexes back to their original form.
To compress the indexes we could use the factorial number system: scan the array of indexes to find how many of them are less than the current index, put the result into the compressed stream, and use the available free space to process several values at once.
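A hedged sketch of the underlying encoding (the textbook Lehmer code; the helper names lehmerEncode / lehmerDecode and the exact scanning order are illustrative, not necessarily the streaming order meant above): digit i counts how many later indexes are smaller than indexes[i], so digit i lies in [0, n - 1 - i] and the whole code packs into about log2(n!) bits in this mixed radix. The naive counting below is O(n²); the free space mentioned above is what would be used to process several values at once and speed it up.

function lehmerEncode(indexes) {
    var code = [];
    for (var i = 0; i < indexes.length; i++) {
        var smaller = 0;
        for (var j = i + 1; j < indexes.length; j++)
            if (indexes[j] < indexes[i]) smaller++;
        code.push(smaller); // digit i has radix n - i
    }
    return code;
}

function lehmerDecode(code) {
    var pool = []; // indexes not yet used, in increasing order
    for (var i = 0; i < code.length; i++) pool.push(i);
    var indexes = [];
    for (var i = 0; i < code.length; i++)
        indexes.push(pool.splice(code[i], 1)[0]); // take the code[i]-th remaining index
    return indexes;
}

For example, lehmerEncode([2, 3, 1, 0]) gives [2, 2, 1, 0], and lehmerDecode reverses it.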
The downside of this method is that most of the free space is produced when the algorithm reaches the end of the array, while this space is most needed at the beginning. As a result, the worst-case complexity is likely to be only slightly less than O(n²). It could also increase the expected complexity, were it not for this simple trick: use the original algorithm (without compression) while it is cheap enough, then switch to the "compressed" variant.
If the length of the array is not a power of 2 (so the high-order bit is only partially used), we could simply ignore the fact that the index array contains a permutation and pack all indexes as digits of a base-n number. This greatly reduces the worst-case asymptotic complexity and also speeds up the algorithm in the "average" case.
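As an illustration of the packing itself (the helpers packBaseN / unpackBaseN are hypothetical and use BigInt for simplicity instead of packing in place): k indexes, each in [0, n), combine into one integer of about k*log2(n) bits instead of k*ceil(log2(n)) bits, which is exactly where the partially unused high-order bit is recovered when n is not a power of two.

function packBaseN(indexes, n) {
    var packed = 0n;
    for (var i = indexes.length - 1; i >= 0; i--)
        packed = packed * BigInt(n) + BigInt(indexes[i]); // append one base-n digit
    return packed;
}

function unpackBaseN(packed, n, count) {
    var indexes = [];
    for (var i = 0; i < count; i++) {
        indexes.push(Number(packed % BigInt(n))); // extract the next digit
        packed /= BigInt(n);                      // BigInt division discards the remainder
    }
    return indexes;
}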