I am trying to do a circular left shift of an array by n positions using only a single 1D array. I can do it with two arrays, but I haven't figured out how to do it using only one.
I would shift it one element at a time in place, using a single temporary variable to hold the wrapped element while moving the other elements one place along. I would then repeat this n times to achieve n shifts.
import java.util.Arrays;

public static void main( String[] args ) {
    int[] array = {1,2,3,4,5,6,7,8};
    leftShift( array, 3 );
    System.out.println( Arrays.toString( array ) );
}

public static void leftShift(int[] array, int n) {
    for (int shift = 0; shift < n; shift++) {
        int first = array[0];                                      // remember the element that wraps around
        System.arraycopy( array, 1, array, 0, array.length - 1 );  // move everything else one place left
        array[array.length - 1] = first;                           // put the saved element at the end
    }
}
Output:
[4, 5, 6, 7, 8, 1, 2, 3]
Not too inefficient, as System.arraycopy() is highly optimized.
Another option is to save the first n elements in a small buffer, shift the rest of the array to the left, and copy the saved elements to the end:

int[] tmp = new int[n];                       // hold the first n elements so they are not overwritten
for (int i = 0; i < n; i++)
    tmp[i] = array[i];
for (int i = 0; i < array.length - n; i++)
    array[i] = array[i + n];                  // shift the remaining elements n places to the left
for (int i = 0; i < n; i++)
    array[array.length - n + i] = tmp[i];     // put the saved elements at the end
Here is a very easy algorithm that runs in O(n) time with O(1) extra space (the three-reversal trick):
public class ArrayRotator {
    private final int[] target;
    private final int length;

    public ArrayRotator(int[] seed) {
        this.target = seed;
        this.length = seed.length;
    }

    public void rotateInline(int numberOfPositions) {
        reverse(0, numberOfPositions - 1);       // step 1: reverse the first numberOfPositions elements
        reverse(numberOfPositions, length - 1);  // step 2: reverse the remaining elements
        reverse(0, length - 1);                  // step 3: reverse the whole array
    }

    private void reverse(int start, int end) {
        for (int i = start; i <= (start + end) / 2; i++) {
            swap(i, start + end - i);
        }
    }

    private void swap(int first, int second) {
        int temp = this.target[second];
        this.target[second] = this.target[first];
        this.target[first] = temp;
    }
}
For example, let's say the array is [1,2,3,4] and n is 2.

After step one, you end up with [2,1,3,4].
After step two, you end up with [2,1,4,3].
After step three, you end up with [3,4,1,2].
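Putting it together, a short usage sketch:

int[] array = {1, 2, 3, 4};
new ArrayRotator(array).rotateInline(2);                 // left shift by 2, in place
System.out.println(java.util.Arrays.toString(array));    // prints [3, 4, 1, 2]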
You could shift the data by iterating and copying; this will be O(n). An alternative approach is to create a List implementation that wraps your array and exposes it as being circularly shifted. This has the advantage that the actual shifting is done lazily, when get or iteration is performed.
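A minimal sketch of that lazy-view idea, assuming a hypothetical RotatedView class backed by a non-empty array and built on AbstractList:

import java.util.AbstractList;

class RotatedView extends AbstractList<Integer> {
    private final int[] data;
    private final int shift;

    RotatedView(int[] data, int shift) {
        this.data = data;
        // normalize the shift so it always falls in [0, data.length)
        this.shift = ((shift % data.length) + data.length) % data.length;
    }

    @Override
    public Integer get(int index) {
        // element index of the view is element (index + shift) mod length of the backing array
        return data[(index + shift) % data.length];
    }

    @Override
    public int size() {
        return data.length;
    }
}

Nothing is ever moved; new RotatedView(array, 3).get(0) simply reads array[3].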
There is actually a clever algorithm for that. We'll use A to denote the array, N to denote the array size, and n to denote the number of positions to shift. After the shift you would like the i-th element to move to the ((i + n) mod N)-th position, hence we can define the new positions by the following mapping:

f(j) := (j + n) mod N    (j = 0,...,N - 1)
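As a tiny illustration, the mapping written out in Java (the helper name f and its signature are just for exposition, not part of the implementation further down):

// target position of the element currently at index j, for an array of size N shifted by n
static int f(int j, int n, int N) {
    return (j + n) % N;
}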
The general idea behind this algorithm goes like this: we don't want to move elements around more than necessary, so ideally we would like to simply place each element in its proper (shifted) position on the first try. Say we start with the element at position i. What we want to do is move the element at position i to position f(i), but then we'll overwrite the element at that position, so we first need to save the element at position f(i) and then perform the shift. Once we have shifted the first element, we need to pick another element to shift. Since we want to conserve space, the obvious candidate is the element we just saved (the element that was at position f(i)). As before, we save the element at position f(f(i)) and then copy our saved element into that position. We keep repeating this process (going through positions i, f(i), f(f(i)), f(f(f(i))), ...) until we reach an element we have already shifted (which we are guaranteed to do, since there are finitely many positions). If we have passed through all of the elements, then we are done; if not, we select another element (one that hasn't been shifted yet), say at position j, and repeat the procedure (going through j, f(j), f(f(j)), f(f(f(j))), ...). That's it. But before we can implement such an algorithm, or even decide whether it is indeed a good algorithm, we need to answer a few questions:
1. Say we iterate through positions i, f(i), f(f(i)), .... How can we tell that we have reached a position that has already been shifted? Do we need to save every position we pass through? If we do, then this means we need an array of size N (to cover all positions), and we also need to perform a lookup every time we shift an element. This would make the algorithm terribly inefficient. Luckily this is not necessary, since the sequence i, f(i), f(f(i)), ... must wrap around on itself at position i, so we only need to wait until we reach that position. We can prove this assertion as follows: assume that the first repeated position we meet is not i. Then there must be two different positions that are mapped to the same position when shifted - a contradiction.
2. Say we have finished going through i, f(i), f(f(i)), ..., but there are still elements that remain unshifted (we can tell by counting how many elements we have shifted). How do we now find a position j that contains such an element? And once we have finished this second iteration (going through j, f(j), f(f(j)), ...), how do we find a third position k with an unshifted element? And so on. This too might suggest that we need an array to keep track of used/unused elements, and a lookup every time we need to find an unused element. However, we can again relax since, as we'll soon show, all of the starting positions (which we denoted by i, j and k) are adjacent. This means that if we start from position i, we'll next select i + 1, then i + 2, and so on.
3. Might the sequences i, f(i), f(f(i)), ... and j, f(j), f(f(j)), ... (where i and j are different) contain common elements? If they do, this would mean that the algorithm is useless, since it could shift the same element twice, causing it to end up in the wrong position. The answer (of course) is that they cannot contain common elements, and we will show why.
Let us denote d := gcd(N, n). For each of the integers i = 0,...,d - 1 we define the following set:

S(i) := { kd + i | k = 0,...,N/d - 1 }

It is easy to see that the sets S(0),...,S(d - 1) together cover the set {0,...,N - 1}. We also observe that dividing an element of S(i) by d leaves remainder i, while dividing an element from a different set S(j) by d leaves a different remainder (j). Thus, no two sets contain a common element. With this we have established that the sets S(0),...,S(d - 1) form a partition of {0,...,N - 1}. For example, with N = 8 and n = 6 we have d = 2, S(0) = {0, 2, 4, 6} and S(1) = {1, 3, 5, 7}.
Now, for every i = 0,...,d - 1, we define the set T(i) as i, f(i), f(f(i)), .... By the definition of f we can write T(i) as follows:

T(i) = { (kn + i) mod N | k is an integer }
We observe that if x is an element of T(i), then for some k we can write:

x = (kn + i) mod N = (k(n/d)d + i) mod N

Let us denote z := k(n/d) mod (N/d); then, multiplying by d, we have:

kn mod N = zd

and hence:

x = (kn + i) mod N = zd + i
Thus, x is also in S(i). Similarly, if we take some y from S(i), we observe that for some k:

y = kd + i

Since gcd(n/d, N/d) = 1, there exists a q such that q(n/d) mod (N/d) = 1 (a modular inverse), so multiplying by kd we can write:

kd = (kq)n mod N

and hence:

y = kd + i = ((kq)n + i) mod N

Thus, y is also in T(i). We conclude that T(i) = S(i).
From this fact we can easily show our previous assertions. First, since the sets form a partition of {0,...,N - 1}, the third assertion (no two sequences contain a common element) is satisfied. Second, by the definition of the sets S(i), we can take any group of d adjacent positions in {0,...,N - 1} and each of them will fall in a different set. This satisfies the second assertion.
What this means is that we can rotate all of the elements in positions 0, d, 2d, ..., (N/d - 1)d by simply replacing the element at position n mod N with the element at position 0, the element at position 2n mod N with the element at position n mod N, and so on, until we return to the element at position 0 (which we are assured will happen). Here is a pseudo-code example:
temp <- A[0]
j <- N - (n mod N)
while j != 0 do
    A[(j + n) mod N] <- A[j]
    j <- (j - n) mod N
A[n mod N] <- temp
This covers the entire set S(0). To cover the rest of the sets, namely S(1),...,S(d - 1), we simply iterate over each set the same way we did for the first:
for i <- 0 to d - 1
    temp <- A[i]
    j <- N - ((n - i) mod N)
    while j != i do
        A[(j + n) mod N] <- A[j]
        j <- (j - n) mod N
    A[(i + n) mod N] <- temp
Note that while we have two nested loops, each element is moved exactly once, and we use O(1) extra space. An example of an implementation in Java:
public static int gcd(int a, int b) {
    while (b != 0) {
        int c = a;
        a = b;
        b = c % a;
    }
    return a;
}

public static void shift_array(int[] A, int n) {
    int N = A.length;
    if (N == 0)
        return;                              // nothing to shift
    n %= N;
    if (n < 0)
        n = N + n;                           // normalize negative shifts into [0, N)
    if (n == 0)
        return;                              // nothing to move (also avoids d = N below)
    int d = gcd(N, n);
    for (int i = 0; i < d; i++) {            // one pass per cycle S(i)
        int temp = A[i];
        for (int j = (i - n + N) % N; j != i; j = (j - n + N) % N)
            A[(j + n) % N] = A[j];           // pull each element from its predecessor in the cycle
        A[(i + n) % N] = temp;               // drop the saved element into its final position
    }
}
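For example, you might call it like this; note that with the mapping above a positive n sends the element at position 0 to position n (a right rotation), so the left shift from the question corresponds to a negative n:

int[] array = {1, 2, 3, 4, 5, 6, 7, 8};
shift_array(array, -3);                                  // left shift by 3
System.out.println(java.util.Arrays.toString(array));    // prints [4, 5, 6, 7, 8, 1, 2, 3]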
I do believe that System.arraycopy would actually just take all your data from one array and put it into another one of the same length, just shifted. Anyway, thinking about that problem is quite an interesting task. The only solution I could think of right now is to shift it one element at a time. Without using another array it would look like this:
for (int i = 0; i < shift; i++) {
    int tmp = array[0];                      // remember the element that wraps around
    for (int j = 0; j < array.length - 1; j++)
        array[j] = array[j + 1];             // move every element one place to the left
    array[array.length - 1] = tmp;
}
For arrays with more than about 30 items it is more efficient to use this:
for (int i = 0; i < shift; i++) {
    int tmp = array[0];
    System.arraycopy( array, 1, array, 0, array.length - 1 );
    array[array.length - 1] = tmp;
}
But for large arrays with shifts close to the array size, as well as for short arrays with small shifts, this method wins the race:
int[] array2 = new int[shift];
for (int i = 0; i < shift; i++) {
    array2[i] = array[i];                                // save the first 'shift' elements
}
System.arraycopy(array, shift, array, 0, array.length - shift);
for (int i = array.length - shift; i < array.length; i++) {
    array[i] = array2[shift + i - array.length];         // append the saved elements at the end
}
I've tested this with a few array sizes and shifts. Here are the results for
int[] array = new int[100000];
int shift = 99999;
in nanoseconds:

1st method: 5663109208
2nd method: 4047735536
3rd method: 6085690

So you should really use the 3rd method. Hope that helps.
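If you want to reproduce such numbers, a minimal timing sketch could look like the following (shiftWithBuffer is a placeholder name for whichever of the three snippets above you wrap in a method; a single System.nanoTime() measurement is only a rough indicator):

int[] array = new int[100000];
int shift = 99999;

long start = System.nanoTime();
shiftWithBuffer(array, shift);            // placeholder: one of the three variants above, wrapped in a method
long elapsed = System.nanoTime() - start;
System.out.println("elapsed ns: " + elapsed);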