For example, given an integer array containing two consecutive sorted sequences whose beginning positions are 'b1' and 'b2', and additionally the position 'last', which…
I had an interview (with a very important company) a couple of hours ago and I was asked exactly that. Here is the answer in Java:
public static void main(String[] args) {
    int[] A = { 1, 3, 5, 6, 9 };
    int[] B = new int[12];
    B[0] = 3;
    B[1] = 6;
    B[2] = 8;
    B[3] = 10;
    B[4] = 11;
    B[5] = 13;
    B[6] = 15;
    mergeInB(A, B, 7);
    for (int n : B)
        System.out.print(n + " ");
}
/**
 * @param a the sorted array to merge into b
 * @param b the sorted destination array; it will be modified
 * @param j the number of valid elements in b (not b.length)
 */
public static void mergeInB(int[] a, int[] b, int j) {
    // Merge from the back: the largest remaining element of either array
    // goes into the last free slot of b, so no element of b is
    // overwritten before it has been consumed.
    int i = a.length - 1, k;
    j--;
    for (k = b.length - 1; k >= 0; k--) {
        if (i >= 0 && j >= 0) {
            if (a[i] > b[j]) {
                b[k] = a[i];
                i--;
            } else {
                b[k] = b[j];
                j--;
            }
        } else break;
    }
    // One of the arrays ran out: copy whatever remains of a;
    // any remaining prefix of b is already in place.
    while (i >= 0 && k >= 0) {
        b[k] = a[i];
        k--;
        i--;
    }
    while (j >= 0 && k >= 0) {
        b[k] = b[j];
        j--;
        k--;
    }
}
There are such things as true in-place merges, but they are not straightforward enough that anybody is going to reinvent them independently in the middle of an interview; papers describing a succession of fairly complex algorithms for this have been appearing for years. One is Practical In-Place Merging, by Huang and Langston, CACM March 1988. The starting idea is to divide the data of length n into blocks of size sqrt(n), and to use one block, filled with the largest elements of the data, as buffer space for merging the others. The introduction to that paper says:
"Given two sorted lists whose lengths sum to n, the obvious methods for merging in O(n) steps require a linear amount of extra memory as well. On the other hand, it is easy to merge in place using only a constant amount of additional space by heap-sorting, but at a cost of O(n log n) time"
Hence I claim that true merging in place can be done but is non-obvious.
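To make the buffer idea concrete, here is a minimal sketch (my own, not code from the paper) of the primitive that block merges are built on: merging two sorted runs into a region currently holding buffer elements, writing by swaps so that the buffer contents are displaced rather than destroyed. It assumes the output region does not overlap either run.
// Buffer-merge primitive: merge a[lo1..hi1) and a[lo2..hi2) into the
// region starting at 'out', which holds expendable buffer elements.
// Every write is a swap, so the buffer values migrate into the slots
// the runs vacate instead of being lost.
static void mergeIntoBuffer(int[] a, int lo1, int hi1, int lo2, int hi2, int out) {
    int i = lo1, j = lo2;
    while (i < hi1 && j < hi2)
        swap(a, out++, a[i] <= a[j] ? i++ : j++);
    while (i < hi1) swap(a, out++, i++);
    while (j < hi2) swap(a, out++, j++);
}

static void swap(int[] a, int x, int y) {
    int t = a[x]; a[x] = a[y]; a[y] = t;
}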
Here is an approach using O(1) extra memory and n-1 swap steps:
import java.util.Arrays;

/**
 * Created by deian on 2016-12-22.
 * We just need to track the two smallest numbers.
 */
public class Merge {

    public static void swap(int[] a, int i1, int i2) {
        int t = a[i1];
        a[i1] = a[i2];
        a[i2] = t;
    }

    public static void merge(int[] a) {
        // i1 and i2 always point to the smallest known numbers of the
        // two halves; it would work as well with two separate
        // m- and n-sized arrays.
        int i1 = 0;
        int i2 = a.length / 2;
        System.out.printf("    %s, i(%d,%d) \n", Arrays.toString(a), i1, i2);
        for (int di = 0; di < a.length - 1; di++) {
            int ni;
            int oi1 = i1; int oi2 = i2;
            if (a[i1] > a[i2]) {
                ni = i2; i2++;
                if (i2 >= a.length) { i2--; }
            } else {
                ni = i1; i1++;
                if (i1 >= i2) { i1 = di; }
            }
            if (di == i1) { i1 = ni; }
            swap(a, di, ni);
            System.out.printf("#%d: %s, i(%d,%d)s(%d>%d)i(%d,%d) \n",
                    di + 1, Arrays.toString(a), oi1, oi2, ni, di, i1, i2);
        }
        System.out.printf("    %s\n", Arrays.toString(a));
    }

    public static void main(String[] args) {
        // int[] a = new int[]{1, 3, 6, 8, -5, -2, 3, 8};
        // int[] a = new int[]{1, 3, 6, 8, -5, 2, 3, 8};
        // int[] a = new int[]{1, 5, 6, 8, -5, 2, 3, 4};
        // int[] a = new int[]{1, 5, 6, 8, -5, -2, -1, 4};
        // int[] a = new int[]{ 1, 2, 3, 4, 5, 6, 7, 8};
        // int[] a = new int[]{5, 6, 7, 8, 1, 2, 3, 4};
        int[] a = new int[]{1, 3, 5, 7, 2, 4, 6, 8};
        merge(a);
    }
}
Though it is not possible entirely in O(n) time, I have a proposition to do it faster than O(n^2). I use only O(1) space, the temp variable in my code. Each of the n elements of a1 triggers at most one O(m) bubble pass over a2, so the worst case is O(n*m), which is better than O((n+m)^2).
private static int[] mergeSortedArrays(int[] a1, int[] a2) {
    // a1 must be padded at the end with Integer.MIN_VALUE sentinels,
    // one per element of a2, to leave room for the merged result.
    int i = 0, j = 0;
    while (a1[i] != Integer.MIN_VALUE) {
        // j stays 0 in this loop: after each bubble pass, a2[0] is
        // again the smallest remaining element of a2.
        if (a1[i] > a2[j]) {
            // Swap the out-of-order element into a2 ...
            int temp = a1[i];
            a1[i] = a2[j];
            a2[j] = temp;
            // ... and restore a2's sorted order with one bubble pass.
            for (int k = 1; k < a2.length; k++) {
                if (a2[k - 1] > a2[k]) {
                    temp = a2[k - 1];
                    a2[k - 1] = a2[k];
                    a2[k] = temp;
                }
            }
        }
        i++;
    }
    // Copy the remaining elements of a2 into a1's sentinel slots.
    while (j < a2.length) {
        a1[i++] = a2[j++];
    }
    return a1;
}
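Note the calling convention the method relies on: a1 must be padded with Integer.MIN_VALUE sentinels, one per element of a2. A hypothetical driver (the array values are my own) would look like this:
public static void main(String[] args) {
    // Three sentinel slots at the end of a1 reserve room for a2's elements.
    int[] a1 = { 1, 5, 9, Integer.MIN_VALUE, Integer.MIN_VALUE, Integer.MIN_VALUE };
    int[] a2 = { 2, 4, 6 };
    System.out.println(java.util.Arrays.toString(mergeSortedArrays(a1, a2)));
    // prints: [1, 2, 4, 5, 6, 9]
}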
Kronrod's merge was the first published algorithm to do that. It goes roughly like this:
Split both parts of the array into blocks of size k = sqrt(n). Sort the blocks using their first elements as the basis for comparison; this takes sqrt(n)^2 = O(n) comparisons by selection sort. The key property of selection sort here is that it performs a constant number of block moves per position, so element moves total O(sqrt(n)) * sqrt(n) = O(n); only the comparison count is quadratic in the number of blocks.
After this phase, for each element A[i] in the array there are at most k-1 elements "wrongly sorted" below it, that is, elements at positions j < i such that A[j] > A[i]. These are (possibly) in the closest block below it that comes from the other merged part. Note that the first element of that block (and of all blocks below it) is already properly sorted relative to A[i], because the blocks were sorted on their first elements. This is why the second phase works, i.e. achieves the fully sorted array:
Now merge the first block with the second, then the second with the third, etc., using the last two blocks as temporary space for the output of the merge. This will scramble the contents of the last two blocks, but in the last phase they (together with the preceding block) can be sorted by selection sort in sqrt(n)^2 = O(n) time.
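To illustrate the first phase, here is a minimal sketch of my own (it assumes a.length is an exact multiple of the block size k) of sorting the blocks by their first elements with selection sort:
// Phase-1 sketch: selection-sort the k-sized blocks by their first elements.
// With k = sqrt(n) this costs (n/k)^2 = O(n) comparisons, and each of the
// O(n/k) block swaps moves k elements, for O(n) element moves in total.
static void sortBlocksByFirstElement(int[] a, int k) {
    int blocks = a.length / k;                    // assumes a.length % k == 0
    for (int b = 0; b < blocks - 1; b++) {
        int min = b;
        for (int c = b + 1; c < blocks; c++)      // find block with smallest head
            if (a[c * k] < a[min * k]) min = c;
        for (int t = 0; min != b && t < k; t++) { // swap blocks b and min
            int tmp = a[b * k + t];
            a[b * k + t] = a[min * k + t];
            a[min * k + t] = tmp;
        }
    }
}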
This is by no means a simple problem. It is possible, but it is rarely done in practice because it is so much more complicated than a standard merge using O(n) scratch space. Huang and Langston's paper has been around since the late 80s, though practical implementations didn't really surface until later. Earlier still, L. Trabb Pardo's 1977 paper predates Huang and Langston significantly, but I have been unable to find the text of that paper; only references to it abound.
An excellent later publication, Asymptotically efficient in-place merging (1995) by Geffert, Katajainen, and Pasanen, covers multiple algorithms and references Trabb Pardo's contributions to the subject.