A local maximum in a 2D array can be defined as a value such that all its 4 neighbours are less than or equal to it, i.e., for a[i][j] to be a local maximum, a[i-1][j] <= a[i][j], a[i+1][j] <= a[i][j], a[i][j-1] <= a[i][j] and a[i][j+1] <= a[i][j] (ignoring any neighbour that falls outside the array).
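For concreteness, that definition could be checked per cell with something like the following C sketch (the function name and the bounds handling are mine, not from the question):

    /* Returns 1 if a[i][j] is >= each of its existing 4 neighbours. */
    int is_local_max(int rows, int cols, int a[rows][cols], int i, int j)
    {
        if (i > 0        && a[i - 1][j] > a[i][j]) return 0;  /* up    */
        if (i + 1 < rows && a[i + 1][j] > a[i][j]) return 0;  /* down  */
        if (j > 0        && a[i][j - 1] > a[i][j]) return 0;  /* left  */
        if (j + 1 < cols && a[i][j + 1] > a[i][j]) return 0;  /* right */
        return 1;
    }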
Unless your array is square, your solution is actually O(I * J), not O(n^2). Strictly speaking, you only have N elements in your 2D array, so this solution is O(N). The only way it could be O(n^2) is if the array were square, with I = J = n.
Since the compare is <= rather than <, you still need to check the next element; any shortcuts you try will likely be processor specific.
The worst case is that every element is a local maximum because the entire array holds the same value. Thus you must visit every element once, making it O(N).
To improve real-world performance you would need to use pointers to access your array, as in most languages 2D arrays perform considerably worse than 1D arrays; a rough sketch of such a flat scan follows.
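As an illustration only (the function name is made up, and element (i, j) is assumed to be stored at a[i*cols + j] in a flat buffer):

    /* Brute-force O(rows*cols) scan over a flat 1D buffer, counting
       cells that are >= all of their existing 4 neighbours. */
    int count_local_maxima(const int *a, int rows, int cols)
    {
        int count = 0;
        for (int i = 0; i < rows; i++) {
            const int *row = a + i * cols;          /* single index per row */
            for (int j = 0; j < cols; j++) {
                int v = row[j];
                if (i > 0        && a[(i - 1) * cols + j] > v) continue; /* up    */
                if (i + 1 < rows && a[(i + 1) * cols + j] > v) continue; /* down  */
                if (j > 0        && row[j - 1] > v)            continue; /* left  */
                if (j + 1 < cols && row[j + 1] > v)            continue; /* right */
                count++;
            }
        }
        return count;
    }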
I believe this question can be answered using what are known as adversary arguments, which give you a lower bound on the number of comparisons.
And in my opinion, you are right: this would require at least n^2 comparisons.
The answers above just defend a mathematical model, which results from a simplistic view of the problem.
If you work as a programmer, you should know what a processor can do, and you should be aware that code runs in a thread. You should ask whether a task can be subdivided into smaller tasks, so that you can work it out multi-threaded and get close to a 1/total-threads execution speed-up.
The exact code depends on the language; a rough sketch in C is given below purely as an illustration.
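One possible way to split the scan across worker threads, assuming POSIX threads are available (the thread count, struct, and function names are mine, and the per-cell check is the simple 4-neighbour comparison on a flat buffer):

    #include <pthread.h>

    #define NUM_THREADS 4   /* arbitrary fixed thread count for the sketch */

    struct job { const int *a; int rows, cols, lo, hi; long found; };

    /* Each worker counts local maxima in its own band of rows [lo, hi). */
    static void *worker(void *arg)
    {
        struct job *jb = arg;
        const int *a = jb->a;
        int rows = jb->rows, cols = jb->cols;
        jb->found = 0;
        for (int i = jb->lo; i < jb->hi; i++) {
            for (int j = 0; j < cols; j++) {
                int v = a[i * cols + j];
                if (i > 0        && a[(i - 1) * cols + j] > v) continue;
                if (i + 1 < rows && a[(i + 1) * cols + j] > v) continue;
                if (j > 0        && a[i * cols + j - 1] > v)   continue;
                if (j + 1 < cols && a[i * cols + j + 1] > v)   continue;
                jb->found++;
            }
        }
        return NULL;
    }

    long count_local_maxima_parallel(const int *a, int rows, int cols)
    {
        pthread_t tid[NUM_THREADS];
        struct job jobs[NUM_THREADS];
        int chunk = (rows + NUM_THREADS - 1) / NUM_THREADS;

        for (int t = 0; t < NUM_THREADS; t++) {
            int lo = t * chunk;
            int hi = lo + chunk < rows ? lo + chunk : rows;
            jobs[t] = (struct job){ a, rows, cols, lo, hi, 0 };
            pthread_create(&tid[t], NULL, worker, &jobs[t]);
        }

        long total = 0;
        for (int t = 0; t < NUM_THREADS; t++) {
            pthread_join(tid[t], NULL);
            total += jobs[t].found;
        }
        return total;
    }

Note that the total work is unchanged; the bands are independent, so the threads never write to shared data and no locking is needed.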
You are given an array of size 4 x 4. You will need to do the following task.
Fill the array with some random numbers using the rand() function.
For each cell (i, j) you will need to find the maximum number among all the possible neighbours the cell (i, j) has.
Then put that maximum number in that cell (i, j); a sketch is given after the sample output below.
Sample Input:
177 -90 12 7
1 34 43 67
11 11 122 45
6 90 98 93
Sample Output:
34 177 67 67
177 177 122 122
90 122 98 122
90 98 122 122
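A minimal C sketch of this task, assuming (based on the sample) that the maximum is taken over the 8-neighbourhood, the cell itself is excluded, and the result is computed from the original values rather than in place:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <limits.h>

    #define N 4

    int main(void)
    {
        int a[N][N], out[N][N];

        /* Fill the array with random numbers. */
        srand((unsigned)time(NULL));
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = rand() % 200 - 100;

        /* For each cell, take the maximum over its existing neighbours
           (8-neighbourhood, excluding the cell itself), reading only the
           original values so earlier updates do not affect later ones. */
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                int best = INT_MIN;
                for (int di = -1; di <= 1; di++) {
                    for (int dj = -1; dj <= 1; dj++) {
                        if (di == 0 && dj == 0) continue;   /* skip the cell itself */
                        int ni = i + di, nj = j + dj;
                        if (ni < 0 || ni >= N || nj < 0 || nj >= N) continue;
                        if (a[ni][nj] > best) best = a[ni][nj];
                    }
                }
                out[i][j] = best;
            }
        }

        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++)
                printf("%d ", out[i][j]);
            printf("\n");
        }
        return 0;
    }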
Just a heads up: a local maximum or minimum of a 2D grid can be found in O(n lg n) time using a divide and conquer strategy. This is a slightly better time bound than the brute-force algorithm, which sits in the O(n^2) time complexity class. Furthermore, improvements can be made to the divide and conquer algorithm to obtain an O(n) algorithm for finding an extremum of the 2D grid.
Check out these notes on the theory behind such peak-picking algorithms (I am sure there are more materials out there); a rough sketch of the idea follows the link:
http://courses.csail.mit.edu/6.006/spring11/lectures/lec02.pdf
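A rough C sketch of the middle-column divide and conquer idea from those notes, which finds one peak (a cell no smaller than its 4 neighbours) in O(rows * log cols) time; the function name and interface are mine:

    /* Finds any one peak in an rows x cols grid by recursing on the column
       range [lo, hi]; e.g. int pi, pj; find_peak(4, 4, grid, 0, 3, &pi, &pj); */
    static void find_peak(int rows, int cols, int a[rows][cols],
                          int lo, int hi, int *pi, int *pj)
    {
        int mid = (lo + hi) / 2;

        /* Find the row of the maximum element in the middle column. */
        int best = 0;
        for (int i = 1; i < rows; i++)
            if (a[i][mid] > a[best][mid])
                best = i;

        /* If a horizontal neighbour is strictly larger, recurse towards it;
           otherwise the column maximum is already a peak. */
        if (mid > lo && a[best][mid - 1] > a[best][mid])
            find_peak(rows, cols, a, lo, mid - 1, pi, pj);
        else if (mid < hi && a[best][mid + 1] > a[best][mid])
            find_peak(rows, cols, a, mid + 1, hi, pi, pj);
        else {
            *pi = best;
            *pj = mid;
        }
    }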
I am pretty sure this cannot be solved in less than O(n^2) comparisons. Assume a chessboard-like 2D matrix where all the white squares are 1 and all the black squares are 0. It will have O(n^2) local maxima, and each one requires at least one comparison to verify.