I have a function named resize, which takes a source array and resizes it to a new width and height. The method I'm using is, I think, inefficient. I heard there's a better way to do it.
What you have is pretty understandable, and I think it IS an O(n^4) algorithm. Ouchies!
You can improve it slightly by pushing the i*scale and j*scale computations out of the inner two loops - they are invariant where they are now. The optimizer might be doing it for you, however. There are also some other similar optimizations.
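For illustration, here's a minimal sketch of that hoisting. The loop bounds (height, width) and the surrounding structure are my guesses, not your actual code; scale, s1, and s2 come from your snippet:

for (int i = 0; i < height; i++) {
    double iScaled = i * scale;      // invariant: compute once per row, not per pixel
    for (int j = 0; j < width; j++) {
        double jScaled = j * scale;  // invariant for anything nested deeper
        newImage[(int) (iScaled + s1)][(int) (jScaled + s2)] = source[i][j];
    }
}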
Regarding the error, run it with an input array that's got an even length (6x6) and again with an odd one (7x7). And with 6x7 and 7x6 while you're at it.
Based on your other question, it seems like you may be having trouble with mixed numeric types and the conversions between them. One way to tackle this, which can also make your code more debuggable and more readable to others not familiar with the problem space, is to split the problematic line into multiple lines, one minor operation per line, until you reach the final value. For example,
newImage[(int)(i*scale+s1)][(int)(j*scale+s2)] = source[i][j];
would become
int x = (int) (i * scale);
x += s1;
int y = (int) (j * scale);
y += s2;
newImage[x][y] = source[i][j];
Now, you can run the code in a debugger and look at the values of each item after each operation is performed. When a value doesn't match what you think it should be, look at it and figure out why.
Now, back to the suspected problem: I expect that you need to use doubles somewhere, not ints - in your other question you talked about scaling factors. Is the factor less than 1? If so, when it's converted to an int, it'll be 0, and you'll get the wrong result.
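As a tiny standalone illustration of that truncation (not your code, just the arithmetic):

double scale = 0.75;
System.out.println((int) scale);        // prints 0 -- the cast drops the fraction
System.out.println((int) (10 * scale)); // prints 7 -- 7.5 truncates toward zero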
So one scheme for changing the size of an image is to resample it (technically this is the only way; every variation is just a different kind of resampling function).
Cutting an image in half is super easy: you read every other pixel in each direction and load each of those pixels into the new, half-sized array. The hard part is making sure your bookkeeping is right.
static int[][] halfImage(int[][] orig){
    // The new image is half the size in each dimension; odd sizes truncate.
    int[][] hi = new int[orig.length/2][orig[0].length/2];
    // Read every other pixel in every other row. Bounding the loops by the
    // half-image keeps an odd-sized original from running off the end.
    for(int r = 0, newr = 0; newr < hi.length; r += 2, newr++){
        for(int c = 0, newc = 0; newc < hi[0].length; c += 2, newc++){
            hi[newr][newc] = orig[r][c];
        }
    }
    return hi;
}
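For example, here's a quick usage sketch with a made-up 4x4 grayscale image:

int[][] img = {
    { 1,  2,  3,  4},
    { 5,  6,  7,  8},
    { 9, 10, 11, 12},
    {13, 14, 15, 16}
};
int[][] half = halfImage(img); // {{1, 3}, {9, 11}}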
In the code above I'm indexing into the original image, reading every other pixel in every other row, starting at the 0th row and 0th column (assuming images are row-major here). Thus, r tells us which row in the original image we're looking at, and c tells us which column we're looking at. orig[r][c] gives us the "current" pixel.
Similarly, newr and newc index into the "half-image" matrix designated hi. For each increment of newr or newc, we increment r or c by 2, respectively. By doing this, we skip every other pixel as we iterate through the image.
Writing a generalized resize routine that doesn't operate on nice fractional quantities (like 1/2, 1/4, 1/8, etc.) is really pretty hard. You'd need to define a way to determine the value of a sub-pixel -- a point between pixels -- for more complicated factors, like 0.13243, for example. Defining such a function is, of course, easy to do: you can use a very naive linear interpolation principle, where, when you need the value between two pixels, you simply take the surrounding pixels, construct a line between their values, and then read the sub-pixel point from that line. More complicated versions of interpolation might be sinc-based interpolation...or one of many others in widely published literature.
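As a sketch of that naive linear interpolation in one dimension (the name samplePixel, the grayscale int[] row, and the clamping at the edge are all my own choices, not anything from the question):

static int samplePixel(int[] row, double x) {
    int left = (int) x;                             // pixel to the left of x
    int right = Math.min(left + 1, row.length - 1); // right neighbor, clamped at the edge
    double t = x - left;                            // fractional position between the two
    // Construct a line between the neighbors' values and read the point at x.
    return (int) Math.round(row[left] + t * (row[right] - row[left]));
}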
Blowing up the size of the image involves something a little different than what we've done here (if you do in fact have to write a generalized resize function, you might consider splitting it so upscaling and downscaling are handled differently). You need to somehow create more values than you had originally -- the interpolation functions above work for that too. A trivial method is simply to repeat a value between points until you have enough, with slight variations where you take so many values from the left and so many from the right for a particular position.
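As a rough sketch of that repeat-the-value idea for an integer factor (upscaleNearest and the int[][] representation mirror halfImage above; this is an illustration, not a full generalized resize):

static int[][] upscaleNearest(int[][] orig, int factor) {
    int[][] big = new int[orig.length * factor][orig[0].length * factor];
    for (int r = 0; r < big.length; r++) {
        for (int c = 0; c < big[0].length; c++) {
            // Integer division maps each new pixel back to its source pixel,
            // so each original value is repeated factor times in both directions.
            big[r][c] = orig[r / factor][c / factor];
        }
    }
    return big;
}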
What I'd encourage you to think about -- and since this is homework I'll stay away from the implementation -- is treating the scaling factor as something that governs how often you read from the original image and how often you write to the new one. When the scaling factor is less than 1, you generally sample from the original image to populate the new image and ignore some of the original image's pixels. When the scaling factor is greater than 1, you generally write more often to the new image and might need to read the same value several times from the old image. (I'm doing a poor job highlighting the difference here; hopefully you see the dualism I'm getting at.)