Question
I need to process (export subsections of) a large image (33600x19200) and I'm not sure how to start.
I've tried simply allocating an image using openFrameworks, but I got this error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
I'm not experienced with processing images this large. Where should I start?
Answer 1:
std::bad_alloc occurs because you don't have enough memory available to hold the whole image. At 33600x19200, a single 8-bit RGBA copy already needs 33600 × 19200 × 4 ≈ 2.6 GB of contiguous memory.
In order to work with something this big, you have to split it up, e.g. treat the picture as a set of subsections/subpictures with a well-defined size (e.g. 1000x1000) and process them one by one.
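As a rough illustration, this sketch walks the grid of 1000x1000 tiles and clamps the tiles on the right and bottom edges; processTile is a placeholder for whatever per-tile load/process/export step you actually need, not part of any library:

```cpp
#include <algorithm>
#include <cstdio>

const int IMAGE_W = 33600, IMAGE_H = 19200;
const int TILE = 1000;  // tile edge length, tune to your memory budget

// Placeholder: load only this region, process it, write it out.
void processTile(int x, int y, int w, int h) {
    std::printf("tile at (%d,%d), size %dx%d\n", x, y, w, h);
}

int main() {
    for (int y = 0; y < IMAGE_H; y += TILE)
        for (int x = 0; x < IMAGE_W; x += TILE)
            processTile(x, y,
                        std::min(TILE, IMAGE_W - x),
                        std::min(TILE, IMAGE_H - y));
    return 0;
}
```

This way each step only ever holds one 1000x1000 tile (about 4 MB of RGBA) in memory instead of the full 2.6 GB image.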
The other solution is simply to throw as much memory at your system as you can. If you have the money and the program only has to run on one specific machine, it's surely an option, but I think it's clear which of the two solutions is the better one ;)
Answer 2:
I maintain vips, an image processing library which is designed to work on large images. It will automatically load, process and write an image in sections using many CPU cores. You can write your algorithm in C, C++, Python, PHP and Ruby. It's fast, free and cross-platform.
There's a very simple benchmark on the vips website: load an image, crop 100 pixels off every edge, shrink by 10%, sharpen, and save. For a 5k x 5k RGB TIF on my laptop, ImageMagick takes 1.6s and 500MB of RAM, vips takes 0.4s and 25MB.
ImageMagick is great, of course: it's free, well-documented, produces high-quality output and is easy to use. vips is faster on large images.
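For a concrete picture, here is roughly what that benchmark pipeline looks like with the vips8 C++ API; the file names are placeholders, and this is a sketch rather than the exact benchmark code:

```cpp
// Compile with: g++ demo.cpp `pkg-config vips-cpp --cflags --libs`
#include <vips/vips8>

int main(int argc, char **argv) {
    if (VIPS_INIT(argv[0]))
        vips_error_exit(nullptr);

    // "sequential" access lets vips stream the file through in
    // sections rather than decoding the whole image into memory
    vips::VImage in = vips::VImage::new_from_file("big.tif",
        vips::VImage::option()->set("access", "sequential"));

    vips::VImage out = in
        .extract_area(100, 100,
                      in.width() - 200, in.height() - 200)  // crop 100 px off every edge
        .resize(0.9)                                        // shrink by 10%
        .sharpen();

    out.write_to_file("out.tif");

    vips_shutdown();
    return 0;
}
```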
Answer 3:
I ran into a problem like this once, and the GDAL library saved me. It provides GDALDataset::RasterIO, which can read/write any part of an image at any resolution. I didn't find an equivalent in openFrameworks; maybe someone could provide one.
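For illustration, a minimal sketch of reading one subsection with GDALDataset::RasterIO; the file name, window offset, and window size are made-up values:

```cpp
// Link with: -lgdal
#include <gdal_priv.h>
#include <vector>

int main() {
    GDALAllRegister();

    GDALDataset *ds = static_cast<GDALDataset *>(
        GDALOpen("big.tif", GA_ReadOnly));
    if (ds == nullptr)
        return 1;

    // Read a 1000x1000 window at pixel offset (2000, 3000) from band 1.
    // Requesting a buffer smaller than the window (nBufXSize/nBufYSize)
    // makes GDAL resample on the fly: the "any resolution" part.
    const int x = 2000, y = 3000, w = 1000, h = 1000;
    int band = 1;
    std::vector<GByte> buf(static_cast<size_t>(w) * h);

    CPLErr err = ds->RasterIO(GF_Read, x, y, w, h,
                              buf.data(), w, h, GDT_Byte,
                              1, &band, 0, 0, 0);
    if (err == CE_None) {
        // ... process or export buf here ...
    }

    GDALClose(ds);
    return 0;
}
```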
Source: https://stackoverflow.com/questions/21082169/how-to-process-large-images