We have a rectangular area with translucent walls and a few light sources. We are considering only the top view, so it is a 2D problem. We need to find the approximate lighting (signal strength) at each point of the area.
We need to make the algorithm really fast; the brute force method was just too slow for our purposes. You can assume that all walls attenuate light by the same amount; even a constant amount of attenuation per wall is acceptable.
The area would be at most 1000x1000, and there would not be more than 100 light sources. The light sources can have a range of approx. 50-100 units (they are not infinite). Faster but approximate algorithms are welcome.
Thanks in advance!
What I tried was basically the brute force method: comparing each sample point with each wall and light source to determine its luminosity. Obviously, it is O(n^3) and unacceptably slow.
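Roughly, the loop looks like this (a simplified Python sketch; the light tuple layout, the linear falloff, and the intersection test are just placeholders for illustration, not my actual code):

    def _ccw(a, b, c):
        # Signed area test used for the segment intersection check.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segment_intersects(wall, p, q):
        # wall = ((x1, y1), (x2, y2)); True if the p-q line of sight crosses it.
        a, b = wall
        return _ccw(a, b, p) * _ccw(a, b, q) < 0 and _ccw(p, q, a) * _ccw(p, q, b) < 0

    def brute_force(width, height, lights, walls, wall_attenuation=0.5):
        # lights: list of (lx, ly, radius, power); walls: list of point pairs.
        luminosity = [[0.0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                for (lx, ly, radius, power) in lights:
                    d = ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5
                    if d > radius:
                        continue                       # lights have a finite range
                    crossings = sum(1 for w in walls
                                    if segment_intersects(w, (x, y), (lx, ly)))
                    falloff = 1.0 - d / radius         # simple linear falloff
                    luminosity[y][x] += power * falloff * wall_attenuation ** crossings
        return luminosity

Every pixel tests every light, and every pixel-light pair tests every wall, which is where the cubic cost comes from.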
By time I did not mean any specific limit, but it would be nice to do the whole image within 100 ms or faster. Remember, I do not require accuracy as much as speed.
Just a stab in the dark: have you looked into (GPU-accelerated) photon mapping?
You can reduce the running time of such an algorithm quadratically (e.g. skip every 2nd x and y) while losing quality only linearly (the image ends up at half the diameter, and you resample it back to the original size).
Use a bitmap to store luminosity, and render all your points, lines, etc. onto a smaller bitmap (its size divided by an approximation factor), dividing every coordinate by the approximation factor too. Then apply a Gaussian blur and resample back to the desired size, and finally extract the luminosity from the pixels.
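A minimal sketch of that pipeline, assuming numpy/scipy are available and that `render_small` is a hypothetical routine (your own renderer) that draws the lights and walls into the reduced-size array:

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def approximate_lightmap(width, height, lights, walls, factor=5):
        # Render at reduced resolution, blur, then upscale back to full size.
        small_w, small_h = width // factor, height // factor
        # Divide every coordinate (and radius) by the approximation factor too.
        small_lights = [(x / factor, y / factor, r / factor, p)
                        for (x, y, r, p) in lights]
        small_walls = [((x1 / factor, y1 / factor), (x2 / factor, y2 / factor))
                       for ((x1, y1), (x2, y2)) in walls]
        small = np.asarray(render_small(small_w, small_h, small_lights, small_walls),
                           dtype=float)               # hypothetical low-res renderer
        small = gaussian_filter(small, sigma=1.0)     # smooth the blocky low-res result
        return zoom(small, factor, order=1)           # bilinear resample to desired size

With a factor of 5 the expensive rendering step touches 25 times fewer pixels, and the blur plus resample hides most of the blockiness.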
I uploaded a video on YouTube showing a run of a test I coded to check whether this could work. It seems to do something similar to what you require ('almost real time' on a single thread):
Of course, here walls are lines with transparency properties, and light does not diffuse as you would expect but linearly; still, the approximation should be usable by your algorithm, or you can adapt this one if its speed is enough. And beware: the code is written very badly because I was merely experimenting.
Here luminosity is normalized; you would probably embed luminosity into the pixels on a logarithmic scale instead, so you can fit a greater range and preserve the original values.
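For example (just an illustrative sketch, not the project's code, and MAX_LUM is an assumed upper bound), you could log-encode luminosity into a byte and decode it on readback:

    import math

    MAX_LUM = 1000.0  # assumed maximum luminosity you expect to store

    def encode_pixel(lum):
        # Map luminosity in [0, MAX_LUM] onto a byte using a logarithmic scale.
        return int(round(255 * math.log1p(lum) / math.log1p(MAX_LUM)))

    def decode_pixel(byte):
        # Invert the encoding (approximately, because of quantisation).
        return math.expm1(byte / 255 * math.log1p(MAX_LUM))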
In case you can use it for something, here is the project:
If you optimize and thread it, 100 ms for a 1000x1000 image with 100 lights of diameter 300 and about 20 walls of length 200, with an approximation factor of 5, is probably achievable.
Source: https://stackoverflow.com/questions/5909031/fast-2d-illumination-algorithm