A* Algorithm for very large graphs, any thoughts on caching shortcuts?

鱼传尺愫 2021-01-30 12:37

I'm writing a courier/logistics simulation on OpenStreetMap maps and have realised that the basic A* algorithm as pictured below is not going to be fast enough for large maps (…)

9 Answers
  •  囚心锁ツ
    2021-01-30 12:58

    There are dozens of A* variations that may fit the bill here. You have to think about your use cases, though.

    • Are you memory- (and also cache-) constrained?
    • Can you parallelize the search?
    • Will your algorithm implementation be used in one location only (e.g. Greater London and not NYC or Mumbai or wherever)?

    There's no way for us to know all the details that you and your employer are privy to, so your first stop should be CiteSeer or Google Scholar: look for papers that treat pathfinding under the same general set of constraints as yours.

    Then downselect to three or four algorithms, prototype them, test how they scale up, and fine-tune them. Bear in mind that you can combine several algorithms in the same grand pathfinding routine based on the distance between the points, the time remaining, or any other factors.
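
    For that prototyping it helps to keep the heuristic pluggable, so you can swap variants without touching the search itself. Here is a minimal sketch in Python (assuming that, or something similarly high-level, is what you are prototyping in); `neighbors` is a hypothetical callable yielding (neighbor, edge_cost) pairs for a node:

        import heapq
        from itertools import count

        def a_star(neighbors, start, goal, heuristic):
            """Textbook A*; `heuristic(n, goal)` must never overestimate
            the true remaining cost, or the returned path may be suboptimal."""
            ties = count()  # tie-breaker so the heap never has to compare nodes
            open_heap = [(heuristic(start, goal), next(ties), 0.0, start)]
            g_cost = {start: 0.0}
            parent = {start: None}
            while open_heap:
                _, _, g, node = heapq.heappop(open_heap)
                if node == goal:
                    path = []           # reconstruct by walking parents back
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                if g > g_cost.get(node, float("inf")):
                    continue  # stale heap entry, already relaxed via a cheaper path
                for nxt, cost in neighbors(node):
                    new_g = g + cost
                    if new_g < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = new_g
                        parent[nxt] = node
                        heapq.heappush(
                            open_heap,
                            (new_g + heuristic(nxt, goal), next(ties), new_g, nxt))
            return None  # goal unreachable

    Trying a different heuristic (e.g. a landmark-based one, see below) then only means passing a different function.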

    As has already been said, given the small scale of your target area, dropping Haversine is probably your first step, saving precious time on expensive trig evaluations. NOTE: I do not recommend using Euclidean distance on raw lat/lon coordinates - reproject your map into, e.g., a transverse Mercator projection near the center and use Cartesian coordinates in yards or meters!
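
    As a sketch of that reprojection idea (assuming Python and the pyproj library; EPSG:32630, UTM zone 30N, happens to cover Greater London and is only an example - pick whatever projection suits your own map's center):

        from math import hypot
        from pyproj import Transformer  # assumption: pyproj is installed

        # Hypothetical node store: {node_id: (lon, lat)} in WGS84.
        nodes_lon_lat = {1: (-0.1276, 51.5072), 2: (-0.0754, 51.5055)}

        # Project every node once, up front, into meters.
        to_meters = Transformer.from_crs("EPSG:4326", "EPSG:32630", always_xy=True)
        xy = {nid: to_meters.transform(lon, lat)
              for nid, (lon, lat) in nodes_lon_lat.items()}

        def euclidean(a, b):
            """Straight-line heuristic in meters - no trig per call."""
            (ax, ay), (bx, by) = xy[a], xy[b]
            return hypot(ax - bx, ay - by)

    With edge costs also expressed in meters, this stays admissible while being far cheaper than Haversine.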

    Precomputing is the second, and changing your language or compiler may be an obvious third idea (switch to C or C++ - see https://benchmarksgame.alioth.debian.org/ for details).
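
    On precomputing: one well-known scheme is ALT (A*, Landmarks, Triangle inequality) - run Dijkstra once from a handful of landmark nodes, cache the distance tables, and use the triangle inequality as a heuristic that is typically much tighter than straight-line distance. A sketch under the same assumptions as above (undirected graph, orderable integer node ids; `landmarks` and `all_nodes` are hypothetical):

        import heapq

        def dijkstra_all(neighbors, source, nodes):
            """One-to-all shortest-path distances from `source` (plain Dijkstra)."""
            dist = {n: float("inf") for n in nodes}
            dist[source] = 0.0
            heap = [(0.0, source)]
            while heap:
                d, node = heapq.heappop(heap)
                if d > dist[node]:
                    continue  # stale entry
                for nxt, cost in neighbors(node):
                    nd = d + cost
                    if nd < dist[nxt]:
                        dist[nxt] = nd
                        heapq.heappush(heap, (nd, nxt))
            return dist

        # Done once per map; periphery nodes tend to make better landmarks.
        landmark_dist = [dijkstra_all(neighbors, lm, all_nodes) for lm in landmarks]

        def alt_heuristic(n, goal):
            """|d(L, goal) - d(L, n)| <= d(n, goal) for every landmark L."""
            return max(abs(t[goal] - t[n]) for t in landmark_dist)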

    Extra optimization steps may include getting rid of dynamic memory allocation and using efficient spatial indexing to search among the nodes (think R-tree and its derivatives/alternatives).
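
    For the indexing part, one option in Python is the `rtree` package (bindings to libspatialindex) - for example, to snap an arbitrary coordinate such as a courier's GPS fix to the nearest graph node, reusing the projected `xy` dict from the earlier sketch:

        from rtree import index  # assumption: the `rtree` package is installed

        # For point data, min == max on each axis of the bounding box.
        idx = index.Index()
        for nid, (x, y) in xy.items():
            idx.insert(nid, (x, y, x, y))

        def nearest_node(x, y):
            """Return the id of the graph node closest to (x, y)."""
            return next(idx.nearest((x, y, x, y), 1))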
