Keep PostgreSQL from sometimes choosing a bad query plan

梦毁少年i · 2020-11-22 11:51

I have a strange problem with PostgreSQL performance for a query, using PostgreSQL 8.4.9. This query selects a set of points within a 3D volume, using a LEFT OUTER JOIN…

5 answers
  • 2020-11-22 12:00

    +1 for tuning statistics target & doing ANALYZE. And for PostGIS (for OP).

    Also, not quite related to the original question, but relevant if anyone gets here looking for how to deal, in general, with inaccurate planner row-count estimates in complex queries leading to undesired plans: one option is to wrap part of the query in a function and set its ROWS option to something closer to the expected count. I've never done that myself, but it apparently should work.
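    As a rough sketch of that idea (the function name and range parameters here are made up; only the treenode table and its location column come from the question):

    ```sql
    -- Hypothetical example: wrap the selective part of the query in a
    -- function and tell the planner roughly how many rows to expect,
    -- overriding its own (inaccurate) estimate.
    CREATE FUNCTION points_in_x_range(x0 integer, x1 integer)
    RETURNS SETOF treenode
    LANGUAGE sql STABLE
    ROWS 50  -- planner assumes ~50 result rows for calls to this function
    AS $$
      SELECT * FROM treenode
      WHERE (location).x >= x0 AND (location).x <= x1;
    $$;
    ```

    Joining against points_in_x_range(8000, 12736) instead of the raw predicate then gives the planner the row count you set, for better or worse.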

    There are also row-estimation directives in pg_hint_plan. I would not advise planner hinting in general, but adjusting a row estimate is a softer option than forcing a join method outright.

    And finally, to enforce a nested loop scan, sometimes one might do a LATERAL JOIN with LIMIT N or just OFFSET 0 inside the subquery. That will give you what you want. But note it's a very rough trick: at some point it WILL lead to bad performance if the conditions change, because of table growth or just a different data distribution. Still, this might be a good option just to get urgent relief for a legacy system.
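    A minimal sketch of the OFFSET 0 trick (the parent table and parent_id column are invented for illustration):

    ```sql
    -- OFFSET 0 acts as an optimization fence: the subquery is planned on
    -- its own and is not flattened into the outer query, which typically
    -- forces a nested-loop execution of the lateral subquery per outer row.
    SELECT p.id, t.*
    FROM parent p
    CROSS JOIN LATERAL (
        SELECT *
        FROM treenode t
        WHERE t.parent_id = p.id
        OFFSET 0  -- prevents the planner from merging this subquery
    ) t;
    ```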

  • 2020-11-22 12:03

    I'm skeptical that this has anything to do with bad statistics unless you consider the combination of database statistics and your custom data type.

    My guess is that PostgreSQL is picking a nested loop join because it looks at the predicates (treenode.location).x >= 8000 AND (treenode.location).x <= (8000 + 4736) and does something funky in the arithmetic of your comparison. A nested loop is typically going to be used when you have a small amount of data in the inner side of the join.

    But, once you switch the constant to 10736 you get a different plan. It's always possible that the plan is sufficiently complex that the Genetic Query Optimization (GEQO) is kicking in and you're seeing the side effects of non-deterministic plan building. There are enough discrepancies in the order of evaluation in the queries to make me think that's what's going on.

    One option would be to examine using a parameterized/prepared statement for this instead of ad hoc code. Since you're working in a 3-dimensional space, you might also want to consider using PostGIS. While it might be overkill, it may also be able to provide the performance you need to get these queries running properly.
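    A sketch of the prepared-statement approach, using the predicates from the question (the statement name is made up):

    ```sql
    -- Hypothetical sketch: with a prepared statement, the constants become
    -- parameters, so the plan is built once and no longer flips depending
    -- on the particular literal values supplied.
    PREPARE point_range(integer, integer) AS
      SELECT * FROM treenode
      WHERE (treenode.location).x >= $1
        AND (treenode.location).x <= $1 + $2;

    EXECUTE point_range(8000, 4736);
    ```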

    While forcing planner behavior isn't the best choice, sometimes we do end up making better decisions than the software.

  • 2020-11-22 12:19

    What Erwin said about the statistics. Also:

    ORDER BY parentid DESC, id, z_diff
    

    Sorting on

    parentid DESC, id, z
    

    might give the optimiser a bit more room to shuffle. (I don't think it will matter much, since it is the last term and the sort is not that expensive, but you could give it a try.)

  • 2020-11-22 12:21

    I am not positive it is the source of your problem, but it looks like some changes were made to the Postgres query planner between versions 8.4.8 and 8.4.9. You could try using the older version and see if it makes a difference.

    http://postgresql.1045698.n5.nabble.com/BUG-6275-Horrible-performance-regression-td4944891.html

    Don't forget to reanalyze your tables if you change the version.

  • 2020-11-22 12:22

    If the query planner makes bad decisions it's mostly one of two things:

    1. The statistics are inaccurate.

    Do you run ANALYZE enough? Also popular in its combined form VACUUM ANALYZE. If autovacuum is on (which is the default in modern-day Postgres), ANALYZE is run automatically. But consider:

    • Are regular VACUUM ANALYZE still recommended under 9.1?

    (Top two answers still apply for Postgres 12.)

    If your table is big and data distribution is irregular, raising the default_statistics_target may help. Or rather, just set the statistics target for relevant columns (those in WHERE or JOIN clauses of your queries, basically):

    ALTER TABLE ... ALTER COLUMN ... SET STATISTICS 400;  -- calibrate number
    

    The target can be set in the range 0 to 10000.

    Run ANALYZE again after that (on relevant tables).
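    To check whether the estimates actually improved, compare the planner's estimated row counts with reality (a sketch; the table and predicate are taken from the question above):

    ```sql
    -- In the output, compare "rows=<estimate>" on each plan node with
    -- "actual ... rows=<real>"; a large mismatch means the statistics
    -- are still off and the target (or the query) needs more work.
    EXPLAIN ANALYZE
    SELECT * FROM treenode
    WHERE (location).x >= 8000 AND (location).x <= 12736;
    ```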

    2. The cost settings for planner estimates are off.

    Read the chapter Planner Cost Constants in the manual.

    Look at the chapters default_statistics_target and random_page_cost on this generally helpful PostgreSQL Wiki page.

    There are many other possible reasons, but these are the most common ones by far.
