How to automatically kill slow MongoDB queries?

执笔经年 2021-01-13 04:54

Is there a way that I can protect my app against slow queries in MongoDB? My application has tons of possible filters, and I'm monitoring all these queries, but at th…

4 Answers
  • 2021-01-13 05:41

    Right now, with version 2.6, this is possible. Their press release includes the following:

    with MaxTimeMS operators and developers can specify auto-cancellation of queries, providing better control of resource utilization;

    Therefore, with maxTimeMS you can specify how long you allow your query to execute. For example, I do not want a specific query to run for more than 200 ms:

    db.collection.find({
      // my query
    }).maxTimeMS(200)
    

    What is cool about this, is that you can specify different timeouts for different operations.

    To answer the OP's question in the comment: there is no global setting for this. One reason is that different queries can tolerate different maximum times. For example, you can have a query that finds userInfo by its ID. This is a very common operation and should run super fast (otherwise we are doing something wrong), so we cannot tolerate it running longer than 200 ms.

    But we may also have some aggregation query that we run once a day. For that operation it is OK to run for 4 seconds, but we cannot tolerate it running longer than 10 seconds, so we can pass 10000 as maxTimeMS.
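    One way to keep such per-operation budgets in one place is a small timeout map. This is only a sketch: the operation names and millisecond values here are made-up examples, not anything from the answer above.

    ```javascript
    // Hypothetical per-operation maxTimeMS budgets (milliseconds)
    const QUERY_BUDGETS_MS = {
      userById: 200,      // interactive lookup, must be fast
      dailyReport: 10000, // scheduled aggregation, allowed to be slow
    };

    // Fall back to a conservative default for unlisted operations
    function maxTimeFor(opName, defaultMs = 1000) {
      return QUERY_BUDGETS_MS[opName] ?? defaultMs;
    }

    // Usage in the mongo shell / driver (not executed here):
    // db.users.find({ _id: userId }).maxTimeMS(maxTimeFor("userById"));
    // db.events.aggregate(pipeline, { maxTimeMS: maxTimeFor("dailyReport") });
    ```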

  • 2021-01-13 05:44

    I guess there is currently no support for killing a query by passing a time argument. However, during development you can set the profiler level to 2. It will log every query that is issued, and from there you can see how long each query takes. I know it's not exactly what you wanted, but it helps in getting insight into which queries are fat; then, in your app logic, you can add some way to gracefully handle the cases where those queries might originate. I usually go with this approach, and it helps.
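    A minimal sketch of that workflow: enable the profiler in the shell, then rank the recorded operations by their "millis" field. The sample documents below are invented for illustration; only the ranking helper is real code.

    ```javascript
    // In the mongo shell (not executed here):
    // db.setProfilingLevel(2);                         // profile every operation
    // const ops = db.system.profile.find().toArray();  // read the recorded ops

    // Given an array of system.profile documents, return the n slowest,
    // using the "millis" field the profiler records for each operation.
    function slowestOps(profileDocs, n) {
      return profileDocs
        .slice()                              // don't mutate the input
        .sort((a, b) => b.millis - a.millis)
        .slice(0, n);
    }

    // Example with made-up profiler documents:
    const sample = [
      { op: "query",  ns: "app.users",  millis: 12 },
      { op: "query",  ns: "app.orders", millis: 840 },
      { op: "update", ns: "app.users",  millis: 95 },
    ];
    // slowestOps(sample, 2) puts the 840 ms orders query first
    ```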

  • 2021-01-13 05:48

    The 'notablescan' option, as @ghik mentioned, will prevent you from running queries that are slow because they do not use an index. However, that option is global to the server and not appropriate for a production environment. It also won't protect you from any source of slow queries other than table scans.

    Unfortunately, I don't think there is a way to directly do what you want right now. There is a JIRA ticket proposing the addition of a $maxTime or $maxScan query parameter, which sounds like it would help you, so please vote for it: https://jira.mongodb.org/browse/SERVER-2212.

  • 2021-01-13 05:52

    There are options available on the client side (maxTimeMS starting in 2.6 release).

    On the server side, there is no appealing global option, because it would impact all databases and all operations, including ones the system needs to keep long-running for internal purposes (for example, tailing the oplog for replication). In addition, some of your queries may be long running by design, and that can be fine.

    The correct way to solve this is to monitor currently running queries via a script and kill the ones that are long running and user/client initiated; you can then build in exceptions for queries that are long running by design, or use different thresholds for different queries/collections/etc.

    Use the db.currentOp() method (in the shell) to see all currently running operations; the field "secs_running" indicates how long an operation has been running. Be careful not to kill long-running operations that were not initiated by your application/client: they may be necessary system operations, such as chunk migration in a sharded cluster (to name just one example).
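    A minimal sketch of such a watchdog. The 30-second threshold and the namespace allow-list are assumptions for illustration, not recommendations; only the filtering predicate is plain code, and the actual currentOp()/killOp() calls are left as commented shell usage.

    ```javascript
    // Decide whether a currentOp() entry should be killed: user-initiated,
    // running past the threshold, and not on an exempt namespace.
    function shouldKill(op, thresholdSecs, exemptNs) {
      if (!op.secs_running || op.secs_running < thresholdSecs) return false;
      if (!op.ns || exemptNs.includes(op.ns)) return false; // long-running by design
      // Skip likely system namespaces (e.g. the oplog in "local")
      if (op.ns.startsWith("local.") || op.ns.startsWith("admin.")) return false;
      return true;
    }

    // In the mongo shell (not executed here):
    // db.currentOp().inprog
    //   .filter(op => shouldKill(op, 30, ["app.reports"]))
    //   .forEach(op => db.killOp(op.opid));
    ```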
