Question
I have just started using a PHP profiler to find which files in a PHP script cause slow MySQL queries. Someone suggested using Xdebug for this: take the timestamp from the slow-log entry and compare it with the files the profiler shows executing at that same timestamp.
I have read the Xdebug documentation but can't find an explanation of how to do this.
Can anyone enlighten me?
I'm using PHP 7.0 on Debian 9.
My slow-query-log-file entries:
# Thread_id: 222244 Schema: user QC_hit: No
# Query_time: 51.019708 Lock_time: 0.000119 Rows_sent: 1 Rows_examined: 13295012
# Rows_affected: 0
SET timestamp=1559388099;
SELECT (COUNT(*)) AS `count`
FROM statistics Statistics WHERE (id >= 1 AND ad_type <> 3);
Edit:
This is not about counting rows in a SELECT statement; it's about how to track down which application files cause the slow queries to occur.
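One way to make that timestamp correlation practical is Xdebug's function tracing: if each trace file carries the request's Unix timestamp in its name, it can be matched against the `SET timestamp=...` line in the slow query log. A php.ini sketch, assuming Xdebug 2.x (the series that supports PHP 7.0); the output directory and file-name pattern here are examples, not required values:

```ini
; Sketch: correlate slow-log timestamps with Xdebug function traces.
; Assumes Xdebug 2.x; adjust paths to your setup.
zend_extension = xdebug.so

; Write a function trace for every request.
xdebug.auto_trace = 1
xdebug.trace_output_dir = /tmp/xdebug

; %t embeds the Unix timestamp and %R the REQUEST_URI in the file name,
; so a trace can be matched against "SET timestamp=1559388099" in the log.
xdebug.trace_output_name = trace.%t.%R
xdebug.collect_params = 4
```

Note that auto-tracing every request adds significant overhead, so this is better enabled briefly on a staging or low-traffic system while hunting for the offending file.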
Answer 1:
The most efficient way to count rows in a large table is to store the count in a separate table and increment/decrement that value whenever rows are added or removed. That way you only ever read a single cell, and a 51-second query drops to well under a second.
It may feel redundant, but it is the most efficient and optimal approach.
There are answers around that suggest querying INFORMATION_SCHEMA, but that doesn't help here because you need a WHERE clause, and the other alternatives are just as inefficient as your current query.
All you need is the current count, a place to store it, and the logic to increase/decrease it, and you're good to go.
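The counter-table pattern above can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 module in place of MySQL; the table and column names (`row_counts`, `statistics_filtered`) are made up for the example, and in production the same idea would be an `UPDATE ... SET cnt = cnt + 1` run in the same transaction as the insert (or in a trigger):

```python
# Sketch of the counter-table pattern: keep a pre-computed row count in
# step with the data, so reading one cell replaces a full COUNT(*) scan.
# SQLite stands in for MySQL here; names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE statistics (id INTEGER PRIMARY KEY, ad_type INTEGER)")
cur.execute("CREATE TABLE row_counts (name TEXT PRIMARY KEY, cnt INTEGER)")
cur.execute("INSERT INTO row_counts VALUES ('statistics_filtered', 0)")

def insert_row(ad_type):
    # Update the stored count in the same transaction as the insert,
    # so the counter can never drift from the data.
    with conn:
        cur.execute("INSERT INTO statistics (ad_type) VALUES (?)", (ad_type,))
        if ad_type != 3:  # mirror the WHERE (id >= 1 AND ad_type <> 3) filter
            cur.execute(
                "UPDATE row_counts SET cnt = cnt + 1 "
                "WHERE name = 'statistics_filtered'"
            )

for t in (1, 2, 3, 2):
    insert_row(t)

# Reading one cell instead of scanning 13 million rows.
count = cur.execute(
    "SELECT cnt FROM row_counts WHERE name = 'statistics_filtered'"
).fetchone()[0]
print(count)  # 3 rows match the filter
```

Deletes and updates that move a row across the filter boundary would need the matching decrement, which is the price of this denormalization.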
Source: https://stackoverflow.com/questions/56412381/how-to-trace-mysql-slow-query-log-file-entries-using-php-profiler