SQL Azure - maxing DTU percentage querying an 'empty table'

Anonymous (unverified), submitted 2019-12-03 01:20:02

Question:

I have been having trouble with a database for the last month or so... (it was fine in November). (S0 Standard tier - not even the lowest tier.) - Fixed in Update 5.

Select statements are causing my database to throttle (even time out). To make sure it wasn't just a problem with my database, I've:

  1. Copied the database... same problem on both (unless I increase the tier size).
  2. Deleted the database, and created it again (a blank database) from Entity Framework code-first.

The second one proved more interesting. Now my database has 'no' data, and it still peaks the DTU and makes things unresponsive.

Firstly ... is this normal?

I do have more complicated databases at work that use at most about 10% of the DTU at the same level (S0). So I'm perplexed. This is just one user and one database, currently empty, and I can make it unresponsive.

Update 2: From the copy (the one with data, ~10000 records): I upgraded it to Standard S2 (potentially 5x more powerful than S0). No problems. Downgraded it to S0 again and:

SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT * FROM Competitions -- 6 records here...

SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

(6 row(s) affected)

Table 'Competitions'. Scan count 1, logical reads 3, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 407 ms, elapsed time = 21291 ms.

Am I misunderstanding Azure databases - do they need to keep warming up? If I run the same query again it is immediate. If I close the connection and run it again, it's back to ~20 seconds.
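One way to see what is actually eating the DTU during these slow first runs is the Azure SQL DMV sys.dm_db_resource_stats, which reports CPU, data IO, and log IO percentages in 15-second intervals. A minimal sketch (run against the user database itself, not master):

```sql
-- Recent resource usage for this database, newest first.
-- The avg_cpu_percent / avg_data_io_percent / avg_log_write_percent
-- columns show which component is driving the DTU percentage.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

Given that the elapsed time above is ~21 seconds while CPU time is only 407 ms, data IO would be the first column to check.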

Update 3: At the S1 level, the same query above runs for the first time in ~1 second.

Update 4: S0 level again... first query...

(6 row(s) affected)

Table 'Competitions'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 16 ms, elapsed time = 35 ms.

Nothing is changing on these databases apart from the tier. After roaming around on one of my live sites (different database, schema and data) on S0... it peaked at 14.58% (it's a stats site).

It's not my best investigation, but I'm tired :D I can give more updates if anyone is curious.

** Update 5 - fixed, sort of **

The first few 100% spikes were from the same table. After updating the schema and removing a geography field (the data in that column was always null), the usage moved to the later, smaller peaks of ~1-4%, with result times back in the very low milliseconds.

Thanks for the help, Matt

Answer 1:

The cause of the crippling 100% DTU problem was a GEOGRAPHY field: http://msdn.microsoft.com/en-gb/library/cc280766.aspx

Removing this from my queries fixed the problem. Removing it from my EF models will hopefully make sure it never comes back.

I do want to use the geography field in Azure (eventually, though probably not for a few months), so if anyone knows why an unexpected amount of DTU was being spent on a (currently always null) column, that would be very useful for future knowledge.
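For anyone hitting the same symptom, a hedged sketch of how to locate geography columns and, after confirming one is unused, drop it with plain T-SQL. The table name Competitions comes from the question above; the column name 'Location' is purely hypothetical, since the original column name isn't given:

```sql
-- Find all geography columns in the current database.
SELECT t.name AS table_name, c.name AS column_name
FROM sys.columns AS c
JOIN sys.tables  AS t  ON t.object_id = c.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name = 'geography';

-- Once confirmed the column is always NULL and unused
-- ('Location' is a hypothetical column name):
-- ALTER TABLE dbo.Competitions DROP COLUMN Location;
```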


