query-performance

How to get data from 4 tables in 1 sql query?

牧云@^-^@ submitted 2019-12-13 01:19:45
Question: I have the following database schema:

table courses:           id, tutor_id, title
table course_categories: id, category_id, course_id
table categories:        id, name
table tutors:            id, name
table subscribers:       id, course_id, user_id

I need to make one SQL query to get a course with all its categories, the tutor for that course, and the number of subscribers for that course. Can this be done in one query? Should this be done using stored procedures?

Answer 1: With this query you get what you want:

select co.title as course, ca
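The answer excerpt is cut off, so its full query is unknown. Below is a hedged sketch of the kind of single query it starts to build: joining the course to its tutor, aggregating category names, and counting subscribers in a subquery. Table and column names come from the question's schema; the sample data and the exact join/aggregation choices are my assumptions, demonstrated with SQLite.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE courses (id INTEGER PRIMARY KEY, tutor_id INTEGER, title TEXT);
CREATE TABLE tutors (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE course_categories (id INTEGER PRIMARY KEY, category_id INTEGER, course_id INTEGER);
CREATE TABLE subscribers (id INTEGER PRIMARY KEY, course_id INTEGER, user_id INTEGER);
INSERT INTO tutors VALUES (1, 'Alice');
INSERT INTO courses VALUES (10, 1, 'SQL Basics');
INSERT INTO categories VALUES (1, 'Databases'), (2, 'Beginner');
INSERT INTO course_categories VALUES (1, 1, 10), (2, 2, 10);
INSERT INTO subscribers VALUES (1, 10, 100), (2, 10, 101);
""")

# One query: tutor via JOIN, categories via GROUP_CONCAT,
# subscriber count via a correlated subquery.
row = c.execute("""
SELECT co.title                     AS course,
       t.name                       AS tutor,
       GROUP_CONCAT(ca.name, ', ')  AS categories,
       (SELECT COUNT(*) FROM subscribers s
         WHERE s.course_id = co.id) AS subscriber_count
FROM courses co
JOIN tutors t             ON t.id = co.tutor_id
JOIN course_categories cc ON cc.course_id = co.id
JOIN categories ca        ON ca.id = cc.category_id
WHERE co.id = 10
GROUP BY co.id
""").fetchone()
print(row)
```

No stored procedure is needed for this shape of query; the many-to-many categories relation is the only part that forces aggregation.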

Painfully slow Postgres query using WHERE on many adjacent rows

亡梦爱人 submitted 2019-12-12 18:37:48
Question: I have the following psql table. It has roughly 2 billion rows in total.

id  word      lemma  pos    textid  source
1   Stuffing  stuff  vvg    190568  AN
2   her       her    appge  190568  AN
3   key       key    nn1    190568  AN
4   into      into   ii     190568  AN
5   the       the    at     190568  AN
6   lock      lock   nn1    190568  AN
7   she       she    appge  190568  AN
8   pushed    push   vvd    190568  AN
9   her       her    appge  190568  AN
10  way       way    nn1    190568  AN
11  into      into   ii     190568  AN
12  the       the    appge  190568  AN
13  house     house  nn1    190568  AN
14  .         .             190568  AN
15  She       she    appge  190568  AN
16  had
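The question is truncated here, but a WHERE clause over many adjacent rows of a table this size is typically slow because the planner falls back to a sequential scan over all ~2 billion rows. The usual first fix is an index on the filter column so the planner can do a range scan. A small sketch, using SQLite's EXPLAIN QUERY PLAN to confirm the index is picked up (Postgres's EXPLAIN plays the same role); the column names follow the sample rows above, the index name is mine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE corpus (id INTEGER, word TEXT, lemma TEXT,"
          " pos TEXT, textid INTEGER, source TEXT)")
c.executemany("INSERT INTO corpus VALUES (?, ?, ?, ?, ?, ?)",
              [(i, 'w', 'w', 'nn1', 190568 + i // 10, 'AN') for i in range(100)])

# Index the column the WHERE clause filters on.
c.execute("CREATE INDEX idx_corpus_textid ON corpus (textid)")

plan = c.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM corpus WHERE textid = 190568").fetchall()
print(plan)  # the plan should mention idx_corpus_textid (an index search)
```

In Postgres the equivalent check is `EXPLAIN (ANALYZE) SELECT …`; if the output still shows `Seq Scan` after adding the index, the planner may consider the matched range too large for index access to pay off.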

Why do different MongoDB query plans show different nReturned values?

左心房为你撑大大i submitted 2019-12-12 10:13:59
Question: I have a collection faults in my MongoDB database in which every document has these fields: rack_name, timestamp. Just for the sake of testing and comparing performance, I have created these two indexes: rack -> {'rack_name': 1} and time -> {'timestamp': 1}. Now I executed the following query with explain():

db.faults.find({
    'rack_name': { $in: [ 'providence1', 'helena2' ] },
    'timestamp': { $gt: 1501548359000 }
})
.explain('allPlansExecution')

and here is the result:

{ "queryPlanner" : {

MySQL: Optimized query to find matching strings from set of strings

你离开我真会死。 submitted 2019-12-12 04:52:52
Question: I have 10 sets of strings, each set containing 9 strings. Of these 10 sets, all strings in the first set have length 10, those in the second set have length 9, and so on; finally, all strings in the 10th set have length 1. There is a common prefix of (length-2) characters in each set, and the prefix length reduces by 1 in the next set. Thus, the first set has 8 characters in common, the second has 7, and so on. Here is what a sample of the 10 sets looks like:

pu3q0k0vwn
pu3q0k0vwp
pu3q0k0vwr
pu3q0k0vwq
pu3q0k0vwm
pu3q0k0vwj
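The rest of the question is cut off, but since each set shares a common prefix, one common way to make such matching index-friendly in MySQL is a prefix pattern: `col LIKE 'pu3q0k0v%'` can use a B-tree index on the column, whereas a pattern with a leading wildcard cannot. A minimal sketch with SQLite; the table and column names are illustrative, not from the original question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE geo (code TEXT PRIMARY KEY)")
c.executemany("INSERT INTO geo VALUES (?)",
              [('pu3q0k0vwn',), ('pu3q0k0vwp',), ('pu3q0k0vwr',),
               ('zz9x1y2abc',)])

# Prefix of length 8 = the 10-character set's common prefix (10 - 2 chars).
rows = c.execute(
    "SELECT code FROM geo WHERE code LIKE 'pu3q0k0v%' ORDER BY code").fetchall()
print(rows)
```

Matching all 10 sets would then be 10 such prefix conditions joined with OR, one per prefix length, each of which remains a range scan on the index.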

Slow query performance left joining a view

匆匆过客 submitted 2019-12-12 04:30:06
Question: I have 2 tables:

describe CONSUMO
Field      Type         Null  Key  Default  Extra
idconsumo  int(11)      NO    PRI  NULL     auto_increment
idkey      int(11)      NO    MUL  NULL
ip         varchar(50)  NO         Unknown
fechahora  datetime     NO         NULL

describe CONTRATADO
Field        Type         Null  Key  Default    Extra
idkey        int(11)      NO    PRI  NULL       auto_increment
idusuario    int(11)      NO    MUL  NULL
idproducto   int(11)      NO    MUL  NULL
key          varchar(64)  NO    MUL  NULL
descripcion  varchar(50)  YES        "API KEY"
peticiones   int(11)      YES        NULL
caducidad    datetime     YES        NULL

And a view (that returns
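The view definition is cut off, but slow LEFT JOINs onto views are a classic MySQL problem: a view cannot be indexed, and a view that needs the TEMPTABLE algorithm (e.g. one containing GROUP BY) is materialized without any index, so the join degenerates into repeated scans. A common workaround is to materialize the view into a table yourself and index the join column. A schematic sketch with SQLite; the aggregate the view computes is my guess, and the names only loosely follow CONSUMO/CONTRATADO above (in MySQL you would use CREATE TEMPORARY TABLE).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE consumo (idconsumo INTEGER PRIMARY KEY, idkey INTEGER,
                      ip TEXT, fechahora TEXT);
INSERT INTO consumo (idkey, ip, fechahora) VALUES
    (1, '1.2.3.4', '2019-01-01'),
    (1, '1.2.3.4', '2019-01-02'),
    (2, '5.6.7.8', '2019-01-01');

-- Materialise what the view would compute (here: requests per key) ...
CREATE TABLE uso AS
    SELECT idkey, COUNT(*) AS peticiones_usadas
    FROM consumo GROUP BY idkey;
-- ... and give the join column an index, which a plain view cannot have.
CREATE INDEX idx_uso_idkey ON uso (idkey);
""")

rows = c.execute("SELECT idkey, peticiones_usadas FROM uso ORDER BY idkey").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```

CONTRATADO can then LEFT JOIN this indexed table on idkey instead of the view.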

Maximizing Performance with the Entity Framework [duplicate]

戏子无情 submitted 2019-12-12 04:17:47
Question: This question already has answers here: How to “warm-up” Entity Framework? When does it get “cold”? (5 answers). Closed 3 years ago. I am developing a travel web site. When the user enters a location in the search box (autocomplete), my action returns all cities that start with the user's input, together with those cities' regions, the regions' translations, hotels, and so on. I used Entity Framework code first, but the response time is too long. How can I optimize this? How can I decrease the time?

public JsonResult AutoComplateCityxxxxxxxx

Effective way to delete duplicate rows from millions of records

北慕城南 submitted 2019-12-12 03:28:04
Question: I am looking for an effective way to delete duplicated records from my database. First, I used a stored procedure that uses joins and such, which caused the query to execute very slowly. Now I am trying a different approach. Please consider the following queries:

/* QUERY A */
SELECT *
FROM my_table
WHERE col1 = value
  AND col2 = value
  AND col3 = value

This query just executed in 12 seconds, with a result of 182,400 records. The row count in the table is currently 420,930,407, and col1 and
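The question is truncated before the second query, but one widely used pattern for this task is: keep the lowest id per duplicate group and delete every other row in a single statement, avoiding the slow join-based procedure. A minimal sketch with SQLite; `my_table` and the column names come from the question, while treating (col1, col2, col3) as the deduplication key is an assumption.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY,"
          " col1 TEXT, col2 TEXT, col3 TEXT)")
c.executemany("INSERT INTO my_table (col1, col2, col3) VALUES (?, ?, ?)",
              [('a', 'b', 'c'), ('a', 'b', 'c'),   # duplicates
               ('a', 'b', 'c'), ('x', 'y', 'z')])

# Keep MIN(id) per (col1, col2, col3) group, delete the rest.
c.execute("""
DELETE FROM my_table
WHERE id NOT IN (
    SELECT MIN(id) FROM my_table GROUP BY col1, col2, col3
)
""")
conn.commit()

remaining = c.execute("SELECT col1, col2, col3 FROM my_table ORDER BY id").fetchall()
print(remaining)  # [('a', 'b', 'c'), ('x', 'y', 'z')]
```

On a 420-million-row table you would normally run such a delete in batches (e.g. by id range) and ensure an index covers the grouping columns, so each batch stays short and the lock footprint stays small.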

query efficiency - select the 2 latest “group/batch” records from table

落爺英雄遲暮 submitted 2019-12-12 01:25:42
Question: We have tested a quite interesting SQL query. Unfortunately, it turned out that this query runs a little slowly - O(n²) - and we are looking for an optimized solution, or maybe a totally different one.

Goal: We would like to get:
- for some customers ("record_customer_id"), e.g. ID 5,
- the latest 2 "record_init_proc_id"
- for every "record_inventory_id"

http://www.sqlfiddle.com/#!9/07e5d/4

The query works fine and shows the correct results, but uses at least two full table scans, which
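The "latest N per group" goal above is usually written with ROW_NUMBER() OVER (PARTITION BY …), which needs a single pass over the data instead of the O(n²) correlated scans. A hedged sketch with SQLite (window functions need SQLite ≥ 3.25); the column names follow the question, but the table name and sample data are invented, and the original fiddle's query is not reproduced here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE records (record_customer_id INTEGER,
             record_inventory_id INTEGER, record_init_proc_id INTEGER)""")
c.executemany("INSERT INTO records VALUES (?, ?, ?)",
              [(5, 1, 101), (5, 1, 102), (5, 1, 103),
               (5, 2, 201), (5, 2, 202)])

# Number rows per inventory, newest proc id first; keep the top 2.
rows = c.execute("""
SELECT record_inventory_id, record_init_proc_id
FROM (
    SELECT record_inventory_id, record_init_proc_id,
           ROW_NUMBER() OVER (
               PARTITION BY record_inventory_id
               ORDER BY record_init_proc_id DESC
           ) AS rn
    FROM records
    WHERE record_customer_id = 5
)
WHERE rn <= 2
ORDER BY record_inventory_id, record_init_proc_id DESC
""").fetchall()
print(rows)  # [(1, 103), (1, 102), (2, 202), (2, 201)]
```

MySQL 8.0+ supports the same window-function form; on MySQL 5.x (as on sqlfiddle's 9 engine) the usual fallback is a self-join counting newer rows per group.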

Speed up mysql table query

泄露秘密 submitted 2019-12-12 00:51:11
Question: I have the following MySQL table:

CREATE TABLE IF NOT EXISTS `customer_info` (
  `auto_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `customer_id` varchar(20) CHARACTER SET latin1 NOT NULL,
  `apply_dt` date DEFAULT NULL,
  `priority_opt` varchar(30) CHARACTER SET latin1 DEFAULT NULL,
  `cust_apl_by` varchar(30) CHARACTER SET latin1 DEFAULT NULL,
  `cust_upd_by` varchar(30) CHARACTER SET latin1 DEFAULT NULL,
  `cust_upd_dt` datetime DEFAULT NULL,
  `agent` varchar(30) COLLATE utf8_unicode_ci DEFAULT NULL,

RowNumber() and Partition By performance help wanted

徘徊边缘 submitted 2019-12-12 00:49:40
Question: I've got a table of stock market moving-average values, and I'm trying to compare two values within a day, and then compare that result to the same calculation for the prior day. My SQL as it stands is below... When I comment out the last SELECT statement that defines the result set and instead run the last CTE shown as the result set, I get my data back in about 15 minutes. Long, but manageable, since it will run as an insert sproc overnight. When I run it as shown, I'm at 40 minutes before any results
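Comparing a day's value against the prior day's is exactly what the LAG() window function does in a single scan, and it can often replace a chain of ROW_NUMBER/PARTITION BY CTEs that each force their own sort. An illustrative sketch with SQLite; the asker's actual CTEs are not shown in the excerpt, and the table, column names, and data here are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE ma (symbol TEXT, day TEXT, ma_fast REAL, ma_slow REAL)")
c.executemany("INSERT INTO ma VALUES (?, ?, ?, ?)",
              [('ACME', '2019-01-01', 10.0, 11.0),
               ('ACME', '2019-01-02', 12.0, 11.5),
               ('ACME', '2019-01-03', 11.0, 11.25)])

rows = c.execute("""
SELECT day,
       ma_fast - ma_slow AS spread,                    -- within-day comparison
       (ma_fast - ma_slow) - LAG(ma_fast - ma_slow) OVER (
           PARTITION BY symbol ORDER BY day
       ) AS spread_change                              -- vs. the prior day
FROM ma
ORDER BY day
""").fetchall()
print(rows)
```

The first day has no prior row, so its spread_change is NULL. In SQL Server, where the original is likely running, LAG() is available from 2012 onward and typically reuses one sort per window specification rather than re-partitioning per CTE.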