query-optimization

Django: How to prefetch related for a model instance. Perhaps by wrapping in a queryset?

Submitted by 僤鯓⒐⒋嵵緔 on 2020-08-08 06:36:45

Question: I use Django REST Framework and have fairly deep nesting in my model relations. I'm working on optimizing my queries. Many of my functions consume or manipulate a single model instance, and it is often further downstream in the data flow that it turns out I need some prefetching. One classic instance of this is with DRF serializers. Here's an example:

    @api_view(['GET'])
    def fetch_user_profile(request):
        profile = request.user.profile  # has many nested relationships
        return Response
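A common way to handle this (not stated in the truncated question, so treat it as a hedged sketch) is Django's `prefetch_related_objects`, which runs prefetch queries against an already-fetched instance instead of a queryset. `ProfileSerializer` and the `pets`/`pets__toys` relations below are hypothetical names used only for illustration:

```python
# Sketch: prefetch relations for a single, already-loaded instance.
# `ProfileSerializer`, `pets`, and `pets__toys` are hypothetical names.
from django.db.models import prefetch_related_objects
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['GET'])
def fetch_user_profile(request):
    profile = request.user.profile
    # Run the prefetch queries against the single instance in place, so the
    # serializer's nested lookups hit the prefetch cache instead of the DB.
    prefetch_related_objects([profile], 'pets', 'pets__toys')
    return Response(ProfileSerializer(profile).data)
```

This keeps the view's downstream code unchanged: the instance simply arrives with its related objects cached, as if it had come from a queryset with `prefetch_related`.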

MySQL index in between where clause and order by clause

Submitted by 删除回忆录丶 on 2020-07-29 07:17:07

Question: My table structure is as follows:

    CREATE TABLE test (
        id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
        field_1 VARCHAR(60) NOT NULL,
        field_2 INT(10) UNSIGNED NULL,
        field_3 INT(10) UNSIGNED NULL,
        field_4 INT(10) UNSIGNED NULL,
        field_5 CHAR(2) NULL,
        field_6 INT(10) UNSIGNED NOT NULL,
        rank TINYINT(2) NOT NULL DEFAULT '0',
        status TINYINT(3) NOT NULL DEFAULT '0',
        PRIMARY KEY (id),
        INDEX (status)
    ) DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ENGINE = MyISAM;

On the above table the fields
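The usual fix for a WHERE-equality-plus-ORDER-BY query is a composite index with the filtered column first and the sort column second. The question targets MySQL/MyISAM, but since it is cut off, here is the general idea sketched portably with SQLite's EXPLAIN QUERY PLAN (the assumption that `status` is filtered and `field_6` is sorted is mine, not the question's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (
        id INTEGER PRIMARY KEY,
        field_6 INTEGER NOT NULL,
        status INTEGER NOT NULL DEFAULT 0
    );
    -- status first (equality filter), field_6 second (sort column)
    CREATE INDEX idx_status_field6 ON test(status, field_6);
""")
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM test WHERE status = 1 ORDER BY field_6
""").fetchall()
details = " ".join(row[3] for row in plan)  # plan rows: (id, parent, notused, detail)
print(details)
```

Because rows within `status = 1` are already stored in `field_6` order inside the index, the plan shows an index search with no separate sort (no "USE TEMP B-TREE FOR ORDER BY" step).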

Converting Merge clause with Bulk collect/FORALL in pl/sql

Submitted by 跟風遠走 on 2020-06-29 03:34:15

Question: I wrote a procedure where data gets updated/inserted into the destination table from the source table simultaneously. The procedure works fine for a small number of records, but when I try to execute it with more records the operation takes much longer. Can we convert the MERGE clause to use BULK COLLECT while keeping the logic the same? I didn't find any useful resources. I have attached my merge procedure:

    create or replace PROCEDURE TEST1 ( p_array_size IN NUMBER )
    IS
        CURSOR dtls IS SELECT
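In Oracle, the standard rewrite is BULK COLLECT with a LIMIT feeding a FORALL statement, so rows move in batches instead of one at a time. As a hedged, portable illustration of that same batching idea (not Oracle PL/SQL itself), here is a Python/SQLite sketch that upserts from a source table into a destination in fixed-size batches; all table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE dst (id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO src VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO dst VALUES (2, 'old');
""")

BATCH_SIZE = 2  # analogue of p_array_size / BULK COLLECT ... LIMIT
cur = conn.execute("SELECT id, val FROM src")
while True:
    batch = cur.fetchmany(BATCH_SIZE)  # analogue of BULK COLLECT into an array
    if not batch:
        break
    # Analogue of FORALL + MERGE: one statement execution per batch, not per row
    conn.executemany(
        "INSERT INTO dst (id, val) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET val = excluded.val",
        batch,
    )
conn.commit()
rows = conn.execute("SELECT id, val FROM dst ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```

The performance lever is the same in both worlds: amortizing per-statement overhead across a batch, with the batch size bounding memory use.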

Planner not using index order to sort the records using CTE

Submitted by 五迷三道 on 2020-06-28 06:13:11

Question: I am trying to pass some ids into an IN clause on a sorted index, with the same ORDER BY condition, but the query planner explicitly sorts the data after performing the index search. Below are my queries.

Generate a temporary table:

    SELECT a.n/20 as n, md5(a.n::TEXT) as b
    INTO temp_table
    FROM generate_series(1, 100000) as a(n);

Create an index:

    CREATE INDEX idx_temp_table ON temp_table(n ASC, b ASC);

In the query below, the planner uses index ordering and doesn't explicitly sort the data (expected):
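Whether a planner can reuse index order depends on the query shape, and an IN list often breaks the guarantee the planner needs. The question is Postgres-specific, but the underlying effect can be sketched with SQLite's EXPLAIN QUERY PLAN: an ORDER BY that matches a composite index needs no sort step, while a non-matching ORDER BY forces an explicit sort. This is a hedged illustration of the concept, not the Postgres planner itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE temp_table (n INTEGER, b TEXT);
    CREATE INDEX idx_temp_table ON temp_table(n ASC, b ASC);
""")

def plan(sql):
    # Plan rows are (id, parent, notused, detail); join the detail strings.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[3] for r in rows)

# ORDER BY matches the index: rows come back in index order, no sort step.
ordered = plan("SELECT n, b FROM temp_table ORDER BY n, b")
# ORDER BY does not match the index: an explicit sort (temp b-tree) appears.
unordered = plan("SELECT n, b FROM temp_table ORDER BY b")
print(ordered)
print(unordered)
```

The presence or absence of the "USE TEMP B-TREE FOR ORDER BY" step is SQLite's equivalent of the explicit Sort node the questioner sees in the Postgres plan.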

PostgreSQL - fetch the row which has the Max value for a column

Submitted by 眉间皱痕 on 2020-06-27 05:29:08

Question: I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id:

- There are multiple users (distinct usr_id's).
- time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
- trans_id is unique only for very small time ranges: over time it repeats.
- remaining_lives (for
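This is the classic greatest-per-group problem; in Postgres the canonical answer is `SELECT DISTINCT ON (usr_id) ... ORDER BY usr_id, time_stamp DESC`. A portable version joins each user against their maximum time_stamp. Here is a hedged sketch with invented sample values, using SQLite as a stand-in for Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lives (
        time_stamp TEXT, usr_id INTEGER,
        transaction_id INTEGER, lives_remaining INTEGER
    );
    INSERT INTO lives VALUES
        ('2020-01-01 07:00', 1, 10, 1),
        ('2020-01-01 09:00', 1, 11, 4),
        ('2020-01-01 09:00', 2, 12, 9),
        ('2020-01-01 10:00', 2, 13, 2);
""")
# Greatest-per-group: for each usr_id, keep the row with the max time_stamp.
rows = conn.execute("""
    SELECT l.usr_id, l.time_stamp, l.lives_remaining
    FROM lives l
    JOIN (SELECT usr_id, MAX(time_stamp) AS ts
          FROM lives GROUP BY usr_id) latest
      ON l.usr_id = latest.usr_id AND l.time_stamp = latest.ts
    ORDER BY l.usr_id
""").fetchall()
print(rows)  # [(1, '2020-01-01 09:00', 4), (2, '2020-01-01 10:00', 2)]
```

One caveat that matters for this question: since time_stamp is not unique per user, the join form can return several rows for a user whose latest events tie on time_stamp; `DISTINCT ON` (or a window-function ROW_NUMBER with a tie-breaker such as trans_id) picks exactly one.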

MySQL ALTER TABLE taking long in small table

Submitted by 我是研究僧i on 2020-06-17 05:15:48

Question: I have two tables in my scenario: table1, which has about 20 tuples, and table2, which has about 3 million tuples. table2 has a foreign key referencing table1's "ID" column. When I try to execute the following query:

    ALTER TABLE table1 MODIFY vccolumn VARCHAR(1000);

it takes forever. Why is it taking that long? I have read that it should not, because the table only has 20 tuples. Is there any way to speed it up without server downtime? The query is also locking the table.

Answer 1: I would guess
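The answer is cut off, but a likely culprit with this shape is metadata locking: even a tiny ALTER must wait for every open transaction touching table1 to finish, including reads triggered by foreign-key checks from the 3-million-row child table. A hedged sketch of how one might diagnose and attempt a non-blocking change in MySQL (these are standard MySQL/InnoDB statements, but whether the change can actually run in place depends on your version and the column's size-byte requirements):

```sql
-- See which sessions are holding locks or long transactions
-- before the ALTER even starts executing.
SHOW FULL PROCESSLIST;

-- Ask for an in-place, non-locking change; MySQL raises an error
-- instead of silently falling back to a full table copy if the
-- requested algorithm/lock level is not possible.
ALTER TABLE table1
    MODIFY vccolumn VARCHAR(1000),
    ALGORITHM=INPLACE, LOCK=NONE;
```

If the ALTER is stuck in the "Waiting for table metadata lock" state, killing or committing the blocking transaction unblocks it immediately, regardless of table size.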

Why does MySQL not always use index for select query?

Submitted by 旧城冷巷雨未停 on 2020-06-01 06:23:00

Question: I have two tables in my database, users and articles. The records in my users and articles tables are given below:

    +----+-------+
    | id | name  |
    +----+-------+
    |  1 | user1 |
    |  2 | user2 |
    |  3 | user3 |
    +----+-------+

    +----+---------+----------+
    | id | user_id | article  |
    +----+---------+----------+
    |  1 |       1 | article1 |
    |  2 |       1 | article2 |
    |  3 |       1 | article3 |
    |  4 |       2 | article4 |
    |  5 |       2 | article5 |
    |  6 |       3 | article6 |
    +----+---------+----------+

Given below are the queries and the respective
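Although the question is cut off, the usual reason MySQL skips an index is selectivity: when a filter matches a large fraction of the table's rows (for example, `user_id = 1` matching half of this tiny articles table), the optimizer judges a full scan cheaper than many index lookups plus row fetches. A hedged sketch for inspecting and overriding that choice with standard MySQL EXPLAIN and index-hint syntax (the index name `user_id` is an assumption for illustration):

```sql
-- Compare the optimizer's choice for a given filter
EXPLAIN SELECT * FROM articles WHERE user_id = 1;

-- Index hints let you test whether the index would actually be faster
EXPLAIN SELECT * FROM articles FORCE INDEX (user_id) WHERE user_id = 1;
EXPLAIN SELECT * FROM articles IGNORE INDEX (user_id) WHERE user_id = 1;
```

If FORCE INDEX produces a slower plan in practice, the optimizer's refusal was correct; on small or skewed tables this is common and usually not worth fighting.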