query-performance

Does wildcard in left-most column of composite index mean remaining columns in index aren't used in index lookup (MySQL)?

Submitted by 魔方 西西 on 2019-12-31 07:27:06
Question: Imagine you have a composite primary index on last_name, first_name. You then run a search with WHERE first_name LIKE 'joh%' AND last_name LIKE 'smi%'. Does the wildcard in the last_name condition mean that the first_name condition will not be used to further help MySQL narrow the index lookup? In other words, by putting a wildcard on the last_name condition, will MySQL only do a partial index lookup (and ignore conditions on the columns to the right of last_name)? Further…
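A composite B-tree index is essentially a list of tuples sorted by its leading column first. A minimal Python sketch (with hypothetical names and values) of why a prefix wildcard on the leading column turns the remaining columns into scan-time filters rather than seek conditions:

```python
import bisect

# Hypothetical composite index on (last_name, first_name):
# conceptually just a sorted list of tuples.
index = sorted([
    ("jones", "johnny"), ("smit", "joe"), ("smith", "anna"),
    ("smith", "john"), ("smithers", "johan"), ("smyth", "john"),
])

# WHERE last_name LIKE 'smi%' becomes a range scan on the leading column.
lo = bisect.bisect_left(index, ("smi",))
hi = bisect.bisect_left(index, ("smj",))  # upper bound of the 'smi' prefix
candidates = index[lo:hi]

# Within that range, first_name values are not contiguous, so
# first_name LIKE 'joh%' can only be applied as a filter on the
# scanned entries, not as a further index seek.
result = [t for t in candidates if t[1].startswith("joh")]
print(result)  # [('smith', 'john'), ('smithers', 'johan')]
```

Recent MySQL versions can still evaluate the first_name predicate against the index entries during the range scan (index condition pushdown), but it no longer narrows where the scan starts and stops.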

Query performance: Query on multiple tables Vs. Composite query

Submitted by 家住魔仙堡 on 2019-12-31 07:04:08

Question: Table A has a column srno and a few other columns. Table B has columns srno and id. I want to get srno from B for a given id and then fetch the record(s) for that srno from table A. For example, if id is 7, I can think of doing this in two ways:

select * from A as table_a, B as table_b where table_a.srno=table_b.srno and table_b.id=7;

And:

select * from A where srno in (select srno from B where id=7);

Both do the same thing. But when there are a huge number of records in both the tables, …
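The two formulations can be compared directly. A small self-contained sketch using SQLite and invented column values (the question only shows part of the real schema):

```python
import sqlite3

# Hypothetical tables A(srno, val) and B(srno, id) to compare both forms.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (srno INTEGER, val TEXT);
    CREATE TABLE B (srno INTEGER, id INTEGER);
    INSERT INTO A VALUES (1, 'x'), (2, 'y'), (3, 'z');
    INSERT INTO B VALUES (2, 7), (3, 8);
""")

join_rows = conn.execute(
    "SELECT A.* FROM A, B WHERE A.srno = B.srno AND B.id = 7"
).fetchall()
sub_rows = conn.execute(
    "SELECT * FROM A WHERE srno IN (SELECT srno FROM B WHERE id = 7)"
).fetchall()

# Both forms return the same rows; a modern optimizer will often
# rewrite one into the other anyway.
print(join_rows, sub_rows)
```

For large tables, running EXPLAIN on both queries against the real data is the only reliable way to see whether the optimizer produces the same plan.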

Should I sacrifice my innodb_buffer_pool_size/RAM to make space for query_cache_size?

Submitted by 丶灬走出姿态 on 2019-12-31 03:55:06

Question: I have a dedicated MySQL database server with 16 GB of RAM. My innodb_buffer_pool_size is set to around 11 GB, and I am implementing the query cache in my system with a size of 80 MB. Where should I take this space from: innodb_buffer_pool_size or RAM?

Answer 1: Back in June 2014 I answered https://dba.stackexchange.com/questions/66774/why-query-cache-type-is-disabled-by-default-start-from-mysql-5-6/66796#66796. In that post, I discussed how InnoDB micromanages changes between the InnoDB Buffer Pool and the Query…
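If the 80 MB must come from somewhere, one option is to shrink the buffer pool rather than eat into the RAM left for the OS and connection buffers. A sketch only; the exact numbers depend on what else runs on the box:

```ini
# my.cnf sketch: 16 GB box, sizes taken from the question
[mysqld]
innodb_buffer_pool_size = 11184M  # ~11 GB minus the 80 MB given to the cache
query_cache_type        = 1
query_cache_size        = 80M
```

Note that the query cache is disabled by default from MySQL 5.6 on and removed in 8.0, so measuring its hit rate before committing buffer-pool memory to it is worthwhile.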

Why isn't my PostgreSQL array index getting used (Rails 4)?

Submitted by 旧巷老猫 on 2019-12-29 09:27:11

Question: I've got a PostgreSQL array of strings as a column in a table. I created an index using the GIN method, but ANY queries won't use the index (instead, they do a sequential scan of the whole table with a filter). What am I missing? Here's my migration:

class CreateDocuments < ActiveRecord::Migration
  def up
    create_table :documents do |t|
      t.string :title
      t.string :tags, array: true, default: []
      t.timestamps
    end
    add_index :documents, :tags, using: 'gin'
    (1..100000).each do |i|
      tags = []
      tags…
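A GIN index over an array column is, roughly, an inverted index from element to row ids, which is why containment operators (tags @> ARRAY[...] in PostgreSQL) can use it, while an expression like 'x' = ANY (tags) may be planned as a sequential scan. A toy Python model with hypothetical documents:

```python
from collections import defaultdict

# Hypothetical rows: id -> tags array.
docs = {1: ["ruby", "rails"], 2: ["sql"], 3: ["ruby", "sql"]}

# A GIN-style inverted index: element -> posting list of row ids.
gin = defaultdict(set)
for doc_id, tags in docs.items():
    for tag in tags:
        gin[tag].add(doc_id)

def contains_all(wanted):
    """Model of tags @> ARRAY[wanted]: intersect posting lists, so only
    the index is touched -- no per-row scan of the whole table."""
    return sorted(set.intersection(*(gin[t] for t in wanted)))

print(contains_all(["ruby", "sql"]))  # [3]
```

In Rails that usually means writing something like `Document.where("tags @> ARRAY[?]::varchar[]", ["ruby"])` instead of an ANY test, so the planner can consider the GIN index.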

Execute multiple functions together without losing performance

Submitted by 不羁的心 on 2019-12-29 07:31:15

Question: I have a process that has to run a series of queries, using PL/pgSQL:

--process:
SELECT function1();
SELECT function2();
SELECT function3();
SELECT function4();

To be able to execute everything in one call, I created a process function as such:

CREATE OR REPLACE FUNCTION process() RETURNS text AS $BODY$
BEGIN
  PERFORM function1();
  PERFORM function2();
  PERFORM function3();
  PERFORM function4();
  RETURN 'process ended';
END;
$BODY$ LANGUAGE plpgsql

The problem is, when I sum the time that each…

postgres not using index

Submitted by 本小妞迷上赌 on 2019-12-25 08:38:50

Question: There are lots of questions on this topic, but all of them seem to be more complex cases than what I'm looking at at the moment, and the answers don't seem applicable.

OHDSI=> \d record_counts
          Table "results2.record_counts"
         Column         |  Type   | Modifiers
------------------------+---------+-----------
 concept_id             | integer |
 schema                 | text    |
 table_name             | text    |
 column_name            | text    |
 column_type            | text    |
 descendant_concept_ids | bigint  |
 rc                     | numeric |
 drc                    | numeric |
 domain_id              | character…

How to understand the statistics in an Oracle trace file, such as CPU, elapsed time, query, etc.

Submitted by 北慕城南 on 2019-12-25 01:30:07

Question: I am learning query optimization in Oracle, and I know that the trace file contains statistics about the query execution and the EXPLAIN PLAN of the query. At the bottom of the trace file is the EXPLAIN PLAN of the query. My first question is: does the part "time = 136437 us" show the duration of each step of query execution? What does "us" mean? Is it a unit of time? In addition, can anyone explain what statistics such as count, cpu, elapsed, disk and query mean? I googled and read the Oracle doc…

Snowflake sproc vs standalone SQL

Submitted by 允我心安 on 2019-12-24 21:30:41

Question: I am thinking of creating a denormalized table for our BI purposes. While building business logic from several tables, I noticed queries perform better when the denormalized table is updated in batches (a sproc with multiple business-logic SQL statements) using merge statements, as below. E.g., the sproc contains multiple SQL statements like:

merge denormalized_data (select businesslogic1)
merge denormalized_data (select businesslogic2)
etc.

Is it better to include the business logic in one huge SQL statement or divide it so that each query handles…

effect of number of projections on query performance

Submitted by 北城以北 on 2019-12-24 18:54:15

Question: I am looking to improve the performance of a query which selects several columns from a table. I was wondering if limiting the number of columns would have any effect on the performance of the query.

Answer 1: Reducing the number of columns would, I think, have only a very limited effect on the speed of the query, but would have a potentially larger effect on the transfer speed of the data. The less data you select, the less data needs to be transferred over the wire to your application.

Answer 2: I…