query-tuning

Optimizing huge value list in Teradata without volatile tables

拜拜、爱过, submitted 2021-01-28 04:11:30
Question: I have a value list used like `where a.c1 in ( list )`. Shoving the list into a volatile table would be the best way out, but this is being done via Cognos, and IBM isn't smart enough to know what a Teradata volatile table is. I wish it were, so I could use exclusion logic (EXISTS) against the volatile table's contents. So, without a volatile table, I have a value list `where a.c1 in ( list )` with around 5K values. Keeping that list in the report is proving expensive. I wondered if it was possible…
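When a temporary table is off the table, one common workaround is to split the huge IN-list into smaller batches and combine the results client-side. A minimal sketch in Python, with SQLite standing in for Teradata (the table, column, and batch size are hypothetical, not from the original report):

```python
import sqlite3

def chunked(seq, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 INTEGER, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(100)])

values = list(range(0, 100, 3))   # stand-in for the 5K-value list
rows = []
for batch in chunked(values, 10): # keep each IN-list small
    placeholders = ",".join("?" * len(batch))
    rows += conn.execute(
        f"SELECT c1, payload FROM t WHERE c1 IN ({placeholders})", batch
    ).fetchall()

print(len(rows))  # one row per matching value
```

Each batch is a parameterized query, so the 5K literals never have to be embedded in the report SQL itself; whether this beats one giant IN-list depends on the engine's plan caching and is worth measuring.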

MarkLogic: Understanding searchable and unsearchable queries?

只谈情不闲聊, submitted 2019-12-24 07:07:27
Question: I have the following expression: `let $q1 := cts:element-range-query(xs:QName("ts:week"), ">=", xs:date("2009-04-25")) return cts:search(fn:doc(), $q1, "unfiltered")` I ran xdmp:plan and learned that range indexes are being used and the expression is searchable. However, when I added an XPath: `let $q1 := cts:element-range-query(xs:QName("ts:week"), ">=", xs:date("2009-04-25")) return cts:search(fn:doc(), $q1, "unfiltered")/ts:top-song/ts:title/text()` and ran xdmp:plan again, it told me the…

Date Parameter causing Clustered Index Scan

*爱你&永不变心*, submitted 2019-12-23 22:56:59
Question: I have the following query: `DECLARE @StartDate DATE = '2017-09-22' DECLARE @EndDate DATE = '2017-09-23' SELECT a.col1, a.col2, b.col1, b.col2, b.col3, a.col3 FROM TableA a JOIN TableB b ON b.pred = a.pred WHERE b.col2 > @StartDate AND b.col2 < @EndDate` When I run this and inspect the actual execution plan, I can see that the most costly operator is a clustered index scan (the index is on a.pred). However, if I change the query as follows: `SELECT a.col1, a.col2, b.col1, b.col2, b.col3, a.col3`…
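SQL Server specifics such as parameter sniffing and OPTION(RECOMPILE) can't be reproduced here, but the general principle behind this kind of fix can be: a range predicate on a column only becomes a seek when a supporting index on that column exists and is chosen. A small sketch using SQLite's EXPLAIN QUERY PLAN (table and index names are illustrative, not from the original question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableB (pred INTEGER, col2 TEXT);
    CREATE INDEX ix_b_col2 ON TableB (col2);
""")

# With an index on the filtered column, the date-range predicate
# can be answered by an index SEARCH instead of a full table SCAN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT b.col2 FROM TableB b WHERE b.col2 > ? AND b.col2 < ?",
    ("2017-09-22", "2017-09-23"),
).fetchall()

plan_text = " ".join(row[-1] for row in plan)
print(plan_text)  # expect a SEARCH via the index, not a full SCAN
```

On SQL Server the analogous check is comparing the estimated row counts for the variable vs. literal forms of the query in the actual plan.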

Cypher: Use WHERE clause or MATCH property definition for exact match?

你。, submitted 2019-12-19 19:43:18
Question: In Neo4j (version 3.0), the following queries return the same results: 1. `MATCH (a:Label) WHERE a.property = "Something" RETURN a` 2. `MATCH (a:Label {property: "Something"}) RETURN a` While playing with some large datasets, I noticed (and verified using EXPLAIN and PROFILE) that in some cases queries like the second one perform better and faster. Other cases exist where both versions performed equally, but I have not yet seen one where the first version performed better. The Neo4j…

Expanded tree cache full error need to tune the query

假如想象, submitted 2019-12-13 00:39:49
Question: Description: $enumValues holds a sequence of strings that I have to look in; $assetSubGroup holds an element value from the XML (inside a for loop), i.e. a string that I have to match against the sequence above. If there is no match, I have to capture a few element values and return them. All three of my attempts below are giving me expanded tree cache full errors. There are ~470,000 assets, i.e. XML documents, that I'm querying. How can I tune these queries to avoid expanded tree cache errors? Approach 1: `let $query-name := "get-asset`…

Create Index on partial CHAR Column

≯℡__Kan透↙, submitted 2019-12-12 13:17:52
Question: I have a CHAR(250) column being used as a foreign key to a VARCHAR(24) column. I recall that in MySQL I could create an index specifying column(24) in order to index the leftmost 24 characters. This doesn't appear to be possible on MS SQL Server. My question is this: is it possible to use an indexed view on SQL Server 2008 to index a substring of that column, and if so, would it have any side effects on the table's performance? Answer 1: You can create a persisted computed column,…
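The accepted direction here (a persisted computed column holding the substring, with an index on it) has a close analogue in SQLite's expression indexes, which makes the idea easy to sketch and verify. A minimal demonstration, with a hypothetical table, assuming equality lookups on the 24-character prefix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE wide (big CHAR(250));
    -- Index only the leading 24 characters, mirroring MySQL's
    -- column(24) prefix index / SQL Server's indexed persisted
    -- computed column over SUBSTRING(big, 1, 24).
    CREATE INDEX ix_wide_prefix ON wide (substr(big, 1, 24));
""")

# A predicate written against the same expression can use the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM wide WHERE substr(big, 1, 24) = ?",
    ("some-24-char-key-value!!",),
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)
```

On SQL Server the equivalent requires the computed column to be deterministic (SUBSTRING over a base column is), and writes pay a small maintenance cost for the extra indexed column.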

Increase SQL Query Performance

半城伤御伤魂, submitted 2019-12-12 06:57:37
Question: SQL:
select distinct DateAdd(Day, DateDiff(Day, 0, m.Receive_date), 0) as Date,
(select count(*) from Raw_Mats A where DateAdd(Day, DateDiff(Day, 0, A.Receive_date), 0) = DateAdd(Day, DateDiff(Day, 0, m.Receive_date), 0)) as Total,
(select count(*) from Raw_Mats B where DateAdd(Day, DateDiff(Day, 0, B.Receive_date), 0) = DateAdd(Day, DateDiff(Day, 0, m.Receive_date), 0) and B.status='Solved') as Delivered,
(select count(*) from Raw_Mats C where DateAdd(Day, DateDiff(Day, 0, C.Receive_date), 0)…
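The per-date correlated subqueries above can usually be collapsed into a single GROUP BY pass with conditional aggregation, which removes both the DISTINCT and the repeated scans. A runnable sketch using SQLite (date() stands in for the DateAdd/DateDiff day-truncation idiom in the original T-SQL; sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Raw_Mats (Receive_date TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO Raw_Mats VALUES (?, ?)",
    [("2019-12-01", "Solved"), ("2019-12-01", "Open"),
     ("2019-12-02", "Solved"), ("2019-12-02", "Solved")],
)

# One pass with conditional aggregation instead of three correlated
# per-day subqueries against the same table.
rows = conn.execute("""
    SELECT date(Receive_date)                                  AS Date,
           COUNT(*)                                            AS Total,
           SUM(CASE WHEN status = 'Solved' THEN 1 ELSE 0 END)  AS Delivered
    FROM Raw_Mats
    GROUP BY date(Receive_date)
    ORDER BY Date
""").fetchall()
print(rows)  # [('2019-12-01', 2, 1), ('2019-12-02', 2, 2)]
```

The same rewrite in T-SQL would use CAST(Receive_date AS DATE) (or the original DateAdd/DateDiff expression) as the grouping key.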

MySQL Query Tuning - Why is using a value from a variable so much slower than using a literal?

别说谁变了你拦得住时间么, submitted 2019-12-10 17:53:44
Question: UPDATE: I've answered this myself below. I'm trying to fix a performance issue in a MySQL query. What I think I'm seeing is that assigning the result of a function to a variable, and then running a SELECT that compares against that variable, is relatively slow. If, for testing's sake, I replace the comparison to the variable with a comparison to the string literal equivalent of what I know the function will return (for a given scenario), then the query runs much faster. For example: `... SET`…

Explain plan in MySQL performance: Using temporary; Using filesort; Using index condition

旧时模样, submitted 2019-12-02 07:19:52
I have read various blogs and documents online, but I just want to know how I can optimize the query. I am unable to decide whether we have to rewrite the query or add indexes in order to optimize it. Adding the create table structure as well:
CREATE TABLE `dsr_table` (
  `DSR_VIA` CHAR(3) DEFAULT NULL,
  `DSR_PULLDATA_FLAG` CHAR(1) DEFAULT 'O',
  `DSR_BILLING_FLAG` CHAR(1) DEFAULT 'O',
  `WH_FLAG` CHAR(1) DEFAULT 'O',
  `ARCHIVE_FLAG` CHAR(1) NOT NULL DEFAULT 'O',
  `DSR_BOOKING_TYPE` INT(2) DEFAULT NULL,
  `DSR_BRANCH_CODE` CHAR(3) NOT NULL,
  `DSR_CNNO` CHAR(12) NOT NULL,
  `DSR_BOOKED_BY` CHAR(1) NOT NULL,
  `DSR_CUST_CODE`…
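The query itself is truncated above, but "Using temporary; Using filesort" in MySQL typically disappears when a composite index puts the equality-filtered column first and the GROUP BY/ORDER BY column second, so the sort order can be read straight off the index. A sketch of that principle with SQLite (whose plan reports the same problem as "USE TEMP B-TREE FOR ORDER BY"); the chosen columns are an assumption based on the DDL excerpt, not the original query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dsr_table (
        ARCHIVE_FLAG    CHAR(1)  NOT NULL DEFAULT 'O',
        DSR_BRANCH_CODE CHAR(3)  NOT NULL,
        DSR_CNNO        CHAR(12) NOT NULL
    );
    -- Equality column first, then the ORDER BY column, so the result
    -- comes back already sorted and no temp sort structure is needed.
    CREATE INDEX ix_dsr ON dsr_table (ARCHIVE_FLAG, DSR_BRANCH_CODE);
""")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT DSR_BRANCH_CODE
    FROM dsr_table
    WHERE ARCHIVE_FLAG = 'O'
    ORDER BY DSR_BRANCH_CODE
""").fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)
```

In MySQL the corresponding check is that EXPLAIN stops showing "Using filesort" (and ideally "Using temporary") once the composite index matches the WHERE-then-ORDER BY column order.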