I have been seeing quite a large variation in response times for LIKE queries against a particular table in my database. Sometimes I will get results within 20
I recently had a similar issue with a table containing 200,000 records that I needed to run repeated LIKE queries against. In my case, the string being searched for was fixed, while the other fields varied. Because of that, I was able to rewrite:
SELECT owner1 FROM parcels
WHERE lower(owner1) LIKE lower('%someones name%');
as
CREATE INDEX ix_parcels ON parcels(position(lower('someones name') in lower(owner1)));
SELECT owner1 FROM parcels
WHERE position(lower('someones name') in lower(owner1)) > 0;
I was delighted when the queries came back fast, and I verified with EXPLAIN ANALYZE that the index was being used:
Bitmap Heap Scan on parcels (cost=7.66..25.59 rows=453 width=32) (actual time=0.006..0.006 rows=0 loops=1)
Recheck Cond: ("position"(lower(owner1), 'someones name'::text) > 0)
-> Bitmap Index Scan on ix_parcels (cost=0.00..7.55 rows=453 width=0) (actual time=0.004..0.004 rows=0 loops=1)
Index Cond: ("position"(lower(owner1), 'someones name'::text) > 0)
Planning time: 0.075 ms
Execution time: 0.025 ms
You could install Wildspeed, a different type of index for PostgreSQL. Wildspeed does work with %word% wildcards, no problem. The downside is the size of the index, which can be large, very large.
Your LIKE queries probably cannot use the indexes you created because:
1) your LIKE criteria begins with a wildcard.
2) you've used a function with your LIKE criteria.
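To illustrate point 2 with the question's own parcels/owner1 table: a plain index on the column cannot serve a predicate wrapped in a function, but an expression index on that same function can, provided the pattern is not defeated by point 1 (a leading wildcard). A sketch:

```sql
-- A plain index on owner1 cannot serve this predicate, because the
-- planner sees the expression lower(owner1), not the bare column:
--   WHERE lower(owner1) LIKE '%someones name%'

-- An expression index on the same function addresses point 2:
CREATE INDEX ix_parcels_owner1_lower
    ON parcels (lower(owner1) text_pattern_ops);

-- This query can now use the index (anchored pattern, no leading %):
SELECT owner1 FROM parcels
WHERE lower(owner1) LIKE 'someones name%';

-- This one still cannot: the leading % (point 1) defeats any btree index.
SELECT owner1 FROM parcels
WHERE lower(owner1) LIKE '%someones name%';
```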
For what it's worth, the Django ORM tends to use UPPER(text) for all LIKE queries to make them case insensitive. Adding an index on UPPER(column::text) has greatly sped up my system, more than anything else.
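As a sketch of that index (the table and column names here are hypothetical; the key point is that the index expression must match the UPPER(column::text) expression the ORM emits):

```sql
-- Expression index matching what the Django ORM generates for
-- case-insensitive lookups such as name__istartswith:
CREATE INDEX ix_users_name_upper
    ON users (UPPER(name::text) text_pattern_ops);

-- Queries of this shape can then use the index:
SELECT * FROM users
WHERE UPPER(name::text) LIKE UPPER('alice%');
```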
As far as leading %, yes that will not use an index. See this blog for a great explanation:
https://use-the-index-luke.com/sql/where-clause/searching-for-ranges/like-performance-tuning
To improve LIKE query performance in PostgreSQL, create an index like this for bigger tables:
CREATE INDEX <indexname> ON <tablename> USING btree (<fieldname> text_pattern_ops)
Whenever you use a clause on a column with functions, e.g. LIKE, ILIKE, upper, lower, etc., Postgres won't take your normal index into consideration. It will do a full scan of the table, going through each row, and will therefore be slow.
The correct way is to create a new index tailored to your query. For example, if I want to match a column case-insensitively and my column is a varchar, I can do it like this:
create index ix_tblname_col_upper on tblname (UPPER(col) varchar_pattern_ops);
Similarly, if your column is a text column, you do something like this:
create index ix_tblname_col_upper on tblname (UPPER(col) text_pattern_ops);
Similarly, you can replace upper with any other function you want, as long as your query applies the same expression.
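To tie it together, a sketch (using the hypothetical tblname/col names from above) of which queries can actually use such an index — the WHERE clause must apply the same UPPER(col) expression, and the pattern must not start with a wildcard:

```sql
-- Can use ix_tblname_col_upper (same expression, anchored pattern):
SELECT * FROM tblname WHERE UPPER(col) LIKE 'SOMETHING%';

-- Cannot use it: the leading % defeats the btree index,
-- so this falls back to a sequential scan.
SELECT * FROM tblname WHERE UPPER(col) LIKE '%SOMETHING%';
```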