Question
My search times are actually quite fast now, but as soon as I start ranking the results I hit a wall: the more hits I get, the slower it gets. For uncommon terms a search takes ~2 ms, for more common ones ~900 ms+. In the example below I have gathered all the structures that occur in my data (simple fields, arrays, nested arrays).
CREATE TABLE book (
id BIGSERIAL NOT NULL,
data JSONB NOT NULL
);
Then I build a function which concatenates the name values of my nested array field 'author':
CREATE OR REPLACE FUNCTION author_function(
    IN data JSONB,
    OUT resultNames TSVECTOR
)
RETURNS TSVECTOR AS $$
DECLARE
    authorRecords   RECORD;
    combinedAuthors JSONB [];
    singleAuthor    JSONB;
BEGIN
    -- Start from an empty tsvector; concatenating onto NULL would otherwise
    -- keep resultNames NULL and silently drop all author names.
    resultNames := to_tsvector('english', '');
    -- The nested array sits under the 'author' key (matching the sample data below).
    FOR authorRecords IN (SELECT value
                          FROM jsonb_array_elements(data #> '{author}'))
    LOOP
        combinedAuthors := combinedAuthors || authorRecords.value;
    END LOOP;
    FOREACH singleAuthor IN ARRAY coalesce(combinedAuthors, '{}')
    LOOP
        resultNames := resultNames ||
                       coalesce(to_tsvector('english', singleAuthor ->> 'name'),
                                to_tsvector('english', ''));
    END LOOP;
END; $$
LANGUAGE plpgsql
IMMUTABLE;
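For comparison, the same aggregation could also be written as a single set-based SQL function; this is only a sketch, the name author_function_sql is made up, and it assumes the nested array lives under the 'author' key as in the sample data below:
CREATE OR REPLACE FUNCTION author_function_sql(data JSONB)
RETURNS TSVECTOR AS $$
    -- Concatenate all author names into one string and build a single tsvector.
    SELECT to_tsvector('english',
                       coalesce(string_agg(author ->> 'name', ' '), ''))
    FROM jsonb_array_elements(data #> '{author}') AS author
$$
LANGUAGE sql
IMMUTABLE;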
And I need a function on which I can build an index for multiple concatenated fields:
CREATE OR REPLACE FUNCTION multi_field_function(
    IN data JSONB
)
RETURNS TSVECTOR AS $$
BEGIN
    RETURN
        coalesce(to_tsvector('english', data ->> 'title'),
                 to_tsvector('english', '')) ||
        coalesce(to_tsvector('english', data ->> 'subtitles'),
                 to_tsvector('english', '')) ||
        coalesce(author_function(data),
                 to_tsvector('english', ''));
END; $$
LANGUAGE plpgsql
IMMUTABLE;
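As a side note, since the ranking step is what hurts, setweight() could be folded into such a function so that ts_rank() favours title matches over subtitle and author matches; a hypothetical weighted variant (multi_field_function_weighted is not part of my schema) might look like this:
CREATE OR REPLACE FUNCTION multi_field_function_weighted(data JSONB)
RETURNS TSVECTOR AS $$
    -- Weight 'A' (title) contributes more to ts_rank() than 'B' (subtitles) or 'C' (authors).
    SELECT setweight(to_tsvector('english', coalesce(data ->> 'title', '')), 'A') ||
           setweight(to_tsvector('english', coalesce(data ->> 'subtitles', '')), 'B') ||
           setweight(coalesce(author_function(data), to_tsvector('english', '')), 'C')
$$
LANGUAGE sql
IMMUTABLE;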
Now I need to build the indices.
CREATE INDEX book_title_idx
ON book USING GIN (to_tsvector('english', book.data ->> 'title'));
CREATE INDEX book_subtitle_idx
ON book USING GIN (to_tsvector('english', book.data ->> 'subtitles'));
CREATE INDEX book_author_idx
ON book USING GIN (author_function(book.data));
CREATE INDEX book_multi_field_idx
ON book USING GIN (multi_field_function(book.data));
Lastly I add some test data:
INSERT INTO book (data)
VALUES (CAST('{"title": "Cats",' ||
'"subtitles": ["Cats", "Dogs"],' ||
'"author": [{"id": 0, "name": "Cats"}, ' ||
' {"id": 1, "name": "Dogs"}]}' AS JSONB));
INSERT INTO book (data)
VALUES (CAST('{"title": "ats",' ||
'"subtitles": ["Cats", "ogs"],' ||
'"author": [{"id": 2, "name": "ats"}, ' ||
' {"id": 3, "name": "ogs"}]}' AS JSONB));
When I query via my multi_field_function I get the results ranked the way I want them.
EXPLAIN ANALYZE
SELECT *
FROM (
    SELECT
        id,
        data,
        ts_rank(query, 'cat:*') AS score
    FROM
        book,
        multi_field_function(data) query
    WHERE multi_field_function(data) @@ to_tsquery('cat:*')
    ORDER BY score DESC) a
WHERE score > 0
ORDER BY score DESC;
On my real data this results in the following query plan. There you can see that only the last step, the ranking, is really slow.
Sort  (cost=7921.72..7927.87 rows=2460 width=143) (actual time=949.644..952.263 rows=16926 loops=1)
  Sort Key: (ts_rank(query.query, '''cat'':*'::tsquery)) DESC
  Sort Method: external merge  Disk: 4376kB
  ->  Nested Loop  (cost=47.31..7783.17 rows=2460 width=143) (actual time=3.750..933.719 rows=16926 loops=1)
        ->  Bitmap Heap Scan on book  (cost=47.06..7690.67 rows=2460 width=1305) (actual time=3.582..11.904 rows=16926 loops=1)
              Recheck Cond: (multi_field_function(data) @@ to_tsquery('cat:*'::text))
              Heap Blocks: exact=3695
              ->  Bitmap Index Scan on book_multi_field_idx  (cost=0.00..46.45 rows=2460 width=0) (actual time=3.128..3.128 rows=16926 loops=1)
                    Index Cond: (multi_field_function(data) @@ to_tsquery('cat:*'::text))
        ->  Function Scan on multi_field_function query  (cost=0.25..0.27 rows=1 width=32) (actual time=0.049..0.049 rows=1 loops=16926)
              Filter: (ts_rank(query, '''cat'':*'::tsquery) > '0'::double precision)
Planning time: 0.163 ms
Execution time: 953.624 ms
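Reading the actual times: the Function Scan alone accounts for roughly 0.049 ms × 16926 loops ≈ 830 ms of the 953 ms total, so almost all of the time goes into re-evaluating multi_field_function(data) and ranking each matching row, not into the index scan itself.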
Is there any way I can keep my JSON structure and still get good, fast search results across multiple fields?
EDIT: I had to adapt Vao Tsun's query because it didn't recognize 'query' from the inner FROM.
EXPLAIN ANALYZE
SELECT
    *,
    ts_rank(query, 'cat:*') AS score
FROM (
    SELECT
        id,
        data
    FROM
        book
    WHERE multi_field_function(data) @@ to_tsquery('cat:*')
) a,
    multi_field_function(a.data) query
ORDER BY score DESC;
Sadly the performance didn't change much:
Sort  (cost=7880.82..7886.97 rows=2460 width=1343) (actual time=863.542..875.035 rows=16840 loops=1)
  Sort Key: (ts_rank(query.query, '''cat'':*'::tsquery)) DESC
  Sort Method: external merge  Disk: 25280kB
  ->  Nested Loop  (cost=43.31..7742.27 rows=2460 width=1343) (actual time=3.570..821.861 rows=16840 loops=1)
        ->  Bitmap Heap Scan on book  (cost=43.06..7686.67 rows=2460 width=1307) (actual time=3.362..12.085 rows=16840 loops=1)
              Recheck Cond: (multi_field_function(data) @@ to_tsquery('cat:*'::text))
              Heap Blocks: exact=1
              ->  Bitmap Index Scan on book_multi_field_idx  (cost=0.00..42.45 rows=2460 width=0) (actual time=2.934..2.934 rows=16840 loops=1)
                    Index Cond: (multi_field_function(data) @@ to_tsquery('cat:*'::text))
        ->  Function Scan on multi_field_function query  (cost=0.25..0.26 rows=1 width=32) (actual time=0.047..0.047 rows=1 loops=16840)
Planning time: 0.090 ms
Execution time: 879.736 ms
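Both plans show the same pattern: the Function Scan runs once per matching row (here roughly 0.047 ms × 16840 loops ≈ 790 ms), i.e. multi_field_function(data) is re-computed for every row that has to be ranked. One direction I have not tried yet would be to keep the combined tsvector in its own trigger-maintained column so that ranking can read it directly; this is only a rough sketch, and the column, trigger and function names below are placeholders:
ALTER TABLE book ADD COLUMN search_vector TSVECTOR;
UPDATE book SET search_vector = multi_field_function(data);

CREATE OR REPLACE FUNCTION book_search_vector_refresh()
RETURNS TRIGGER AS $$
BEGIN
    -- Keep the precomputed tsvector in sync with the JSONB payload.
    NEW.search_vector := multi_field_function(NEW.data);
    RETURN NEW;
END; $$
LANGUAGE plpgsql;

CREATE TRIGGER book_search_vector_trg
    BEFORE INSERT OR UPDATE OF data ON book
    FOR EACH ROW
    EXECUTE PROCEDURE book_search_vector_refresh();

CREATE INDEX book_search_vector_idx ON book USING GIN (search_vector);
With that in place the query could filter with search_vector @@ to_tsquery('cat:*') and rank with ts_rank(search_vector, to_tsquery('cat:*')), both against the stored column.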
Source: https://stackoverflow.com/questions/41042639/improve-ranking-times-on-multiple-jsonb-fields-search-in-postgresql