I have the following table:
CREATE TABLE mytable (
   id       serial PRIMARY KEY
 , employee text UNIQUE NOT NULL
 , data     jsonb
);
With test data like the following:
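(The actual rows are not included in this excerpt; the row below is a guess at the jsonb layout, inferred from the queries and the result further down.)

INSERT INTO mytable (employee, data)
VALUES ('Jim'
      , '{"sales_tv":    [{"yr": "2012", "value": 40}, {"yr": "2013", "value": 53}]
        , "sales_radio": [{"yr": "2012", "value": 76}, {"yr": "2013", "value": 68}]}');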
Your first query can be solved like this (shooting from the hip, no access to PG 9.4 here):
SELECT employee, json_object_agg(key, sales)::jsonb AS sales_
FROM  (
   SELECT t.employee, j.key, sum((e->>'value')::int) AS sales
   FROM   mytable t,
          jsonb_each(t.data) j,
          jsonb_array_elements(j.value) e
   WHERE  t.employee = 'Jim'
   AND    j.key LIKE 'sales_%'
   AND    e->>'yr' = '2012'
   GROUP  BY t.employee, j.key
   ) sub
GROUP BY employee;
The trick here is to use LATERAL joins (the comma-separated items in the FROM list are implicitly LATERAL) to "peel away" the outer layers of the jsonb object and get at the data deeper down. This query assumes that Jim may have sales in multiple locations, hence the sum().
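To see what the "peeling" produces before aggregation, you can run the un-aggregated inner part by itself. (Sketch only, reusing the aliases from the query above.)

SELECT t.employee, j.key, e->>'yr' AS yr, (e->>'value')::int AS value
FROM   mytable t,
       jsonb_each(t.data) j,
       jsonb_array_elements(j.value) e
WHERE  t.employee = 'Jim';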
(Working on your query 2)
You treat the result of the first join as JSON, not as a text string, so use jsonb_each() instead of jsonb_each_text():
SELECT t.employee, json_object_agg(a.k, d.value) AS sales
FROM mytable t
JOIN LATERAL jsonb_each(t.data) a(k,v) ON a.k LIKE 'sales_%'
JOIN LATERAL jsonb_to_recordset(a.v) d(yr text, value float) ON d.yr = '2012'
WHERE t.employee = 'Jim' -- works because employee is unique
GROUP BY 1;
GROUP BY 1 is shorthand for GROUP BY t.employee.
Result:
 employee | sales
----------+-------------------------------------------
 Jim      | '{ "sales_tv" : 40, "sales_radio" : 76 }'
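As an aside on the jsonb_each() vs. jsonb_each_text() point above, a standalone check (not part of the query, made-up literal) shows the difference: the former keeps the value as jsonb, which jsonb_to_recordset() can unnest further, while the latter returns text.

SELECT key, value, pg_typeof(value) FROM jsonb_each('{"sales_tv": [{"yr": "2012", "value": 40}]}'::jsonb);
SELECT key, value, pg_typeof(value) FROM jsonb_each_text('{"sales_tv": [{"yr": "2012", "value": 40}]}'::jsonb);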
I also untangled and simplified your query.
json_object_agg() is instrumental in aggregating name/value pairs as a JSON object. Optionally cast to jsonb if you need that, or use jsonb_object_agg() in Postgres 9.5 or later (quick sketch after these notes).
Using explicit JOIN syntax to attach conditions in their most obvious place.
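A quick sketch of the 9.5+ variant mentioned above (literal values made up, not from the question):

SELECT jsonb_object_agg(k, v) AS sales       -- returns jsonb directly, no cast needed
FROM  (VALUES ('sales_tv', 40), ('sales_radio', 76)) x(k, v);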
The same without explicit JOIN syntax:
SELECT t.employee, json_object_agg(a.k, d.value) AS sales
FROM   mytable t
     , jsonb_each(t.data) a(k,v)
     , jsonb_to_recordset(a.v) d(yr text, value float)
WHERE  t.employee = 'Jim'
AND    a.k LIKE 'sales_%'
AND    d.yr = '2012'
GROUP  BY 1;
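Not in the original answer, but since a comma in the FROM list is just an implicit CROSS JOIN LATERAL here, this spelling is equivalent if you want the lateral joins explicit while keeping all conditions in the WHERE clause:

SELECT t.employee, json_object_agg(a.k, d.value) AS sales
FROM   mytable t
CROSS  JOIN LATERAL jsonb_each(t.data) a(k,v)
CROSS  JOIN LATERAL jsonb_to_recordset(a.v) d(yr text, value float)
WHERE  t.employee = 'Jim'
AND    a.k LIKE 'sales_%'
AND    d.yr = '2012'
GROUP  BY 1;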