I have a table in Postgres 9.6.3:
CREATE TABLE public."Records"
(
  "Id" uuid NOT NULL,
  "Json" jsonb,
  CONSTRAINT "PK_Records" PRIMARY KEY ("Id")
)
Inside my "Json" column i store arrays like so:
[
  {"a":"b0","c":0,"z":true},
  {"a":"b1","c":1,"z":false},
  {"a":"b2","c":2,"z":true}
]
Each array can hold some 10 million elements, and the table can hold some 5 million records.
I want to read the JSON back out, paged over the array elements, e.g. skip 1 element and return the next 2.
I can do it like so:
SELECT string_agg(txt, ',') AS x
FROM (
  SELECT jsonb_array_elements_text("Json") AS txt
  FROM "Records"
  WHERE "Id" = 'de70aadc-70e8-4c77-bd4b-af75ed36897e' -- some id here
  LIMIT 50 OFFSET 5000 -- paging parameters
) s;
However, the query takes almost a second (between 780 and 900 ms) on my dev laptop, which has quite decent hardware (a 2017 MacBook Pro). Note: the timing is for the data sizes given above; the three-element sample data obviously returns faster.
Adding a GIN index like so: CREATE INDEX records_gin ON "Records" USING gin ("Json"); did nothing for the query's performance (I suppose because I am not filtering by the contents of the array).
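To see where the time actually goes, the query can be wrapped in EXPLAIN (ANALYZE, BUFFERS); if the cost is in detoasting and expanding the whole "Json" value to rows, an index on its contents cannot help:

EXPLAIN (ANALYZE, BUFFERS)
SELECT string_agg(txt, ',') AS x
FROM (
  SELECT jsonb_array_elements_text("Json") AS txt
  FROM "Records"
  WHERE "Id" = 'de70aadc-70e8-4c77-bd4b-af75ed36897e'
  LIMIT 50 OFFSET 5000
) s;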
Is there any way to make this work faster?

An alternative is to store the elements as a native Postgres array of jsonb values, i.e. a jsonb[] column. In that case you can use the native array slice syntax, some_col[2:10], to select a part of the array. The stored value then looks like {'JSON HERE','JSON HERE'}.
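A minimal sketch of that layout, assuming a hypothetical table "RecordsArr" with a jsonb[] column "JsonArr" (both names invented for illustration). Array slices are 1-based and inclusive on both ends, so elements 5001 through 5050 correspond to OFFSET 5000 LIMIT 50:

CREATE TABLE public."RecordsArr"
(
  "Id" uuid NOT NULL,
  "JsonArr" jsonb[], -- one jsonb value per element instead of one big jsonb array
  CONSTRAINT "PK_RecordsArr" PRIMARY KEY ("Id")
);

-- Hypothetical one-off migration from the existing layout:
INSERT INTO "RecordsArr" ("Id", "JsonArr")
SELECT "Id", ARRAY(SELECT jsonb_array_elements("Json"))
FROM "Records";

-- Page with a slice, then unnest back to one row per element:
SELECT unnest("JsonArr"[5001:5050]) AS elem
FROM "RecordsArr"
WHERE "Id" = 'de70aadc-70e8-4c77-bd4b-af75ed36897e';

Whether the slice actually avoids detoasting the whole array for arrays this large is worth measuring; the syntax at least pushes the paging into the array access itself rather than expanding every element to a row first.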