Postgres not using a different query plan for higher offsets
Posted: 2017-01-25 12:19:12

I have this Postgres query:
explain SELECT "facilities".* FROM "facilities" INNER JOIN
resource_indices ON resource_indices.resource_id = facilities.uuid WHERE
(client_id IS NULL OR (client_tag=NULL AND client_id=7))
AND (ARRAY['country:india']::varchar[] && resource_indices.tags)
AND "facilities"."is_listed" = 't'
ORDER BY resource_indices.name LIMIT 11 OFFSET 100;
Notice the offset. When the offset is below 200, it uses the index and performs well. The query plan is as follows:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=23416.57..24704.45 rows=11 width=1457) (actual time=41.951..43.035 rows=11 loops=1)
   ->  Nested Loop  (cost=0.71..213202.15 rows=1821 width=1457) (actual time=2.107..43.007 rows=211 loops=1)
         ->  Index Scan using index_resource_indices_on_name on resource_indices  (cost=0.42..190226.95 rows=12460 width=28) (actual time=2.096..40.790 rows=408 loops=1)
               Filter: ('country:india'::character varying[] && tags)
               Rows Removed by Filter: 4495
         ->  Index Scan using index_facilities_on_uuid on facilities  (cost=0.29..1.83 rows=1 width=1445) (actual time=0.005..0.005 rows=1 loops=408)
               Index Cond: (uuid = resource_indices.resource_id)
               Filter: ((client_id IS NULL) AND is_listed)
 Planning time: 1.259 ms
 Execution time: 43.121 ms
(10 rows)
Raising the offset to around four hundred makes it switch to a hash join, with much worse performance, and performance keeps degrading as the offset grows further:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=34508.62..34508.65 rows=11 width=1457) (actual time=136.288..136.291 rows=11 loops=1)
   ->  Sort  (cost=34507.62..34512.18 rows=1821 width=1457) (actual time=136.224..136.268 rows=411 loops=1)
         Sort Key: resource_indices.name
         Sort Method: top-N heapsort  Memory: 638kB
         ->  Hash Join  (cost=29104.96..34419.46 rows=1821 width=1457) (actual time=23.885..95.099 rows=6518 loops=1)
               Hash Cond: (facilities.uuid = resource_indices.resource_id)
               ->  Seq Scan on facilities  (cost=0.00..4958.39 rows=33790 width=1445) (actual time=0.010..48.732 rows=33711 loops=1)
                     Filter: ((client_id IS NULL) AND is_listed)
                     Rows Removed by Filter: 848
               ->  Hash  (cost=28949.21..28949.21 rows=12460 width=28) (actual time=23.311..23.311 rows=12601 loops=1)
                     Buckets: 2048  Batches: 1  Memory Usage: 814kB
                     ->  Bitmap Heap Scan on resource_indices  (cost=1048.56..28949.21 rows=12460 width=28) (actual time=9.369..18.710 rows=12601 loops=1)
                           Recheck Cond: ('country:india'::character varying[] && tags)
                           Heap Blocks: exact=7334
                           ->  Bitmap Index Scan on index_resource_indices_on_tags  (cost=0.00..1045.45 rows=12460 width=0) (actual time=7.680..7.680 rows=13889 loops=1)
                                 Index Cond: ('country:india'::character varying[] && tags)
 Planning time: 1.408 ms
 Execution time: 136.465 ms
(18 rows)
How can I fix this? Thanks.
【参考方案1】:这是不可避免的,因为没有其他方法可以实现LIMIT 10 OFFSET 10000
,只能获取前 10010 行并丢弃除最后 10 行之外的所有行。随着偏移量的增加,这势必会越来越糟糕。
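To see this effect in isolation, here is a minimal self-contained demo. It does not use the poster's schema (the "table" is just a generated series), so treat it as an illustration of the principle rather than a reproduction of the problem:

-- Postgres must produce offset + limit rows in sorted order and then throw
-- the first "offset" of them away, so the work grows with the offset.
EXPLAIN ANALYZE
SELECT n
FROM generate_series(1, 100000) AS g(n)
ORDER BY n
LIMIT 10 OFFSET 10000;
-- Re-run with OFFSET 90000: the query still returns only 10 rows, but the
-- actual time goes up because far more rows are fetched and then discarded.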
PostgreSQL switches to a different plan because it has to retrieve more result rows: a "fast start" plan, which retrieves the first few rows quickly and typically uses a nested loop join, stops being the cheapest option once more result rows are needed.
OFFSET is evil and you should avoid it in most cases. Read what Markus Winand has to say about this topic, in particular how to paginate a result set without OFFSET.
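Applied to the query in the question, a keyset ("seek") pagination sketch could look like the following. This is only a sketch: it assumes the client remembers the last row it displayed, :last_name and :last_id are hypothetical bind parameters, and the resource_id tiebreaker (plus a matching index on (name, resource_id)) is an assumption, not something from the original schema:

-- Instead of skipping N rows with OFFSET, continue right after the last row
-- already shown. The ORDER BY must be deterministic, hence the tiebreaker.
SELECT facilities.*
FROM facilities
INNER JOIN resource_indices ON resource_indices.resource_id = facilities.uuid
WHERE (client_id IS NULL OR (client_tag = NULL AND client_id = 7))  -- as in the question; note "= NULL" is never true, "IS NULL" was probably intended
  AND ARRAY['country:india']::varchar[] && resource_indices.tags
  AND facilities.is_listed
  AND (resource_indices.name, resource_indices.resource_id) > (:last_name, :last_id)
ORDER BY resource_indices.name, resource_indices.resource_id
LIMIT 11;

Each page then starts where the previous one ended, so the cost per page stays roughly constant no matter how deep the user paginates.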