Postgresql inner join (especially self join) on many columns optimization
Posted 2017-06-02 12:27:02

Question: I am running PostgreSQL 9.6.2 and have a table with 7 columns and about 2,900,000 rows. The table is temporary; it is part of a subject deduplication process whose purpose is to assign a new id (s_id_new) to identical subjects according to different rule sets. In total I perform about 10-12 inner joins, each on a similar but slightly different subset of the data, with different WHERE conditions and different join columns.
Right now the query is so inefficient that it never finishes (I had to cancel it after 2 hours).

For optimization purposes I created a subset of the data (50,000 rows).
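(A minimal sketch of how such a subset could be built; the question does not name the original 2.9M-row table, so "subjects" below is an assumption. Only the subset table and its index, shown in \d next, come from the question:)

-- Hypothetical source table name "subjects"
CREATE TABLE public.subject_subset AS
SELECT s_id, surname_clean, name_clean, fullname_clean,
       id1, id2, id3, s_id_new
FROM subjects
LIMIT 50000;

CREATE INDEX subject_subset_s_id_new_idx
    ON public.subject_subset (s_id_new);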
\d subject_subset;
Column | Type | Modifiers
----------------+------------------------+-----------
s_id | text |
surname_clean | character varying(20) |
name_clean | character varying(20) |
fullname_clean | character varying(100) |
id1 | character varying(20) |
id2 | character varying(20) |
id3 | character varying(20) |
s_id_new | character varying(20) |
Indexes:
"subject_subset_s_id_new_idx" btree (s_id_new)
The query I am trying to optimize:
select s_id_new, max(I_s_id) as s_id_deduplicated
from (select a.*, b.s_id_new as I_s_id
      from public.subject_subset a
      inner join public.subject_subset b
              on a.surname_clean = b.surname_clean
             and a.id2 = b.id2
      where a.id1 is null
        and a.id2 is not null
        and a.surname_clean is not null) h
group by s_id_new;
The result of the EXPLAIN ANALYZE:
https://explain.depesz.com/s/7knH
"GroupAggregate (cost=5616.65..5620.39 rows=142 width=90) (actual time=32542.127..46938.858 rows=2889 loops=1)"
" Group Key: a.s_id_new"
" -> Sort (cost=5616.65..5617.42 rows=310 width=116) (actual time=32542.116..43194.626 rows=18356220 loops=1)"
" Sort Key: a.s_id_new"
" Sort Method: external merge Disk: 531760kB"
" -> Hash Join (cost=1114.72..5603.82 rows=310 width=116) (actual time=13.159..4892.011 rows=18356220 loops=1)"
" Hash Cond: (((b.surname_clean)::text = (a.surname_clean)::text) AND ((b.id2)::text = (a.id2)::text))"
" -> Seq Scan on subject_subset b (cost=0.00..1111.00 rows=50000 width=174) (actual time=0.011..10.775 rows=50000 loops=1)"
" -> Hash (cost=1111.00..1111.00 rows=248 width=174) (actual time=13.137..13.137 rows=15044 loops=1)"
" Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1151kB"
" -> Seq Scan on subject_subset a (cost=0.00..1111.00 rows=248 width=174) (actual time=0.005..9.330 rows=15044 loops=1)"
" Filter: ((id1 IS NULL) AND (id2 IS NOT NULL) AND (surname_clean IS NOT NULL))"
" Rows Removed by Filter: 34956"
"Planning time: 0.236 ms"
"Execution time: 47013.839 ms"
As far as I can tell, the problem is the SORT of the subquery: the join produces 18,356,220 rows, and sorting them spills to disk (external merge, 531,760 kB), but I don't know how to optimize it.
The only thing that brought a slight performance improvement was assigning new integer IDs with dense_rank, but it is not enough.
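(The dense_rank code is not shown in the question; a minimal sketch of what such a remapping could look like, assuming a new working table subject_subset_int:)

-- Hypothetical: materialize a copy keyed by a compact integer id,
-- so later sorts and joins compare integers instead of varchar values
CREATE TABLE subject_subset_int AS
SELECT dense_rank() OVER (ORDER BY s_id_new) AS s_id_int,
       surname_clean, name_clean, fullname_clean, id1, id2, id3
FROM public.subject_subset;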
Comments:

It would help if you explained in words what this particular query is trying to achieve. Otherwise we have to guess the task from the query. – The query is meant to deduplicate subjects (companies and natural persons) by assigning them the same ID. Two John Smiths with the same document ID have different IDs (s_id) in the database -> the code assigns them a new ID = the maximum s_id they currently have. Sometimes auxiliary data (addresses, phones, etc.) is used for the deduplication, but the idea stays the same.

Answer 1:

The big sort is killing you.
I have three suggestions:
Run ANALYZE subject_subset to collect table statistics. Statistics are not gathered automatically for temporary tables, and in your case the estimates are completely off. Maybe that alone is enough to make things better!
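(In SQL that is simply the following; re-run EXPLAIN ANALYZE afterwards to check whether the row estimates improve:)

-- Temporary tables are not analyzed automatically, so gather
-- statistics by hand before running the problem query
ANALYZE subject_subset;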
Try creating an index on (id2, surname_clean, s_id_new); that would support a nested loop join (I don't know whether that ends up faster, though).
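(A sketch of that index, left unnamed so PostgreSQL generates a name:)

CREATE INDEX ON subject_subset (id2, surname_clean, s_id_new);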
With such an index you can then try a lateral join:
SELECT a.s_id_new,
       max(b.i_s_id) AS s_id_deduplicated
FROM subject_subset a
   CROSS JOIN LATERAL (SELECT s_id_new AS i_s_id
                       FROM subject_subset
                       WHERE a.surname_clean = surname_clean
                         AND a.id2 = id2
                       ORDER BY s_id_new DESC
                       LIMIT 1
                      ) b
GROUP BY a.s_id_new;
The nested loop join will be expensive, but the sort should be fast.
Alternatively, stick with the hash join, but reduce the number of rows:
SELECT a.s_id_new,
       max(b.i_s_id) AS s_id_deduplicated
FROM subject_subset a
   JOIN (SELECT surname_clean, id2,
                max(s_id_new) AS i_s_id
         FROM subject_subset
         GROUP BY surname_clean, id2
        ) b
      USING (surname_clean, id2)
WHERE a.id1 IS NULL
  AND a.id2 IS NOT NULL
  AND a.surname_clean IS NOT NULL
GROUP BY a.s_id_new;
Maybe an index on (surname_clean, id2) would help here, but I'm not sure.
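(A corresponding sketch, again unnamed:)

CREATE INDEX ON subject_subset (surname_clean, id2);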