Delete query on a very large table running extremely slowly (SQL)

Posted: 2020-10-06 19:30:06

I have this SQL query:
delete from scans
where scandatetime>(current_timestamp - interval '21 days') and
scandatetime <> (select min(tt.scandatetime) from scans tt where tt.imb = scans.imb) and
scandatetime <> (select max(tt.scandatetime) from scans tt where tt.imb = scans.imb)
;
which I use to delete records from the following table:
|imb |scandatetime |status |scanfacilityzip|
+-----------+-------------------+---------+---------------+
|isdijh23452|2020-01-01 13:45:12|Intake |12345 |
|isdijh23452|2020-01-01 13:45:12|Intake |12345 |
|isdijh23452|2020-01-01 19:30:32|Received |12345 |
|isdijh23452|2020-01-02 04:50:22|Confirmed|12345 |
|isdijh23452|2020-01-03 19:32:18|Processed|45867 |
|awgjnh09864|2020-01-01 10:24:16|Intake |84676 |
|awgjnh09864|2020-01-01 19:30:32|Received |84676 |
|awgjnh09864|2020-01-01 19:30:32|Received |84676 |
|awgjnh09864|2020-01-02 02:15:52|Processed|84676 |
so that only two records are kept per IMB: the one with the earliest scandatetime and the one with the latest. I have also restricted it so it only does this for records less than three weeks old. The resulting table should look like this:
|imb |scandatetime |status |scanfacilityzip|
+-----------+-------------------+---------+---------------+
|isdijh23452|2020-01-01 13:45:12|Intake |12345 |
|isdijh23452|2020-01-03 19:32:18|Processed|45867 |
|awgjnh09864|2020-01-01 10:24:16|Intake |84676 |
|awgjnh09864|2020-01-02 02:15:52|Processed|84676 |
This table has several indexes and tens of millions of rows, so the query usually takes a very long time to run. How can I speed it up?
EXPLAIN output:
Delete on scans (cost=0.57..115934571.45 rows=10015402 width=6)
-> Index Scan using scans_staging_scandatetime_idx on scans (cost=0.57..115934571.45 rows=10015402 width=6)
Index Cond: (scandatetime > (CURRENT_TIMESTAMP - '21 days'::interval))
Filter: ((scandatetime <> (SubPlan 2)) AND (scandatetime <> (SubPlan 4)))
SubPlan 2
-> Result (cost=3.91..3.92 rows=1 width=8)
InitPlan 1 (returns $1)
-> Limit (cost=0.70..3.91 rows=1 width=8)
-> Index Only Scan using scans_staging_imb_scandatetime_idx on scans tt (cost=0.70..16.79 rows=5 width=8)
Index Cond: ((imb = scans.imb) AND (scandatetime IS NOT NULL))
SubPlan 4
-> Result (cost=3.91..3.92 rows=1 width=8)
InitPlan 3 (returns $3)
-> Limit (cost=0.70..3.91 rows=1 width=8)
-> Index Only Scan Backward using scans_staging_imb_scandatetime_idx on scans tt_1 (cost=0.70..16.79 rows=5 width=8)
Index Cond: ((imb = scans.imb) AND (scandatetime IS NOT NULL))
Table DDL:
-- Table Definition ----------------------------------------------
CREATE TABLE scans (
imb text,
scandatetime timestamp without time zone,
status text,
scanfacilityzip text
);
-- Indices -------------------------------------------------------
CREATE INDEX scans_staging_scandatetime_idx ON scans(scandatetime timestamp_ops);
CREATE INDEX scans_staging_imb_idx ON scans(imb text_ops);
CREATE INDEX scans_staging_status_idx ON scans(status text_ops);
CREATE INDEX scans_staging_scandatetime_status_idx ON scans(scandatetime timestamp_ops,status text_ops);
CREATE INDEX scans_staging_imb_scandatetime_idx ON scans(imb text_ops,scandatetime timestamp_ops);
Edit: here is the EXPLAIN ANALYZE output (note: I changed the interval to 1 day so it would run faster):
Delete on scans (cost=0.58..3325615.74 rows=278811 width=6) (actual time=831562.877..831562.877 rows=0 loops=1)
-> Index Scan using scans_staging_scandatetime_idx on scans (cost=0.58..3325615.74 rows=278811 width=6) (actual time=831562.875..831562.875 rows=0 loops=1)
Index Cond: (scandatetime > (CURRENT_TIMESTAMP - '1 day'::interval))
Filter: ((scandatetime <> (SubPlan 2)) AND (scandatetime <> (SubPlan 4)))
Rows Removed by Filter: 277756
SubPlan 2
-> Result (cost=3.92..3.93 rows=1 width=8) (actual time=1.675..1.675 rows=1 loops=277756)
InitPlan 1 (returns $1)
-> Limit (cost=0.70..3.92 rows=1 width=8) (actual time=1.673..1.674 rows=1 loops=277756)
-> Index Only Scan using scans_staging_imb_scandatetime_idx on scans tt (cost=0.70..16.80 rows=5 width=8) (actual time=1.672..1.672 rows=1 loops=277756)
Index Cond: ((imb = scans.imb) AND (scandatetime IS NOT NULL))
Heap Fetches: 277761
SubPlan 4
-> Result (cost=3.92..3.93 rows=1 width=8) (actual time=0.086..0.086 rows=1 loops=164210)
InitPlan 3 (returns $3)
-> Limit (cost=0.70..3.92 rows=1 width=8) (actual time=0.084..0.085 rows=1 loops=164210)
-> Index Only Scan Backward using scans_staging_imb_scandatetime_idx on scans tt_1 (cost=0.70..16.80 rows=5 width=8) (actual time=0.083..0.083 rows=1 loops=164210)
Index Cond: ((imb = scans.imb) AND (scandatetime IS NOT NULL))
Heap Fetches: 164210
Planning Time: 11.360 ms
Execution Time: 831562.956 ms
Edit: results of EXPLAIN (ANALYZE, BUFFERS):
Delete on scans (cost=0.57..1274693.83 rows=103787 width=6) (actual time=19309.026..19309.027 rows=0 loops=1)
Buffers: shared hit=743430 read=46033
I/O Timings: read=15917.966
-> Index Scan using scans_staging_scandatetime_idx on scans (cost=0.57..1274693.83 rows=103787 width=6) (actual time=19309.025..19309.025 rows=0 loops=1)
Index Cond: (scandatetime > (CURRENT_TIMESTAMP - '1 day'::interval))
Filter: ((scandatetime <> (SubPlan 2)) AND (scandatetime <> (SubPlan 4)))
Rows Removed by Filter: 74564
Buffers: shared hit=743430 read=46033
I/O Timings: read=15917.966
SubPlan 2
-> Result (cost=4.05..4.06 rows=1 width=8) (actual time=0.232..0.233 rows=1 loops=74564)
Buffers: shared hit=458108 read=27849
I/O Timings: read=15114.478
InitPlan 1 (returns $1)
-> Limit (cost=0.70..4.05 rows=1 width=8) (actual time=0.231..0.231 rows=1 loops=74564)
Buffers: shared hit=458108 read=27849
I/O Timings: read=15114.478
-> Index Only Scan using scans_staging_imb_scandatetime_idx on scans tt (cost=0.70..20.81 rows=6 width=8) (actual time=0.230..0.230 rows=1 loops=74564)
Index Cond: ((imb = scans.imb) AND (scandatetime IS NOT NULL))
Heap Fetches: 74583
Buffers: shared hit=458108 read=27849
I/O Timings: read=15114.478
SubPlan 4
-> Result (cost=4.05..4.06 rows=1 width=8) (actual time=0.042..0.042 rows=1 loops=34497)
Buffers: shared hit=228637 read=701
I/O Timings: read=507.724
InitPlan 3 (returns $3)
-> Limit (cost=0.70..4.05 rows=1 width=8) (actual time=0.041..0.041 rows=1 loops=34497)
Buffers: shared hit=228637 read=701
I/O Timings: read=507.724
-> Index Only Scan Backward using scans_staging_imb_scandatetime_idx on scans tt_1 (cost=0.70..20.81 rows=6 width=8) (actual time=0.040..0.040 rows=1 loops=34497)
Index Cond: ((imb = scans.imb) AND (scandatetime IS NOT NULL))
Heap Fetches: 34497
Buffers: shared hit=228637 read=701
I/O Timings: read=507.724
Planning Time: 5.350 ms
Execution Time: 19313.242 ms
Comments on the question:

What counts as "very slow"... "takes a very long time to run"? Please qualify that with an actual numeric comparison.
This query would take 30,000 seconds to run, if not more.
@a_horse_with_no_name I updated the question with the plain EXPLAIN; I will add the ANALYZE output once it finishes running.
So for each imb you want to keep 1. the most recent row, 2. all rows older than 21 days, and 3. the oldest row as well if it falls within the last 21 days. Is that right?
Could you change "21 days" to "1 day" so we can get an EXPLAIN (ANALYZE, BUFFERS) more quickly?
Answer 1:

Without pre-aggregation (and avoiding a CTE):
DELETE FROM scans del
WHERE del.scandatetime > (current_timestamp - interval '21 days')
AND EXISTS (SELECT *
FROM scans x
WHERE x.imb = del.imb
AND x.scandatetime < del.scandatetime
)
AND EXISTS (SELECT *
FROM scans x
WHERE x.imb = del.imb
AND x.scandatetime > del.scandatetime
)
;
The idea is: a row is only deleted when there is (at least) one record before it and (at least) one record after it, with the same imb. That is never the case for the first and last records of an imb, only for the ones in between.
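One way to sanity-check the effect before actually deleting (a sketch, not part of the original answer) is to run the same predicates as a SELECT and count the rows that would be removed:

-- Sketch: count the rows the DELETE above would remove, using identical predicates.
SELECT count(*)
FROM scans del
WHERE del.scandatetime > (current_timestamp - interval '21 days')
  AND EXISTS (SELECT 1
              FROM scans x
              WHERE x.imb = del.imb
                AND x.scandatetime < del.scandatetime)
  AND EXISTS (SELECT 1
              FROM scans x
              WHERE x.imb = del.imb
                AND x.scandatetime > del.scandatetime);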
Comments:

Can you explain that a bit more? It doesn't look to me like this keeps the minimum and maximum timestamps. What happens if there are 4 scan events earlier than del.scandatetime? Wouldn't it delete them all as written?
I'm choosing this as the answer because it also works on Redshift. Redshift currently doesn't support a WITH clause in a delete.
The CTE version can also be slow, for example in Postgres before Pg 12 (?).

Answer 2:
Consider running the aggregation once and incorporating it into the EXISTS clause.
with agg as (
select imb
, min(scandatetime) as min_dt
, max(scandatetime) as max_dt
from scans
group by imb
)
delete from scans s
where s.scandatetime > (current_timestamp - interval '21 days')
and exists
(select 1
from agg
where s.imb = agg.imb
and (s.scandatetime > agg.min_dt and
s.scandatetime < agg.max_dt)
);
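If the CTE itself becomes the bottleneck (Postgres versions before 12 treat CTEs as optimization fences, and Redshift does not support a WITH clause in a delete, as noted in the comments under Answer 1), a variant of the same idea is to materialize the per-imb bounds once into a temporary table and join against it. A rough sketch (imb_bounds is just a placeholder name):

-- Materialize the min/max scandatetime per imb once.
create temporary table imb_bounds as
select imb,
       min(scandatetime) as min_dt,
       max(scandatetime) as max_dt
from scans
group by imb;

-- PostgreSQL only; Redshift has no CREATE INDEX, so skip this line there.
create index on imb_bounds (imb);

-- Delete the rows strictly between each imb's first and last scan.
delete from scans s
using imb_bounds b
where s.imb = b.imb
  and s.scandatetime > (current_timestamp - interval '21 days')
  and s.scandatetime > b.min_dt
  and s.scandatetime < b.max_dt;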
Answer 3:

In the comments on the question you said that the table does not contain rows older than 21 days. The condition scandatetime > (current_timestamp - interval '21 days') is therefore redundant. It also means you are deleting almost every row in the table, keeping only one or two rows per imb.
A DELETE over that many rows (you mention tens of millions) can be extremely slow. Not only do the table rows have to be removed one by one, all the indexes must be updated as well.

That being said, you are better off copying the few rows you want to keep into a temporary table, truncating the original table, and copying the rows back. TRUNCATE doesn't look at individual rows the way DELETE does; it simply empties the whole table and its indexes in one go and reclaims the disk space immediately.
The script would look something like this:
create table temp_desired_scans as
select *
from scans s
where (imb, scandatetime) in
(
select imb, min(scandatetime) from scans group by imb
union all
select imb, max(scandatetime) from scans group by imb
);
truncate table scans;
insert into scans
select * from temp_desired_scans;
drop table temp_desired_scans;
(Another common option for this kind of bulk delete is to keep the temporary table, drop the original table, rename the temporary table to the original table's name, and then create all constraints and indexes on this new table.)
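A rough sketch of that rename variant, assuming the table and index definitions shown above (scans_new is a placeholder name, and any other constraints, triggers, or grants on the real table would also need to be recreated):

-- Copy only the rows to keep into a new table.
create table scans_new as
select *
from scans s
where (imb, scandatetime) in
(
select imb, min(scandatetime) from scans group by imb
union all
select imb, max(scandatetime) from scans group by imb
);

drop table scans;
alter table scans_new rename to scans;

-- Recreate the indexes from the original DDL on the new table.
create index scans_staging_scandatetime_idx on scans(scandatetime);
create index scans_staging_imb_idx on scans(imb);
create index scans_staging_status_idx on scans(status);
create index scans_staging_scandatetime_status_idx on scans(scandatetime, status);
create index scans_staging_imb_scandatetime_idx on scans(imb, scandatetime);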
Comments:
I misspoke in my comment; what I meant is that, essentially, after 21 days the data is considered final. We need to keep it because the IMBs represent actual scan events, but after 21 days they refer to new scan events completely unrelated to the old ones. The data in the 21-day window is dynamic, while anything older is static. We still need it.
OK. Are you aware that your statements do not keep the first and last entries within the last 21 days? If an imb's first entry is older, that entry is kept (you would keep it anyway), but all rows within the last 21 days are deleted except the most recent one.
If you mostly work with the last 21 days of the table, you may want to consider partitioning the table by date.

Answer 4:

Given that the select is the problem, I'll focus only on the select; you can always delete from it. If it helps, you can try this form:
select * from
(select *,
row_number() over (partition by imb order by scandatetime asc) ar,
row_number() over (partition by imb order by scandatetime desc) dr
from scans
)s
where ar>1 and dr>1 and scandatetime>(current_timestamp - interval '21 days')
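If that select returns what you expect, one way to turn it into a delete in PostgreSQL is to carry the row's ctid through the subquery. This is only a sketch: ctid is PostgreSQL-specific, so Redshift or another engine would need a real key column instead.

-- Sketch: delete every row that is neither the first nor the last scan of its imb,
-- restricted to the last 21 days, identified by its physical row id (ctid).
delete from scans
where ctid in (
  select ctid from
  (select ctid, scandatetime,
          row_number() over (partition by imb order by scandatetime asc) ar,
          row_number() over (partition by imb order by scandatetime desc) dr
   from scans
  ) s
  where ar>1 and dr>1 and scandatetime>(current_timestamp - interval '21 days')
);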