Redshift GROUP BY time interval

Posted: 2017-02-06 08:09:23

【Question】:

Currently, I have the following raw data in Redshift.

timestamp                   ,lead
==================================
"2008-04-09 10:02:01.000000",true
"2008-04-09 10:03:05.000000",true
"2008-04-09 10:31:07.000000",true
"2008-04-09 11:00:05.000000",false
...

I want to produce aggregated data at 30-minute intervals. The result I am hoping for is:

timestamp                   ,count
==================================
"2008-04-09 10:00:00.000000",2
"2008-04-09 10:30:00.000000",1
"2008-04-09 11:00:00.000000",0
...
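For clarity, the intended semantics (floor each row's timestamp to its enclosing 30-minute bucket, then count the rows where lead is true) can be sketched in Python over the sample rows above; the names here are illustrative, not part of the question:

```python
from datetime import datetime

rows = [
    ("2008-04-09 10:02:01", True),
    ("2008-04-09 10:03:05", True),
    ("2008-04-09 10:31:07", True),
    ("2008-04-09 11:00:05", False),
]

def bucket(ts):
    # Floor the timestamp to its enclosing 30-minute boundary.
    return ts.replace(minute=ts.minute - ts.minute % 30, second=0, microsecond=0)

counts = {}
for ts_str, lead in rows:
    b = bucket(datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S"))
    # CASE WHEN lead THEN 1 END counts only the true rows,
    # but a bucket with no true rows still appears with count 0.
    counts[b] = counts.get(b, 0) + (1 if lead else 0)
```

Note the 11:00 bucket still appears with a count of 0, because the row exists but its lead flag is false.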

I referred to https://***.com/a/12046382/3238864, which works for PostgreSQL.

I tried to mimic the posted code using:

with thirty_min_intervals as (
    select
      (select min(timestamp)::date from events) + ( n    || ' minutes')::interval start_time,
      (select min(timestamp)::date from events) + ((n+30) || ' minutes')::interval end_time
    from generate_series(0, (24*60), 30) n
)
select count(CASE WHEN lead THEN 1 END) from events e
right join thirty_min_intervals f
on e.timestamp >= f.start_time and e.timestamp < f.end_time
group by f.start_time, f.end_time
order by f.start_time;

However, I get the error:

[0A000] ERROR: Specified types or functions (one per INFO message) not supported on Redshift tables.

What is a good way to compute aggregated data over N-minute intervals in Redshift?

【Comments】:

Amazon Redshift does not support the generate_series() function. Refer to the list of unsupported PostgreSQL features.

But if you run the bare command select * from generate_series(0, (24*60), 30) n; in Redshift, it runs fine.

Yes. generate_series will work on the leader node. If you try to access a Redshift table in a query that contains generate_series, it will throw this error, because the compute nodes do not support the generate_series() function. If your query does not access Redshift tables, generate_series() will give you results.

One option is to create a table containing a list of times at 30-minute intervals, then join against it. I feel generate_series() could be used to build that table.

@JohnRotenstein You cannot create a table using generate_series(). Redshift does not support it. You would have to create such a table with several select queries.

【Answer 1】:
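For reference, the series the question's CTE asks generate_series(0, 24*60, 30) for is just minute offsets 0, 30, ..., 1440 (both endpoints inclusive); a quick Python equivalent of the interval bounds it was trying to build (day_start stands in for min(timestamp)::date):

```python
from datetime import datetime, timedelta

day_start = datetime(2008, 4, 9)  # stand-in for min(timestamp)::date

# One (start_time, end_time) pair per 30-minute step across the day,
# mirroring generate_series(0, 24*60, 30), whose end bound is inclusive.
intervals = [
    (day_start + timedelta(minutes=n), day_start + timedelta(minutes=n + 30))
    for n in range(0, 24 * 60 + 1, 30)
]
```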

Joe's answer is a great solution. I feel that when working in Redshift you should always consider how the data is distributed and sorted. It can have a huge impact on performance.

Building on Joe's excellent answer: I will materialize the sample events. In practice the events will be in a table.

drop table if exists public.temporary_events;
create table public.temporary_events AS 
select ts::timestamp as ts 
    ,lead 
from 
(   SELECT '2017-02-16 10:02:01'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 10:03:05'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 10:31:07'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 11:00:05'::timestamp as ts, false::boolean as lead) t
;

Now run an explain:

explain 
WITH time_dimension
AS (SELECT  dtm
           ,dtm - ((DATEPART(SECONDS,dtm) + (DATEPART(MINUTES,dtm)*60) % 1800) * INTERVAL '1 second') AS dtm_half_hour
    FROM /* Create a series of timestamp. 1 per second working backwards from NOW(). */
         /*  NB: `sysdate` could be substituted for an arbitrary ending timestamp */
         (SELECT DATE_TRUNC('SECONDS',sysdate) - (n * INTERVAL '1 second') AS dtm
          FROM /* Generate a number sequence of 100,000 values from a large internal table */
               (SELECT  ROW_NUMBER() OVER () AS n FROM stl_scan LIMIT 100000) rn) rn)

SELECT dtm_half_hour
      ,COUNT(CASE WHEN lead THEN 1 END)
FROM      time_dimension td
LEFT JOIN public.temporary_events e
       ON td.dtm = e.ts
WHERE td.dtm_half_hour BETWEEN '2017-02-16 09:30:00' AND '2017-02-16 11:00:00'
GROUP BY 1
-- ORDER BY 1 Just to simply the job a little

The output is:

XN HashAggregate  (cost=999999999999999967336168804116691273849533185806555472917961779471295845921727862608739868455469056.00..999999999999999967336168804116691273849533185806555472917961779471295845921727862608739868455469056.00 rows=1 width=9)
  ->  XN Hash Left Join DS_DIST_BOTH  (cost=0.05..999999999999999967336168804116691273849533185806555472917961779471295845921727862608739868455469056.00 rows=1 width=9)
        Outer Dist Key: ('2018-11-27 17:00:35'::timestamp without time zone - ((rn.n)::double precision * '00:00:01'::interval))
        Inner Dist Key: e."ts"
        Hash Cond: ("outer"."?column2?" = "inner"."ts")
        ->  XN Subquery Scan rn  (cost=0.00..14.95 rows=1 width=8)
              Filter: (((('2018-11-27 17:00:35'::timestamp without time zone - ((n)::double precision * '00:00:01'::interval)) - ((((("date_part"('minutes'::text, ('2018-11-27 17:00:35'::timestamp without time zone - ((n)::double precision * '00:00:01'::interval))) * 60) % 1800) + "date_part"('seconds'::text, ('2018-11-27 17:00:35'::timestamp without time zone - ((n)::double precision * '00:00:01'::interval)))))::double precision * '00:00:01'::interval)) <= '2017-02-16 11:00:00'::timestamp without time zone) AND ((('2018-11-27 17:00:35'::timestamp without time zone - ((n)::double precision * '00:00:01'::interval)) - ((((("date_part"('minutes'::text, ('2018-11-27 17:00:35'::timestamp without time zone - ((n)::double precision * '00:00:01'::interval))) * 60) % 1800) + "date_part"('seconds'::text, ('2018-11-27 17:00:35'::timestamp without time zone - ((n)::double precision * '00:00:01'::interval)))))::double precision * '00:00:01'::interval)) >= '2017-02-16 09:30:00'::timestamp without time zone))
              ->  XN Limit  (cost=0.00..1.95 rows=130 width=0)
                    ->  XN Window  (cost=0.00..1.95 rows=130 width=0)
                          ->  XN Network  (cost=0.00..1.30 rows=130 width=0)
                                Send to slice 0
                                ->  XN Seq Scan on stl_scan  (cost=0.00..1.30 rows=130 width=0)
        ->  XN Hash  (cost=0.04..0.04 rows=4 width=9)
              ->  XN Seq Scan on temporary_events e  (cost=0.00..0.04 rows=4 width=9)

Kablamo!

As Joe says, you can happily use this pattern without issue. However, once your data gets big enough, or your SQL logic complex enough, you may want to optimise. If for no other reason, you might like to understand the explain plan as you add more SQL logic to your code.

There are three areas we can look at:

1. The join. Make the join between the two sets of data work on the same data types. Here we join timestamp to timestamp, rather than timestamp into interval.
2. Data distribution. Materialize and distribute both tables by the timestamp.
3. Data sorting. If the events are sorted by this timestamp, and the time dimension is sorted by both timestamps, then you can complete the whole query using a merge join, without moving any data and without sending the data to the leader node for aggregation.

Observe:

drop table if exists public.temporary_time_dimension;
create table public.temporary_time_dimension
distkey(dtm) sortkey(dtm, dtm_half_hour)
AS (SELECT  dtm::timestamp as dtm
           ,dtm - ((DATEPART(SECONDS,dtm) + (DATEPART(MINUTES,dtm)*60) % 1800) * INTERVAL '1 second') AS dtm_half_hour
    FROM /* Create a series of timestamp. 1 per second working backwards from NOW(). */
         /*  NB: `sysdate` could be substituted for an arbitrary ending timestamp */
         (SELECT DATE_TRUNC('SECONDS',sysdate) - (n * INTERVAL '1 second') AS dtm         
          FROM /* Generate a number sequence of 100,000 values from a large internal table */
               (SELECT  ROW_NUMBER() OVER () AS n FROM stl_scan LIMIT 100000) rn) rn)
;               

drop table if exists public.temporary_events;
create table public.temporary_events 
distkey(ts) sortkey(ts)
AS 
select ts::timestamp as ts 
    ,lead 
from 
(   SELECT '2017-02-16 10:02:01'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 10:03:05'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 10:31:07'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 11:00:05'::timestamp as ts, false::boolean as lead) t
;

explain 
SELECT 
     dtm_half_hour
    ,COUNT(CASE WHEN lead THEN 1 END)
FROM public.temporary_time_dimension td
LEFT JOIN public.temporary_events e
       ON td.dtm = e.ts
WHERE td.dtm_half_hour BETWEEN '2017-02-16 09:30:00' AND '2017-02-16 11:00:00'
GROUP BY 1
--order by dtm_half_hour

This gives:

XN HashAggregate  (cost=1512.67..1512.68 rows=1 width=9)
  ->  XN Merge Left Join DS_DIST_NONE  (cost=0.00..1504.26 rows=1682 width=9)
        Merge Cond: ("outer".dtm = "inner"."ts")
        ->  XN Seq Scan on temporary_time_dimension td  (cost=0.00..1500.00 rows=1682 width=16)
              Filter: ((dtm_half_hour <= '2017-02-16 11:00:00'::timestamp without time zone) AND (dtm_half_hour >= '2017-02-16 09:30:00'::timestamp without time zone))
        ->  XN Seq Scan on temporary_events e  (cost=0.00..0.04 rows=4 width=9)

Important notes:

- I have dropped the ORDER BY. Putting it back will result in the data being sent to the leader node for sorting. If you can do away with the sort, do so!
- I'm sure that choosing the timestamp as the events table sort key will not be ideal in many cases. I just thought I'd show what is possible.
- I think you will likely want to create the time dimension with diststyle all and have it sorted. That will ensure your join generates no network traffic.

【Comments】:

【Answer 2】:

You can use ROW_NUMBER() to generate the series. I use an internal table that I know to be large. FWIW, I would generally persist time_dimension to a real table to avoid doing this repeatedly.

Here you go:

WITH events
AS (          SELECT '2017-02-16 10:02:01'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 10:03:05'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 10:31:07'::timestamp as ts, true::boolean  as lead
    UNION ALL SELECT '2017-02-16 11:00:05'::timestamp as ts, false::boolean as lead)

,time_dimension
AS (SELECT  dtm
           ,dtm - ((DATEPART(SECONDS,dtm) + (DATEPART(MINUTES,dtm)*60) % 1800) * INTERVAL '1 second') AS dtm_half_hour
    FROM /* Create a series of timestamp. 1 per second working backwards from NOW(). */
         /*  NB: `sysdate` could be substituted for an arbitrary ending timestamp */
         (SELECT DATE_TRUNC('SECONDS',sysdate) - (n * INTERVAL '1 second') AS dtm
          FROM /* Generate a number sequence of 100,000 values from a large internal table */
               (SELECT  ROW_NUMBER() OVER () AS n FROM stl_scan LIMIT 100000) rn) rn)

SELECT dtm_half_hour
      ,COUNT(CASE WHEN lead THEN 1 END)
FROM      time_dimension td
LEFT JOIN events e
       ON td.dtm = e.ts
WHERE td.dtm_half_hour BETWEEN '2017-02-16 09:30:00' AND '2017-02-16 11:00:00'
GROUP BY 1
ORDER BY 1
;
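The dtm_half_hour expression above subtracts the number of seconds elapsed since the last half-hour boundary (note % binds tighter than +, so it computes seconds + ((minutes*60) % 1800)). That arithmetic can be checked in isolation; a small Python sketch of the same expression:

```python
from datetime import datetime, timedelta

def dtm_half_hour(dtm):
    # Mirrors the SQL expression:
    # dtm - ((DATEPART(SECONDS,dtm) + (DATEPART(MINUTES,dtm)*60) % 1800) * INTERVAL '1 second')
    # i.e. subtract the seconds elapsed since the last half-hour boundary.
    past = dtm.second + (dtm.minute * 60) % 1800
    return dtm - timedelta(seconds=past)
```

Spot-checking against the sample events: 10:31:07 has 31*60 % 1800 + 7 = 67 seconds past the boundary, so it floors to 10:30:00.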

【Comments】:

In Redshift you have to put derived tables like time_dimension into real tables. Because "time_dimension" is made up of several nested statements, the query optimizer does not know how the data is sorted and distributed across the database nodes. It will assume the worst case and make a mess of things.

@hibernado Can you clarify your comment? I use the pattern above daily without any problems.

I've added an "answer" to elaborate.
