Efficiently find top-N values from multiple columns independently in Oracle
【Posted】2011-09-02 03:16:39
【Question】Say I have 30 billion rows and several columns, and I want to efficiently find the top N most frequent values in each column independently, using the most elegant SQL possible. For example, if I have
FirstName LastName FavoriteAnimal FavoriteBook
--------- -------- -------------- ------------
Ferris Freemont Possum Ubik
Nancy Freemont Lemur Housekeeping
Nancy Drew Penguin Ubik
Bill Ribbits Lemur Dhalgren
and I want the top-1, then the result would be:
FirstName LastName FavoriteAnimal FavoriteBook
--------- -------- -------------- ------------
Nancy Freemont Lemur Ubik
I can probably come up with ways to do this, but I'm not sure they would be optimal, which matters with 30 billion rows; the SQL could also end up big and ugly and might use excessive temp space.
Using Oracle.
【Comments】:
How do you break ties? How many distinct values are there per column? Just tens of thousands at most, or more? @MattBall Ties are broken however dense_rank decides to break them. @Thilo Distinct values could be in the tens of millions in some cases, or 2 or 3 in others.
【Answer 1】: This should only have to hit the table once. You can get the frequency of each value independently using the analytic version of count():
select firstname, count(*) over (partition by firstname) as c_fn,
lastname, count(*) over (partition by lastname) as c_ln,
favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
favoritebook, count(*) over (partition by favoritebook) as c_fb
from my_table;
FIRSTN C_FN LASTNAME C_LN FAVORIT C_FA FAVORITEBOOK C_FB
------ ---- -------- ---- ------- ---- ------------ ----
Bill 1 Ribbits 1 Lemur 2 Dhalgren 1
Ferris 1 Freemont 2 Possum 1 Ubik 2
Nancy 2 Freemont 2 Lemur 2 Housekeeping 1
Nancy 2 Drew 1 Penguin 1 Ubik 2
You can then use that as a CTE (or subquery factoring, I think, in Oracle terms) and pull out just the highest-frequency value from each column:
with tmp_tab as (
    select /*+ MATERIALIZE */
        firstname, count(*) over (partition by firstname) as c_fn,
        lastname, count(*) over (partition by lastname) as c_ln,
        favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
        favoritebook, count(*) over (partition by favoritebook) as c_fb
    from my_table)
select (select firstname from (
            select firstname,
                row_number() over (partition by null order by c_fn desc) as r_fn
            from tmp_tab
        ) where r_fn = 1) as firstname,
       (select lastname from (
            select lastname,
                row_number() over (partition by null order by c_ln desc) as r_ln
            from tmp_tab
        ) where r_ln = 1) as lastname,
       (select favoriteanimal from (
            select favoriteanimal,
                row_number() over (partition by null order by c_fa desc) as r_fa
            from tmp_tab
        ) where r_fa = 1) as favoriteanimal,
       (select favoritebook from (
            select favoritebook,
                row_number() over (partition by null order by c_fb desc) as r_fb
            from tmp_tab
        ) where r_fb = 1) as favoritebook
from dual;
FIRSTN LASTNAME FAVORIT FAVORITEBOOK
------ -------- ------- ------------
Nancy Freemont Lemur Ubik
You're making one pass over the CTE per column, but it should still only hit the real table once (thanks to the materialize hint). You may need to add to the order by clauses to adjust what should happen in the case of ties.
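For example, one deterministic tie-breaker (a sketch, not part of the original answer) would be a secondary sort key in each window, shown here for the first-name branch only:

select firstname from (
    select firstname,
        -- alphabetical order decides between equally frequent first names
        row_number() over (partition by null order by c_fn desc, firstname) as r_fn
    from tmp_tab
) where r_fn = 1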
This is conceptually similar to what Thilo, ysth and others have suggested, except that you're letting Oracle keep track of all the counts for you.
Edit: Hmm, the explain plan shows it doing four full table scans; may need to think about this a bit more...
Edit 2: Adding the (undocumented) MATERIALIZE hint to the CTE seems to take care of that; it creates a transient temporary table to hold the results and only does a single full table scan. The explain plan cost is higher though, at least with this sample data set. Interested in any comments on downsides to doing it this way.
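For anyone wanting to reproduce that comparison, one way to look at the plan (assuming the standard DBMS_XPLAN package is available; the statement below is just a single-column skeleton of the query, substitute the full statement being tuned):

explain plan for
with tmp_tab as (
    select /*+ MATERIALIZE */
        firstname, count(*) over (partition by firstname) as c_fn
    from my_table)
select firstname from (
    select firstname,
        row_number() over (partition by null order by c_fn desc) as r_fn
    from tmp_tab
) where r_fn = 1;

select * from table(dbms_xplan.display);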
【Discussion】:
Correct. See the similar solution I just posted; it could probably be improved.
【Answer 2】: The best I've come up with so far in pure Oracle SQL is similar to what @AlexPoole did. I use count(A) rather than count(*) to push nulls to the bottom.
with
NUM_ROWS_RETURNED as (
select 4 as NUM from dual
),
SAMPLE_DATA as (
select /*+ materialize */
A,B,C,D,E
from (
select 1 as A, 1 as B, 4 as C, 1 as D, 4 as E from dual
union all select 1 , -2 , 3 , 2 , 3 from dual
union all select 1 , -2 , 2 , 2 , 3 from dual
union all select null , 1 , 1 , 3 , 2 from dual
union all select null , 2 , 4 , null , 2 from dual
union all select null , 1 , 3 , null , 2 from dual
union all select null , 1 , 2 , null , 1 from dual
union all select null , 1 , 4 , null , 1 from dual
union all select null , 1 , 3 , 3 , 1 from dual
union all select null , 1 , 4 , 3 , 1 from dual
)
),
RANKS as (
select /*+ materialize */
rownum as RANKED
from
SAMPLE_DATA
where
rownum <= (select min(NUM) from NUM_ROWS_RETURNED)
)
select
r.RANKED,
max(case when A_RANK = r.RANKED then A else null end) as A,
max(case when B_RANK = r.RANKED then B else null end) as B,
max(case when C_RANK = r.RANKED then C else null end) as C,
max(case when D_RANK = r.RANKED then D else null end) as D,
max(case when E_RANK = r.RANKED then E else null end) as E
from (
select
A, dense_rank() over (order by A_COUNTS desc) as A_RANK,
B, dense_rank() over (order by B_COUNTS desc) as B_RANK,
C, dense_rank() over (order by C_COUNTS desc) as C_RANK,
D, dense_rank() over (order by D_COUNTS desc) as D_RANK,
E, dense_rank() over (order by E_COUNTS desc) as E_RANK
from (
select
A, count(A) over (partition by A) as A_COUNTS,
B, count(B) over (partition by B) as B_COUNTS,
C, count(C) over (partition by C) as C_COUNTS,
D, count(D) over (partition by D) as D_COUNTS,
E, count(E) over (partition by E) as E_COUNTS
from
SAMPLE_DATA
)
)
cross join
RANKS r
group by
r.RANKED
order by
r.RANKED
/
Giving:
RANKED| A| B| C| D| E
------|----|----|----|----|----
1| 1| 1| 4| 3| 1
2|null| -2| 3| 2| 2
3|null| 2| 2| 1| 3
4|null|null| 1|null| 4
With the plan:
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 93 | 57 (20)| 00:00:01 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | VIEW | | 10 | 150 | 20 (0)| 00:00:01 |
| 4 | UNION-ALL | | | | | |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 7 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 8 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 9 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 10 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 11 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 12 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 13 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 14 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 15 | LOAD AS SELECT | | | | | |
|* 16 | COUNT STOPKEY | | | | | |
| 17 | VIEW | | 10 | | 2 (0)| 00:00:01 |
| 18 | TABLE ACCESS FULL | SYS_TEMP_0FD9| 10 | 150 | 2 (0)| 00:00:01 |
| 19 | SORT AGGREGATE | | 1 | | | |
| 20 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 21 | SORT GROUP BY | | 1 | 93 | 33 (34)| 00:00:01 |
| 22 | MERGE JOIN CARTESIAN | | 100 | 9300 | 32 (32)| 00:00:01 |
| 23 | VIEW | | 10 | 800 | 12 (84)| 00:00:01 |
| 24 | WINDOW SORT | | 10 | 800 | 12 (84)| 00:00:01 |
| 25 | WINDOW SORT | | 10 | 800 | 12 (84)| 00:00:01 |
| 26 | WINDOW SORT | | 10 | 800 | 12 (84)| 00:00:01 |
| 27 | WINDOW SORT | | 10 | 800 | 12 (84)| 00:00:01 |
| 28 | WINDOW SORT | | 10 | 800 | 12 (84)| 00:00:01 |
| 29 | VIEW | | 10 | 800 | 7 (72)| 00:00:01 |
| 30 | WINDOW SORT | | 10 | 150 | 7 (72)| 00:00:01 |
| 31 | WINDOW SORT | | 10 | 150 | 7 (72)| 00:00:01 |
| 32 | WINDOW SORT | | 10 | 150 | 7 (72)| 00:00:01 |
| 33 | WINDOW SORT | | 10 | 150 | 7 (72)| 00:00:01 |
| 34 | WINDOW SORT | | 10 | 150 | 7 (72)| 00:00:01 |
| 35 | VIEW | | 10 | 150 | 2 (0)| 00:00:01 |
| 36 | TABLE ACCESS FULL| SYS_TEMP_0FD9| 10 | 150 | 2 (0)| 00:00:01 |
| 37 | BUFFER SORT | | 10 | 130 | 33 (34)| 00:00:01 |
| 38 | VIEW | | 10 | 130 | 2 (0)| 00:00:01 |
| 39 | TABLE ACCESS FULL | SYS_TEMP_0FD9| 10 | 130 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
16 - filter( (SELECT MIN(4) FROM "SYS"."DUAL" "DUAL")>=ROWNUM)
But against a real table it looks like this (for a slightly modified query):
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 422 | | 6026M (1)|999:59:59 | | | | | |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | | | | | | |
| 2 | LOAD AS SELECT | | | | | | | | | | | |
|* 3 | COUNT STOPKEY | | | | | | | | | | | |
| 4 | PX COORDINATOR | | | | | | | | | | | |
| 5 | PX SEND QC (RANDOM) | :TQ10000 | 10 | | | 2 (0)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |
|* 6 | COUNT STOPKEY | | | | | | | | | Q1,00 | PCWC | |
| 7 | PX BLOCK ITERATOR | | 10 | | | 2 (0)| 00:00:01 | 1 | 115 | Q1,00 | PCWC | |
| 8 | INDEX FAST FULL SCAN | IDX | 10 | | | 2 (0)| 00:00:01 | 1 | 115 | Q1,00 | PCWP | |
| 9 | SORT GROUP BY | | 1 | 422 | | 6026M (1)|999:59:59 | | | | | |
| 10 | MERGE JOIN CARTESIAN | | 22G| 8997G| | 6024M (1)|999:59:59 | | | | | |
| 11 | VIEW | | 2289M| 872G| | 1443M (1)|999:59:59 | | | | | |
| 12 | WINDOW SORT | | 2289M| 872G| 970G| 1443M (1)|999:59:59 | | | | | |
| 13 | WINDOW SORT | | 2289M| 872G| 970G| 1443M (1)|999:59:59 | | | | | |
| 14 | WINDOW SORT | | 2289M| 872G| 970G| 1443M (1)|999:59:59 | | | | | |
| 15 | WINDOW SORT | | 2289M| 872G| 970G| 1443M (1)|999:59:59 | | | | | |
| 16 | WINDOW SORT | | 2289M| 872G| 970G| 1443M (1)|999:59:59 | | | | | |
| 17 | WINDOW SORT | | 2289M| 872G| 970G| 1443M (1)|999:59:59 | | | | | |
| 18 | VIEW | | 2289M| 872G| | 248M (1)|829:16:06 | | | | | |
| 19 | WINDOW SORT | | 2289M| 162G| 198G| 248M (1)|829:16:06 | | | | | |
| 20 | WINDOW SORT | | 2289M| 162G| 198G| 248M (1)|829:16:06 | | | | | |
| 21 | WINDOW SORT | | 2289M| 162G| 198G| 248M (1)|829:16:06 | | | | | |
| 22 | WINDOW SORT | | 2289M| 162G| 198G| 248M (1)|829:16:06 | | | | | |
| 23 | WINDOW SORT | | 2289M| 162G| 198G| 248M (1)|829:16:06 | | | | | |
| 24 | WINDOW SORT | | 2289M| 162G| 198G| 248M (1)|829:16:06 | | | | | |
| 25 | PARTITION RANGE ALL| | 2289M| 162G| | 3587K (4)| 11:57:36 | 1 | 115 | | | |
| 26 | TABLE ACCESS FULL | LARGE_TABLE | 2289M| 162G| | 3587K (4)| 11:57:36 | 1 | 115 | | | |
| 27 | BUFFER SORT | | 10 | 130 | | 6026M (1)|999:59:59 | | | | | |
| 28 | VIEW | | 10 | 130 | | 2 (0)| 00:00:01 | | | | | |
| 29 | TABLE ACCESS FULL | SYS_TEMP_0FD9| 10 | 130 | | 2 (0)| 00:00:01 | | | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(ROWNUM<=10)
6 - filter(ROWNUM<=10)
I could speed that up by using from LARGE_TABLE sample (0.01), though that may skew the picture. For a table with 2 billion rows, this returns the answer in 53 minutes.
【Discussion】:
【Answer 3】: You can't.
There is no trick here, just raw work.
Put simply, you have to go through every row of the table, counting the occurrences for each column you're interested in, and then sort those results to find the values with the highest counts.
For a single column, it's simple:
SELECT col, count(*) FROM table GROUP BY col ORDER BY count(*) DESC
and grab the first row.
N columns equals N table scans.
If you write the logic yourself and make a single pass through the table, then you are counting every instance of every value of every column.
If you have 30 billion rows and 30 billion distinct values, you get to store them all, and they will all have counts of 1. And you get to do that for every column you care about.
If this information is important to you, then you're better off tracking it separately and incrementally as the data comes in. But that's a different question.
【Discussion】:
"N columns equals N table scans." I'm afraid so. I wonder if they could at least be made to run "in parallel", so that each block is loaded from disk only once and seen by all of the sub-selects before moving on to the next block. That way it would be roughly the same as a single scan (performance-wise).
【Answer 4】: Assuming you don't have too many distinct values in each column, what you need to do is the following:
- Create a map for each column, holding a counter for each distinct value
- Read through the whole table (row by row, but only once)
- For each row, increment the counters
- Afterwards, go through your maps and find the most frequent values
For a single column, SQL would do it like this:
select value from (
select value, count(*) from the_table
group by value
order by count(*) desc
) where rownum < 2
However, if you just combine several of these into one big SQL statement, I think it will scan the table several times (once per column), which is what you don't want. Can you get an execution plan for that?
So you may have to write a program to do this, either on the server (PL/SQL, or Java if available) or as a client program.
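As a rough illustration of what such a server-side program could look like, here is a minimal PL/SQL sketch (not from the original answer) that makes one pass over the question's table and counts two of its columns in associative arrays. It assumes the distinct values fit in session memory, which the question's comments suggest may not always hold:

declare
    type count_map is table of pls_integer index by varchar2(4000);
    fn_counts count_map;
    ln_counts count_map;

    -- increment the counter for one value, skipping nulls for this sketch
    procedure bump(m in out nocopy count_map, k in varchar2) is
    begin
        if k is null then
            return;
        end if;
        if m.exists(k) then
            m(k) := m(k) + 1;
        else
            m(k) := 1;
        end if;
    end;

    -- walk the map and print the most frequent value
    procedure report(m in count_map, label in varchar2) is
        k    varchar2(4000);
        best varchar2(4000);
        maxc pls_integer := -1;
    begin
        k := m.first;
        while k is not null loop
            if m(k) > maxc then
                maxc := m(k);
                best := k;
            end if;
            k := m.next(k);
        end loop;
        dbms_output.put_line(label || ': ' || best || ' (' || maxc || ')');
    end;
begin
    -- one pass over the table; every column of interest is counted in the same loop
    for rec in (select firstname, lastname from my_table) loop
        bump(fn_counts, rec.firstname);
        bump(ln_counts, rec.lastname);
    end loop;
    report(fn_counts, 'FirstName');
    report(ln_counts, 'LastName');
end;
/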
【Discussion】:
【Answer 5】: Loop through your records, keeping an in-memory record of how many times each value of each column of interest has been encountered.
Every so often (every X records, or when the amount of data you've accumulated hits a fixed memory limit), loop through your in-memory counts, add them to the corresponding counts in some disk-based store, and clear the in-memory data.
The details depend on the programming language you're using.
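A hedged PL/SQL sketch of that flush pattern for a single column might look like the following (the counter table firstname_counts(field_value, cnt) and the flush interval are assumptions, not part of the original answer):

declare
    type count_map is table of pls_integer index by varchar2(4000);
    fn_counts count_map;
    rows_seen pls_integer := 0;

    -- merge the in-memory tallies into the on-disk counter table, then clear them
    procedure flush(m in out nocopy count_map) is
        k     varchar2(4000);
        v_cnt pls_integer;
    begin
        k := m.first;
        while k is not null loop
            v_cnt := m(k);
            update firstname_counts set cnt = cnt + v_cnt where field_value = k;
            if sql%rowcount = 0 then
                insert into firstname_counts (field_value, cnt) values (k, v_cnt);
            end if;
            k := m.next(k);
        end loop;
        m.delete;   -- free the in-memory counts
        commit;
    end;
begin
    for rec in (select firstname from my_table) loop
        if rec.firstname is not null then
            if fn_counts.exists(rec.firstname) then
                fn_counts(rec.firstname) := fn_counts(rec.firstname) + 1;
            else
                fn_counts(rec.firstname) := 1;
            end if;
        end if;
        rows_seen := rows_seen + 1;
        if mod(rows_seen, 1000000) = 0 then
            flush(fn_counts);   -- bound the memory used by the map
        end if;
    end loop;
    flush(fn_counts);           -- write out whatever is left
end;
/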
【Discussion】:
I probably can't do that. But supposing I could, what would an efficient stored procedure look like? Ideally I'd want a single pass through the table. A single pass through the table doesn't seem feasible (too much sorting). If you had four separate indexes you could do four single passes over the indexes (no sorting). I'd potentially want to do this for every column of a 100-column table, and there isn't enough disk space to have indexes on all of them. @Thilo: doesn't top N involve sorting the whole table? Top N doesn't, but top N by itself doesn't give you the counts (only the top N largest values, not the top N most frequent values). But I agree, these could also be done without sorting. Single pass, a map of counters. If you don't have too many distinct values, it doesn't matter how many rows there are.
【Answer 6】: Below, I propose a naive approach. I think this would be completely unworkable for data sets much over a few hundred thousand rows. Perhaps a guru can use it as the basis for a more suitable answer.
How long does the query need to return results? You could select the results of the "group by" portion of the query below into some kind of cache, perhaps once a night (for instance with a scheduled materialized view, sketched below).
Then you would only need to do the final select against that cache.
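A possible sketch of that nightly cache for one column (the view name and refresh schedule are assumptions, not from the original answer):

-- refresh the per-value counts once a day, starting now
create materialized view firstname_counts_mv
    build immediate
    refresh complete
    start with sysdate next sysdate + 1
as
select firstname as field_value, count(*) as cnt
from my_table
group by firstname;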
Another possibility is to create a trigger on the table in question that updates a "counter" table on every insert/update/delete.
The counter table would look like this:
field_value count
Nancy 2
Bill 1
Ferris 1
You would need one of these counter tables for every field you want to count; a sketch of such a trigger follows below.
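As a sketch only (hypothetical table and column names, and cnt rather than count to avoid the keyword), a maintenance trigger for the FirstName column could look like this:

-- Per-row trigger keeping firstname_counts(field_value, cnt) in sync
create or replace trigger trg_firstname_counter
after insert or delete or update of firstname on my_table
for each row
begin
    -- take the old value out of the tally on delete/update
    if (deleting or updating) and :old.firstname is not null then
        update firstname_counts
           set cnt = cnt - 1
         where field_value = :old.firstname;
    end if;
    -- add the new value on insert/update, creating its row if needed
    if (inserting or updating) and :new.firstname is not null then
        update firstname_counts
           set cnt = cnt + 1
         where field_value = :new.firstname;
        if sql%rowcount = 0 then
            insert into firstname_counts (field_value, cnt)
            values (:new.firstname, 1);
        end if;
    end if;
end;
/

Note that a row-level trigger like this adds overhead to every DML statement on a very large table and can serialise on hot counter rows, so it trades write cost for cheap reads.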
In short, I think you need to consider ways of observing this data indirectly. I don't think there's any way around the fact that the actual counting takes a long time. But if you have a way of incrementally tracking what has changed, then you only have to do the heavy lifting once. After that, your cache plus whatever is new should give you what you need.
-- Oracle equivalent of the original "select top 1": order by frequency and keep the first row
select firstname, freq
from (
    select firstname, count(*) as freq
    from (
        select 'Ferris' as firstname, 'Freemont' as lastname,
               'Possum' as favoriteanimal, 'Ubik' as favoritebook from dual
        union all select 'Nancy', 'Freemont', 'Lemur', 'Housekeeping' from dual
        union all select 'Nancy', 'Drew', 'Penguin', 'Ubik' from dual
        union all select 'Bill', 'Ribbits', 'Lemur', 'Dhalgren' from dual
    ) sample_data
    group by firstname
    order by count(*) desc
)
where rownum <= 1
【Discussion】:
"completely unworkable for data sets much over a few hundred thousand rows" - actually, I don't think it becomes unworkable unless there are a lot of distinct values. The number of rows shouldn't matter that much (i.e. it scales linearly, but there's no way around that). The problem is that you only select one column, so in SQL you (probably) have to repeat the big scan for every column, whereas a clever program could combine them. I mostly use this for one-off things where I want to characterise the contents of an unfamiliar table I don't own, so no triggers and so on.