Compute co-occurrences in pandas dataframe for column values grouped by another column values

Posted: 2021-06-07 05:55:41

Problem
I am using Pandas on Python 3.7.7. I want to compute the mutual information between the categorical values of a variable x, grouped by the values of another variable y. My data looks like the following table:
+-----+-----+
| x | y |
+-----+-----+
| x_1 | y_1 |
| x_2 | y_1 |
| x_3 | y_1 |
| x_1 | y_2 |
| x_2 | y_2 |
| x_4 | y_3 |
| x_6 | y_3 |
| x_9 | y_3 |
| x_1 | y_4 |
| ... | ... |
+-----+-----+
I would like a data structure (a pandas MultiIndex Series/DataFrame, a numpy matrix, or anything suitable) that stores the number of co-occurrences of each pair (x_i, x_j) over the y_k values. Ideally, this would also make it easy to compute, for example, the PMI:
+-----+-----+--------+-------+
| x_i | x_j | cooc | pmi |
+-----+-----+--------+-------+
| x_1 | x_2 | | |
| x_1 | x_3 | | |
| x_1 | x_4 | | |
| x_1 | x_5 | | |
| ... | ... | ... | ... |
+-----+-----+--------+-------+
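As a small, naive illustration of what I mean by co-occurrence counts (the tiny frame below is made up just for the example), pairs of x values can be enumerated per y group:

```python
import itertools
from collections import Counter

import pandas as pd

df = pd.DataFrame({
    "x": ["x_1", "x_2", "x_3", "x_1", "x_2"],
    "y": ["y_1", "y_1", "y_1", "y_2", "y_2"],
})

cooc = Counter()
for _, grp in df.groupby("y"):
    # every unordered pair of distinct x values sharing the same y value
    for pair in itertools.combinations(sorted(grp["x"].unique()), 2):
        cooc[pair] += 1

print(cooc[("x_1", "x_2")])  # x_1 and x_2 share y_1 and y_2 -> 2
```

This scales poorly (it materializes every pair in every group), which is exactly why I am asking for a memory-efficient alternative.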
Is there a suitable, memory-efficient way to do this?

Side note: I am working with fairly large data (40k distinct x values and 8k distinct y values, for a total of 300k (x, y) entries), so a memory-friendly and optimized approach would be great (possibly relying on third-party libraries such as Dask).
Update

Unoptimized solution

I came up with a solution using pd.crosstab. I provide a small example here:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('xy'))
"""
df:
+-----+-----+
| x | y |
+-----+-----+
| 4 | 99 |
| 1 | 39 |
| 39 | 56 |
| .. | .. |
| 59 | 20 |
| 82 | 57 |
+-----+-----+
100 rows × 2 columns
"""
# Compute cross tabulation:
crosstab = pd.crosstab(df["x"], df["y"])
"""
crosstab:
+------+-----+-----+-----+-----+
| y | 0 | 2 | 3 | ... |
| x +-----+-----+-----+-----+
| 1 | 0 | 0 | 0 | ... |
| 2 | 0 | 0 | 0 | ... |
| ... | ... | ... | ... | ... |
+------+-----+-----+-----+-----+
62 rows × 69 columns
"""
# Initialize a pandas MultiIndex Series storing PMI values
import itertools
x_pairs = list(itertools.combinations(crosstab.index, 2))
pmi = pd.Series(0, index = pd.MultiIndex.from_tuples(x_pairs))
"""
pmi:
+-------------+-----+
| index | val |
+------+------| |
| x_i | x_j | |
+------+------+-----+
| 1 | 2 | 0 |
| | 4 | 0 |
| ... | ... | ... |
| 95 | 98 | 0 |
| | 99 | 0 |
| 96 | 98 | 0 |
+------+------+-----+
Length: 1891, dtype: int64
"""
The loop I then use to fill the Series is structured as follows:
for x1, x2 in x_pairs:
    pmi.loc[x1, x2] = crosstab.loc[[x1, x2]].min().sum() / (crosstab.loc[x1].sum() * crosstab.loc[x2].sum())
This is not an optimal solution: it performs poorly even on small use cases.
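For reference, the co-occurrence count used in the loop (element-wise minimum of the two crosstab rows, summed over the y columns) can be checked on a tiny hand-made crosstab:

```python
import pandas as pd

# hand-made crosstab: counts of (x, y) pairs, made up for the illustration
crosstab = pd.DataFrame(
    {"y_1": [2, 1], "y_2": [0, 3]},
    index=pd.Index(["x_1", "x_2"], name="x"),
)

# column-wise min over the two selected rows, then summed over the y columns:
# y_1 -> min(2, 1) = 1, y_2 -> min(0, 3) = 0
cooc = crosstab.loc[["x_1", "x_2"]].min().sum()
print(cooc)  # -> 1
```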
Comments:

I ran into the same problem, but in the end I built the co-occurrence matrix after filtering out the lowest-frequency data.

That would be a good way to reduce the number of entries, but it does not solve the problem at large data scales. In fact, in my case the co-occurrence frequencies are very low, so filtering by frequency is not the best solution.

Is it fair to assume that only some combinations of x will actually be observed? (suggesting a sparse matrix representation)

Exactly, @SultanOrazbayev: with 40k distinct x values and 8k distinct y values, the 300k rows of the initial dataframe come nowhere near covering all possible combinations of x values.

@SultanOrazbayev I finally managed to do it using sparse matrices, thanks!
Answer 1:

Optimized solution

In the end, I managed to compute the co-occurrences in a memory-friendly way, using scipy sparse matrices for the intermediate computations:
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
def df_compute_cooccurrences(df: pd.DataFrame, column1: str, column2: str) -> pd.DataFrame:
    # pd.factorize encodes the object as an enumerated type or categorical variable, returning:
    # - `codes` (ndarray): an integer ndarray that's an indexer into `uniques`.
    # - `uniques` (ndarray, Index, or Categorical): the unique valid values
    # see more at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.factorize.html
    i, rows = pd.factorize(df[column1])
    # i -> array([ 0, 0, 0, ..., 449054, 0, 1])
    # rows -> Index(['column1_label1', 'column1_label2', ...])
    j, cols = pd.factorize(df[column2])
    # j -> array([ 0, 1, 2, ..., 28544, -1, -1])
    # cols -> Float64Index([column2_label1, column2_label2, ...])
    ij, tups = pd.factorize(list(zip(i, j)))
    # ij -> array([ 0, 1, 2, ..., 2878026, 2878027, 2878028])
    # tups -> array([(0, 0), (0, 1), (0, 2), ..., (449054, 28544), (0, -1), (1, -1)]
    # Then we can finally compute the cross-tabulation matrix
    crosstab = csr_matrix((np.bincount(ij), tuple(zip(*tups))))
    # If we convert this directly into a DataFrame with
    # pd.DataFrame.sparse.from_spmatrix(crosstab, rows, cols)
    # we get the same result as using
    # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html
    # but obtained in a memory-friendly way (allowing big data processing).
    # To obtain the co-occurrence matrix for column 1,
    # we multiply the crosstab matrix by its transpose
    coocc = crosstab.dot(crosstab.transpose())
    # Finally, return the co-occurrence matrix in DataFrame form
    return pd.DataFrame.sparse.from_spmatrix(coocc, rows, rows)
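As a quick sanity check of the comment above that the sparse construction matches pd.crosstab (the tiny frame below is made up for the test, and it assumes no missing values, since NaN would factorize to -1):

```python
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix

df = pd.DataFrame({"x": [1, 1, 2, 3], "y": ["a", "b", "a", "b"]})

# same construction as in the function above
i, rows = pd.factorize(df["x"])
j, cols = pd.factorize(df["y"])
ij, tups = pd.factorize(list(zip(i, j)))
sparse_ct = csr_matrix((np.bincount(ij), tuple(zip(*tups))))

dense = pd.DataFrame(sparse_ct.toarray(), index=rows, columns=cols)
reference = pd.crosstab(df["x"], df["y"])
# same counts (factorize preserves order of appearance, which on this
# frame happens to match pd.crosstab's sorted row/column order)
assert (dense.values == reference.values).all()
```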
A small example is provided here:
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
def df_compute_cooccurrences(df: pd.DataFrame, column1: str, column2: str) -> pd.DataFrame:
    i, rows = pd.factorize(df[column1])
    j, cols = pd.factorize(df[column2])
    ij, tups = pd.factorize(list(zip(i, j)))
    crosstab = csr_matrix((np.bincount(ij), tuple(zip(*tups))))
    coocc = crosstab.dot(crosstab.transpose())
    return pd.DataFrame.sparse.from_spmatrix(coocc, rows, rows)
df = pd.DataFrame(zip([1,1,1,2,2,3,4],["a","a","a","a","a","b","b"]), columns=list('xy'))
"""
df:
+-----+-----+
|  x  |  y  |
+-----+-----+
|  1  |  a  |
|  1  |  a  |
|  1  |  a  |
|  2  |  a  |
|  2  |  a  |
|  3  |  b  |
|  4  |  b  |
+-----+-----+
"""
cooc_df = df_compute_cooccurrences(df, "x", "y")
"""
cooc_df:
    +---+---+---+---+
    | 1 | 2 | 3 | 4 |
+---+---+---+---+---+
| 1 | 9 | 6 | 0 | 0 |
| 2 | 6 | 4 | 0 | 0 |
| 3 | 0 | 0 | 1 | 1 |
| 4 | 0 | 0 | 1 | 1 |
+---+---+---+---+---+
"""
cooc_df2 = df_compute_cooccurrences(df, "y", "x")
"""
cooc_df2:
    +----+----+
    | a  | b  |
+---+----+----+
| a | 13 | 0  |
| b | 0  | 2  |
+---+----+----+
"""
Comments:

Looks good, so in the end there was no need to compute the PMI?

Given the co-occurrences, it is easy to carry out further computations (such as the PMI, by dividing each co-occurrence value by the sum of the absolute occurrences of the two elements), but I wanted the answer to provide a clean way to compute co-occurrences without dealing with further case-specific computations.