Remove element from pyspark array based on element of another column

Posted: 2019-11-13 11:15:16


Question:

I want to check whether an array contains a given value in Pyspark (Spark < 2.4).

Sample dataframe:

column_1 <Array>           |    column_2 <String>
--------------------------------------------
["2345","98756","8794"]    |       8794
--------------------------------------------
["8756","45678","987563"]  |       1234
--------------------------------------------
["3475","8956","45678"]    |       3475
--------------------------------------------

I want to compare the two columns column_1 and column_2. If column_1 contains the value of column_2, that value should be removed from column_1. I wrote a UDF to subtract column_2 from column_1, but it didn't work:

from pyspark.sql.functions import udf

def contains(x, y):
    try:
        sx, sy = set(x), set(y)
        if len(sx) == 0:
            return sx
        elif len(sy) == 0:
            return sx
        else:
            return sx - sy
    # in exception, for example `x` or `y` is None (not a list)
    except:
        return sx
udf_contains = udf(contains, 'string')
new_df = my_df.withColumn('column_1', udf_contains(my_df.column_1, my_df.column_2))

Expected result:

column_1 <Array>           |    column_2 <String>
--------------------------------------------------
["2345","98756"]           |       8794
--------------------------------------------------
["8756","45678","987563"]  |       1234
--------------------------------------------------
["8956","45678"]           |       3475
--------------------------------------------------

How can I do this, given that sometimes my column_1 is [] and column_2 is null? Thanks.

Comments:

Check udf_contains = udf(lambda x,y: [e for e in x if e != y], 'array<string>'). If x can be null or not a list: udf(lambda x,y: [e for e in x if e != y] if isinstance(x, list) else x, 'array<string>')

@jxc I need your help :) ***.com/questions/58875531/concatenate-array-pyspark/…

Answer 1:

Spark 2.4.0+

Try array_remove, available since Spark 2.4.0:

val df = Seq(
    (Seq("2345","98756","8794"), "8794"), 
    (Seq("8756","45678","987563"), "1234"), 
    (Seq("3475","8956","45678"), "3475"),
    (Seq(), "empty"),
    (null, "null")
).toDF("column_1", "column_2")
df.show(5, false)

df
    .select(
        $"column_1",
        $"column_2",
        array_remove($"column_1", $"column_2") as "diff" // drop every occurrence of column_2's value
    ).show(5, false)

It returns:

+---------------------+--------+
|column_1             |column_2|
+---------------------+--------+
|[2345, 98756, 8794]  |8794    |
|[8756, 45678, 987563]|1234    |
|[3475, 8956, 45678]  |3475    |
|[]                   |empty   |
|null                 |null    |
+---------------------+--------+

+---------------------+--------+---------------------+
|column_1             |column_2|diff                 |
+---------------------+--------+---------------------+
|[2345, 98756, 8794]  |8794    |[2345, 98756]        |
|[8756, 45678, 987563]|1234    |[8756, 45678, 987563]|
|[3475, 8956, 45678]  |3475    |[8956, 45678]        |
|[]                   |empty   |[]                   |
|null                 |null    |null                 |
+---------------------+--------+---------------------+

Sorry for the Scala; I suppose it's easy to do the same in pyspark.
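As a sketch of that pyspark equivalent (not part of the original answer): pyspark's array_remove() treats its second argument as a literal element, so removing the value of another column can instead go through expr() and the SQL function. Assuming the question's dataframe:

from pyspark.sql.functions import expr

df = spark.createDataFrame(
    [(["2345", "98756", "8794"], "8794"),
     (["8756", "45678", "987563"], "1234"),
     (["3475", "8956", "45678"], "3475")],
    ['column_1', 'column_2'])

# the SQL array_remove accepts a column expression as the element to remove
df.withColumn('column_1', expr("array_remove(column_1, column_2)")).show(5, False)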

Spark < 2.4

%pyspark

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType


data = [
    (["2345","98756","8794"], "8794"), 
    (["8756","45678","987563"], "1234"), 
    (["3475","8956","45678"], "3475"),
    ([], "empty"),
    (None,"null")    
    ]
df = spark.createDataFrame(data, ['column_1', 'column_2'])
df.printSchema()
df.show(5, False)

def contains(x, y):
    # pass null column_1 / column_2 through unchanged
    if x is None or y is None:
        return x
    else:
        # wrap y in a list so set() doesn't split the string into characters
        sx, sy = set(x), set([y])
        return list(sx - sy)
udf_contains = udf(contains, ArrayType(StringType()))

df.select("column_1", "column_2", udf_contains("column_1", "column_2")).show(5, False)

Result:

root
 |-- column_1: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- column_2: string (nullable = true)
+---------------------+--------+
|column_1             |column_2|
+---------------------+--------+
|[2345, 98756, 8794]  |8794    |
|[8756, 45678, 987563]|1234    |
|[3475, 8956, 45678]  |3475    |
|[]                   |empty   |
|null                 |null    |
+---------------------+--------+
+---------------------+--------+----------------------------+
|column_1             |column_2|contains(column_1, column_2)|
+---------------------+--------+----------------------------+
|[2345, 98756, 8794]  |8794    |[2345, 98756]               |
|[8756, 45678, 987563]|1234    |[8756, 987563, 45678]       |
|[3475, 8956, 45678]  |3475    |[8956, 45678]               |
|[]                   |empty   |[]                          |
|null                 |null    |null                        |
+---------------------+--------+----------------------------+
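To overwrite column_1 in place, as asked in the question, the same UDF can be reused with withColumn (a sketch building on the code above):

new_df = df.withColumn('column_1', udf_contains('column_1', 'column_2'))
new_df.show(5, False)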

Comments:

Thanks for your help, I just did this: df.select(array_remove(df.data, 1)).collect(), but I got "TypeError: 'Column' object is not callable", maybe because I'm using Spark < 2.4.

@verojoucla I added a Spark < 2.4 version with pyspark. Note that set("abc") gives set(['a', 'c', 'b']).
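For reference, a quick plain-Python illustration of the set("abc") remark above (values taken from the question's sample data): calling set() on a string yields its characters, which is why the original UDF's set(y) never matched a whole value:

sx, sy = set(["2345", "98756", "8794"]), set("8794")
print(sy)                  # {'8', '7', '9', '4'} - the string was split into characters
print(sx - sy)             # {'2345', '98756', '8794'} - nothing removed
print(sx - set(["8794"]))  # {'2345', '98756'} - wrapping y in a list works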
