Drop duplicate words in a long string using Scala

Posted: 2018-10-03 07:44:10

【Question】:

I'd like to know how to drop duplicate words within a string held in a DataFrame column, and I want to do it in Scala. For example, below is the DataFrame I want to transform.

DataFrame:

val dataset1 = Seq(("66", "a,b,c,a", "4"), ("67", "a,f,g,t", "0"), ("70", "b,b,b,d", "4")).toDF("KEY1", "KEY2", "ID") 

+----+-------+---+
|KEY1|   KEY2| ID|
+----+-------+---+
|  66|a,b,c,a|  4|
|  67|a,f,g,t|  0|
|  70|b,b,b,d|  4|
+----+-------+---+

Expected result:

+----+----------+---+
|KEY1|      KEY2| ID|
+----+----------+---+
|  66|   a, b, c|  4|
|  67|a, f, g, t|  0|
|  70|      b, d|  4|
+----+----------+---+

Using PySpark I obtained the result above with the code below, but I have not managed to rewrite it in Scala. Do you have any suggestions? Thanks in advance, and have a nice day.

PySpark code:

# dataframe
l = [("66", "a,b,c,a", "4"),("67", "a,f,g,t", "0"),("70", "b,b,b,d", "4")]
#spark.createDataFrame(l).show()
df1 = spark.createDataFrame(l, ['KEY1', 'KEY2','ID'])


# function
import re
import numpy as np
# drop duplicates in a row
def drop_duplicates(row):
    # split on ',', drop duplicates and join back (note: np.unique also sorts the words)
    words = re.split(',', row)
    return ', '.join(np.unique(words))


# drop duplicates
from pyspark.sql.functions import udf

drop_duplicates_udf = udf(drop_duplicates)

dataset2 = df1.withColumn('KEY2', drop_duplicates_udf(df1.KEY2))
dataset2.show()

【Question comments】:

【Answer 1】:

DataFrame solution

scala> val df = Seq(("66", "a,b,c,a", "4"), ("67", "a,f,g,t", "0"), ("70", "b,b,b,d", "4")).toDF("KEY1", "KEY2", "ID")
df: org.apache.spark.sql.DataFrame = [KEY1: string, KEY2: string ... 1 more field]

scala> val distinct :String => String = _.split(",").toSet.mkString(",")
distinct: String => String = <function1>

scala> val distinct_id = udf (distinct)
distinct_id: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,StringType,Some(List(StringType)))

scala> df.select('key1,distinct_id('key2).as("distinct"),'id).show
+----+--------+---+
|key1|distinct| id|
+----+--------+---+
|  66|   a,b,c|  4|
|  67| a,f,g,t|  0|
|  70|     b,d|  4|
+----+--------+---+



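Note that .toSet does not guarantee the original token order in general, while .distinct keeps the first occurrence of each word. If you are on Spark 2.4 or later, the UDF can also be avoided entirely with the built-in split, array_distinct and array_join functions. A minimal sketch, assuming Spark 2.4+ and the df defined above:

import org.apache.spark.sql.functions.{array_distinct, array_join, col, split}

// split KEY2 on commas, drop duplicate tokens, then join back with ", "
val deduped = df.withColumn("KEY2", array_join(array_distinct(split(col("KEY2"), ",")), ", "))
deduped.show()
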
【Comments】:

【Answer 2】:

There may be a more optimized solution, but this should help you.

val rdd2 = dataset1.rdd.map(x => x(1).toString.split(",").distinct.mkString(", "))

// note: this keeps only the deduplicated KEY2 strings; convert the RDD back to a DataFrame yourself, or alternatively:

val distinctUDF = spark.udf.register("distinctUDF", (s: String) => s.split(",").distinct.mkString(", "))

dataset1.createTempView("dataset1")

spark.sql("Select KEY1, distinctUDF(KEY2), ID from dataset1").show

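Since spark.udf.register returns the UserDefinedFunction it registers, the temp view is optional and the same UDF can be applied directly through the DataFrame API. A small sketch, reusing the dataset1 and distinctUDF defined above:

import org.apache.spark.sql.functions.col

// apply the registered UDF column-wise, keeping the original column names
dataset1.withColumn("KEY2", distinctUDF(col("KEY2"))).show
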
【Comments】:

【Answer 3】:
import org.apache.spark.sql._

 val dfUpdated = dataset1.rdd.map {
     case Row(x: String, y: String, z: String) => (x, y.split(",").distinct.mkString(", "), z)
   }.toDF(dataset1.columns: _*)

In spark-shell:

scala> val dataset1 = Seq(("66", "a,b,c,a", "4"), ("67", "a,f,g,t", "0"), ("70", "b,b,b,d", "4")).toDF("KEY1", "KEY2", "ID")    
dataset1: org.apache.spark.sql.DataFrame = [KEY1: string, KEY2: string ... 1 more field]

scala> dataset1.show
+----+-------+---+
|KEY1|   KEY2| ID|
+----+-------+---+
|  66|a,b,c,a|  4|
|  67|a,f,g,t|  0|
|  70|b,b,b,d|  4|
+----+-------+---+

scala> val dfUpdated = dataset1.rdd.map {
           case Row(x: String, y: String, z: String) => (x, y.split(",").distinct.mkString(", "), z)
       }.toDF(dataset1.columns: _*)
dfUpdated: org.apache.spark.sql.DataFrame = [KEY1: string, KEY2: string ... 1 more field]

scala> dfUpdated.show
+----+----------+---+
|KEY1|      KEY2| ID|
+----+----------+---+
|  66|   a, b, c|  4|
|  67|a, f, g, t|  0|
|  70|      b, d|  4|
+----+----------+---+

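A typed alternative that avoids the Row pattern match is to map over the data as tuples. A sketch, assuming the same dataset1 and that spark.implicits._ is in scope (as it is in spark-shell):

// treat each row as a (KEY1, KEY2, ID) tuple and dedupe the middle column
val dfUpdated2 = dataset1.as[(String, String, String)]
  .map { case (k1, k2, id) => (k1, k2.split(",").distinct.mkString(", "), id) }
  .toDF(dataset1.columns: _*)
dfUpdated2.show
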
【Comments】:
