pyspark add min value back to dataframe
Posted: 2019-10-24 20:11:56

Question: I'm trying to find the minimum date in the 'dateclosed' column of a pyspark dataframe. I then want to add a column to my original dataframe so that every record carries that minimum date as 'Open_Date'. This really shouldn't be that hard, but I keep getting errors. I also tried a 'join' approach - creating a field with a single value in both dataframes and joining on it - but ran into errors there as well. Does anyone have a solution?
Code:
tst2_df = tst_df[['dateclosed']].agg({'dateclosed': 'min'})\
    .withColumnRenamed('min(dateclosed)', 'Open_Date')
tst_df.withColumn('Open_Date', lit(tst2_df[['Open_Date']].collect()[0])).show()
Error:
Traceback (most recent call last):
File "/mnt/yarn/usercache/livy/appcache/application_1571940153295_0002/container_1571940153295_0002_01_000001/pyspark.zip/pyspark/sql/functions.py", line 44, in _
jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
File "/mnt/yarn/usercache/livy/appcache/application_1571940153295_0002/container_1571940153295_0002_01_000001/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/mnt/yarn/usercache/livy/appcache/application_1571940153295_0002/container_1571940153295_0002_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/mnt/yarn/usercache/livy/appcache/application_1571940153295_0002/container_1571940153295_0002_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.lit.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [2017-01-01]
at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:78)
at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:164)
at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:164)
at scala.util.Try.getOrElse(Try.scala:79)
at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:163)
at org.apache.spark.sql.functions$.typedLit(functions.scala:127)
at org.apache.spark.sql.functions$.lit(functions.scala:110)
at org.apache.spark.sql.functions.lit(functions.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
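The lit failure above comes from collect() returning a list of Row objects: the [0] index yields a Row, which py4j ships to the JVM as a java.util.ArrayList that lit cannot convert. A minimal sketch of pulling the scalar out first (column and dataframe names assumed from the question, not tested against the original data):

from pyspark.sql import functions as F

# agg returns a one-row dataframe; [0][0] digs the plain Python value
# out of the Row, so lit receives a scalar instead of a Row/list
open_date = tst_df.agg(F.min('dateclosed')).collect()[0][0]
tst_df.withColumn('Open_Date', F.lit(open_date)).show()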
Update:
This hack did the trick, thanks to pault for the tip:
from pyspark.sql import Window
from pyspark.sql.functions import lit, min  # note: this shadows Python's builtin min

tst_df2 = tst_df.withColumn('BS', lit('a'))
w = Window.partitionBy('BS')
tst_df2.select('BS', 'dateclosed', min('dateclosed').over(w).alias('n')).show()
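For what it's worth, the constant 'BS' column isn't strictly needed: calling partitionBy() with no key spans one window over the whole dataframe, which is equivalent to the dummy-column trick (Spark warns that all data moves to a single partition in both cases). A sketch under the same assumed column names:

from pyspark.sql import Window
from pyspark.sql import functions as F

# no partition key: a single window covering every row
w = Window.partitionBy()
tst_df.withColumn('Open_Date', F.min('dateclosed').over(w)).show()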
Comments:
Use Window - this is similar to Adding a group count column to a PySpark dataframe, except using pyspark.sql.functions.min as the aggregation function.
@pault Thanks for getting back to me so quickly. Sorry, I don't think I was clear in the original post. The value should be the same for all records - it would be the minimum close date across the entire column. I'm not clear on how a window function gets me that; won't it only give me the minimum within each partition?
@pault Thanks, I got it with your tip.
No problem, but it's better to either delete the question or post your solution as an answer, rather than adding it to the original question.
Answer 1:
from pyspark.sql import Window
from pyspark.sql.functions import lit, min  # note: this shadows Python's builtin min

tst_df2 = tst_df.withColumn('BS', lit('a'))
w = Window.partitionBy('BS')
tst_df2.select('BS', 'dateclosed', min('dateclosed').over(w).alias('n')).show()
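The join route the question mentions also works without a dummy key: aggregate to a single-row dataframe and cross-join it back. A sketch (crossJoin requires Spark 2.1+; column names assumed from the question):

from pyspark.sql import functions as F

# single-row dataframe holding only the minimum date
min_df = tst_df.agg(F.min('dateclosed').alias('Open_Date'))
# every record picks up the same Open_Date value
tst_df.crossJoin(min_df).show()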