How to insert a custom function within a For loop in PySpark?
【Posted】: 2021-02-12 10:27:12
【Question】: I am facing a Spark challenge in Azure Databricks. I have a dataset:
+------------------+----------+-------------------+---------------+
| OpptyHeaderID| OpptyID| Date|BaseAmountMonth|
+------------------+----------+-------------------+---------------+
|0067000000i6ONPAA2|OP-0164615|2014-07-27 00:00:00| 4375.800000|
|0065w0000215k5kAAA|OP-0218055|2020-12-23 00:00:00| 4975.000000|
+------------------+----------+-------------------+---------------+
Now I need to append rows to this dataframe in a loop. I want to replicate the following function in PySpark:
Result = ()
for i in (1..12):
    -- pseudo-SQL: shift Date forward by i months, keep the other columns
    select a.OpptyHeaderID
          ,a.OpptyID
          ,dateadd(MONTH, i, a.Date) as Date
          ,a.BaseAmountMonth
    from FinalOut a
    Result = Result.Append()
    print(i)
The Date in each appended row must advance by one month per row (a rolling 12 months). It should look like this:
+------------------+----------+-------------------+---------------+
| OpptyHeaderID| OpptyID| Date|BaseAmountMonth|
+------------------+----------+-------------------+---------------+
|0067000000i6ONPAA2|OP-0164615|2014-07-27 00:00:00| 4375.800000|
|0067000000i6ONPAA2|OP-0164615|2014-08-27 00:00:00| 4375.800000|
|0067000000i6ONPAA2|OP-0164615|2014-09-27 00:00:00| 4375.800000|
.
.
.
|0067000000i6ONPAA2|OP-0164615|2015-06-27 00:00:00| 4375.800000|
|0065w0000215k5kAAA|OP-0218055|2020-12-23 00:00:00| 4975.000000|
|0065w0000215k5kAAA|OP-0218055|2021-01-23 00:00:00| 4975.000000|
|0065w0000215k5kAAA|OP-0218055|2021-02-23 00:00:00| 4975.000000|
.
.
.
|0065w0000215k5kAAA|OP-0218055|2021-11-23 00:00:00| 4975.000000|
+------------------+----------+-------------------+---------------+
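For reference, a direct PySpark translation of that loop builds one copy of the dataframe per month offset and unions them. A minimal sketch, assuming the source dataframe is bound to df:

from functools import reduce
import pyspark.sql.functions as F

# One shifted copy per month offset 0..11. add_months returns a date,
# so cast back to timestamp to keep the original column type.
shifted = [
    df.withColumn('Date', F.add_months('Date', i).cast('timestamp'))
    for i in range(12)
]
# Union the twelve copies; workable, but the explode/sequence approach
# in the answer below avoids the twelve-way union plan.
result = reduce(lambda a, b: a.unionByName(b), shifted)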
[Edit 1]
How can the interval length be made dynamic, based on another field?
+------------------+----------+-------------------+---------------+--------+
| OpptyHeaderID| OpptyID| Date|BaseAmountMonth|Interval|
+------------------+----------+-------------------+---------------+--------+
|0067000000i6ONPAA2|OP-0164615|2014-07-27 00:00:00| 4375.800000| 12|
|0065w0000215k5kAAA|OP-0218055|2020-12-23 00:00:00| 4975.000000| 7|
+------------------+----------+-------------------+---------------+--------+
【Answer 1】: You can explode a sequence of timestamps:
import pyspark.sql.functions as F

df2 = df.withColumn(
    'Date',
    # Build one timestamp per month, from Date through
    # Date + (Interval - 1) months, then explode into one row each.
    F.expr("""
        explode(
            sequence(
                timestamp(Date),
                add_months(timestamp(Date), `Interval` - 1),
                interval 1 month
            )
        )
    """)
)
df2.show(99)
+------------------+----------+-------------------+---------------+--------+
| OpptyHeaderID| OpptyID| Date|BaseAmountMonth|Interval|
+------------------+----------+-------------------+---------------+--------+
|0067000000i6ONPAA2|OP-0164615|2014-07-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2014-08-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2014-09-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2014-10-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2014-11-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2014-12-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2015-01-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2015-02-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2015-03-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2015-04-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2015-05-27 00:00:00| 4375.800000| 12|
|0067000000i6ONPAA2|OP-0164615|2015-06-27 00:00:00| 4375.800000| 12|
|0065w0000215k5kAAA|OP-0218055|2020-12-23 00:00:00| 4975.000000| 7|
|0065w0000215k5kAAA|OP-0218055|2021-01-23 00:00:00| 4975.000000| 7|
|0065w0000215k5kAAA|OP-0218055|2021-02-23 00:00:00| 4975.000000| 7|
|0065w0000215k5kAAA|OP-0218055|2021-03-23 00:00:00| 4975.000000| 7|
|0065w0000215k5kAAA|OP-0218055|2021-04-23 00:00:00| 4975.000000| 7|
|0065w0000215k5kAAA|OP-0218055|2021-05-23 00:00:00| 4975.000000| 7|
|0065w0000215k5kAAA|OP-0218055|2021-06-23 00:00:00| 4975.000000| 7|
+------------------+----------+-------------------+---------------+--------+
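The same logic can be expressed with the DataFrame API instead of a SQL expression string. A sketch, assuming the same df with its Interval column and Spark 3.x (where add_months accepts a column for the month count):

from pyspark.sql import functions as F

df2 = df.withColumn(
    'Date',
    F.explode(
        F.sequence(
            F.col('Date').cast('timestamp'),
            # add_months yields a date; cast back so the sequence
            # start and stop share the timestamp type.
            F.add_months('Date', F.col('Interval') - 1).cast('timestamp'),
            F.expr('interval 1 month')
        )
    )
)

Either form produces one row per month without a Python-side loop or union.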