In PySpark, is there any way to dynamically register UDFs using the functions of a Python class given at run time? [duplicate]

Posted: 2019-05-30 07:12:22

Question:

I am new to Python, so apologies in advance if anything about my approach is off.

I have a scenario where clients can provide their own custom Python functions and want them registered as UDFs in PySpark.

My initial idea was to have each class expose a method that returns a dictionary mapping UDF names to function definitions, then import the module and call that method at run time.

An example of such an arbitrary custom-function class (custom_functions.py):

    class CustomFuntions():

        def reverse_statement(self, statement):
            temp_list = statement.split(' ')
            temp_list.reverse()
            return ' '.join(temp_list)

        def get_functions(self):
            return {
                "rev_statement": self.reverse_statement
            }
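The reversal logic itself can be checked outside Spark; a minimal standalone sketch (plain Python, no PySpark required):

```python
# Standalone check of the word-reversal logic behind rev_statement.
def reverse_statement(statement):
    words = statement.split(' ')
    words.reverse()
    return ' '.join(words)

print(reverse_statement('!!! WORLD HELLO'))  # HELLO WORLD !!!
```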

The registration part:

    from importlib import import_module

    class UDFRegistration():

        def __init__(self, spark):
            self.spark = spark

        def get_function_dict(self, module_name, class_name, method_name):
            instance = getattr(import_module(module_name), class_name)
            func_dict = getattr(instance(), method_name)()
            return func_dict

        def register_udf(self, module_name, class_name, method_name):
            f_dict = self.get_function_dict(module_name, class_name, method_name)

            for func_name, func_def in f_dict.items():
                self.spark.udf.register(func_name, func_def)
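The `import_module`/`getattr` lookup chain in `get_function_dict` can be exercised in isolation against a standard-library class (`collections.Counter` here, purely as a stand-in for the client's module) to see how a class and a bound method are resolved by name at run time:

```python
from importlib import import_module

# Resolve a class by name from a module, instantiate it, then
# resolve one of its methods by name - module -> class -> bound method,
# mirroring what get_function_dict does.
cls = getattr(import_module('collections'), 'Counter')
instance = cls('aab')
method = getattr(instance, 'most_common')
print(method(1))  # [('a', 2)]
```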

The main script:

    from pyspark.sql import SparkSession

    CUST_UDF_MODULE = 'custom_functions'
    CUST_UDF_CLASS = 'CustomFuntions'
    CUST_UDF_METHOD = 'get_functions'

    if __name__ == '__main__':

        spark = SparkSession.builder.getOrCreate()
        spark.sparkContext.addPyFile("custom_functions.py")

        print('Registering custom UDF')
        udf_reg = UDFRegistration(spark)
        udf_reg.register_udf(CUST_UDF_MODULE, CUST_UDF_CLASS, CUST_UDF_METHOD)

        spark.sql('select rev_statement(\'!!! WORLD HELLO\')').show()

Running this on my local machine produces the following error.

Console output

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Registering custom UDF
19/05/30 12:17:57 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(Unknown Source)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:155)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:77)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:89)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(Unknown Source)
    at java.lang.ProcessImpl.start(Unknown Source)
    ... 30 more
19/05/30 12:17:57 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    ... (same stack trace as the ERROR above)

19/05/30 12:17:57 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "C:\Users\USERNAME\eclipse-workspace\Workspace\SparkMain.py", line 31, in <module>
    spark.sql('select rev_statement(\'!!! WORLD HELLO\')').show()
  File "C:\Users\USERNAME\AppData\Local\Continuum\anaconda3\lib\site-packages\pyspark\sql\dataframe.py", line 378, in show
    print(self._jdf.showString(n, 20, vertical))
  File "C:\Users\USERNAME\AppData\Local\Continuum\anaconda3\lib\site-packages\py4j\java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\Users\USERNAME\AppData\Local\Continuum\anaconda3\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\Users\USERNAME\AppData\Local\Continuum\anaconda3\lib\site-packages\py4j\protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o30.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(Unknown Source)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:155)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:77)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:89)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(

Comments:

Cannot run program "python": CreateProcess error=2, The system cannot find the file specified seems unrelated to the code you shared (which looks reasonable); it is more a matter of system configuration.

Answer 1:

As user10938362 pointed out, the problem was in the system configuration, not in the code.

After setting PYSPARK_PYTHON in Eclipse's PyDev interpreter settings, the code executed successfully:

PYSPARK_PYTHON = C:\Users\User\anaconda3\python.exe
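As an alternative to configuring this in the IDE, the environment variable can also be set from the script itself, before the SparkSession is created. A sketch, assuming the interpreter running the driver script is the one the workers should use:

```python
import os
import sys

# Point Spark's Python workers at the interpreter running this script,
# so executors can spawn a Python process even when "python" is not on PATH.
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
```

These lines must run before `SparkSession.builder.getOrCreate()`, since the worker command is read when the session starts.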
