Load dataframe from PySpark
Posted: 2020-11-29 12:42:53

Question: I'm trying to connect to an MS SQL database from PySpark using spark.read.jdbc:

import os
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)
df = spark.read \
    .format('jdbc') \
    .option('url', 'jdbc:sqlserver://local:1433') \
    .option('user', 'sa') \
    .option('password', '12345') \
    .option('dbtable', '(select COL1, COL2 from tbl1 WHERE COL1 = 2)')
Then I call df.load() and it returns this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\spark\spark\python\pyspark\sql\readwriter.py", line 172, in load
return self._df(self._jreader.load())
File "C:\spark\spark\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1256, in __call__
File "C:\spark\spark\python\pyspark\sql\utils.py", line 63, in deco
return f(*a, **kw)
File "C:\spark\spark\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o42.load.
: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(Unknown Source)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:104)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
What's wrong?
Answer: You need to download the JDBC driver and put it into your spark/jars folder.

The SQL Server JDBC driver can be downloaded from https://docs.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-ver15
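Once the driver jar is on Spark's classpath, a minimal sketch of the working read might look like the following. The driver class com.microsoft.sqlserver.jdbc.SQLServerDriver is the standard one shipped with Microsoft's driver; the host, credentials, and query are placeholders carried over from the question. Two further issues in the original snippet are worth noting: Spark places the dbtable string inside a FROM clause, and SQL Server requires a derived table there to carry an alias; also, 'local' in the URL is presumably meant to be 'localhost'.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The subquery passed as 'dbtable' is aliased ('AS t'); without the alias
# SQL Server rejects the query Spark generates around it.
df = spark.read \
    .format('jdbc') \
    .option('url', 'jdbc:sqlserver://localhost:1433') \
    .option('driver', 'com.microsoft.sqlserver.jdbc.SQLServerDriver') \
    .option('user', 'sa') \
    .option('password', '12345') \
    .option('dbtable', '(select COL1, COL2 from tbl1 WHERE COL1 = 2) AS t') \
    .load()

Alternatively, instead of copying the jar into spark/jars, it can be supplied at launch time, e.g. spark-submit --jars /path/to/mssql-jdbc-<version>.jar your_script.py (path and version here are placeholders).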