Python pyspark.sql code examples

Editor's note: this post, compiled by the cha138.com editors, walks through a handful of pyspark.sql code examples; hopefully it is of some reference value.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Get (or create) a SparkSession; in the pyspark shell this is already available as `spark`
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 1.2345), (2, 9.8765)], ["col1", "col2"])

# Cast a column to another type
df.select(F.col("col2").cast("int")).show()
#  ↓
# +----+
# |col2|
# +----+
# |   1|
# |   9|
# +----+
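
# (Not in the original post: a small variation, assuming the same df as above.)
# cast() also accepts an explicit DataType, and withColumn keeps the original column
# alongside the converted one.
from pyspark.sql.types import IntegerType

df.withColumn("col2_int", F.col("col2").cast(IntegerType())).show()
# → adds a "col2_int" column with the truncated values 1 and 9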

# Round to one decimal place
df.select(F.round("col2", 1)).show()
#  ↓
# +--------------+
# |round(col2, 1)|
# +--------------+
# |           1.2|
# |           9.9|
# +--------------+
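
# (Not in the original post: a sketch comparing round with floor/ceil, assuming the same df.)
# alias() gives the output columns readable names instead of "round(col2, 1)".
df.select(
    F.round("col2", 1).alias("rounded"),
    F.floor("col2").alias("floored"),
    F.ceil("col2").alias("ceiled"),
).show()
# → rounded: 1.2 / 9.9, floored: 1 / 9, ceiled: 2 / 10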

# Compute the correlation coefficient between the two columns
from pyspark.ml.stat import Correlation
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(inputCols=["col1", "col2"], outputCol="features")
df_vector = assembler.transform(
    df.select(F.col("col1").cast("double").alias("col1"),
              F.col("col2").cast("double").alias("col2"))
)
pearsonCorr = Correlation.corr(df_vector, "features", "pearson").collect()[0][0]
print(str(pearsonCorr).replace("nan", "NaN"))
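
# (Not in the original post: for just two numeric columns, DataFrame.stat.corr is a
# shorter route to the same Pearson coefficient, without building a vector column.)
pearson = df.stat.corr("col1", "col2")
print(pearson)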

That covers the main pyspark.sql code examples above; hopefully they are useful as a reference.