How to get postgres command 'nth_value' equivalent in pyspark Hive SQL for partition over?
Posted: 2020-07-21 23:00:21
I am working through this example:
https://www.windowfunctions.com/questions/grouping/6
There, the solution uses the Oracle/Postgres window function nth_value, which is not available in the Hive SQL dialect used by pyspark, and I would like to know how to get the same result in pyspark.
Postgres SQL code:
select distinct(breed),
nth_value(weight, 2) over ( partition by breed order by weight
RANGE BETWEEN UNBOUNDED PRECEDING
AND UNBOUNDED FOLLOWING
) as imagined_weight
from cats
order by breed
Question: how do I get the following result with pyspark?
breed              imagined_weight
British Shorthair  4.8
Maine Coon         5.4
Persian            4.5
Siamese            6.1
Data
import numpy as np
import pandas as pd
import pyspark
from pyspark.sql.types import *
from pyspark.sql import functions as F
from pyspark.sql.window import Window
from pyspark import SparkConf, SparkContext, SQLContext
spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate()
sc = spark.sparkContext
sqlContext = SQLContext(sc)
sqc = sqlContext
# spark_df = sqlContext.createDataFrame(pandas_df)
df = pd.DataFrame({
    'name': [
        'Molly', 'Ashes', 'Felix', 'Smudge', 'Tigger', 'Alfie', 'Oscar',
        'Millie', 'Misty', 'Puss', 'Smokey', 'Charlie'
    ],
    'breed': [
        'Persian', 'Persian', 'Persian', 'British Shorthair',
        'British Shorthair', 'Siamese', 'Siamese', 'Maine Coon', 'Maine Coon',
        'Maine Coon', 'Maine Coon', 'British Shorthair'
    ],
    'weight': [4.2, 4.5, 5.0, 4.9, 3.8, 5.5, 6.1, 5.4, 5.7, 5.1, 6.1, 4.8],
    'color': [
        'Black', 'Black', 'Tortoiseshell', 'Black', 'Tortoiseshell', 'Brown',
        'Black', 'Tortoiseshell', 'Brown', 'Tortoiseshell', 'Brown', 'Black'
    ],
    'age': [1, 5, 2, 4, 2, 5, 1, 5, 2, 2, 4, 4]
})
schema = StructType([
    StructField('name', StringType(), True),
    StructField('breed', StringType(), True),
    StructField('weight', DoubleType(), True),
    StructField('color', StringType(), True),
    StructField('age', IntegerType(), True),
])
sdf = sqlContext.createDataFrame(df, schema)
sdf.createOrReplaceTempView("cats")
spark.sql('select * from cats limit 2').show()
My attempt so far
# My attempt
q = """
select
distinct(breed),
( max(case when rn = 2 then weight end)
over(partition by breed order by weight
RANGE BETWEEN UNBOUNDED PRECEDING
AND UNBOUNDED FOLLOWING)
) imagined_weight
from (
select
c.*,
row_number() over(order by weight) rn
from cats c
) c
"""
spark.sql(q).show()
References
How to get postgres command 'nth_value' equivalent in pyspark Hive SQL?
Comments:
What is wrong with your code?
Answer 1:
If you want the second-lowest weight for each breed:
select breed,
max(case when seqnum = 2 then weight end) as imagined_weight
from (select c.*, row_number() over (partition by breed order by weight) as seqnum
from cats c
) c
group by breed;
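For reference, here is a minimal pyspark DataFrame-API sketch of the same approach (a row_number window partitioned by breed, then the weight of the second-lightest row per breed). It assumes the sdf DataFrame created above; the variable names w, ranked and imagined are only illustrative.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# rank cats within each breed from lightest to heaviest
w = Window.partitionBy('breed').orderBy('weight')
ranked = sdf.withColumn('rn', F.row_number().over(w))

# keep the weight of the second-lightest cat per breed
imagined = (ranked
    .groupBy('breed')
    .agg(F.max(F.when(F.col('rn') == 2, F.col('weight'))).alias('imagined_weight'))
    .orderBy('breed'))
imagined.show()

Note also that Spark 3.1 and later ship nth_value both in Spark SQL and as pyspark.sql.functions.nth_value, so on a recent enough version the Postgres query can be translated almost verbatim.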