Two ways to access Hive from Spark (Scala)

Posted by kwzblog



Method 1: query HiveServer2 directly over JDBC:

import java.sql.{Connection, DriverManager, ResultSet, Statement}

def operatorHive(): Unit = {
  // Register the HiveServer2 JDBC driver
  Class.forName("org.apache.hive.jdbc.HiveDriver")
  val url = "jdbc:hive2://192.168.2.xxx:10000"
  // The password below is a placeholder; the original value was mangled by the page's email-protection script
  val connection: Connection = DriverManager.getConnection(url, "root", "yourPassword")
  val createStatement: Statement = connection.createStatement()
  val query: ResultSet = createStatement.executeQuery("select * from diagbot.ord_lis_trend limit 2")
  while (query.next()) {
    println(query.getString(1)) // print the first column of each row
  }
  query.close()
  createStatement.close()
  connection.close()
}
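
Method 1 treats HiveServer2 as an ordinary JDBC endpoint, so the only build requirement is the Hive JDBC driver (plus its Hadoop runtime dependency) on the classpath. A minimal build.sbt sketch, where the version numbers are assumptions and should be matched to the cluster's Hive and Hadoop versions:

// build.sbt (sketch; versions are assumptions, match them to your cluster)
libraryDependencies ++= Seq(
  "org.apache.hive" % "hive-jdbc" % "1.2.1",
  "org.apache.hadoop" % "hadoop-common" % "2.7.3"
)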

Method 2: use a SparkSession with Hive support enabled:

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode, SparkSession}

object SparkOperaterHive {
  val sparkConf: SparkConf = new SparkConf().setAppName(SparkOperaterHive.getClass.getSimpleName)
  // enableHiveSupport() requires hive-site.xml on the classpath so the session can reach the metastore
  val sparkSession: SparkSession = SparkSession.builder().config(sparkConf).enableHiveSupport().getOrCreate()
  val sc: SparkContext = sparkSession.sparkContext
  val sqlContext: SQLContext = sparkSession.sqlContext

  def main(args: Array[String]): Unit = {

    import sparkSession.implicits._
    // Query a Hive table directly through the Hive-enabled session
    val sql1: DataFrame = sparkSession.sql("select * from janggan.diagnosismedication")
    val properties: Properties = new Properties()
    properties.put("user", "root")
    properties.put("password", "yourPassword") // placeholder; the original value was mangled by the page's email protection
    properties.put("driver", "com.mysql.jdbc.Driver")
    // Optionally persist the result to MySQL over JDBC (see the sketch below):
    // sql1.write.mode(SaveMode.Append).jdbc(url, "doc_info_hive", properties)
    println("total rows: " + sql1.count())
    println("second column name: " + sql1.columns(1))

    sparkSession.stop()
  }
}
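
The commented-out write above is the second half of a common pipeline: query Hive, then persist the result to MySQL over JDBC. A minimal sketch of that line as it could appear inside main, reusing the sql1 and properties values defined there; the MySQL host and database name below are placeholders, while doc_info_hive is the target table named in the original comment:

// Placeholder endpoint; only the table name comes from the original code
val mysqlUrl = "jdbc:mysql://192.168.2.xxx:3306/somedb"
sql1.write.mode(SaveMode.Append).jdbc(mysqlUrl, "doc_info_hive", properties)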

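Note that import sparkSession.implicits._ is never exercised in main: sql(...), count(), and columns all work without it. It matters once you switch to typed Dataset operations, because it brings the required Encoder instances into scope. A minimal sketch, again assuming it runs inside main:

import org.apache.spark.sql.Dataset // in addition to the imports above

// map on a DataFrame produces a Dataset[String] and needs the implicit Encoder[String]
val asLines: Dataset[String] = sql1.map(row => row.mkString(","))
asLines.show(5, truncate = false)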
