Convert a DataFrame into a string in Scala
Posted: 2017-10-11 15:34:19
Question: I am trying to pull the Hive jar into my project through sbt, but I get the error below. Has anyone run into this?
Also, please tell me which Hive version is the right one for writing a GenericUDF class.
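(For context on the GenericUDF part of the question: org.apache.hadoop.hive.ql.udf.generic.GenericUDF is the class in hive-exec that custom Hive UDFs extend, which is why the dependency below is needed at compile time. The following is a minimal, purely illustrative sketch; the class name and upper-casing behavior are made up, and it assumes hive-exec is on the compile classpath.)

import org.apache.hadoop.hive.ql.udf.generic.GenericUDF
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF.DeferredObject
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory

// Hypothetical UDF that upper-cases its single string argument.
class UpperCaseUDF extends GenericUDF {
  // Declare the return type: this UDF produces a string.
  override def initialize(arguments: Array[ObjectInspector]): ObjectInspector =
    PrimitiveObjectInspectorFactory.javaStringObjectInspector

  // Called once per row with the lazily evaluated arguments.
  override def evaluate(arguments: Array[DeferredObject]): AnyRef = {
    val arg = arguments(0).get()
    if (arg == null) null else arg.toString.toUpperCase
  }

  // Used by EXPLAIN and error messages to render the call.
  override def getDisplayString(children: Array[String]): String =
    "upper_case_udf(" + children.mkString(", ") + ")"
}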
Here is my sbt build file:
name := "Test"
version := "0.1"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.hive" % "hive-exec" % "1.2.1"
Here is the error message I get:
Error: Error while importing SBT project:
...
[error] at sbt.internal.LibraryManagement$.cachedUpdate(LibraryManagement.scala:118)
[error] at sbt.Classpaths$.$anonfun$updateTask$5(Defaults.scala:2353)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:42)
[error] at sbt.std.Transform$$anon$4.work(System.scala:64)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:257)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at sbt.Execute.work(Execute.scala:266)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:257)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:167)
[error] at sbt.CompletionService$$anon$2.call(CompletionService.scala:32)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] (*:update) sbt.librarymanagement.ResolveException: unresolved dependency: org.pentaho#pentaho-aggdesigner-algorithm;5.1.5-jhyde: not found
[error] (*:ssExtractDependencies) sbt.librarymanagement.ResolveException: unresolved dependency: org.pentaho#pentaho-aggdesigner-algorithm;5.1.5-jhyde: not found
[error] Total time: 20 s, completed Oct 11, 2017 4:27:03 PM
See complete log in file:/Users/spachari/Library/Logs/IdeaIC2017.2/sbt.last.log
Answer 1: Please try the following sbt build; it should help.
name := "测试" 版本:=“0.1” scalaVersion := "2.11.8" libraryDependencies ++= Seq("org.apache.hive" % "hive-exec" % "1.2.1").map(_.exclude("org.pentaho", "pentaho- aggdesigner-算法"))