Not allowing to set spark.sql.warehouse.dir, it should be set statically for cross-session usages, using Java

Posted: 2020-08-17 06:57:39

Question:

I am trying to learn Spark, but I am running into a problem here: the warning "Not allowing to set spark.sql.warehouse.dir, it should be set statically for cross-session usages", followed by the exception Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/DistributedFileSystem. I am using a Windows 10 PC.

Main class:

package com.rakib;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.util.logging.Level;
import java.util.logging.Logger;

public class App {
    public static void main(String[] args) {

        // Point the Hadoop libraries at the local winutils installation (Windows only).
        System.setProperty("hadoop.home.dir", "c:/hadoop");
        Logger.getLogger("org.apache").setLevel(Level.WARNING);

        // The warehouse dir is set in the SparkSession options, which is what the WARN in the log below refers to.
        SparkSession session = SparkSession.builder().appName("SparkSQL").master("local[*]")
                .config("spark.sql.warehouse.dir", "file:///c:/temp/")
                .getOrCreate();

        Dataset<Row> dataSet = session.read().option("header", true).csv("src/main/resources/student.csv");
        dataSet.show();

        long numberOfRows = dataSet.count();
        System.out.println("Total : " + numberOfRows);

        session.close();
    }
}

Exception:

20/08/17 12:12:27 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:///c:/temp/').
20/08/17 12:12:27 INFO SharedState: Warehouse path is 'file:///c:/temp/'.
20/08/17 12:12:27 WARN SharedState: Not allowing to set spark.sql.warehouse.dir or hive.metastore.warehouse.dir in SparkSession's options, it should be set statically for cross-session usages
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/DistributedFileSystem
    at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.listLeafFiles(InMemoryFileIndex.scala:316)
    at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.$anonfun$bulkListLeafFiles$1(InMemoryFileIndex.scala:195)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:187)
    at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:135)
    at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:98)
    at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:70)
    at org.apache.spark.sql.execution.datasources.DataSource.createInMemoryFileIndex(DataSource.scala:561)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:399)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:279)
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:268)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:268)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:705)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:535)
    at com.rakib.App.main(App.java:21)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.DistributedFileSystem
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 22 more
20/08/17 12:12:28 INFO SparkContext: Invoking stop() from shutdown hook
20/08/17 12:12:28 INFO SparkUI: Stopped Spark web UI at http://DESKTOP-3147U79:4040
20/08/17 12:12:28 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/08/17 12:12:28 INFO MemoryStore: MemoryStore cleared
20/08/17 12:12:28 INFO BlockManager: BlockManager stopped
20/08/17 12:12:28 INFO BlockManagerMaster: BlockManagerMaster stopped
20/08/17 12:12:28 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/08/17 12:12:28 INFO SparkContext: Successfully stopped SparkContext
20/08/17 12:12:28 INFO ShutdownHookManager: Shutdown hook called
20/08/17 12:12:28 INFO ShutdownHookManager: Deleting directory C:\Users\itc\AppData\Local\Temp\spark-ab377bad-43d5-48ad-a938-b99234abe546

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>Test_One</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.3.0</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

    </dependencies>
</project>

Comments:

Did you set HADOOP_HOME and winutils.exe?

Yes sir, I added hadoop/bin/winutils.exe.

Actually, a Spark test does not need Hadoop at all. Remove all the Hadoop-related options and always read the paths from the local file system.
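For reference, here is a minimal sketch of what the last comment suggests (the class name LocalOnlyApp is made up for illustration): no hadoop.home.dir, no spark.sql.warehouse.dir, just a local-mode session reading the CSV from the local file system.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LocalOnlyApp {
    public static void main(String[] args) {
        // Plain local-mode session; no Hadoop-specific options are configured.
        SparkSession session = SparkSession.builder()
                .appName("SparkSQL")
                .master("local[*]")
                .getOrCreate();

        // The relative path is resolved against the local file system.
        Dataset<Row> dataSet = session.read()
                .option("header", true)
                .csv("src/main/resources/student.csv");
        dataSet.show();

        session.close();
    }
}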

Answer 1:

Please add the following dependencies to your pom.xml and give it a try; it should work, since the org.apache.hadoop.hdfs.DistributedFileSystem class is part of the hadoop-hdfs-client:3.3.0 dependency. Reference: https://repo1.maven.org/maven2/org/apache/hadoop/

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs-client</artifactId>
    <version>3.3.0</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.3.0</version>
</dependency>

Your pom.xml with the dependencies updated:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>Test_One</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.3.0</version>
        </dependency>
        
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs-client</artifactId>
            <version>3.3.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.3.0</version>
        </dependency>
    </dependencies>
</project>

So please give it a try and let me know whether you can proceed with your Spark run.
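If you want a quick, dependency-only sanity check before re-running the Spark job, a small sketch like the following (the class name ClasspathCheck is just illustrative) confirms that the missing class now resolves:

public class ClasspathCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Succeeds once hadoop-hdfs-client is on the classpath;
        // throws ClassNotFoundException otherwise, like the original stack trace.
        Class<?> cls = Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem");
        System.out.println("Found: " + cls.getName());
    }
}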

Comments:

Thank you very much sir, it is working fine. Love and respect from the bottom of my heart.

Glad it worked for you; feel free to accept the answer - meta.stackexchange.com/questions/5234/…

Yes sir, I will do that once my reputation reaches 15 points; at the moment I am not able to.
