Spark notes: fixing "Unable to load native-hadoop library for your platform"

Posted by 信方互联网硬汉


This note records two fixes for the warning "Unable to load native-hadoop library for your platform", which Spark prints at startup when the JVM cannot find Hadoop's native libraries.

Solution 1: copy the native libraries into the JRE

```shell
# copy Hadoop's native library into the JRE's native-library directory
cp $HADOOP_HOME/lib/native/libhadoop.so $JAVA_HOME/jre/lib/amd64

# build snappy from source, then copy libsnappy.so alongside it
./configure && make && make install
cp libsnappy.so $JAVA_HOME/jre/lib/amd64
```

The root cause is that libhadoop.so and libsnappy.so are missing from the JRE's lib/amd64 directory. Specifically, spark-shell runs on Scala, and Scala runs on the JDK under JAVA_HOME, so that is where the JVM searches for native libraries. Once both files are in place, restarting spark-shell no longer produces the warning.
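Before restarting spark-shell, it can help to confirm the two files actually landed in the right place. A minimal check, assuming the usual HADOOP_HOME/JAVA_HOME layout (the fallback paths below are placeholders for illustration):

```shell
# Placeholder defaults; point these at your real installations.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}
JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-8-openjdk-amd64}

# Report whether each native library is visible to the JRE.
for lib in libhadoop.so libsnappy.so; do
  if [ -e "$JAVA_HOME/jre/lib/amd64/$lib" ]; then
    echo "$lib: present"
  else
    echo "$lib: missing"
  fi
done
```

If Hadoop is on the PATH, `hadoop checknative -a` also reports which native codecs (hadoop, zlib, snappy, ...) were loaded successfully.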

Solution 2: point LD_LIBRARY_PATH at Hadoop's native directory

Add the following line to Spark's conf/spark-env.sh:

    export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native

Alternatively, set it system-wide in /etc/profile:

    export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
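A per-job variant of the same idea, instead of editing spark-env.sh or /etc/profile, is to pass the native-library path on the spark-submit command line via `--driver-library-path` and `spark.executor.extraLibraryPath`. A sketch, where the application class and jar (`com.example.MyApp`, `myapp.jar`) are hypothetical placeholders:

```shell
# Pass Hadoop's native library path to both driver and executors for one job.
spark-submit \
  --driver-library-path "$HADOOP_HOME/lib/native" \
  --conf spark.executor.extraLibraryPath="$HADOOP_HOME/lib/native" \
  --class com.example.MyApp \
  myapp.jar
```

This keeps the fix scoped to a single application, which is useful on shared clusters where you cannot change the global environment.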

