MapReduce error: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio
Posted by ZSYL
Problem description
A WordCount MapReduce test case fails when run locally. Environment: Windows 11, JDK 13, Hadoop 3.1.3, IDEA.
When the Driver class is executed, the following problem occurs:
package com.zs.mapreduce.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WordCountDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Get the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Set the jar path
        job.setJarByClass(WordCountDriver.class);
        // 3. Associate the mapper and reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // 4. Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // 5. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("D:\\software\\hadoop\\input\\inputword"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\software\\hadoop\\output\\output1"));
        // 7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
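The WordCountMapper and WordCountReducer classes referenced by this driver are not shown in the original post. For context, a minimal sketch of the standard WordCount mapper and reducer is given below; it is an assumption about what they look like, not the author's original code.

// Hypothetical sketch: the Mapper and Reducer the driver references are not shown
// in the original post; this is the conventional WordCount pattern, assumed here.
package com.zs.mapreduce.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final Text outKey = new Text();
    private final IntWritable one = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split each input line on whitespace and emit (word, 1)
        for (String word : value.toString().split("\\s+")) {
            if (!word.isEmpty()) {
                outKey.set(word);
                context.write(outKey, one);
            }
        }
    }
}

class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts emitted for each word
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        total.set(sum);
        context.write(key, total);
    }
}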
Full exception message:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
The stack trace is as follows:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:980)
at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:314)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:377)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:151)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:132)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:116)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:125)
at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:171)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:758)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:244)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
……
The error is raised at:
// 7. Submit the job
boolean result = job.waitForCompletion(true);
Cause analysis
This exception is common when debugging Hadoop locally on Windows. NativeIO$Windows.access0 is a native method that Hadoop resolves from hadoop.dll, so the error almost always means the Windows native dependencies (winutils.exe and hadoop.dll) are missing or not fully configured.
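A quick way to confirm this is to check whether HADOOP_HOME points at an installation that actually contains the native binaries. The following standalone check is a rough sketch; it assumes the usual Hadoop-on-Windows layout with winutils.exe and hadoop.dll under %HADOOP_HOME%\bin.

import java.io.File;

public class NativeDepsCheck {
    public static void main(String[] args) {
        // HADOOP_HOME must point at the local Hadoop installation
        String hadoopHome = System.getenv("HADOOP_HOME");
        System.out.println("HADOOP_HOME = " + hadoopHome);
        if (hadoopHome == null) {
            System.out.println("HADOOP_HOME is not set - configure it and restart the IDE.");
            return;
        }
        // The Windows native dependencies are expected in %HADOOP_HOME%\bin
        File winutils = new File(hadoopHome, "bin\\winutils.exe");
        File hadoopDll = new File(hadoopHome, "bin\\hadoop.dll");
        System.out.println("winutils.exe present: " + winutils.exists());
        System.out.println("hadoop.dll   present: " + hadoopDll.exists());
    }
}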
Solution 1
Download the Windows native dependencies that match your Hadoop version (for Hadoop 3.1.3 you can message me privately), then put the hadoop.dll file into the bin folder of the Hadoop installation (with the HADOOP_HOME environment variable configured) and restart the computer.
Alternatively, copy hadoop.dll directly into C:\Windows\System32; that worked for me, though some machines may still need a restart.
In my case the error was probably because, after double-clicking winutils.exe, the Windows dependency setup did not have permission to copy hadoop.dll into System32, so I had to copy it in manually.
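After copying the DLL, you can verify that Hadoop now finds its native code before re-running the job. A small sketch using Hadoop's own NativeCodeLoader (run with the same Hadoop 3.1.3 dependencies on the classpath); a true result means hadoop.dll was loaded successfully:

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLoadCheck {
    public static void main(String[] args) {
        // Returns true only if Hadoop managed to load its native library (hadoop.dll on Windows)
        System.out.println("Native hadoop library loaded: " + NativeCodeLoader.isNativeCodeLoaded());
    }
}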
With that in place, the MapReduce job runs and writes its output successfully!
Solution 2
Posts found online suggest creating the missing org.apache.hadoop.io.nativeio package yourself, i.e. adding your own copy of the NativeIO class to the project so it shadows Hadoop's. Most of those examples target Hadoop 2.x, though; when I copied the code it would not compile, so I went with the simpler first approach, which solved the problem.
For reference: creating a NativeIO.java of your own is the second workaround.
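For completeness, the idea behind that second workaround is roughly the following (a sketch, not a drop-in file): copy the full NativeIO.java source for your Hadoop version into a package named org.apache.hadoop.io.nativeio in your own project so it shadows the class on the classpath, then change the Windows.access method so it no longer calls the missing native access0. The excerpt below shows only the changed method; everything else in the copied class stays as in the Hadoop source.

// Excerpt only - place inside your own copy of NativeIO.java (inner class Windows)
// under the package org.apache.hadoop.io.nativeio so it shadows Hadoop's class.
public static boolean access(String path, AccessRight desiredAccess)
        throws IOException {
    // Bypass the native permission check that fails on Windows without hadoop.dll.
    return true;
    // return access0(path, desiredAccess.accessRight());  // original line
}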