How to develop Hadoop programs with IntelliJ IDEA


Reference answer A:
(1) Preparation
1) Install JDK 6 or JDK 7.
2) Install Scala 2.10.x (mind the version).
3) Download the latest IntelliJ IDEA (this article uses IntelliJ IDEA Community Edition 13.1.1 as an example; the interface layout may differ between versions).
4) Unpack the downloaded IntelliJ IDEA and install the Scala plugin as follows:
choose "Configure" –> "Plugins" –> "Browse repositories", search for "scala", and install it.

(2) Setting up a Spark source-reading environment (requires internet access)
One approach is to choose "Import Project" –> select the Spark source directory –> "SBT". IntelliJ will then recognize the SBT build files automatically and download the external jar dependencies. The whole process takes a long time, depending on your network (not recommended on Windows, where you may hit various problems); expect anywhere from tens of minutes to several hours. Note that the download uses git, so install git beforehand.
The second approach is to generate the IntelliJ project files on Linux first, then open the project directly in IntelliJ IDEA via "Open Project". To generate the project files on Linux (git must be installed; Scala need not be, as sbt downloads it automatically), run the following in the Spark source root: sbt/sbt gen-idea
Note: if you read the source on Windows, it is best to generate the project files on Linux first and then import them into IntelliJ IDEA on Windows.
Reference answer B:
Actually, you have misunderstood what Hadoop is really for. First of all, Hadoop is not suited to developing web applications. Hadoop's strength is large-scale distributed data processing: it handles data analysis and uses a distributed database (HBase) for storage. However, a defining trait of Hadoop is that all data-processing jobs are batch jobs, which means Hadoop... [answer truncated] (This answer was accepted by the asker.)

Developing a MapReduce program locally in IDEA and submitting it to a remote Hadoop cluster

Develop the MapReduce program in IDEA and simply hit run: the job is submitted to a remote Hadoop cluster for execution.

In brief: develop the MapReduce program locally –> set YARN mode –> run directly from the IDE –> the MapReduce job executes on the remote cluster.

In full: develop the MapReduce program locally –> set YARN mode –> build once to produce the jar file –> add job.setJar("mapreduce/build/libs/mapreduce-0.1.jar"); –> run directly in IDEA –> the MapReduce job executes on the remote cluster.
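The jar in that step comes from running Gradle's build task once (which, with the group/version in the build.gradle below, produces mapreduce/build/libs/mapreduce-0.1.jar). The heart of the setup is the handful of client-side settings that point the job at the remote cluster. A minimal sketch of just that part, using the same addresses and jar path as the full listing below (adjust them for your own cluster):

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://vbusuanzi:9000/");               // remote HDFS
conf.set("mapreduce.framework.name", "yarn");                     // submit to YARN instead of the local runner
conf.set("yarn.resourcemanager.address", "192.168.56.101:8050"); // ResourceManager of the remote cluster
Job job = Job.getInstance(conf, "test");
job.setJar("mapreduce/build/libs/mapreduce-0.1.jar");             // ship the locally built jar to the cluster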


Source code
build.gradle

plugins {
    id 'java'
}

group 'com.ruizhiedu'
version '0.1'

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

// note: 'compile'/'testCompile' were removed in Gradle 7+; newer builds use 'implementation'/'testImplementation'
dependencies {
    compile group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.1.0'
    compile group: 'org.apache.hadoop', name: 'hadoop-mapreduce-client-core', version: '3.1.0'
    compile group: 'org.apache.hadoop', name: 'hadoop-mapreduce-client-jobclient', version: '3.1.0'

    testCompile group: 'junit', name: 'junit', version: '4.12'
}

Java file

The input and output paths are hard-coded below, so the program can be run directly; there is no need to set program arguments in IDEA's run configuration.

wc.java

package com;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.StringUtils;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.*;


/**
 * @author wangxiaolei(王小雷)
 * @since 2018/11/22
 */
public class wc {
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        static enum CountersEnum { INPUT_WORDS }

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        private boolean caseSensitive;
        private Set<String> patternsToSkip = new HashSet<String>();

        private Configuration conf;
        private BufferedReader fis;

        @Override
        public void setup(Context context) throws IOException,
                InterruptedException {
            conf = context.getConfiguration();
            caseSensitive = conf.getBoolean("wordcount.case.sensitive", true);
            if (conf.getBoolean("wordcount.skip.patterns", false)) {
                URI[] patternsURIs = Job.getInstance(conf).getCacheFiles();
                for (URI patternsURI : patternsURIs) {
                    Path patternsPath = new Path(patternsURI.getPath());
                    String patternsFileName = patternsPath.getName().toString();
                    parseSkipFile(patternsFileName);
                }
            }
        }

        private void parseSkipFile(String fileName) {
            try {
                fis = new BufferedReader(new FileReader(fileName));
                String pattern = null;
                while ((pattern = fis.readLine()) != null) {
                    patternsToSkip.add(pattern);
                }
            } catch (IOException ioe) {
                System.err.println("Caught exception while parsing the cached file '"
                        + StringUtils.stringifyException(ioe));
            }
        }

        @Override
        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            String line = (caseSensitive) ?
                    value.toString() : value.toString().toLowerCase();
            for (String pattern : patternsToSkip) {
                line = line.replaceAll(pattern, "");
            }
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
                Counter counter = context.getCounter(CountersEnum.class.getName(),
                        CountersEnum.INPUT_WORDS.toString());
                counter.increment(1);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();
        conf.set("yarn.resourcemanager.address", "192.168.56.101:8050");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("fs.defaultFS", "hdfs://vbusuanzi:9000/");
//        conf.set("mapred.jar", "mapreduce/build/libs/mapreduce-0.1.jar"); // the freshly built jar can also be set here
        conf.set("mapred.job.tracker", "vbusuanzi:9001");
//        conf.set("mapreduce.app-submission.cross-platform", "true"); // Windows developers need to enable cross-platform submission
        args = new String[]{"/tmp/test/LICENSE.txt", "/tmp/test/out30"}; // hard-coded input and output paths
        GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);
        String[] remainingArgs = optionParser.getRemainingArgs();


        if ((remainingArgs.length != 2) && (remainingArgs.length != 4)) {
            System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
            System.exit(2);
        }

        Job job = Job.getInstance(conf, "test");
        job.setJar("mapreduce/build/libs/mapreduce-0.1.jar");
        job.setJarByClass(com.wc.class);


        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        List<String> otherArgs = new ArrayList<String>();
        for (int i = 0; i < remainingArgs.length; ++i) {
            if ("-skip".equals(remainingArgs[i])) {
                job.addCacheFile(new Path(remainingArgs[++i]).toUri());
                job.getConfiguration().setBoolean("wordcount.skip.patterns", true);
            } else {
                otherArgs.add(remainingArgs[i]);
            }
        }
        FileInputFormat.addInputPath(job, new Path(otherArgs.get(0)));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs.get(1)));

        // waitForCompletion must only be called once per Job; a second call throws IllegalStateException
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
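After the run completes, one quick way to verify the result without leaving the IDE is to read the output directory back through the HDFS API. A minimal sketch, assuming the same fs.defaultFS and output path as above (part-r-00000 is the default output file name when there is a single reducer; this helper class is not part of the original post):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CheckOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://vbusuanzi:9000/"); // same cluster as the job above
        FileSystem fs = FileSystem.get(conf);
        // word counts written by the job's single reducer
        Path result = new Path("/tmp/test/out30/part-r-00000");
        // stream the file to stdout and close the stream afterwards
        IOUtils.copyBytes(fs.open(result), System.out, 4096, true);
    }
}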

This fixes the following error:

Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.wc$TokenizerMapper not found

The log actually hints at where the problem lies ("Not adding any jar to the list of resources"):

2018-11-22 16:03:29,086 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Job jar is not present. Not adding any jar to the list of resources.

So the fix is to add:

job.setJar("mapreduce/build/libs/mapreduce-0.1.jar");
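Equivalently, the jar can be set through the configuration rather than on the Job, as the commented-out line in main() hints. A sketch of both variants (mapred.jar is the legacy key; mapreduce.job.jar is its current equivalent on Hadoop 2.x/3.x):

// legacy key, as in the commented-out line in main()
conf.set("mapred.jar", "mapreduce/build/libs/mapreduce-0.1.jar");
// current equivalent key
conf.set("mapreduce.job.jar", "mapreduce/build/libs/mapreduce-0.1.jar");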
