Hadoop 07 - MapReduce 02: WordCount Hands-On Example

WordCount Hands-On Example

1) Requirement

Count the number of occurrences of each word in a given text file.

2) Requirement Analysis

Following the MapReduce programming conventions, write a Mapper, a Reducer, and a Driver.
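As a concrete illustration (the sample data below is illustrative, not taken from the original), an input file such as

atguigu atguigu
ss ss
cls

should produce

atguigu 2
cls 1
ss 2

The Mapper turns each line into (word, 1) pairs, the shuffle groups the pairs by word, and the Reducer sums each group.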

3) Environment Setup

(1) Create a Maven project named MapReduceDemo.
(2) Add the following dependencies to pom.xml:

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.1.3</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.30</version>
        </dependency>
    </dependencies>
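Here hadoop-client provides the MapReduce client APIs, junit enables unit testing, and slf4j-log4j12 binds SLF4J (the logging facade Hadoop logs through) to log4j, which is what the log4j.properties file in the next step configures.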

(3) In the project's src/main/resources directory, create a new file named "log4j.properties" and fill it with the following:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
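This prints INFO-level logs to the console. Note that the logfile appender is defined but not attached to the root logger; to also write target/spring.log, the first line would have to read log4j.rootLogger=INFO, stdout, logfile.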

(4) Create the package com.atguigu.mapreduce.wordcount.

4) Write the Mapper Class

package com.atguigu.mapreduce.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * KEYIN,   type of the map-stage input key:    LongWritable (byte offset of the line)
 * VALUEIN, type of the map-stage input value:  Text (one line of the file)
 * KEYOUT,  type of the map-stage output key:   Text (a word)
 * VALUEOUT, type of the map-stage output value: IntWritable (the count, always 1 here)
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text outK = new Text();

    private IntWritable outV = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

        // 1 Get one line, e.g. "atguigu atguigu"
        String line = value.toString();

        // 2 Split it into words:
        // atguigu
        // atguigu
        String[] words = line.split(" ");

        // 3 Write out one pair per word
        for (String word : words) {
            // Wrap the word in the reusable output key
            outK.set(word);

            // Emit (word, 1)
            context.write(outK, outV);
        }
    }
}

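A note on the design: outK and outV are fields rather than locals because map() runs once per input record, potentially millions of times, so reusing the two Writable objects avoids allocating garbage on every call. outV stays fixed at 1 because each occurrence of a word counts once.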
5) Write the Reducer Class

package com.atguigu.mapreduce.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * KEYIN,   type of the reduce-stage input key:    Text (a word)
 * VALUEIN, type of the reduce-stage input value:  IntWritable (a count emitted by the mapper)
 * KEYOUT,  type of the reduce-stage output key:   Text (the word)
 * VALUEOUT, type of the reduce-stage output value: IntWritable (the total count)
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable outV = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        int sum = 0;
        // e.g. key = "atguigu", values = (1, 1)
        // Accumulate the counts
        for (IntWritable value : values) {
            sum += value.get();
        }

        outV.set(sum);

        // Emit (word, total)
        context.write(key, outV);
    }
}

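By the time reduce() runs, the framework has already sorted and grouped the map output by key, so values holds every count emitted for one word. Hadoop reuses the same IntWritable instance while iterating, which is why the loop reads value.get() immediately instead of keeping references to the objects.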
6) Write the Driver Class

package com.atguigu.mapreduce.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // 1 Get a Job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2 Set the jar by class, so the cluster can locate it
        job.setJarByClass(WordCountDriver.class);

        // 3 Wire up the mapper and reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // 4 Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5 Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6 Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("C:\\Users\\gaogao\\Desktop\\大数据\\input"));
        FileOutputFormat.setOutputPath(job, new Path("C:\\Users\\gaogao\\Desktop\\大数据\\output"));

        // 7 Submit the job and wait for completion
        boolean result = job.waitForCompletion(true);

        System.exit(result ? 0 : 1);
    }
}

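One caveat: FileOutputFormat requires that the output directory not exist yet. If a previous run left it behind, the job fails immediately, so delete the directory before re-running.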

7) Local Test
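To test locally, run WordCountDriver's main method straight from the IDE and inspect the output directory configured in step 6. Assuming a Windows development machine, as the paths above suggest, the local run also typically requires HADOOP_HOME and winutils.exe to be set up.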



8) Submit to the Cluster for Testing

(1) Build the jar with Maven; the following packaging plugins need to be added:

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

Note: if the project shows a red cross, right-click the project -> Maven -> Reimport to refresh it.

(2) Package the program into a jar.

(3)(4)(5) Copy the jar to the cluster and run the job there (see the sketch below).
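A minimal sketch of these remaining steps, with illustrative names and HDFS paths that are not from the original. Since the driver above hard-codes Windows paths, one common adjustment before building is to read them from the command line instead, e.g. new Path(args[0]) and new Path(args[1]). Then:

# Build; target/ will contain both a plain jar and a jar-with-dependencies
# produced by the assembly plugin
mvn clean package

# On a cluster node (the plain jar suffices, since the cluster already
# provides the Hadoop libraries); jar name and paths here are illustrative
hadoop jar wc.jar com.atguigu.mapreduce.wordcount.WordCountDriver /input /output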
