Using YARN Commands, with a wordcount Walkthrough
Preface:
The previous posts covered the architecture and basic workflow of MapReduce and YARN. This post uses the wordcount example program to walk through basic YARN usage.
1. Running the wordcount example
[root@hadoop000 ~]# su - hadoop
[hadoop@hadoop000 ~]$ jps
9201 SecondaryNameNode
9425 ResourceManager
13875 Jps
9540 NodeManager
8852 NameNode
8973 DataNode
# Create the wordcount input directory
[hadoop@hadoop000 ~]$ hdfs dfs -mkdir -p /wordcount/input
[hadoop@hadoop000 ~]$ vi test.log
jepson ruoze
hero yimi xjp
123
a b a
[hadoop@hadoop000 ~]$ hdfs dfs -put test.log /wordcount/input
[hadoop@hadoop000 ~]$ hdfs dfs -ls /wordcount/input
Found 1 items
-rw-r--r-- 1 hadoop supergroup 37 2018-05-29 20:38 /wordcount/input/test.log
# Run the bundled wordcount example jar
[hadoop@hadoop000 ~]$ yarn jar \
  /opt/software/hadoop-2.8.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar \
  wordcount /wordcount/input /wordcount/output
18/05/29 20:40:59 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/05/29 20:40:59 INFO input.FileInputFormat: Total input files to process : 1
18/05/29 20:41:00 INFO mapreduce.JobSubmitter: number of splits:1
18/05/29 20:41:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1526991305992_0001
18/05/29 20:41:01 INFO impl.YarnClientImpl: Submitted application application_1526991305992_0001
18/05/29 20:41:01 INFO mapreduce.Job: The url to track the job: http://hadoop000:8088/proxy/application_1526991305992_0001/
18/05/29 20:41:01 INFO mapreduce.Job: Running job: job_1526991305992_0001
18/05/29 20:41:14 INFO mapreduce.Job: Job job_1526991305992_0001 running in uber mode : false
18/05/29 20:41:14 INFO mapreduce.Job: map 0% reduce 0%
18/05/29 20:41:23 INFO mapreduce.Job: map 100% reduce 0%
18/05/29 20:41:29 INFO mapreduce.Job: map 100% reduce 100%
18/05/29 20:41:30 INFO mapreduce.Job: Job job_1526991305992_0001 completed successfully
18/05/29 20:41:30 INFO mapreduce.Job: Counters: 49
# View the results
[hadoop@hadoop000 ~]$ hdfs dfs -ls /wordcount/output
Found 2 items
-rw-r--r-- 1 hadoop supergroup 0 2018-05-29 20:41 /wordcount/output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 51 2018-05-29 20:41 /wordcount/output/part-r-00000
[hadoop@hadoop000 ~]$ hdfs dfs -cat /wordcount/output/part-r-00000
123 1
a 2
b 1
hero 1
jepson 1
ruoze 1
xjp 1
yimi 1
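The counts above can be sanity-checked locally without Hadoop. The following sketch (plain Python, not part of the original post) tallies the same test.log contents the way wordcount does: split each line on whitespace and count each token.

```python
from collections import Counter

# The contents of test.log as written above
text = """jepson ruoze
hero yimi xjp
123
a b a
"""

# wordcount splits each line on whitespace and tallies the tokens
counts = Counter(text.split())

# Print in the same key-sorted order as part-r-00000
for word in sorted(counts):
    print(word, counts[word])
```

The key-sorted printout matches part-r-00000 line for line, including `a 2` for the duplicated token.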
You can also check the job details in the web UI: http://192.168.6.217:8088/cluster
2. Common YARN commands
yarn jar <jar>                                        -- run a jar file
yarn application -list                                -- list running applications
yarn application -kill application_1526991305992_0001 -- kill a running application (by application ID)
3. The wordcount flow in detail
Reference: https://blog.csdn.net/yczws1/article/details/21794873
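The flow described in that reference has three stages: the map phase emits a (word, 1) pair for every token, the framework shuffles and sorts the pairs by key, and the reduce phase sums the values in each group. A toy simulation in plain Python (my own sketch, not Hadoop's actual implementation) of those three stages on the test.log data:

```python
from itertools import groupby
from operator import itemgetter

# The lines of test.log, as fed to the mappers
lines = ["jepson ruoze", "hero yimi xjp", "123", "a b a"]

# Map: emit a (word, 1) pair for every token
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle/sort: order pairs by key so equal keys are adjacent,
# mimicking what the framework does between map and reduce
mapped.sort(key=itemgetter(0))

# Reduce: sum the values within each key's group
result = {key: sum(v for _, v in group)
          for key, group in groupby(mapped, key=itemgetter(0))}

print(result)
```

In a real job the shuffle also partitions keys across reducers; with a single reducer, as in the run above, everything lands in one part-r-00000 file.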