ubuntu-hadoop Pseudo-Distributed Mode


1. ubuntu-hadoop Pseudo-Distributed Mode: Environment Setup

1.1 Create a New User (to keep the environment clean)

  • sudo useradd -m hduser -s /bin/bash (create the new user; the rest of this walkthrough assumes you work as this user, see the sketch after this list)
  • sudo passwd hduser (set a password for the new user - required)
  • sudo adduser hduser sudo (grant the new user sudo privileges)
  • sudo apt update (refresh the package list)
  • sudo apt upgrade (install the updates from the list)
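
Switching to the new user before continuing keeps everything under hduser's home directory and permissions; a minimal check, assuming the commands above succeeded:

su - hduser        # switch to the new user (or log out and log back in)
whoami             # should print: hduser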

1.2 JDK

  • sudo tar zxvf jdk-18_linux-x64_bin.tar.gz -C /usr/local (note: run this from the directory that contains the archive)
  • sudo tar zxvf hadoop-3.3.4.tar.gz -C /usr/local (Hadoop is needed later, so extract it now as well)
  • sudo gedit /etc/profile (edit the system-wide configuration file)
# java environment
export JAVA_HOME=/usr/local/jdk-18.0.2.1
# note: JDK 9+ no longer ships a separate JRE, so the next two lines are only
# kept for compatibility with older guides; JAVA_HOME and PATH are what matter
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=.:$JAVA_HOME/bin:$PATH
  • source /etc/profile (reload the configuration file)
  • sudo gedit ~/.bashrc (the per-user config; sourcing /etc/profile here makes the settings take effect in every new shell - see the check after the block below)
if [ -f /etc/profile ]; then
        . /etc/profile
fi
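
A quick sanity check that the JDK is visible, assuming the paths above (run it in a new terminal, or after source /etc/profile):

java -version          # should report the installed JDK (18.0.2.1 here)
echo $JAVA_HOME        # should print /usr/local/jdk-18.0.2.1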

1.3 Hadoop Configuration

Passwordless SSH (the nodes of a distributed setup are controlled over SSH, so interactive password prompts will not do; a login check follows the list below)

  • sudo apt install openssh-server
  • cd /home/hduser/.ssh (if this directory does not exist yet, run ssh localhost once first to create it)
  • ssh-keygen -t rsa
  • cat ./id_rsa.pub >> ./authorized_keys
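
To confirm that key-based login works (a minimal check):

ssh localhost      # should log you in without asking for a password
exit               # leave the nested SSH session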

Hadoop environment variables

  • sudo gedit /etc/profile (edit the configuration file again)
# hadoop
export HADOOP_HOME=/usr/local/hadoop-3.3.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  • source /etc/profile (reload so the new variables take effect)
  • Standalone-mode verification (optional; just a quick test - the expected result is noted after this list)
- cd /usr/local/hadoop-3.3.4
- mkdir ./input
- cp ./etc/hadoop/*.xml ./input
- ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
- cat ./output/*
- rm -r ./output
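
With the stock configuration files the grep job typically finds a single match, so the cat step should print something like:

1	dfsadmin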

Pseudo-distributed mode

  • sudo chown -R hduser /usr/local/hadoop-3.3.4 (give hduser ownership of the hadoop-3.3.4 directory)

  • cd /usr/local/hadoop-3.3.4/etc/hadoop

  • gedit hadoop-env.sh

export JAVA_HOME=/usr/local/jdk-18.0.2.1
  • gedit core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop-3.3.4/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <!-- the HDFS NameNode address that clients and daemons connect to -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
  • gedit hdfs-site.xml
<configuration>
    <!-- one copy of each block is enough on a single-node (pseudo-distributed) cluster -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- where the NameNode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-3.3.4/tmp/dfs/name</value>
    </property>
    <!-- where the DataNode stores the actual block data -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-3.3.4/tmp/dfs/data</value>
    </property>
</configuration>
  • cd /usr/local/hadoop-3.3.4
  • ./bin/hdfs namenode -format (formats the NameNode; you may need to run it twice - if it still fails, see the reset sketch after this list)
  • ./sbin/start-dfs.sh
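
If the format step fails, or the DataNode refuses to start after a re-format (a common symptom of a cluster-ID mismatch), the usual reset is to wipe the temporary directory configured above and format again - note that this deletes everything stored in HDFS. A minimal sketch, run from /usr/local/hadoop-3.3.4:

./sbin/stop-dfs.sh               # make sure no daemons are running
rm -r ./tmp                      # removes all HDFS metadata and block data
./bin/hdfs namenode -format
./sbin/start-dfs.sh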

Check

  • check with jps (the expected daemons are listed below)
  • open http://localhost:9870 (the NameNode web UI)
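
After start-dfs.sh, jps should show roughly the following processes (the numeric PIDs printed in front of each name will differ):

NameNode
DataNode
SecondaryNameNode
Jps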

Stopping / starting Hadoop

  • /usr/local/hadoop-3.3.4/sbin/stop-dfs.sh
  • /usr/local/hadoop-3.3.4/sbin/start-dfs.sh

2. Pseudo-Distributed Examples

2.1 Estimating pi

  • cd /usr/local/hadoop-3.3.4/share/hadoop/mapreduce
  • hadoop jar hadoop-mapreduce-examples-3.3.4.jar pi 1000 50000 (the two arguments are explained below)
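
The first argument is the number of map tasks and the second is the number of samples each map draws; more samples give a tighter estimate of pi but a longer run. For a quick smoke test, a much smaller (hypothetical) run also works:

hadoop jar hadoop-mapreduce-examples-3.3.4.jar pi 10 1000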

2.2 Word Count on a Text File

  • hdfs dfs -mkdir /input
  • cd /usr/local/hadoop-3.3.4/share/hadoop/mapreduce
  • mkdir temp
  • gedit ./temp/data.txt
I love you
you love me
I love you and you love me
  • cd temp
  • hdfs dfs -put ./data.txt /input
  • cd .. (return to the parent directory, i.e. back to the mapreduce folder)
  • hadoop jar hadoop-mapreduce-examples-3.3.4.jar wordcount /input/data.txt /output/wct
  • View the HDFS root: hdfs dfs -ls / (optional)
  • hdfs dfs -cat /output/wct/part-r-00000 (the expected output is sketched after this list)
  • hdfs dfs -rm -r /output (the Hadoop examples never overwrite existing output, so delete the output directory yourself before re-running)
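
With the three-line data.txt above, the counts come out to I: 2, and: 1, love: 4, me: 2, you: 4, so the -cat step should print roughly:

I	2
and	1
love	4
me	2
you	4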
