Hadoop 3.3.0: RPC response has invalid length

Posted: 2021-05-12 06:28:25

I just installed PySpark via Homebrew and am now trying to get data into Hadoop.

Problem

Every interaction with Hadoop fails.

I followed a tutorial to set up Hadoop 3.3.0 on macOS.

Even after fixing the only issues I hit along the way (pinning a specific JDK, a specific MySQL version, and so on), the problem remains.

Whenever I try to run any Hadoop-related command, I get the following:

▶ hadoop fs -ls /
2021-05-12 07:45:44,647 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: RPC response has invalid length
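
For what it's worth, the HDFS client dials whatever fs.defaultFS resolves to; a quick way to confirm that value, sketched with the stock hdfs getconf tool:

# Sanity check: which address do HDFS clients dial?
hdfs getconf -confKey fs.defaultFS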

Running this code in a notebook:

from pyspark.sql.session import SparkSession

# https://saagie.zendesk.com/hc/en-us/articles/360029759552-PySpark-Read-and-Write-Files-from-HDFS
sparkSession = SparkSession.builder.appName("example-pyspark-read-and-write").getOrCreate()
# Create data
data = [('First', 1), ('Second', 2), ('Third', 3), ('Fourth', 4), ('Fifth', 5)]
df = sparkSession.createDataFrame(data)

# Write into HDFS
df.write.csv("hdfs://localhost:9000/cluster/example.csv")
# Read from HDFS
df_load = sparkSession.read.csv("hdfs://localhost:9000/cluster/example.csv")
df_load.show()

sparkSession.stop()  # the snippet defines sparkSession, not sc

throws this at me:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-5-e25cae5a6cac> in <module>
      8 
      9 # Write into HDFS
---> 10 df.write.csv("hdfs://localhost:9000/cluster/example.csv")
     11 # Read from HDFS
     12 df_load = sparkSession.read.csv("hdfs://localhost:9000/cluster/example.csv")

/usr/local/Cellar/apache-spark/3.1.1/libexec/python/pyspark/sql/readwriter.py in csv(self, path, mode, compression, sep, quote, escape, header, nullValue, escapeQuotes, quoteAll, dateFormat, timestampFormat, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, charToEscapeQuoteEscaping, encoding, emptyValue, lineSep)
   1369                        charToEscapeQuoteEscaping=charToEscapeQuoteEscaping,
   1370                        encoding=encoding, emptyValue=emptyValue, lineSep=lineSep)
-> 1371         self._jwrite.csv(path)
   1372 
   1373     def orc(self, path, mode=None, partitionBy=None, compression=None):

/usr/local/lib/python3.9/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1307 
   1308         answer = self.gateway_client.send_command(command)
-> 1309         return_value = get_return_value(
   1310             answer, self.gateway_client, self.target_id, self.name)
   1311 

/usr/local/Cellar/apache-spark/3.1.1/libexec/python/pyspark/sql/utils.py in deco(*a, **kw)
    109     def deco(*a, **kw):
    110         try:
--> 111             return f(*a, **kw)
    112         except py4j.protocol.Py4JJavaError as e:
    113             converted = convert_exception(e.java_exception)

/usr/local/lib/python3.9/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o99.csv.
: java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response has invalid length; Host Details : local host is: "blkpingu16-MBP.fritz.box/192.xxx.xxx.xx"; destination host is: "localhost":9000; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:816)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
    ...
    at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
    ...
    at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response has invalid length
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1827)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1173)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1069)

In short: RPC response has invalid length. (The Hadoop IPC client reads a four-byte length prefix from every response and rejects out-of-range values, so this usually means that whatever answered on the port was not speaking the Hadoop RPC protocol the client expected.)

I have already configured and verified all my paths in the various config files, e.g.:

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>ipc.maximum.data.length</name>
    <value>134217728</value>
  </property>
</configuration>

.zshrc

JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home"

...

## JAVA env variables
export JAVA_HOME="/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home"
export PATH=$PATH:$JAVA_HOME/bin

## HADOOP env variables
export HADOOP_HOME="/usr/local/Cellar/hadoop/3.3.0/libexec"
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar

## HIVE env variables
export HIVE_HOME=/usr/local/Cellar/hive/3.1.2_3/libexec
export PATH=$PATH:/$HIVE_HOME/bin

## MySQL ENV
export PATH=$PATH:/usr/local/Cellar/mysql/8.0.23_1/bin

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home

When I start Hadoop, it appears to bring up all the nodes:

▶ $HADOOP_HOME/sbin/start-all.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [blkpingu16-MBP.fritz.box]
2021-05-12 08:18:15,786 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers

jps shows the Hadoop components running, along with some Spark processes:

▶ jps
166 Jps
99750 ResourceManager
99544 SecondaryNameNode
99851 NodeManager
98154 SparkSubmit
99405 DataNode
39326 Master

http://localhost:8088/cluster is available and shows the Hadoop (YARN) dashboard, per the tutorial I followed.
http://localhost:8080 is available and shows the Spark dashboard.
http://localhost:9870 is not available (it should show the NameNode web UI).

My main problem is that I don't know why my NameNode is not up when it should be (note that it is missing from the jps output above), and why I cannot talk to HDFS at all, neither from the command line (to put data into it) nor from a notebook (to read data back). Something Hadoop-related is broken, and I don't know how to fix it.


Answer 1:

I ran into the same problem today and want to document the fix here in case anyone hits something similar. A quick jps told me that the NameNode process was missing, even though no warning or error had shown up anywhere.
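
To find out why, I looked at the NameNode log. A sketch of where to look, assuming the Homebrew layout from the question; by default Hadoop writes its daemon logs to $HADOOP_HOME/logs using the pattern hadoop-<user>-namenode-<host>.log:

# List the NameNode log and inspect its tail (default Hadoop log location and naming)
ls $HADOOP_HOME/logs | grep -i namenode
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log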

In the NameNode's .log file I found a java.net.BindException: Problem binding to [localhost:9000], which made me suspect that port 9000 was already taken by another process. I used a command from this source to check open ports, and indeed the port was held by a python process (at the time, the only thing I had running was PySpark). (For anyone who needs it: sudo lsof -i -P -n | grep LISTEN.)
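
A more targeted variant of that check, if you only care about who owns the NameNode port (standard lsof flags; shown as a convenience, not taken from the original post):

# Show only the process listening on TCP port 9000
sudo lsof -nP -iTCP:9000 -sTCP:LISTEN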

The solution was as simple as it gets: change the port number in the fs.defaultFS field of etc/core-site.xml to another, unused port (mine is now 9900).
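
For illustration, the changed core-site.xml would look like the sketch below; 9900 is just the free port I picked, and any unused port works. Remember to restart HDFS afterwards (stop-all.sh, then start-all.sh) and to update any hardcoded hdfs://localhost:9000 URLs, such as the ones in the notebook code above, to the new port:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9900</value>
  </property>
</configuration>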

