Unable to get Hadoop job information through Java client
【Posted】2014-03-14 17:56:20 【Problem description】I am using Hadoop 1.2.1 and trying to print job details through a Java client, but it prints nothing. Here is my Java code:
Configuration configuration = new Configuration();
configuration.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
configuration.addResource(new Path("/usr/local/hadoop/conf/hdfs-site.xml"));
configuration.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));
InetSocketAddress jobtracker = new InetSocketAddress("localhost", 54311);
JobClient jobClient = new JobClient(jobtracker, configuration);
jobClient.setConf(configuration);
JobStatus[] jobs = jobClient.getAllJobs();
System.out.println(jobs.length); // prints 0
for (int i = 0; i < jobs.length; i++) {
    JobStatus js = jobs[i];
    JobID jobId = js.getJobID();
    System.out.println(jobId);
}
But from the JobTracker history I can see three jobs (see the screenshot). Can anyone tell me where I am going wrong? I just want to print the details of all jobs.
Here are my configuration files:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.
</description>
</property>
</configuration>
【Comments】:
I'm not sure whether jobClient.getAllJobs() accesses completed jobs.
Thanks @Chaos. How can I get the complete job information then?
Having the same problem. Did you ever figure this out?
【Answer 1】:
Try something like this:
jobClient.displayTasks(jobID, "map", "completed");
where the jobID is
JobID jobID = new JobID(jobIdentifier, jobNumber);
or
TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
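Putting the two suggestions together, a minimal sketch against the Hadoop 1.x JobClient API (the jobtracker identifier string "201403141234" is a placeholder -- take the real one from a job ID such as job_201403141234_0001 shown in the JobTracker web UI):

```java
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TaskReport;

public class JobReportSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the JobTracker directly, as in the question.
        JobClient jobClient = new JobClient(new InetSocketAddress("localhost", 54311), conf);

        // A JobID is the jobtracker start-time identifier plus a sequence number;
        // "201403141234" here is hypothetical, not taken from the question.
        JobID jobID = new JobID("201403141234", 1);

        // One TaskReport per map task of that job.
        for (TaskReport report : jobClient.getMapTaskReports(jobID)) {
            System.out.println(report.getTaskID() + " progress=" + report.getProgress());
        }
    }
}
```

This requires a running Hadoop 1.x JobTracker at localhost:54311 and only reports on jobs the JobTracker still knows about.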
【Discussion】:
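A possible explanation (an assumption on my part, not confirmed in the question): the JobTracker keeps only a limited number of completed jobs in memory and retires the rest to the job history files, which the getAllJobs() RPC may no longer see. In Hadoop 1.x the per-user limit is mapred.jobtracker.completeuserjobs.maximum (default 100); if jobs are being retired too early, it can be raised in mapred-site.xml:

```xml
<property>
<name>mapred.jobtracker.completeuserjobs.maximum</name>
<value>500</value>
<description>The maximum number of complete jobs per user to keep
around before delegating them to the job history.</description>
</property>
```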