Hadoop MapReduce job completes successfully but doesn't write anything to DB
Posted: 2015-02-28 17:09:50

I'm writing an MR job to mine web server logs. The job's input comes from text files and its output goes to a MySQL database. The problem is that the job completes successfully but writes nothing to the database. I haven't done MR programming in a while, so it's probably a bug I just can't spot. It's not the pattern matching (see below); I've unit tested that and it works fine. What am I missing?
Mac OS X, Oracle JDK 1.8.0_31, hadoop-2.6.0
Note: the exceptions are logged; I've omitted them for brevity.

SkippableLogRecord:
public class SkippableLogRecord implements WritableComparable<SkippableLogRecord> {
    // fields

    public SkippableLogRecord(Text line) {
        readLine(line.toString());
    }

    private void readLine(String line) {
        Matcher m = PATTERN.matcher(line);

        boolean isMatchFound = m.matches() && m.groupCount() >= 5;

        if (isMatchFound) {
            try {
                jvm = new Text(m.group("jvm"));

                Calendar cal = getInstance();
                cal.setTime(new SimpleDateFormat(DATE_FORMAT).parse(m
                        .group("date")));
                day = new IntWritable(cal.get(DAY_OF_MONTH));
                month = new IntWritable(cal.get(MONTH));
                year = new IntWritable(cal.get(YEAR));

                String p = decode(m.group("path"), UTF_8.name());
                root = new Text(p.substring(1, p.indexOf(FILE_SEPARATOR, 1)));
                filename = new Text(
                        p.substring(p.lastIndexOf(FILE_SEPARATOR) + 1));
                path = new Text(p);

                status = new IntWritable(Integer.parseInt(m.group("status")));
                size = new LongWritable(Long.parseLong(m.group("size")));
            } catch (ParseException | UnsupportedEncodingException e) {
                isMatchFound = false;
            }
        }
    }

    public boolean isSkipped() {
        return jvm == null;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        jvm.readFields(in);
        day.readFields(in);
        // more code
    }

    @Override
    public void write(DataOutput out) throws IOException {
        jvm.write(out);
        day.write(out);
        // more code
    }

    @Override
    public int compareTo(SkippableLogRecord other) {...}

    @Override
    public boolean equals(Object obj) {...}
}
Mapper:
public class LogMapper extends
        Mapper<LongWritable, Text, SkippableLogRecord, NullWritable> {
    @Override
    protected void map(LongWritable key, Text line, Context context) {
        SkippableLogRecord rec = new SkippableLogRecord(line);

        if (!rec.isSkipped()) {
            try {
                context.write(rec, NullWritable.get());
            } catch (IOException | InterruptedException e) {...}
        }
    }
}
Reducer:
public class LogReducer extends
        Reducer<SkippableLogRecord, NullWritable, DBRecord, NullWritable> {
    @Override
    protected void reduce(SkippableLogRecord rec,
            Iterable<NullWritable> values, Context context) {
        try {
            context.write(new DBRecord(rec), NullWritable.get());
        } catch (IOException | InterruptedException e) {...}
    }
}
DBRecord:
public class DBRecord implements Writable, DBWritable {
    // fields

    public DBRecord(SkippableLogRecord logRecord) {
        jvm = logRecord.getJvm().toString();
        day = logRecord.getDay().get();
        // more code for rest of the fields
    }

    @Override
    public void readFields(ResultSet rs) throws SQLException {
        jvm = rs.getString("jvm");
        day = rs.getInt("day");
        // more code for rest of the fields
    }

    @Override
    public void write(PreparedStatement ps) throws SQLException {
        ps.setString(1, jvm);
        ps.setInt(2, day);
        // more code for rest of the fields
    }
}
Driver:
public class Driver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();

        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver", // driver
                "jdbc:mysql://localhost:3306/aac", // db url
                "***", // user name
                "***"); // password

        Job job = Job.getInstance(conf, "log-miner");
        job.setJarByClass(getClass());
        job.setMapperClass(LogMapper.class);
        job.setReducerClass(LogReducer.class);
        job.setMapOutputKeyClass(SkippableLogRecord.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(DBRecord.class);
        job.setOutputValueClass(NullWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(DBOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        DBOutputFormat.setOutput(job, "log", // table name
                new String[] { "jvm", "day", "month", "year", "root",
                        "filename", "path", "status", "size" }); // table columns

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        GenericOptionsParser parser = new GenericOptionsParser(
                new Configuration(), args);

        ToolRunner.run(new Driver(), parser.getRemainingArgs());
    }
}
Job execution log:
15/02/28 02:17:58 INFO mapreduce.Job: map 100% reduce 100%
15/02/28 02:17:58 INFO mapreduce.Job: Job job_local166084441_0001 completed successfully
15/02/28 02:17:58 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=37074
FILE: Number of bytes written=805438
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=476788498
HDFS: Number of bytes written=0
HDFS: Number of read operations=11
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Map-Reduce Framework
Map input records=482230
Map output records=0
Map output bytes=0
Map output materialized bytes=12
Input split bytes=210
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=12
Reduce input records=0
Reduce output records=0
Spilled Records=0
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=150
Total committed heap usage (bytes)=1381498880
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=171283337
File Output Format Counters
Bytes Written=0
Comments:
Have you tried it without Hadoop? Only use it as a last resort if your workflow doesn't scale. Get rid of all the new calls in your inner loop, including the new Matcher; these are very expensive. And don't ignore the exceptions... most likely you simply can't parse every line...

@Anony-Mousse As I said, the parsing works because I unit tested it. The exceptions aren't really ignored; I just didn't show them for brevity. Lastly, I want to get the program running first and worry about scaling later. A program that scales perfectly but does nothing is worthless.

Unit tested inside mapreduce, or outside with other data types? Clearly your map produces 0 records! So it must be skipping everything. Also, design for memory up front instead of rewriting again later... follow best practices. For example, Text exists because String is too expensive, and IntWritable is a reusable Integer.

@Anony-Mousse The JUnit tests send a Text to SkippableLogRecord and verify that a match is found. Negative tests too. Nothing in those tests touches MR or Hadoop, other than my use of the Text data type.

For example, a newline may or may not be included. Either way, as far as I can tell, your lines don't match.
Answer:
Answering my own question: the problem was leading whitespace that made the matcher fail. The unit tests never tested with leading whitespace, but for some reason the actual logs had it.
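A minimal sketch of that failure mode (using a made-up pattern, since the real PATTERN isn't shown above): Matcher.matches() anchors at both ends of the input, so a single leading space defeats the whole match unless the line is trimmed first.

```java
import java.util.regex.Pattern;

// Hypothetical simplified log pattern; the real one has more groups.
public class LeadingWhitespaceDemo {
    private static final Pattern PATTERN =
            Pattern.compile("(?<status>\\d{3}) (?<size>\\d+)");

    public static void main(String[] args) {
        String clean = "200 1024";
        String padded = "  200 1024"; // what the real logs looked like

        // matches() is anchored, so the padded line fails outright
        System.out.println(PATTERN.matcher(clean).matches());          // true
        System.out.println(PATTERN.matcher(padded).matches());         // false
        System.out.println(PATTERN.matcher(padded.trim()).matches());  // true
    }
}
```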
Another problem with the code posted above is that all the fields in the class are initialized inside the readLine method. As @Anony-Mousse mentioned, this is expensive because Hadoop data types are designed to be reused. It also caused a bigger problem with serialization and deserialization: when Hadoop tried to reconstruct the instance by calling readFields, it threw an NPE because all the fields were null.
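The null-field failure can be reproduced without Hadoop at all. This is a pure-java.io sketch with hypothetical names that mimics how the framework constructs an empty instance and then calls readFields on it: the broken record leaves its field null until the parse method runs, while the fixed record creates its field object eagerly and only mutates it.

```java
import java.io.*;

class BrokenRecord {
    StringBuilder jvm; // null until a parse method assigns it
    void readFields(DataInput in) throws IOException {
        jvm.append(in.readUTF()); // NPE: called on a freshly constructed instance
    }
}

class FixedRecord {
    final StringBuilder jvm = new StringBuilder(); // created once, reused
    void readFields(DataInput in) throws IOException {
        jvm.setLength(0); // reset instead of reallocating
        jvm.append(in.readUTF());
    }
}

public class ReuseDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeUTF("jvm1");

        try {
            new BrokenRecord().readFields(
                    new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        } catch (NullPointerException e) {
            System.out.println("broken: NPE during deserialization");
        }

        FixedRecord ok = new FixedRecord();
        ok.readFields(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println("fixed: " + ok.jvm); // prints "fixed: jvm1"
    }
}
```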
I also made other small improvements using some Java 8 classes and syntax. In the end, even though I got it working, I rewrote the code using Spring Boot, Spring Data JPA, and Spring's support for asynchronous processing with @Async.