PIG not reading file from hdfs when running from pig script

Posted: 2015-09-10 12:25:23

【Question】:

I am trying to load a file from HDFS with a Pig script:

data = LOAD '/user/Z013W7X/typeahead/time_decayed_clickdata.tsv' using PigStorage('\t') as (keyword :chararray , search_count: double, clicks: double, cartadds: double);

The path above is an HDFS path. When I run the same statements from the Pig grunt shell they execute without any problem, but the identical code run as a script fails with the following:

Input(s): Failed to read data from "/user/Z013W7X/typeahead/time_decayed_clickdata.tsv"
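Two quick sanity checks narrow this down before touching the script: confirm the file is visible to the cluster's HDFS (not a local path), and make sure the scripted run is not falling back to local mode. A sketch, assuming a standard Hadoop client on the same machine:

# Verify the input exists on HDFS as the cluster sees it
hdfs dfs -ls /user/Z013W7X/typeahead/time_decayed_clickdata.tsv

# Force mapreduce mode explicitly, in case the scripted invocation defaults to local mode
# (the -param arguments from the invocation below are omitted here)
pig -x mapreduce -f generate_ngrams.pig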

This is the shell script I use to invoke the Pig script...

jar_path=/home_dir/z013w7x/workspace/tapipeline/Typeahead-APP/tapipeline/libs/takeygen-0.0.1-SNAPSHOT-jar-with-dependencies.jar
scripts_path=/home_dir/z013w7x/workspace/tapipeline/Typeahead-APP/tapipeline/pig_scripts/daily_running_scripts
dataset_path=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead
data_files=/user/Z013W7X/typeahead/data_files.zip#data
ngrams_gen_script=$scripts_path/generate_ngrams.pig
time_decayed_clickdata_file=$dataset_path/time_decayed_clickdata.tsv
# results_path is not set in the snippet as posted; it is assumed to be defined elsewhere
all_suggestions_file=$results_path/all_suggestions.tsv
top_suggestions_file=$results_path/top_suggestions.tsv

pig -f $ngrams_gen_script -param "INPUT_TIME_DECAYED_CLICKDATA_FILE=$time_decayed_clickdata_file" -param "OUTPUT_ALL_SUGGESTIONS_FILE=$all_suggestions_file" -param "OUTPUT_TOP_SUGGESTIONS_FILE=$top_suggestions_file" -param "REGISTER=$jar_path" -param "INPUT_DATA_ARCHIVE=$data_files"
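When a parameterized script behaves differently from the same statements typed into grunt, it helps to look at what Pig actually executes after substitution. Pig's dry-run mode performs parameter substitution and writes the result to <script>.substituted without running anything; a sketch reusing the variables above:

# Expand all $-parameters and write generate_ngrams.pig.substituted next to the script
pig -dryrun -f $ngrams_gen_script \
    -param "INPUT_TIME_DECAYED_CLICKDATA_FILE=$time_decayed_clickdata_file" \
    -param "OUTPUT_ALL_SUGGESTIONS_FILE=$all_suggestions_file" \
    -param "OUTPUT_TOP_SUGGESTIONS_FILE=$top_suggestions_file" \
    -param "REGISTER=$jar_path" \
    -param "INPUT_DATA_ARCHIVE=$data_files"

The resolved LOAD path in the .substituted file should match what works in grunt; any mismatch points at the parameter wiring rather than HDFS itself.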

The Pig script is as follows:

-- Symlink the distributed-cache archive into each task's working directory
SET mapred.create.symlink yes
SET mapred.cache.archives $INPUT_DATA_ARCHIVE

register $REGISTER
click_data = LOAD '$INPUT_TIME_DECAYED_CLICKDATA_FILE' using PigStorage('\t') as (keyword :chararray , search_count: double, clicks: double, cartadds: double);
ordered_click_data = order click_data by search_count desc;
sample_data = LIMIT ordered_click_data 3000000;
mclick_data = foreach sample_data generate keyword, CEIL(search_count) as search_count, CEIL(clicks) as clicks, CEIL(cartadds) as cartadds;
fclick_data = filter mclick_data by (keyword is not null and search_count is not null and keyword != 'NULL' );

ngram_data = foreach fclick_data generate flatten(com.tgt.search.typeahead.takeygen.udf.NGramScore(keyword, search_count, clicks, cartadds))
 as (stemmedKeyword:chararray, keyword:chararray, dscore:double, isUserQuery:int, contrib:double, keyscore:chararray);

grouped_data = group ngram_data by stemmedKeyword;
agg_data = foreach grouped_data generate group,
    flatten(com.tgt.search.typeahead.takeygen.udf.StemmedKeyword(ngram_data.keyscore)) as keyword,
    SUM(ngram_data.dscore) as ascore,
    SUM(ngram_data.isUserQuery) as isUserQuery,
    SUM(ngram_data.contrib) as contrib;
filter_queries = filter agg_data by isUserQuery > 0;
all_suggestions = foreach  filter_queries generate keyword, ascore;
ordered_suggestions = order all_suggestions by ascore desc;
top_suggestions = limit ordered_suggestions 200000;

-- rmf (remove, force) clears any previous outputs so the STOREs below do not fail on existing paths
rmf /tmp/all_suggestions
rmf $OUTPUT_ALL_SUGGESTIONS_FILE
rmf /tmp/top_suggestions
rmf $OUTPUT_TOP_SUGGESTIONS_FILE

store ordered_suggestions  into '/tmp/all_suggestions' using PigStorage('\t','-schema');
store top_suggestions  into '/tmp/top_suggestions' using PigStorage('\t','-schema');
-- cp is Pig's grunt file command; it copies the part files to their final HDFS locations
cp /tmp/all_suggestions/part-r-00000 $OUTPUT_ALL_SUGGESTIONS_FILE
cp /tmp/top_suggestions/part-r-00000 $OUTPUT_TOP_SUGGESTIONS_FILE

【Comments】:

How are you running your script?
I am running it from a shell script.
Make sure you are not running the Pig script in local mode.
No, that is not it... it is hitting some problem while reading the input file...
Could you try replacing "data_files=/user/Z013W7X/typeahead/data_files.zip#data" with "data_files=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead/data_files.zip#data"?
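If that last suggestion is on the right track, the fully qualified archive URI can be checked directly before re-running the job; a sketch reusing the namenode host and port already present in the question's dataset_path:

# Confirm the archive resolves under the fully qualified URI
hdfs dfs -ls hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead/data_files.zip

# If it does, pass the qualified form (keeping the #data symlink fragment) to the pig invocation
data_files=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead/data_files.zip#data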

【Answer 1】:

You need to prefix the input path with hdfs://namenode_host:54310. Try the following:

data = LOAD 'hdfs://namenode_host:54310/user/Z013W7X/typeahead/time_decayed_clickdata.tsv' using PigStorage('\t') as (keyword :chararray , search_count: double, clicks: double, cartadds: double);
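The right host and port do not have to be guessed; on a configured client they can be read straight from the cluster configuration. A sketch, assuming a standard Hadoop 2 client (on older clusters the key may be fs.default.name):

# Print the default filesystem URI (namenode host and port) from the client configuration
hdfs getconf -confKey fs.defaultFS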

【Discussion】:

Try now. Updated. You need to add the namenode host and port.
Is your path correct? Please check with "hdfs dfs -ls".
The same path works when I use it directly in the pig grunt shell.. it only shows the problem when I use it from the script..
What exactly are you running? Can you paste it into the question?
