Phoenix bulk CSV file upload command — not understood?
[Posted]: 2017-07-17 11:20:26
[Question]: I want to bulk-load a CSV file using Phoenix, but I cannot understand the command below. Could you explain it in detail?
HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar phoenix-<version>-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table EXAMPLE --input /data/example.csv
I took this command from the following page: https://phoenix.apache.org/bulk_dataload.html
[Comments]:
[Answer 1]: I'm not sure whether you are still looking for an answer, but here it is. The command first sets the HADOOP_CLASSPATH environment variable, then invokes the `hadoop` executable with the `jar` option, pointing it at the Phoenix client jar, the class to run (org.apache.phoenix.mapreduce.CsvBulkLoadTool), and that class's arguments (--table and --input). The following may help you understand how the hadoop command is used (try typing `hadoop` in your SSH shell):
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  envvars              display computed Hadoop environment variables
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
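Beyond the hadoop options themselves, the command combines two standard shell features: `$(...)` command substitution (here, `$(hbase mapredcp)` prints the HBase MapReduce classpath, which gets spliced into the value) and the `VAR=value cmd` form, which sets an environment variable for that single command only. A minimal sketch of both features using plain `sh` built-ins — the jar path is a hypothetical stand-in, since `hbase` itself is not assumed to be installed:

```shell
#!/bin/sh
# 1) $(...) command substitution: the inner command's stdout becomes text.
#    In the real command, $(hbase mapredcp) would print the HBase jars here.
cp_from_subcommand=$(echo "/usr/lib/hbase/lib/hbase-client.jar")

# 2) VAR=value cmd: the variable is exported only into that one command's
#    environment. Here `sh -c` stands in for the `hadoop` executable and
#    simply prints what it received.
final=$(HADOOP_CLASSPATH="${cp_from_subcommand}:/path/to/hbase/conf" \
        sh -c 'echo "$HADOOP_CLASSPATH"')
echo "$final"
```

So in the original command, `hadoop` starts with HADOOP_CLASSPATH pointing at the HBase jars plus the HBase config directory, which is what lets the MapReduce job find the HBase classes, while your interactive shell's environment is left untouched.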
[Comments]: