Learning Sqoop
I. Overview
Sqoop is an Apache tool for transferring data between Hadoop and relational database servers.
It has two core functions:
import (bringing data in)
export (moving data out)
Importing data: load data from MySQL, Oracle, etc. into Hadoop storage systems such as HDFS, Hive, and HBase.
Exporting data: move data from the Hadoop file system out to a relational database such as MySQL. At its core, Sqoop is just a command-line tool; compared with HDFS or Hive there is no deep theory behind it.
sqoop:
a tool whose essence is data migration; it migrates data by translating the sqoop command into a MapReduce program.
hive:
a tool whose essence is computation; it relies on HDFS to store data and translates SQL into MapReduce programs.
II. How It Works
Sqoop implements an import or export command by translating it into a MapReduce program. In the generated MapReduce job, the customization is mainly of the InputFormat and the OutputFormat.
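As a minimal illustration (a sketch only, using the MySQL help_keyword table that appears later in this article), the parallelism options of an import command map directly onto the generated MapReduce job: -m sets the number of map tasks, and --split-by names the column the generated InputFormat uses to divide the table into splits.
sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root -P \
  --table help_keyword \
  --split-by help_keyword_id \
  -m 4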
III. Installation
1. Prerequisites
Which systems or components will Sqoop interact with when it is used?
HDFS, MapReduce, YARN, ZooKeeper, Hive, HBase, MySQL
Sqoop is just a tool; it only needs to be installed on one node.
One note: if your Sqoop installation is going to move data between MySQL and systems such as Hive or HBase,
the node where Sqoop is installed must also have the client installations of those cluster components.
Another note: Azkaban, which will be used later, schedules not only Hadoop, HBase, and Hive jobs but also Sqoop jobs,
so the node where Azkaban is installed must likewise have the clients of all these systems.
2. Download
Download address: http://mirrors.hust.edu.cn/apache/
Sqoop version notes:
The vast majority of companies use sqoop1.
sqoop-1.4.6 and sqoop-1.4.7 are sqoop1.
The sqoop-1.99.x releases (e.g. sqoop-1.99.4) are sqoop2.
This article uses sqoop-1.4.6: sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
3. Installation steps
(1) Upload the tarball and extract it into the target directory
Since Hive was previously installed only on the hadoop3 machine, Sqoop is installed on hadoop3 as well.
[hadoop@hadoop3 ~]$ tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C apps/
(2) Go into the conf directory, find sqoop-env-template.sh, and rename it to sqoop-env.sh
[hadoop@hadoop3 ~]$ cd apps/
[hadoop@hadoop3 apps]$ ls
apache-hive-2.3.3-bin  hadoop-2.7.5  hbase-1.2.6  sqoop-1.4.6.bin__hadoop-2.0.4-alpha  zookeeper-3.4.10
[hadoop@hadoop3 apps]$ mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha/ sqoop-1.4.6
[hadoop@hadoop3 apps]$ cd sqoop-1.4.6/conf/
[hadoop@hadoop3 conf]$ ls
oraoop-site-template.xml  sqoop-env-template.cmd  sqoop-env-template.sh  sqoop-site-template.xml  sqoop-site.xml
[hadoop@hadoop3 conf]$ mv sqoop-env-template.sh sqoop-env.sh
(3) Edit sqoop-env.sh
[hadoop@hadoop3 conf]$ vi sqoop-env.sh
export HADOOP_COMMON_HOME=/home/hadoop/apps/hadoop-2.7.5

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/home/hadoop/apps/hadoop-2.7.5

#set the path to where bin/hbase is available
export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6

#Set the path to where bin/hive is available
export HIVE_HOME=/home/hadoop/apps/apache-hive-2.3.3-bin

#Set the path for where zookeper config dir is
export ZOOCFGDIR=/home/hadoop/apps/zookeeper-3.4.10/conf
Why does sqoop-env.sh ask for HADOOP_COMMON_HOME and HADOOP_MAPRED_HOME separately?
In a plain Apache Hadoop installation, all four components live under the same HADOOP_HOME.
In CDH and HDP, however, these components are optional:
when installing Hadoop you can choose to install only HDFS or only YARN,
so CDH and HDP may install HDFS and MapReduce in different locations, and Sqoop needs a separate variable for each.
(4) Copy the MySQL driver jar into the sqoop-1.4.6/lib directory
[hadoop@hadoop3 ~]$ cp mysql-connector-java-5.1.40-bin.jar apps/sqoop-1.4.6/lib/
(5) Configure the environment variables
[hadoop@hadoop3 ~]$ vi .bashrc
#Sqoop
export SQOOP_HOME=/home/hadoop/apps/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin
Save, exit, and source the file so the change takes effect immediately:
[hadoop@hadoop3 ~]$ source .bashrc
(6) Verify the installation
sqoop-version or sqoop version
IV. Basic Sqoop Commands
Basic operations
First, run sqoop help to see which commands Sqoop supports:
[hadoop@hadoop3 ~]$ sqoop help
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:37:19 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
[hadoop@hadoop3 ~]$
Once you know which commands are available, if you are unsure how to use one of them, run sqoop help COMMAND to see the usage of that specific command, for example:
[hadoop@hadoop3 ~]$ sqoop help import
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:38:29 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                       Specify JDBC connect string
   --connection-manager <class-name>          Specify connection manager class name
   --connection-param-file <properties-file>  Specify connection parameters file
   --driver <class-name>                      Manually specify JDBC driver class to use
   --hadoop-home <hdir>                       Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                 Override $HADOOP_MAPRED_HOME_ARG
   --help                                     Print usage instructions
-P                                            Read password from console
   --password <password>                      Set authentication password
   --password-alias <password-alias>          Credential provider password alias
   --password-file <password-file>            Set authentication password file path
   --relaxed-isolation                        Use read-uncommitted isolation for imports
   --skip-dist-cache                          Skip copying jars to distributed cache
   --username <username>                      Set authentication username
   --verbose                                  Print more information while working

Import control arguments:
   --append                      Imports data in append mode
   --as-avrodatafile             Imports data to Avro data files
   --as-parquetfile              Imports data to Parquet files
   --as-sequencefile             Imports data to SequenceFiles
   --as-textfile                 Imports data as plain text (default)
   --autoreset-to-one-mapper     Reset the number of mappers to one mapper if no split key available
   --boundary-query <statement>  Set boundary query for retrieving max and min value of the primary key
   --columns <col,col,col...>    Columns to import from table
   --compression-codec <codec>   Compression codec to use for import
   --delete-target-dir           Imports data in delete mode
   --direct                      Use direct import fast path
   --direct-split-size <n>       Split the input stream every 'n' bytes when importing in direct mode
-e,--query <statement>           Import results of SQL 'statement'
   --fetch-size <n>              Set number 'n' of rows to fetch from the database when more rows are needed
   --inline-lob-limit <n>        Set the maximum size for an inline LOB
-m,--num-mappers <n>             Use 'n' map tasks to import in parallel
   --mapreduce-job-name <name>   Set name for generated mapreduce job
   --merge-key <column>          Key column to use to join results
   --split-by <column-name>      Column of the table used to split work units
   --table <table-name>          Table to read
   --target-dir <dir>            HDFS plain table destination
   --validate                    Validate the copy using the configured validator
   --validation-failurehandler <validation-failurehandler>  Fully qualified class name for ValidationFailureHandler
   --validation-threshold <validation-threshold>            Fully qualified class name for ValidationThreshold
   --validator <validator>       Fully qualified class name for the Validator
   --warehouse-dir <dir>         HDFS parent for table destination
   --where <where clause>        WHERE clause to use during import
-z,--compress                    Enable compression

Incremental import arguments:
   --check-column <column>     Source column to check for incremental change
   --incremental <import-type> Define an incremental import of type 'append' or 'lastmodified'
   --last-value <value>        Last imported value in the incremental check column

Output line formatting arguments:
   --enclosed-by <char>             Sets a required field enclosing character
   --escaped-by <char>              Sets the escape character
   --fields-terminated-by <char>    Sets the field separator character
   --lines-terminated-by <char>     Sets the end-of-line character
   --mysql-delimiters               Uses MySQL's default delimiter set: fields: ,  lines: \n  escaped-by: \  optionally-enclosed-by: '
   --optionally-enclosed-by <char>  Sets a field enclosing character

Input parsing arguments:
   --input-enclosed-by <char>             Sets a required field encloser
   --input-escaped-by <char>              Sets the input escape character
   --input-fields-terminated-by <char>    Sets the input field separator
   --input-lines-terminated-by <char>     Sets the input end-of-line char
   --input-optionally-enclosed-by <char>  Sets a field enclosing character

Hive arguments:
   --create-hive-table                       Fail if the target hive table exists
   --hive-database <database-name>           Sets the database name to use when importing to hive
   --hive-delims-replacement <arg>           Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
   --hive-drop-import-delims                 Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
   --hive-home <dir>                         Override $HIVE_HOME
   --hive-import                             Import tables into Hive (Uses Hive's default delimiters if none are set.)
   --hive-overwrite                          Overwrite existing data in the Hive table
   --hive-partition-key <partition-key>      Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>  Sets the partition value to use when importing to hive
   --hive-table <table-name>                 Sets the table name to use when importing to hive
   --map-column-hive <arg>                   Override mapping for specific column to hive types.

HBase arguments:
   --column-family <family>  Sets the target column family for the import
   --hbase-bulkload          Enables HBase bulk loading
   --hbase-create-table      If specified, create missing HBase tables
   --hbase-row-key <col>     Specifies which input column to use as the row key
   --hbase-table <table>     Import to <table> in HBase

HCatalog arguments:
   --hcatalog-database <arg>                      HCatalog database name
   --hcatalog-home <hdir>                         Override $HCAT_HOME
   --hcatalog-partition-keys <partition-key>      Sets the partition keys to use when importing to hive
   --hcatalog-partition-values <partition-value>  Sets the partition values to use when importing to hive
   --hcatalog-table <arg>                         HCatalog table name
   --hive-home <dir>                              Override $HIVE_HOME
   --hive-partition-key <partition-key>           Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>       Sets the partition value to use when importing to hive
   --map-column-hive <arg>                        Override mapping for specific column to hive types.

HCatalog import specific options:
   --create-hcatalog-table          Create HCatalog before import
   --hcatalog-storage-stanza <arg>  HCatalog storage stanza for table creation

Accumulo arguments:
   --accumulo-batch-size <size>        Batch size in bytes
   --accumulo-column-family <family>   Sets the target column family for the import
   --accumulo-create-table             If specified, create missing Accumulo tables
   --accumulo-instance <instance>      Accumulo instance name.
   --accumulo-max-latency <latency>    Max write latency in milliseconds
   --accumulo-password <password>      Accumulo password.
   --accumulo-row-key <col>            Specifies which input column to use as the row key
   --accumulo-table <table>            Import to <table> in Accumulo
   --accumulo-user <user>              Accumulo user name.
   --accumulo-visibility <vis>         Visibility token to be applied to all rows imported
   --accumulo-zookeepers <zookeepers>  Comma-separated list of zookeepers (host:port)

Code generation arguments:
   --bindir <dir>                      Output directory for compiled objects
   --class-name <name>                 Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
   --input-null-non-string <null-str>  Input null non-string representation
   --input-null-string <null-str>      Input null string representation
   --jar-file <file>                   Disable code generation; use specified jar
   --map-column-java <arg>             Override mapping for specific columns to java types
   --null-non-string <null-str>        Null non-string representation
   --null-string <null-str>            Null string representation
   --outdir <dir>                      Output directory for generated code
   --package-name <name>               Put auto-generated classes in this package

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
-conf <configuration file>                    specify an application configuration file
-D <property=value>                           use value for given property
-fs <local|namenode:port>                     specify a namenode
-jt <local|resourcemanager:port>              specify a ResourceManager
-files <comma separated list of files>        specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>       specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>  specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied after a '--' on the command line.
[hadoop@hadoop3 ~]$
Examples
List the databases on a MySQL server:
[hadoop@hadoop3 ~]$ sqoop list-databases \
> --connect jdbc:mysql://hadoop1:3306/ \
> --username root \
> --password root
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:43:51 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:43:51 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:43:51 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
hivedb
mysql
performance_schema
test
[hadoop@hadoop3 ~]$
List the tables in a particular MySQL database:
[hadoop@hadoop3 ~]$ sqoop list-tables \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root
Create a Hive table hk with the same schema as the help_keyword table in MySQL:
sqoop create-hive-table --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --hive-table hk
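To confirm that the table definition was imported, a quick check from the Hive CLI is enough (a verification sketch; hk is the table created above):
hive> show tables;
hive> describe hk;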
V. Importing Data with Sqoop
The import tool imports a single table from an RDBMS into HDFS. Each row of the table becomes a record in HDFS. All records are stored either as text data in text files or as binary data in Avro or SequenceFiles.
1. Importing from an RDBMS into HDFS
Syntax:
sqoop import (generic-args) (import-args)
Common parameters
--connect <jdbc-uri>                  JDBC connect string
--connection-manager <class-name>     connection manager class
--driver <class-name>                 JDBC driver class
--hadoop-mapred-home <dir>            override $HADOOP_MAPRED_HOME
--help                                print usage information
-P                                    read the password from the console
--password <password>                 password
--username <username>                 username
--verbose                             print progress information while working
--connection-param-file <filename>    optional connection parameters file
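Because the warning shown earlier notes that passing --password on the command line is insecure, two common alternatives (a sketch; the password-file path is a placeholder, and that file should contain nothing but the password) are to prompt for the password with -P or to read it from a protected file:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root -P --table help_keyword -m 1
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password-file /user/hadoop/mysql.pwd --table help_keyword -m 1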
Examples
Basic import: import the help_keyword table from the mysql database into HDFS
Default import path: /user/hadoop/help_keyword
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword -m 1
View the imported file:
[hadoop@hadoop3 ~]$ hadoop fs -cat /user/hadoop/help_keyword/part-m-00000
Import with a custom field separator and target directory:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --target-dir /user/hadoop11/my_help_keyword1 --fields-terminated-by ' ' -m 2
Import with a WHERE condition:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --where "name='STRING'" --table help_keyword --target-dir /sqoop/hadoop11/myoutport1 -m 1
Import only specific columns (with a WHERE condition):
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --columns "name" --where "name='STRING'" --table help_keyword --target-dir /sqoop/hadoop11/myoutport22 -m 1
This is equivalent to: select name from help_keyword where name = 'STRING'
Import with a custom SQL query:
sqoop import --connect jdbc:mysql://hadoop1:3306/ --username root --password root --target-dir /user/hadoop/myimport33_1 --query 'select help_keyword_id,name from mysql.help_keyword where $CONDITIONS and name = "STRING"' --split-by help_keyword_id --fields-terminated-by ' ' -m 4
When importing data into HDFS with a custom SQL statement as above:
1. Quoting: either wrap the query in single quotes and use double quotes inside it, in which case the $ in $CONDITIONS does not need to be escaped; or wrap the query in double quotes and use single quotes inside it, in which case the $ in $CONDITIONS must be escaped (see the sketch after this list).
2. The custom SQL statement must contain WHERE $CONDITIONS.
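A sketch of the double-quoted form of the same import (only the quoting changes; the backslash keeps the shell from expanding $CONDITIONS):
sqoop import --connect jdbc:mysql://hadoop1:3306/ --username root --password root --target-dir /user/hadoop/myimport33_1 --query "select help_keyword_id,name from mysql.help_keyword where \$CONDITIONS and name = 'STRING'" --split-by help_keyword_id -m 4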
2. Importing MySQL table data into Hive
Sqoop imports relational data into Hive by first importing it into HDFS and then loading it into Hive.
Basic import: the data lands in the default Hive database, and the table name matches the MySQL table name:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --hive-import -m 1
The import proceeds in three steps (see the sketch after this list):
Step 1: import the data of mysql.help_keyword into the default HDFS path.
Step 2: automatically create a Hive table modeled on mysql.help_keyword in the default database.
Step 3: load the data from the temporary directory into the Hive table.
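Roughly, the last two steps correspond to Hive statements like the following (a sketch only: the staging path is the default import path used above, and '\001' is Hive's default field delimiter, which Sqoop uses when none is specified):
hive> CREATE TABLE help_keyword (help_keyword_id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001';
hive> LOAD DATA INPATH '/user/hadoop/help_keyword' INTO TABLE help_keyword;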
View the data:
[hadoop@hadoop3 ~]$ hadoop fs -cat /user/hive/warehouse/help_keyword/part-m-00000
Specify the field and line delimiters, --hive-import, overwrite mode, automatic Hive table creation, the target database and table name, and deletion of the intermediate target directory:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --fields-terminated-by " " --lines-terminated-by " " --hive-import --hive-overwrite --create-hive-table --delete-target-dir --hive-database mydb_test --hive-table new_help_keyword
This command fails at first: with --hive-import, Sqoop automatically creates the Hive table, but it does not create a database that does not yet exist.
Manually create the mydb_test database:
hive> create database mydb_test;
OK
Time taken: 6.147 seconds
hive>
After that, re-running the import statement above succeeds.
Query the result:
select * from new_help_keyword limit 10;
The import statement above is equivalent to:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --fields-terminated-by " " --lines-terminated-by " " --hive-import --hive-overwrite --create-hive-table --hive-table mydb_test.new_help_keyword --delete-target-dir
Incremental import
Before running the incremental import, first empty the data in the help_keyword table in the Hive database:
truncate table help_keyword;
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --target-dir /user/hadoop/myimport_add --incremental append --check-column help_keyword_id --last-value 500 -m 1
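Separately, for repeated incremental loads it is common to save the import as a Sqoop job, so that the metastore tracks --last-value automatically after each run (a sketch; the job name myimport_add_job is a placeholder):
sqoop job --create myimport_add_job -- import --connect jdbc:mysql://hadoop1:3306/mysql --username root -P --table help_keyword --target-dir /user/hadoop/myimport_add --incremental append --check-column help_keyword_id --last-value 500 -m 1
sqoop job --exec myimport_add_job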
The incremental import command above runs successfully.
View the result:
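For example (assuming the target directory from the command above):
hadoop fs -ls /user/hadoop/myimport_add
hadoop fs -cat /user/hadoop/myimport_add/part-m-*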
3. Importing MySQL table data into HBase
Basic import:
sqoop import --connect jdbc:mysql://hadoop1:3306/mysql --username root --password root --table help_keyword --hbase-table new_help_keyword --column-family person --hbase-row-key help_keyword_id
This fails at first, because the HBase table has to be created before the import statement can run:
hbase(main):001:0> create 'new_help_keyword', 'base_info'
0 row(s) in 3.6280 seconds

=> Hbase::Table - new_help_keyword
hbase(main):002:0>
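After the table exists, re-run the import, and the rows can be inspected from the HBase shell (a quick verification sketch; note that the column family named in the create statement must match the --column-family passed to the import above):
hbase(main):003:0> scan 'new_help_keyword', {LIMIT => 10}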