Setting up a hadoop+spark+hive environment (installing and configuring Hive)

Posted by 沉默的赌徒


The setup takes three steps in total:

Step 1, install and configure Hadoop: Setting up a hadoop+spark+hive environment (fast fully remote Hadoop installation and configuration on CentOS)

Step 2, install and configure Spark: Setting up a hadoop+spark+hive environment (fast Spark installation and configuration on CentOS)

Step 3, install and configure Hive: Setting up a hadoop+spark+hive environment (fast Hive installation and configuration on CentOS)

I. Download and extract Hive

# Download Hive
wget http://apache.claz.org/hive/hive-2.3.6/apache-hive-2.3.6-bin.tar.gz

# Extract
tar zxf apache-hive-2.3.6-bin.tar.gz

# Move it into the hadoop directory (keep the name consistent with HIVE_HOME below)
mv apache-hive-2.3.6-bin /usr/local/hadoop/hive

# Configure the system environment variables
vim /etc/profile
# Add the following three lines

export HIVE_HOME=/usr/local/hadoop/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin
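
After editing /etc/profile, reload it so the variables take effect in the current shell, then sanity-check them; a quick verification, assuming the paths above:

# Reload the profile and verify the variables
source /etc/profile
echo $HIVE_HOME
# should print /usr/local/hadoop/hive
ls $HIVE_HOME/bin/hive
# the hive launcher script should be listed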


II. Install other dependency packages

1. The mysql-connector driver

# Download the mysql-connector driver
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz

# Extract
tar zxf mysql-connector-java-5.1.48.tar.gz 

# Enter the extracted directory and copy the two jars, mysql-connector-java-5.1.48-bin.jar and mysql-connector-java-5.1.48.jar, into hive/lib
cd mysql-connector-java-5.1.48
cp mysql-connector-java-5.1.48-bin.jar /usr/local/hadoop/hive/lib/
cp mysql-connector-java-5.1.48.jar /usr/local/hadoop/hive/lib/
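
The metastore will live in MySQL, so a database account must exist that matches the javax.jdo.option.* values in the hive-site.xml below (host servera, database hive, user hive, password 123456). A minimal sketch, assuming MySQL 5.x is already running on servera and you have root access; on MySQL 8.x you would CREATE USER separately instead of using IDENTIFIED BY inside GRANT:

# Create the metastore database and account (run once, on servera)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS hive; GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY '123456'; FLUSH PRIVILEGES;"

Since the JDBC URL below carries createDatabaseIfNotExist=true, the database itself would also be created on first connect; the GRANT is the part that actually matters.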

2. Jars required for Hive on Spark

# From the jars directory of the installed Spark, copy the scala-library, spark-core and spark-network-common jars into hive/lib (the version suffixes differ depending on your versions)
cd /usr/local/hadoop/spark/jars
# adjust the path above if your Spark lives elsewhere
cp scala-library-2.11.12.jar spark-core_2.11-2.4.4.jar spark-network-common_2.11-2.4.4.jar /usr/local/hadoop/hive/lib/
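
For Hive to actually run queries on Spark, the execution engine also has to be switched over in hive-site.xml. These properties are not part of the configuration dump below, so here is a minimal sketch using the standard Hive on Spark property names; spark.eventLog.dir points at the /spark-logs HDFS directory created in the next step, and the executor memory is a placeholder to size for your cluster:

<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn</value>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>hdfs://servera:9000/spark-logs</value>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>1g</value>
</property>

If the Spark from step two runs standalone rather than on YARN, set spark.master to its master URL (for example spark://servera:7077) instead.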

3. What comes next is a really big configuration job, so please be patient

① Start Hadoop and create the HDFS directories

# Start hadoop
/usr/local/hadoop/hadoop-2.7.7/sbin/start-all.sh
# Create the HDFS directories (this is not the ordinary filesystem, so plain ls in bash will not find them)
hadoop fs -mkdir /data
hadoop fs -mkdir /data/hive
hadoop fs -mkdir /data/hive/warehouse
hadoop fs -mkdir /data/hive/temp
# /data/hive/temp must match hive.exec.scratchdir in hive-site.xml below
hadoop fs -mkdir /data/hive/log
hadoop fs -mkdir /spark-logs
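
Hive also needs write access to these directories: the scratch dir is expected to be world-writable (733, as the hive.exec.scratchdir description in hive-site.xml notes) and the warehouse group-writable. A short follow-up, assuming the directories were created as above:

# Open up permissions on the Hive directories
hadoop fs -chmod 733 /data/hive/temp
hadoop fs -chmod g+w /data/hive/warehouse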

② Configure hive-env.sh

# In the conf directory, create hive-env.sh from the template
cp hive-env.sh.template  hive-env.sh

# Add the following three lines, adjusting the paths to your installation
HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.7
# Path where the Hive configuration files are stored
export HIVE_CONF_DIR=/usr/local/hadoop/hive/conf
# Path where the Hive-related jars are stored
export HIVE_AUX_JARS_PATH=/usr/local/hadoop/hive/lib


③ Configure the hive-site.xml file

This step involves a huge amount of work, so I will paste my own hive-site.xml directly below; I will also upload the complete file to Baidu Wenku for everyone to download. The values you will most likely need to adapt are the servera host name, the MySQL connection settings (the javax.jdo.option.* properties), and the scratch and warehouse directories.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
--><configuration>
  <!-- Hive Execution Parameters -->
  <property>
    <name>hive.exec.script.wrapper</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.exec.plan</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.exec.stagingdir</name>
    <value>.hive-staging</value>
    <description>Directory name that will be created inside table locations in order to support HDFS encryption. This replaces ${hive.exec.scratchdir} for query results with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans.</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://servera:9000/data/hive/temp</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>hive.repl.rootdir</name>
    <value>/user/hive/repl/</value>
    <description>HDFS root dir for all replication dumps.</description>
  </property>
  <property>
    <name>hive.repl.cm.enabled</name>
    <value>false</value>
    <description>Turn on ChangeManager, so delete files will go to cmrootdir.</description>
  </property>
  <property>
    <name>hive.repl.cmrootdir</name>
    <value>/user/hive/cmroot/</value>
    <description>Root dir for ChangeManager, used for deleted files.</description>
  </property>
  <property>
    <name>hive.repl.cm.retain</name>
    <value>24h</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified.
      Time to retain removed files in cmrootdir.
    </description>
  </property>
  <property>
    <name>hive.repl.cm.interval</name>
    <value>3600s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Interval for the cmroot cleanup thread.
    </description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/usr/local/hadoop/hive/temp</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/usr/local/hadoop/hive/temp</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.scratch.dir.permission</name>
    <value>700</value>
    <description>The permission for the user specific scratch directories that get created.</description>
  </property>
  <property>
    <name>hive.exec.submitviachild</name>
    <value>false</value>
    <description/>
  </property>
  <property>
    <name>hive.exec.submit.local.task.via.child</name>
    <value>true</value>
    <description>
      Determines whether local tasks (typically mapjoin hashtable generation phase) runs in 
      separate JVM (true recommended) or not. 
      Avoids the overhead of spawning new JVM, but can lead to out-of-memory issues.
    </description>
  </property>
  <property>
    <name>hive.exec.script.maxerrsize</name>
    <value>100000</value>
    <description>
      Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). 
      This prevents runaway scripts from filling logs partitions to capacity
    </description>
  </property>
  <property>
    <name>hive.exec.script.allow.partial.consumption</name>
    <value>false</value>
    <description>
      When enabled, this option allows a user script to exit successfully without consuming 
      all the data from the standard input.
    </description>
  </property>
  <property>
    <name>stream.stderr.reporter.prefix</name>
    <value>reporter:</value>
    <description>Streaming jobs that log to standard error with this prefix can log counter or status information.</description>
  </property>
  <property>
    <name>stream.stderr.reporter.enabled</name>
    <value>true</value>
    <description>Enable consumption of status and counter messages for streaming jobs.</description>
  </property>
  <property>
    <name>hive.exec.compress.output</name>
    <value>false</value>
    <description>
      This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed. 
      The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
    </description>
  </property>
  <property>
    <name>hive.exec.compress.intermediate</name>
    <value>false</value>
    <description>
      This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. 
      The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
    </description>
  </property>
  <property>
    <name>hive.intermediate.compression.codec</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.intermediate.compression.type</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.exec.reducers.bytes.per.reducer</name>
    <value>256000000</value>
    <description>Size per reducer. The default is 256MB; i.e., if the input size is 1GB, it will use 4 reducers.</description>
  </property>
  <property>
    <name>hive.exec.reducers.max</name>
    <value>1009</value>
    <description>
      max number of reducers will be used. If the one specified in the configuration parameter mapred.reduce.tasks is
      negative, Hive will use this one as the max number of reducers when automatically determine number of reducers.
    </description>
  </property>
  <property>
    <name>hive.exec.pre.hooks</name>
    <value/>
    <description>
      Comma-separated list of pre-execution hooks to be invoked for each statement. 
      A pre-execution hook is specified as the name of a Java class which implements the 
      org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
    </description>
  </property>
  <property>
    <name>hive.exec.post.hooks</name>
    <value/>
    <description>
      Comma-separated list of post-execution hooks to be invoked for each statement. 
      A post-execution hook is specified as the name of a Java class which implements the 
      org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
    </description>
  </property>
  <property>
    <name>hive.exec.failure.hooks</name>
    <value/>
    <description>
      Comma-separated list of on-failure hooks to be invoked for each statement. 
      An on-failure hook is specified as the name of Java class which implements the 
      org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
    </description>
  </property>
  <property>
    <name>hive.exec.query.redactor.hooks</name>
    <value/>
    <description>
      Comma-separated list of hooks to be invoked for each query which can 
      transform the query before it's placed in the job.xml file. Must be a Java class which 
      extends from the org.apache.hadoop.hive.ql.hooks.Redactor abstract class.
    </description>
  </property>
  <property>
    <name>hive.client.stats.publishers</name>
    <value/>
    <description>
      Comma-separated list of statistics publishers to be invoked on counters on each job. 
      A client stats publisher is specified as the name of a Java class which implements the 
      org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface.
    </description>
  </property>
  <property>
    <name>hive.ats.hook.queue.capacity</name>
    <value>64</value>
    <description>
      Queue size for the ATS Hook executor. If the number of outstanding submissions 
      to the ATS executor exceed this amount, the Hive ATS Hook will not try to log queries to ATS.
    </description>
  </property>
  <property>
    <name>hive.exec.parallel</name>
    <value>false</value>
    <description>Whether to execute jobs in parallel</description>
  </property>
  <property>
    <name>hive.exec.parallel.thread.number</name>
    <value>8</value>
    <description>How many jobs at most can be executed in parallel</description>
  </property>
  <property>
    <name>hive.mapred.reduce.tasks.speculative.execution</name>
    <value>true</value>
    <description>Whether speculative execution for reducers should be turned on. </description>
  </property>
  <property>
    <name>hive.exec.counters.pull.interval</name>
    <value>1000</value>
    <description>
      The interval with which to poll the JobTracker for the counters the running job. 
      The smaller it is, the more load there will be on the JobTracker; the higher it is, the less granular the fetched counter data will be.
    </description>
  </property>
  <property>
    <name>hive.exec.dynamic.partition</name>
    <value>true</value>
    <description>Whether or not to allow dynamic partitions in DML/DDL.</description>
  </property>
  <property>
    <name>hive.exec.dynamic.partition.mode</name>
    <value>strict</value>
    <description>
      In strict mode, the user must specify at least one static partition
      in case the user accidentally overwrites all partitions.
      In nonstrict mode all partitions are allowed to be dynamic.
    </description>
  </property>
  <property>
    <name>hive.exec.max.dynamic.partitions</name>
    <value>1000</value>
    <description>Maximum number of dynamic partitions allowed to be created in total.</description>
  </property>
  <property>
    <name>hive.exec.max.dynamic.partitions.pernode</name>
    <value>100</value>
    <description>Maximum number of dynamic partitions allowed to be created in each mapper/reducer node.</description>
  </property>
  <property>
    <name>hive.exec.max.created.files</name>
    <value>100000</value>
    <description>Maximum number of HDFS files created by all mappers/reducers in a MapReduce job.</description>
  </property>
  <property>
    <name>hive.exec.default.partition.name</name>
    <value>__HIVE_DEFAULT_PARTITION__</value>
    <description>
      The default partition name in case the dynamic partition column value is null/empty string or any other values that cannot be escaped. 
      This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc). 
      The user has to be aware that the dynamic partition value should not contain this value to avoid confusions.
    </description>
  </property>
  <property>
    <name>hive.lockmgr.zookeeper.default.partition.name</name>
    <value>__HIVE_DEFAULT_ZOOKEEPER_PARTITION__</value>
    <description/>
  </property>
  <property>
    <name>hive.exec.show.job.failure.debug.info</name>
    <value>true</value>
    <description>
      If a job fails, whether to provide a link in the CLI to the task with the
      most failures, along with debugging hints if applicable.
    </description>
  </property>
  <property>
    <name>hive.exec.job.debug.capture.stacktraces</name>
    <value>true</value>
    <description>
      Whether or not stack traces parsed from the task logs of a sampled failed task 
      for each failed job should be stored in the SessionState
    </description>
  </property>
  <property>
    <name>hive.exec.job.debug.timeout</name>
    <value>30000</value>
    <description/>
  </property>
  <property>
    <name>hive.exec.tasklog.debug.timeout</name>
    <value>20000</value>
    <description/>
  </property>
  <property>
    <name>hive.output.file.extension</name>
    <value/>
    <description>
      String used as a file extension for output files. 
      If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise.
    </description>
  </property>
  <property>
    <name>hive.exec.mode.local.auto</name>
    <value>false</value>
    <description>Let Hive determine whether to run in local mode automatically</description>
  </property>
  <property>
    <name>hive.exec.mode.local.auto.inputbytes.max</name>
    <value>134217728</value>
    <description>When hive.exec.mode.local.auto is true, input bytes should less than this for local mode.</description>
  </property>
  <property>
    <name>hive.exec.mode.local.auto.input.files.max</name>
    <value>4</value>
    <description>When hive.exec.mode.local.auto is true, the number of tasks should less than this for local mode.</description>
  </property>
  <property>
    <name>hive.exec.drop.ignorenonexistent</name>
    <value>true</value>
    <description>Do not report an error if DROP TABLE/VIEW/Index/Function specifies a non-existent table/view/index/function</description>
  </property>
  <property>
    <name>hive.ignore.mapjoin.hint</name>
    <value>true</value>
    <description>Ignore the mapjoin hint</description>
  </property>
  <property>
    <name>hive.file.max.footer</name>
    <value>100</value>
    <description>maximum number of lines for footer user can define for a table file</description>
  </property>
  <property>
    <name>hive.resultset.use.unique.column.names</name>
    <value>true</value>
    <description>
      Make column names unique in the result set by qualifying column names with table alias if needed.
      Table alias will be added to column names for queries of type "select *" or 
      if query explicitly uses table alias "select r1.x..".
    </description>
  </property>
  <property>
    <name>fs.har.impl</name>
    <value>org.apache.hadoop.hive.shims.HiveHarFileSystem</value>
    <description>The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop versions less than 0.20</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://servera:9000/data/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value/>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <property>
    <name>hive.metastore.client.capability.check</name>
    <value>true</value>
    <description>Whether to check client capabilities for potentially breaking API usage.</description>
  </property>
  <property>
    <name>hive.metastore.fastpath</name>
    <value>false</value>
    <description>Used to avoid all of the proxies and object copies in the metastore.  Note, if this is set, you MUST use a local metastore (hive.metastore.uris must be empty) otherwise undefined and most likely undesired behavior will result</description>
  </property>
  <property>
    <name>hive.metastore.fshandler.threads</name>
    <value>15</value>
    <description>Number of threads to be allocated for metastore handler for fs operations.</description>
  </property>
  <property>
    <name>hive.metastore.hbase.catalog.cache.size</name>
    <value>50000</value>
    <description>Maximum number of objects we will place in the hbase metastore catalog cache.  The objects will be divided up by types that we need to cache.</description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggregate.stats.cache.size</name>
    <value>10000</value>
    <description>Maximum number of aggregate stats nodes that we will place in the hbase metastore aggregate stats cache.</description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggregate.stats.max.partitions</name>
    <value>10000</value>
    <description>Maximum number of partitions that are aggregated per cache node.</description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggregate.stats.false.positive.probability</name>
    <value>0.01</value>
    <description>Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%).</description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggregate.stats.max.variance</name>
    <value>0.1</value>
    <description>Maximum tolerable variance in number of partitions between a cached node and our request (default 10%).</description>
  </property>
  <property>
    <name>hive.metastore.hbase.cache.ttl</name>
    <value>600s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds for a cached node to be active in the cache before they become stale.
    </description>
  </property>
  <property>
    <name>hive.metastore.hbase.cache.max.writer.wait</name>
    <value>5000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Number of milliseconds a writer will wait to acquire the writelock before giving up.
    </description>
  </property>
  <property>
    <name>hive.metastore.hbase.cache.max.reader.wait</name>
    <value>1000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Number of milliseconds a reader will wait to acquire the readlock before giving up.
    </description>
  </property>
  <property>
    <name>hive.metastore.hbase.cache.max.full</name>
    <value>0.9</value>
    <description>Maximum cache full % after which the cache cleaner thread kicks in.</description>
  </property>
  <property>
    <name>hive.metastore.hbase.cache.clean.until</name>
    <value>0.8</value>
    <description>The cleaner thread cleans until cache reaches this % full size.</description>
  </property>
  <property>
    <name>hive.metastore.hbase.connection.class</name>
    <value>org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection</value>
    <description>Class used to connect to HBase</description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggr.stats.cache.entries</name>
    <value>10000</value>
    <description>How many in stats objects to cache in memory</description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggr.stats.memory.ttl</name>
    <value>60s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds stats objects live in memory after they are read from HBase.
    </description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggr.stats.invalidator.frequency</name>
    <value>5s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      How often the stats cache scans its HBase entries and looks for expired entries
    </description>
  </property>
  <property>
    <name>hive.metastore.hbase.aggr.stats.hbase.ttl</name>
    <value>604800s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds stats entries live in HBase cache after they are created.  They may be invalided by updates or partition drops before this.  Default is one week.
    </description>
  </property>
  <property>
    <name>hive.metastore.hbase.file.metadata.threads</name>
    <value>1</value>
    <description>Number of threads to use to read file metadata in background to cache it.</description>
  </property>
  <property>
    <name>hive.metastore.connect.retries</name>
    <value>3</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>
  <property>
    <name>hive.metastore.failure.retries</name>
    <value>1</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>
  <property>
    <name>hive.metastore.port</name>
    <value>9083</value>
    <description>Hive metastore listener port</description>
  </property>
  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds for the client to wait between consecutive connection attempts
    </description>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>600s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      MetaStore Client socket timeout in seconds
    </description>
  </property>
  <property>
    <name>hive.metastore.client.socket.lifetime</name>
    <value>0s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      MetaStore Client socket lifetime in seconds. After this time is exceeded, client
      reconnects on the next MetaStore operation. A value of 0s means the connection
      has an infinite lifetime.
    </description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.ds.connection.url.hook</name>
    <value/>
    <description>Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used</description>
  </property>
  <property>
    <name>javax.jdo.option.Multithreaded</name>
    <value>true</value>
    <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://servera:3306/hive?createDatabaseIfNotExist=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <property>
    <name>hive.metastore.dbaccess.ssl.properties</name>
    <value/>
    <description>
      Comma-separated SSL properties for metastore to access database when JDO connection URL
      enables SSL access. e.g. javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd.
    </description>
  </property>
  <property>
    <name>hive.hmshandler.retry.attempts</name>
    <value>10</value>
    <description>The number of times to retry a HMSHandler call if there were a connection error.</description>
  </property>
  <property>
    <name>hive.hmshandler.retry.interval</name>
    <value>2000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      The time between HMSHandler retry attempts on failure.
    </description>
  </property>
  <property>
    <name>hive.hmshandler.force.reload.conf</name>
    <value>false</value>
    <description>
      Whether to force reloading of the HMSHandler configuration (including
      the connection URL, before the next metastore query that accesses the
      datastore. Once reloaded, this value is reset to false. Used for
      testing only.
    </description>
  </property>
  <property>
    <name>hive.metastore.server.max.message.size</name>
    <value>104857600</value>
    <description>Maximum message size in bytes a HMS will accept.</description>
  </property>
  <property>
    <name>hive.metastore.server.min.threads</name>
    <value>200</value>
    <description>Minimum number of worker threads in the Thrift server's pool.</description>
  </property>
  <property>
    <name>hive.metastore.server.max.threads</name>
    <value>1000</value>
    <description>Maximum number of worker threads in the Thrift server's pool.</description>
  </property>
  <property>
    <name>hive.metastore.server.tcp.keepalive</name>
    <value>true</value>
    <description>Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections.</description>
  </property>
  <property>
    <name>hive.metastore.archive.intermediate.original</name>
    <value>_INTERMEDIATE_ORIGINAL</value>
    <description>
      Intermediate dir suffixes used for archiving. Not important what they
      are, as long as collisions are avoided
    </description>
  </property>
  <property>
    <name>hive.metastore.archive.intermediate.archived</name>
    <value>_INTERMEDIATE_ARCHIVED</value>
    <description/>
  </property>
  <property>
    <name>hive.metastore.archive.intermediate.extracted</name>
    <value>_INTERMEDIATE_EXTRACTED</value>
    <description/>
  </property>
  <property>
    <name>hive.metastore.kerberos.keytab.file</name>
    <value/>
    <description>The path to the Kerberos Keytab file containing the metastore Thrift server's service principal.</description>
  </property>
  <property>
    <name>hive.metastore.kerberos.principal</name>
    <value>hive-metastore/_HOST@EXAMPLE.COM</value>
    <description>
      The service principal for the metastore Thrift server. 
      The special string _HOST will be replaced automatically with the correct host name.
    </description>
  </property>
  <property>
    <name>hive.metastore.sasl.enabled</name>
    <value>false</value>
    <description>If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos.</description>
  </property>
  <property>
    <name>hive.metastore.thrift.framed.transport.enabled</name>
    <value>false</value>
    <description>If true, the metastore Thrift interface will use TFramedTransport. When false (default) a standard TTransport is used.</description>
  </property>
  <property>
    <name>hive.metastore.thrift.compact.protocol.enabled</name>
    <value>false</value>
    <description>
      If true, the metastore Thrift interface will use TCompactProtocol. When false (default) TBinaryProtocol will be used.
      Setting it to true will break compatibility with older clients running TBinaryProtocol.
    </description>
  </property>
  <property>
    <name>hive.metastore.token.signature</name>
    <value/>
    <description>The delegation token service name to match when selecting a token from the current user's tokens.</description>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.class</name>
    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
    <description>The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster.</description>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
    <value/>
    <description>
      The ZooKeeper token store connect string. You can re-use the configuration value
      set in hive.zookeeper.quorum, by leaving this parameter unset.
    </description>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
    <value>/hivedelegation</value>
    <description>
      The root path for token store data. Note that this is used by both HiveServer2 and
      MetaStore to store delegation Token. One directory gets created for each of them.
      The final directory names would have the servername appended to it (HIVESERVER2,
      METASTORE).
    </description>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.zookeeper.acl</name>
    <value/>
    <description>
      ACL for token store entries. Comma separated list of ACL entries. For example:
      sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa
      Defaults to all permissions for the hiveserver2/metastore process user.
    </description>
  </property>
  <property>
    <name>hive.metastore.cache.pinobjtypes</name>
    <value>Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order</value>
    <description>List of comma separated metastore object types that should be pinned in the cache</description>
  </property>
  <property>
    <name>datanucleus.connectionPoolingType</name>
    <value>BONECP</value>
    <description>
      Expects one of [bonecp, dbcp, hikaricp, none].
      Specify connection pool library for datanucleus
    </description>
  </property>
  <property>
    <name>datanucleus.connectionPool.maxPoolSize</name>
    <value>10</value>
    <description>
      Specify the maximum number of connections in the connection pool. Note: The configured size will be used by
       2 connection pools (TxnHandler and ObjectStore). When configuring the max connection pool size, it is 
      recommended to take into account the number of metastore instances and the number of HiveServer2 instances 
      configured with embedded metastore. To get optimal performance, set config to meet the following condition
      (2 * pool_size * metastore_instances + 2 * pool_size * HS2_instances_with_embedded_metastore) = 
      (2 * physical_core_count + hard_disk_count).
    </description>
  </property>
  <property>
    <name>datanucleus.rdbms.initializeColumnInfo</name>
    <value>NONE</value>
    <description>initializeColumnInfo setting for DataNucleus; set to NONE at least on Postgres.</description>
  </property>
  <property>
    <name>datanucleus.schema.validateTables</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema</description>
  </property>
  <property>
    <name>datanucleus.schema.validateColumns</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema</description>
  </property>
  <property>
    <name>datanucleus.schema.validateConstraints</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema</description>
  </property>
  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>false</value>
    <description>Auto creates the necessary schema on startup if one doesn't exist. Set this to false after creating it once. To enable auto create, also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases; run the schematool command instead.</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>true</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match the one from the Hive jars.
    </description>
  </property>
  <property>
    <name>hive.metastore.schema.verification.record.version</name>
    <value>false</value>
    <description>
      When true the current MS version is recorded in the VERSION table. If this is disabled and verification is
       enabled the MS will be unusable.
    </description>
  </property>
  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation.</description>
  </property>
  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>
  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>none</value>
    <description/>
  </property>
  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>
      Name of the identifier factory to use when generating table/column names etc. 
      'datanucleus1' is used for backward compatibility with DataNucleus v1
    </description>
  </property>
  <property>
    <name>datanucleus.rdbms.useLegacyNativeValueStrategy</name>
    <value>true</value>
    <description/>
  </property>
  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>
  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>
      Maximum number of objects (tables/partitions) can be retrieved from metastore in one batch. 
      The higher the number, the less the number of round trips is needed to the Hive metastore server, 
      but it may also cause higher memory requirement at the client side.
    </description>
  </property>
  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of objects that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.init.hooks</name>
    <value/>
    <description>
      A comma separated list of hooks to be invoked at the beginning of HMSHandler initialization. 
      An init hook is specified as the name of Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener.
    </description>
  </property>
  <property>
    <name>hive.metastore.pre.event.listeners</name>
    <value/>
    <description>List of comma separated listeners for metastore events.</description>
  </property>
  <property>
    <name>hive.metastore.event.listeners</name>
    <value/>
    <description>A comma separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. The metastore event and corresponding listener method will be invoked in separate JDO transactions. Alternatively, configure hive.metastore.transactional.event.listeners to ensure both are invoked in same JDO transaction.</description>
  </property>
  <property>
    <name>hive.metastore.transactional.event.listeners</name>
    <value/>
    <description>A comma separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. Both the metastore event and corresponding listener method will be invoked in the same JDO transaction.</description>
  </property>
  <property>
    <name>hive.metastore.event.db.listener.timetolive</name>
    <value>86400s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      time after which events will be removed from the database listener queue
    </description>
  </property>
  <property>
    <name>hive.metastore.authorization.storage.checks</name>
    <value>false</value>
    <description>
      Should the metastore do authorization checks against the underlying storage (usually hdfs) 
      for operations like drop-partition (disallow the drop-partition if the user in
      question doesn't have permissions to delete the corresponding directory
      on the storage).
    </description>
  </property>
  <property>
    <name>hive.metastore.authorization.storage.check.externaltable.drop</name>
    <value>true</value>
    <description>
      Should StorageBasedAuthorization check permission of the storage before dropping external table.
      StorageBasedAuthorization already does this check for managed table. For external table however,
      anyone who has read permission of the directory could drop external table, which is surprising.
      The flag is set to false by default to maintain backward compatibility.
    </description>
  </property>
  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Frequency at which timer task runs to purge expired events in metastore.
    </description>
  </property>
  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Duration after which events expire from events table
    </description>
  </property>
  <property>
    <name>hive.metastore.event.message.factory</name>
    <value>org.apache.hadoop.hive.metastore.messaging.json.JSONMessageFactory</value>
    <description>Factory class for making encoding and decoding messages in the events generated.</description>
  </property>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
    <description>
      In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using 
      the client's reported user and group permissions. Note that this property must be set on 
      both the client and server sides. Further note that it is best effort. 
      If the client sets it to true and the server sets it to false, the client setting will be ignored.
    </description>
  </property>
  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value/>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>
  <property>
    <name>hive.metastore.integral.jdo.pushdown</name>
    <value>false</value>
    <description>
      Allow JDO query pushdown for integral partition columns in metastore. Off by default. This
      improves metastore perf for integral columns, especially if there's a large number of partitions.
      However, it doesn't work correctly with integral values that are not normalized (e.g. have
      leading zeroes, like 0012). If metastore direct SQL is enabled and works, this optimization
      is also irrelevant.
    </description>
  </property>
  <property>
    <name>hive.metastore.try.direct.sql</name>
    <value>true</value>
    <description>
      Whether the Hive metastore should try to use direct SQL queries instead of the
      DataNucleus for certain read paths. This can improve metastore performance when
      fetching many partitions or column statistics by orders of magnitude; however, it
      is not guaranteed to work on all RDBMS-es and all versions. In case of SQL failures,
      the metastore will fall back to the DataNucleus, so it's safe even if SQL doesn't
      work for all queries on your datastore. If all SQL queries fail (for example, your
      metastore is backed by MongoDB), you might want to disable this to save the
      try-and-fall-back cost.
    </description>
  </property>
  <property>
    <name>hive.metastore.direct.sql.batch.size</name>
    <value>0</value>
    <description>
      Batch size for partition and other object retrieval from the underlying DB in direct
      SQL. For some DBs like Oracle and MSSQL, there are hardcoded or perf-based limitations
      that necessitate this. For DBs that can handle the queries, this isn't necessary and
      may impede performance. -1 means no batching, 0 means automatic batching.
    </description>
  </property>
  <property>
    <name>hive.metastore.try.direct.sql.ddl</name>
    <value>true</value>
    <description>
      Same as hive.metastore.try.direct.sql, for read statements within a transaction that
      modifies metastore data. Due to non-standard behavior in Postgres, if a direct SQL
      select query has incorrect syntax or something similar inside a transaction, the
      entire transaction will fail and fall-back to DataNucleus will not be possible. You
      should disable the usage of direct SQL inside transactions if that happens in your case.
    </description>
  </property>
  <property>
    <name>hive.direct.sql.max.query.length</name>
    <value>100</value>
    <description>The maximum size of a query string (in KB).</description>
  </property>
  <property>
    <name>hive.direct.sql.max.elements.in.clause</name>
    <value>1000</value>
    <description>
      The maximum number of values in an IN clause. Once exceeded, it will be broken into
       multiple OR separated IN clauses.
    </description>
  </property>
  <property>
    <name>hive.direct.sql.max.elements.values.clause</name>
    <value>1000</value>
    <description>The maximum number of values in a VALUES clause for INSERT statement.</description>
  </property>
  <property>
    <name>hive.metastore.orm.retrieveMapNullsAsEmptyStrings</name>
    <value>false</value>
    <description>Thrift does not support nulls in maps, so any nulls present in maps retrieved from ORM must either be pruned or converted to empty strings. Some backing dbs such as Oracle persist empty strings as nulls, so we should set this parameter if we wish to reverse that behaviour. For others, pruning is the correct behaviour</description>
  </property>
  <property>
    <name>hive.metastore.disallow.incompatible.col.type.changes</name>
    <value>true</value>
    <description>
      If true (default is false), ALTER TABLE operations which change the type of a
      column (say STRING) to an incompatible type (say MAP) are disallowed.
      RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the
      datatypes can be converted from string to any type. The map is also serialized as
      a string, which can be read as a string as well. However, with any binary
      serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions
      when subsequently trying to access old partitions.
      
      Primitive types like INT, STRING, BIGINT, etc., are compatible with each other and are
      not blocked.
      
      See HIVE-4409 for more details.
    </description>
  </property>
  <property>
    <name>hive.metastore.limit.partition.request</name>
    <value>-1</value>
    <description>
      This limits the number of partitions that can be requested from the metastore for a given table.
      The default value "-1" means no limit.
    </description>
  </property>
  <property>
    <name>hive.table.parameters.default</name>
    <value/>
    <description>Default property values for newly created tables</description>
  </property>
  <property>
    <name>hive.ddl.createtablelike.properties.whitelist</name>
    <value/>
    <description>Table Properties to copy over when executing a Create Table Like.</description>
  </property>
  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>
      Name of the class that implements org.apache.hadoop.hive.metastore.rawstore interface. 
      This class is used to store and retrieval of raw metadata objects such as table, database
    </description>
  </property>
  <property>
    <name>hive.metastore.txn.store.impl</name>
    <value>org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler</value>
    <description>Name of class that implements org.apache.hadoop.hive.metastore.txn.TxnStore.  This class is used to store and retrieve transactions and locks</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>class implementing the jdo persistence</description>
  </property>
  <property>
    <name>hive.metastore.expression.proxy</name>
    <value>org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>Detaches all objects from session so that they can be used after transaction is committed</description>
  </property>
  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>Reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value/>
    <description>List of comma separated listeners for the end of metastore functions.</description>
  </property>
  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value/>
    <description>
      List of comma separated keys occurring in table properties which will get inherited to newly created partitions. 
      * implies all the keys will get inherited.
    </description>
  </property>
  <property>
    <name>hive.metastore.filter.hook</name>
    <value>org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl</value>
    <description>Metastore hook class for filtering the metadata read results. If hive.security.authorization.manager is set to an instance of HiveAuthorizerFactory, then this value is ignored.</description>
  </property>
  <property>
    <name>hive.metastore.dml.events</name>
    <value>false</value>
    <description>If true, the metastore will be asked to fire events for DML operations</description>
  </property>
  <property>
    <name>hive.metastore.client.drop.partitions.using.expressions</name>
    <value>true</value>
    <description>Choose whether dropping partitions with HCatClient pushes the partition-predicate to the metastore, or drops partitions iteratively</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.enabled</name>
    <value>true</value>
    <description>Whether aggregate stats caching is enabled or not.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.size</name>
    <value>10000</value>
    <description>Maximum number of aggregate stats nodes that we will place in the metastore aggregate stats cache.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.partitions</name>
    <value>10000</value>
    <description>Maximum number of partitions that are aggregated per cache node.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.fpp</name>
    <value>0.01</value>
    <description>Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%).</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.variance</name>
    <value>0.01</value>
    <description>Maximum tolerable variance in number of partitions between a cached node and our request (default 1%).</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.ttl</name>
    <value>600s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds for a cached node to be active in the cache before they become stale.
    </description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.writer.wait</name>
    <value>5000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Number of milliseconds a writer will wait to acquire the writelock before giving up.
    </description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.reader.wait</name>
    <value>1000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Number of milliseconds a reader will wait to acquire the readlock before giving up.
    </description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.full</name>
    <value>0.9</value>
    <description>Maximum cache full % after which the cache cleaner thread kicks in.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.clean.until</name>
    <value>0.8</value>
    <description>The cleaner thread cleans until cache reaches this % full size.</description>
  </property>
</configuration>
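
With hive-site.xml in place, the metastore schema must be initialized in MySQL once before the first launch. Since hive.metastore.uris is left empty above, Hive runs with an embedded metastore and the CLI can be started directly afterwards; schematool reads the javax.jdo.option.* connection settings from the file:

# Initialize the metastore schema (run once)
schematool -dbType mysql -initSchema
# Start the Hive CLI to verify the installation
hive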
