The HQL Execution Process in Hive 3.1.2

Posted by 虎鲸不是鱼

Preface

The previous article walked through the Beeline execution process of Hive 3.1.2: lizhiyong.blog.csdn.net/article/details/126634843

In summary:

The main method initializes a Beeline object → invokes its single entry method → the load method that reads configuration →
the setupHistory method that enables command history → the addBeelineShutdownHook method for graceful shutdown →
the initializeConsoleReader method that initializes the console reader → the initArgs method that parses the arguments →
the dispatch method that routes commands → the executeFile method for script files [not necessarily executed] →
finally the execute method [when a script file is passed via -f, it also invokes the dispatch method internally]

Digging through the source code led to this conclusion: under the hood, Beeline operates Hive via JDBC.
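That is easy to sanity-check from the shell, since Beeline reaches HiveServer2 through a jdbc:hive2:// URL. A minimal smoke test, assuming HiveServer2 listens on the default port 10000 (the host, user, and database below are illustrative assumptions for this cluster):

beeline -u "jdbc:hive2://zhiyong2:10000/default" -n hadoop -e "show databases;"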

[root@zhiyong2 ~]# cd /opt/usdp-srv/srv/udp/2.0.0.0/hive/bin
[root@zhiyong2 bin]# ls -ltr
总用量 64
-rwxrwxrwx. 1 hadoop hadoop   884 8月23 2019 schematool
-rwxrwxrwx. 1 hadoop hadoop   832 8月23 2019 metatool
-rwxrwxrwx. 1 hadoop hadoop  3064 8月23 2019 init-hive-dfs.sh
-rwxrwxrwx. 1 hadoop hadoop   880 8月23 2019 hplsql
-rwxrwxrwx. 1 hadoop hadoop   885 8月23 2019 hiveserver2
-rwxrwxrwx. 1 hadoop hadoop   881 8月23 2019 beeline
drwxrwxrwx. 3 hadoop hadoop  4096 12月24 2020 ext
-rwxrwxrwx. 1 hadoop hadoop  1981 12月14 2021 hive-config.sh
-rwxrwxrwx. 1 hadoop hadoop 10414 3月1 2022 hive
-rwxrwxrwx. 1 hadoop hadoop   141 3月1 2022 init-metastore-db.sh
-rwxrwxrwx. 1 hadoop hadoop   601 3月1 2022 metastore-ctl.sh
-rwxrwxrwx. 1 hadoop hadoop   588 3月1 2022 hive-server2-ctl.sh
-rwxrwxrwx. 1 hadoop hadoop   962 3月1 2022 check-warehouse-dir.sh
-rwxrwxrwx. 1 hadoop hadoop  1077 3月1 2022 check-tez-dir.sh

In the bin directory of the Hive installation there are the two shell scripts beeline and hive. The content of beeline:

#!/usr/bin/env bash

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hive --service beeline "$@"

Very concise: it resolves its own bin directory and then sources the hive shell script with --service beeline, forwarding all of its arguments.
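In other words, the following two invocations are equivalent (the JDBC URL is only an illustrative assumption):

beeline -u "jdbc:hive2://zhiyong2:10000/default" -n hadoop
/opt/usdp-srv/srv/udp/2.0.0.0/hive/bin/hive --service beeline -u "jdbc:hive2://zhiyong2:10000/default" -n hadoop

The hive script itself: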

#!/usr/bin/env bash

cygwin=false
case "`uname`" in
   CYGWIN*) cygwin=true;;
esac

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hive-config.sh

SERVICE=""
HELP=""
SKIP_HBASECP=false
SKIP_HADOOPVERSION=false

SERVICE_ARGS=()
while [ $# -gt 0 ]; do
  case "$1" in
    --version)
      shift
      SERVICE=version
      ;;
    --service)
      shift
      SERVICE=$1
      shift
      ;;
    --rcfilecat)
      SERVICE=rcfilecat
      shift
      ;;
    --orcfiledump)
      SERVICE=orcfiledump
      shift
      ;;
    --llapdump)
      SERVICE=llapdump
      shift
      ;;
    --skiphadoopversion)
      SKIP_HADOOPVERSION=true
      shift
      ;;
    --skiphbasecp)
      SKIP_HBASECP=true
      shift
      ;;
    --help)
      HELP=_help
      shift
      ;;
    --debug*)
      DEBUG=$1
      shift
      ;;
    *)
      SERVICE_ARGS=("$SERVICE_ARGS[@]" "$1")
      shift
      ;;
  esac
done

if [ "$SERVICE" = "" ] ; then
  if [ "$HELP" = "_help" ] ; then
    SERVICE="help"
  else
    SERVICE="cli"
  fi
fi

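# opt-in switch: with USE_BEELINE_FOR_HIVE_CLI=true in the environment (or set
# via hive-env.sh), the legacy Hive CLI is transparently replaced by Beeline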
if [[ "$SERVICE" == "cli" && "$USE_BEELINE_FOR_HIVE_CLI" == "true" ]] ; then
  SERVICE="beeline"
fi

if [[ "$SERVICE" =~ ^(help|version|orcfiledump|rcfilecat|schemaTool|cleardanglingscratchdir|metastore|beeline|llapstatus|llap)$ ]] ; then
  SKIP_HBASECP=true
fi

if [[ "$SERVICE" =~ ^(help|schemaTool)$ ]] ; then
  SKIP_HADOOPVERSION=true
fi

if [ -f "$HIVE_CONF_DIR/hive-env.sh" ]; then
  . "$HIVE_CONF_DIR/hive-env.sh"
fi

if [[ -z "$SPARK_HOME" ]]
then
  bin=`dirname "$0"`
  # many hadoop installs are in dir/spark,hive,hadoop,..
  if test -e $bin/../../spark; then
    sparkHome=$(readlink -f $bin/../../spark)
    if [[ -d $sparkHome ]]
    then
      export SPARK_HOME=$sparkHome
    fi
  fi
fi

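# seed the classpath with the Tez conf dir (default /etc/tez/conf) and the Hive conf dir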
CLASSPATH="$TEZ_CONF_DIR:-/etc/tez/conf:$HIVE_CONF_DIR"

HIVE_LIB=$HIVE_HOME/lib

# needed for execution
if [ ! -f $HIVE_LIB/hive-exec-*.jar ]; then
  echo "Missing Hive Execution Jar: $HIVE_LIB/hive-exec-*.jar"
  exit 1;
fi

if [ ! -f $HIVE_LIB/hive-metastore-*.jar ]; then
  echo "Missing Hive MetaStore Jar"
  exit 2;
fi

# cli specific code
if [ ! -f $HIVE_LIB/hive-cli-*.jar ]; then
  echo "Missing Hive CLI Jar"
  exit 3;
fi

# Hbase and Hadoop use their own log4j jars.  Including hives log4j jars can cause
# log4j warnings.  So save hives log4j jars in LOG_JAR_CLASSPATH, and add it to classpath
# after Hbase and Hadoop calls finish
LOG_JAR_CLASSPATH="";

for f in $HIVE_LIB/*.jar; do
  if [[ $f == *"log4j"* ]]; then
    LOG_JAR_CLASSPATH=$LOG_JAR_CLASSPATH:$f;
  else
    CLASSPATH=$CLASSPATH:$f;
  fi
done

# add the auxillary jars such as serdes
if [ -d "$HIVE_AUX_JARS_PATH" ]; then
  hive_aux_jars_abspath=`cd $HIVE_AUX_JARS_PATH && pwd`
  for f in $hive_aux_jars_abspath/*.jar; do
    if [[ ! -f $f ]]; then
        continue;
    fi
    if $cygwin; then
	f=`cygpath -w "$f"`
    fi
    AUX_CLASSPATH=$AUX_CLASSPATH:$f
    if [ "$AUX_PARAM" == "" ]; then
        AUX_PARAM=file://$f
    else
        AUX_PARAM=$AUX_PARAM,file://$f;
    fi
  done
elif [ "$HIVE_AUX_JARS_PATH" != "" ]; then
  HIVE_AUX_JARS_PATH=`echo $HIVE_AUX_JARS_PATH | sed 's/,/:/g'`
  if $cygwin; then
      HIVE_AUX_JARS_PATH=`cygpath -p -w "$HIVE_AUX_JARS_PATH"`
      HIVE_AUX_JARS_PATH=`echo $HIVE_AUX_JARS_PATH | sed 's/;/,/g'`
  fi
  AUX_CLASSPATH=$AUX_CLASSPATH:$HIVE_AUX_JARS_PATH
  AUX_PARAM="file://$(echo $HIVE_AUX_JARS_PATH | sed 's/:/,file:\\/\\//g')"
fi

# adding jars from auxlib directory
for f in $HIVE_HOME/auxlib/*.jar; do
  if [[ ! -f $f ]]; then
      continue;
  fi
  if $cygwin; then
      f=`cygpath -w "$f"`
  fi
  AUX_CLASSPATH=$AUX_CLASSPATH:$f
  if [ "$AUX_PARAM" == "" ]; then
    AUX_PARAM=file://$f
  else
    AUX_PARAM=$AUX_PARAM,file://$f;
  fi
done
if $cygwin; then
    CLASSPATH=`cygpath -p -w "$CLASSPATH"`
    CLASSPATH=$CLASSPATH;$AUX_CLASSPATH
else
    CLASSPATH=$CLASSPATH:$AUX_CLASSPATH
fi

# supress the HADOOP_HOME warnings in 1.x.x
export HADOOP_HOME_WARN_SUPPRESS=true

# to make sure log4j2.x and jline jars are loaded ahead of the jars pulled by hadoop
export HADOOP_USER_CLASSPATH_FIRST=true

# pass classpath to hadoop
if [ "$HADOOP_CLASSPATH" != "" ]; then
  export HADOOP_CLASSPATH="$CLASSPATH:$HADOOP_CLASSPATH"
else
  export HADOOP_CLASSPATH="$CLASSPATH"
fi

# also pass hive classpath to hadoop
if [ "$HIVE_CLASSPATH" != "" ]; then
  export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HIVE_CLASSPATH";
fi

# check for hadoop in the path
HADOOP_IN_PATH=`which hadoop 2>/dev/null`
if [ -f $HADOOP_IN_PATH ]; then
  HADOOP_DIR=`dirname "$HADOOP_IN_PATH"`/..
fi
# HADOOP_HOME env variable overrides hadoop in the path
HADOOP_HOME=${HADOOP_HOME:-${HADOOP_PREFIX:-$HADOOP_DIR}}
if [ "$HADOOP_HOME" == "" ]; then
  echo "Cannot find hadoop installation: \\$HADOOP_HOME or \\$HADOOP_PREFIX must be set or hadoop must be in the path";
  exit 4;
fi

# add distcp to classpath, hive depends on it
for f in $HADOOP_HOME/share/hadoop/tools/lib/hadoop-distcp-*.jar; do
  export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f;
done

HADOOP=$HADOOP_HOME/bin/hadoop
if [ ! -f $HADOOP ]; then
  echo "Cannot find hadoop installation: \\$HADOOP_HOME or \\$HADOOP_PREFIX must be set or hadoop must be in the path";
  exit 4;
fi

if [ "$SKIP_HADOOPVERSION" = false ]; then
  # Make sure we're using a compatible version of Hadoop
  if [ "x$HADOOP_VERSION" == "x" ]; then
      HADOOP_VERSION=$($HADOOP version 2>&2 | awk -F"\t" '/Hadoop/ {print $0}' | cut -d' ' -f 2);
  fi

  # Save the regex to a var to workaround quoting incompatabilities
  # between Bash 3.1 and 3.2
  hadoop_version_re="^([[:digit:]]+)\\.([[:digit:]]+)(\\.([[:digit:]]+))?.*$"

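  # BASH_REMATCH holds the capture groups of the most recent [[ =~ ]] match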
  if [[ "$HADOOP_VERSION" =~ $hadoop_version_re ]]; then
      hadoop_major_ver=${BASH_REMATCH[1]}
      hadoop_minor_ver=${BASH_REMATCH[2]}
      hadoop_patch_ver=${BASH_REMATCH[4]}
  else
      echo "Unable to determine Hadoop version information."
      echo "'hadoop version' returned:"
      echo `$HADOOP version`
      exit 5
  fi

  if [ "$hadoop_major_ver" -lt "1" -a  "$hadoop_minor_ver$hadoop_patch_ver" -lt "201" ]; then
      echo "Hive requires Hadoop 0.20.x (x >= 1)."
      echo "'hadoop version' returned:"
      echo `$HADOOP version`
      exit 6
  fi
fi

if [ "$SKIP_HBASECP" = false ]; then
  # HBase detection. Need bin/hbase and a conf dir for building classpath entries.
  # Start with BigTop defaults for HBASE_HOME and HBASE_CONF_DIR.
  HBASE_HOME=${HBASE_HOME:-"/usr/lib/hbase"}
  HBASE_CONF_DIR=${HBASE_CONF_DIR:-"/etc/hbase/conf"}
  if [[ ! -d $HBASE_CONF_DIR ]] ; then
    # not explicitly set, nor in BigTop location. Try looking in HBASE_HOME.
    HBASE_CONF_DIR="$HBASE_HOME/conf"
  fi

  # perhaps we've located the HBase config. if so, include it on classpath.
  if [[ -d $HBASE_CONF_DIR ]] ; then
    export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_CONF_DIR"
  fi

  # look for the hbase script. First check HBASE_HOME and then ask PATH.
  if [[ -e $HBASE_HOME/bin/hbase ]] ; then
    HBASE_BIN="$HBASE_HOME/bin/hbase"
  fi
  HBASE_BIN=${HBASE_BIN:-"$(which hbase)"}

  # perhaps we've located HBase. If so, include its details on the classpath
  if [[ -n $HBASE_BIN ]] ; then
    # exclude ZK, PB, and Guava (See HIVE-2055)
    # depends on HBASE-8438 (hbase-0.94.14+, hbase-0.96.1+) for `hbase mapredcp` command
    for x in $($HBASE_BIN mapredcp 2>&2 | tr ':' '\n') ; do
      if [[ $x == *zookeeper* || $x == *protobuf-java* || $x == *guava* ]] ; then
        continue
      fi
      # TODO: should these should be added to AUX_PARAM as well?
      export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$x"
    done
  fi
fi

if [ "$AUX_PARAM" != "" ]; then
  if [[ "$SERVICE" != beeline ]]; then
    HIVE_OPTS="$HIVE_OPTS --hiveconf hive.aux.jars.path=$AUX_PARAM"
  fi
  AUX_JARS_CMD_LINE="-libjars $AUX_PARAM"
fi

if [ "$SERVICE" = "hiveserver2" ] ; then
  export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS $HIVE_SERVER2_JMX_OPTS "
fi

if [ "$SERVICE" = "metastore" ] ; then
  export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS $HIVE_METASTORE_JMX_OPTS "
fi

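# sourcing the scripts in ext/ and ext/util/ below makes each of them append its
# service name to SERVICE_LIST and define a shell function with that same name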
SERVICE_LIST=""

for i in "$bin"/ext/*.sh ; do
  . $i
done

for i in "$bin"/ext/util/*.sh ; do
  . $i
done

if [ "$DEBUG" ]; then
  if [ "$HELP" ]; then
    debug_help
    exit 0
  else
    get_debug_params "$DEBUG"
    export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS $HIVE_MAIN_CLIENT_DEBUG_OPTS"
  fi
fi

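# resolve the requested SERVICE to one of the functions defined by the ext/ scripts;
# with --help, TORUN becomes "<service>_help"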
TORUN=""
for j in $SERVICE_LIST ; do
  if [ "$j" = "$SERVICE" ] ; then
    TORUN=$j$HELP
  fi
done

# to initialize logging for all services

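# -Dlog4j2.formatMsgNoLookups=true is the Log4Shell (CVE-2021-44228) mitigation;
# it is not in stock Apache Hive 3.1.2 and was presumably patched in by this USDP build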
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Dlog4j2.formatMsgNoLookups=true -Dlog4j.configurationFile=hive-log4j2.properties "

if [ -f "$HIVE_CONF_DIR/parquet-logging.properties" ]; then
  export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Djava.util.logging.config.file=$HIVE_CONF_DIR/parquet-logging.properties "
else
  export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Djava.util.logging.config.file=$bin/../conf/parquet-logging.properties "
fi

if [[ "$SERVICE" =~ ^(hiveserver2|beeline|cli)$ ]] ; then
  # If process is backgrounded, don't change terminal settings
  if [[ ( ! $(ps -o stat= -p $$) =~ "+" ) && ! ( -p /dev/stdin ) && ( ! $(ps -o tty= -p $$) =~ "?" ) ]]; then
    export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Djline.terminal=jline.UnsupportedTerminal"
  fi
fi

# include the log4j jar that is used for hive into the classpath
CLASSPATH="$CLASSPATH:$LOG_JAR_CLASSPATH"
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$LOG_JAR_CLASSPATH"

if [ "$TORUN" = "" ] ; then
  echo "Service $SERVICE not found"
  echo "Available Services: $SERVICE_LIST"
  exit 7
else
  set -- "$SERVICE_ARGS[@]"
  $TORUN "$@"
fi

From the hive script, the crucial part is this fragment:

for i in "$bin"/ext/*.sh ; do
  . $i
done

for i in "$bin"/ext/util/*.sh ; do
  . $i
done

The hive script sources every script under ext/ and ext/util/ of the installation path, so all the functions and variables they define land in the launcher's own shell. That is, these scripts (a sketch of the pattern they all follow appears after the listing):

/opt/usdp-srv/srv/udp/2.0.0.0/hive/bin
[root@zhiyong2 bin]# ll
总用量 64
-rwxrwxrwx. 1 hadoop hadoop   881 8月23 2019 beeline
-rwxrwxrwx. 1 hadoop hadoop  1077 3月1 2022 check-tez-dir.sh
-rwxrwxrwx. 1 hadoop hadoop   962 3月1 2022 check-warehouse-dir.sh
drwxrwxrwx. 3 hadoop hadoop  4096 12月24 2020 ext
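Each script under ext/ follows the same convention. Below is a trimmed sketch of what ext/beeline.sh does (the THISSERVICE/SERVICE_LIST registration and the org.apache.hive.beeline.BeeLine main class match the Hive source; the classpath handling is simplified here):

# register the service name, then define a same-named function for the launcher to dispatch to
THISSERVICE=beeline
export SERVICE_LIST="${SERVICE_LIST}${THISSERVICE} "

beeline () {
  CLASS=org.apache.hive.beeline.BeeLine
  # simplified: the real script narrows HADOOP_CLASSPATH to hive-beeline-*.jar,
  # super-csv-*.jar and jline-*.jar before handing off to hadoop
  beelineJarPath=`ls ${HIVE_LIB}/hive-beeline-*.jar`
  exec $HADOOP jar ${beelineJarPath} $CLASS $HIVE_OPTS "$@"
}

beeline_help () {
  beeline "--help"
}

This is why the launcher's final $TORUN "$@" works: by the time it runs, cli, beeline, hiveserver2 and the rest all exist as shell functions, and SERVICE_LIST names every one of them.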