All tunable Hadoop and Hive parameters in beeline

Posted by 虎鲸不是鱼


The Beeline execution flow in Hive 3.1.2

This post is an appendix to the previous article. For details, see: https://lizhiyong.blog.csdn.net/article/details/126634843


Below is the output of running `set -v` in beeline, included for the reader's reference:
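The same dump can also be captured non-interactively. A minimal sketch, assuming the host, port, and `hadoop` user from the session below (adjust the JDBC URL and credentials for your cluster; the output filename is illustrative):

```shell
# Dump all tunable Hadoop/Hive parameters to a file without an
# interactive session. --outputformat=tsv2 drops the ASCII table
# borders, which makes the result easier to grep and diff.
beeline -u "jdbc:hive2://192.168.88.101:10000/default" \
        -n hadoop \
        --outputformat=tsv2 --silent=true \
        -e "set -v;" > set_v_dump.txt
```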


Last login: Wed Aug 31 23:10:00 2022 from 192.168.88.1
[root@zhiyong2 ~]# beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/yarn/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 3.1.2 by Apache Hive
beeline> !connect jdbc:hive2://192.168.88.101:10000/default;
Connecting to jdbc:hive2://192.168.88.101:10000/default;
Enter username for jdbc:hive2://192.168.88.101:10000/default:
Enter password for jdbc:hive2://192.168.88.101:10000/default: ******
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.88.101:10000/default> set -v;
+----------------------------------------------------+
|                        set                         |
+----------------------------------------------------+
| _hive.hdfs.session.path=/zhiyong-1/tmp/hive/hadoop/151c065e-2d33-417a-80cb-a55a71cce6a2 |
| _hive.local.session.path=/tmp/hadoop/151c065e-2d33-417a-80cb-a55a71cce6a2 |
| _hive.tmp_table_space=/zhiyong-1/tmp/hive/hadoop/151c065e-2d33-417a-80cb-a55a71cce6a2/_tmp_space.db |
| adl.feature.ownerandgroup.enableupn=false          |
| datanode.https.port=50475                          |
| datanucleus.cache.level2=false                     |
| datanucleus.cache.level2.type=none                 |
| datanucleus.connectionPool.maxPoolSize=10          |
| datanucleus.connectionPoolingType=HikariCP         |
| datanucleus.identifierFactory=datanucleus1         |
| datanucleus.plugin.pluginRegistryBundleCheck=LOG   |
| datanucleus.rdbms.initializeColumnInfo=NONE        |
| datanucleus.rdbms.useLegacyNativeValueStrategy=true |
| datanucleus.schema.autoCreateAll=false             |
| datanucleus.schema.validateColumns=false           |
| datanucleus.schema.validateConstraints=false       |
| datanucleus.schema.validateTables=false            |
| datanucleus.storeManagerType=rdbms                 |
| datanucleus.transactionIsolation=read-committed    |
| dfs.balancer.address=0.0.0.0:0                     |
| dfs.balancer.block-move.timeout=0                  |
| dfs.balancer.dispatcherThreads=200                 |
| dfs.balancer.getBlocks.min-block-size=10485760     |
| dfs.balancer.getBlocks.size=2147483648             |
| dfs.balancer.keytab.enabled=false                  |
| dfs.balancer.max-iteration-time=1200000            |
| dfs.balancer.max-no-move-interval=60000            |
| dfs.balancer.max-size-to-move=10737418240          |
| dfs.balancer.movedWinWidth=5400000                 |
| dfs.balancer.moverThreads=1000                     |
| dfs.block.access.key.update.interval=600           |
| dfs.block.access.token.enable=false                |
| dfs.block.access.token.lifetime=600                |
| dfs.block.access.token.protobuf.enable=false       |
| dfs.block.invalidate.limit=1000                    |
| dfs.block.misreplication.processing.limit=10000    |
| dfs.block.placement.ec.classname=org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant |
| dfs.block.replicator.classname=org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault |
| dfs.block.scanner.volume.bytes.per.second=1048576  |
| dfs.blockreport.incremental.intervalMsec=0         |
| dfs.blockreport.initialDelay=0s                    |
| dfs.blockreport.intervalMsec=21600000              |
| dfs.blockreport.split.threshold=1000000            |
| dfs.blocksize=134217728                            |
| dfs.bytes-per-checksum=512                         |
| dfs.cachereport.intervalMsec=10000                 |
| dfs.checksum.combine.mode=MD5MD5CRC                |
| dfs.checksum.type=CRC32C                           |
| dfs.client-write-packet-size=65536                 |
| dfs.client.block.write.locateFollowingBlock.initial.delay.ms=400 |
| dfs.client.block.write.locateFollowingBlock.retries=5 |
| dfs.client.block.write.replace-datanode-on-failure.best-effort=false |
| dfs.client.block.write.replace-datanode-on-failure.enable=true |
| dfs.client.block.write.replace-datanode-on-failure.min-replication=0 |
| dfs.client.block.write.replace-datanode-on-failure.policy=DEFAULT |
| dfs.client.block.write.retries=3                   |
| dfs.client.cached.conn.retry=3                     |
| dfs.client.context=default                         |
| dfs.client.datanode-restart.timeout=30s            |
| dfs.client.domain.socket.data.traffic=false        |
| dfs.client.failover.connection.retries=0           |
| dfs.client.failover.connection.retries.on.timeouts=0 |
| dfs.client.failover.max.attempts=15                |
| dfs.client.failover.proxy.provider.zhiyong-1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider |
| dfs.client.failover.random.order=false             |
| dfs.client.failover.sleep.base.millis=500          |
| dfs.client.failover.sleep.max.millis=15000         |
| dfs.client.hedged.read.threadpool.size=0           |
| dfs.client.hedged.read.threshold.millis=500        |
| dfs.client.https.keystore.resource=ssl-client.xml  |
| dfs.client.https.need-auth=false                   |
| dfs.client.key.provider.cache.expiry=864000000     |
| dfs.client.max.block.acquire.failures=3            |
| dfs.client.mmap.cache.size=256                     |
| dfs.client.mmap.cache.timeout.ms=3600000           |
| dfs.client.mmap.enabled=true                       |
| dfs.client.mmap.retry.timeout.ms=300000            |
| dfs.client.read.short.circuit.replica.stale.threshold.ms=1800000 |
| dfs.client.read.shortcircuit=false                 |
| dfs.client.read.shortcircuit.buffer.size=1048576   |
| dfs.client.read.shortcircuit.skip.checksum=false   |
| dfs.client.read.shortcircuit.streams.cache.expiry.ms=300000 |
| dfs.client.read.shortcircuit.streams.cache.size=256 |
| dfs.client.read.striped.threadpool.size=18         |
| dfs.client.retry.interval-ms.get-last-block-length=4000 |
| dfs.client.retry.max.attempts=10                   |
| dfs.client.retry.policy.enabled=false              |
| dfs.client.retry.policy.spec=10000,6,60000,10      |
| dfs.client.retry.times.get-last-block-length=3     |
| dfs.client.retry.window.base=3000                  |
| dfs.client.server-defaults.validity.period.ms=3600000 |
| dfs.client.short.circuit.replica.stale.threshold.ms=1800000 |
| dfs.client.slow.io.warning.threshold.ms=30000      |
| dfs.client.socket-timeout=900000                   |
| dfs.client.socket.send.buffer.size=0               |
| dfs.client.socketcache.capacity=16                 |
| dfs.client.socketcache.expiryMsec=3000             |
| dfs.client.test.drop.namenode.response.number=0    |
| dfs.client.use.datanode.hostname=false             |
| dfs.client.use.legacy.blockreader.local=false      |
+----------------------------------------------------+
|                        set                         |
+----------------------------------------------------+
| dfs.client.write.byte-array-manager.count-limit=2048 |
| dfs.client.write.byte-array-manager.count-reset-time-period-ms=10000 |
| dfs.client.write.byte-array-manager.count-threshold=128 |
| dfs.client.write.byte-array-manager.enabled=false  |
| dfs.client.write.exclude.nodes.cache.expiry.interval.millis=600000 |
| dfs.client.write.max-packets-in-flight=80          |
| dfs.content-summary.limit=5000                     |
| dfs.content-summary.sleep-microsec=500             |
| dfs.data.transfer.client.tcpnodelay=true           |
| dfs.data.transfer.server.tcpnodelay=true           |
| dfs.datanode.address=0.0.0.0:9866                  |
| dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction=0.75f |
| dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold=10737418240 |
| dfs.datanode.balance.bandwidthPerSec=10485760      |
| dfs.datanode.balance.max.concurrent.moves=50       |
| dfs.datanode.block-pinning.enabled=false           |
| dfs.datanode.block.id.layout.upgrade.threads=12    |
| dfs.datanode.bp-ready.timeout=20s                  |
| dfs.datanode.cache.revocation.polling.ms=500       |
| dfs.datanode.cache.revocation.timeout.ms=900000    |
| dfs.datanode.cached-dfsused.check.interval.ms=600000 |
| dfs.datanode.data.dir=/data/udp/2.0.0.0/hdfs/dfs/data |
| dfs.datanode.data.dir.perm=750                     |
| dfs.datanode.directoryscan.interval=21600s         |
| dfs.datanode.directoryscan.threads=1               |
| dfs.datanode.directoryscan.throttle.limit.ms.per.sec=1000 |
| dfs.datanode.disk.check.min.gap=15m                |
| dfs.datanode.disk.check.timeout=10m                |
| dfs.datanode.dns.interface=default                 |
| dfs.datanode.dns.nameserver=default                |
| dfs.datanode.drop.cache.behind.reads=false         |
| dfs.datanode.drop.cache.behind.writes=false        |
| dfs.datanode.du.reserved=0                         |
| dfs.datanode.du.reserved.calculator=org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReservedSpaceCalculator$ReservedSpaceCalculatorAbsolute |
| dfs.datanode.du.reserved.pct=0                     |
| dfs.datanode.ec.reconstruction.stripedread.buffer.size=65536 |
| dfs.datanode.ec.reconstruction.stripedread.timeout.millis=5000 |
| dfs.datanode.ec.reconstruction.threads=8           |
| dfs.datanode.ec.reconstruction.xmits.weight=0.5    |
| dfs.datanode.failed.volumes.tolerated=0            |
| dfs.datanode.fileio.profiling.sampling.percentage=0 |
| dfs.datanode.fsdatasetcache.max.threads.per.volume=4 |
| dfs.datanode.handler.count=50                      |
| dfs.datanode.http.address=0.0.0.0:9864             |
| dfs.datanode.http.internal-proxy.port=0            |
| dfs.datanode.https.address=0.0.0.0:9865            |
| dfs.datanode.ipc.address=0.0.0.0:9867              |
| dfs.datanode.lazywriter.interval.sec=60            |
| dfs.datanode.max.locked.memory=0                   |
| dfs.datanode.max.transfer.threads=8192             |
| dfs.datanode.metrics.logger.period.seconds=600     |
| dfs.datanode.network.counts.cache.max.size=2147483647 |
| dfs.datanode.oob.timeout-ms=1500,0,0,0             |
| dfs.datanode.outliers.report.interval=30m          |
| dfs.datanode.peer.stats.enabled=false              |
| dfs.datanode.readahead.bytes=4194304               |
| dfs.datanode.restart.replica.expiration=50         |
| dfs.datanode.scan.period.hours=504                 |
| dfs.datanode.shared.file.descriptor.paths=/dev/shm,/tmp |
| dfs.datanode.slow.io.warning.threshold.ms=300      |
| dfs.datanode.socket.reuse.keepalive=4000           |
| dfs.datanode.socket.write.timeout=480000           |
| dfs.datanode.sync.behind.writes=false              |
| dfs.datanode.sync.behind.writes.in.background=false |
| dfs.datanode.transfer.socket.recv.buffer.size=0    |
| dfs.datanode.transfer.socket.send.buffer.size=0    |
| dfs.datanode.transferTo.allowed=true               |
| dfs.datanode.use.datanode.hostname=false           |
| dfs.default.chunk.view.size=32768                  |
| dfs.disk.balancer.block.tolerance.percent=5        |
| dfs.disk.balancer.enabled=true                     |
| dfs.disk.balancer.max.disk.errors=5                |
| dfs.disk.balancer.max.disk.throughputInMBperSec=50 |
| dfs.disk.balancer.plan.threshold.percent=2         |
| dfs.disk.balancer.plan.valid.interval=1d           |
| dfs.domain.socket.disable.interval.seconds=600     |
| dfs.edit.log.transfer.bandwidthPerSec=0            |
| dfs.edit.log.transfer.timeout=30000                |
| dfs.encrypt.data.transfer=false                    |
| dfs.encrypt.data.transfer.cipher.key.bitlength=128 |
| dfs.ha.automatic-failover.enabled=true             |
| dfs.ha.fencing.methods=sshfence(hadoop:22)         |
| dfs.ha.fencing.ssh.connect-timeout=30000           |
| dfs.ha.fencing.ssh.private-key-files=/home/hadoop/.ssh/id_rsa |
| dfs.ha.log-roll.period=120s                        |
| dfs.ha.namenodes.zhiyong-1=nn1,nn2                 |
| dfs.ha.standby.checkpoints=true                    |
| dfs.ha.tail-edits.in-progress=false                |
| dfs.ha.tail-edits.namenode-retries=3               |
| dfs.ha.tail-edits.period=60s                       |
| dfs.ha.tail-edits.rolledits.timeout=60             |
| dfs.ha.zkfc.nn.http.timeout.ms=20000               |
| dfs.ha.zkfc.port=8019                              |
| dfs.heartbeat.interval=3s                          |
| dfs.hosts.exclude=/srv/udp/2.0.0.0/hdfs/etc/hadoop/excludes |
| dfs.http.client.failover.max.attempts=15           |
| dfs.http.client.failover.sleep.base.millis=500     |
| dfs.http.client.failover.sleep.max.millis=15000    |
| dfs.http.client.retry.max.attempts=10              |
| dfs.http.client.retry.policy.enabled=false         |
+----------------------------------------------------+
|                        set                         |
+----------------------------------------------------+
| dfs.http.client.retry.policy.spec=10000,6,60000,10 |
| dfs.http.policy=HTTP_ONLY                          |
| dfs.https.server.keystore.resource=ssl-server.xml  |
| dfs.image.compress=true                            |
| dfs.image.compression.codec=org.apache.hadoop.io.compress.DefaultCodec |
| dfs.image.transfer-bootstrap-standby.bandwidthPerSec=0 |
| dfs.image.transfer.bandwidthPerSec=0               |
| dfs.image.transfer.chunksize=65536                 |
| dfs.image.transfer.timeout=60000                   |
| dfs.journalnode.edits.dir=/data/udp/2.0.0.0/hdfs/jnData |
| dfs.journalnode.enable.sync=true                   |
| dfs.journalnode.http-address=0.0.0.0:8480          |
| dfs.journalnode.https-address=0.0.0.0:8481         |
| dfs.journalnode.rpc-address=0.0.0.0:8485           |
| dfs.journalnode.sync.interval=120000               |
| dfs.lock.suppress.warning.interval=10s             |
| dfs.ls.limit=1000                                  |
| dfs.mover.address=0.0.0.0:0                        |
| dfs.mover.keytab.enabled=false                     |
| dfs.mover.max-no-move-interval=60000               |
| dfs.mover.movedWinWidth=5400000                    |
| dfs.mover.moverThreads=1000                        |
| dfs.mover.retry.max.attempts=10                    |
| dfs.namenode.accesstime.precision=3600000          |
| dfs.namenode.acls.enabled=false                    |
| dfs.namenode.audit.log.async=false                 |
| dfs.namenode.audit.log.token.tracking.id=false     |
| dfs.namenode.audit.loggers=default                 |
| dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction=0.6 |
| dfs.namenode.avoid.read.stale.datanode=false       |
| dfs.namenode.avoid.write.stale.datanode=false      |
| dfs.namenode.backup.address=0.0.0.0:50100          |
| dfs.namenode.backup.http-address=0.0.0.0:50105     |
| dfs.namenode.block-placement-policy.default.prefer-local-node=true |
| dfs.namenode.blocks.per.postponedblocks.rescan=10000 |
| dfs.namenode.checkpoint.check.period=60s           |
| dfs.namenode.checkpoint.check.quiet-multiplier=1.5 |
| dfs.namenode.checkpoint.dir=file://${hadoop.tmp.dir}/dfs/namesecondary |
| dfs.namenode.checkpoint.edits.dir=${dfs.namenode.checkpoint.dir} |
| dfs.namenode.checkpoint.max-retries=3              |
| dfs.namenode.checkpoint.period=3600s               |
| dfs.namenode.checkpoint.txns=1000000               |
| dfs.namenode.datanode.registration.ip-hostname-check=true |
| dfs.namenode.decommission.blocks.per.interval=500000 |
| dfs.namenode.decommission.interval=30s             |
| dfs.namenode.decommission.max.concurrent.tracked.nodes=100 |
| dfs.namenode.delegation.key.update-interval=86400000 |
| dfs.namenode.delegation.token.always-use=false     |
| dfs.namenode.delegation.token.max-lifetime=604800000 |
| dfs.namenode.delegation.token.renew-interval=86400000 |
| dfs.namenode.ec.policies.max.cellsize=4194304      |
| dfs.namenode.ec.system.default.policy=RS-6-3-1024k |
| dfs.namenode.edekcacheloader.initial.delay.ms=3000 |
| dfs.namenode.edekcacheloader.interval.ms=1000      |
| dfs.namenode.edit.log.autoroll.check.interval.ms=300000 |
| dfs.namenode.edit.log.autoroll.multiplier.threshold=2.0 |
| dfs.namenode.edits.asynclogging=true               |
| dfs.namenode.edits.dir=${dfs.namenode.name.dir}   |
| dfs.namenode.edits.dir.minimum=1                   |
| dfs.namenode.edits.journal-plugin.qjournal=org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager |
| dfs.namenode.edits.noeditlogchannelflush=false     |
| dfs.namenode.enable.retrycache=true                |
| dfs.namenode.file.close.num-committed-allowed=0    |
| dfs.namenode.fs-limits.max-blocks-per-file=10000   |
| dfs.namenode.fs-limits.max-component-length=255    |
| dfs.namenode.fs-limits.max-directory-items=1048576 |
| dfs.namenode.fs-limits.max-xattr-size=16384        |
| dfs.namenode.fs-limits.max-xattrs-per-inode=32     |
| dfs.namenode.fs-limits.min-block-size=1048576      |
| dfs.namenode.fslock.fair=true                      |
| dfs.namenode.full.block.report.lease.length.ms=300000 |
| dfs.namenode.handler.count=50                      |
| dfs.namenode.heartbeat.recheck-interval=45000      |
| dfs.namenode.hosts.provider.classname=org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager |
| dfs.namenode.http-address=0.0.0.0:9870             |
| dfs.namenode.http-address.zhiyong-1.nn1=zhiyong2:50070 |
| dfs.namenode.http-address.zhiyong-1.nn2=zhiyong3:50070 |
| dfs.namenode.https-address=0.0.0.0:9871            |
| dfs.namenode.inotify.max.events.per.rpc=1000       |
| dfs.namenode.invalidate.work.pct.per.iteration=0.32f |
| dfs.namenode.kerberos.internal.spnego.principal=${dfs.web.authentication.kerberos.principal} |
| dfs.namenode.kerberos.principal.pattern=*          |
| dfs.namenode.lazypersist.file.scrub.interval.sec=300 |
| dfs.namenode.lease-recheck-interval-ms=2000        |
| dfs.namenode.lifeline.handler.ratio=0.10           |
| dfs.namenode.list.cache.directives.num.responses=100 |
| dfs.namenode.list.cache.pools.num.responses=100    |
| dfs.namenode.list.encryption.zones.num.responses=100 |
| dfs.namenode.list.openfiles.num.responses=1000     |
| dfs.namenode.list.reencryption.status.num.responses=100 |
| dfs.namenode.lock.detailed-metrics.enabled=false   |
| dfs.namenode.maintenance.replication.min=1         |
| dfs.namenode.max-lock-hold-to-release-lease-ms=25  |
| dfs.namenode.max-num-blocks-to-log=1000            |
| dfs.namenode.max.extra.edits.segments.retained=10000 |
| dfs.namenode.max.full.block.report.leases=6        |
| dfs.namenode.max.objects=0                         |
| dfs.namenode.max.op.size=52428800                  |
| dfs.namenode.metrics.logger.period.seconds=600     |
| dfs.namenode.missing.checkpoint.periods.before.shutdown=3 |
+----------------------------------------------------+
|                        set                         |
+----------------------------------------------------+
| dfs.namenode.name.cache.threshold=10               |
| dfs.namenode.name.dir=/data/udp/2.0.0.0/hdfs/dfs/nn |
| dfs.namenode.name.dir.restore=false                |
| dfs.namenode.num.checkpoints.retained=12           |
| dfs.namenode.num.extra.edits.retained=1000000      |
| dfs.namenode.path.based.cache.block.map.allocation.percent=0.25 |
| dfs.namenode.path.based.cache.refresh.interval.ms=30000 |
| dfs.namenode.path.based.cache.retry.interval.ms=30000 |
| dfs.namenode.posix.acl.inheritance.enabled…       |
+----------------------------------------------------+

(The `set -v` listing is truncated at this point in the original capture; the remaining parameters were not recorded.)

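A dump this long is easier to work with as key/value pairs than as an ASCII table. A small sketch of a parser for the row format above (`| key=value |`, with border and repeated header rows interleaved); `parse_set_v` and the sample lines are illustrative names, not part of beeline:

```python
def parse_set_v(lines):
    """Parse the key=value rows of beeline's `set -v` table output.

    Skips border rows (+---+) and the repeated '| set |' header rows,
    and splits each remaining row on the first '=' only, since values
    such as retry policy specs may themselves contain '='.
    """
    params = {}
    for line in lines:
        line = line.strip()
        if not (line.startswith("|") and line.endswith("|")):
            continue  # border row or text outside the table
        cell = line.strip("|").strip()
        if cell == "set" or "=" not in cell:
            continue  # repeated header row
        key, _, value = cell.partition("=")
        params[key.strip()] = value.strip()
    return params

# Two rows copied from the dump above.
sample = [
    "+----------------------------------------------------+",
    "|                        set                         |",
    "+----------------------------------------------------+",
    "| dfs.blocksize=134217728                            |",
    "| dfs.datanode.address=0.0.0.0:9866                  |",
]
print(parse_set_v(sample)["dfs.blocksize"])  # → 134217728
```

Feeding the whole captured dump through this yields a dict that can be diffed against another cluster's settings.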