Big Data | HDFS User Client Commands: Detailed Usage Guide
Posted 笑起来贼好看
User Commands
Overview
All HDFS commands are invoked through the bin/hdfs script. Running the script with no arguments prints the usage message:
Usage: hdfs [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
OPTIONS is none or any of:
--buildpaths attempt to add class files from build tree
--config dir Hadoop config directory
--daemon (start|status|stop) operate on a daemon
--debug turn on shell script debug mode
--help usage information
--hostnames list[,of,host,names] hosts to use in worker mode
--hosts filename list of hosts to use in worker mode
--loglevel level set the log4j level for this command
--workers turn on worker mode
SUBCOMMAND is one of:
Admin Commands:
cacheadmin configure the HDFS cache
crypto configure HDFS encryption zones
debug run a Debug Admin to execute HDFS debug commands
dfsadmin run a DFS admin client
dfsrouteradmin manage Router-based federation
ec run a HDFS ErasureCoding CLI
fsck run a DFS filesystem checking utility
haadmin run a DFS HA admin client
jmxget get JMX exported values from NameNode or DataNode.
oev apply the offline edits viewer to an edits file
oiv apply the offline fsimage viewer to an fsimage
oiv_legacy apply the offline fsimage viewer to a legacy fsimage
storagepolicies list/get/set/satisfyStoragePolicy block storage policies
Client Commands:
classpath prints the class path needed to get the hadoop jar and the required libraries
dfs run a filesystem command on the file system
envvars display computed Hadoop environment variables
fetchdt fetch a delegation token from the NameNode
getconf get config values from configuration
groups get the groups which users belong to
lsSnapshottableDir list all snapshottable dirs owned by the current user
snapshotDiff diff two snapshots of a directory or diff the current directory contents with a snapshot
version print the version
Daemon Commands:
balancer run a cluster balancing utility
datanode run a DFS datanode
dfsrouter run the DFS router
diskbalancer Distributes data evenly among disks on a given node
httpfs run HttpFS server, the HDFS HTTP Gateway
journalnode run the DFS journalnode
mover run a utility to move block replicas across storage types
namenode run the DFS namenode
nfs3 run an NFS version 3 gateway
portmap run a portmap service
secondarynamenode run the DFS secondary namenode
sps run external storagepolicysatisfier
zkfc run the ZK Failover Controller daemon
SUBCOMMAND may print help when invoked w/o parameters or with -h.
Command Details
classpath
This command prints the Hadoop classpath.
usage
classpath [--glob|--jar <path>|-h|--help] :
Prints the classpath needed to get the Hadoop jar and the required
libraries.
Options:
--glob expand wildcards
--jar <path> write classpath as manifest in jar named <path>
-h, --help print help
This command is implemented by the org.apache.hadoop.util.Classpath class.
[root@spark-31 hadoop-3.3.1]# bin/hdfs classpath
/data/apps/hadoop-3.3.1/etc/hadoop:/data/apps/hadoop-3.3.1/share/hadoop/common/lib/*:/data/apps/hadoop-3.3.1/share/hadoop/common/*:/data/apps/hadoop-3.3.1/share/hadoop/hdfs:/data/apps/hadoop-3.3.1/share/hadoop/hdfs/lib/*:/data/apps/hadoop-3.3.1/share/hadoop/hdfs/*:/data/apps/hadoop-3.3.1/share/hadoop/mapreduce/*:/data/apps/hadoop-3.3.1/share/hadoop/yarn:/data/apps/hadoop-3.3.1/share/hadoop/yarn/lib/*:/data/apps/hadoop-3.3.1/share/hadoop/yarn/*
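The output is a single colon-separated string, so it can be passed directly to java -cp or split apart for inspection. A minimal sketch, using a shortened stand-in for the real output above:

```shell
# Sketch: split a classpath string into one entry per line and count the entries.
# CP here is a shortened stand-in for the real `bin/hdfs classpath` output.
CP="/data/apps/hadoop-3.3.1/etc/hadoop:/data/apps/hadoop-3.3.1/share/hadoop/common/lib/*:/data/apps/hadoop-3.3.1/share/hadoop/common/*"
echo "$CP" | tr ':' '\n'           # one entry per line
echo "$CP" | tr ':' '\n' | wc -l   # number of entries (3 for this stand-in)
```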
dfs
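As the usage listing says, dfs runs a filesystem (FsShell) command against the configured file system. A few everyday operations (paths and file names are illustrative, and these need a running cluster):

```shell
bin/hdfs dfs -mkdir -p /user/alice          # create a directory, including parents
bin/hdfs dfs -put data.txt /user/alice/     # upload a local file
bin/hdfs dfs -ls /user/alice                # list directory contents
bin/hdfs dfs -cat /user/alice/data.txt      # print a file's contents
bin/hdfs dfs -rm -r /user/alice             # remove a directory recursively
```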
envvars
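envvars displays the Hadoop environment variables as computed by the launcher scripts, which is handy when debugging which JAVA_HOME or configuration directory a command actually picked up. The exact set of variables printed depends on the Hadoop version:

```shell
bin/hdfs envvars
# Prints lines such as JAVA_HOME='...' and HADOOP_CONF_DIR='...'
```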
fetchdt
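fetchdt fetches a delegation token from the NameNode and stores it in a local file, typically for later use on secured (Kerberos) clusters. A hedged sketch; the renewer name and token path are illustrative:

```shell
# Fetch a token, naming the user allowed to renew it, then print it back.
bin/hdfs fetchdt --renewer alice /tmp/alice.token
bin/hdfs fetchdt --print /tmp/alice.token
```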
fsck
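fsck runs the DFS filesystem checking utility, reporting on blocks, replication levels, and corrupt or missing files. Unlike a native filesystem fsck, it only reports and does not repair anything. For example:

```shell
# Check the whole namespace, showing files, their blocks, and the block locations.
bin/hdfs fsck / -files -blocks -locations
```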
getconf
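getconf reads values out of the effective Hadoop configuration without starting any daemon, which makes it useful in scripts. For example:

```shell
bin/hdfs getconf -namenodes                 # list the NameNode host names
bin/hdfs getconf -confKey dfs.replication   # print a single configuration value
```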
groups
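groups reports the group memberships of one or more users as the NameNode sees them (which may differ from the local OS view, depending on the configured group mapping). The user names below are illustrative:

```shell
bin/hdfs groups              # groups of the current user
bin/hdfs groups alice bob    # groups of specific users
```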
httpfs
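httpfs runs the HttpFS server, the HDFS HTTP gateway, which exposes the WebHDFS REST API (port 14000 by default) so that clients without the HDFS client libraries can talk plain HTTP. A hedged sketch; the host and user name are illustrative:

```shell
# Start the gateway as a daemon, then list the root directory over REST.
bin/hdfs --daemon start httpfs
curl "http://spark-31:14000/webhdfs/v1/?op=LISTSTATUS&user.name=alice"
```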
lsSnapshottableDir
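lsSnapshottableDir lists the snapshottable directories owned by the current user (a superuser sees all of them). Directories become snapshottable through dfsadmin, for example:

```shell
bin/hdfs dfsadmin -allowSnapshot /user/alice   # mark a directory as snapshottable
bin/hdfs lsSnapshottableDir                    # list snapshottable directories
```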
jmxget
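jmxget dumps JMX-exported metrics from a NameNode or DataNode. A hedged sketch; the service name below is one I believe the tool accepts, but check `bin/hdfs jmxget -h` for the exact options in your version:

```shell
bin/hdfs jmxget -service NameNode
```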
oev
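oev, the offline edits viewer, converts NameNode edit log files between the binary on-disk format and readable formats, without needing a running cluster. File names are illustrative:

```shell
bin/hdfs oev -p xml -i edits -o edits.xml      # binary edits file -> XML
bin/hdfs oev -p binary -i edits.xml -o edits   # XML back to binary
```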
oiv
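oiv, the offline fsimage viewer, inspects an fsimage checkpoint without a running NameNode. The XML processor dumps the namespace; file names are illustrative:

```shell
bin/hdfs oiv -p XML -i fsimage -o fsimage.xml
```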
oiv_legacy
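oiv_legacy handles fsimage files written in the older, pre-2.4 layout that the current oiv cannot read. A hedged sketch with illustrative file names:

```shell
bin/hdfs oiv_legacy -i fsimage -o fsimage.txt
```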
snapshotDiff
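snapshotDiff compares two snapshots of a directory, or a snapshot against the directory's current contents when "." is given as one side. A sketch with illustrative paths, assuming the directory is already snapshottable:

```shell
bin/hdfs dfs -createSnapshot /user/alice s1   # first snapshot
bin/hdfs dfs -put new.txt /user/alice/        # change something
bin/hdfs dfs -createSnapshot /user/alice s2   # second snapshot
bin/hdfs snapshotDiff /user/alice s1 s2       # report what changed between them
```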
version
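version simply prints the Hadoop version along with source revision and build information:

```shell
bin/hdfs version
```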