Big Data: Hadoop KMS Installation and Detailed Configuration
Posted by 笑起来贼好看
Introduction
Hadoop KMS is a cryptographic key management server based on Hadoop's KeyProvider API. It provides client and server components that communicate over HTTP using a REST API.
The client is a KeyProvider implementation that interacts with the KMS through the KMS HTTP REST API.
The KMS and its clients have built-in security: they support HTTP SPNEGO Kerberos authentication and HTTPS secure transport.
The KMS is a Java Jetty web application.
Combined with Hadoop, the KMS enables transparent data encryption for HDFS clients as well as fine-grained access control.
This article uses Hadoop 3.3.1 as an example to show how to configure and start the KMS service and how to store HDFS files with transparent encryption.
Install and Deploy Hadoop KMS
Generate a key with keytool
keytool -genkey -alias 'sandbox' -keystore /root/kms.keystore -dname "CN=localhost, OU=localhost, O=localhost, L=SH, ST=SH, C=CN" -keypass 123456 -storepass 123456 -validity 180
The keystore file name and path must match the hadoop.kms.key.provider.uri configured below; jceks://file@/${user.home}/kms.keystore resolves to /root/kms.keystore when the KMS runs as root.
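You can optionally verify that the keystore and its key entry were created, using the same path and store password as above:
keytool -list -keystore /root/kms.keystore -storepass 123456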
Place the keystore password file in the Hadoop configuration directory
cd $HADOOP_HOME/etc/hadoop
echo "123456" > kms.keystore.password
Configure the KMS server side in kms-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.kms.http.port</name>
<value>9600</value>
</property>
<property>
<name>hadoop.kms.key.provider.uri</name>
<value>jceks://file@/${user.home}/kms.keystore</value>
</property>
<property>
<name>hadoop.security.keystore.java-keystore-provider.password-file</name>
<value>kms.keystore.password</value>
</property>
<!-- KMS cache -->
<property>
<name>hadoop.kms.cache.enable</name>
<value>true</value>
</property>
<property>
<name>hadoop.kms.cache.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>hadoop.kms.current.key.cache.timeout.ms</name>
<value>30000</value>
</property>
<property>
<name>hadoop.security.kms.encrypted.key.cache.size</name>
<value>500</value>
</property>
<property>
<name>hadoop.security.kms.encrypted.key.cache.low.watermark</name>
<value>0.3</value>
</property>
<property>
<name>hadoop.security.kms.encrypted.key.cache.num.fill.threads</name>
<value>2</value>
</property>
<property>
<name>hadoop.security.kms.encrypted.key.cache.expiry</name>
<value>43200000</value>
</property>
<!-- KMS aggregated audit log -->
<property>
<name>hadoop.kms.aggregation.delay.ms</name>
<value>10000</value>
</property>
<!-- KMS proxy user configuration -->
<property>
<name>hadoop.kms.proxyuser.#USER#.users</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.hosts</name>
<value>*</value>
</property>
<!-- KMS delegation token configuration -->
<property>
<name>hadoop.kms.authentication.delegation-token.update-interval.sec</name>
<value>86400</value>
<description>
How often the master key is rotated, in seconds. Default value 1 day.
</description>
</property>
<property>
<name>hadoop.kms.authentication.delegation-token.max-lifetime.sec</name>
<value>604800</value>
<description>
Maximum lifetime of a delegation token, in seconds. Default value 7 days.
</description>
</property>
<property>
<name>hadoop.kms.authentication.delegation-token.renew-interval.sec</name>
<value>86400</value>
<description>
Renewal interval of a delegation token, in seconds. Default value 1 day.
</description>
</property>
<property>
<name>hadoop.kms.authentication.delegation-token.removal-scan-interval.sec</name>
<value>3600</value>
<description>
Scan interval to remove expired delegation tokens.
</description>
</property>
</configuration>
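The #USER# entries above are placeholders. As a concrete, hypothetical example, to let a service user named hive impersonate any user from any host, you would configure:
<property>
<name>hadoop.kms.proxyuser.hive.users</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.hive.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.hive.hosts</name>
<value>*</value>
</property>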
Configure the KMS client side by adding the following to core-site.xml (bp1 is the KMS host in this example; the port matches hadoop.kms.http.port above)
<!-- kms -->
<property>
<name>hadoop.security.key.provider.path</name>
<value>kms://http@bp1:9600/kms</value>
</property>
<property>
<name>hadoop.security.kms.client.encrypted.key.cache.size</name>
<value>500</value>
</property>
<property>
<name>hadoop.security.kms.client.encrypted.key.cache.low-watermark</name>
<value>0.3</value>
</property>
<property>
<name>hadoop.security.kms.client.encrypted.key.cache.num.refill.threads</name>
<value>2</value>
</property>
<property>
<name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
<value>43200000</value>
</property>
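Once the KMS has been started (next step), you can check the client configuration from any node by listing keys against the provider explicitly, using the same URI as above:
hadoop key list -provider kms://http@bp1:9600/kms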
Start the KMS
cd $HADOOP_HOME
sbin/kms.sh start (deprecated)
sbin/kms.sh status (deprecated)
hadoop --daemon start kms
hadoop --daemon status kms
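A quick way to confirm the KMS web endpoint is up is to query its REST API. This sketch assumes the default pseudo (simple) authentication; root is just an example user name:
curl "http://bp1:9600/kms/v1/keys/names?user.name=root"
It should return a JSON list of key names, which will be empty until a key is created.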
Restart Hadoop
cd $HADOOP_HOME
sbin/stop-all.sh
sbin/start-all.sh
Test the KMS
hadoop key create sandbox
hadoop key list
hadoop fs -mkdir /aaaaa
hdfs crypto -createZone -keyName sandbox -path /aaaaa
hdfs crypto -listZones
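To see that transparent encryption is actually in effect, write a test file into the encryption zone (the file name below is just an example):
echo "hello kms" > /tmp/test.txt
hadoop fs -put /tmp/test.txt /aaaaa/
hadoop fs -cat /aaaaa/test.txt
hdfs crypto -getFileEncryptionInfo -path /aaaaa/test.txt
An authorized client reads the plaintext back with -cat, while -getFileEncryptionInfo prints the file's encryption metadata (key name and encrypted data encryption key), showing that the data is stored encrypted in HDFS.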