Introduction
Hadoop KMS is a cryptographic key management server built on Hadoop's KeyProvider API. It provides a client and a server component that communicate over HTTP using a REST API.
The client is a KeyProvider implementation that talks to the KMS through the KMS HTTP REST API.
The KMS and its client have built-in security: they support HTTP SPNEGO Kerberos authentication and HTTPS transport.
The KMS itself is a Java web application running on Jetty.
Combined with Hadoop, the KMS enables transparent data encryption for HDFS clients as well as fine-grained access control.
This article uses Hadoop 3.3.1 as an example to configure and start the KMS service and to demonstrate encrypted HDFS file transfer.
Installing and deploying Hadoop KMS
Generate a key with keytool
```shell
keytool -genkey -alias 'sandbox' -keystore /root/kms.jks \
  -dname "CN=localhost, OU=localhost, O=localhost, L=SH, ST=SH, C=CN" \
  -keypass 123456 -storepass 123456 -validity 180
```

Note that the key provider URI configured below (`jceks://file@/${user.home}/kms.keystore`) expects a keystore file named `kms.keystore` in the KMS user's home directory, so make sure the `-keystore` path used here matches the URI (or adjust the URI to point at the file you actually generate).
Store the keystore password under the Hadoop configuration directory
```shell
cd ${HADOOP_HOME}/etc/hadoop
echo "123456" > kms.keystore.password
```
Configure the KMS server: kms-site.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>

  <property>
    <name>hadoop.kms.http.port</name>
    <value>9600</value>
  </property>

  <property>
    <name>hadoop.kms.key.provider.uri</name>
    <value>jceks://file@/${user.home}/kms.keystore</value>
  </property>

  <property>
    <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
    <value>kms.keystore.password</value>
  </property>

  <!-- KMS cache -->
  <property>
    <name>hadoop.kms.cache.enable</name>
    <value>true</value>
  </property>

  <property>
    <name>hadoop.kms.cache.timeout.ms</name>
    <value>600000</value>
  </property>

  <property>
    <name>hadoop.kms.current.key.cache.timeout.ms</name>
    <value>30000</value>
  </property>

  <property>
    <name>hadoop.security.kms.encrypted.key.cache.size</name>
    <value>500</value>
  </property>

  <property>
    <name>hadoop.security.kms.encrypted.key.cache.low.watermark</name>
    <value>0.3</value>
  </property>

  <property>
    <name>hadoop.security.kms.encrypted.key.cache.num.fill.threads</name>
    <value>2</value>
  </property>

  <property>
    <name>hadoop.security.kms.encrypted.key.cache.expiry</name>
    <value>43200000</value>
  </property>

  <!-- KMS aggregated audit log -->
  <property>
    <name>hadoop.kms.aggregation.delay.ms</name>
    <value>10000</value>
  </property>

  <!-- KMS proxy user configuration -->
  <property>
    <name>hadoop.kms.proxyuser.#USER#.users</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.kms.proxyuser.#USER#.groups</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.kms.proxyuser.#USER#.hosts</name>
    <value>*</value>
  </property>

  <!-- KMS delegation token configuration -->
  <property>
    <name>hadoop.kms.authentication.delegation-token.update-interval.sec</name>
    <value>86400</value>
    <description>
      How often the master key is rotated, in seconds. Default value 1 day.
    </description>
  </property>

  <property>
    <name>hadoop.kms.authentication.delegation-token.max-lifetime.sec</name>
    <value>604800</value>
    <description>
      Maximum lifetime of a delegation token, in seconds. Default value 7 days.
    </description>
  </property>

  <property>
    <name>hadoop.kms.authentication.delegation-token.renew-interval.sec</name>
    <value>86400</value>
    <description>
      Renewal interval of a delegation token, in seconds. Default value 1 day.
    </description>
  </property>

  <property>
    <name>hadoop.kms.authentication.delegation-token.removal-scan-interval.sec</name>
    <value>3600</value>
    <description>
      Scan interval to remove expired delegation tokens.
    </description>
  </property>

</configuration>
```
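The `#USER#` token in the proxy user properties is a placeholder, not literal configuration: replace it with the name of the Unix user that will call the KMS on behalf of other users. For example, assuming a hypothetical proxy user named `hadoop`, the three properties would read:

```xml
<property>
  <name>hadoop.kms.proxyuser.hadoop.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
```

Using `*` allows that user to proxy for anyone from any host; in production you would normally narrow these values.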
Configure the KMS client: add to core-site.xml
```xml
<!-- kms -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@bp1:9600/kms</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.size</name>
  <value>500</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.low-watermark</name>
  <value>0.3</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.num.refill.threads</name>
  <value>2</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
  <value>43200000</value>
</property>
```
Start the KMS
```shell
cd ${HADOOP_HOME}

# Deprecated:
sbin/kms.sh start
sbin/kms.sh status

# Preferred:
hadoop --daemon start kms
hadoop --daemon status kms
```
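Once the daemon is up, you can sanity-check the service over its REST interface. A minimal sketch, assuming the KMS is running locally on the port configured above (9600) and that simple (non-Kerberos) authentication is in use, so the caller can be passed via the `user.name` query parameter:

```shell
# Assumes a running KMS on localhost:9600; host, port, and user are
# taken from this article's example configuration.
curl -s "http://localhost:9600/kms/v1/keys/names?user.name=root"
# The response is a JSON array of key names visible to the caller
# (empty until a key such as "sandbox" has been created).
```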
Restart Hadoop
```shell
cd ${HADOOP_HOME}
sbin/stop-all.sh
sbin/start-all.sh
```
Test the KMS
```shell
hadoop key create sandbox
hadoop key list
hadoop fs -mkdir /aaaaa
hdfs crypto -createZone -keyName sandbox -path /aaaaa
hdfs crypto -listZones
```
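With the encryption zone in place, writes into /aaaaa are encrypted transparently and reads are decrypted transparently for authorized clients. A short end-to-end check, assuming the commands above succeeded and the current user can write to HDFS (file names here are illustrative):

```shell
# Put a file into the encryption zone and read it back; both operations
# go through the KMS-backed encrypt/decrypt path transparently.
echo "hello kms" > /tmp/kms-demo.txt
hadoop fs -put /tmp/kms-demo.txt /aaaaa/
hadoop fs -cat /aaaaa/kms-demo.txt

# Inspect the per-file encryption metadata (key name, EDEK, key version).
hdfs crypto -getFileEncryptionInfo -path /aaaaa/kms-demo.txt
```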