HortonWorks HDP 2.6: NameNode Install issue via Ambari

Posted: 2017-05-09 08:04:18

Question:

I tried to install HDP v2.6 with Ambari on 3 nodes (node0.local, node1.local, node2.local), but during the installation the following NameNode failure occurred on node0:

"OSError: [Errno 1] Operation not allowed: '/boot/efi/hadoop/hdfs/namenode'"

Note: in the "Assign Slaves and Clients" step, the [All] option was selected for both DataNode and NodeManager.

Thanks.

Ambari screenshots

Logs:

---------------------------------------------------------
- [stderr: /var/lib/ambari-agent/data/errors-1185.txt]

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 424, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 85, in install
    self.configure(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 117, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 92, in configure
    namenode(action="configure", hdfs_binary=hdfs_binary, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 98, in namenode
    create_name_dirs(params.dfs_name_dir)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 282, in create_name_dirs
    cd_access="a",
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 114, in __new__
    cls(names_list.pop(0), env, provider, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 199, in action_create
    recursion_follow_links=self.resource.recursion_follow_links, safemode_folders=self.resource.safemode_folders)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 75, in _ensure_metadata
    sudo.chown(path, user_entity, group_entity)
  File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 39, in chown
    return os.chown(path, uid, gid)
OSError: [Errno 1] Operation not permitted: '/boot/efi/hadoop/hdfs/namenode'

---------------------------------------------------------
- [stdout: /var/lib/ambari-agent/data/output-1185.txt]

2017-05-09 00:05:01,564 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-09 00:05:01,572 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-05-09 00:05:01,573 - Group['livy'] {}
2017-05-09 00:05:01,574 - Group['spark'] {}
2017-05-09 00:05:01,574 - Group['zeppelin'] {}
2017-05-09 00:05:01,574 - Group['hadoop'] {}
2017-05-09 00:05:01,575 - Group['users'] {}
2017-05-09 00:05:01,575 - Group['knox'] {}
2017-05-09 00:05:01,575 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,576 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,576 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,577 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,577 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,578 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,578 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,579 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,579 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,580 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop']}
2017-05-09 00:05:01,580 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,581 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,582 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,582 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,583 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,583 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,584 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,584 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,585 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,585 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,586 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,586 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,587 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,588 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,588 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-09 00:05:01,590 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-09 00:05:01,596 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-09 00:05:01,597 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-09 00:05:01,598 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-09 00:05:01,599 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-09 00:05:01,607 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-09 00:05:01,608 - Group['hdfs'] {}
2017-05-09 00:05:01,608 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-09 00:05:01,609 - FS Type: 
2017-05-09 00:05:01,609 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-09 00:05:01,624 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-09 00:05:01,625 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-09 00:05:01,640 - Initializing 2 repositories
2017-05-09 00:05:01,640 - Repository['HDP-2.6'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-09 00:05:01,647 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-09 00:05:01,648 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-09 00:05:01,651 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-09 00:05:01,652 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:01,734 - Skipping installation of existing package unzip
2017-05-09 00:05:01,735 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:01,743 - Skipping installation of existing package curl
2017-05-09 00:05:01,743 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:01,752 - Skipping installation of existing package hdp-select
2017-05-09 00:05:01,992 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-09 00:05:01,993 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-09 00:05:02,015 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-09 00:05:02,028 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-05-09 00:05:02,081 - checked_call returned (0, '2.6.0.3-8', '')
2017-05-09 00:05:02,094 - Package['hadoop_2_6_0_3_8'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,211 - Skipping installation of existing package hadoop_2_6_0_3_8
2017-05-09 00:05:02,213 - Package['hadoop_2_6_0_3_8-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,224 - Skipping installation of existing package hadoop_2_6_0_3_8-client
2017-05-09 00:05:02,225 - Package['snappy'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,235 - Skipping installation of existing package snappy
2017-05-09 00:05:02,236 - Package['snappy-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,247 - Skipping installation of existing package snappy-devel
2017-05-09 00:05:02,248 - Package['hadoop_2_6_0_3_8-libhdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,257 - Skipping installation of existing package hadoop_2_6_0_3_8-libhdfs
2017-05-09 00:05:02,259 - Package['libtirpc-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,268 - Skipping installation of existing package libtirpc-devel
2017-05-09 00:05:02,270 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-05-09 00:05:02,275 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-05-09 00:05:02,275 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,286 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-05-09 00:05:02,286 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,295 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,303 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-05-09 00:05:02,303 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,309 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-05-09 00:05:02,310 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,318 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-05-09 00:05:02,318 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,324 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,332 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-05-09 00:05:02,332 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,339 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-05-09 00:05:02,346 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-05-09 00:05:02,347 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,390 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-05-09 00:05:02,398 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-05-09 00:05:02,398 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,421 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-05-09 00:05:02,425 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-05-09 00:05:02,425 - Directory['/boot/efi/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-05-09 00:05:02,425 - Creating directory Directory['/boot/efi/hadoop/hdfs/namenode'] since it doesn't exist.
2017-05-09 00:05:02,426 - Changing owner for /boot/efi/hadoop/hdfs/namenode from 0 to hdfs
2017-05-09 00:05:02,426 - Changing group for /boot/efi/hadoop/hdfs/namenode from 0 to hadoop

Command failed after 1 tries
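
Distilled, the log shows Ambari creating both configured NameNode directories, succeeding on /hadoop/hdfs/namenode, creating /boot/efi/hadoop/hdfs/namenode, and then dying while handing that directory over to hdfs:hadoop. Judging from the last traceback frame (sudo.chown in resource_management), the failing step boils down to roughly the following standard-library call; the explicit uid/gid lookups are added here for illustration:

import grp
import os
import pwd

# Roughly the operation behind the last traceback frame: resolve the
# configured owner and group, then chown the freshly created directory.
uid = pwd.getpwnam("hdfs").pw_uid
gid = grp.getgrnam("hadoop").gr_gid

# /boot/efi is normally a VFAT EFI system partition, which stores no Unix
# owner/group metadata, so the kernel rejects the call with EPERM:
# OSError: [Errno 1] Operation not permitted
os.chown("/boot/efi/hadoop/hdfs/namenode", uid, gid)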


Answer 1:

The script tries to change the owner of the directory with a chown command, and that is not permitted there. /boot/efi/ is a very strange folder for the NameNode directory; it should live on a regular root-level path such as /hadoop/hdfs/namenode. The /boot folder is a heavily restricted area on Linux.
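
A quick way to confirm this before rerunning the install is to check which mount point each configured directory would land on. The sketch below is illustrative and not part of Ambari; the name_dirs value is a hypothetical stand-in for what dfs.namenode.name.dir contained on this cluster:

import os

# Hypothetical stand-in for the dfs.namenode.name.dir value on this cluster.
name_dirs = "/hadoop/hdfs/namenode,/boot/efi/hadoop/hdfs/namenode"

def mount_point(path):
    # Walk upward until we reach the mount point that would hold this path.
    path = os.path.abspath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

for d in name_dirs.split(","):
    mp = mount_point(d)
    print("%s -> mount point %s" % (d, mp))
    if mp.startswith("/boot"):
        print("  WARNING: drop this entry; chown will fail on this filesystem")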

Comments:

Indeed, during the Ambari install (in the "Customize Services" step), the folder '/boot/efi/hadoop/hdfs/namenode' was proposed in the default directory lists for both the NameNode and the DataNodes. It is a bit strange that such a path is suggested by default! Thanks.
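
Ambari appears to derive those suggestions from the mount points it detects on each host, which is how an EFI partition mounted at /boot/efi can end up in the proposed list. The practical fix is to prune the lists in "Customize Services" before deploying: remove every /boot/efi entry from dfs.namenode.name.dir and dfs.datanode.data.dir. A minimal sketch of that cleanup, with an illustrative starting value:

# Illustrative cleanup of the comma-separated HDFS directory properties;
# the input is a hypothetical example of what Ambari proposed by default.
def drop_boot_dirs(dir_list):
    # Keep only entries that do not live under /boot.
    keep = [d for d in dir_list.split(",") if not d.strip().startswith("/boot")]
    return ",".join(keep)

proposed = "/hadoop/hdfs/namenode,/boot/efi/hadoop/hdfs/namenode"
print(drop_boot_dirs(proposed))  # -> /hadoop/hdfs/namenode

The same pruning applies to the DataNode directory list; once both properties are corrected, the install can be retried from the Ambari wizard.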
