Ambari restart-services gives -bash: /usr/lib/hadoop/bin/hadoop-daemon.sh: Permission denied. An 11-node virtualized HDFS stack was set up using Ambari 1.3.2; setup was successful. I then wanted to shut down the cluster to add Ceph as a backend storage. On restart I get permission-denied errors on the daemons (JobTracker, DataNode, etc.). I don't …
Via grep, the only errors I see on the Ambari server are the JMX metrics ones:
05:29:17,335 ERROR [pool-3-thread-98] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:17,553 INFO [qtp936154025-338] HeartBeatHandler:113 - Received heartbeat from host, hostname=txoig-stag-elastic02.tx1.21ct.com, currentResponseId=694, receivedResponseId=694
05:29:17,554 INFO [qtp936154025-338] AgentResource:109 - Sending heartbeat response with response id 695
05:29:17,569 ERROR [pool-3-thread-90] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:18,292 INFO [qtp936154025-340] HeartBeatHandler:113 - Received heartbeat from host, hostname=txoig-stag-zoo02.tx1.21ct.com, currentResponseId=756, receivedResponseId=756
05:29:18,293 INFO [qtp936154025-340] AgentResource:109 - Sending heartbeat response with response id 757
05:29:20,780 ERROR [pool-3-thread-96] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:20,783 ERROR [pool-3-thread-22] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:20,806 ERROR [pool-3-thread-77] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:20,809 ERROR [pool-3-thread-58] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:20,820 ERROR [pool-3-thread-97] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:20,842 ERROR [pool-3-thread-67] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:20,862 ERROR [pool-3-thread-98] JMXPropertyProvider:469 - Caught exception getting JMX metrics : Connection refused
05:29:21,055 INFO [qtp936154025-340] HeartBeatHandler:113 - Received heartbeat from host, hostname=txoig-stag-hdfs03.tx1.21ct.
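Since none of the Hadoop daemons come up, the JMX "Connection refused" errors above are most likely a symptom rather than a separate problem: Ambari simply cannot poll metrics from processes that are not running. A quick sanity check on an affected worker node (a sketch assuming the stock Hadoop 1.x web/JMX ports, 50070/50075 for NameNode/DataNode and 50030/50060 for JobTracker/TaskTracker):

# list anything listening on the usual Hadoop 1.x daemon ports
netstat -tlnp | grep -E ':(50070|50075|50030|50060)' || echo "no Hadoop daemon ports listening"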
---
# ansible deployment of ceph jars and core-site.xml
#
# ceph/tasks/main.yml
# following http://ceph.com/docs/master/cephfs/hadoop/

# - name: dump
#   local_action: template src="{{ANSIBLE_21CT_HOME}}/roles/utils/templates/dump.j2" dest=~/dump.txt

- name: Ensure ambari agents are stopped
  shell: ambari-agent stop

- name: Ensure ambari-server is stopped
  shell: ambari-server stop
  when: inventory_hostname in groups['ambari_master']

- name: Make /usr/lib/hadoop
  file: dest=/usr/lib/hadoop state=directory mode=0666

- name: Fetch the hadoop-cephfs.jar
  get_url: url=http://ceph.com/download/hadoop-cephfs.jar
           dest="{{HADOOP_PREFIX}}/lib"
           mode=0755 group="{{mapred_user}}" owner="{{hdfs_user}}"

- name: Template core-site.xml
  template: src=core-site.xml dest="{{HADOOP_CONF_DIR}}/core-site.xml"
            group="{{hdfs_user}}" owner="{{hdfs_user}}"

- name: Restart ambari agents
  shell: ambari-agent start

- name: Restart ambari-server
  shell: ambari-server start
  when: inventory_hostname in groups['ambari_master']
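One thing worth re-checking after this play runs: the "Make /usr/lib/hadoop" task sets mode=0666, and a directory without the execute (search) bit cannot be traversed by a non-root user such as hdfs, which shows up as "Permission denied" on anything underneath it even when the files themselves are executable. A minimal check (plain shell; paths taken from the tasks above, assuming HADOOP_PREFIX is /usr/lib/hadoop):

# confirm the hadoop directories keep their search bits and that hdfs can reach the daemon script
stat -c '%A %U:%G %n' /usr/lib/hadoop /usr/lib/hadoop/bin /usr/lib/hadoop/lib
sudo -u hdfs test -x /usr/lib/hadoop/bin/hadoop-daemon.sh && echo ok || echo "hdfs cannot execute hadoop-daemon.sh"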
Cluster Stack Version: HDP-1.3.2
Service     Version         Description
HDFS        1.2.0.1.3.2.0   Apache Hadoop Distributed File System
MapReduce   1.2.0.1.3.2.0   Apache Hadoop Distributed Processing Framework
Nagios      3.5.0           Nagios Monitoring and Alerting system
Ganglia     3.5.0           Ganglia Metrics Collection system
ZooKeeper   3.4.5.1.3.2.0   Centralized service which provides highly reliable distributed coordination
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- copied to /etc/hadoop/conf/core-site.xml on all hdfs nodes + accumulo master -->
<!-- {{ansible_managed}} -->
<!-- Put site-specific property overrides in this file. -->
<!-- accumulo quickstart setting -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>{{fs.default.name}}</value>
  </property>
  <property>
    <name>ceph.data.pools</name>
    <value>{{ceph.data.pools}}</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
  <property>
    <name>ceph.conf.options</name>
    <value>opt1=val1,opt2=val2</value>
  </property>
  <property>
    <name>ceph.root.dir</name>
    <value>/</value>
  </property>
  <property>
    <name>ceph.mon.address</name>
    <value>{{ceph.mon.address}}</value>
  </property>
  <property>
    <name>ceph.auth.id</name>
    <value>{{ceph.auth.id}}</value>
  </property>
  <property>
    <name>ceph.auth.keyfile</name>
    <value>/do-not-know</value>
  </property>
  <property>
    <name>ceph.auth.keyring</name>
    <value>{{ceph.auth.keyring}}</value>
  </property>
  <!-- keyring value might be ceph.mon.keyring -->
  <property>
    <name>ceph.object.size</name>
    <value>67108864</value>
    <!-- 64 MB -->
  </property>
  <property>
    <name>ceph.localize.reads</name>
    <value>true</value>
  </property>
  <property>
    <name>webinterface.private.actions</name>
    <value>false</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>ipc.client.connection.maxidletime</name>
    <value>30000</value>
  </property>
  <property>
    <name>fs.checkpoint.edits.dir</name>
    <value>${fs.checkpoint.dir}</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>50</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>21600</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>360</value>
  </property>
  <property>
    <name>ipc.client.idlethreshold</name>
    <value>8000</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>0.5</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>io.serializations</name>
    <value>org.apache.hadoop.io.serializer.WritableSerialization</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/hadoop/hdfs/namesecondary</value>
  </property>
</configuration>
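Because this file is rendered from a Jinja2 template, it is easy for a stray tag or duplicate element to slip into the output. A quick way to validate what actually landed on a node (assuming xmllint from libxml2 is available and the file is written to /etc/hadoop/conf as in the play):

# confirm the rendered core-site.xml is well-formed and the Ceph settings made it in
xmllint --noout /etc/hadoop/conf/core-site.xml && echo "core-site.xml parses"
grep -A1 'ceph.mon.address' /etc/hadoop/conf/core-site.xml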
Host                                IP            Cores  RAM      Load Avg  Components
txoig-stag-accmaster.tx1.21ct.com   192.168.0.7   8      62.92GB  0.08      6 Components
txoig-stag-ambari.tx1.21ct.com      192.168.0.16  8      15.58GB  0.07      4 Components
    Ganglia Monitor
    Ganglia Server
    MapReduce Client
    Nagios Server
txoig-stag-hdfs01.tx1.21ct.com      192.168.0.4   8      62.92GB  0.05      6 Components
    DataNode
    Ganglia Monitor
    HDFS Client
    MapReduce Client
    TaskTracker
    ZooKeeper Client
txoig-stag-hdfs02.tx1.21ct.com      192.168.0.5   8      62.92GB  0.05      6 Components
txoig-stag-hdfs03.tx1.21ct.com      192.168.0.6   8      62.92GB  0.04      6 Components
txoig-stag-name.tx1.21ct.com        192.168.0.8   8      62.92GB  0.05      2 Components
txoig-stag-sname.tx1.21ct.com       192.168.0.9   8      62.92GB  0.03      3 Components
txoig-stag-zoo01.tx1.21ct.com       192.168.0.10  8      62.92GB  0.03      2 Components
txoig-stag-zoo02.tx1.21ct.com       192.168.0.11  8      62.92GB  0.03      2 Components
txoig-stag-zoo03.tx1.21ct.com       192.168.0.12  8      62.92GB  0.05      2 Components
[root@xxx-yyy-hdfs01:~]$ su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'
-bash: /usr/lib/hadoop/bin/hadoop-daemon.sh: Permission denied

I noticed everything under /usr/lib/hadoop was owned by root:root, so I ran

chown -R mapred:hadoop /usr/lib/hadoop

but I still get the permission error.

[root@xxx-yyy-hdfs01:~]$ ls -al /usr/lib/hadoop/bin
total 120K
drwxr-xr-x. 2 mapred hadoop 4.0K Feb 17 04:38 .
drw-rw-rw-. 10 mapred hadoop 4.0K Feb 17 04:38 ..
-rwxr-xr-x. 1 mapred hadoop 16K Aug 20 01:38 hadoop
-rwxr-xr-x. 1 mapred hadoop 2.6K Aug 20 01:38 hadoop-config.sh
-rwxr-xr-x. 1 hdfs hdfs 5.0K Feb 17 05:06 hadoop-daemon.sh
-rwxr-xr-x. 1 mapred hadoop 1.3K Aug 20 01:38 hadoop-daemons.sh
-rwxr-xr-x. 1 mapred hadoop 2.8K Aug 20 01:38 rcc
-rwxr-xr-x. 1 mapred hadoop 2.1K Aug 20 01:38 slaves.sh
-rwxr-xr-x. 1 mapred hadoop 1.2K Aug 20 01:38 start-all.sh
-rwxr-xr-x. 1 mapred hadoop 1.1K Aug 20 01:38 start-balancer.sh
-rwxr-xr-x. 1 mapred hadoop 1.8K Aug 20 01:38 start-dfs.sh
-rwxr-xr-x. 1 mapred hadoop 1.2K Aug 20 01:38 start-jobhistoryserver.sh
-rwxr-xr-x. 1 mapred hadoop 1.3K Aug 20 01:38 start-mapred.sh
-rwxr-xr-x. 1 mapred hadoop 1.1K Aug 20 01:38 stop-all.sh
-rwxr-xr-x. 1 mapred hadoop 1.1K Aug 20 01:38 stop-balancer.sh
-rwxr-xr-x. 1 mapred hadoop 1.3K Aug 20 01:38 stop-dfs.sh
-rwxr-xr-x. 1 mapred hadoop 1.2K Aug 20 01:38 stop-jobhistoryserver.sh
-rwxr-xr-x. 1 mapred hadoop 1.2K Aug 20 01:38 stop-mapred.sh
-rwxr-x---. 1 mapred hadoop 32K Aug 20 01:40 task-controller
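Since hadoop-daemon.sh itself is rwxr-xr-x (and the ".." entry above is drw-rw-rw-, i.e. missing its execute bit), the denial more likely comes from a directory on the path than from the script. namei walks every component of the path and prints its mode, which makes an offending directory easy to spot:

# print mode/owner for every component of the path; look for a directory without the x (search) bit
namei -l /usr/lib/hadoop/bin/hadoop-daemon.sh
# the same view from the hdfs user's perspective
su - hdfs -c 'ls -l /usr/lib/hadoop/bin/hadoop-daemon.sh'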
2014-02-17 04:44:55,840 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201402170443_0001_m_1794847342 spawned.
2014-02-17 04:44:55,845 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /hadoop/mapred/ttprivate/taskTracker/ambari-qa/jobcache/job_201402170443_0001/attempt_201402170443_0001_m_000001_0/taskjvm.sh
2014-02-17 04:44:56,733 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201402170443_0001_r_000000_0
2014-02-17 04:44:56,733 INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: attempt_201402170443_0001_r_000000_0
2014-02-17 04:44:56,996 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201402170443_0001_m_1794847342 given task: attempt_201402170443_0001_m_000001_0
2014-02-17 04:44:57,657 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201402170443_0001_m_000001_0 0.0%
2014-02-17 04:44:57,778 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201402170443_0001_m_000001_0 0.0% cleanup
2014-02-17 04:44:57,782 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201402170443_0001_m_000001_0 is done.
2014-02-17 04:44:57,782 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201402170443_0001_m_000001_0 was -1
2014-02-17 04:44:57,783 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 4
2014-02-17 04:44:58,007 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201402170443_0001_m_1794847342 exited with exit code 0. Number of tasks it ran: 1
2014-02-17 04:44:58,734 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201402170443_0001
2014-02-17 04:44:58,735 INFO org.apache.hadoop.mapred.IndexCache: Map ID attempt_201402170443_0001_m_000001_0 not found in cache
2014-02-17 04:44:58,737 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201402170443_0001 for user-log deletion with retainTimeStamp:1392698698736
: java.io.IOException: Connection reset by peer
	at org.apache.hadoop.ipc.Client.wrapException(Client.java:1155)
	at org.apache.hadoop.ipc.Client.call(Client.java:1123)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
	at org.apache.hadoop.mapred.$Proxy5.heartbeat(Unknown Source)
	at org.apache.hadoop.mapred.TaskTracker.transmitHeartBeat(TaskTracker.java:2068)
	at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1862)
	at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:2714)
	at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3977)
Caused by: java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
	at sun.nio.ch.IOUtil.read(IOUtil.java:198)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375)
	at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
	at java.io.FilterInputStream.read(FilterInputStream.java:133)
	at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:370)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:852)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:797)
2014-02-17 04:51:07,685 INFO org.apache.hadoop.mapred.TaskTracker: Resending 'status' to 'xxx-yyy-sname.tx1.21ct.com' with reponseId '1535
2014-02-17 04:51:08,689 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: xxx-yyy-sname.tx1.21ct.com/192.168.0.9:50300. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1 SECONDS)
2014-02-17 04:51:09,691 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: txoig-stag-sname.tx1.21ct.com/192.168.0.9:50300. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1 SECONDS)
2014-02-17 04:51:10,694 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: txoig-stag-sname.tx1.21ct.com/192.168.0.9:50300. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1 SECONDS)
2014-02-17 04:51:11,286 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at txoig-stag-hdfs01.tx1.21ct.com/192.168.0.4
************************************************************/
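The retries above all target the JobTracker RPC address (xxx-yyy-sname.tx1.21ct.com:50300 in this log) before the TaskTracker gives up and shuts down, so it is worth confirming on that host whether anything is listening there at all (port taken from the log itself):

# on the sname host: is the JobTracker RPC port from the retry messages actually open?
netstat -tlnp | grep ':50300' || echo "nothing listening on 50300"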
grep ERROR /var/log/ambari-agent/ambari-agent.log
ERROR 2014-02-17 13:47:52,220 PuppetExecutor.py:213 - Error running puppet:
ERROR 2014-02-17 03:19:38,112 PingPortListener.py:44 - Failed to start ping port listener of:[Errno 98] Address already in use
ERROR 2014-02-17 03:17:56,899 Controller.py:204 - Unable to connect to: https://txoig-stag-ambari.tx1.21ct.com:8441/agent/v1/heartbeat/txoig-stag-hdfs01.tx1.21ct.com due to [Errno 111] Connection refused
ERROR 2014-02-17 03:05:12,716 Controller.py:204 - Unable to connect to: https://xxx-yyy-ambari.example.com:8441/agent/v1/heartbeat/xxx-yyy-hdfs01.example.com due to Error occured during connecting to the server:
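These agent-side errors reduce to two separate symptoms: the agent's ping-port listener finds its port already bound (suggesting a stale agent process), and the heartbeat URL on the Ambari server (port 8441 above) refuses connections (suggesting the server was not up at that point). Both are quick to check with commands that ship with Ambari (a sketch; ports taken from the log):

# on the Ambari server host
ambari-server status
netstat -tlnp | grep ':8441'
# on the agent host: look for a stale/duplicate agent still holding the ping port
ambari-agent status
ps -ef | grep -i [a]mbari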
2014-02-17 04:42:02,843 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /hadoop/hdfs/data is not formatted
2014-02-17 04:42:02,843 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2014-02-17 04:42:02,926 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2014-02-17 04:42:02,948 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened data transfer server at 50010
2014-02-17 04:42:02,953 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 6250000 bytes/s
2014-02-17 04:42:02,966 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2014-02-17 04:42:03,082 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-02-17 04:42:03,211 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-02-17 04:42:03,239 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = true
2014-02-17 04:42:03,242 INFO org.apache.hadoop.http.HttpServer: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[root@xxx-yyy-hdfs01:~]$ cat /var/log/hadoop/hdfs/hadoop-hdfs-datanode-xxx-yyy-hdfs01.log
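For the run captured above the DataNode got as far as formatting its storage directory and opening the data transfer port, so a simple liveness check is whether port 50010 is open and whether the pid file that Ambari's puppet check looks for (see the puppet output below) points at a running process (a sketch; port and pid path taken from the logs in this gist):

# is the DataNode transfer port open and does the pid file match a live process?
netstat -tlnp | grep ':50010'
ps -p "$(cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid 2>/dev/null)" 2>/dev/null || echo "no live DataNode pid"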
notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully | |
notice: /Stage[2]/Hdp-hadoop::Initialize/Configgenerator::Configfile[hdfs-site]/File[/etc/hadoop/conf/hdfs-site.xml]/content: content changed '{md5}9e03a634ad133f505a420b063757423c' to '{md5}bfb0e223b609a5eb5abd2c18f4eb9299' | |
notice: /Stage[2]/Hdp-hadoop::Initialize/Configgenerator::Configfile[core-site]/File[/etc/hadoop/conf/core-site.xml]/content: content changed '{md5}11e3fecd3e4b88118dd4b436fa2a23ba' to '{md5}24b1301181ff11e4996338164b441151' | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/hadoop/hdfs/data]/Hdp::Exec[chown hdfs:hadoop /hadoop/hdfs/data; exit 0]/Exec[chown hdfs:hadoop /hadoop/hdfs/data; exit 0]/returns: executed successfully | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/hadoop/hdfs/data]/Hdp::Exec[chmod 0750 /hadoop/hdfs/data ; exit 0]/Exec[chmod 0750 /hadoop/hdfs/data ; exit 0]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLoggervaluetrue]/Hdp::Exec[sed -i 's~\(###\)\?log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=.*~###log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=true~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=.*~###log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=true~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[nameambari.jobhistory.uservaluemapred]/Hdp::Exec[sed -i 's~\(###\)\?ambari.jobhistory.user=.*~###ambari.jobhistory.user=mapred~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?ambari.jobhistory.user=.*~###ambari.jobhistory.user=mapred~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.appender.JHAvalueorg.apache.ambari.log4j.hadoop.mapreduce.jobhistory.JobHistoryAppender]/Hdp::Exec[sed -i 's~\(###\)\?log4j.appender.JHA=.*~###log4j.appender.JHA=org.apache.ambari.log4j.hadoop.mapreduce.jobhistory.JobHistoryAppender~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.appender.JHA=.*~###log4j.appender.JHA=org.apache.ambari.log4j.hadoop.mapreduce.jobhistory.JobHistoryAppender~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.appender.JHA.databasevalue${ambari.jobhistory.database}]/Hdp::Exec[sed -i 's~\(###\)\?log4j.appender.JHA.database=.*~###log4j.appender.JHA.database=${ambari.jobhistory.database}~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.appender.JHA.database=.*~###log4j.appender.JHA.database=${ambari.jobhistory.database}~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[nameambari.jobhistory.drivervalueorg.postgresql.Driver]/Hdp::Exec[sed -i 's~\(###\)\?ambari.jobhistory.driver=.*~###ambari.jobhistory.driver=org.postgresql.Driver~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?ambari.jobhistory.driver=.*~###ambari.jobhistory.driver=org.postgresql.Driver~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.appender.JHA.uservalue${ambari.jobhistory.user}]/Hdp::Exec[sed -i 's~\(###\)\?log4j.appender.JHA.user=.*~###log4j.appender.JHA.user=${ambari.jobhistory.user}~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.appender.JHA.user=.*~###log4j.appender.JHA.user=${ambari.jobhistory.user}~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.appender.JHA.passwordvalue${ambari.jobhistory.password}]/Hdp::Exec[sed -i 's~\(###\)\?log4j.appender.JHA.password=.*~###log4j.appender.JHA.password=${ambari.jobhistory.password}~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.appender.JHA.password=.*~###log4j.appender.JHA.password=${ambari.jobhistory.password}~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.appender.JHA.drivervalue${ambari.jobhistory.driver}]/Hdp::Exec[sed -i 's~\(###\)\?log4j.appender.JHA.driver=.*~###log4j.appender.JHA.driver=${ambari.jobhistory.driver}~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.appender.JHA.driver=.*~###log4j.appender.JHA.driver=${ambari.jobhistory.driver}~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[nameambari.jobhistory.passwordvaluemapred]/Hdp::Exec[sed -i 's~\(###\)\?ambari.jobhistory.password=.*~###ambari.jobhistory.password=mapred~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?ambari.jobhistory.password=.*~###ambari.jobhistory.password=mapred~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[nameambari.jobhistory.loggervalueDEBUG,JHA]/Hdp::Exec[sed -i 's~\(###\)\?ambari.jobhistory.logger=.*~###ambari.jobhistory.logger=DEBUG,JHA~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?ambari.jobhistory.logger=.*~###ambari.jobhistory.logger=DEBUG,JHA~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[nameambari.jobhistory.databasevaluejdbc:postgresql://txoig-stag-ambari.tx1.21ct.com/ambarirca]/Hdp::Exec[sed -i 's~\(###\)\?ambari.jobhistory.database=.*~###ambari.jobhistory.database=jdbc:postgresql://txoig-stag-ambari.tx1.21ct.com/ambarirca~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?ambari.jobhistory.database=.*~###ambari.jobhistory.database=jdbc:postgresql://txoig-stag-ambari.tx1.21ct.com/ambarirca~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[2]/Hdp-hadoop::Initialize/Configgenerator::Configfile[mapred-site]/File[/etc/hadoop/conf/mapred-site.xml]/content: content changed '{md5}080662ecfd9b6605f70106f70ddbe916' to '{md5}f1ec1673d89cb824d4001248f37d6dca' | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Exec[delete_pid_before_datanode_start]/returns: executed successfully | |
notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Update-log4j-properties[log4j.properties]/Hdp-hadoop::Update-log4j-property[namelog4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLoggervalue${ambari.jobhistory.logger}]/Hdp::Exec[sed -i 's~\(###\)\?log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=.*~###log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=${ambari.jobhistory.logger}~' /etc/hadoop/conf/log4j.properties]/Exec[sed -i 's~\(###\)\?log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=.*~###log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=${ambari.jobhistory.logger}~' /etc/hadoop/conf/log4j.properties]/returns: executed successfully | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/returns: -bash: /usr/lib/hadoop/bin/hadoop-daemon.sh: Permission denied | |
err: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/returns: change from notrun to 0 failed: su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode' returned 126 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:480 | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode']/Anchor[hdp::exec::su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'::end]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/Anchor[hdp::exec::sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1::begin]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/Anchor[hdp::exec::sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1::end]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true | |
notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Anchor[hdp-hadoop::service::datanode::end]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true | |
notice: /Stage[main]/Hdp-hadoop/Anchor[hdp-hadoop::end]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true | |
notice: /Stage[2]/Hdp-hadoop::Initialize/Hdp-hadoop::Common[common]/Anchor[hdp-hadoop::common::end]: Dependency Exec[su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'] has failures: true |
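Puppet reports the su exec returning 126, which in bash specifically means "found but cannot be executed" (as opposed to 127, command not found), so this is the same permission problem seen interactively earlier rather than a missing file. Re-running the exact exec from the log by hand reproduces it outside puppet:

# same command puppet runs; expect exit status 126 while the permission problem persists
su - hdfs -c 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'
echo "exit status: $?"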