
I am trying to set up a Hadoop cluster, where the master is my laptop and the slave is a VirtualBox VM, following this guide. So, from the master, I did:

gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ sudo ./start-dfs.sh
Starting namenodes on [master]
root@master's password: 
master: namenode running as process 2911. Stop it first.
root@master's password: root@slave-1's password: 
master: datanode running as process 3057. Stop it first.
<I gave password again here>

slave-1: starting datanode, logging to /home/hadoopuser/hadoop/logs/hadoop-root-datanode-gsamaras-VirtualBox.out
Starting secondary namenodes [0.0.0.0]
[email protected]'s password: 
0.0.0.0: secondarynamenode running as process 3234. Stop it first.
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ su - hadoopuser
Password: 
-su: /home/hduser/hadoop/sbin: No such file or directory
hadoopuser@gsamaras:~$ jps
15845 Jps

The guide states: "The output of this command should list NameNode, SecondaryNameNode, DataNode on master node, and DataNode on all slave nodes." That doesn't seem to be the case here (does it?), so I checked the slave's log:

cat hadoop-root-datanode-gsamaras-VirtualBox.log
..rver: master/192.168.1.2:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 02:42:14,160 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.2:54310
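Given the "Problem connecting to server" warning, a first sanity check is whether the slave can reach the NameNode's RPC port at all. A minimal sketch (the address 192.168.1.2:54310 is taken from the log above; adjust it to whatever your fs.default.name says):

```shell
# From the slave: test whether the NameNode RPC port on the master is reachable,
# using bash's built-in /dev/tcp so no extra tools are needed.
if timeout 5 bash -c 'exec 3<>/dev/tcp/192.168.1.2/54310'; then
    echo "NameNode port reachable"
else
    echo "NameNode port unreachable"
fi

# On the master: confirm the NameNode listens on the LAN address, not only
# on 127.0.0.1 (a frequent cause of this exact retry loop).
netstat -tln | grep 54310
```

If the NameNode turns out to be bound to 127.0.0.1, the /etc/hosts entry for `master` on the master machine is the usual suspect.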

gsamaras@gsamaras-VirtualBox:/home/hadoopuser/hadoop/logs$ ssh master
gsamaras@master's password: 
Welcome to Ubuntu 14.04.3..

The logs on the master node seem error-free. Notice that I can do a password-less SSH from master to slave, but not vice versa; the guide doesn't mention anything like this. Any ideas, please?
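For the missing direction (slave to master), the usual key setup is the following sketch, run on the slave; the user and host names (`hadoopuser`, `master`) are the ones appearing in the post:

```shell
# Generate a key pair if one does not exist yet (empty passphrase),
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy the public key into master's authorized_keys,
ssh-copy-id hadoopuser@master
# and verify that login now works without a password prompt.
ssh hadoopuser@master true
```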


When I execute stop-dfs.sh, I get the erroneous message:

slave-1: no datanode to stop

Now I did it again, and on the master I got:

gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ sudo ./stop-dfs.sh
Stopping namenodes on [master]
root@master's password: 
master: no namenode to stop
root@master's password: root@slave-1's password: 
master: no datanode to stop   
slave-1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
[email protected]'s password: 
0.0.0.0: stopping secondarynamenode
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ jps
19048 Jps
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ ps axww | grep hadoop
19277 pts/1    S+     0:00 grep --color=auto hadoop
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ jps
19278 Jps

and ps axww | grep hadoop on the slave gave a process with id 2553.
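Since stop-dfs.sh reported "no datanode to stop" while a Hadoop JVM was still alive on the slave, the leftover process can be stopped by PID as a workaround. A sketch (2553 is the PID seen above; the `-9` is a last resort only):

```shell
# List leftover Hadoop processes; the [h] trick keeps grep itself out of the output.
ps axww | grep '[h]adoop'
# Ask the process to shut down cleanly.
kill 2553
sleep 5
# Force-kill only if it is still alive after the grace period.
kill -0 2553 2>/dev/null && kill -9 2553
```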


1 Answer


I had to set permissions not only on the hadoop-data folder, as I thought, but on the hadoop folder itself:

sudo chown -R hadoopuser /home/hadoopuser/hadoop/

I got the idea from here.
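To confirm the ownership change took effect, a quick sketch (the path is the one from the answer above):

```shell
# The top-level directory should now be owned by hadoopuser...
stat -c '%U' /home/hadoopuser/hadoop
# ...and nothing underneath should be owned by anyone else
# (this should print no output at all).
find /home/hadoopuser/hadoop ! -user hadoopuser
```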
