Windows Hadoop installation

1. Install Cygwin

2. Install Cygwin components: openssh, openssl, sed, subversion

3. Add Cygwin/bin and Cygwin/usr/sbin to the Windows PATH

4. Install sshd

In Cygwin, run ssh-host-config

Should privilege separation be used? (no)

Do you want to install sshd as a service? (yes)

Cygwin will also prompt whether you want to create a new Windows user to run the service; the default user it creates is "cyg_server", but it is better to use the current domain user.

5. Configure ssh login

In Cygwin, run ssh-keygen
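For example, to set up passwordless login (the RSA key type and default file locations below are the usual choices, not requirements):

ssh-keygen -t rsa -P ""

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys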

6. Start the sshd service from the Windows control panel "Services"

Or run net start sshd; if the service fails to start, check /var/log/ssh.log

7. Verify ssh login

In Cygwin, run ssh localhost

Sometimes the default port 22 cannot be used.

We can change the port by editing sshd_config (Port xxx) and then connecting with ssh localhost -p xxx

For detailed logs, use ssh -v localhost

8. Download and extract Hadoop into a folder

9. Change JAVA_HOME in conf/hadoop-env.sh
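For example, in conf/hadoop-env.sh (the JDK path below is only an illustration; under Cygwin use the /cygdrive form and avoid paths containing spaces):

export JAVA_HOME=/cygdrive/c/Java/jdk1.6.0_45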

10. Test the setup

mkdir input

cp conf/*.xml input

bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
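To check the result of the example job:

cat output/*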




Problems encountered during installation

1. The first time, installing the sshd service failed

I needed to run sc delete sshd to delete the service, and then run ssh-host-config again.



2. Error: Privilege separation user sshd does not exist

Manually add the following line to the file /etc/passwd:

sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin


/etc/passwd format:

username:password:user ID:group ID:description:home directory:login shell

When a user logs in, a shell process is started to pass user input to the kernel.




3. Error: Connection closed by 1


If user A needs to connect via ssh as user B on host B, we need to copy A's public key to a file called "authorized_keys" under user B's home/<user B>/.ssh folder on host B.

Create the authorized_keys file: vi authorized_keys

Append the public key to the authorized_keys file: cat id_rsa.pub >> authorized_keys



For ssh, the access rights of the .ssh folder and the authorized_keys file need to be set correctly:

chmod 700 ~/.ssh

chmod 600 ~/.ssh/authorized_keys (the authorized_keys file must not be writable by anyone else)



4. Error starting Hadoop: java.io.IOException: Failed to set permissions of path: \tmp\hadoop-jizhan\mapred\staging\jizhan…..\.staging


This occurs because of a Windows compatibility problem in the class org.apache.hadoop.fs.FileUtil.

We need to manually change the method checkReturnValue to just log a warning message instead of throwing an exception.
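A minimal sketch of the patched method (based on the Hadoop 1.x FileUtil; the exact signature may differ between versions):

private static void checkReturnValue(boolean rv, File p, FsPermission permission)
        throws IOException {
    if (!rv) {
        // Original code threw an IOException here; on Windows/Cygwin just log a warning
        // via the class's existing commons-logging LOG field instead.
        LOG.warn("Failed to set permissions of path: " + p +
                 " to " + String.format("%04o", permission.toShort()));
    }
}

After the change, rebuild the class and place it ahead of the original on the classpath (or repack it into the hadoop-core jar).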



Reference

http://bbym010.iteye.com/blog/1019653





Running Hadoop

1. Under standalone mode:

Leave the default configuration.

Put the files to process directly under the hadoop/input folder (no need to upload to the Hadoop file system). Output files will be written to the hadoop/output folder.






2. Under pseudo-distributed mode:

core-site.xml

<configuration>

     <property>

       <name>fs.default.name</name>

       <value>hdfs://localhost:9890</value>

     </property>

</configuration>


mapred-site.xml

<configuration>

     <property>

       <name>mapred.job.tracker</name>

       <value>hdfs://localhost:9891</value>

     </property>

</configuration>


hdfs-site.xml

<configuration>

     <property>

       <name>dfs.replication</name>

       <value>1</value>

     </property>

</configuration>


Make sure that localhost is in the masters file

Make sure that localhost is in the slaves file
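With the configuration in place, the usual sequence for a Hadoop 1.x layout is roughly (run from the Hadoop folder; adjust names to your install):

bin/hadoop namenode -format

bin/start-all.sh

bin/hadoop fs -put conf input

bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

bin/hadoop fs -cat output/*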






Problems encountered running in standalone mode

1. Reducer does not execute.

There are a few things to check when encountering this problem:


It is good to explicitly specify the mapper's and reducer's output key and value classes.

The actual mapper and reducer type parameters must match that specification, and the mapper's output types must match the reducer's input types (see the sketch after this list).

A raw Context object will not be accepted by the map or reduce method; you need to use the strongly typed context:

Mapper<InputKey, InputValue, OutputKey, OutputValue>.Context

Reducer<InputKey, InputValue, OutputKey, OutputValue>.Context
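A minimal sketch of a job where all the types line up (the word-count-style classes and names here are illustrative, not from the original program):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TypedJobExample {

    // Mapper output types (Text, IntWritable) must equal the reducer input types.
    public static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(value.toString().trim()), new IntWritable(1));
        }
    }

    public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "typed job example");
        job.setJarByClass(TypedJobExample.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        // Declare the intermediate and final key/value classes explicitly.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}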



2. LineReader does not read lines correctly; a shorter line carries additional characters from the previous, longer line.

This is due to incorrect use of Text. A Text object has an internal byte array and an end index; after reading a longer line the internal buffer may have grown, and those extra bytes are not cleared when a shorter line is read. Only the bytes before the end index belong to the current line.

Do not use new String(text.getBytes()) to convert a Text to a String; use text.toString().
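A small sketch of the difference, mimicking how a reused Text is refilled with clear() and append() (values are illustrative):

import org.apache.hadoop.io.Text;

public class TextReuseExample {
    public static void main(String[] args) throws Exception {
        Text text = new Text("a much longer line");
        text.clear();                                   // resets the length, but keeps the backing byte array
        text.append("short".getBytes("UTF-8"), 0, 5);   // roughly how LineReader refills a reused Text

        System.out.println(new String(text.getBytes())); // wrong: leftover bytes from the longer line appear
        System.out.println(text.toString());             // right: "short"
        System.out.println(new String(text.getBytes(), 0, text.getLength(), "UTF-8")); // also respects the end index
    }
}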



Problem encountered running in pseudo-distributed mode

Error running map-reduce program

14/01/19 12:21:25 WARN mapred.JobClient: Error reading task output http://L-SHC-0436751.corp.ebay.com:50060/tasklog?plaintext=true&attemptid=attempt_20140119128_0002_m_000001_2&filter=stderr


Hadoop uses a Unix file link to redirect output in {HADOOP_DIR}/logs to tmp/hadoop-jizhan/mapred/local (note that hadoop.tmp.dir points to tmp/hadoop-jizhan/).

This link is not recognized as a directory on Windows by the JDK, and an exception is thrown.


To avoid the redirection, we can set HADOOP_LOG_DIR to point directly at /tmp/mapred/local (this is the Cygwin /tmp folder), and use the Unix ln command to map it to the local folder c:/tmp/hadoop-jizhan/mapred/local.
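A sketch of that setup, using the paths mentioned above (adjust them to your own layout):

# in conf/hadoop-env.sh
export HADOOP_LOG_DIR=/tmp/mapred/local

# in Cygwin, link the /tmp location to the Windows-side folder
mkdir -p /cygdrive/c/tmp/hadoop-jizhan/mapred/local
mkdir -p /tmp/mapred
ln -s /cygdrive/c/tmp/hadoop-jizhan/mapred/local /tmp/mapred/local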