Installing and Configuring Hadoop 2.2.0 and Deploying Spark 1.0

I. Environment Description

This walkthrough uses VMware running on a 64-bit Windows 7 host. Two virtual machines are created in VMware, as follows:

Master: spark1 (192.168.232.147), RHEL 6.2 64-bit, user root

Slave: spark2 (192.168.232.152), RHEL 6.2 64-bit, user root

II. Environment Preparation

1. Disable the firewall, set the SSH service to start on boot, and disable SELinux.

2. Edit the hosts file so that spark1 and spark2 resolve to each other's IP addresses.

3. Set up passwordless SSH login.

4. Prepare the installation packages.

5. Install and configure JDK 1.7.

These steps are straightforward and are not covered in detail here; a quick sketch of the passwordless SSH setup (step 3) is given below.
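
As a minimal sketch of step 3, assuming the standard OpenSSH tools on RHEL 6 and the default key paths (details the original does not spell out):

# On spark1, generate an RSA key pair (accept the defaults, empty passphrase)
ssh-keygen -t rsa

# Install the public key on both machines so root can log in without a password
ssh-copy-id root@spark1
ssh-copy-id root@spark2

# Verify: this should print the remote hostname without prompting for a password
ssh root@spark2 hostname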

III. Installing and Configuring the Hadoop 2.2 Cluster

1. Create the installation directories (do the same on spark2):


mkdir -p /root/install/hadoop
mkdir -p /root/install/hadoop/hdfs
mkdir -p /root/install/hadoop/tmp
mkdir -p /root/install/hadoop/mapred
mkdir -p /root/install/hadoop/hdfs/name
mkdir -p /root/install/hadoop/hdfs/data
mkdir -p /root/install/hadoop/mapred/local
mkdir -p /root/install/hadoop/mapred/system

2. Upload hadoop-2.2.0.x86_64.tar.gz to the /root/install directory and extract it, for example as shown below.
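
A minimal sketch, assuming the tarball was uploaded to /root/install as described:

cd /root/install
tar -zxvf hadoop-2.2.0.x86_64.tar.gz   # creates /root/install/hadoop-2.2.0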

3. Configure the Hadoop environment variables:


export HADOOP_HOME=/root/install/hadoop-2.2.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
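
These exports only apply to the current shell. One common way to make them permanent (an assumption; the original does not say where they were placed) is to append them to /root/.bashrc and reload it:

# Append the export lines above to /root/.bashrc, then reload it
cat >> /root/.bashrc <<'EOF'
export HADOOP_HOME=/root/install/hadoop-2.2.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF
source /root/.bashrc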

4. Configure Hadoop

(1) Add the following to hadoop-env.sh:


     export JAVA_HOME=/root/install/jdk1.7.0_21

(2) Add the following to yarn-env.sh:


     export JAVA_HOME=/root/install/jdk1.7.0_21

(3) Configure core-site.xml:


<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://spark1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/install/hadoop/tmp</value>
  </property>
</configuration>

(4) Configure hdfs-site.xml:


<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/root/install/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/root/install/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

(5) Configure mapred-site.xml:


<configuration>
  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value>/root/install/hadoop/mapred/local</value>
  </property>
  <property>
    <name>mapreduce.cluster.system.dir</name>
    <value>/root/install/hadoop/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>spark1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>spark1:19888</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Djava.awt.headless=true</value>
  </property>
  <!-- add headless to default -Xmx1024m -->
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Djava.awt.headless=true -Xmx1024m</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.admin-command-opts</name>
    <value>-Djava.awt.headless=true</value>
  </property>
</configuration>

(6) Configure the masters file

   Change localhost to spark1.

(7) Configure the slaves file

   Replace localhost with spark1 and spark2, one hostname per line, as shown below.
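
For reference, the resulting files under $HADOOP_CONF_DIR should look like this (contents only, as implied by the two steps above):

# $HADOOP_CONF_DIR/masters
spark1

# $HADOOP_CONF_DIR/slaves
spark1
spark2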

(8) Once configured, copy the entire installation directory to /root/install on spark2, as shown below.
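
One way to do this with scp, assuming the installation directory is /root/install/hadoop-2.2.0 as set up above and the passwordless SSH from part II is in place:

scp -r /root/install/hadoop-2.2.0 root@spark2:/root/install/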

(9) Write a small script so that configuration changes can easily be synchronized to the other machines:


[root@spark1 install]# cat dispatchcfg.sh
#!/bin/bash
for target in spark2
do
  scp -r $HADOOP_CONF_DIR $target:/root/install/hadoop-2.2.0/etc
done

(10) Format the Hadoop NameNode: hadoop namenode -format

5. Starting the Hadoop Cluster

(1) Run start-all.sh.

(2) Check the related processes with jps; the expected set of daemons is sketched below.
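
With the masters/slaves layout above, the output should look roughly like this (PIDs omitted; this is an expectation based on the configuration, not captured output):

# On spark1 (master and slave)
jps    # NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager

# On spark2 (slave only)
jps    # DataNode, NodeManager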

6. Testing Hadoop

(1) Create a directory /input in HDFS and upload a data file into it:


hadoop fs -mkdir /input

hadoop fs -put /etc/group /input

(2) Run the wordcount example:

  hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
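
The example jar normally ships under share/hadoop/mapreduce in the Hadoop 2.2 distribution, so if it is not in the current directory the full path can be used (worth verifying on your install); the result can then be read back from HDFS, where part-r-00000 is the default single-reducer output name:

# Run the bundled example jar
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
  wordcount /input /output

# Inspect the word counts
hadoop fs -cat /output/part-r-00000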


IV. Installing and Deploying Spark 1.0

(1) Extract spark-1.0.0-bin-2.2.0.tgz.

(2) Add the following to conf/spark-env.sh:


export JAVA_HOME=/root/install/jdk1.7.0_21
export SPARK_MASTER_IP=spark1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
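
The original post does not show this step, but for sbin/start-all.sh to launch a Worker on spark2 the worker hosts normally have to be listed in conf/slaves, and the Spark directory has to exist on spark2 as well; a sketch under that assumption (the extracted directory name is also assumed):

# List the worker hosts, one per line
cat > conf/slaves <<'EOF'
spark1
spark2
EOF

# Copy the whole Spark directory to spark2
scp -r /root/install/spark-1.0.0-bin-2.2.0 root@spark2:/root/install/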

(3) Start the Spark cluster with sbin/start-all.sh and check the related processes, as sketched below.
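
If the start succeeds, jps should show a Master and a Worker on spark1 and a Worker on spark2, and the standalone master serves a web UI on port 8080 by default (an expectation from the configuration above, not captured output):

# Check the daemons on each node
jps    # spark1: Master, Worker; spark2: Worker

# Standalone master web UI (default port 8080)
# http://spark1:8080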


(4) Check the running status of the cluster.


(5) Launch the Spark shell: bin/spark-shell --executor-memory 1g --driver-memory 1g --master spark://spark1:7077
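
Once the scala> prompt appears, a quick smoke test is to count the lines of the file uploaded to HDFS earlier; the same statement can also be piped into spark-shell non-interactively (the HDFS path follows from the hadoop fs -put /etc/group /input step above):

echo 'sc.textFile("hdfs://spark1:9000/input/group").count()' | \
  bin/spark-shell --master spark://spark1:7077 --executor-memory 1g --driver-memory 1g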
