hadoop+spark environment -- single-node edition

1. Set the hostname and the hosts mapping.

2. Disable the firewall and create the working directories (-p creates any missing parent directories, so these also work when /hadoop does not exist yet):

mkdir -p /hadoop/tmp
mkdir -p /hadoop/dfs/name
mkdir -p /hadoop/dfs/data
mkdir -p /hadoop/var

3. Configure the Scala environment

[root@hadoop conf]# vim /etc/profile
export SCALA_HOME=/opt/scala2.11.12
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:$PATH

[root@hadoop conf]# source /etc/profile     # reload the profile in the current shell
[root@hadoop conf]# scala -version          # verify the installation

4. Configure the Spark environment (JDK installed at /usr/java/jdk1.8.0_201-amd64)

[root@hadoop conf]# vim /etc/profile
export SPARK_HOME=/opt/spark2.2.3
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:$PATH

[root@hadoop conf]# vim /opt/spark2.2.3/conf/spark-env.sh
export SCALA_HOME=/opt/scala2.11.12
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64   # Spark 2.2 dropped Java 7 support, so point this at the JDK 8 install above
export HADOOP_HOME=/opt/hadoop2.7.6
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/opt/spark2.2.3
export SPARK_MASTER_IP=hadoop.master        # master hostname
export SPARK_EXECUTOR_MEMORY=1G             # executor memory

5. Configure the Hadoop environment

[root@hadoop hadoop2.7.6]# vim /etc/profile
export HADOOP_HOME=/opt/hadoop2.7.6
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=${HADOOP_HOME}/bin:$PATH        # keep the existing PATH; the original line overwrote it

5.1 Edit core-site.xml

[root@hadoop hadoop]# vim core-site.xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop.master:9000</value>
</property>

(fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS, but the old name still works.)

5.2 Edit hadoop-env.sh (JDK installed at /usr/java/jdk1.8.0_201-amd64)

[root@hadoop hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64

5.3 Edit hdfs-site.xml

[root@hadoop hadoop]# vim hdfs-site.xml
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/dfs/name</value>
  <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/hadoop/dfs/data</value>
  <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>A single-node cluster has only one DataNode, so a replication factor above 1 can never be satisfied.</description>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>Disable HDFS permission checks.</description>
</property>

5.4 Edit mapred-site.xml

[root@hadoop hadoop]# vim mapred-site.xml
<property>
  <name>mapred.job.tracker</name>
  <value>hadoop.master:9001</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/hadoop/var</value>
</property>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

6. Start Hadoop

cd /opt/hadoop2.7.6/bin
./hadoop namenode -format        # format the NameNode (first start only; reformatting wipes HDFS metadata)
cd /opt/hadoop2.7.6/sbin
./start-dfs.sh                   # start HDFS
./start-yarn.sh                  # start YARN

Verify: open http://192.168.47.45:8088 (YARN) and http://192.168.47.45:50070 (HDFS NameNode) in a browser and check that both pages load.
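The browser check above can also be scripted from the server itself. A minimal sketch, assuming curl is installed; 192.168.47.45 is the tutorial's example address, so substitute your own host's IP:

```shell
# Probe the Hadoop web UIs from the command line.
# curl -sf exits non-zero when a page is unreachable, so the loop
# reports UP/DOWN per service instead of dumping HTML.
checked=0
for url in http://192.168.47.45:50070 http://192.168.47.45:8088; do
    if curl -sf --max-time 3 -o /dev/null "$url"; then
        echo "UP   $url"
    else
        echo "DOWN $url"
    fi
    checked=$((checked + 1))
done
echo "probed $checked UIs"
```

A DOWN result right after start-dfs.sh/start-yarn.sh is not necessarily fatal; the daemons can take several seconds to open their ports.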
7. Start Spark

cd /opt/spark2.2.3/sbin
./start-all.sh        # run from Spark's sbin; a bare start-all.sh may resolve to Hadoop's script of the same name if Hadoop's sbin is on PATH

Verify: open http://192.168.47.45:8080/ (Spark master UI) in a browser.
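The master and worker daemons take a few seconds to come up after start-all.sh, so an immediate check of the UI can fail spuriously. A small polling helper, sketched under the assumption that curl is available; wait_for_ui is a hypothetical name of ours, not a Spark or Hadoop command:

```shell
# Poll a web UI until it responds or the retry budget runs out.
# wait_for_ui is a local helper for this tutorial, not part of Spark.
wait_for_ui() {
    url=$1
    tries=${2:-10}          # default: 10 attempts, one second apart
    i=0
    while [ "$i" -lt "$tries" ]; do
        if curl -sf --max-time 2 -o /dev/null "$url"; then
            echo "UP $url"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "DOWN $url after $tries tries"
    return 1
}

# Example: allow up to 30 attempts for the Spark master UI
# wait_for_ui http://192.168.47.45:8080/ 30
```

Returning a proper exit code lets the helper gate follow-up steps, e.g. `wait_for_ui http://192.168.47.45:8080/ 30 && echo "Spark is up"`.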