Table of Contents

  • 17. Yarn Hands-On Cases
  • 17.1 Yarn Production-Environment Core Parameter Configuration Case
  • 17.1.1 Requirement
  • 17.1.2 Requirement Analysis
  • 17.1.3 Modify the yarn-site.xml Configuration Parameters as Follows
  • 17.1.4 Distribute the Configuration
  • 17.1.5 Restart the Cluster
  • 17.1.6 Run the WordCount Program
  • 17.1.7 Observe the Yarn Job Execution Page


17. Yarn Hands-On Cases

Note: Before adjusting the parameters below, take Linux snapshots of the machines if at all possible; otherwise you will have to prepare the cluster again from scratch for the later cases.

Take snapshots of hadoop102, hadoop103, and hadoop104.


17.1 Yarn Production-Environment Core Parameter Configuration Case

17.1.1 Requirement

Count how many times each word appears in 1 GB of data. There are 3 servers, each configured with 2 GB of memory, 2 CPU cores, and 4 threads.

Take the parameter values from your own cluster's configuration.
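If you want to read those numbers from your own machines, the standard Linux commands below show total memory and CPU/core/thread counts (this is just a convenience added here, not part of the original steps):

[summer@hadoop102 ~]$ free -h     # total, used and free memory on this node
[summer@hadoop102 ~]$ lscpu       # CPU(s), Thread(s) per core, Core(s) per socket, ...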


17.1.2 Requirement Analysis

1 GB / 128 MB = 8 MapTasks; plus 1 ReduceTask and 1 MrAppMaster, 10 tasks in total.
On average, 10 tasks / 3 nodes ≈ 3 tasks per node (spread as 4 / 3 / 3).
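The estimate assumes the default HDFS block size of 128 MB. As a quick sanity check on your own cluster (assuming the 1 GB input has already been uploaded to /testinput, the path used later in this case):

[summer@hadoop102 ~]$ hdfs getconf -confKey dfs.blocksize   # 134217728 bytes = 128 MB by default
[summer@hadoop102 ~]$ hadoop fs -du -h /testinput           # total size of the input data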

17.1.3 Modify the yarn-site.xml Configuration Parameters as Follows



Insert the following at the end of yarn-site.xml, above the closing </configuration> tag:

<!-- Select the scheduler; the default is the Capacity Scheduler -->
<property>
	<description>The class to use as the resource scheduler.</description>
	<name>yarn.resourcemanager.scheduler.class</name>
	<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<!-- Number of threads the ResourceManager uses to handle scheduler requests; default 50. If more than 50 jobs are submitted this can be increased, but it should not exceed 3 nodes * 4 threads = 12 threads (and, leaving threads for other processes, realistically no more than 8) -->
<property>
	<description>Number of threads to handle scheduler interface.</description>
	<name>yarn.resourcemanager.scheduler.client.thread-count</name>
	<value>8</value>
</property>

<!-- Whether YARN auto-detects the hardware to configure itself; default false. If the node runs many other applications, manual configuration is recommended; if it runs nothing else, auto-detection can be used -->
<property>
	<description>Enable auto-detection of node capabilities such as
	memory and CPU.
	</description>
	<name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
	<value>false</value>
</property>

<!-- Whether to count logical processors (hyperthreads) as CPU cores; default false, i.e. only physical CPU cores are counted -->
<property>
	<description>Flag to determine if logical processors(such as
	hyperthreads) should be counted as cores. Only applicable on Linux
	when yarn.nodemanager.resource.cpu-vcores is set to -1 and
	yarn.nodemanager.resource.detect-hardware-capabilities is true.
	</description>
	<name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
	<value>false</value>
</property>

<!-- Multiplier for converting physical cores to vcores; default 1.0 -->
<property>
	<description>Multiplier to determine how to convert physical cores to
	vcores. This value is used if yarn.nodemanager.resource.cpu-vcores
	is set to -1 (which implies auto-calculate vcores) and
	yarn.nodemanager.resource.detect-hardware-capabilities is set to true.
	The number of vcores will be calculated as number of CPUs * multiplier.
	</description>
	<name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
	<value>1.0</value>
</property>

<!-- Amount of memory the NodeManager may use; default 8 GB, changed here to 4 GB. Set it to however many GB your own machine has -->
<property>
	<description>Amount of physical memory, in MB, that can be allocated 
	for containers. If set to -1 and
	yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
	automatically calculated(in case of Windows and Linux).
	In other cases, the default is 8192MB.
	</description>
	<name>yarn.nodemanager.resource.memory-mb</name>
	<value>4096</value>
</property>

<!-- Number of vcores for the NodeManager; when not auto-detected from the hardware the default is 8, changed here to 4. Set it to however many CPUs your own machine has -->
<property>
	<description>Number of vcores that can be allocated
	for containers. This is used by the RM scheduler when allocating
	resources for containers. This is not used to limit the number of
	CPUs used by YARN containers. If it is set to -1 and
	yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
	automatically determined from the hardware in case of Windows and Linux.
	In other cases, number of vcores is 8 by default.</description>
	<name>yarn.nodemanager.resource.cpu-vcores</name>
	<value>4</value>
</property>

<!-- Minimum memory per container; default 1 GB -->
<property>
	<description>The minimum allocation for every container request at the RM in MBs.
	Memory requests lower than this will be set to the value of this property.
	Additionally, a node manager that is configured to have less memory than this
	value will be shut down by the resource manager.
	</description>
	<name>yarn.scheduler.minimum-allocation-mb</name>
	<value>1024</value>
</property>

<!-- Maximum memory per container; default 8 GB, changed here to 2 GB. Base it on your own machine's memory and leave some headroom; it can be raised for larger jobs. In production, servers typically have around 128 GB of memory, which is more than enough -->
<property>
	<description>The maximum allocation for every container request at the RM in MBs.
	Memory requests higher than this will throw an InvalidResourceRequestException.
	</description>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
</property>

<!-- Minimum number of vcores per container; default 1 -->
<property>
	<description>The minimum allocation for every container request at the RM in terms
	of virtual CPU cores. Requests lower than this will be set to the value of this
	property. Additionally, a node manager that is configured to have fewer virtual
	cores than this value will be shut down by the resource manager.
	</description>
	<name>yarn.scheduler.minimum-allocation-vcores</name>
	<value>1</value>
</property>

<!-- Maximum number of vcores per container; default 4, changed here to 2. Aim for about half of your CPU count; on my machines, setting it to half would keep WordCount from running, so it is left at 2 -->
<property>
	<description>The maximum allocation for every container request at the RM in terms
	of virtual CPU cores. Requests higher than this will throw an
	InvalidResourceRequestException.</description>
	<name>yarn.scheduler.maximum-allocation-vcores</name>
	<value>2</value>
</property>

<!-- Virtual memory check; enabled by default, disabled here -->
<property>
	<description>Whether virtual memory limits will be enforced for
	containers.</description>
	<name>yarn.nodemanager.vmem-check-enabled</name>
	<value>false</value>
</property>

<!-- Ratio between virtual memory and physical memory; default 2.1 -->
<property>
	<description>Ratio between virtual memory to physical memory when setting memory
	limits for containers. Container allocations are expressed in terms of physical
	memory, and virtual memory usage is allowed to exceed this allocation by this ratio.
	</description>
	<name>yarn.nodemanager.vmem-pmem-ratio</name>
	<value>2.1</value>
</property>
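Before distributing the file, it does no harm to confirm that the edited yarn-site.xml is still well-formed XML. This check is my own addition (xmllint comes from libxml2 and may need to be installed first):

[summer@hadoop102 hadoop]$ xmllint --noout ./yarn-site.xml && echo "yarn-site.xml is well-formed"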


Why disable the virtual-memory check?
With Java 8 the process really only uses the memory in the Java heap, but on CentOS 7.0 and later Linux reserves extra virtual memory for the Java process (around 5 GB here) while the memory actually used stays under 4 GB. That reserved virtual memory is largely wasted, and it is exactly what the virtual-memory check measures, so the check is turned off.
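To see this gap on one of the nodes, compare a running Java daemon's virtual size (VSZ) with its resident size (RSS); <PID> below is a placeholder for whatever jps reports on your machine:

[summer@hadoop102 ~]$ jps                               # pick the PID of a Java daemon, e.g. the NodeManager
[summer@hadoop102 ~]$ ps -o pid,vsz,rss,comm -p <PID>   # VSZ (virtual) is usually far larger than RSS (memory actually resident)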

If the maximum container memory is set too small, an error like the one in the screenshot below appears.

(screenshot: error caused by setting the maximum container memory too small)


If the maximum container vcore count is set too small, an error like the one in the screenshot below appears.

(screenshot: error caused by setting the maximum container vcore count too small)

17.1.4 Distribute the Configuration

Note: If the hardware resources across the cluster are not identical, configure each NodeManager separately.


Then distribute the file. If the machines in the cluster differ, say hadoop102 has an i7 while hadoop103 only has an i3, it is better not to distribute one shared file but to configure each machine individually.

The distribution script I use was written in an earlier post; see that post for the details.
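A minimal sketch of the distribution step, assuming the xsync rsync wrapper from that earlier post is on the PATH; plain scp works just as well if the directory layout is the same on every node:

[summer@hadoop102 hadoop]$ xsync yarn-site.xml
[summer@hadoop102 hadoop]$ # or, without the helper script:
[summer@hadoop102 hadoop]$ for host in hadoop103 hadoop104; do scp yarn-site.xml $host:$PWD; done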

17.1.5 Restart the Cluster

[summer@hadoop102 hadoop]$  stop
[summer@hadoop102 hadoop]$  start


This is the cluster start/stop script I wrote in an earlier post.
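If you do not have such a script, the new settings can also be picked up by restarting just YARN with the standard Hadoop 3.x scripts, run on the ResourceManager node (hadoop103 here, judging by the Web UI address used below), and then checking the daemons with jps:

[summer@hadoop103 hadoop-3.1.3]$ sbin/stop-yarn.sh
[summer@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
[summer@hadoop103 hadoop-3.1.3]$ jps    # ResourceManager and NodeManager should be running again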

17.1.6 Run the WordCount Program

[summer@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /testinput /testoutput/output6
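The job expects /testinput to already hold the 1 GB input and the output directory to not exist yet. A rough sketch of preparing them, where word.txt is a placeholder name for your input file:

[summer@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir -p /testinput
[summer@hadoop102 hadoop-3.1.3]$ hadoop fs -put word.txt /testinput
[summer@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /testoutput/output6   # only needed if it is left over from an earlier run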


17.1.7 Observe the Yarn Job Execution Page

http://hadoop103:8088/cluster/apps

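Besides the web page, the same information can be checked from the command line with the standard YARN CLI (these commands are not specific to this cluster):

[summer@hadoop102 hadoop-3.1.3]$ yarn application -list -appStates ALL   # submitted, running and finished applications
[summer@hadoop102 hadoop-3.1.3]$ yarn node -list -all                    # NodeManagers and their state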