./bin/flink run -m yarn-cluster -yjm 1024 -ytm 1024 -s hdfs://master:9000/flink/checkpoints/d15750eebe118cccb93b4450a008e4d3/chk-158/_metadata -c stream.TestKafkaCheckpoint /var/flink/data/jars/flink-1.0-SNAPSHOT.jar

As you can see, I allocated 1 GB of memory each to the JobManager and the TaskManager (-yjm 1024, -ytm 1024).

But running it produced the error below.

It says that the sum of those memory components (128 + 128 + 0 + 52.8 + 64 = 372.8 MB) exceeds the configured total Flink memory (132 MB).

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Sum of configured Framework Heap Memory (128.000mb (134217728 bytes)), Framework Off-Heap Memory (128.000mb (134217728 bytes)), Task Off-Heap Memory (0 bytes), Managed Memory (52.800mb (55364813 bytes)) and Network Memory (64.000mb (67108864 bytes)) exceed configured Total Flink Memory (132.000mb (138412032 bytes)).
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Sum of configured Framework Heap Memory (128.000mb (134217728 bytes)), Framework Off-Heap Memory (128.000mb (134217728 bytes)), Task Off-Heap Memory (0 bytes), Managed Memory (52.800mb (55364813 bytes)) and Network Memory (64.000mb (67108864 bytes)) exceed configured Total Flink Memory (132.000mb (138412032 bytes)).

When I lowered the JobManager and TaskManager memory (-yjm 500, -ytm 200), it reported the error below instead.

It says that the sum of the configured JVM Metaspace (700 MB) and JVM Overhead (192 MB) exceeds the configured total process memory of 200 MB (the -ytm value).

./bin/flink run -m yarn-cluster -yjm 500 -ytm 200 -s hdfs://master:9000/flink/checkpoints/d15750eebe118cccb93b4450a008e4d3/chk-158/_metadata -c stream.TestKafkaCheckpoint /var/flink/data/jars/flink-1.0-SNAPSHOT.jar

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Sum of configured JVM Metaspace (700.000mb (734003200 bytes)) and JVM Overhead (192.000mb (201326592 bytes)) exceed configured Total Process Memory (200.000mb (104857600 bytes)).
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Sum of configured JVM Metaspace (700.000mb (734003200 bytes)) and JVM Overhead (192.000mb (201326592 bytes)) exceed configured Total Process Memory (200.000mb (104857600 bytes)).

This is presumably down to the new Flink 1.10 memory model, so I checked the official docs: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html

[Screenshot: TaskManager memory options from the Flink 1.10 configuration docs]

As you can see, taskmanager.memory.process.size configures the total process memory, while taskmanager.memory.flink.size configures the total Flink memory. The former contains the latter plus the JVM metaspace and JVM overhead. When starting in a containerized setup, the former should be set to the size of the container.
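In flink-conf.yaml terms, the relationship between the two keys looks roughly like this (an illustrative sketch, not the configuration used in this post; on YARN, -ytm is what ends up as the TaskManager's total process memory, as the arithmetic later in the post bears out):

# Either fix the total process memory, i.e. everything the container must hold
# (total Flink memory + JVM metaspace + JVM overhead):
taskmanager.memory.process.size: 1568m
# ...or fix only the total Flink memory (framework/task heap and off-heap,
# managed memory, network memory) and let Flink add metaspace and overhead on top:
# taskmanager.memory.flink.size: 1280m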

The earlier part of that is easy to understand; the key is the last sentence: "when starting in a containerized setup, the former should be set to the size of the container". In YARN mode the container is a YARN container, and container size and count are controlled by a few parameters:

yarn.scheduler.minimum-allocation-mb   # minimum memory of a requested container (default 1 GB)
yarn.scheduler.maximum-allocation-mb   # maximum memory of a requested container (default 8 GB)
yarn.nodemanager.resource.memory-mb    # memory the NodeManager offers for containers (default 8 GB; divided by the first setting, this gives the maximum number of containers)

I hadn't configured these parameters at first and was using the defaults. I then wondered whether the problem was that my machine only had 4 GB of memory available: with up to 8 containers by default, each container would only get about 512 MB, less than the default taskmanager.memory.process.size of 1568m, and perhaps that was the cause. I wasn't sure, though, because an allocation is grown automatically when the requested container isn't big enough, as long as it stays below the maximum, so in theory it shouldn't matter. Never mind, try it first: I changed the NodeManager memory (yarn.nodemanager.resource.memory-mb) to 4 GB and the minimum container size to 2 GB, then restarted Hadoop for the change to take effect:

[Screenshot: modified yarn-site.xml]
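Based on the values just described (NodeManager memory 4 GB, minimum container size 2 GB), the edited yarn-site.xml entries would look roughly like this (a sketch, not the exact file from the screenshot):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value> <!-- memory the NodeManager offers for containers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value> <!-- smallest container YARN will allocate -->
</property>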

Flink's default configuration was:

[Screenshot: Flink's default flink-conf.yaml memory settings]
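The relevant lines of a stock Flink 1.10 flink-conf.yaml are roughly the following (quoted from memory, so treat the exact values as approximate):

jobmanager.heap.size: 1024m
taskmanager.memory.process.size: 1568m
# taskmanager.memory.flink.size is not set; Flink derives it from process.size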

Running it again:

./bin/flink run -m yarn-cluster -yjm 1024 -ytm 1024 -s hdfs://master:9000/flink/checkpoints/d15750eebe118cccb93b4450a008e4d3/chk-158/_metadata -c stream.TestKafkaCheckpoint /var/flink/data/jars/flink-1.0-SNAPSHOT.jar

It still failed with the same error:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Sum of configured Framework Heap Memory (128.000mb (134217728 bytes)), Framework Off-Heap Memory (128.000mb (134217728 bytes)), Task Off-Heap Memory (0 bytes), Managed Memory (52.800mb (55364813 bytes)) and Network Memory (64.000mb (67108864 bytes)) exceed configured Total Flink Memory (132.000mb (138412032 bytes)).
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Sum of configured Framework Heap Memory (128.000mb (134217728 bytes)), Framework Off-Heap Memory (128.000mb (134217728 bytes)), Task Off-Heap Memory (0 bytes), Managed Memory (52.800mb (55364813 bytes)) and Network Memory (64.000mb (67108864 bytes)) exceed configured Total Flink Memory (132.000mb (138412032 bytes)).

So that was not the cause.

So how is this 132 MB actually computed?

No idea. But the total Flink memory is whatever taskmanager.memory.flink.size specifies, and the default configuration does not set it explicitly, so I tried setting it:

[Screenshot: modified flink-conf.yaml]

Running again:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: bytes must be >= 0
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: java.lang.IllegalArgumentException: bytes must be >= 0

This time a different error appeared, and it also looks memory-related! Presumably, with taskmanager.memory.flink.size set explicitly, some derived memory component no longer fits into the configured total process memory and is computed as a negative number of bytes.

 

At this point I looked at the memory model diagram in the official docs:

[Figure: Flink 1.10 TaskManager memory model diagram from the official docs]

As the diagram shows, total process memory = total Flink memory + JVM metaspace + JVM overhead.

The -ytm I specified on the command line was 1024m. From the errors we know the JVM Metaspace is 700 MB, the JVM Overhead is 192 MB, and the total Flink memory is 132 MB. These three add up to exactly 1024 MB, matching the value I set.

So the configured -ytm was simply too small! From the first error we know at least another 240 MB or so is needed (372.8 MB required vs. 132 MB available). I simply added another 1 GB, allocating 2 GB in total to the TaskManager, and ran again:

./bin/flink run -m yarn-cluster -yjm 1024 -ytm 2048 -s hdfs://master:9000/flink/checkpoints/d15750eebe118cccb93b4450a008e4d3/chk-158/_metadata -c stream.TestKafkaCheckpoint /var/flink/data/jars/flink-1.0-SNAPSHOT.jar

2020-05-27 14:06:54,099 INFO  org.apache.hadoop.yarn.client.RMProxy                         - Connecting to ResourceManager at master/xxx:8032
2020-05-27 14:06:54,200 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2020-05-27 14:06:54,396 WARN  org.apache.flink.yarn.YarnClusterDescriptor                   - The JobManager or TaskManager memory is below the smallest possible YARN Container size. The value of 'yarn.scheduler.minimum-allocation-mb' is '2048'. Please increase the memory size.YARN will allocate the smaller containers but the scheduler will account for the minimum-allocation-mb, maybe not all instances you requested will start.
2020-05-27 14:06:54,396 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Cluster specification: ClusterSpecification{masterMemoryMB=2048, taskManagerMemoryMB=2048, slotsPerTaskManager=8}
2020-05-27 14:06:57,925 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Submitting application master application_1590554685223_0001
2020-05-27 14:06:58,063 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl         - Submitted application application_1590554685223_0001
2020-05-27 14:06:58,063 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Waiting for the cluster to be allocated
2020-05-27 14:06:58,070 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Deploying cluster, current state ACCEPTED
2020-05-27 14:07:04,905 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - YARN application has been deployed successfully.
2020-05-27 14:07:04,906 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Found Web Interface master:5804 of application 'application_1590554685223_0001'.
Job has been submitted with JobID 770dda740247c797287fc22d88ad3319

As you can see, it ran successfully!

Note also the warning in the log that the JobManager or TaskManager memory is below the smallest possible YARN container size: "The value of 'yarn.scheduler.minimum-allocation-mb' is '2048'", i.e. exactly what was set earlier in yarn-site.xml. YARN will still allocate the containers, but the scheduler accounts for the minimum allocation, so the log suggests increasing the JobManager and TaskManager memory; otherwise not all requested instances may start.
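To get rid of that warning, the requested memory should be at least the container minimum; a variant of the command along those lines (a sketch, not re-run here) would be:

./bin/flink run -m yarn-cluster -yjm 2048 -ytm 2048 -s hdfs://master:9000/flink/checkpoints/d15750eebe118cccb93b4450a008e4d3/chk-158/_metadata -c stream.TestKafkaCheckpoint /var/flink/data/jars/flink-1.0-SNAPSHOT.jar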