I: The symptom: the job is submitted, but it makes no further progress after the following output:

Starting Job

16/06/30 01:15:34 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.10.50:8032

16/06/30 01:15:35 INFO input.FileInputFormat: Total input paths to process : 2

16/06/30 01:15:35 INFO mapreduce.JobSubmitter: number of splits:2

16/06/30 01:15:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467220503311_0001

16/06/30 01:15:35 INFO impl.YarnClientImpl: Submitted application application_1467220503311_0001

16/06/30 01:15:35 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1467220503311_0001/

16/06/30 01:15:35 INFO mapreduce.Job: Running job: job_1467220503311_0001
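
While the job is stuck at this point, the tracking URL above (or the YARN UI at http://master:8088) typically shows the application sitting in the ACCEPTED state with no ApplicationMaster allocated. The same can be confirmed from the command line with the standard YARN CLI (this check is my addition, not part of the original post):

yarn application -status application_1467220503311_0001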

II: Analysis and fix

1. It may be caused by insufficient disk space on the cluster nodes (a quick check is shown below).
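
If disk space is the suspect, the standard Linux and HDFS commands below (my addition, not from the original post) will confirm it quickly:

# local disk usage on every node, including the YARN local dirs and HDFS data dirs
df -h

# HDFS capacity and remaining space per DataNode, run from any node with the client configs
hdfs dfsadmin -report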

2. It may also be caused by an incomplete YARN configuration (yarn-site.xml). Before the change:

<configuration>

        <property>

                <name>yarn.nodemanager.aux-services</name>

                <value>mapreduce_shuffle</value>

        </property>

</configuration>

After the change:

<configuration>

        <property>

                <name>yarn.resourcemanager.hostname</name>

                <value>master</value>

        </property>

        <property>

                <name>yarn.nodemanager.aux-services</name>

                <value>mapreduce_shuffle</value>

        </property>

        <property>

                <name>yarn.log-aggregation-enable</name>

                <value>true</value>

        </property>

        <property>

                <name>yarn.log-aggregation.retain-seconds</name>

                <value>604800</value>

        </property>

</configuration>
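
The two log-aggregation properties are not what un-sticks the job; they keep container logs available after an application finishes (here for 604800 seconds, i.e. 7 days), which makes later diagnosis much easier. With them enabled, the logs of a completed application can be pulled with the standard YARN CLI, for example:

yarn logs -applicationId application_1467220503311_0001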

Restart the cluster with start-all.sh and the job should run normally.
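
A plausible explanation for the hang (my reading, not spelled out in the original post): without yarn.resourcemanager.hostname, the NodeManagers on the worker nodes fall back to the default 0.0.0.0 addresses for the ResourceManager, never register with it, and so no containers can ever be allocated, leaving the job stuck at "Running job". After the restart, you can verify that the NodeManagers have registered:

yarn node -list

If every slave node shows up as RUNNING, resubmit the job and it should proceed past the point shown above.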
