Spark on YARN jar upload problem

Date: 2023-05-04

Problem description

I am trying to run a simple Map/Reduce Java program using Spark on YARN (Cloudera Hadoop 5.2 on CentOS). I have tried two different ways. The first way is the following:

YARN_CONF_DIR=/usr/lib/hadoop-yarn/etc/hadoop/; 
/var/tmp/spark/spark-1.4.0-bin-hadoop2.4/bin/spark-submit --class MRContainer --master yarn-cluster --jars /var/tmp/spark/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar  simplemr.jar

This method gives the following error:

diagnostics: Application application_1434177111261_0007 failed 2 times due to AM Container for appattempt_1434177111261_0007_000002 exited with exitCode: -1000 due to: Resource hdfs://kc1ltcld29:9000/user/myuser/.sparkStaging/application_1434177111261_0007/spark-assembly-1.4.0-hadoop2.4.0.jar changed on src filesystem (expected 1434549639128, was 1434549642191)
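
The "changed on src filesystem" diagnostic comes from YARN resource localization: at submit time the client records the modification time of each staged file, and the NodeManager refuses to download a file whose timestamp on HDFS no longer matches the recorded value. One way to check is to list the staged jar and compare its modification time with the "expected" value in the error (a diagnostic sketch; the application ID below is taken from the error above, and the staging directory may already have been cleaned up by the time you look):

hdfs dfs -ls hdfs://kc1ltcld29:9000/user/myuser/.sparkStaging/application_1434177111261_0007/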

Then I tried without the --jars:

YARN_CONF_DIR=/usr/lib/hadoop-yarn/etc/hadoop/; 
/var/tmp/spark/spark-1.4.0-bin-hadoop2.4/bin/spark-submit --class MRContainer --master yarn-cluster simplemr.jar

diagnostics: Application application_1434177111261_0008 failed 2 times due to AM Container for appattempt_1434177111261_0008_000002 exited with exitCode: -1000 due to: File does not exist: hdfs://kc1ltcld29:9000/user/myuser/.sparkStaging/application_1434177111261_0008/spark-assembly-1.4.0-hadoop2.4.0.jar
Failing this attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.myuser
     start time: 1434549879649
     final status: FAILED
     tracking URL: http://kc1ltcld29:8088/cluster/app/application_1434177111261_0008
     user: myuser
Exception in thread "main" org.apache.spark.SparkException: Application application_1434177111261_0008 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:841)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:867)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/06/17 10:04:57 INFO util.Utils: Shutdown hook called
15/06/17 10:04:57 INFO util.Utils: Deleting directory /tmp/spark-2aca3f35-abf1-4e21-a10e-4778a039d0f4

I tried deleting all the .jars from hdfs://users//.sparkStaging and resubmitting, but that didn't help.
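
For reference, the cleanup described above would look roughly like the following (a sketch; the myuser home directory is taken from the error logs above, and the glob assumes the default staging-directory layout):

hdfs dfs -rm -r /user/myuser/.sparkStaging/application_*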

Solution

The problem was solved by copying spark-assembly.jar into a directory on HDFS, where every node can read it, and then passing its location to spark-submit via --conf spark.yarn.jar. The commands are listed below:

hdfs dfs -copyFromLocal /var/tmp/spark/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar /user/spark/spark-assembly.jar 

/var/tmp/spark/spark-1.4.0-bin-hadoop2.4/bin/spark-submit --class MRContainer --master yarn-cluster  --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar simplemr.jar
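
To avoid repeating the --conf flag on every submission, the same property can also be set once in Spark's defaults file (a sketch, assuming the stock conf/spark-defaults.conf inside this Spark 1.4.0 installation):

# /var/tmp/spark/spark-1.4.0-bin-hadoop2.4/conf/spark-defaults.conf
# Point spark-submit at the pre-uploaded assembly so it stops re-staging the jar on every run.
spark.yarn.jar hdfs:///user/spark/spark-assembly.jar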
