Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task  in stage 0.0 failed  times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor  exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of  GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce

Error when running Hive on Spark

Solution
a. Increase the overhead allocation, e.g. set spark.yarn.executor.memoryOverhead=512 (the value is interpreted in MB). This is a stopgap: executor-memory + memoryOverhead must not exceed the memory available to the cluster.
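For example, the override can be applied per session in Beeline before submitting the query. The values below are illustrative, not recommendations; tune them to your cluster:

```sql
-- Illustrative values; adjust for your workload and cluster.
-- spark.yarn.executor.memoryOverhead is specified in MB when no unit is given.
set spark.executor.memory=8g;
set spark.yarn.executor.memoryOverhead=2048;
```

Note that in Spark 1.x/2.x the default overhead is max(384 MB, 10% of executor memory), which is often too small for jobs with heavy off-heap usage.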
b. The root cause is OS-level virtual memory allocation: physical memory usage is modest, but YARN's virtual-memory check reports OOM. Disabling the check works around the problem: set yarn.nodemanager.vmem-check-enabled=false.
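The check is disabled in yarn-site.xml on the NodeManagers. A minimal sketch (NodeManagers must be restarted for the change to take effect):

```xml
<!-- yarn-site.xml on every NodeManager -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

An alternative to disabling the check entirely is raising yarn.nodemanager.vmem-pmem-ratio (default 2.1) so that containers are allowed more virtual memory per unit of physical memory.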
