1. Edit the spark-defaults.conf configuration file

Add the spark.eventLog.enabled and spark.eventLog.dir settings.
Point spark.eventLog.dir at the HDFS address (host and port) we configured earlier.
For that HDFS configuration, see the earlier post hadoop(七)集群配置同步(hadoop完全分布式四)|9.

[shaozhiqi@hadoop102 conf]$ pwd
/opt/module/spark-2.4.3-bin-hadoop2.7/conf
[shaozhiqi@hadoop102 conf]$ vim spark-defaults.conf
# spark.master spark://master:7077
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs://namenode:8021/directory
# spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 5g
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop102:9000/directory
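The host and port in spark.eventLog.dir must match fs.defaultFS in the Hadoop configuration (hadoop102:9000 here). A quick way to double-check, assuming the hdfs command is on the PATH:

# Print the configured HDFS address; it should match the scheme://host:port used in spark.eventLog.dir
hdfs getconf -confKey fs.defaultFS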

2. Distribute the modified files under conf

For how files are distributed across the cluster, see the earlier post hadoop(六)rsync远程同步|xsync集群分发(完全分布式准备三)|8.

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ testxsync conf/
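testxsync is the author's own wrapper script; as a rough sketch, an xsync-style distribution script is essentially an rsync loop over the other nodes (the hostnames and user come from this cluster, everything else is illustrative):

#!/bin/bash
# Hypothetical minimal xsync: push a file or directory to the other nodes with rsync
dir=$(cd -P "$(dirname "$1")" && pwd)   # absolute path of the parent directory
name=$(basename "$1")
for host in hadoop103 hadoop104; do
  rsync -av "$dir/$name" "shaozhiqi@$host:$dir"
done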

Pick one of the other machines and check that the sync succeeded:

[shaozhiqi@hadoop103 spark-2.4.3-bin-hadoop2.7]$ cd conf
[shaozhiqi@hadoop103 conf]$ cat spark-defaults.conf
# spark.master spark://master:7077
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs://namenode:8021/directory
# spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 5g
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop102:9000/directory
[shaozhiqi@hadoop103 conf]$

3. Start HDFS

To avoid startup errors, first delete the data and logs directories on every node, then format the NameNode:
bin/hdfs namenode -format
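The cleanup mentioned above (run on every node before re-formatting) might look like this; the exact data directory depends on dfs.namenode.name.dir / dfs.datanode.data.dir in hdfs-site.xml, so the path below is an assumption:

# Remove old HDFS data and logs so the freshly formatted NameNode and the DataNodes agree on the cluster ID
rm -rf /opt/module/hadoop-3.1.2/data /opt/module/hadoop-3.1.2/logs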

[shaozhiqi@hadoop102 hadoop-3.1.2]$ start-dfs.sh

The startup succeeds; here is the output and the process list:

[shaozhiqi@hadoop102 hadoop-3.1.2]$ start-dfs.sh
Starting namenodes on [hadoop102]
Starting datanodes
hadoop103: WARNING: /opt/module/hadoop-3.1.2/logs does not exist. Creating.
hadoop104: WARNING: /opt/module/hadoop-3.1.2/logs does not exist. Creating.
Starting secondary namenodes [hadoop104]
[shaozhiqi@hadoop102 hadoop-3.1.2]$ jps
3088 Master
3168 Worker
4452 Jps
3366 CoarseGrainedExecutorBackend
4200 DataNode
4076 NameNode
3773 GetConf
[shaozhiqi@hadoop102 hadoop-3.1.2]$

We will start YARN later, when we submit jobs to YARN.

4. Check the HDFS NameNode web UI

(screenshot: HDFS NameNode web UI)

5. Create the HDFS directory matching the one configured in spark-defaults.conf above

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ hadoop fs -mkdir /directory
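Besides the web UI, the new directory can be checked straight from the command line:

# List the HDFS root; /directory should now be there
hadoop fs -ls /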

Check the NameNode UI again:

(screenshot: NameNode UI showing the newly created /directory)

6. Edit spark-env.sh again to add the history server options

[shaozhiqi@hadoop102 conf]$ vi spark-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_211
export SPARK_MASTER_HOST=hadoop102
export SPARK_MASTER_PORT=7077
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=4000 -Dspark.history.retainedApplications=3 -Dspark.history.fs.logDirectory=hdfs://hadoop102:9000/directory"
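For reference, what the three -D options set above control (standard Spark history-server properties, values as configured here):

# spark.history.ui.port=4000             -> port the History Server web UI listens on
# spark.history.retainedApplications=3   -> how many applications' UI data the server keeps cached in memory
# spark.history.fs.logDirectory=hdfs://hadoop102:9000/directory
#                                         -> HDFS directory event logs are read from; must match spark.eventLog.dir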

7. Sync spark-env.sh

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ testxsync conf/spark-env.sh

8. Run a Spark job

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://hadoop102:7077 \
> --executor-memory 1G \
> --total-executor-cores 2 \
> ./examples/jars/spark-examples_2.11-2.4.3.jar \
> 100

9. The Spark UI now shows our application

(screenshot: Spark master UI listing the Spark Pi application)

Click the Spark Pi application; since the job is still running, the link jumps straight to the running application's UI.

(screenshot: the running Spark Pi application's UI)

10. The job does not finish for a long time, so check the logs

19/07/01 07:15:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Could the cluster be out of resources? A spark-shell was still attached, and on a standalone cluster a shell grabs all available cores by default, so the new job could not get any.
From the Spark UI, kill the spark-shell and our stuck Spark Pi, then submit the Spark Pi job again on its own (a sketch of a resource-capped spark-shell follows the command below):

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://hadoop102:7077 \
> --executor-memory 1G \
> --total-executor-cores 2 \
> ./examples/jars/spark-examples_2.11-2.4.3.jar \
> 100
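As referenced above, if a spark-shell needs to stay open alongside batch jobs, a sketch of starting it with capped resources so it does not take every core (the numbers are only illustrative):

bin/spark-shell \
  --master spark://hadoop102:7077 \
  --total-executor-cores 1 \
  --executor-memory 512m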

(screenshot: the re-submitted Spark Pi application, finished)

This time it finished in fifty-odd seconds.
Once the job has ended, though, visiting Spark's port 4000 (the history UI port configured above) fails.

11. Start the history server to make finished applications viewable

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /opt/module/spark-2.4.3-bin-hadoop2.7/logs/spark-shaozhiqi-org.apache.spark.deploy.history.HistoryServer-1-hadoop102.out
[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ jps

We can see the extra HistoryServer process:

[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$ jps
3505 Worker
4708 HistoryServer
4775 Jps
4027 DataNode
3437 Master
3901 NameNode
[shaozhiqi@hadoop102 spark-2.4.3-bin-hadoop2.7]$
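Besides jps, a quick sanity check that the UI is actually serving on the port configured above (assuming curl is available on the node):

# Expect an HTTP 200 once the History Server is up
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop102:4000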

12. Visit the history UI: success

(screenshot: Spark History Server UI on port 4000)

13. Check whether the event log files were generated in HDFS

The file has been generated, so the history server is configured correctly.

(screenshot: event log file generated under /directory in HDFS)
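The same check can be done from the command line:

# One event log file per completed application should show up here
hadoop fs -ls /directory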
