1. Introduction

Our company uses Hadoop to build its data warehouse, which inevitably means writing HiveQL, and during ETL, speed becomes an unavoidable problem. I have personally watched a join over a few tables run for an hour. That may sound tolerable, but when the ETL pipeline runs many such steps, the hours add up, so HiveQL optimization is not optional.

Note: this article only covers points worth watching at the SQL level; it does not go into Hadoop, MapReduce, and so on. For how Hive compiles SQL into MapReduce, see: http://tech.meituan.com/hive-sql-to-mapreduce.html

2. Preparing the Data

Suppose we have two tables.

Sights table: sight, about 120,000 rows, with this schema:

hive> desc sight;
OK
area string None
city string None
country string None
county string None
id string None
name string None
region string None

Sight order detail table: order_sight, about 10.4 million rows, with this schema:

hive> desc order_sight;
OK
create_time string None
id string None
order_id string None
sight_id bigint None

3. Analysis

3.1 The WHERE clause

Suppose we want every order id for sight id 9718 on 2015-10-10. A straightforward way to write the SQL is:

hive> select s.id,o.order_id from sight s left join order_sight o on o.sight_id=s.id where s.id=9718 and o.create_time = '2015-10-10';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1434099279301_3562174, Tracking URL = http://l-hdpm4.data.cn5.qunar.com:9981/proxy/application_1434099279301_3562174/
Kill Command = /home/q/hadoop/hadoop-2.2./bin/hadoop job -kill job_1434099279301_3562174
Hadoop job information for Stage-1: number of mappers: 8; number of reducers: 1
[map/reduce progress lines elided; Cumulative CPU grew from 4.73 sec to 52.79 sec]
MapReduce Total cumulative CPU time: 52 seconds 790 msec
Ended Job = job_1434099279301_3562174
MapReduce Jobs Launched:
Job 0: Map: 8  Reduce: 1  Cumulative CPU: 52.79 sec  HDFS Read: ...  HDFS Write: ...  SUCCESS
Total MapReduce CPU Time Spent: 52 seconds 790 msec
OK
Time taken: 52.068 seconds, Fetched: ... row(s)

That run takes 52 seconds. Now rewrite the SQL like this:

hive> select s.id,o.order_id from sight s left join (select order_id,sight_id from order_sight where create_time = '2015-10-10') o on o.sight_id=s.id where s.id=9718;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1434099279301_3562218, Tracking URL = http://l-hdpm4.data.cn5.qunar.com:9981/proxy/application_1434099279301_3562218/
Kill Command = /home/q/hadoop/hadoop-2.2./bin/hadoop job -kill job_1434099279301_3562218
Hadoop job information for Stage-1: number of mappers: 8; number of reducers: 1
[map/reduce progress lines elided; Cumulative CPU grew from 2.24 sec to 34.35 sec]
MapReduce Total cumulative CPU time: 34 seconds 350 msec
Ended Job = job_1434099279301_3562218
MapReduce Jobs Launched:
Job 0: Map: 8  Reduce: 1  Cumulative CPU: 34.35 sec  HDFS Read: ...  HDFS Write: ...  SUCCESS
Total MapReduce CPU Time Spent: 34 seconds 350 msec
OK
Time taken: 43.709 seconds, Fetched: ... row(s)

This version takes 43 seconds, noticeably faster. The point is not merely that it is roughly 20% faster (I ran the comparison several times, and this run happens to show the smallest gap), but why it is faster.

Comparing the two statements, the only difference is that in the second one I pushed the filter on the left-joined table into a subquery. The execution profile changes accordingly: the reduce time of the second query is clearly shorter than that of the first.

The reason is that both queries are decomposed into 8 map tasks and 1 reduce task. If the filter on the joined table is left in the outer WHERE clause, the join and filtering work lands in the reduce phase; the single reduce task inevitably takes far longer than the 8 parallel map tasks, inflating the total runtime. Pushing the filter into the subquery lets the map tasks discard irrelevant rows before the join, so much less data reaches the reducer.

Conclusion: with an outer join, if the filter on the secondary table is placed in the WHERE clause, the full-table join happens first and the filter is applied only afterwards; push the filter into a subquery so it is applied before the join.
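The effect can be illustrated outside Hive. Below is a minimal sketch using Python's sqlite3 standard-library module; the table and column names come from the article, but the row values are invented for illustration. It shows the two query shapes from above returning the same rows for sight 9718:

```python
import sqlite3

# In-memory stand-in for the two Hive tables (schemas from the article,
# row values made up for this example).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sight (id TEXT, name TEXT);
CREATE TABLE order_sight (order_id TEXT, sight_id TEXT, create_time TEXT);
INSERT INTO sight VALUES ('9718', 'Some Sight'), ('1234', 'Other Sight');
INSERT INTO order_sight VALUES
  ('o1', '9718', '2015-10-10'),
  ('o2', '9718', '2015-10-11'),
  ('o3', '1234', '2015-10-10');
""")

# Version 1: filter the joined table in the outer WHERE clause.
# The engine must produce the full join result before filtering.
q1 = """
SELECT s.id, o.order_id
FROM sight s LEFT JOIN order_sight o ON o.sight_id = s.id
WHERE s.id = '9718' AND o.create_time = '2015-10-10'
"""

# Version 2: push the filter into a subquery, so the join only
# ever sees rows that can survive the filter.
q2 = """
SELECT s.id, o.order_id
FROM sight s
LEFT JOIN (SELECT order_id, sight_id FROM order_sight
           WHERE create_time = '2015-10-10') o
  ON o.sight_id = s.id
WHERE s.id = '9718'
"""

r1 = conn.execute(q1).fetchall()
r2 = conn.execute(q2).fetchall()
print(r1)  # [('9718', 'o1')]
print(r2)  # [('9718', 'o1')]
assert r1 == r2
```

One caveat: the two forms are only guaranteed to return the same rows when, as here, a matching order exists. For a sight with no orders on that day, version 1's WHERE condition on o.create_time discards the NULL-extended row entirely (effectively turning the left join into an inner join), while version 2 keeps the sight row with a NULL order_id.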
