Replies (4)
Before Spark, Hive had two execution backends: Tez and MapReduce. That is, the engine that ultimately executes your HiveQL statements could be either Tez or MapReduce.
Hive on Spark simply adds Spark as a third choice of execution backend; the three coexist as parallel alternatives.
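To make the backend switch concrete: in the Hive CLI or beeline you just run `SET hive.execution.engine=spark;` (or `mr`, or `tez`) before your query. Below is a minimal Scala sketch that does the same thing through the HiveServer2 JDBC driver; the host/port and the table name some_table are placeholders, not anything from the thread.

```scala
import java.sql.DriverManager

object HiveEngineSwitch {
  def main(args: Array[String]): Unit = {
    // Register the HiveServer2 JDBC driver.
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    // Placeholder connection string; point it at your own HiveServer2.
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default")
    val stmt = conn.createStatement()

    // Choose the execution backend for this session: "mr", "tez", or "spark".
    stmt.execute("SET hive.execution.engine=spark")

    // The same HiveQL statement is now compiled into a Spark job
    // rather than MapReduce or Tez stages.
    val rs = stmt.executeQuery("SELECT count(*) FROM some_table")
    while (rs.next()) println(rs.getLong(1))

    conn.close()
  }
}
```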
The following is quoted from the official Hive documentation:
Here are the main motivations for enabling Hive to run on Spark:
1. Spark user benefits: This feature is very valuable to users who are already using Spark for
other data processing and machine learning needs. Standardizing on one execution backend
is convenient for operational management, and makes it easier to develop expertise to debug
issues and make enhancements.
2. Greater Hive adoption: Following the previous point, this brings Hive into the Spark user base
as a SQL on Hadoop option, further increasing Hive’s adoption.
3. Performance: Hive queries, especially those involving multiple reducer stages, will run faster,
thus improving user experience as Tez does.
It is not a goal for the Spark execution backend to replace Tez or MapReduce. It is healthy for the
Hive project for multiple backends to coexist. Users have a choice whether to use Tez, Spark or
MapReduce. Each has different strengths depending on the use case. And the success of Hive
does not completely depend on the success of either Tez or Spark.
The second thing you mention is Spark on Hive:
You can think of it as wrapping Hive in a Spark-based user interface. That is, through Spark you can work with Hive directly: Hive tables, Hive UDFs, HiveQL, and so on all work as expected.
This way, legacy data already sitting in Hive can continue to be used through Spark SQL.
For new data that you still want to keep in Hive as your data warehouse, using Spark SQL directly lets you benefit from Spark's RDD-oriented optimizations, which performs better than Hive on Spark.
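A minimal sketch of this "Spark as a user interface over Hive" setup, using the Spark 1.x-era HiveContext API (in Spark 2.x this became SparkSession with enableHiveSupport); the table name legacy_table is a placeholder:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object SparkOnHiveExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkOnHiveExample"))

    // HiveContext picks up hive-site.xml from the classpath and talks to the
    // existing Hive metastore, so legacy tables are visible unchanged.
    val hive = new HiveContext(sc)

    // Plain HiveQL against an existing Hive table ("legacy_table" is a
    // placeholder); Hive UDFs can be used inside the query as usual.
    val result = hive.sql(
      "SELECT category, count(*) AS cnt FROM legacy_table GROUP BY category")

    result.collect().foreach(println)
    sc.stop()
  }
}
```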
So-called Hive on Spark is just a new feature of the Hive project; it has little to do with development on the Spark side.
Regarding the difference between the 1 and 2 you listed:
1. This is the so-called Hive on Spark: swapping Hive's execution engine for Spark. As is well known, this engine can also be set to mr (the MRv1 era) or tez (the default engine in Hive 0.13, which performs better), so the newly added spark option just means Hive submits its execution plan to a Spark cluster.
2. This is a feature of Spark SQL: Hive can be used as a data source. Besides reading files straight off HDFS with textFile, I can also query directly into an RDD with HiveQL, which is extremely convenient for fetching data stored in Hive tables (see the sketch after this reply). This has been supported for ages; back when I used Spark 1.2 I could already pull data in from Hive, and the latest 1.5 adds more interfaces that make it even handier.
So, simply put: in 1 Hive invokes Spark jobs, and in 2 Spark invokes Hive.
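A sketch of the two access paths named in point 2, written against the Spark 1.3+-style API; the HDFS path, table, and column names are all placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object TwoWaysToRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("TwoWaysToRead"))

    // Path 1: read raw files from HDFS and parse them yourself.
    val lines  = sc.textFile("hdfs:///data/events/2015-10-01")
    val parsed = lines.map(_.split("\t"))

    // Path 2: let the Hive metastore provide the schema and file locations,
    // and get the query result back as an RDD of Rows.
    val hive = new HiveContext(sc)
    val rows = hive
      .sql("SELECT user_id, event FROM events WHERE dt = '2015-10-01'")
      .rdd

    println(s"raw lines: ${parsed.count()}, hive rows: ${rows.count()}")
    sc.stop()
  }
}
```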
1. Hive on Spark: mainly a performance optimization over MR (it is not better in every scenario).
2. This just treats Hive as a data source, no different from accessing Hive from any other language.
There is actually one more case: accessing the underlying Hive storage from within Spark SQL. That looks more like Spark on Hive, but it too only treats Hive as a data source; because it is tucked away in Spark SQL's configuration, the integration merely looks tighter.
The former is interactive, the latter application-style (I wanted to say "non-interactive", but that didn't sound professional enough →_→).