Hive on Spark: Notes on Setting Up a Pseudo-Distributed Environment

Submitted by 我只是一个虾纸丫 on 2019-11-29 21:29:28

When entering the Hive CLI, the following prompt appears:
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive uses MapReduce as its execution engine by default (Hive on MR). Hive can also run on Tez or Spark, known as Hive on Tez and Hive on Spark respectively. Because MapReduce writes all intermediate results to disk while Spark keeps them in memory, Spark is generally much faster than MapReduce, and Hive on Spark is correspondingly faster than Hive on MR. To compare the speed of Hive on Spark and Hive on MR, a Spark cluster has to be installed on the machines that already run the Hadoop cluster (Spark sits on top of Hadoop, so Hadoop must be installed first, because Spark uses HDFS, YARN, and other Hadoop services), and Hive's execution engine is then set to Spark.
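Once both engines are configured (as done later in this walkthrough), the execution engine can also be switched per session from the Hive CLI, which makes the MR vs. Spark comparison easy to run on the same query. A minimal sketch ("my_table" is just a placeholder table name):

# Run the same aggregation under both engines and compare the elapsed time
[root@node222 ~]# hive -e "set hive.execution.engine=mr;    select count(1) from my_table;"
[root@node222 ~]# hive -e "set hive.execution.engine=spark; select count(1) from my_table;"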
Spark has three deployment modes:
1. Spark on YARN
2. Standalone Mode
3. Spark on Mesos
Hive on Spark supports the Spark on YARN mode by default, and that is the mode used in this deployment. Spark on YARN simply means using YARN as Spark's resource manager, and it comes in two flavors: Cluster mode and Client mode.
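For reference, the two modes differ only in where the driver runs: in Client mode the driver runs in the submitting process, in Cluster mode it runs inside a YARN container. A hedged sketch using the SparkPi example bundled with Spark (the exact examples jar path depends on your build):

# Client mode: the driver runs locally, handy for interactive use and debugging
spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode client lib/spark-examples-*.jar 10
# Cluster mode: the driver runs inside a YARN container on the cluster
spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode cluster lib/spark-examples-*.jar 10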

Base environment

CentOS 7
JDK 1.8
Pseudo-distributed hadoop-2.7.7 cluster
hive-2.1.1 (Hive on MR already working)
maven-3.5.4
scala-2.11.6
The build machine must be able to reach the Internet
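Before moving on, it helps to confirm that each prerequisite is on the PATH and at the expected version, for example:

[root@node222 ~]# java -version
[root@node222 ~]# mvn -version
[root@node222 ~]# scala -version
[root@node222 ~]# hadoop version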

Building Spark

For Hive on Spark, the Spark build you use must not contain the Hive jars; the Hive on Spark wiki states: "Note that you must have a version of Spark which does not include the Hive jars." The pre-built packages on the Spark download site all bundle Hive, so you need to download the source and build Spark yourself without the Hive profile.
Hive and Spark also have version compatibility requirements; choose a pairing from the compatibility table on the wiki. This walkthrough uses hive-2.1.1 with spark-1.6.3. There is no hard constraint on the Hadoop version; just keep the major versions consistent.
Hive on Spark wiki:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

Download the spark-1.6.3 source from:
http://spark.apache.org/downloads.html

Before building, make sure the JDK, Maven, and Scala listed under the base environment are installed and that their environment variables are configured in /etc/profile.
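A sketch of the corresponding /etc/profile entries, assuming everything is installed under /usr/local (the MAVEN_HOME path in particular is an assumption; adjust to your actual locations):

# append to /etc/profile, then run: source /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_121
export MAVEN_HOME=/usr/local/maven-3.5.4
export SCALA_HOME=/usr/local/scala-2.11.6
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$SCALA_HOME/bin:$PATH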

Building the Spark source

Unpack the source tarball, change into the extracted source directory, and run the build command given on the Hive wiki to produce the spark-1.6.3-bin-hadoop2-without-hive.tgz package:

[root@node222 spark-1.6.3]# ./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.4,parquet-provided"

After a long build (how long depends on the build server's resources and network), the build completes successfully and the spark-1.6.3-bin-hadoop2-without-hive.tgz package is generated in the source directory.

Installing and configuring Spark

Unpack spark-1.6.3-bin-hadoop2-without-hive.tgz into /usr/local/ and rename the extracted directory to spark-1.6.3.
Configure the environment variables and make them take effect:

export SPARK_HOME=/usr/local/spark-1.6.3
export SCALA_HOME=/usr/local/scala-2.11.6
export PATH=.:$SPARK_HOME/bin:$SCALA_HOME/bin:$PATH
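The unpack, rename, and reload steps might look like this (assuming the tarball is in the current directory and extracts to spark-1.6.3-bin-hadoop2-without-hive):

[root@node222 ~]# tar -zxf spark-1.6.3-bin-hadoop2-without-hive.tgz -C /usr/local/
[root@node222 ~]# mv /usr/local/spark-1.6.3-bin-hadoop2-without-hive /usr/local/spark-1.6.3
[root@node222 ~]# source /etc/profile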

Configuring spark-env.sh

Rename spark-env.sh.template to spark-env.sh and append the following to the end of the file:

[root@node222 spark-1.6.3]# mv conf/spark-env.sh.template  conf/spark-env.sh
export SCALA_HOME=/usr/local/scala-2.11.6
export JAVA_HOME=/usr/local/jdk1.8.0_121
export HADOOP_HOME=/usr/local/hadoop-2.7.7
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/usr/local/spark-1.6.3
export SPARK_MASTER_IP=node222
export SPARK_EXECUTOR_MEMORY=512M
# Without this line, startup fails with: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop-2.7.7/bin/hadoop  classpath)

Configuring spark-defaults.conf

Rename spark-defaults.conf.template to spark-defaults.conf and append the following to the end of the file:

 spark.master                     spark://node222:7077
 spark.eventLog.enabled           true
 spark.eventLog.dir               hdfs://node222:9000/user/spark-log
 spark.serializer                 org.apache.spark.serializer.KryoSerializer
 spark.driver.memory              512M
 spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
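Because spark.eventLog.dir points at HDFS, the directory should exist before any job tries to write event logs to it, for example:

[root@node222 spark-1.6.3]# hdfs dfs -mkdir -p /user/spark-log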

Configuring YARN

[root@node222 spark-1.6.3]# vi /usr/local/hadoop-2.7.7/etc/hadoop/yarn-site.xml
    <property>
       <name>yarn.resourcemanager.scheduler.class</name>
       <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
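After modifying yarn-site.xml, restart YARN so the scheduler change takes effect, for example:

[root@node222 spark-1.6.3]# /usr/local/hadoop-2.7.7/sbin/stop-yarn.sh
[root@node222 spark-1.6.3]# /usr/local/hadoop-2.7.7/sbin/start-yarn.sh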

Copying the Spark dependency jar into Hive's lib directory

[root@node222 spark-1.6.3]# cp lib/spark-assembly-1.6.3-hadoop2.4.0.jar   /usr/local/hive-2.1.1/lib/

Configuring hive-site.xml

Add the following properties, adjusting them to match your actual environment:

  <!--hive on spark or spark on yarn -->
  <property>
    <name>hive.execution.engine</name>
    <value>spark</value>
  </property>
  <property>
    <name>spark.home</name>
    <value>/usr/local/spark-1.6.3</value>
  </property>
  <property>
    <name>spark.master</name>
    <value>spark://node222:7077</value>
  </property>
  <property>
    <name>spark.submit.deployMode</name>
    <value>client</value>
  </property>
  <property>
    <name>spark.eventLog.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.eventLog.dir</name>
    <value>hdfs://node222:9000/user/spark-log</value>
  </property>
  <property>
    <name>spark.serializer</name>
    <value>org.apache.spark.serializer.KryoSerializer</value>
  </property>
  <property>
    <name>spark.executor.memory</name>
    <value>512m</value>
  </property>
  <property>
    <name>spark.driver.memory</name>
    <value>512m</value>
  </property>
  <property>
    <name>spark.executor.extraJavaOptions</name>
    <value>-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"</value>
  </property>
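A quick sanity check that the new setting is picked up, before running a real query, is to echo it back from the Hive CLI:

# expected to print: hive.execution.engine=spark
[root@node222 ~]# hive -e "set hive.execution.engine;"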

Starting Spark

Before starting, make sure the underlying Hadoop services are already up and running.

[root@node222 spark-1.6.3]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-1.6.3/logs/spark-root-org.apache.spark.deploy.master.Master-1-node222.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node222.out
[root@node222 spark-1.6.3]# jps
91507 JobHistoryServer
122595 Jps
92178 HQuorumPeer
122374 Master
122486 Worker
86859 ResourceManager
92251 HMaster
92397 HRegionServer
86380 NameNode
86684 SecondaryNameNode
86959 NodeManager
86478 DataNode

Checking Spark through the web UI

http://192.168.0.222:8080/

Running Hive to verify Hive on Spark

[root@node222 spark-1.6.3]# hive

Logging initialized using configuration in jar:file:/usr/local/hive-2.1.1/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
hive> use default;
OK
Time taken: 1.247 seconds
hive> show tables;
OK
kylin_account
kylin_cal_dt
kylin_category_groupings
kylin_country
kylin_sales
Time taken: 0.45 seconds, Fetched: 15 row(s)
hive> select count(1) from kylin_sales;
Query ID = root_20181213152833_9ca6240f-7ead-4565-b21d-fb695259da3b
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = 15967d00-97a6-4705-9fa2-e7a2ef3c3798

Query Hive on Spark job[0] stages:
0
1

Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2018-12-13 15:28:53,906 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
2018-12-13 15:28:56,943 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
2018-12-13 15:28:59,966 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
2018-12-13 15:29:02,988 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
2018-12-13 15:29:04,000 Stage-0_0: 1/1 Finished Stage-1_0: 0(+1)/1
2018-12-13 15:29:05,014 Stage-0_0: 1/1 Finished Stage-1_0: 1/1 Finished
Status: Finished successfully in 21.17 seconds
OK
10000
Time taken: 31.752 seconds, Fetched: 1 row(s)

 
