Spark Cluster Setup

Submitted by 倖福魔咒の on 2019-11-28 16:20:25

Building a Standalone Cluster

Basic Environment Preparation

  • Physical resources: three CentOS 6.10 64-bit hosts (CentOSA/B/C), 2 GB RAM each


Hostname IP
CentOSA 192.168.221.136
CentOSB 192.168.221.137
CentOSC 192.168.221.138


  • Node-to-host mapping
Host    Node services
CentOSA NameNode, ZKFC, Zookeeper, JournalNode, DataNode, Master, Worker, Broker
CentOSB NameNode, ZKFC, Zookeeper, JournalNode, DataNode, Master, Worker, Broker
CentOSC Zookeeper, JournalNode, DataNode, Master, Worker, Broker
  • Host-to-IP mapping
[root@CentOSX ~]# vi  /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.221.136 CentOSA
192.168.221.137 CentOSB
192.168.221.138 CentOSC
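Before moving on, you can sanity-check the hostname-to-IP mapping from the table above with a small awk lookup. This is a local dry-run against a throwaway copy of the file; on the cluster you would point it at /etc/hosts:

```shell
# Look up the IP for a hostname in an /etc/hosts-style file.
resolve() { awk -v h="$2" '$2 == h {print $1}' "$1"; }

# Throwaway copy of the mapping, just for this dry-run.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
192.168.221.136 CentOSA
192.168.221.137 CentOSB
192.168.221.138 CentOSC
EOF

resolve "$HOSTS" CentOSB   # prints 192.168.221.137
```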
  • Set CentOS process and open-file limits
[root@CentOSX ~]# vi /etc/security/limits.conf
* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800
  • Disable the firewall
[root@CentOSX ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@CentOSX ~]# chkconfig iptables off
  • Synchronize cluster clocks
[root@CentOSX ~]# yum install -y ntp
[root@CentOSX ~]# ntpdate ntp.ubuntu.com
[root@CentOSX ~]# clock -w
  • Passwordless SSH authentication
[root@CentOSX ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
f0:5b:aa:9d:53:1f:4b:22:de:b7:aa:d9:f1:d1:76:4e root@CentOSA
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|      .          |
|       o         |
|        S .      |
|        .+o o.   |
|       .o+.+.oo E|
|       oo+.o+o + |
|      . =oooo.  .|
+-----------------+
[root@CentOSX ~]# ssh-copy-id CentOSA
[root@CentOSX ~]# ssh-copy-id CentOSB
[root@CentOSX ~]# ssh-copy-id CentOSC
  • Install the JDK
[root@CentOSX ~]# rpm -ivh jdk-8u191-linux-x64.rpm
[root@CentOSX ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOSX ~]# source .bashrc
[root@CentOSX ~]# java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)

Build and Start the Zookeeper Cluster

[root@CentOSX ~]# tar -zxf zookeeper-3.4.6.tar.gz -C /usr/
[root@CentOSX ~]# mkdir /root/zkdata
[root@CentOSA ~]# echo 1 >> /root/zkdata/myid
[root@CentOSB ~]# echo 2 >> /root/zkdata/myid
[root@CentOSC ~]# echo 3 >> /root/zkdata/myid
[root@CentOSX ~]# vi /usr/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
dataDir=/root/zkdata
clientPort=2181
initLimit=5
syncLimit=2
server.1=CentOSA:2887:3887
server.2=CentOSB:2887:3887
server.3=CentOSC:2887:3887
[root@CentOSX ~]# /usr/zookeeper-3.4.6/bin/zkServer.sh start zoo.cfg
[root@CentOSX ~]# /usr/zookeeper-3.4.6/bin/zkServer.sh status zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower|leader    (two nodes report "follower", one reports "leader")
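Why three servers? A Zookeeper ensemble stays available only while a strict majority of its servers is alive, so the required quorum for n servers is floor(n/2) + 1. A quick sketch of the arithmetic:

```shell
# Majority quorum size for an n-server Zookeeper ensemble: floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2 -> a 3-node ensemble survives 1 failure
quorum 5   # prints 3 -> a 5-node ensemble survives 2 failures
```

This is why ensembles are sized with odd numbers: 4 servers tolerate no more failures than 3.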

Kafka集群

[root@CentOSX ~]# tar -zxf kafka_2.11-2.1.0.tgz -C /usr/
[root@CentOSX ~]# vi /usr/kafka_2.11-2.1.0/config/server.properties

############################# Server Basics #############################
broker.id=[0|1|2]
############################# Socket Server Settings #############################
listeners=PLAINTEXT://CentOS[A|B|C]:9092
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/usr/kafka-logs
############################# Zookeeper #############################
zookeeper.connect=CentOSA:2181,CentOSB:2181,CentOSC:2181
[root@CentOSX ~]# cd /usr/kafka_2.11-2.1.0/
[root@CentOSX kafka_2.11-2.1.0]# ./bin/kafka-server-start.sh -daemon config/server.properties
  • Verify that Kafka is working
[root@CentOSA kafka_2.11-2.1.0]# ./bin/kafka-topics.sh --zookeeper CentOSA:2181 --create --topic topic01 --partitions 3 --replication-factor 3
Created topic "topic01".
[root@CentOSA kafka_2.11-2.1.0]# ./bin/kafka-console-consumer.sh --bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01
[root@CentOSC kafka_2.11-2.1.0]# ./bin/kafka-console-producer.sh --broker-list CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01
>this is a demo
>

Check that the consumer on CentOSA prints "this is a demo".
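The topic above was created with 3 partitions; keyed messages are routed to a partition by hashing the key modulo the partition count, so the same key always lands on the same partition. Kafka's DefaultPartitioner actually uses murmur2; the toy hash below is only illustrative of the idea:

```shell
# Toy sketch of keyed partition assignment: partition = hash(key) % partitions.
# (Kafka's DefaultPartitioner really uses murmur2; this hash is illustrative.)
partition_for() {
    local key=$1 parts=$2 h=0 i c
    for ((i = 0; i < ${#key}; i++)); do
        printf -v c '%d' "'${key:i:1}"   # char -> ASCII code
        h=$(( (h * 31 + c) % parts ))
    done
    echo "$h"
}

partition_for "user-42" 3   # same key -> same partition, every time
```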

Building HDFS-HA

  • Extract Hadoop and configure HADOOP_HOME
[root@CentOSX ~]# tar -zxf hadoop-2.9.2.tar.gz -C /usr/
[root@CentOSX ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
[root@CentOSX ~]# source .bashrc


  • Configure core-site.xml
<!-- NameNode service (nameservice) ID -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
<property>
    <name>fs.trash.interval</name>
    <value>30</value>
</property>
<!-- Rack-awareness script -->
<property>
    <name>net.topology.script.file.name</name>
    <value>/usr/hadoop-2.9.2/etc/hadoop/rack.sh</value>
</property>
<!-- Zookeeper quorum -->
<property>
    <name>ha.zookeeper.quorum</name>
    <value>CentOSA:2181,CentOSB:2181,CentOSC:2181</value>
</property>
<!-- Fencing method and SSH private-key location -->
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>

Configure the rack script

[root@CentOSX ~]# vi /usr/hadoop-2.9.2/etc/hadoop/rack.sh
#!/usr/bin/env bash
# Resolve each hostname/IP argument to a rack using topology.data.
while [ $# -gt 0 ] ; do
    nodeArg=$1
    exec < /usr/hadoop-2.9.2/etc/hadoop/topology.data
    result=""
    while read line ; do
        ar=( $line )
        if [ "${ar[0]}" = "$nodeArg" ] ; then
            result="${ar[1]}"
        fi
    done
    shift
    if [ -z "$result" ] ; then
        echo -n "/default-rack "
    else
        echo -n "$result "
    fi
done
[root@CentOSX ~]# chmod u+x /usr/hadoop-2.9.2/etc/hadoop/rack.sh
[root@CentOSX ~]# vi /usr/hadoop-2.9.2/etc/hadoop/topology.data
192.168.221.136 /rack01
192.168.221.137 /rack02
192.168.221.138 /rack03
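Before wiring the script into core-site.xml, you can dry-run the lookup logic locally. The sketch below uses the same read-and-match loop as rack.sh, but reads a throwaway temp file standing in for the real topology.data:

```shell
# Local dry-run of the rack lookup: same loop as rack.sh, but reading a
# throwaway topology file instead of the real one under etc/hadoop/.
TOPO=$(mktemp)
cat > "$TOPO" <<'EOF'
192.168.221.136 /rack01
192.168.221.137 /rack02
192.168.221.138 /rack03
EOF

lookup() {
    local nodeArg=$1 result=""
    while read -r ip rack; do
        [ "$ip" = "$nodeArg" ] && result=$rack
    done < "$TOPO"
    echo -n "${result:-/default-rack}"
}

lookup 192.168.221.136; echo   # /rack01
lookup 10.0.0.99; echo         # /default-rack -> unknown hosts fall back
```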

  • Configure hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<!-- Enable automatic failover -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<!-- Resolve the nameservice declared in core-site.xml -->
<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>CentOSA:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>CentOSB:9000</value>
</property>
<!-- JournalNode (shared edit log) quorum -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://CentOSA:8485;CentOSB:8485;CentOSC:8485/mycluster</value>
</property>
<!-- Failover proxy provider implementation -->
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

  • Configure slaves
CentOSA
CentOSB
CentOSC

  • Start HDFS (initial cluster startup)
[root@CentOSX ~]# hadoop-daemon.sh start journalnode   (wait about 10 seconds)
[root@CentOSA ~]# hdfs namenode -format
[root@CentOSA ~]# hadoop-daemon.sh start namenode
[root@CentOSB ~]# hdfs namenode -bootstrapStandby
[root@CentOSB ~]# hadoop-daemon.sh start namenode
# Register the NameNode info in Zookeeper; run this on either CentOSA or CentOSB, once
[root@CentOSA|B ~]# hdfs zkfc -formatZK
[root@CentOSA ~]# hadoop-daemon.sh start zkfc
[root@CentOSB ~]# hadoop-daemon.sh start zkfc
[root@CentOSX ~]# hadoop-daemon.sh start datanode

Building the Spark Cluster

[root@CentOSX ~]# tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C /usr/
[root@CentOSX ~]# mv /usr/spark-2.4.3-bin-without-hadoop/ /usr/spark-2.4.3
[root@CentOSX ~]# cd /usr/spark-2.4.3/conf/
[root@CentOSX conf]# mv spark-env.sh.template spark-env.sh
[root@CentOSX conf]# mv slaves.template slaves
[root@CentOSX conf]# vi slaves
CentOSA
CentOSB
CentOSC
[root@CentOSX conf]# vi spark-env.sh

SPARK_WORKER_CORES=4
SPARK_WORKER_MEMORY=2g
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
SPARK_DIST_CLASSPATH=$(hadoop classpath)
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=CentOSA:2181,CentOSB:2181,CentOSC:2181 -Dspark.deploy.zookeeper.dir=/spark"

export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH
export SPARK_DAEMON_JAVA_OPTS

  • Start the Spark services
[root@CentOSX conf]# cd /usr/spark-2.4.3/
[root@CentOSX spark-2.4.3]# ./sbin/start-master.sh
[root@CentOSX spark-2.4.3]# ./sbin/start-slave.sh spark://CentOSA:7077,CentOSB:7077,CentOSC:7077

  • Test the Spark environment
[root@CentOSA spark-2.4.3]# ./bin/spark-shell --master spark://CentOSA:7077,CentOSB:7077,CentOSC:7077 --total-executor-cores 6
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://CentOSA:4040
Spark context available as 'sc' (master = spark://CentOSA:7077,CentOSB:7077,CentOSC:7077, app id = app-20190708162400-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
