1. Install Scala
1.1 Unpack (into /opt)
tar -zxvf scala-2.11.4.tgz
mv scala-2.11.4 scala
1.2 Configure environment variables
vi ~/.bashrc
export SCALA_HOME=/opt/scala
export PATH=$PATH:$SCALA_HOME/bin
source ~/.bashrc
1.3 Verify that Scala installed correctly
scala -version
1.4 Install Scala on nodes 2 and 3
Copy the Scala directory
scp -r scala 192.168.252.165:/opt
scp -r scala 192.168.252.166:/opt
Copy the environment-variable file (then run source ~/.bashrc on each node)
scp ~/.bashrc 192.168.252.165:~
scp ~/.bashrc 192.168.252.166:~
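The two copies above can be driven by a single loop. The sketch below is a dry run: the leading echo only prints each scp command instead of executing it (drop the echo to actually copy; the node IPs are the ones used in the steps above):

```shell
# Dry run: print the scp commands that would push Scala and the
# shell profile to each remote node.
NODES="192.168.252.165 192.168.252.166"
for node in $NODES; do
  echo scp -r /opt/scala "${node}:/opt"
  echo scp ~/.bashrc "${node}:~"
done
```

Removing the echo turns this into the real copy; remember to source ~/.bashrc on each node afterwards.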
2. Install Kafka
2.1 Unpack
tar -zxvf kafka_2.9.2-0.8.1.tgz
mv kafka_2.9.2-0.8.1 kafka
2.2 Configure Kafka
vi /opt/kafka/config/server.properties
Change:
broker.id=0 // a unique, increasing integer per broker: 0, 1, 2, ...
zookeeper.connect=192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181
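Taken together, the edits above leave a minimal server.properties along these lines (the port and log.dirs values are assumptions for illustration, not from the original notes; keep whatever the shipped file already has for them):

```properties
broker.id=0
port=9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181
```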
2.3 Install slf4j
Unpack slf4j-1.7.6.zip
Copy slf4j-nop-1.7.6.jar from it into Kafka's libs directory
3. Set up nodes 2 and 3
3.1 Copy the Kafka directory to nodes 2 and 3
cd /opt
scp -r kafka hadoop002:/opt
scp -r kafka hadoop003:/opt
3.2 Change broker.id on nodes 2 and 3
In server.properties on nodes 2 and 3, set broker.id to 1 and 2 respectively
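That edit can be scripted rather than done by hand. A minimal sketch, using a hypothetical set_broker_id helper and demonstrated on a scratch copy (point the helper at /opt/kafka/config/server.properties on the real nodes, e.g. over ssh):

```shell
# Hypothetical helper: rewrite the broker.id line in a properties file.
set_broker_id() {
  sed -i "s/^broker\.id=.*/broker.id=$2/" "$1"
}

# Demonstrate on a scratch copy rather than the live config:
printf 'broker.id=0\nport=9092\n' > /tmp/server.properties
set_broker_id /tmp/server.properties 1
grep '^broker.id=' /tmp/server.properties   # broker.id=1
```

On node 2 you would run set_broker_id against the real file with id 1, and on node 3 with id 2.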
4. Start the Kafka cluster
4.1 Run on each of the three machines:
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties &
(use the absolute config path so the command works from any directory)
4.2 Use jps to check that startup succeeded; each node should list a Kafka process:
hadoop001: Kafka
hadoop002: Kafka
hadoop003: Kafka
5. Verify the Kafka cluster
(If messages typed at the producer show up at the consumer, the cluster is working.)
Create a topic
/opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181 --topic TestTopic --replication-factor 1 --partitions 1 --create
Produce
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.252.164:9092,192.168.252.165:9092,192.168.252.166:9092 --topic TestTopic
Consume
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181 --topic TestTopic --from-beginning
6. Kafka troubleshooting
Fixing the Kafka "Unrecognized VM option 'UseCompressedOops'" error (this flag is typically unsupported on 32-bit or very old JVMs)
vi /opt/kafka/bin/kafka-run-class.sh
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi
Remove -XX:+UseCompressedOops from that line and restart Kafka.
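That edit can also be scripted with sed. The sketch below demonstrates it on a scratch file standing in for kafka-run-class.sh; on the real host, back up bin/kafka-run-class.sh first and point the sed at it:

```shell
# Scratch file standing in for bin/kafka-run-class.sh:
printf 'KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC"\n' > /tmp/kafka-run-class.sh

# Drop the unsupported flag in place (the pattern includes the trailing
# space so no double space is left behind):
sed -i 's/-XX:+UseCompressedOops //' /tmp/kafka-run-class.sh
cat /tmp/kafka-run-class.sh   # KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseParNewGC"
```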
Source: oschina
Link: https://my.oschina.net/u/2812496/blog/718274