First, prepare three machines and plan to assign the following roles:
hadoop1   hadoop2   hadoop3
zk        zk        zk
hadoop    hadoop    hadoop
storm     storm     storm
kafka     kafka     kafka
flume     flume     flume
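The hostnames hadoop1, hadoop2, and hadoop3 must resolve on every machine. A minimal /etc/hosts sketch (the IP addresses are assumptions; substitute your own):

192.168.1.101 hadoop1   # assumed address
192.168.1.102 hadoop2   # assumed address
192.168.1.103 hadoop3   # assumed address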
1. Extract the software
tar -zxvf apache-storm-1.2.3.tar.gz -C apps/
mv apps/apache-storm-1.2.3 apps/storm   # rename so it matches the paths used below
2. Create the data directory
mkdir -p apps/storm/data
3. Edit the configuration
Go to the conf directory and edit storm.yaml (note that YAML requires a space after each leading "-"):
vi storm.yaml
storm.zookeeper.servers:                  # ZooKeeper ensemble Storm coordinates through
  - "hadoop1"
  - "hadoop2"
  - "hadoop3"
nimbus.seeds: ["hadoop1"]                 # seed host(s) for locating Nimbus
storm.local.dir: "/root/apps/storm/data"  # local state directory created in step 2
supervisor.slots.ports:                   # four ports = up to four workers per supervisor
  - 6700
  - 6701
  - 6702
  - 6703
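Storm keeps its coordination state in ZooKeeper, so the zk role listed above must already be running on all three nodes. A quick check before continuing (a sketch, assuming ZooKeeper's bin directory is on your PATH):

zkServer.sh status   # expect Mode: leader on one node and Mode: follower on the others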
4. Add environment variables
vi /etc/profile
export STORM_HOME=/root/apps/storm
export PATH=$PATH:$STORM_HOME/bin
source /etc/profile
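A quick sanity check that the variables took effect:

echo $STORM_HOME   # should print /root/apps/storm
storm version      # should print the 1.2.3 version banner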
5. Sync to the other nodes
Copy the Storm directory and the environment variables to the other nodes, and remember to source /etc/profile on each of them; see the sketch below.
scp ....hadoop2,hadoop3
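Spelled out, the copy might look like this (a sketch; it assumes password-free root SSH and the same /root/apps layout on every node):

scp -r /root/apps/storm hadoop2:/root/apps/
scp -r /root/apps/storm hadoop3:/root/apps/
scp /etc/profile hadoop2:/etc/profile
scp /etc/profile hadoop3:/etc/profile
# then log in to hadoop2 and hadoop3 and run: source /etc/profile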
6. Start and test
bin/storm nimbus       [start on all three machines]
bin/storm supervisor   [start on all three machines]
bin/storm ui           [start on one machine to serve the web UI]
You can then open the web UI at http://hadoop1:8080
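The commands above run in the foreground. For anything longer than a quick test, one common approach is to background them and discard console output, since the real logs land in $STORM_HOME/logs (a sketch, run from $STORM_HOME):

nohup bin/storm nimbus > /dev/null 2>&1 &       # on the nimbus node(s)
nohup bin/storm supervisor > /dev/null 2>&1 &   # on all three nodes
nohup bin/storm ui > /dev/null 2>&1 &           # on the node serving the web UI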
Source: oschina
Link: https://my.oschina.net/wyn365/blog/3198213