1. Preparation
1.1 Give the virtual machine a hostname and configure /etc/hosts. If you plan to do joint development with a Windows machine, add the same domain mapping to the Windows hosts file.
# 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
# ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.241.128 master
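To set the hostname itself, a minimal sketch (assuming a systemd-based distribution such as CentOS 7, where hostnamectl is available; the ping simply confirms that the hosts mapping resolves):
[root@master ~]# hostnamectl set-hostname master
[root@master ~]# ping -c 1 master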
1.2 Configure passwordless SSH
[root@master opt]# ssh-keygen -t rsa
[root@master opt]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master
[root@master opt]# ssh root@master
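If the setup worked, the last command logs in without a password prompt. A quick one-off check is to run a remote command directly; it should print the hostname without asking for a password:
[root@master opt]# ssh root@master hostname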
1.3 Install the JDK
vim /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.8.0_221
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$PATH
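After saving, reload the profile and confirm the JDK is picked up (this assumes the JDK was unpacked to the path above):
[root@master ~]# source /etc/profile
[root@master ~]# java -version
[root@master ~]# echo $JAVA_HOME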
2. Hadoop
Configuration files
hadoop-env.sh
export JAVA_HOME=/usr/local/java/jdk1.8.0_221
core-site.xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.7.7/tmp</value>
</property>
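hadoop.tmp.dir points at a directory that may not exist yet; creating it up front is a small safeguard (a minimal sketch, assuming Hadoop was unpacked to /opt/hadoop-2.7.7 as in the value above):
[root@master ~]# mkdir -p /opt/hadoop-2.7.7/tmp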
hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
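Note that a fresh Hadoop 2.7.7 unpack usually ships this file only as mapred-site.xml.template, so it typically has to be copied before adding the property above (paths assume the layout used here):
[root@master ~]# cd /opt/hadoop-2.7.7/etc/hadoop
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml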
yarn-site.xml
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
Environment variables
export HADOOP_HOME=/opt/hadoop-2.7.7
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
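As with the JDK, reload the profile and check that the hadoop command resolves:
[root@master ~]# source /etc/profile
[root@master ~]# hadoop version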
Format the NameNode. Once it has been formatted, do not format it again: reformatting assigns a new clusterID, which will no longer match any data a DataNode has already written under hadoop.tmp.dir.
[root@master ~]# hdfs namenode -format
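A successful format creates the NameNode metadata under the hadoop.tmp.dir configured earlier; a quick sanity check (path derived from the core-site.xml above) is to list the metadata directory, which should contain a VERSION file among the initial files:
[root@master ~]# ls /opt/hadoop-2.7.7/tmp/dfs/name/current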
Source: https://www.cnblogs.com/wuyicode/p/12236962.html