Step 1: Start the hiveserver2 service
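A minimal sketch of this step, assuming Hive is installed and $HIVE_HOME/bin is on the PATH, is to launch the service in the background and keep its log:

# start hiveserver2 in the background and capture its output
nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &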
Step 2: Start beeline
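Beeline ships with Hive, so this step is just launching the interactive client (same PATH assumption as above):

# open the interactive beeline prompt
beeline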
Step 3: Connect to hiveserver2 (the port number 1000000 used below should be lowered, since it exceeds the maximum valid port number of 65535; 10000 is the recommended value)
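Inside beeline the connection is opened with !connect jdbc:hive2://localhost:10000, after which it prompts for a user name and password. Equivalently, the whole thing can be done non-interactively from the shell; a sketch assuming the corrected port 10000 and the user zhang from the proxyuser settings in core-site.xml below:

# connect to hiveserver2 in one step from the shell
beeline -u jdbc:hive2://localhost:10000 -n zhang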
If startup does not succeed, first check whether the following two configuration files are correct:
1) In core-site.xml under the hadoop folder:
<property>
    <name>hadoop.proxyuser.zhang.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.zhang.groups</name>
    <value>*</value>
</property>
2) In hive-site.xml under the hive folder:
<property>
    <name>hive.server2.thrift.port</name>
    <value>1000000</value>
</property>
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>localhost</value>
</property>
Possible error:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (state=08S01,code=0)
Solution 1: Kill the process occupying the port (https://blog.csdn.net/xiaoqiu_cr/article/details/81634434); see the command sketch after solution 2.
Solution 2: Change the port number in the configuration file directly (this is the approach I used).
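For solution 1, a sketch of finding and killing whatever process is listening on port 10000 (the <PID> placeholder must be replaced with the number reported by the first command):

# find the process listening on port 10000
lsof -i :10000            # alternatively: netstat -tlnp | grep 10000
# kill it by PID
kill -9 <PID>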
The complete content of hive-site.xml:
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value> <description>JDBC connect string for a JDBC metastore</description> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> <description>Driver class name for a JDBC metastore</description> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>hive</value> <description>username to use against metastore database</description> </property> <property> <name>javax.jdo.option.ConnectionPassword</name> <value>hive</value> <description>password to use against metastore database</description> </property> <property> <name>hive.cli.print.current.db</name> <value>true</value> </property> <property> <name>hive.server2.thrift.port</name> <value>1000000</value> </property> <property> <name>hive.server2.thrift.bind.host</name> <value>localhost</value> </property> </configuration>
The content of Hadoop's core-site.xml:
<?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> <!-- Put site-specific property overrides in this file. --> <configuration> <property> <name>hadoop.tmp.dir</name> <value>file:/usr/local/hadoop/tmp</value> </property> <property> <name>fs.defaultFS</name> <value>hdfs://localhost:9000</value> </property> <property> <name>hadoop.proxyuser.zhang.hosts</name> <value>*</value> </property> <property> <name>hadoop.proxyuser.zhang.groups</name> <value>*</value> </property> </configuration>
Source: https://www.cnblogs.com/zyt-bg/p/11470168.html