java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

无人共我 2020-11-27 14:38

I have Hadoop 2.7.1 and apache-hive-1.2.1 installed on Ubuntu 14.04.

  1. Why is this error occurring?
  2. Is a metastore installation required?
12 Answers
  • 2020-11-27 14:57

    This is probably due to a missing connection to the Hive metastore. My Hive metastore is stored in MySQL, so Hive needs to reach MySQL, and I added the corresponding dependency to my build.sbt:

    libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.38"
    

    and the problem is solved!
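
    If you run Spark interactively rather than through an sbt build, the same driver can be supplied at launch instead. A minimal sketch, assuming the connector jar has already been downloaded (the local path is illustrative):

    # Put the MySQL JDBC driver on the classpath when starting the shell
    spark-shell --jars /path/to/mysql-connector-java-5.1.38.jar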

  • 2020-11-27 14:59

    Starting the Hive metastore service worked for me. First, start the metastore service:

     $ hive --service metastore 
    

    https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/validate_installation.html

    Second, initialize and verify the metastore schema:

     $ schematool -dbType mysql -initSchema  
     $ schematool -dbType mysql -info
    

    https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool
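
    If the metastore service needs to outlive the terminal session, it can be kept in the background. A minimal sketch (the log path is illustrative):

    nohup hive --service metastore > /tmp/metastore.log 2>&1 &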

  • 2020-11-27 15:01

    In the middle of the stack trace, lost in the "reflection" junk, you can find the root cause:

    The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
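
    In that case the fix is to put the MySQL JDBC driver on Hive's classpath. A minimal sketch, assuming HIVE_HOME is set and the connector jar has already been downloaded (the source path is illustrative):

    # Check whether the driver is already on Hive's classpath
    ls "$HIVE_HOME/lib" | grep -i mysql-connector

    # If it is missing, copy the downloaded jar into Hive's lib directory
    cp ~/Downloads/mysql-connector-java-5.1.38.jar "$HIVE_HOME/lib/"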

  • 2020-11-27 15:03

    Run Hive in debug mode:

    hive -hiveconf hive.root.logger=DEBUG,console

    and then execute:

    show tables;

    The debug output will reveal the actual problem.
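
    To keep the verbose output searchable, the session can also be captured to a file and scanned for the underlying exception. A minimal sketch (the log path is illustrative):

    hive -hiveconf hive.root.logger=DEBUG,console 2>&1 | tee /tmp/hive-debug.log
    grep -n "Caused by" /tmp/hive-debug.log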

  • 2020-11-27 15:05

    I solved this problem by removing --deploy-mode cluster from my spark-submit command. By default, spark-submit uses client mode, which has the following advantages:

    1. It opens a Netty HTTP server and distributes all jars to the worker nodes.
    2. The driver program runs on the master node, which means dedicated resources for the driver process.
    

    While in cluster mode:

    1. The driver runs on a worker node.
    2. All jars need to be placed in a common folder accessible to every worker node, or in a folder on each worker node.

    Here the job cannot reach the Hive metastore because the Hive jar is unavailable to the nodes in the cluster; the sketch below contrasts the two modes.
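
    A minimal sketch of both submission modes, assuming a YARN cluster; the application class (com.example.Main), application jar (app.jar), and connector path are illustrative, not from the original answer:

    # Client mode (the workaround above): the driver runs on the submitting
    # machine, so locally available Hive/MySQL jars are on its classpath
    spark-submit --master yarn --deploy-mode client --class com.example.Main app.jar

    # Cluster mode alternative: ship the missing jars explicitly so every
    # node can load the metastore client and JDBC driver
    spark-submit --master yarn --deploy-mode cluster \
      --jars /path/to/mysql-connector-java-5.1.38.jar \
      --class com.example.Main app.jar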

  • 2020-11-27 15:06

    I made the modifications below and was able to start the Hive shell without any errors:

    1. ~/.bashrc

    Open the file (e.g. sudo gedit ~/.bashrc) and add the environment variables below at the end of the file:

    #Java Home directory configuration
    export JAVA_HOME="/usr/lib/jvm/java-9-oracle"
    export PATH="$PATH:$JAVA_HOME/bin"
    
    # Hadoop home directory configuration
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin
    export PATH=$PATH:$HADOOP_HOME/sbin
    
    export HIVE_HOME=/usr/lib/hive
    export PATH=$PATH:$HIVE_HOME/bin
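
    After saving, reload the file so the new variables take effect in the current shell:

    source ~/.bashrc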
    

    2. hive-site.xml

    Create this file (hive-site.xml) in the conf directory of Hive and add the details below:

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
    
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
    </property>
    
    
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>root</value>
    </property>
    
    <property>
      <name>datanucleus.autoCreateSchema</name>
      <value>true</value>
    </property>
    
    <property>
      <name>datanucleus.fixedDatastore</name>
      <value>true</value>
    </property>
    
    <property>
      <name>datanucleus.autoCreateTables</name>
      <value>true</value>
    </property>
    
    </configuration>
    

    3. You also need to put the jar file (mysql-connector-java-5.1.28.jar) in the lib directory of Hive.

    4. The following must be installed on your Ubuntu machine to start the Hive shell:

    1. MySql
    2. Hadoop
    3. Hive
    4. Java

    5. Execution:

    1. Start all Hadoop services: start-all.sh

    2. Run the jps command to check that all Hadoop services are up and running: jps

    3. Enter the hive command to open the Hive shell: hive
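
    For reference, a rough sketch of what a healthy single-node setup looks like at this point (exact daemon names can vary with configuration):

    $ jps
    # typically lists: NameNode, DataNode, SecondaryNameNode,
    # ResourceManager, NodeManager (plus Jps itself)
    $ hive
    hive>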
