FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

庸人自扰 2021-01-01 07:05

I shut down my HDFS client while the HDFS and Hive instances were running. Now that I have logged back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe

8 answers
  • 2021-01-01 07:30

    See: getting error in hive

    Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?

    For instance, if you're using MySQL to hold your metadata db, you will need to copy mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.

    This fixed that same error for me.
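    As a sketch (the connector version and download location depend on your install), the fix amounts to:

    # copy the MySQL JDBC driver into Hive's classpath
    cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/
    # hive-site.xml should point the metastore at the same MySQL database, e.g.:
    #   javax.jdo.option.ConnectionURL        = jdbc:mysql://<host>:3306/metastore
    #   javax.jdo.option.ConnectionDriverName = com.mysql.jdbc.Driver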

  • 2021-01-01 07:32

    For instance, I use MySQL to hold the metadata db; copying mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder resolved the error for me.

  • 2021-01-01 07:34

    I have faced this issue too; in my case it happened while running the hive command from the command line.

    Since I was using kerberized Hive, I resolved it by obtaining a ticket with kinit:

    kinit -kt <your keytab file location> <kerberos principal>
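
    For example (the keytab path and principal here are hypothetical; use your own), followed by klist to confirm a ticket was obtained:

    kinit -kt /etc/security/keytabs/hive.service.keytab hive/host.example.com@EXAMPLE.COM
    klist    # should now list a valid ticket for that principal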
    
  • 2021-01-01 07:49

    I have resolved the problem. These are the steps I followed (a shell sketch of the file operations is given after the notes below):

    1. Go to $HIVE_HOME/bin/metastore_db
    2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
    3. Delete the lock entries from db.lck and dbex.lck
    4. Log out from the hive shell as well as from all running instances of HDFS
    5. Log back into HDFS and the hive shell. Running DDL commands may again give you the "Could not instantiate HiveMetaStoreClient" error
    6. Now copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
    7. Log out from all hive shell and HDFS instances
    8. Log back in and you should see your old tables

    Note: Step 5 may seem a little weird because, even after deleting the lock entries, it will still give the HiveMetaStoreClient error, but it worked for me.

    Advantage: You don't have to duplicate the effort of re-creating the entire database.

    Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks in advance.
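
    A minimal shell sketch of the lock-file shuffle in steps 2, 3 and 6 (this assumes the embedded Derby metastore, so metastore_db sits in the directory hive was started from):

    cd $HIVE_HOME/bin/metastore_db
    # step 2: back up the Derby lock files
    cp db.lck db.lck1
    cp dbex.lck dbex.lck1
    # step 3: empty the lock entries (the files stay in place, just without content)
    > db.lck
    > dbex.lck
    # steps 4-5: exit hive and HDFS, log back in, try a DDL command (it may still fail)
    # step 6: restore the original lock files
    cp db.lck1 db.lck
    cp dbex.lck1 dbex.lck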

  • 2021-01-01 07:50

    I was told that we generally get this exception when the Hive console was not terminated properly. The fix:

    Run the jps command, look for the "RunJar" process, and kill it with kill -9.
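
    For example (the process id will differ on your machine):

    jps | grep RunJar      # a leftover Hive CLI session shows up as "RunJar"
    kill -9 <pid from the jps output above>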

  • 2021-01-01 07:52

    I faced the same issue and resolved it by starting the metastore service. The service may have stopped because your machine was rebooted or went down. You can start it again as follows:

    Log in as $HIVE_USER and run:

    nohup hive --service metastore>$HIVE_LOG_DIR/hive.out 2>$HIVE_LOG_DIR/hive.log & 
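
    To check that the metastore actually came up: by default it listens for Thrift connections on port 9083, and clients locate it through hive.metastore.uris in hive-site.xml (the host below is a placeholder):

    netstat -an | grep 9083
    # hive-site.xml on the clients should contain something like:
    #   hive.metastore.uris = thrift://<metastore-host>:9083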
    