Table loaded through Spark not accessible in Hive


Question


A Hive table created through Spark (PySpark) is not accessible from Hive.

df.write.format("orc").mode("overwrite").saveAsTable("db.table")

Error while accessing from Hive:

Error: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 (state=,code=0)

The table is created successfully in Hive, and it can be read back in Spark. The table metadata is accessible in Hive, and the data files are present in the table's HDFS directory.
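For reference, a minimal read-back check in PySpark (a sketch; it assumes a SparkSession with Hive support enabled, and db.table is the placeholder name from above):

from pyspark.sql import SparkSession

# assumes Hive support was enabled when the session was created
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# reading the table back through Spark succeeds
spark.table("db.table").show(5)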

The TBLPROPERTIES of the Hive table are:

  'bucketing_version'='2',                         
  'spark.sql.create.version'='2.3.1.3.0.0.0-1634', 
  'spark.sql.sources.provider'='orc',              
  'spark.sql.sources.schema.numParts'='1',
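The spark.sql.sources.provider property indicates this is a Spark datasource table rather than a native Hive table, which is relevant to why Hive's reader chokes on it. The properties can be inspected from Spark as well (a sketch, using the placeholder table name):

spark.sql("SHOW TBLPROPERTIES db.table").show(truncate=False)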

I also tried creating the table with other workarounds, but I get an error while creating the table:

df.write.mode("overwrite").saveAsTable("db.table")

OR

df.createOrReplaceTempView("dfTable")
spark.sql("CREATE TABLE db.table AS SELECT * FROM dfTable")

Error:

AnalysisException: u'org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Table default.src failed strict managed table checks due to the following reason: Table is marked as a managed table but is not transactional.);'
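This error reflects the fact that in HDP 3.0 managed Hive tables must be transactional (full ACID), which Spark 2.3 cannot create directly. One commonly suggested variant, not from the original post, is to supply an explicit path so Spark creates the table as external, which is exempt from the managed-table check (the HDFS path below is hypothetical):

# hypothetical external location; adjust to your environment
df.write.format("orc").mode("overwrite") \
    .option("path", "/warehouse/external/db_table") \
    .saveAsTable("db.table")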

Stack version details:

Spark 2.3

Hive 3.1

Hortonworks Data Platform (HDP) 3.0


Answer 1:


As of HDP 3.0, Apache Hive and Apache Spark use separate catalogs, and the two are mutually exclusive: the Hive catalog can only be accessed by Hive or by the Hive Warehouse Connector (HWC), and the Spark catalog can only be accessed through the existing Spark APIs. In other words, features such as ACID tables or Apache Ranger authorization on Hive tables are only available from Spark via HWC; those Hive tables are not directly accessible through the native Spark APIs.

  • The article below explains the setup steps (a usage sketch follows the link):

Integrating Apache Hive with Apache Spark - Hive Warehouse Connector
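For illustration, here is a minimal PySpark sketch of writing and reading through the Hive Warehouse Connector. This is a sketch under assumptions: it presumes the HWC jar and the spark.sql.hive.hiveserver2.jdbc.url configuration are already attached to the Spark session, and it reuses the placeholder db.table from the question; check the exact class and option names against your HWC version.

from pyspark_llap import HiveWarehouseSession

# build an HWC session on top of the existing SparkSession
hive = HiveWarehouseSession.session(spark).build()

# write through HWC so Hive's managed/ACID table rules are honored
df.write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector") \
    .option("table", "db.table") \
    .save()

# read back through the Hive catalog
hive.executeQuery("SELECT * FROM db.table").show(5)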




Answer 2:


I faced the same issue. After setting the following properties in Hive, it works fine:

set hive.mapred.mode=nonstrict;
set hive.optimize.ppd=true;
set hive.optimize.index.filter=true;
set hive.tez.bucket.pruning=true;
set hive.explain.user=false; 
set hive.fetch.task.conversion=none;
set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
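Note that set commands like these are session-scoped: they need to be run in the same Hive/Beeline session that queries the Spark-written table, or persisted in hive-site.xml to apply globally. The last two properties enable Hive's ACID transaction manager, which is what the strict managed-table check in HDP 3.0 expects.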


Source: https://stackoverflow.com/questions/52761391/table-loaded-through-spark-not-accessible-in-hive
