Hive tables created through Spark (PySpark) are not accessible from Hive.
df.write.format("orc").mode("overwrite").saveAsTable("db.table")
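For reference, here is a minimal sketch of the write path, assuming a Hive-enabled SparkSession; "db.table" and the sample data are placeholders from the question. Without enableHiveSupport(), the table is registered in Spark's local catalog rather than the Hive metastore, so Hive may not see it at all:

from pyspark.sql import SparkSession

# Minimal sketch; app name, data, and "db.table" are placeholders.
spark = (
    SparkSession.builder
    .appName("write-orc-table")
    .enableHiveSupport()  # register the table in the Hive metastore
    .getOrCreate()
)

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.format("orc").mode("overwrite").saveAsTable("db.table")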
I faced the same issue. After setting the following properties in Hive, it works fine:
set hive.mapred.mode=nonstrict;
set hive.optimize.ppd=true;
set hive.optimize.index.filter=true;
set hive.tez.bucket.pruning=true;
set hive.explain.user=false;
set hive.fetch.task.conversion=none;
set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
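Note that these are session-level settings: they must be reissued in each Beeline/Hive CLI session, or persisted in hive-site.xml. If you need to apply them programmatically, here is a sketch using the PyHive client; PyHive itself, the host/port values, and the trailing query are assumptions for illustration, not part of the original answer:

from pyhive import hive  # assumed client; install with: pip install 'pyhive[hive]'

# Placeholders: point these at your HiveServer2 instance.
conn = hive.Connection(host="localhost", port=10000)
cursor = conn.cursor()

# The properties from the answer above, applied for this session only.
session_properties = [
    "set hive.mapred.mode=nonstrict",
    "set hive.optimize.ppd=true",
    "set hive.optimize.index.filter=true",
    "set hive.tez.bucket.pruning=true",
    "set hive.explain.user=false",
    "set hive.fetch.task.conversion=none",
    "set hive.support.concurrency=true",
    "set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager",
]
for prop in session_properties:
    cursor.execute(prop)

# The Spark-written table should now be readable from Hive.
cursor.execute("SELECT * FROM db.table LIMIT 10")
print(cursor.fetchall())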