Execute query on SQL Server using Spark SQL


Question


I am trying to get the row count and column count of all the tables in a schema in SQL Server using Spark SQL.

When I execute the below query using Sqoop, it gives me the correct results:

sqoop eval --connect "jdbc:sqlserver://<hostname>;database=<dbname>" \
--username=<username> --password=<pwd> \
--query "SELECT
    ta.name TableName,
    pa.rows RowCnt,
    COUNT(ins.COLUMN_NAME) ColCnt
FROM <db>.sys.tables ta
INNER JOIN <db>.sys.partitions pa ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN <db>.sys.schemas sc ON ta.schema_id = sc.schema_id
JOIN <db>.INFORMATION_SCHEMA.COLUMNS ins
    ON ins.TABLE_SCHEMA = sc.name AND ins.TABLE_NAME = ta.name
WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) AND sc.name = '<schema>'
GROUP BY sc.name, ta.name, pa.rows
ORDER BY TableName"

But when I try to execute the same query from Spark SQL, I get the error "com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'WHERE'". Please help me out if anyone has an idea about this error.

Below is the Spark SQL command I executed. I started the shell with

spark-shell --jars "/var/lib/sqoop/sqljdbc42.jar"

and then ran:

sqlContext.read.format("jdbc")
.option("url", "jdbc:sqlserver://<hostname>;database=<dbname>;user=<user>;password=<pwd>")
.option("dbtable",
    """(SELECT
    ta.name TableName,
    pa.rows RowCnt,
    COUNT(ins.COLUMN_NAME) ColCnt
    FROM <db>.sys.tables ta
    INNER JOIN <db>.sys.partitions pa ON pa.OBJECT_ID = ta.OBJECT_ID
    INNER JOIN <db>.sys.schemas sc ON ta.schema_id = sc.schema_id
    JOIN <db>.INFORMATION_SCHEMA.COLUMNS ins
        ON ins.TABLE_SCHEMA = sc.name AND ins.TABLE_NAME = ta.name
    WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) AND sc.name = '<schema>'
    GROUP BY sc.name, ta.name, pa.rows)""")
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.load()

Expected output:

TableName, RowCnt, ColCnt
table A, 62, 30
table B, 3846, 76


Answer 1:


The problem in your Spark SQL command is with the dbtable option.

dbtable accepts anything that is valid in a FROM clause of a SQL query. For example, instead of a full table you can also use a subquery in parentheses. However, a subquery in parentheses must have an alias: to infer the schema, Spark wraps the dbtable value in a statement of the form SELECT * FROM <dbtable> WHERE 1=0, and SQL Server rejects a derived table without an alias, which is exactly the "Incorrect syntax near the keyword 'WHERE'" error you are seeing. Thus your command should be modified as:

sqlContext
.read
.format("jdbc")
.option("url", "jdbc:sqlserver://<hostname>;database=<dbname>;user=<user>;password=<pwd>")
.option("dbtable", 
    """(SELECT 
    ta.name TableName,
    pa.rows RowCnt, 
    COUNT(ins.COLUMN_NAME) ColCnt 
    FROM <db>.sys.tables ta 
    INNER JOIN 
    <db>.sys.partitions pa 
    ON pa.OBJECT_ID = ta.OBJECT_ID 
    INNER JOIN 
    <db>.sys.schemas sc 
    ON ta.schema_id = sc.schema_id 
    JOIN 
    <db>.INFORMATION_SCHEMA.COLUMNS ins 
    ON ins.TABLE_SCHEMA = sc.name and ins.TABLE_NAME = ta.name 
    WHERE ta.is_ms_shipped = 0 
     AND pa.index_id IN (1,0) 
     AND sc.name ='<schema>' 
    GROUP BY sc.name, ta.name, pa.rows) as TEMP""")
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.load()
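
If you bind the result to a value, you can verify the output directly from the spark-shell. A minimal sketch (the name df is just for illustration):

val df = sqlContext.read.format("jdbc")
    // ... same options as above, with the aliased subquery in "dbtable" ...
    .load()

df.printSchema()  // should list TableName, RowCnt, ColCnt
df.show()         // should print one row per table, e.g. table A, 62, 30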

Just a hunch. Hope this helps!
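
One more option, assuming your cluster runs Spark 2.4 or later (your use of sqlContext suggests an older release, so treat this as a sketch): the JDBC source also accepts a query option that takes the statement directly, and Spark adds the required alias itself. Note that query and dbtable are mutually exclusive.

spark.read.format("jdbc")
.option("url", "jdbc:sqlserver://<hostname>;database=<dbname>;user=<user>;password=<pwd>")
// "query" (Spark 2.4+) takes the bare SELECT; no manual "as TEMP" needed
.option("query",
    """SELECT ta.name TableName, pa.rows RowCnt, COUNT(ins.COLUMN_NAME) ColCnt
    FROM <db>.sys.tables ta
    INNER JOIN <db>.sys.partitions pa ON pa.OBJECT_ID = ta.OBJECT_ID
    INNER JOIN <db>.sys.schemas sc ON ta.schema_id = sc.schema_id
    JOIN <db>.INFORMATION_SCHEMA.COLUMNS ins
        ON ins.TABLE_SCHEMA = sc.name AND ins.TABLE_NAME = ta.name
    WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) AND sc.name = '<schema>'
    GROUP BY sc.name, ta.name, pa.rows""")
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.load()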



Source: https://stackoverflow.com/questions/52487007/execute-query-on-sqlserver-using-spark-sql
