dashdb

How do I declare and increment local variables in DB2?

Posted by 风格不统一 on 2019-12-12 03:45:52
Question: I want to show a row number for each row of the result set. In MySQL I use this query:

```sql
SELECT @rownum := @rownum + 1 AS row, e.*
FROM Employee e, (SELECT @rownum := 0) r
```

Here @rownum is a session variable whose value is incremented for each result row. How do I write this query in DB2 (IBM's dashDB)?

Answer 1: If you are just looking to number the output rows, you can use the row_number() function:

```sql
SELECT row_number() OVER () AS row, e.*
FROM Employee e
```

Answer 2: If you are looking to set a variable and set a
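A note on the first answer: row_number() with an empty OVER() clause assigns numbers in an arbitrary order. If the numbering should follow a specific ordering, put an ORDER BY inside the window; a minimal sketch, assuming a hypothetical empid column on Employee:

```sql
-- empid is a hypothetical ordering column, not from the original question.
SELECT row_number() OVER (ORDER BY e.empid) AS row, e.*
FROM Employee e
```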

An internal error has occurred. The application may still be initializing or the URL used is invalid

Posted by 寵の児 on 2019-12-12 03:33:42
Question: When I try opening the dashDB console from Bluemix, I occasionally get the following error message:

    An internal error has occurred. The application may still be initializing or the URL used is invalid. Check the URL and try again. For more information, view the server log files.

How can I fix this?

Answer 1: The problem seems to be a cookie caching issue. Get the domain name from the browser window that is displaying the error message, e.g. awh-yp-small02.services.dal.bluemix.net. Open the cookie page,

Cannot add Foreign Key on tables in DashDB / DB2 on Bluemix

Posted by 与世无争的帅哥 on 2019-12-11 03:34:07
Question: When I create tables in dashDB (DB2) on Bluemix like this:

```sql
CREATE TABLE DEPARTMENT (
  depname CHAR(10) UNIQUE NOT NULL,
  phone   INTEGER
);
ALTER TABLE DEPARTMENT ADD CONSTRAINT DEPARTMENT_PK PRIMARY KEY (depname);

CREATE TABLE EMPLOYEE (
  "EmpNr"         NUMERIC(3) UNIQUE NOT NULL,
  empname         CHAR(20),
  depname         CHAR(10),
  EMPLOYEE2_title CHAR(20)
);
ALTER TABLE EMPLOYEE ADD CONSTRAINT EMPLOYEE_PK PRIMARY KEY ("EmpNr");
ALTER TABLE EMPLOYEE ADD CONSTRAINT EMPLOYEE_DEPARTMENT_FK FOREIGN KEY
```
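The excerpt ends mid-statement, so the REFERENCES clause below is an assumed completion, not the asker's original code. A plausible reason the foreign key cannot be added is that dashDB creates column-organized tables by default, where enforced foreign keys are not supported; declaring the constraint as NOT ENFORCED (informational) usually works:

```sql
-- Assumed completion of the truncated statement: reference DEPARTMENT's
-- primary key. NOT ENFORCED makes this an informational constraint, which
-- dashDB's default column-organized tables require for foreign keys.
ALTER TABLE EMPLOYEE
  ADD CONSTRAINT EMPLOYEE_DEPARTMENT_FK
  FOREIGN KEY (depname) REFERENCES DEPARTMENT (depname)
  NOT ENFORCED;
```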

Is there any way I can restore a DB2 backup file onto IBM DashDB?

Posted by 吃可爱长大的小学妹 on 2019-12-10 18:49:50
Question: I am trying to restore a DB2 backup file into my Bluemix dashDB service. How do I go about doing this?

Answer 1: You cannot restore your DB2 backup image into dashDB, for several reasons. In an entry-level, shared dashDB instance you only have access to one schema in a physical database shared by others. Even if you have a dedicated instance, you would need 1) access to the database's local disk to upload the image and 2) sufficient privileges (at least SYSMAINT authority) to perform the restore. I doubt
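The answer is cut off, but since a native restore is ruled out, the usual alternative is to move the data rather than the backup image: export each table from the source DB2 database and load the resulting files into dashDB. A rough sketch using the DB2 command-line processor (the table name DEPARTMENT is illustrative):

```sql
-- Run on the source DB2 server from the CLP: write the table out as a
-- comma-delimited file.
EXPORT TO department.csv OF DEL SELECT * FROM DEPARTMENT;
-- department.csv can then be loaded into dashDB, for example through the
-- dashDB web console's load-from-file feature.
```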

Spark JDBC to DashDB (DB2) with CLOB errors

Posted by 时光怂恿深爱的人放手 on 2019-12-06 03:45:45
I am working to connect my Spark application to dashDB. Currently, I can load my data just fine. However, I am unable to save a DataFrame to dashDB. Any insight would be helpful.

```scala
var jdbcSets = sqlContext.read.format("jdbc")
  .options(Map("url" -> url, "driver" -> driver, "dbtable" -> "setsrankval"))
  .load()
jdbcSets.registerTempTable("setsOpponentRanked")
jdbcSets = jdbcSets.coalesce(10)
sqlContext.cacheTable("setsOpponentRanked")
```

However, when I try to save large DataFrames, I get the error:

    DB2 SQL Error: SQLCODE=-1666, SQLSTATE=42613, SQLERRMC=CLOB, DRIVER=4.19.26

The code I use to save the
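The excerpt is cut off, but SQLCODE -1666 generally means a data type is not supported in the target table's context: Spark's JDBC writer maps StringType columns to CLOB for DB2, and dashDB's default column-organized tables do not accept CLOB. A common workaround (a sketch with hypothetical column names) is to pre-create the target table with VARCHAR columns and have Spark append into it instead of auto-creating it:

```sql
-- Hypothetical target table: creating it manually with VARCHAR avoids
-- Spark auto-creating string columns as CLOB. The DataFrame is then
-- written with SaveMode.Append so Spark does not recreate the table.
CREATE TABLE SETSRANKVAL_OUT (
  player  VARCHAR(128) NOT NULL,
  rankval INTEGER
);
```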

No matched schema for {"_id":"…","doc":{…}

Posted by 雨燕双飞 on 2019-12-02 10:34:31
When using the SDP to extract data from Cloudant and populate dashDB, I occasionally see error messages in the dashDB "XXXX_OVERFLOW" table that look like this:

    No matched schema for {"_id":"...","doc":{...}

Questions: What does this error mean? How can I fix it?

Answer (Chris Snow): There are two main phases to the SDP process:

1. Schema analysis
2. Data import

In the schema analysis phase, the SDP analyses a sample of documents in Cloudant and uses the document structures of the sample to infer the target schema in dashDB. The above error is encountered when the SDP tries to import a document with a schema
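To illustrate the kind of mismatch that lands a document in the overflow table (a fabricated example, not from the original answer): if the sampled documents all looked like the first line below, the second document's differently-typed age field would not match the inferred schema:

```json
{"_id": "doc1", "doc": {"name": "Alice", "age": 42}}
{"_id": "doc2", "doc": {"name": "Bob", "age": "unknown"}}
```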