snowflake-task

How can I add a datetime stamp to the zip file when unloading data from Snowflake to S3?

Submitted by 一世执手 on 2021-02-04 08:25:52
Question: I want to be able to add a timestamp to the filename I'm writing to S3. So far I've been able to write files to AWS S3 using the example below. Can someone guide me on how to put a datetime stamp in the file name?

copy into @s3bucket/something.csv.gz
from (select * from mytable)
file_format = (type=csv FIELD_OPTIONALLY_ENCLOSED_BY = '"' compression='gzip')
single=true header=TRUE;

Thanks in advance.

Answer 1: The syntax for defining a path inside of a stage or location portion of the
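The answer is cut off, but it points at the core constraint: the stage path in COPY INTO must be a constant, so a timestamp has to be spliced in with dynamic SQL. A minimal sketch using a Snowflake Scripting anonymous block (it assumes the @s3bucket stage and mytable from the question; the timestamp format is illustrative, and the same dynamic-SQL approach works from a JavaScript stored procedure):

EXECUTE IMMEDIATE $$
DECLARE
  stmt STRING;
BEGIN
  -- Build a file name like something_20210204_082552.csv.gz from the current UTC time.
  stmt := 'copy into @s3bucket/something_' ||
          TO_VARCHAR(CONVERT_TIMEZONE('UTC', CURRENT_TIMESTAMP()), 'YYYYMMDD_HH24MISS') ||
          '.csv.gz from (select * from mytable) ' ||
          'file_format = (type=csv FIELD_OPTIONALLY_ENCLOSED_BY = ''"'' compression=''gzip'') ' ||
          'single=true header=TRUE';
  -- The stage path must be a constant in COPY INTO, so run the assembled text dynamically.
  EXECUTE IMMEDIATE :stmt;
END;
$$;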

Facing ClassNotFoundException while reading a Snowflake table using Spark

Submitted by 空扰寡人 on 2020-08-06 05:54:26
Question: I am trying to read a Snowflake table from spark-shell. To do that, I did the following.

pyspark --jars spark-snowflake_2.11-2.8.0-spark_2.4.jar,jackson-dataformat-xml-2.10.3.jar

Using Python version 2.7.5 (default, Feb 20 2018 09:19:12)
SparkSession available as 'spark'.
>>> from pyspark import SparkConf, SparkContext
>>> from pyspark.sql import SQLContext
>>> from pyspark.sql.types import *
>>> from pyspark import SparkConf, SparkContext
>>> sc = SparkContext("local", "Simple App")
>>>
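The transcript is cut off before the stack trace, but with this launch line the missing class is commonly net.snowflake.client.jdbc.SnowflakeDriver: the Spark connector requires the Snowflake JDBC driver on the classpath, and it is not among the jars passed above. A sketch of the corrected launch command (the snowflake-jdbc version is illustrative):

pyspark --jars snowflake-jdbc-3.12.8.jar,spark-snowflake_2.11-2.8.0-spark_2.4.jar,jackson-dataformat-xml-2.10.3.jar

Alternatively, --packages net.snowflake:snowflake-jdbc:3.12.8,net.snowflake:spark-snowflake_2.11:2.8.0-spark_2.4 lets Spark resolve both jars from Maven with matching versions.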

How to build a Snowflake query to get these results

Submitted by ♀尐吖头ヾ on 2020-07-22 05:40:19
Question: The table below (TMP_RN_TC) in the query is a temp table used to load data into the final table. The temp table gets its data from a stage table, and its output is stored in the final table. The stage table receives 15 days of data on every run, but the fact/final table should store all of the data on the first run and, after that, only the one day of data that changes (the remaining 14 days of data stay the same). Since the stage table will hold even the duplicate
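The question is truncated, but the pattern it describes (reload 15 overlapping days, keep only new or changed rows) is commonly handled with a MERGE plus a ROW_NUMBER() dedupe, which the TMP_RN_TC name hints at. A minimal sketch with hypothetical table and column names (stage_table, final_table, id, event_date, payload, load_ts):

merge into final_table f
using (
    -- Keep only the latest copy of each key; the stage can contain duplicates.
    select id, event_date, payload
    from stage_table
    qualify row_number() over (partition by id order by load_ts desc) = 1
) s
on f.id = s.id
when matched and f.payload <> s.payload then
    update set event_date = s.event_date, payload = s.payload
when not matched then
    insert (id, event_date, payload) values (s.id, s.event_date, s.payload);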

How to run Snowflake side-effect functions like SYSTEM$GENERATE_SCIM_ACCESS_TOKEN within a procedure with owner's rights?

Submitted by 两盒软妹~` on 2020-07-16 07:59:07
Question: Basically, I want to do SCIM integration in Snowflake. For that I have to use this command to get the token that will be passed to Azure AD:

call system$generate_scim_access_token('<value>');

This command can only be run as ACCOUNTADMIN, and running it that way I am able to get the token. But in the future I will not have ACCOUNTADMIN rights, so I created a procedure as ACCOUNTADMIN and set it to execute as owner, so that whenever any other role which is having
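A sketch of the owner's-rights procedure the asker describes, written as a JavaScript stored procedure created by ACCOUNTADMIN. The procedure and role names are hypothetical, and whether Snowflake permits this particular system function inside an owner's-rights procedure should be verified against current documentation:

create or replace procedure get_scim_token(integration_name string)
returns string
language javascript
execute as owner
as
$$
  // Run the system function with the owner's (ACCOUNTADMIN's) rights and
  // return the generated SCIM access token to the caller.
  // Note: system functions are invoked with SELECT, not CALL.
  var stmt = snowflake.createStatement({
    sqlText: "select system$generate_scim_access_token(?)",
    binds: [INTEGRATION_NAME]
  });
  var rs = stmt.execute();
  rs.next();
  return rs.getColumnValue(1);
$$;

-- Grant usage so a non-ACCOUNTADMIN role can call it:
grant usage on procedure get_scim_token(string) to role scim_admin;  -- hypothetical role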

Significance of Constraints in Snowflake

Submitted by 喜欢而已 on 2020-07-08 13:22:55
Question: Snowflake allows UNIQUE, PRIMARY KEY, FOREIGN KEY and NOT NULL constraints, but I have read that it enforces only the NOT NULL constraint. What, then, is the purpose of the other keys, and under what circumstances should we define them? I would appreciate any examples. Thank you, Prashanth.

Answer 1: They express intent, helping people understand your data models. Data modeling tools can use them to generate diagrams. You can also programmatically access them to validate data integrity yourself.

Source: https:/
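A small illustration of the answer, with hypothetical tables: the declared keys document the model and feed metadata queries, but Snowflake will happily accept the duplicate insert below.

create or replace table customers (
    customer_id number primary key,   -- documented, not enforced
    email       varchar unique,       -- documented, not enforced
    name        varchar not null      -- the only constraint Snowflake enforces
);

create or replace table orders (
    order_id    number primary key,
    customer_id number references customers (customer_id)  -- foreign key, documentation only
);

-- Both rows are accepted despite the PRIMARY KEY and UNIQUE declarations:
insert into customers values (1, 'a@example.com', 'Ann'), (1, 'a@example.com', 'Ann');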

Using scheduled tasks in Snowflake to clone DBs with dynamic names

Submitted by 徘徊边缘 on 2020-06-29 03:48:14
Question: I want to use the Snowflake task scheduler to clone one or all of the databases with a dynamic clone name, something like below. Is it possible to do this without creating a stored procedure? As I have multiple databases under my account, I would prefer to clone all of them in one task.

create database xx_date clone xx

I appreciate your response. Thanks,

Answer 1: "Is it possible to do it without creating a Stored Procedure?" The CREATE TASK statement syntax only allows for a single SQL statement to be specified, and the
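Since the truncated answer notes that a task body is a single SQL statement, the usual workaround is to put the dynamic CREATE DATABASE ... CLONE inside a procedure and have the task call it. A sketch with hypothetical names (the warehouse, schedule, and database are illustrative):

create or replace procedure clone_db_with_date(db_name string)
returns string
language javascript
as
$$
  // Build a clone name like MYDB_20200629 from the current date.
  var suffix = new Date().toISOString().slice(0, 10).replace(/-/g, '');
  var clone_name = DB_NAME + '_' + suffix;
  // Names come from the trusted argument, so simple concatenation is used here.
  snowflake.execute({ sqlText: "create database " + clone_name + " clone " + DB_NAME });
  return "created " + clone_name;
$$;

create or replace task nightly_clone
  warehouse = mywh                       -- hypothetical warehouse
  schedule  = 'USING CRON 0 2 * * * UTC' -- illustrative schedule
as
  call clone_db_with_date('MYDB');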

Lateral Flatten Snowpipe data with a mixture of arrays and dicts

Submitted by 泪湿孤枕 on 2020-04-18 05:27:42
Question: I have two differently structured JSON files being piped in from a Snowpipe. The only difference is that instead of a nested dict, one has many nested arrays. I am trying to figure out how to transform structure 1 into one finalized table. I have successfully transformed structure 2 into a table and included the code below. I know I need to make use of LATERAL FLATTEN but have not been successful.

Structure 1: Nested Arrays (need help on this)

This JSON lives within a table and in column *
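The JSON itself is cut off, but the general pattern for nested arrays is to chain one LATERAL FLATTEN per array level, feeding each level the previous one's value. A generic sketch with hypothetical table, column, and key names (raw_table, v, orders, items):

select
    t.v:customer_id::string  as customer_id,
    o.value:order_id::string as order_id,
    i.value:sku::string      as sku,
    i.value:qty::number      as qty
from raw_table t,
     lateral flatten(input => t.v:orders) o,     -- first array level
     lateral flatten(input => o.value:items) i;  -- nested array inside each order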