I'm trying to connect Spark with Amazon Redshift, but I'm getting this error:
My code is as follows:
from pyspark.sql import SQLContext
If you are using Databricks, I think you don't have to create a new SQLContext, because one is already created for you as sqlContext. Try this code:
from pyspark.sql import SQLContext

# Pass your AWS credentials to the underlying Hadoop configuration so Spark can access S3
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_KEY_ID")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY")

df = sqlContext.read \
    .......
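For reference, a minimal sketch of what the rest of that read might look like with the spark-redshift connector; the cluster URL, table name, and tempdir below are placeholders, not values from your setup:

# Hypothetical example: read a Redshift table via the spark-redshift data source,
# staging data in S3 (replace the placeholder values with your own)
df = sqlContext.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:redshift://YOUR_CLUSTER:5439/YOUR_DB?user=YOUR_USER&password=YOUR_PASSWORD") \
    .option("dbtable", "your_table") \
    .option("tempdir", "s3n://YOUR_BUCKET/tmp/") \
    .load()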
Or maybe the bucket is not mounted; try mounting it:
dbutils.fs.mount("s3a://%s:%s@%s" % (ACCESS_KEY, ENCODED_SECRET_KEY, AWS_BUCKET_NAME), "/mnt/%s" % MOUNT_NAME)
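If those variables are not defined yet, something along these lines would set them up before the mount call above; the names and values are placeholders, assuming an access key/secret key pair for the bucket:

# Placeholder credentials and names used by dbutils.fs.mount above
ACCESS_KEY = "YOUR_KEY_ID"
SECRET_KEY = "YOUR_SECRET_ACCESS_KEY"
# The secret key must be URL-encoded because it is embedded in the s3a:// URL
ENCODED_SECRET_KEY = SECRET_KEY.replace("/", "%2F")
AWS_BUCKET_NAME = "your-bucket-name"
MOUNT_NAME = "your-mount-name"

Once mounted, the bucket's contents are available under /mnt/your-mount-name in DBFS.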