I'm using spark-redshift (https://github.com/databricks/spark-redshift), which uses Avro for the transfer.
Reading from Redshift is OK, but writing fails with an Avro-related error.
Just for reference, here is a workaround by Alex Nastetsky:
Delete the conflicting Avro jars from the master node:
find / -name "*avro*jar" -print0 2> /dev/null | xargs -0 -I file sudo rm file
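Before running the delete, it may be worth previewing what will be removed; the same find on its own only lists the matching jars and deletes nothing:

find / -name "*avro*jar" 2> /dev/null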
Delete the Avro jars from the slave nodes:
yarn node -list | sed 's/ .*//g' | tail -n +3 | sed 's/:.*//g' | xargs -I node ssh node 'find / -name "*avro*jar" -print0 2> /dev/null | xargs -0 -I file sudo rm file'
Setting the Spark classpath configs correctly, as proposed by Jonathan, is worth a shot too.
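A minimal sketch of that approach, assuming the root cause is an older cluster-provided Avro jar shadowing the one spark-redshift needs; the jar path, Avro version, and job name below are hypothetical and must be adapted to your cluster:

# Hypothetical example: prepend a compatible Avro jar to the driver and
# executor classpaths so it takes precedence over the cluster-provided one.
spark-submit \
  --jars /home/hadoop/avro-1.7.7.jar \
  --conf spark.driver.extraClassPath=/home/hadoop/avro-1.7.7.jar \
  --conf spark.executor.extraClassPath=/home/hadoop/avro-1.7.7.jar \
  my_job.py

Unlike the jar-deletion workaround, this leaves the cluster's own jars in place, so other applications that depend on them are unaffected.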