I have been unsuccessful in setting up a Spark cluster that can read AWS S3 files. The software I used is as follows:
I was able to solve this issue on Spark 3.0 / Hadoop 3.2. I documented my answer here as well: AWS EKS Spark 3.0, Hadoop 3.2 Error - NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException
Use the following AWS Java SDK bundle and this issue will be resolved:
aws-java-sdk-bundle-1.11.874.jar (https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle/1.11.874)
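For reference, here is a minimal sketch of how the matching hadoop-aws connector and the SDK bundle might be wired into a Spark job and used to read from S3 via the s3a scheme. The connector version, bucket name, and file path are placeholders I've assumed; in a real cluster these dependencies are more commonly supplied through spark-submit --packages or by placing the jars on the classpath rather than in application code.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch, assuming Spark 3.0.x built against Hadoop 3.2.
// Versions, bucket name, and path below are placeholders; adjust to your environment.
object S3ReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3a-read-example")
      // hadoop-aws must match your Hadoop version, and aws-java-sdk-bundle must be
      // the SDK version that hadoop-aws expects (1.11.874 per the answer above).
      .config("spark.jars.packages",
        "org.apache.hadoop:hadoop-aws:3.2.0,com.amazonaws:aws-java-sdk-bundle:1.11.874")
      .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
      .getOrCreate()

    // Read a file through the s3a scheme to confirm the classpath is set up correctly.
    val df = spark.read.option("header", "true").csv("s3a://my-bucket/some/path/data.csv")
    df.show(5)

    spark.stop()
  }
}
```

The key point is version alignment: hadoop-aws expects the com.amazonaws classes to come from the matching aws-java-sdk-bundle, and a missing or mismatched bundle is what produces the NoClassDefFoundError for MultiObjectDeleteException.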