I'm not able to run a simple Spark job in Scala IDE (Maven Spark project) installed on Windows 7. The spark-core dependency has been added.
Create a folder C:\winutils\bin, download winutils.exe and place it inside C:\winutils\bin, then set the HADOOP_HOME environment variable to C:\winutils.
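With HADOOP_HOME set this way, a trivial local-mode job run from Scala IDE is enough to confirm the error is gone. A minimal Scala sketch (the object name and master URL are just examples):

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsCheck {
  def main(args: Array[String]): Unit = {
    // If winutils.exe is found via HADOOP_HOME, this runs without the
    // "Failed to locate the winutils binary" error in the logs.
    val conf = new SparkConf().setAppName("WinutilsCheck").setMaster("local[*]")
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10).reduce(_ + _)) // prints 55
    sc.stop()
  }
}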
Follow these steps:

1. Create a bin folder in any directory (to be used in step 3).
2. Download winutils.exe and place it in the bin directory.
3. Now add System.setProperty("hadoop.home.dir", "PATH/TO/THE/DIR"); in your code.
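In a Scala project that call goes at the top of main, before the SparkContext is created. A minimal sketch, assuming the bin folder from step 1 was created under C:\hadoop (so winutils.exe sits at C:\hadoop\bin\winutils.exe):

import org.apache.spark.{SparkConf, SparkContext}

object SparkJob {
  def main(args: Array[String]): Unit = {
    // hadoop.home.dir must point at the folder that CONTAINS bin,
    // not at the bin folder itself (Hadoop appends \bin\winutils.exe).
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val sc = new SparkContext(new SparkConf().setAppName("SparkJob").setMaster("local[*]"))
    // ... rest of the job ...
    sc.stop()
  }
}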
If you see the error below:
ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
then follow these steps.
I got the same problem while running unit tests. The following workaround gets rid of the message:
// Requires: import java.io.File; createNewFile() declares IOException.
// Use the working directory as hadoop.home.dir and create an empty
// bin\winutils.exe there for Hadoop's Shell class to find.
File workaround = new File(".");
System.getProperties().put("hadoop.home.dir", workaround.getAbsolutePath());
new File("./bin").mkdirs();
new File("./bin/winutils.exe").createNewFile();
from: https://issues.cloudera.org/browse/DISTRO-544
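If the unit tests are in Scala, the same workaround can live in a test suite's setup hook. A sketch using ScalaTest's BeforeAndAfterAll (assuming ScalaTest 3.x; older versions use the org.scalatest.FunSuite import instead), with the suite name purely illustrative:

import java.io.File
import org.scalatest.BeforeAndAfterAll
import org.scalatest.funsuite.AnyFunSuite

class SparkJobSpec extends AnyFunSuite with BeforeAndAfterAll {

  override def beforeAll(): Unit = {
    // Same trick as above: use the working directory as hadoop.home.dir
    // and give Hadoop an empty bin/winutils.exe to locate.
    System.setProperty("hadoop.home.dir", new File(".").getAbsolutePath)
    new File("./bin").mkdirs()
    new File("./bin/winutils.exe").createNewFile()
  }

  test("spark starts without winutils errors") {
    // ... create a local SparkContext here and run a trivial job ...
  }
}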
On top of setting the HADOOP_HOME environment variable on Windows to C:\winutils, you also need to make sure you are an administrator of the machine. If you are not, and adding the environment variables prompts you for admin credentials (even under USER variables), then those variables only take effect once you start your command prompt as administrator.
That's a tricky one... the drive letter must be capitalized, for example "C:\...".