Question
My overall aim is to use sparklyr within an R Jupyter notebook on my Azure-hosted JupyterLab service. I created a new conda environment with R, sparklyr, and Java 8 (the version supported by sparklyr) as follows:
conda create -n r_spark r=3.6 r-essentials r-irkernel openjdk=8 r-sparklyr
source activate r_spark
R
> IRkernel::installspec(user=TRUE, name="rspark", displayname="R (Spark)")
When I run R within a terminal session within this environment, everything works fine:
R
> system("java -version")
openjdk version "1.8.0_152-release"
OpenJDK Runtime Environment (build 1.8.0_152-release-1056-b12)
OpenJDK 64-Bit Server VM (build 25.152-b12, mixed mode)
> library(sparklyr)
> sc <- spark_connect(master="local")
* Using Spark: 2.3.3
Registered S3 method overwritten by 'openssl':
method from
print.bytes Rcpp
> spark_disconnect(sc)
NULL
>
However, when I do the same in a notebook using the very same "R (Spark)" kernel, it picks up OpenJDK 11 instead:
library(sparklyr)
sc <- spark_connect(master="local")
Error in validate_java_version_line(master, version): Java version detected
but couldn't parse version from: openjdk version "11.0.4" 2019-07-16
Traceback: [...]
Furthermore, system("java -version", intern=TRUE) returns an empty result from within the notebook.
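(As an aside: the empty result is at least partly expected, because java -version writes its output to stderr, while intern=TRUE captures only stdout. A minimal sketch of capturing both streams instead, using base R's system2:)

# java -version prints to stderr, not stdout, so capture stderr explicitly
out <- system2("java", "-version", stdout = TRUE, stderr = TRUE)
print(out)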
How can I tell the notebook to use the Java version from its environment?
Answer 1:
@merv's comment put me on the right track:
Get the current JAVA_HOME path with Sys.getenv("JAVA_HOME") in an R console in the terminal within the environment: "/path/to/your/java".
In the notebook with the corresponding environment kernel, run Sys.setenv(JAVA_HOME="/path/to/your/java") and go!
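A minimal sketch of the workaround as a notebook cell; "/path/to/your/java" is the placeholder from above, so substitute the value that Sys.getenv("JAVA_HOME") prints in a terminal R session inside the conda environment:

# Point the notebook's R session at the environment's Java 8
# BEFORE connecting to Spark; the kernel does not inherit the
# activated environment's JAVA_HOME on its own.
Sys.setenv(JAVA_HOME = "/path/to/your/java")  # placeholder path from above

library(sparklyr)
sc <- spark_connect(master = "local")
# ... work with the connection ...
spark_disconnect(sc)

The key point is that Sys.setenv must run before spark_connect, since sparklyr checks the Java version when establishing the connection.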
Source: https://stackoverflow.com/questions/58877431/how-to-use-specific-java-version-from-a-conda-environment-inside-of-jupyter-note