Installing SparkR

不思量自难忘° 2020-11-27 03:21

I have the latest version of R (3.2.1). Now I want to install SparkR in R. After I execute:

> install.packages("SparkR")

I got back an error saying that package 'SparkR' is not available.

4 Answers
  • 2020-11-27 03:49

    Versions 2.1.2 and 2.3.0 of SparkR are now available in the CRAN archive; you can install version 2.3.0 as follows:

    install.packages("https://cran.r-project.org/src/contrib/Archive/SparkR/SparkR_2.3.0.tar.gz", repos = NULL, type="source")
    

    Note: you must first download and install the corresponding version of Apache Spark from the downloads page (https://spark.apache.org/downloads.html) so that the package works correctly.
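
    For example, a minimal sketch of starting a SparkR session against such an install (the /opt/spark path below is an assumption, not from the original answer):

    # assumes the matching Spark release was unpacked at /opt/spark (hypothetical path)
    Sys.setenv(SPARK_HOME = "/opt/spark")
    library(SparkR)
    sparkR.session(master = "local[*]", sparkHome = Sys.getenv("SPARK_HOME"))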

  • 2020-11-27 03:56

    SparkR requires not just an R package but an entire Spark backend to be pulled in. When you want to upgrade SparkR, you are upgrading Spark, not just the R package. If you want to go with SparkR, then this blog post might help you out: https://blog.rstudio.org/2015/07/14/spark-1-4-for-rstudio/.

    It should be said, though: nowadays you may want to look at the sparklyr package instead, as it makes all of this a whole lot easier.

    install.packages("devtools")
    devtools::install_github("rstudio/sparklyr")
    library(sparklyr)
    spark_install(version = "1.6.2")
    spark_install(version = "2.0.0")
    

    It also offers more functionality than SparkR as well as a very nice interface to dplyr.
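
    As a quick illustration of that dplyr interface, a minimal sketch (assuming one of the Spark versions above was installed via spark_install):

    library(sparklyr)
    library(dplyr)

    # "local" runs Spark in-process on this machine
    sc <- spark_connect(master = "local")
    mtcars_tbl <- copy_to(sc, mtcars)   # ship a data frame into Spark
    mtcars_tbl %>%
      group_by(cyl) %>%
      summarise(avg_mpg = mean(mpg))    # translated to Spark SQL by dplyr
    spark_disconnect(sc)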

  • 2020-11-27 04:00

    You can install directly from a GitHub repository:

    # install devtools if needed, then build SparkR from the Spark source tree
    if (!require('devtools')) install.packages('devtools')
    devtools::install_github('apache/spark@v2.x.x', subdir='R/pkg')
    

    You should choose the tag (v2.x.x above) corresponding to the version of Spark you use. You can find a full list of tags on the project page or directly from R using the GitHub API:

    jsonlite::fromJSON("https://api.github.com/repos/apache/spark/tags")$name
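
    For instance, a sketch to narrow that list down to the 2.x release tags (note that the GitHub API paginates, so this only returns the most recent tags):

    tags <- jsonlite::fromJSON("https://api.github.com/repos/apache/spark/tags")$name
    grep("^v2\\.", tags, value = TRUE)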
    

    If you've downloaded the binary package from the downloads page, the R library is in the R/lib/SparkR subdirectory. It can be used to install SparkR directly. For example:

    $ export SPARK_HOME=/path/to/spark/directory
    $ cd $SPARK_HOME/R/pkg/
    $ R -e "devtools::install('.')"
    

    You can also add the R lib directory to .libPaths so that library(SparkR) works in a regular R session:

    Sys.setenv(SPARK_HOME='/path/to/spark/directory')
    .libPaths(c(file.path(Sys.getenv('SPARK_HOME'), 'R', 'lib'), .libPaths()))
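
    With the library path set this way, SparkR loads like any other package; a short sketch:

    library(SparkR)
    sparkR.session(master = "local[*]")   # SPARK_HOME is picked up from the environment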
    

    Finally, you can use the sparkR shell without any additional steps:

    $ /path/to/spark/directory/bin/sparkR
    

    Edit

    According to the Spark 2.1.0 Release Notes, SparkR should be available on CRAN in the future:

    Standalone installable package built with the Apache Spark release. We will be submitting this to CRAN soon.

    You can follow SPARK-15799 to check the progress.

    Edit 2

    While SPARK-15799 has been merged, satisfying CRAN requirements proved to be challenging (see for example the discussions about 2.2.2, 2.3.1, and 2.4.0), and the package has subsequently been removed (see for example SparkR was removed from CRAN on 2018-05-01 and CRAN SparkR package removed?). As a result, the methods listed in the original post are still the most reliable solutions.

    Edit 3

    OK, SparkR is back up on CRAN again as of v2.4.1. install.packages('SparkR') should work again (it may take a couple of days for the mirrors to reflect this).

  • 2020-11-27 04:05

    I also faced a similar issue while trying to play with SparkR on EMR with Spark 2.0.0. I'll post the steps I followed to install RStudio Server, SparkR, and sparklyr, and finally connect to a Spark session on an EMR cluster:

    1. Install RStudio Server: after the EMR cluster is up and running, ssh to the master node as user 'hadoop' and download RStudio Server:

    wget https://download2.rstudio.org/rstudio-server-rhel-0.99.903-x86_64.rpm

    then install it with yum:

    sudo yum install --nogpgcheck rstudio-server-rhel-0.99.903-x86_64.rpm

    finally, add a user to access the RStudio web console:

    sudo su
    useradd username
    echo username:password | chpasswd

    2. To access the RStudio web console, you need to create an SSH tunnel from your machine to the EMR master node, like below:

    ssh -NL 8787:ec2-emr-master-node-ip.compute-1.amazonaws.com:8787 hadoop@ec2-emr-master-node-ip.compute-1.amazonaws.com&

    3. Now open any browser and go to localhost:8787 to reach the RStudio web console, and use the username:password combination to log in.

    4. To install the required R packages, you first need to install libcurl on the master node, like below:

    sudo yum update
    sudo yum -y install libcurl-devel

    5. Resolve HDFS permission issues by creating a home directory for your user:

    sudo -u hdfs hadoop fs -mkdir /user/username
    sudo -u hdfs hadoop fs -chown username /user/username

    6. Check the Spark version on EMR and set SPARK_HOME:

    spark-submit --version
    export SPARK_HOME='/usr/lib/spark/'

    7. Now, in the RStudio console, install SparkR and sparklyr like below:

    install.packages('devtools')
    # build the SparkR package matching the cluster's Spark version (2.0.0 here)
    devtools::install_github('apache/spark@v2.0.0', subdir='R/pkg')
    install.packages('sparklyr')

    library(SparkR)
    library(sparklyr)
    # point both packages at the Spark installation that ships with EMR
    Sys.setenv(SPARK_HOME='/usr/lib/spark')
    sc <- spark_connect(master = "yarn-client")
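
    Once connected, a few sanity checks (standard sparklyr calls, not part of the original steps):

    spark_version(sc)               # should report 2.0.0 on this cluster
    iris_tbl <- copy_to(sc, iris)   # sparklyr renames the dotted column names for Spark
    head(iris_tbl)
    spark_disconnect(sc)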
