The Spark Programming Guide mentions slices as a feature of RDDs (both parallel collections and Hadoop datasets): "Spark will run one task for each slice of the cluster." But elsewhere the documentation refers to partitions. Are slices and partitions the same thing, or are they different?
They are the same thing: "slice" is just an older name for a partition. The documentation was fixed for Spark 1.2, thanks to Matthew Farrellee. More details in the bug: https://issues.apache.org/jira/browse/SPARK-1701
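You can see the two names refer to one concept directly in the API. A minimal sketch, assuming a running `spark-shell` where `sc: SparkContext` is predefined: the `numSlices` argument to `parallelize` simply sets the partition count of the resulting RDD.

```scala
// In spark-shell, `sc` is the predefined SparkContext.
// The parameter is still named `numSlices`, but it determines
// the number of *partitions* of the resulting RDD.
val rdd = sc.parallelize(1 to 100, numSlices = 4)

// Reports 4: the "slices" we asked for are the RDD's partitions.
println(rdd.partitions.length)
```

The parameter name `numSlices` was kept for backward compatibility even after the documentation standardized on "partition".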