Spark running faster in Standalone mode than on YARN

Submitted by £可爱£侵袭症+ on 2019-12-01 13:58:17

Basically, your data and cluster are too small.

Big Data technologies are really meant to handle data that cannot fit on a single system. Given your cluster has 4 nodes, it might be fine for POC work but you should not consider this acceptable for benchmarking your application.
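On a cluster this small, the scheduling overhead of YARN (container negotiation, resource-manager round trips) can easily outweigh any benefit, which is one reason the standalone scheduler comes out ahead. A minimal way to compare the two managers with the same application is to submit it against each one; the master URL, application file, and resource sizes below are placeholders to adjust for your own cluster:

```shell
# Standalone mode: talk directly to the Spark master.
# spark://master-host:7077 and my_app.py are placeholders.
spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 4G \
  --total-executor-cores 8 \
  my_app.py

# YARN mode: requires HADOOP_CONF_DIR to point at your cluster config.
# --num-executors / --executor-cores replace --total-executor-cores here.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --executor-memory 4G \
  --num-executors 2 \
  --executor-cores 4 \
  my_app.py
```

Timing both runs with identical resources (8 cores, 4G per executor in this sketch) isolates the cluster-manager overhead from the job itself.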

For a frame of reference, Hortonworks's article BENCHMARK: SUB-SECOND ANALYTICS WITH APACHE HIVE AND DRUID uses a cluster of:

  • 10 nodes
  • 2x Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz with 16 CPU threads each
  • 256 GB RAM per node
  • 6x WDC WD4000FYYZ-0 1K02 4TB SCSI disks per node

This works out to 320 CPU threads, 2,560GB of RAM, and 240TB of disk.

Another benchmark from Cloudera's article New SQL Benchmarks: Apache Impala (incubating) Uniquely Delivers Analytic Database Performance uses a 21 node cluster with each node at:

  • CPU: 2 sockets, 12 physical cores total (24 hyperthreads), Intel Xeon CPU E5-2630L 0 at 2.00GHz
  • 12 disk drives at 932GB each (one for the OS, the rest for HDFS)
  • 384GB memory

This works out to 504 CPU threads (counting hyperthreads, as above), 8,064GB of RAM, and 231TB of disk.
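The totals for both benchmarks are simple per-node multiplications; a quick check (counting hyperthreads, as both articles' totals do):

```python
# Sanity-check of the cluster totals quoted above.
# All per-node numbers come straight from the hardware lists.

# Hortonworks Hive/Druid benchmark: 10 nodes
hw_nodes = 10
hw_threads = hw_nodes * 2 * 16   # 2 CPUs x 16 threads per node
hw_ram_gb = hw_nodes * 256       # 256 GB RAM per node
hw_disk_tb = hw_nodes * 6 * 4    # 6 x 4 TB disks per node

# Cloudera Impala benchmark: 21 nodes
cl_nodes = 21
cl_threads = cl_nodes * 24       # 12 cores x 2 hyperthreads per node
cl_ram_gb = cl_nodes * 384       # 384 GB RAM per node

print(hw_threads, hw_ram_gb, hw_disk_tb)  # 320 2560 240
print(cl_threads, cl_ram_gb)              # 504 8064
```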

This should give you an idea of the scale at which benchmark results on your system start to be meaningful.
