Wanted some insights on Spark execution on standalone and YARN. We have a 4-node Cloudera cluster, and currently the performance of our application while running in YARN mode
Basically, your data and cluster are too small.
Big Data technologies are really meant to handle data that cannot fit on a single system. Given that your cluster has only 4 nodes, it might be fine for POC work, but you should not consider it acceptable for benchmarking your application.
To give you a frame of reference, Hortonworks's article BENCHMARK: SUB-SECOND ANALYTICS WITH APACHE HIVE AND DRUID uses a cluster of:
- 10 nodes
- 2x Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz with 16 CPU threads each
- 256 GB RAM per node
- 6x WDC WD4000FYYZ-0 1K02 4 TB SCSI disks per node
This works out to 320 CPU threads, 2560 GB of RAM, and 240 TB of disk.
Another benchmark, from Cloudera's article New SQL Benchmarks: Apache Impala (incubating) Uniquely Delivers Analytic Database Performance, uses a 21-node cluster with each node having:
- CPU: 2 sockets, 12 total cores, Intel Xeon CPU E5-2630L 0 at 2.00GHz
- 12 disk drives at 932 GB each (one for the OS, the rest for HDFS)
- 384 GB memory
This works out to 504 CPU threads (252 physical cores), 8064 GB of RAM, and roughly 231 TB of disk.
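For clarity, here is a quick arithmetic check of how the per-node figures above multiply out into those totals (a rough sketch; the disk totals depend on GB-to-TB rounding and on whether the OS drive is counted):

```python
# Quick arithmetic check of the quoted totals, using the per-node specs listed above.

# Hortonworks cluster: 10 nodes, 2 CPUs with 16 threads each, 256 GB RAM,
# and 6 x 4 TB disks per node.
hw_nodes = 10
print(hw_nodes * 2 * 16)   # 320 CPU threads
print(hw_nodes * 256)      # 2560 GB of RAM
print(hw_nodes * 6 * 4)    # 240 TB of disk

# Cloudera cluster: 21 nodes, 2 x 6-core CPUs (12 threads per socket with
# Hyper-Threading), 384 GB RAM, and 12 x 932 GB drives per node.
cl_nodes = 21
print(cl_nodes * 2 * 6)    # 252 physical cores
print(cl_nodes * 2 * 12)   # 504 CPU threads
print(cl_nodes * 384)      # 8064 GB of RAM
print(cl_nodes * 12 * 932 // 1024)  # ~229 TB of raw disk, close to the ~231 TB quoted
```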
This should give you an idea of the scale required before a system can be considered reliable for benchmarking purposes.
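One last note on the standalone-vs-YARN comparison itself: before drawing conclusions, check that the YARN run is actually granted resources comparable to the standalone run. Unless dynamic allocation or explicit executor settings are in place, Spark on YARN falls back to just 2 executors with 1 core and 1 GB of memory each, while standalone mode claims every available core by default, and that difference alone can make YARN look much slower. Below is a minimal PySpark sketch; the application name and executor sizes are illustrative placeholders, not tuned recommendations for your cluster.

```python
from pyspark.sql import SparkSession

# Explicitly size the executors for a YARN run so the comparison against
# standalone mode is apples-to-apples. All values below are placeholders.
spark = (
    SparkSession.builder
    .appName("yarn-vs-standalone-check")       # hypothetical application name
    .master("yarn")
    .config("spark.executor.instances", "8")   # example value, tune to your nodes
    .config("spark.executor.cores", "4")       # example value
    .config("spark.executor.memory", "8g")     # example value
    .getOrCreate()
)

# defaultParallelism roughly reflects the total executor cores the application
# was actually granted; a quick sanity check that YARN honored the request.
print(spark.sparkContext.defaultParallelism)
```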