Why didn't we just change the storage routine in Hadoop (making it in-memory) instead of inventing a different tool, Apache Spark, altogether?

醉梦人生 2021-01-16 20:55

We all know that Spark keeps processed data in RAM. Both Spark and Hadoop use RAM for computation, but Spark's in-memory caching of intermediate data is what lets it access that data at blazing speed. But if that i
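To make the distinction in the question concrete, here is a toy sketch in plain Python (not the real Hadoop or Spark APIs; the function names and the temp-file stand-in for HDFS are illustrative assumptions). A MapReduce-style pipeline round-trips every intermediate result through disk, while a Spark-style pipeline keeps intermediates in memory between stages:

```python
import json
import os
import tempfile

def mapreduce_style(data, stages):
    """Each stage writes its result to a file and reads it back,
    mimicking how MapReduce persists intermediates to HDFS."""
    for _ in range(stages):
        result = [x + 1 for x in data]  # the "map" work of this stage
        with tempfile.NamedTemporaryFile(
            "w", delete=False, suffix=".json"
        ) as f:
            json.dump(result, f)
            path = f.name
        with open(path) as f:      # next stage re-reads from disk
            data = json.load(f)
        os.remove(path)
    return data

def spark_style(data, stages):
    """Intermediates stay in RAM between stages, like a cached RDD."""
    for _ in range(stages):
        data = [x + 1 for x in data]
    return data

# Both produce the same answer; only where intermediates live differs.
assert mapreduce_style([1, 2, 3], 2) == spark_style([1, 2, 3], 2)
```

The point of the sketch: making Hadoop's intermediate storage in-memory is not a small patch to one routine, because the disk round-trip is baked into the MapReduce execution model between every pair of stages.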
