How do I dynamically upgrade a worker's CPU/RAM/disk in Dataproc?

假如想象 submitted on 2019-12-12 03:36:35

Question


I created a cluster with the default settings (4 vCPUs, 15 GB RAM) in Google Dataproc. After running several Pig jobs, the cluster had 2-3 unhealthy nodes, so I upgraded the worker VMs' vCPUs (4 to 8), RAM (15 GB to 30 GB), and disk. But the Hadoop web interface showed that the worker nodes' hardware had not changed; it still reported the original amounts of vCPU/RAM/disk.

How can I dynamically upgrade a worker's CPU/RAM/disk in Dataproc?

Thanks.


Answer 1:


Dataproc does not support upgrading workers on a running cluster. To upgrade, we suggest recreating the cluster. You can, however, add extra workers to an existing cluster with the gcloud dataproc clusters update command.
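For example, scaling out by adding workers might look like the following (a minimal sketch; the cluster name, region, and worker count are placeholders, not values from the question):

```shell
# Hypothetical example: grow an existing cluster to 6 primary workers.
# "my-cluster" and "us-central1" are placeholders for your own values.
gcloud dataproc clusters update my-cluster \
    --region=us-central1 \
    --num-workers=6
```

This adds more machines of the same type rather than changing the size of each machine, which is often enough to relieve pressure from unhealthy nodes.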

It is possible to upgrade the worker machine type by stopping each worker instance, resizing it, and restarting it. However, a number of Hadoop/Spark properties must also be changed to accommodate the different container sizes (for example, YARN's yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores in yarn-site.xml), otherwise YARN will keep reporting the old capacity, as seen in the question.
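The stop/resize/restart cycle for a single worker can be sketched as below (hedged: the instance name, zone, and machine type are hypothetical placeholders; Dataproc worker VMs typically follow the <cluster>-w-<n> naming pattern):

```shell
# Hypothetical sketch: resize one worker VM in place.
# 1. Stop the VM (a machine type can only be changed while stopped).
gcloud compute instances stop my-cluster-w-0 --zone=us-central1-a

# 2. Change the machine type, e.g. from n1-standard-4 to n1-standard-8.
gcloud compute instances set-machine-type my-cluster-w-0 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8

# 3. Restart the VM.
gcloud compute instances start my-cluster-w-0 --zone=us-central1-a

# After restart, YARN on the node will still advertise the old capacity
# until yarn.nodemanager.resource.memory-mb and
# yarn.nodemanager.resource.cpu-vcores are raised in yarn-site.xml on
# each worker and the NodeManager is restarted.
```

Note this per-node approach is unsupported by Dataproc and must be repeated for every worker; recreating the cluster with the desired machine type is the simpler and safer route.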



Source: https://stackoverflow.com/questions/39073032/how-i-dynamically-upgrade-workers-cpu-ram-disk-in-dataproc
