Question
I created a cluster with the default settings (4 vCPUs, 15GB RAM) in Google Dataproc. After running several Pig jobs, the cluster had 2-3 unhealthy nodes, so I upgraded the worker VMs' vCPUs (4 to 8), RAM (15GB to 30GB), and disk. But the Hadoop web interface still showed the workers' original hardware; the vCPU/RAM/disk amounts hadn't changed.
How can I dynamically upgrade the workers' CPU/RAM/disk in Dataproc?
Thanks.
Answer 1:
Dataproc has no support for upgrading workers on a running cluster. To upgrade, we suggest recreating the cluster. You can also add extra workers with the `gcloud dataproc clusters update` command.
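For example (a minimal sketch; `CLUSTER_NAME`, the worker counts, and the machine type are placeholders, and newer gcloud versions may also require a `--region` flag):

```
# Scale out in place by adding workers to the running cluster.
gcloud dataproc clusters update CLUSTER_NAME --num-workers 6

# To get bigger workers, recreate the cluster with a larger machine type,
# e.g. n1-standard-8 (8 vCPUs, 30GB RAM).
gcloud dataproc clusters delete CLUSTER_NAME
gcloud dataproc clusters create CLUSTER_NAME \
    --worker-machine-type n1-standard-8 \
    --num-workers 4
```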
It is also possible to change the worker machine type by stopping each worker instance, resizing it, and restarting it. However, a number of Hadoop/Spark properties then have to be changed to accommodate the different container sizes, as sketched below.
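A rough sketch of that manual path, assuming the default Dataproc worker naming (`CLUSTER_NAME-w-0`, `-w-1`, ...) and a single zone, both of which you'd substitute with your own values; repeat per worker:

```
# Stop the worker, change its machine type, and bring it back up.
# A Compute Engine instance must be stopped before its type can change.
gcloud compute instances stop CLUSTER_NAME-w-0 --zone us-central1-a
gcloud compute instances set-machine-type CLUSTER_NAME-w-0 \
    --machine-type n1-standard-8 --zone us-central1-a
gcloud compute instances start CLUSTER_NAME-w-0 --zone us-central1-a
```

After restarting, YARN and Spark will still report the old resources until properties such as `yarn.nodemanager.resource.memory-mb` and `yarn.nodemanager.resource.cpu-vcores` (in yarn-site.xml) are raised to match the new hardware, which is why recreating the cluster is the simpler route.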
Source: https://stackoverflow.com/questions/39073032/how-i-dynamically-upgrade-workers-cpu-ram-disk-in-dataproc