Google Data Fusion execution error: “INVALID_ARGUMENT: Insufficient 'DISKS_TOTAL_GB' quota. Requested 3000.0, available 2048.0.”

Submitted by 梦想与她 on 2020-02-24 12:20:29

Question


I am trying to load a simple CSV file from GCS to BQ using the free version of Google Data Fusion. The pipeline fails with an error that reads:

com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Insufficient 'DISKS_TOTAL_GB' quota. Requested 3000.0, available 2048.0.
    at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:49) ~[na:na]
    at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) ~[na:na]
    at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) ~[na:na]
    at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) ~[na:na]
    at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) ~[na:na]

The same error occurs for both the MapReduce and Spark execution pipelines. I'd appreciate any help in fixing this issue. Thanks.

Regards, KA


Answer 1:


It means that the requested total size of the compute disks would put the project over its Compute Engine (GCE) quota. There are both project-wide and regional quotas; the documentation is here: https://cloud.google.com/compute/quotas

To resolve this, request a quota increase for your GCP project.
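
If you first want to confirm how much of the quota is actually in use, the snippet below is a minimal sketch using the google-cloud-compute Python client (pip install google-cloud-compute). It is not part of the original answer; the project ID and region are placeholders you would replace with your own values.

    from google.cloud import compute_v1

    PROJECT = "my-project-id"  # placeholder: your GCP project ID
    REGION = "us-central1"     # placeholder: the region the pipeline runs in

    # DISKS_TOTAL_GB is a regional quota, so fetch the region's quota list
    # and print the current usage against the limit.
    region = compute_v1.RegionsClient().get(project=PROJECT, region=REGION)
    for quota in region.quotas:
        if quota.metric == "DISKS_TOTAL_GB":
            print(f"{quota.metric}: {quota.usage:.1f} used of {quota.limit:.1f} GB")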




Answer 2:


@Ksign provided the following answer to a similar question, which can be seen here.

The specific quota behind DISKS_TOTAL_GB is Persistent disk standard (GB), as described in the disk quotas documentation.

You can edit this quota per region in your project's Cloud Console by going to the IAM & admin page => Quotas and filtering on the metric Persistent disk standard (GB).
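
As a programmatic complement to the Console, a hedged sketch like the following (again assuming the google-cloud-compute client, with a placeholder project ID) can scan every region's DISKS_TOTAL_GB quota and show where the 3000 GB request from the error would fit:

    from google.cloud import compute_v1

    PROJECT = "my-project-id"   # placeholder: your GCP project ID
    REQUESTED_GB = 3000.0       # the amount the failed pipeline asked for

    # Print each region's standard persistent disk quota and whether the
    # requested disks would fit in the remaining headroom.
    for region in compute_v1.RegionsClient().list(project=PROJECT):
        for quota in region.quotas:
            if quota.metric == "DISKS_TOTAL_GB":
                headroom = quota.limit - quota.usage
                fits = "OK" if headroom >= REQUESTED_GB else "too small"
                print(f"{region.name}: {quota.usage:.0f}/{quota.limit:.0f} GB used, "
                      f"{headroom:.0f} GB free ({fits})")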



Source: https://stackoverflow.com/questions/58996991/google-data-fusion-execution-error-invalid-argument-insufficient-disks-total
