I'm getting an Input/Output error when I try to create a directory or file in a Google Cloud Storage bucket mounted on a Linux (Ubuntu 15.10) directory.
It appears from the Insufficient Permission errors in your debug output that gcsfuse doesn't have sufficient permissions to your bucket. Probably it has read-only access.
Be sure to read the credentials documentation for gcsfuse. In particular, if you're using a service account on a GCE VM, make sure to set up the VM with the storage-full access scope.
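As a sketch of that setup step (the instance name and zone below are placeholders, not from the question), the scope can be granted when the VM is created:

```shell
# Hypothetical instance name and zone; adjust for your project.
# The storage-full scope gives this VM read/write access to Cloud Storage.
gcloud compute instances create my-gcsfuse-vm \
  --zone=us-central1-a \
  --scopes=storage-full
```

Access scopes are applied per instance, so it is usually easiest to set them at creation time rather than changing them afterwards.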
I was facing this issue intermittently, so figured I'd share what I found:
I'm using minikube for development and GCP for production.
I have the following postStart lifecycle hook:
lifecycle:
  postStart:
    exec:
      command: ['gcsfuse', '-o', 'allow_other', 'bucket', 'path']
Locally, I configured the permissions by running these two commands before creating the pod:
$ gcloud auth login
$ minikube addons enable gcp-auth
Remotely, when creating my cluster, I enabled the permissions like so:
gcloud_create_cluster:
  gcloud container clusters create cluster \
    --scopes=...storage-full...
While I was developing, I found myself updating/overwriting files within one minute of each other. Since my retention policy was set to 60 seconds, any modifications or deletions were disallowed during that window. The solution was simply to reduce the retention period.
This is not an end-all solution but hopefully someone else finds it useful.
This problem is due to a missing credential file.
Go to https://cloud.google.com/docs/authentication/production and create a service account.
Enter the following in /etc/fstab:
{{gcp bucket name}} {{mount path}} gcsfuse rw,noauto,user,key_file={{/path/to/key.json}}
If you have already mounted the bucket, unmount it first.
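A minimal sketch of that unmount/remount step, assuming the mount path from the fstab entry above (the path is a placeholder):

```shell
# Unmount the existing gcsfuse mount; use whichever matches your privileges.
fusermount -u /path/to/mount     # as the user who mounted it
# sudo umount /path/to/mount     # alternative, as root

# Remount, picking up the new /etc/fstab entry for this mount point.
mount /path/to/mount
```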
Follow this link: https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/mounting.md#credentials
Your problem does stem from insufficient permissions, but you do not need to destroy and re-create the VM with a different scope to solve it. Here is another approach that is more suitable for production systems:
Finally, define an environment variable that contains the path to the service account credentials when calling gcsfuse from the command line:
GOOGLE_APPLICATION_CREDENTIALS=/root/credentials/service_credential_file.json gcsfuse bucket_name /my/mount/point
Use the key_file option to accomplish the same thing in fstab. Both of these options are documented in the gcsfuse credentials documentation. (EDIT: this option is documented, but won't work for me.)
Interestingly, you need to use the environment variable or the key_file option even if you have configured the service account on the VM using:
gcloud auth activate-service-account --key-file /root/credentials/service_credential_file.json
For some reason, gcsfuse ignores the active credentialed account.
Using the storage-full scope when creating a VM has security and stability implications, because it allows that VM full access to every bucket that belongs to the same project. Should your file storage server really be able to overwrite the logs in a logging bucket, or read the database backups in another bucket?
This problem can also occur if you have set a retention policy on that bucket. In my case, I was getting the same input/output error when trying to update any file within the mounted folder; the root cause was a retention policy I had added that prevented deleting any file less than one month old.
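As a sketch (the bucket name is a placeholder and gsutil must already be authenticated), you can inspect and shorten the retention period like this:

```shell
# Show the bucket's current retention policy, if any.
gsutil retention get gs://my-bucket

# Shorten the retention period to 60 seconds.
gsutil retention set 60s gs://my-bucket

# Or remove the policy entirely (only possible while it is not locked).
gsutil retention clear gs://my-bucket
```

Once the retention period no longer covers the files you are writing, overwrites and deletes through the gcsfuse mount should stop failing with Input/Output errors.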