Using Kubernetes, I am running daily jobs on GKE. Based on a cron schedule configured in Kubernetes, a new container spins up each day and tries to insert some data into …
So, if your GKE project is project `my-gke`, and the project containing the services/things your GKE containers need access to is project `my-data`, one approach is to:

1. Create a service account in the `my-data` project. Give it whatever GCP roles/permissions are needed (ex. `roles/bigquery.dataViewer` if you have some BigQuery tables that your `my-gke` GKE containers need to read).
2. Create a key for that service account and download the `.json` file containing the SA credentials (a CLI sketch of these first steps follows the manifest below).
3. Create a Kubernetes secret resource for those service account credentials. It might look something like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: <contents of running 'base64 the-downloaded-SA-credentials.json'>
```
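If it helps, here is a sketch of the CLI side of steps 1–3, assuming the names used above. The service account name `my-gke-reader` is a placeholder; adjust project IDs, names, and file paths to your setup.

```sh
# Step 1: create the service account in the my-data project...
gcloud iam service-accounts create my-gke-reader --project my-data

# ...and grant it the roles it needs (BigQuery read access, as an example).
gcloud projects add-iam-policy-binding my-data \
  --member "serviceAccount:my-gke-reader@my-data.iam.gserviceaccount.com" \
  --role "roles/bigquery.dataViewer"

# Step 2: download a .json key for it.
gcloud iam service-accounts keys create the-downloaded-SA-credentials.json \
  --iam-account my-gke-reader@my-data.iam.gserviceaccount.com

# Step 3 (alternative): instead of base64-encoding by hand, let kubectl
# build the secret from the key file directly.
kubectl create secret generic my-data-service-account-credentials \
  --from-file=sa_json=the-downloaded-SA-credentials.json
```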
4. Mount the credentials in the container that needs access:
```yaml
[...]
spec:
  containers:
  - name: my-container
    volumeMounts:
    - name: service-account-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
[...]
  volumes:
  - name: service-account-credentials-volume
    secret:
      secretName: my-data-service-account-credentials
      items:
      - key: sa_json
        path: sa_credentials.json
```
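Once a pod for the job is up, a quick way to sanity-check the mount (the pod name is whatever `kubectl get pods` reports for your job):

```sh
# Look up a pod spawned by the job, then check for the mounted key file.
kubectl get pods
kubectl exec <your-pod-name> -- ls /etc/gcp
```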
5. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable in the container to point to the path of the mounted credentials:
```yaml
[...]
spec:
  containers:
  - name: my-container
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/sa_credentials.json
```
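Since your job already runs on a Kubernetes cron schedule, the two fragments above might come together in a single CronJob manifest like this. This is a sketch: the job name, schedule, and image are placeholders, and it assumes a cluster recent enough for the `batch/v1` CronJob API.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-daily-job            # placeholder name
spec:
  schedule: "0 3 * * *"         # placeholder: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: my-container
            image: gcr.io/my-gke/my-job:latest   # placeholder image
            env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/gcp/sa_credentials.json
            volumeMounts:
            - name: service-account-credentials-volume
              mountPath: /etc/gcp
              readOnly: true
          volumes:
          - name: service-account-credentials-volume
            secret:
              secretName: my-data-service-account-credentials
              items:
              - key: sa_json
                path: sa_credentials.json
```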
With that, any official GCP clients (ex. the GCP Python client, GCP Java client, gcloud CLI, etc.) should respect the `GOOGLE_APPLICATION_CREDENTIALS` env var and, when making API requests, automatically use the credentials of the `my-data` service account whose `.json` key file you created and mounted.
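For example, inside such a container the Python client picks the key up with no extra wiring. A minimal sketch, assuming the `google-cloud-bigquery` package is installed; the query is a placeholder:

```python
from google.cloud import bigquery

# The client reads GOOGLE_APPLICATION_CREDENTIALS on its own
# (Application Default Credentials); no explicit key handling needed.
client = bigquery.Client(project="my-data")

# Placeholder query to confirm the my-data credentials are in effect.
for row in client.query("SELECT 1 AS ok").result():
    print(row.ok)
```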