Is it possible to pull private images from Docker Hub to a Google Cloud Kubernetes cluster? Is this recommended, or do I need to push my private images to Google Cloud as well?
I read the documentation, but I found nothing that explains this clearly. It seems that it is possible, but I don't know if it's recommended.
There is no restriction on using any registry you want. If you just use the image name (e.g., image: nginx) in the pod specification, the image will be pulled from the public Docker Hub registry, with the tag assumed to be :latest.
As mentioned in the Kubernetes documentation:
The image property of a container supports the same syntax as the docker command does, including private registries and tags. Private registries may require keys to read images from them.
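For illustration, here is a minimal sketch of a Pod that references a fully qualified private image; the registry host, repository, and tag below are placeholders, not values from the question:
# Sketch only: registry.example.com/myteam/myapp:1.2.3 is a placeholder image reference.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/myapp:1.2.3   # full registry/repository:tag syntax
EOF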
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR), when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag). All pods in a cluster will have read access to images in this registry.
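As a hedged sketch of what that looks like (my_project and image:tag being the placeholders from the example above):
# On GKE/GCE the node's credentials already grant read access to the project's GCR.
kubectl run gcr-demo --image=gcr.io/my_project/image:tag --restart=Never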
Using AWS EC2 Container Registry
Kubernetes has native support for the AWS EC2 Container Registry, when nodes are AWS EC2 instances. Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition. All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.
Using Azure Container Registry (ACR)
When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.
You first need to create a registry and generate credentials, complete documentation for this can be found in the Azure container registry documentation.
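A minimal sketch of those steps with the azure-cli, assuming a resource group named myResourceGroup and a registry named myRegistry (both placeholders):
# Create the registry, enable the admin account, and read its credentials (all names are placeholders).
az acr create --resource-group myResourceGroup --name myRegistry --sku Basic
az acr update --name myRegistry --admin-enabled true
az acr credential show --name myRegistry
# Log in with the username/password printed above; images are then referenced as myregistry.azurecr.io/...
docker login myregistry.azurecr.io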
Configuring Nodes to Authenticate to a Private Repository
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
- Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
- View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
- Get a list of your nodes, for example:
- if you want the names:
nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
- if you want to get the IPs:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
- Copy your local .docker/config.json to the home directory of root on each node. For example:
for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
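To check that the node-level credentials work, one option (a sketch; the image name is a placeholder) is to start a Pod that uses one of your private images and confirm it does not fail to pull:
kubectl run pull-test --image=registry.example.com/myteam/private-app:1.0 --restart=Never
kubectl get pod pull-test   # should reach Running rather than ErrImagePull / ImagePullBackOff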
Use cases:
There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions.
- Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
- Use public images on the Docker hub.
- No configuration required.
- On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
- Cluster running some proprietary images which should be hidden from those outside the company, but visible to all cluster users.
- Use a hosted private Docker registry.
- It may be hosted on the Docker Hub, or elsewhere.
- Manually configure .docker/config.json on each node as described above.
- Or, run an internal private registry behind your firewall with open read access.
- No Kubernetes configuration is required.
- Or, when on GCE/Google Kubernetes Engine, use the project’s Google Container Registry.
- It will work better with cluster autoscaling than manual node configuration.
- Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets (a sketch of this approach follows the list).
- Cluster with proprietary images, a few of which require stricter access control.
- Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
- Move sensitive data into a “Secret” resource, instead of packaging it in an image.
- A multi-tenant cluster where each tenant needs its own private registry.
- Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
- Run a private registry with authorization required.
- Generate a registry credential for each tenant, put it into a secret, and populate the secret into each tenant's namespace.
- The tenant adds that secret to imagePullSecrets of each namespace.
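For the imagePullSecrets approach mentioned above, a minimal sketch against Docker Hub could look like this (the username, password, email, and image name are placeholders):
# Create a Docker Hub pull secret in the current namespace.
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
# Reference the secret from the Pod spec.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-hub-demo
spec:
  containers:
  - name: app
    image: <your-username>/private-image:tag
  imagePullSecrets:
  - name: regcred
EOF
The same secret can also be attached to the namespace's default service account if you prefer not to repeat imagePullSecrets in every Pod spec.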
Consider reading the Pull an Image from a Private Registry document if you decide to use a private registry.
There are 3 types of registries:
- Public (Docker Hub, Docker Cloud, Quay, etc.)
- Private: This would be a registry running on your local network. An example would be running a Docker container with the registry image.
- Restricted: A registry that needs credentials for access. Google Container Registry (GCR) is an example.
As you say, in a public registry such as Docker Hub you can have private images.
Private and Restricted registries are obviously more secure, as one of them is (ideally) not even exposed to the internet, and the other one needs credentials.
I guess you can achieve an acceptable security level with any of them, so it is a matter of choice. If you feel your application is critical and you don't want to run any risk, you should have it in GCR or in a private registry.
If you feel it is important, but not critical, you could have it in any public registry as a private image. This will still give you a layer of security.
Source: https://stackoverflow.com/questions/50826766/google-cloud-kubernetes-accessing-private-docker-hub-hosted-images