Simpler setup for Hyperledger Fabric on Kubernetes using Docker-in-Docker
Hyperledger Fabric is a distributed blockchain network that allows users to define the behavior of their ledgers using conventional general-purpose programming languages. This user-defined blockchain code is called chaincode. It’s executed by peers to determine what effect a transaction has on the state of the ledger. Currently, Fabric supports chaincode written in Go and Node.js, but the goal is to support other languages as well.
How Peers run Chaincode
Chaincode is distributed as source code. It’s up to each peer to build and run it. First, the peer uses a builder image for the specified programming language (e.g. Golang) to create an executable image from the chaincode source:
- Make sure the builder image is available to the Docker daemon. Pull the image from Docker hub if necessary.
- Tell the Docker daemon to start a container from the builder image and send the chaincode source code to the new builder container.
- The builder container creates a new image for the built chaincode.
The built chaincode image is not uploaded to a remote registry, but it is available locally to the Docker daemon that started the builder container. This is fine as long as the peer uses the same Docker daemon to run the builder container and the chaincode container.
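The build flow above can be sketched as equivalent Docker CLI commands. This is an illustration only: the peer drives the Docker daemon through its API rather than the CLI, and the image names, chaincode path, and build command shown here are assumptions, not exact Fabric values.

```shell
# 1. Make sure the Go builder image is available to the local daemon.
docker pull hyperledger/fabric-ccenv:latest

# 2. Run a builder container, handing it the chaincode source
#    (path and build command are illustrative).
docker run --name cc-build -v /path/to/chaincode:/chaincode \
  hyperledger/fabric-ccenv:latest \
  sh -c 'cd /chaincode && go build -o /chaincode/chaincode'

# 3. Commit the result as a new chaincode image. It now exists only
#    in this daemon's local image store -- nothing is pushed anywhere.
docker commit cc-build mycc-1.0
docker images mycc-1.0
```

Because step 3 leaves the image only in the local store, a different daemon asked to run `mycc-1.0` would fail to find it, which is exactly why the peer must keep using the same daemon.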
Once the chaincode image is built, the peer can use it to execute the chaincode:
- Tell the Docker daemon to start a container from the chaincode image. It must be the same Docker daemon as before, otherwise the chaincode image won’t be available.
- The chaincode container uses a provided IP address (e.g. localhost) or DNS name (e.g. peer0.org1) to connect to the peer.
The current implementation of Fabric peers uses the environment variable CORE_PEER_CHAINCODELISTENADDRESS to configure the address the chaincode connects to. It’s simplest to use a predetermined value like localhost or the DNS name of the peer, but it may be possible to determine the correct IP dynamically after the peer is deployed.
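For example, the peer might be configured like this (7052 is Fabric’s conventional chaincode listen port; the DNS-name variant assumes a peer named peer0.org1):

```shell
# Address the chaincode container connects back to.
# localhost only works if the chaincode container ends up in the
# same network namespace as the peer.
export CORE_PEER_CHAINCODELISTENADDRESS=localhost:7052

# Alternative: use the peer's DNS name, which the chaincode
# container must then be able to resolve.
# export CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1:7052
```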
Trouble with Chaincode in Kubernetes
The requirements I described above aren’t well-suited to Kubernetes. Kubernetes is a container orchestration framework. At the most basic level, it’s a system for managing Pods — groups of containers that share a network namespace. As such, Kubernetes expects applications to run their code in Pods. This is the first obstacle for running chaincode on Kubernetes.
Running Chaincode using Host Docker
The current standard is to mount the host machine’s Docker socket into the peer’s container. The peer then uses the host’s Docker daemon to build and run chaincode. This strategy works, but it has flaws.
Since the builder container and the chaincode container are created without going through the Kubernetes API, Kubernetes cannot manage them. For example, if the peer crashes or must be rescheduled onto a different machine, Kubernetes won’t clean up the chaincode container.
Another problem is that each Kubernetes Pod gets its own IP and network namespace. When the chaincode container is created using the host’s Docker daemon, it’s not part of the Pod. This means the chaincode can’t connect to the peer using localhost. This wouldn’t be so bad, but we can’t use DNS names here either. Internally, Kubernetes uses its own DNS service, kube-dns. Pods are set up to use kube-dns, but the chaincode isn’t part of a Pod, so it won’t be able to resolve the IP address for peer0.org1.
Docker-in-Docker
Docker-in-Docker (DinD) is a Docker daemon running inside a Docker container. The DinD daemon is separate from the host’s Docker daemon. Containers created by one daemon are not visible to the other. We can use DinD in a Pod to solve the problems of running chaincode on Kubernetes.
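The isolation between the two daemons is easy to demonstrate on any machine with Docker (docker:dind is the official DinD image; disabling TLS here just keeps the sketch short — don’t do that in production):

```shell
# Start a Docker daemon inside a container (privileged mode is required).
docker run --privileged -d --name dind \
  -e DOCKER_TLS_CERTDIR="" docker:dind

# Give the inner daemon a moment to come up.
sleep 5

# Start a container via the INNER daemon...
docker exec dind docker run -d --name inner alpine sleep 60

# ...and note that the HOST daemon cannot see it.
docker ps --filter name=inner
```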
No more orphaned chaincode
DinD runs as a sidecar in the peer’s Pod: a container that runs alongside the main application (the peer). In addition to sharing a network namespace, containers in a Pod share the same lifecycle; they are scheduled, started, and torn down together. If the peer’s Pod crashes or is deleted, the DinD container, and with it the chaincode containers, are cleaned up as well.
Chaincode connects to Peer at localhost
Now, the chaincode container is started by a Docker daemon inside the Pod’s network namespace. This means the DinD daemon, the chaincode container, and the peer all have the same IP address. The peer can then be configured with CORE_PEER_CHAINCODELISTENADDRESS set to localhost:7052. When the chaincode container starts, it doesn’t need kube-dns to find the peer at localhost:7052.
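Putting the sidecar and the localhost configuration together, a minimal sketch of the peer Pod might look like the following. The image tags, port, and names are illustrative, and a real deployment also needs volumes, TLS material, and the rest of the peer configuration; CORE_VM_ENDPOINT is the peer setting that points it at a Docker daemon.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: peer0-org1
spec:
  containers:
    - name: peer
      image: hyperledger/fabric-peer:1.4   # illustrative tag
      env:
        # Chaincode connects back to the peer over the Pod-local loopback.
        - name: CORE_PEER_CHAINCODELISTENADDRESS
          value: localhost:7052
        # Point the peer at the DinD sidecar instead of the host daemon.
        - name: CORE_VM_ENDPOINT
          value: http://localhost:2375
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true                   # DinD requires privileged mode
      env:
        # Disable TLS so the peer can reach dockerd over plain TCP
        # inside the Pod (sketch only; not for production).
        - name: DOCKER_TLS_CERTDIR
          value: ""
      args: ["--host=tcp://127.0.0.1:2375"]
```

Because both containers share the Pod’s network namespace, the peer reaches dockerd at localhost:2375, and every chaincode container dockerd starts reaches the peer at localhost:7052.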
Comparison to Other Solutions
The current standard for Fabric on Kubernetes is to mount the host’s Docker socket into the peer Pod. In order for the chaincode container to find the peer, it needs to use the peer’s DNS name. This means the host’s Docker daemon must be configured to use kube-dns. This isn’t a standard step in setting up a Kubernetes cluster, so users with existing clusters have to reconfigure all their worker nodes. It’s not a very Kubernetes-native approach, and it doesn’t address the issue of cleaning up orphaned chaincode containers.
Another approach being investigated is to run chaincode as a Pod. This way, Kubernetes can properly manage the chaincode — scheduling based on resource requirements, setting up DNS, etc. It’s much more complex than the DinD approach, but it could be the right approach at some point in the future.
Conclusion
For now at least, using Docker-in-Docker as a sidecar to the peer solves the main problems with running chaincode in Hyperledger Fabric on Kubernetes. It’s a drop-in solution that doesn’t require changes to Fabric or the user’s Kubernetes cluster.
As a final note, it makes sense that Docker-in-Docker is useful for building and running chaincode. Building and running arbitrary code is essentially what Continuous Integration does, and CI is a popular use case for DinD. Leave your thoughts and questions in the comments!
Source: oschina
Link: https://my.oschina.net/u/2306127/blog/2043439