In CI/CD how to manage dependency between frontend and backend?

Asked by 孤街浪徒 on 2021-02-06 05:40

I'll describe my setup to make the problems less abstract, but they don't seem specific to my case.

Context

We have a Python-Django backend and a VueJS frontend.

2 Answers
  • 2021-02-06 06:05

    Branch dependency for tests

    Sometimes when we develop branch feature-1 in the frontend, it must be tested against branch feature-1 from the backend.

    In our current setup all the commits in the frontend are tested against the deployed backend (to avoid replicating the backend in CI, only the production API address is used), resulting in false tests results in such cases.

    and

    Backend integration tests

    When a commit is done to the backend, it can break the frontend.

    Currently the backend isn't tested against the frontend (only the other way).

    In my current company, we use Django for the frontend (FE) and the backend (BE), each in its own repository. We follow trunk-based development and use GitLab for CI/CD. I rolled out what you mentioned here and it didn't feel awkward at all.

    Here is the relationship between the environments and this branching model.

    | Branch | Example | Environment |
    |---|---|---|
    | master | master | staging |
    | release-v* | release-v1.1.10 | preprod |

    Tags:

    | Tag | Example | Environment |
    |---|---|---|
    | v&lt;MAJOR&gt;.&lt;MINOR&gt;.&lt;PATCH&gt; | v1.1.10 | Production |

    Once a branch or tag is created, or a commit is pushed to one of the defined branches, GitLab triggers an automatic build/deployment.
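    A minimal .gitlab-ci.yml sketch of this kind of trigger scheme (job names, the deploy.sh script and the branch/tag regexes are illustrative, not the exact setup):

    ```yaml
    stages:
      - build
      - deploy

    build:
      stage: build
      script:
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"

    deploy-staging:
      stage: deploy
      script:
        - ./deploy.sh staging          # hypothetical deployment script
      rules:
        - if: '$CI_COMMIT_BRANCH == "master"'

    deploy-preprod:
      stage: deploy
      script:
        - ./deploy.sh preprod
      rules:
        - if: '$CI_COMMIT_BRANCH =~ /^release-v/'

    deploy-production:
      stage: deploy
      script:
        - ./deploy.sh production
      rules:
        - if: '$CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/'
    ```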

    The frontend has to be tested against the backend; we do that with feature branches.

    feature/&lt;branch-summary&gt;

    Developers need to ensure the same feature branch name exists in both the FE and BE repositories.

    A URL is generated for each frontend/backend deployment, of the form &lt;branch-summary&gt;-fe.&lt;domain&gt;.com for the frontend and &lt;branch-summary&gt;-be.&lt;domain&gt;.com for the backend.

    E.g., if feature/mytask exists in both the FE and BE repos, the FE URL is mytask-fe.&lt;domain&gt;.com and the BE URL is mytask-be.&lt;domain&gt;.com.

    You could use docker-compose, but in our case the application is deployed to Kubernetes using Helm. Going a bit further into this implementation: my FE and BE each have a k8s Ingress managed by Traefik. The DNS record for each URL is created automatically (by a k8s DNS controller), and the backend uses a DB and Redis which are created every time a feature branch is created or changed. Thanks to the feature-branch naming convention, the FE knows how to connect to the BE, and the BE knows how to use its own DB and Redis.

    E.g.: helm upgrade --install ${RELEASE_NAME} ...

    The RELEASE_NAME is extracted from feature/&lt;branch-summary&gt; (not exceeding 63 characters).
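    A sketch of such a deploy job; the chart path, value keys and domain are placeholders. GitLab's CI_COMMIT_REF_SLUG is the branch name lower-cased, slugified and capped at 63 characters, which conveniently fits the release-name limit:

    ```yaml
    deploy-feature:
      stage: deploy
      rules:
        - if: '$CI_COMMIT_BRANCH =~ /^feature\//'
      script:
        - export RELEASE_NAME="$CI_COMMIT_REF_SLUG"    # e.g. feature-mytask
        - export FEATURE="${RELEASE_NAME#feature-}"    # e.g. mytask
        - helm upgrade --install "$RELEASE_NAME" ./chart --set ingress.feHost="${FEATURE}-fe.example.com" --set ingress.beHost="${FEATURE}-be.example.com"
    ```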

    Another thing you may want to consider is how to initialize data for a feature-branch deployment. In my case:

    • Developers populate the data themselves (maybe by running a script as an init container in k8s). If a developer pushes a commit to the same feature branch, this triggers a re-deployment and all DB/Redis data is re-initialized; the developer may need to re-populate the data, and QC may need to restart their testing of the feature from the beginning.

    • To avoid accumulating too many resources in k8s and branches in the GitLab repositories, I set up a removal job in GitLab CI that deletes a feature branch in the BE/FE repository, which in turn triggers deletion of the corresponding deployment in k8s.
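    In GitLab CI, one way to wire that cleanup is the "review app" environment pattern: if the deploy job declares an environment with an on_stop job, GitLab triggers the stop job automatically when the corresponding branch is deleted. A minimal sketch (environment and job names are illustrative; the stop job just uninstalls the Helm release):

    ```yaml
    deploy-feature:
      # ...same deploy job as above, plus:
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        on_stop: stop-feature

    stop-feature:
      stage: deploy
      rules:
        - if: '$CI_COMMIT_BRANCH =~ /^feature\//'
          when: manual
      variables:
        GIT_STRATEGY: none             # the branch may already be gone when this runs
      script:
        - helm uninstall "$CI_COMMIT_REF_SLUG"
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        action: stop
    ```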

  • 2021-02-06 06:30

    Deployment synchronization

    Imagine we're doing a major change in both frontend and backend, and both will become incompatible with previous versions. So the new versions must be deployed simultaneously.

    In our current setup we have to first deploy the backend (what will break the deployed frontend) and then deploy the new frontend, fixing production, but with a "down" period.

    I'm not a Portainer user, but maybe you could rely on some docker-compose.yml file or so, gathering both the backend and the frontend versions? In that case they could be updated at the same time…
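    For example, a minimal docker-compose.yml sketch where both versions sit side by side in one file, so a single commit bumps them together (registry, image names and ports are placeholders):

    ```yaml
    version: "3.8"
    services:
      backend:
        image: registry.example.com/project/backend:2.0.0    # bump both tags in the same commit
        ports:
          - "8000:8000"
      frontend:
        image: registry.example.com/project/frontend:2.0.0
        ports:
          - "80:80"
        depends_on:
          - backend
    ```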

    Indeed, according to portainer/portainer#1963 and this doc page, Portainer seems to support both docker-compose and Swarm stacks.

    Also, Docker Swarm provides some features to perform service upgrades without downtime, as documented in this blog, but I don't know to what extent this can be configured in Portainer.
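    For instance, in a Swarm stack file the rolling-update behaviour can be declared per service; a sketch (whether Portainer exposes these settings, I can't say):

    ```yaml
    version: "3.8"
    services:
      backend:
        image: registry.example.com/project/backend:2.0.0
        deploy:
          replicas: 2
          update_config:
            order: start-first         # start the new task before stopping the old one
            parallelism: 1
            failure_action: rollback   # revert to the previous image if the update fails
    ```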

    Possible Solutions

    I'm not sure what should be used to specify versions here: commit hash, git tag, branch, Docker image version... The last maybe avoids having to rebuild and test images, but I think image names and versions are fixed in Portainer's stack definitions, and not easy to update automatically.

    While commit hashes are precise identifiers, they are probably not convenient enough to identify incompatible versions. So you may want to rely on semantic versioning using tags (and/or branches) on your Git backend repo.

    Then, you may tag the corresponding Docker images accordingly, introducing some synonymous tags if need be. For example, assuming the backend has been released with versions 1.0.0, 1.0.1, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.0.2, a standard practice is to tag the Docker images like this:

    • project/backend:2.0.2 = project/backend:2.0 = project/backend:2
    • project/backend:2.0.1
    • project/backend:2.0.0
    • project/backend:1.1.1 = project/backend:1.1 = project/backend:1
    • project/backend:1.1.0
    • project/backend:1.0.1 = project/backend:1.0
    • project/backend:1.0.0

    (removing old images if need be)
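    A sketch of how these synonymous tags could be maintained automatically when a vX.Y.Z Git tag is pushed, assuming the full-version image was already built and pushed earlier in the pipeline (the image path is a placeholder):

    ```yaml
    tag-aliases:
      stage: deploy
      rules:
        - if: '$CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/'
      script:
        - FULL="${CI_COMMIT_TAG#v}"    # e.g. 2.0.2
        - MINOR="${FULL%.*}"           # e.g. 2.0
        - MAJOR="${FULL%%.*}"          # e.g. 2
        - docker pull "project/backend:$FULL"
        - docker tag "project/backend:$FULL" "project/backend:$MINOR"
        - docker tag "project/backend:$FULL" "project/backend:$MAJOR"
        - docker push "project/backend:$MINOR"
        - docker push "project/backend:$MAJOR"
    ```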

    Backend integration tests

    Currently the backend isn't tested against the frontend (only the other way).

    OK but I guess your approach is fairly standard (the frontend depends on the backend, not the other way around).

    Anyway, even if the system under test is a frontend, it may be worth implementing unit tests (which are less costly to develop and run than integration tests), so that a first stage in the pipeline quickly runs these unit tests before triggering the necessary integration tests.
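    In GitLab CI this just means putting the unit tests in an earlier stage, e.g. (a sketch; the npm script names are hypothetical):

    ```yaml
    stages:
      - unit
      - integration

    unit-tests:
      stage: unit
      image: node:18
      script:
        - npm ci
        - npm run test:unit            # hypothetical script name

    integration-tests:
      stage: integration
      image: node:18
      script:
        - npm ci
        - npm run test:integration     # only runs if the unit stage succeeded
    ```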

    Branch dependency for tests

    In our current setup all the commits in the frontend are tested against the deployed backend (to avoid replicating the backend in CI, only the production API address is used), resulting in false tests results in such cases.

    This may not be flexible enough: in general, CI/CD assumes the integration tests run against a dedicated backend instance (a "dev" or "pre-prod" server), and only if all integration tests and system tests pass is the image deployed to the "prod" server (and monitored, etc.)

    I see from your post that you are using GitLab CI, which has some native Docker support, so maybe this could be implemented easily.
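    For instance, GitLab CI can start the backend image as a throwaway service container next to the frontend test job; a sketch (the image path, port and variable name are assumptions, and the backend may additionally need a database service or some fixtures):

    ```yaml
    integration-tests:
      stage: integration
      image: node:18
      services:
        - name: registry.gitlab.com/yourgroup/backend:1    # placeholder image path
          alias: backend
      variables:
        API_URL: "http://backend:8000"    # how the tests reach the service container
      script:
        - npm ci
        - npm run test:integration
    ```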

    A couple of hints:

    • Assume the backend has been modified in a non-backward-compatible way, and the corresponding Docker image is available in a registry (e.g. that of GitLab CI). Then you could just change the specification of that image in the frontend configuration (e.g., replacing project/backend:1 with project/backend:2 or so in the GitLab CI config file); see the sketch after these hints.

    • Your backend is probably implemented as a REST Web Service, in which case you might also want to add a version prefix in your URL, so that when you switch from project/backend:1 to project/backend:2 (with incompatible changes), both versions could be deployed at the same time if need be, to the URLs https://example.com/api/v1/… and https://example.com/api/v2/…
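    Concretely, the backend version used by the frontend pipeline could be pinned in a single CI variable, so that bumping it is a one-line change (a sketch; the variable and image names are illustrative):

    ```yaml
    variables:
      BACKEND_IMAGE: "project/backend:1"    # bump to project/backend:2 along with the breaking frontend change

    integration-tests:
      stage: integration
      services:
        - name: "$BACKEND_IMAGE"
          alias: backend
      script:
        - npm ci
        - npm run test:integration
    ```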

    Also, beyond the solution of having only two repos with CI/CD (the backend tested on its own, and the frontend tested against the relevant version of the backend), the solution you suggested in the first place may also be considered:

    For the deployment synchronization problem I thought about creating another repository that would have only one file specifying the versions of the frontend and backend that should be deployed. A commit in this repository would result in both Portainer services' webhooks being "curled" for update (backend and frontend). This doesn't guarantee a simultaneous update (one may fail in Portainer and there would be no rollback), but it would be better than the current setup.

    You could slightly modify this approach to avoid one such deployment failure: you could add some CI setup to that third repo, which would only contain a docker-compose.yml file or so, and move the integration tests from the frontend CI to that "compose" CI…

    (FYI this approach is similar to the one suggested in this DigitalOcean tutorial, where the integration testing is achieved thanks to some docker-compose.test.yml file.)
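    Following that convention, the third repo could hold a docker-compose.test.yml along these lines (a sketch; the "sut" service name is the convention used in that tutorial, while the image names and test command are placeholders):

    ```yaml
    version: "3.8"
    services:
      backend:
        image: registry.example.com/project/backend:2.0.0
      sut:                                 # "system under test": runs the integration tests and exits
        image: registry.example.com/project/frontend-tests:latest
        environment:
          API_URL: "http://backend:8000"
        command: npm run test:integration
        depends_on:
          - backend
    ```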
