I've been developing a workflow for practicing a mostly automated continuous deployment cycle for a PHP project. I'd like some feedback on possible process or tooling improvements.
I don't know how much of this is specific to PHP, but you can replace at least some of the code review stage with static analysis.
The quality of code reviews depends on the quality of the reviewers, while static analysis relies on established best practices and patterns, and is fully automatic. I'm not saying that code reviews should be abandoned; I simply think much of that work can be done offline (see the sketch after the links below).
See
http://en.wikipedia.org/wiki/Static_code_analysis
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
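For PHP specifically, analysers like PHPStan and Psalm exist. As a minimal sketch, assuming PHPStan is installed via Composer, running `vendor/bin/phpstan analyse src` catches mistakes like the one below with no human in the loop (the class and file are made up for illustration):

```php
<?php
// src/Order.php — illustration only: the kind of defect a static
// analyser reports automatically, reviewer or no reviewer.

class Order
{
    public function total(): float
    {
        return 0.0;
    }
}

$order = new Order();
$order->toal(); // typo: PHPStan reports "Call to an undefined method Order::toal()."
```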
It's important to make your tests extremely fast: no IO, and the ability to run them in parallel and distributed. I don't know how applicable this is to PHP, but if you can test units of code against an in-memory database and mock out the environment, you'll be better off.
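To make that concrete, here's a minimal sketch of an IO-free PHP unit test, assuming PHPUnit and the pdo_sqlite extension; the UserRepository class under test is hypothetical:

```php
<?php
// A sketch of an IO-free unit test. 'sqlite::memory:' creates a throwaway
// database in RAM: no disk IO and no shared state, so tests can run in
// parallel without stepping on each other.

use PHPUnit\Framework\TestCase;

final class UserRepositoryTest extends TestCase
{
    public function testFindsUserByEmail(): void
    {
        $pdo = new PDO('sqlite::memory:');
        $pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)');
        $pdo->exec("INSERT INTO users (email) VALUES ('alice@example.com')");

        $repo = new UserRepository($pdo); // hypothetical class under test
        $user = $repo->findByEmail('alice@example.com');

        $this->assertSame('alice@example.com', $user->email);
    }
}
```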
If you have QA/QC or any other human in the path between commit and production, you'll have a problem getting to full continuous deployment. The key is trusting your testing, monitoring, and automated response (an immune system) enough to eliminate error-prone steps involving humans from your system.
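As a rough sketch of what such an "immune system" check might look like right after a deploy (the /health endpoint, the example.com host, and the deploy.sh rollback script are all assumptions, not anything from the question):

```php
<?php
// post-deploy-check.php — a sketch: probe the app after deploying and
// respond automatically instead of waiting for a human gate.

$ch = curl_init('https://example.com/health'); // hypothetical health endpoint
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status !== 200) {
    // Automatic response: roll back the release and fail the pipeline.
    passthru('./deploy.sh --rollback'); // hypothetical rollback script
    exit(1);
}
```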
How many people are working on it? If you only have maybe 10 or 20 developers, I'm not sure it will make sense to put such an elaborate workflow into place. If you're managing 500, sure...
My personal feeling is KISS: Keep It Simple, Stupid... You want a process that's both efficient and, more importantly, simple. If it's complicated, either nobody is going to do it right, or over time parts will slip. If you make it simple, it will become second nature, and after a few weeks nobody will question the process (well, the semantics of it anyway)...
And my other personal feeling is: always run all of your UNIT tests. That way, you can skip a whole decision tree in your flow chart. After all, what's more expensive: a few minutes of CPU time, or the brain cycles to understand the difference between a partial test pass and a total test failure? Remember, a fail is a fail, and there's no practical reason code that could fail the build should ever be shown to a reviewer.
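In pipeline terms that's one unconditional step with no "which tests?" branch; a sketch assuming PHPUnit:

```php
<?php
// ci-gate.php — a sketch: always run the FULL unit suite before review.
// There is deliberately no "changed files only" logic to reason about.
passthru('vendor/bin/phpunit', $exitCode);
exit($exitCode); // a fail is a fail: failing code never reaches a reviewer
```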
Now, Selenium tests are typically quite expensive, so I might agree to push those off until after the reviewer approves. But you'll need to think about that one...
Oh, and if I were implementing this, I would put a formal QC stage in there. I want human testers looking at any changes being made. Yes, Selenium can verify the things you know about, but only a human can find the things you didn't think of. Feed their findings back into new Selenium and integration tests to prevent regressions, as in the sketch below...
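For example, a tester's bug report can become a permanent regression test. A sketch assuming the php-webdriver package and a Selenium server on localhost:4444 (the staging URL, selector, and expected message are illustrative):

```php
<?php
// A human tester found that checking out an empty cart crashed; this
// pins the fix down so the bug can never silently return.

use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;
use PHPUnit\Framework\TestCase;

final class CheckoutRegressionTest extends TestCase
{
    public function testEmptyCartShowsMessageInsteadOfFatalError(): void
    {
        $driver = RemoteWebDriver::create(
            'http://localhost:4444/wd/hub',
            DesiredCapabilities::chrome()
        );

        $driver->get('https://staging.example.com/checkout'); // hypothetical URL
        $message = $driver->findElement(WebDriverBy::cssSelector('.cart-empty'))->getText();

        $this->assertSame('Your cart is empty.', $message);
        $driver->quit();
    }
}
```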
All handovers between functions have the effect of slowing things down, and with that comes an increase in the amount of change (and hence risk) that goes into a deployment.
Manual quality gates are by definition an acceptance that quality has not been built in from the start. The only reason code needs to be reviewed later is because there is some belief that the quality is not good enough already.
I'm currently trying to remove formal code review from our pipelines for exactly this reason. It causes feedback delays, and quoting Martin Fowler:
"The whole point of Continuous Integration is to provide rapid feedback. Nothing sucks the blood of a CI activity more than a build that takes a long time. "
Instead, I'd like to make code review something that submitters request when needed, or that otherwise happens at the time of coding by team members, perhaps a la XP pair programming.
I think it should be your goal that once the code is merged to source control, there is absolutely no more manual intervention.