Recently I saw a case study about how a very large consumer packaged goods company “improved” their product development process. I put “improved” in quotes for reasons explained below.
The study identified an important insight about the company’s stage-gate-style process: project teams were doing insufficient definition work at each stage, especially the early stages. This caused poor decisions that:
- continued projects that should be canceled,
- delayed projects that would actually provide a positive impact, and
- overburdened teams with too much work in process, reducing productivity.
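The third point, too much work in process, can be made concrete with Little’s law: average cycle time equals work in process divided by throughput. A minimal sketch, with entirely hypothetical numbers (none of these figures come from the case study):

```python
# Little's law: average cycle time = WIP / throughput.
# All numbers below are hypothetical, for illustration only.

def cycle_time(wip: int, throughput_per_week: float) -> float:
    """Average time a project spends in the system, in weeks."""
    return wip / throughput_per_week

# The same team (finishing 2 projects per week) under two WIP loads:
focused    = cycle_time(wip=4,  throughput_per_week=2)   # 2.0 weeks each
overloaded = cycle_time(wip=20, throughput_per_week=2)   # 10.0 weeks each

print(f"4 projects in process:  {focused:.1f} weeks each")
print(f"20 projects in process: {overloaded:.1f} weeks each")
```

Piling on more projects doesn’t make the team finish more per week; it just makes every project take longer to emerge.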
The company’s solution? Do what was obvious on the surface: more definition work on deliverables in each stage.
The insight about the effects of an ineffective process was a good start. The added definition work did increase the efficiency of each stage, reducing rework and the need to revisit earlier stages.
There are, however, two flaws in the solution:
- The solution focused on improving each stage individually and perfecting the stage “deliverables.”
- The solution was measured with vanity metrics instead of addressing the root cause: failing to meet customer and company needs.
Flaw: the “improvements” were in each stage’s internal “products”
The stage gate process was a classical “waterfall” approach where the following activities are done in sequence without overlaps:
- first do analysis,
- then come up with requirements,
- design a solution to meet the requirements,
- build the solution,
- test to check whether the solution meets the requirements, and
- assuming all is good, release the product to users.
The case study was correct in that the internal “products” – the documents produced as the result of one stage for the next – were causing lots of unnecessary work later. This is a classic issue with waterfall approaches. The intended “customers” of the documents don’t really see them until they are finished at stage end. Any issue means the documents have to go “backwards” to the people who originally produced them. Beyond the direct rework, changes in earlier stages can multiply the rework by affecting multiple later documents.
The flaw was in focusing only on improving each stage and its “products.” The real goal is to produce something of value at the end of the waterfall. However, in a waterfall approach, early stages only produce things for the next stage’s work … hence the focus on documents to be “consumed” in the next stage. The documents are mostly necessary to transmit in-progress work to the next stage; a “customer product” is only built in the “build” stage. It is easy to focus on achieving stage goals and lose sight of the end goal. When the focus is on stage goals, “improvements” may help the stage but do nothing for, or even worsen, the end result. In systems thinking terms, optimizing any single stage sub-process in a stage gate process will sub-optimize the overall process.
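One way to see the sub-optimization: in a sequential pipeline, end-to-end throughput is set by the slowest stage, so “improving” any other stage changes nothing the customer ever sees. A small sketch, with hypothetical stage rates:

```python
# Hypothetical waterfall: end-to-end throughput is limited by the
# slowest stage (the bottleneck). Rates are items completed per week.

def pipeline_throughput(stage_rates) -> int:
    """Items per week the whole pipeline can actually deliver."""
    return min(stage_rates)

stages = {"analysis": 10, "requirements": 8, "design": 6, "build": 3, "test": 5}

before = pipeline_throughput(stages.values())   # build (3/week) is the bottleneck

stages["requirements"] = 16                     # double a non-bottleneck stage
after = pipeline_throughput(stages.values())    # end result is unchanged

print(f"before: {before}/week, after 'improving' requirements: {after}/week")
```

The requirements team can double its own stage metric and the company still delivers exactly as much as before.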
Teams should keep the final outcomes for the customer (e.g., was it valuable and did it meet customer needs?) and the company (e.g., was it profitable?) at the forefront of decisions. Improvements should first pass the test of “will it improve the end result?” and only secondarily “can we improve this stage?”
Flaw: the “improvements” were in vanity metrics
Vanity metrics are metrics that “feel good” and look impressive, but are really useless for decision making and effective improvement efforts.
One common example that we have cited before is “utilization.” By itself, how much time a person logs against specific projects or types of work is of little value: the time spent merely records activity, not results. Focusing on activity leads, over time, to less focus on end results. It also leads to doing low- or no-value-added activity just to keep utilization rates up.
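A hypothetical comparison shows why utilization is a vanity metric (all figures invented for illustration):

```python
# Two hypothetical teams: the "busier" one delivers far less value.
teams = [
    {"name": "Team A", "hours_logged": 950, "hours_available": 1000,
     "value_delivered": 40_000},
    {"name": "Team B", "hours_logged": 700, "hours_available": 1000,
     "value_delivered": 120_000},
]

for t in teams:
    utilization = t["hours_logged"] / t["hours_available"]
    value_per_hour = t["value_delivered"] / t["hours_logged"]
    print(f'{t["name"]}: {utilization:.0%} utilized, '
          f'${value_per_hour:.0f} of value per hour logged')
```

Team A wins on the utilization report; Team B wins where it matters. A metric that ranks the teams backwards is worse than no metric at all.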
In the case study, some of the metrics cited were reductions in project management time (e.g., less time tracking changes) and in rework time. Sounds good, doesn’t it? However, the extra time spent perfecting documentation didn’t seem to be taken into account, so perhaps there weren’t much savings after all.
And who cares about “getting things right” in each stage? The departments or functions responsible for that stage. In stage gate approaches, different departments or roles are usually accountable for a particular stage: analysts are responsible for requirements, QA is responsible for the testing stage, and so on. A department trying to improve its own results will necessarily focus on improving results for that stage only, because its performance metrics relate to that stage only. Measuring document quality, work quality, and even utilization stage by stage produces vanity metrics for the project, as everyone focuses on their particular stage. This can lead to waste and to behavior that actually reduces the end result … sub-optimization.
The metrics missing from the case study, and often from stage gate approaches, are customer value and overall company value. If there is no linkage between stage metrics and the end results, vanity metrics and sub-optimization follow.
A solution: real feedback and adjustment, focusing on end results
A weakness of stage gate processes is that users or customers can’t really provide good feedback until the end of the entire project, when a product is delivered. That’s why most agile approaches (and even stage gate adaptations such as incremental waterfall processes) emphasize incremental product builds, to get “real” feedback – from users/customers – earlier than project end. Lean Startup practices likewise include the mantra “get out of the building” to check assumptions as soon as possible.
Not getting real feedback early and not checking assumptions means that perfecting project work is just doing more “analysis” and prediction, i.e., producing documents that predict the end results and satisfy vanity metrics. More analysis and prediction, and the assumptions that come with them, merely delay the real feedback and adjustments based on real data that help us deliver better results sooner. The balance is finding “just enough” analysis and prediction to design good experiments headed in a generally correct direction.