Recently, at a client, I was trying to figure out which product team needed immediate help and attention. Since I had limited time available at the client location, it made sense to come up with a plan for how I would go around the various teams and help them; trying to fix all the issues of all the teams was impractical. Once a team was identified, I could use the Theory of Constraints (ToC) to figure out its biggest bottleneck and work my way up from there. Similarly, I could use ToC to identify the biggest bottleneck team (the team doing critical work that was slowing other teams down). To come up with an initial plan, I wanted to get a feel for how each team worked and for each team's code-base and its design.
I started off by spending about 2-3 hours with each team. During this visit I was playing “health-inspector”:
- I asked the team to explain the product they were working on.
- After understanding the product, I took a few relevant scenarios and asked the team to help me build a Value Stream Map.
- I asked the team to share with me what was causing most pain for them and what they thought was working well.
- I asked the team to walk me through their code-base by focusing on one specific feature.
At the end of this meeting we really did not come up with any action items (that would have been premature at this point). All we had was a list of observations, along with the value stream map, on a wiki-page. This list can be considered an initial Debt list. It gave me a decent feel for how the team was working and the kind of issues they were facing.
(One thing I’ve learnt in my previous consulting life is to never take personal notes while people are talking to you about sensitive topics. If taking notes is important because you have been bombarded with information, open a simple text editor or wiki and ask them to summarize after they have explained a point.)
After a series of meetings with various teams, I talked to the Business Sponsor and shared my concerns about the code quality. Trying to explain the details with off-hand, random examples from the code was getting difficult. Reading through each team’s observations on the wiki-page was helpful, but the results were very subjective. It was not clear which team needed immediate attention. So the Business Sponsor asked me if I could come up with some code evaluation criteria and rate the various teams’ code-bases.
While this made sense, I was concerned that it would be difficult to come up with objective criteria that could then be used to identify which team needed immediate attention. Urgency of attention is, to a great extent, influenced by soft factors that cannot easily be externalized in a metric. Even if we came up with some criteria, the value assigned to a criterion in some context is only relevant in that context. Each team might have a different context, and comparing teams in different contexts using the same criteria can be error-prone.
Anyway, I decided to give it a shot. Worst case, we would at least have a baseline for each team that they could refer back to as we started resolving issues. But the question was: who should rate the teams against these criteria? Personally, I’m very uncomfortable rating others. I also think it’s not effective; it encourages the wrong behavior in teams. So I decided to have each team rate itself through an open discussion within the team. At the very least, this would ensure everyone on the team was on the same page. And if there was any difference between the team’s self-rating and what I thought the rating should be, based on my short meeting with them, it could easily be resolved with an open discussion. In some cases I didn’t have as much context as the team did.
Refer to How to Rate a Product (its code and design) for more details on the quality criteria and the rating scale I used.
Once I had identified a rating system, I created a page with a simple table on the company wiki: each quality criterion was listed as a row heading and each team’s name as a column heading. Each team was requested to rate itself. After the teams rated themselves, in a couple of cases I had a discussion with them about their rating on specific quality criteria, and we came up with mutually agreed numbers.
Something very interesting (and unexpected) happened once all the rating was done. Seeing all the teams’ ratings on one page helped me identify trends across the company. For example, Testability was an issue on most teams. This helped me come up with some basic training on Testability that most teams could attend.
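To make the trend-spotting concrete, here is a minimal Python sketch of the idea: average each criterion’s score across teams and flag the ones that are low company-wide as candidates for org-wide training. The team names, criteria, scores, and threshold below are all hypothetical, invented for illustration; the actual wiki table looked different.

```python
# Hypothetical ratings matrix: criteria as rows, teams as columns,
# scores on a 1 (poor) to 4 (good) scale. All names are made up.
ratings = {
    "Testability": {"Team A": 1, "Team B": 2, "Team C": 1},
    "Readability": {"Team A": 3, "Team B": 4, "Team C": 3},
    "Build Speed": {"Team A": 2, "Team B": 1, "Team C": 4},
}

def company_wide_trends(ratings, threshold=2.0):
    """Return criteria whose average score across teams is at or
    below the threshold -- candidates for company-wide training."""
    trends = []
    for criterion, scores in ratings.items():
        avg = sum(scores.values()) / len(scores)
        if avg <= threshold:
            trends.append(criterion)
    return trends

print(company_wide_trends(ratings))  # only Testability averages below 2
```

With this toy data, only Testability falls below the cutoff, mirroring the pattern described above; the point is simply that one shared table makes such patterns cheap to compute.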
It is not fair to compare one team’s rating with another’s. But certainly a team which has a lot of 1’s is in more need of attention than a team which has 3’s and 4’s. Again, just because a team is in need of attention does not mean one needs to jump straight into that team. We need to figure out where on the critical path of the organization this team lies. If the team is not on the critical path, maybe investing time in another team would be more fruitful (at least in the short run).
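The “lots of 1’s” heuristic can be sketched the same way: count each team’s lowest scores and rank the teams by that count. Again, the team names, scores, and cutoff are hypothetical, and this only produces candidates; as noted above, the team’s position on the critical path still decides where to actually invest time.

```python
# Hypothetical self-ratings per team, 1 (poor) to 4 (good).
ratings = {
    "Team A": [1, 1, 2, 1, 3],
    "Team B": [3, 4, 3, 3, 2],
    "Team C": [2, 1, 4, 3, 3],
}

def attention_candidates(ratings, low=1):
    """Rank teams by how many criteria they rated at the lowest
    score. This flags candidates only; whether a team is the real
    bottleneck depends on its place on the critical path."""
    counts = {team: sum(1 for s in scores if s <= low)
              for team, scores in ratings.items()}
    return sorted(counts, key=counts.get, reverse=True)

print(attention_candidates(ratings))  # Team A first, with three 1's
```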
This approach really helped me come up with a two-pronged strategy:
- Define general training requirements for the whole organization which will help each team improve.
- Identify the team that needs immediate attention (bottleneck) and coach that team.