I have learnt some interesting things about the life of consulting, especially around change management, organisational transformation, and leading, influencing and inspiring mindset shifts. One of the challenges is meeting a team whose level of maturity is screaming out for intervention, and having the self-control to stop myself from blurting out, "I told you guys this three years ago! Only now are you seeing the light! You need to listen more!"
As a coach, one has to be patient and live the journey with the team. This is okay; I accept that. And yes, it is quite rewarding to see the team come of age, mature and eventually deliver, meeting, if not exceeding, your own expectations.
However, it is rather more challenging to have hard delivery timeline pressures thrust upon you, knowing that the team isn't ready yet (not at the right maturity level, and needing micro-management and lots of admin/management overhead), or doesn't have the building blocks in place to not only deliver, but to carry on post-launch with a sustainable way of working...
So the journey has to be lived and walked through with the teams, even though you as the expert already know the destination, and even if it may take 3-5 years to get there...
The story has repeated itself a few times in my lifetime already: new product, one customer, a hackathon to deliver, then delivery, then the struggle of maintenance & support, and the rush to support new products & customers...
Example:
We start with a fairly young application development team, responsible for Java development of a Set Top Box EPG / User Interface. We try out this new thing called Scrum and aim to operate within the Scrum/Agile framework, but we lack the supporting engineering tools & processes to manage quality (no real time to focus on CI, automated unit tests, etc.). We deliver against the odds to make an impossible launch happen. We hoped we'd have time to settle, fix the mistakes and improve our working processes in time for the next release, but the work continues to pile up. Not only do we have to deliver subsequent releases, but we now have to support other products as well, with the same-sized team. Management want to hear none of "refactoring, rework, technical debt" - and on top of that, management decide to implement Key Performance Indicators (KPIs) as a way of measuring productivity...
This story isn't new; I've seen it repeated a few times, especially in STB product development. You start with what looks like an elegant design and over-promise the capabilities. A really demanding customer comes along with an insane delivery target, the elegant design gets infected with hacks, the hacks turn into the product, the product launches, the customer is happy, and expectations are now set in stone. The app is reused for other products and the customers multiply. The team size remains the same. We are asked to deliver more with the same "resources" and to deliver with improved quality. We will be measured by the quality of our component delivery. All this while running parallel streams, often with simultaneous or overlapping component releases for one or more hardware products (Zappers, PVRs, etc.). Sound familiar? It's probably not unique to set top box software development, right?
What do you as a development manager do? You have just embarked on the road to Agile/Scrum. You have a massive backlog of technical debt. Your team isn't performing at the level or maturity it should. Management is pressing you for metrics that you must use to justify progress towards increased productivity...
Where to Start?
Find out who the unhappy customer is. Dig into the background & experience of this customer (is the person from a manufacturing background, has he/she ever written code, etc.).
In the spirit of agile, I would approach the main stakeholder and have a sincere conversation about why KPIs were requested in the first place. What information is this stakeholder not getting? Why is he/she not getting it? Seek out the underlying concerns - ask the five whys and drill down to the root cause.
One of the most common root causes is quality: your component is not stable, there are too many bugs, and we're thrashing through too many QA cycles.
Use this information to agree on the core measurement that will satisfy your stakeholder.
Work with the team to figure out a way to make this stakeholder happy, while making it clear that you can't do everything at once; it will be a gradual process to reach that goal.
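To make the quality conversation concrete, here is a minimal sketch of the kind of baseline numbers that could serve as the agreed core measurement - a defect escape rate and a QA rejection rate. The class name, method names and sample figures are all illustrative assumptions, not data from a real project.

```java
// Minimal sketch only: assumed names and made-up figures, not a real tool.
public class QualityBaseline {

    /** Defects found by QA or in the field, per story delivered. */
    static double defectEscapeRate(int escapedDefects, int storiesDelivered) {
        return (double) escapedDefects / storiesDelivered;
    }

    /** Share of QA cycles burnt on rejected component drops. */
    static double qaRejectionRate(int rejectedDrops, int totalDrops) {
        return (double) rejectedDrops / totalDrops;
    }

    public static void main(String[] args) {
        // Example figures - in practice these would come from your defect tracker.
        System.out.printf("Defect escape rate: %.2f defects/story%n",
                defectEscapeRate(42, 60));
        System.out.printf("QA rejection rate:  %.0f%%%n",
                100 * qaRejectionRate(5, 12));
    }
}
```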
How to Define the Goal?
Now that you've had a conversation with the big boss - the main unhappy stakeholder who triggered this management intervention in the first place - you need to get your team on board.
Make sure the team's goal or team charter captures the essence of the output. Read about my take on example team goals here.
Think about how you would visualise the goals and aspirations - show a picture.
Think about the best way of measuring and reporting on your current state of affairs.
Create a baseline of where you are today in the areas you're targeting for improvement, against your goal - where you want to be, XXX weeks/months from today (a small sketch of this follows these steps).
Align those KPIs to the goals and get sign-off from the management team.
Agree on the best way to communicate progress (an information radiator of sorts).
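As a rough illustration of the baseline-versus-target idea above, the sketch below assumes a hypothetical Kpi record and invented example figures; it simply captures where each improvement area is today, where it should be some weeks from now, and how far along it currently is - the sort of numbers an information radiator could plot.

```java
import java.util.List;

// Illustrative only: metric names and figures are assumptions, not prescriptions.
public class KpiBaseline {

    record Kpi(String area, double baseline, double target, int weeksToTarget) {
        /** 0.0 = still at baseline, 1.0 = target reached. Works in either direction. */
        double progress(double current) {
            return (current - baseline) / (target - baseline);
        }
    }

    public static void main(String[] args) {
        List<Kpi> kpis = List.of(
                new Kpi("Unit test coverage (%)",        15, 60, 26),
                new Kpi("Open escaped defects",          80, 20, 26),
                new Kpi("Stories done clean per sprint",  4, 10, 26));

        // A few sprints in, report progress toward each agreed target.
        double[] current = {27, 65, 6};
        for (int i = 0; i < kpis.size(); i++) {
            Kpi k = kpis.get(i);
            System.out.printf("%-32s %3.0f%% of the way to target%n",
                    k.area(), 100 * k.progress(current[i]));
        }
    }
}
```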
Simplify What the Metrics Are Measuring, in Clear Language
You can go crazy measuring stuff that nobody is interested in. Generally, productivity in the STB world starts with achieving and maintaining quality & stability, followed by sustaining the pace and working efficiently with the same headcount while supporting multiple projects and releases simultaneously. The ultimate measure of productivity shows how much you have achieved with a constant team, baselining your team's capacity: how much can the team produce, in what time frame, and to what quality target?
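One possible way to express that "how much, in what time frame, to what quality" question is to normalise the clean output per sprint by a constant headcount. The sketch below uses assumed names and sample numbers purely for illustration.

```java
// Rough sketch: throughput of work that met the quality bar, normalised by a
// constant headcount, so the trend is comparable sprint over sprint even as
// the team picks up more products. Names and numbers are illustrative only.
public class TeamThroughput {

    record Sprint(int headcount, int pointsDeliveredClean, int pointsReworked) {

        /** Points delivered with no outstanding defects, per person. */
        double cleanThroughputPerHead() {
            return (double) pointsDeliveredClean / headcount;
        }

        /** Share of the output that met the quality target first time. */
        double firstTimeQuality() {
            return (double) pointsDeliveredClean
                    / (pointsDeliveredClean + pointsReworked);
        }
    }

    public static void main(String[] args) {
        Sprint s = new Sprint(8, 40, 10);
        System.out.printf("Clean throughput: %.1f points/person/sprint%n",
                s.cleanThroughputPerHead());
        System.out.printf("First-time quality: %.0f%%%n",
                100 * s.firstTimeQuality());
    }
}
```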
What Do STB Delivery Managers Typically Look Out For?
In a given release cycle with significant feature development, delivery managers look at metrics that cover not only the defects resulting from development, but the productivity of the team itself. In past projects, we had to track and report our current status against the planned schedule, include predictions of time to completion, predict the defects likely to remain at the end of the development phase, and then provide a trajectory or glide-path for the time to close those defects out!
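That glide-path prediction can be as simple as a linear projection. The sketch below is an assumption for illustration, not the method we actually used: given the open defect count, an assumed weekly arrival rate and an observed weekly fix rate, it projects the weeks remaining until the backlog reaches zero.

```java
// Deliberately simple, linear glide-path sketch; real projects would use
// historical distributions. All sample numbers are assumptions.
public class DefectGlidePath {

    static double weeksToZero(int openDefects, double arrivalPerWeek, double fixPerWeek) {
        double netBurn = fixPerWeek - arrivalPerWeek;
        if (netBurn <= 0) {
            return Double.POSITIVE_INFINITY; // backlog never closes at this rate
        }
        return openDefects / netBurn;
    }

    public static void main(String[] args) {
        // e.g. 120 open defects, ~8 new arrivals per week, ~20 fixes per week
        System.out.printf("Predicted weeks to clear the backlog: %.1f%n",
                weeksToZero(120, 8, 20));
    }
}
```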
Regardless, the following topics are useful reflection points for metrics:
- What is the feature breakdown?
- How many features are development complete?
- How many user stories are implemented with no outstanding bugs?
- How many user stories are implemented with outstanding bugs?
- How many user stories are still to be planned?
- How many user stories are blocked?
- What is the prediction for completing a feature?
- What are the predictions for completing the remaining features?
- Measuring Sprint Progress in more detail:
  - Planned versus actuals reported for each user story
  - How many stories were completed on time - developed and tested with no bugs?
  - How many stories missed the planned completion date but were still delivered within the sprint?
  - How many stories missed the sprint completely, despite not being blocked by any dependencies?
  - How many stories were blocked by unforeseen issues?
  - What were these issues?
  - How can we take corrective action to prevent such blocks from happening again?
- Measuring Risk-Mitigation or Technical Debt stories:
  - How many risk-mitigation stories are on the backlog?
  - When are these risk-mitigation stories predicted to be closed down?
  - Which subsequent stories depend on the outcome of the risk-mitigation stories?
- Adding other stories to the Sprint Backlog:
  - STB SI will influence the objectives of the sprint - SI will provide a list of defects that must be fixed for the next release. This needs to show up as a story.
  - Defect fixes must also be allocated a story - this accounts for the time spent.
- Overall burndown of Component completion estimations
- Given that the following things can happen during a sprint:
  - Developing new features
  - Risk-mitigating features for future sprints
  - Fixing defects raised by dev testers / developers
  - Resolving SI objectives
  - Supporting the release process
  - Pre-planning
- Can we quantify more clearly - even though it's an estimate - the likely burndown trend if everything is accommodated? Where does the end date come out?
- Burn-up charts - show progress against the agreed scope, newly added features, the effects thereof, etc.
- The typical metrics relevant to the development team are still useful, but you need to consider whether management will understand them:
  - Burn Rate (as cited above)
  - Code Review Coverage
  - Unit Test Coverage
  - Code Inspection / Static Analysis progress
  - Mean Time to Repair Broken Builds (rejection/fix rate)
  - Cumulative Flow (how do we measure flow and blockages through the team?)
  - Cycle Time (how long does it take to cut features of different weights?)
  - Straight-Through Rate (how many releases worked first time, without fault?) - a small sketch of these last two follows below
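For the last two items in that list, here is a small sketch of how cycle time and straight-through rate could be computed from basic delivery records. The Delivery record and the sample dates are invented for illustration only.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

// Illustrative sketch: average cycle time (start of work to release) and
// straight-through rate (releases accepted first time, without rejection).
public class FlowMetrics {

    record Delivery(LocalDate started, LocalDate released, boolean acceptedFirstTime) {}

    static double averageCycleTimeDays(List<Delivery> deliveries) {
        return deliveries.stream()
                .mapToLong(d -> ChronoUnit.DAYS.between(d.started(), d.released()))
                .average()
                .orElse(0);
    }

    static double straightThroughRate(List<Delivery> deliveries) {
        long firstTime = deliveries.stream().filter(Delivery::acceptedFirstTime).count();
        return (double) firstTime / deliveries.size();
    }

    public static void main(String[] args) {
        // Made-up sample data for three component releases.
        List<Delivery> sample = List.of(
                new Delivery(LocalDate.of(2014, 1, 6),  LocalDate.of(2014, 1, 20), true),
                new Delivery(LocalDate.of(2014, 1, 13), LocalDate.of(2014, 2, 3),  false),
                new Delivery(LocalDate.of(2014, 1, 27), LocalDate.of(2014, 2, 10), true));

        System.out.printf("Average cycle time: %.1f days%n", averageCycleTimeDays(sample));
        System.out.printf("Straight-through rate: %.0f%%%n", 100 * straightThroughRate(sample));
    }
}
```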
Visualising Progress
Once you have identified the baseline of your current status and the targets you want to achieve, you need to think of a way to visualise your progress that high-level management can understand.
I have a picture in my head that I will try to sketch out some time. It basically aims to show a constant team size / headcount, like the rocks-in-a-bottle example: the bottle's shape & size are fixed, so how do you fill it with rocks in the most efficient way? You might start with large rocks, then smaller rocks to fit between the gaps, and finally grains of sand/pebbles to fill the remaining gaps.
Imagine that your current inefficiencies or shortcomings are the large rocks. It would be nice to show these rocks shrinking over time. The smaller they get, the more rocks you can fit in, and the smaller they get, the more efficient you're becoming in some category of productivity. Eventually you can show multicoloured rocks that indicate different areas: technical debt, process improvements, automation, CI, unit testing, user stories, features, etc...
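Until I get around to sketching that picture, here is a toy text rendering of the idea, with purely made-up category names and numbers: a fixed-capacity "bottle" whose inefficiency "rocks" shrink between an early and a later sprint, leaving more room for features.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy rendering of the rocks-in-a-bottle picture; categories and points are invented.
public class RocksInABottle {

    static void printSprint(String label, Map<String, Integer> capacityShare) {
        System.out.println(label);
        capacityShare.forEach((category, points) ->
                System.out.printf("  %-18s %s (%d)%n", category, "#".repeat(points), points));
    }

    public static void main(String[] args) {
        // Fixed capacity of 40 points in both sprints; only the split changes.
        Map<String, Integer> early = new LinkedHashMap<>();
        early.put("Technical debt", 15);
        early.put("Defect fixing", 15);
        early.put("Features", 10);

        Map<String, Integer> later = new LinkedHashMap<>();
        later.put("Technical debt", 6);
        later.put("Defect fixing", 8);
        later.put("Features", 26);

        printSprint("Sprint 3 (40 points):", early);
        printSprint("Sprint 12 (40 points):", later);
    }
}
```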
Other Examples/References
- Ron Jeffries' very early work on Big Visible Charts is still relevant and covers most of what I said here
- Link Agile Team Goals to Metrics / Productivity improvements
- Root Cause Analysis feedback to team improvements
- Don't just trust Requirements-coverage testing
- Learn about software delivery from a Nissan production line