Thursday 13 March 2014

In search of low hanging fruit following on from high level retrospective

In my previous post I shared the experience of running a retrospective with a management team who are finding it quite a challenge to hold together the coherent team needed to deliver on an aggressive product roadmap.

The outcome of the initial sense-making workshop highlighted what is, in essence, a longer-term vision and strategy for the team, and could be used to establish the goals for this particular team (check out an earlier post on Agile goals). The workshop left the management team a little nervous about jumping straight in to tackle the BHAGs (Big Hairy Audacious Goals); they felt we should instead start by addressing potential low-hanging fruit. Of course, this is a natural reaction to change, and not entirely unreasonable. Recall the scorecard assessment shown on the left.

To jump in and solve the "Efficiency of Team Structure" problem is bound to be quite disruptive, especially when the teams have just started experimenting with some internal improvements on their own. Added to that, the department structure currently in place doesn't lend itself well to fully cross-functional teams, and some department managers have yet to settle between themselves the expectations and services provided by each area. We can clearly see how going agile puts pressure on the traditional segmentation of departments by role, e.g. Development vs Systems Integration vs QA, etc.

Personally I've been advocating a single, unified, cross-functional team approach for some time, but have faced the classic resistance from managers: it's new territory, it's not really been done before, alarm bells ring. This hasn't been made easier by the surprisingly rapid increase in headcount and growth across all the departments; dealing with multiple projects and platforms, people just didn't have time to sit back and discuss the overall strategy and direction for each service area.

We continued to plough ahead, this time going back directly to the foot soldiers in all teams: each manager was asked to hold internal team retrospectives, addressing these three questions:

  • What are your current top, pain-in-the-ass issues, your most frustrating bug-bear?
  • What do you think the Root Causes are?
  • What would be a potential solution to the problem?
The topics again sparked some interesting conversations; some people took the feedback quite personally, and a few came up to me afterwards to share their frustrations. It is interesting how the different managers and their teams have different perceptions, expectations and understanding, yet on the surface people still appear to be working together as a team. The point of these retrospectives is to surface these hidden topics; feelings and emotions are quite important. People need to feel comfortable sharing their experiences and opinions without being intimidated by their peers, and we also don't want peers to start defending themselves. Let's just get the issues out in the open, and start to process them in a systematic way.

The low-hanging fruit that came out of this: improvements in Pre-Planning, and improving control in SI (System Integration). Interestingly, these areas map nicely onto the scorecard assessment that encapsulates the eventual goals, covering Quality of Planning and Level of Control; no surprises there.

As the neutral facilitator who's run a few of these workshops over the last year, the following is quite evident to me:

  • Different perceptions and understanding of what's required from an Agile project, including misplaced assumptions
  • Lack of clarity about the departments' new structures, and about the difference between Development and Delivery, including quality
  • Clash of expectations (where does agile fit in, when to switch to the traditional SI model, etc.)
  • A sentiment of lethargy or inaction: most of the improvement actions have been identified before, but no one has had the time, or taken ownership of driving the improvement; there's been no follow-through
  • Bigger, softer people issues around team camaraderie
The journey continues…
Above & Below: Outputs from the session


Monday 10 March 2014

High Level Agile Retrospectives: Sense making with the management team


In my previous post "Will the real product owner please stand up?", I sketched the scenario where a development team is pressed to flex and adapt to product roadmaps that tend to be quite aggressive. The roadmap is generally accepted as the template for driving the delivery plan, so the expectation is set at the highest levels of the business, which means the team must just get on and deal with it.

This scenario is typical of many software product teams: the success of the initial product launch was down to a Herculean effort by the entire team, the product got out on time, and that original pace of delivery set the scene and the tone. The precedent is set, and people now expect the team to pull it off again and again.

This is all well and good, since the team has inspired confidence that they are committed and can indeed deliver. Little of the legwork and thrashing that went on inside the development team is known to the business stakeholders; the team would still love to commit to the roadmap, but are quite edgy about the reality and implications of doing so.

We are faced with the classic problem of scaling: the "What got you here won't get you there!" scenario, unless something changes. But people are nervous about abrupt changes, because the teams are currently working through their own internal improvement plans, which first need to be seeded for a period of time, reviewed and evaluated, before further changes are fed back into the loop.

As an Enterprise Program Manager, it is my job to put a high-level plan together that shows the stages of delivering on the product's roadmap. I am comfortable doing this planning as long as I have confidence in the development and execution processes of the impacted delivery team. Unless I have confidence that the teams understand what's expected, and that they have a reasonable process or template for execution, I will not put my name against a delivery plan that I know is close to impossible to achieve.

So I roll up my sleeves, don my consulting hat, and try to facilitate conversations with the development & delivery team. In this post, and the series that will follow, I share my journey with a management team that face interesting challenges with balancing the delivery expectations of the business on the one hand, and growing their young agile development team on the other hand, by applying systems thinking tools of sense-making, to better understand the challenges that await.

I work with the mid-to-senior managers only, expecting the respective managers to have held internal team conversations beforehand, bringing all their teams' issues to the table with their fellow management peers. Despite the team adopting an Agile/Scrum mindset, it is quite interesting to note that the "one-team approach" is really quite difficult to achieve in practice.

I share here the results of the first sense-making workshop, which was guided principally by these two questions:
  • Given your current way of working, how much confidence do you have in achieving the roadmap?
  • What ideas can you suggest that we should look into, that if we implemented now, could help us to get close to achieving the roadmap?
Interestingly, and to my surprise, the answer to the first question was a resounding "none!" The team had little confidence the roadmap could be delivered on; not even a yes from the product team! The second question generated some interesting ideas that sparked healthy dialogue. On reflection, it turns out our problems aren't that unique, sharing quite a lot in common with most development stories you read about. More interesting is that the ideas were not new topics for the teams themselves, showing how teams can get stuck in a rut, with everyday pressures taking priority over implementing and driving changes. Despite many internal retrospectives, the same topics kept coming up, clearly showing a lack of championing and initiative from within the various teams. Without such initiatives emanating from within the teams, how can management be expected to hold back and not take the top-down approach to managing?

Technique used in the workshop
I facilitated the retrospective workshop, playing the impartial, neutral party. The people involved were a mix of middle and senior managers spanning product management, software development, systems integration, agile management, business analysis and quality engineering. The idea-generation workshop lasted two hours, followed by an hour-long session the next day for sense making.
I used the concept of "Rounds", where each person gets a chance to speak and offer comment without interruption. The rules are that everyone must respect each other, there are no interruptions, and strictly no challenging of views: the place must be a safe one where people can speak freely without fear of fellow peers or line managers (tricky, since it was performance review time!). Following the initial ideation workshop, the second session applied Systems Thinking tools to unpack the ideas.
Systems thinking here involves gathering the data into themes, looking for common traits, and assigning each theme a main topic by describing it as a variable. For example, topics touching on the subject of "Quality" are grouped under the variable "Level of Quality". Once these variables are identified, we look for patterns of influence: how does each variable impact the others? We look for relationships between the variables using what is called an "inter-relationship" or "network" diagram. From that we generate a scorecard that shows us the core driving variables. From the scorecard, we assess and estimate the amount of effort we're spending in each area; once that is known, the areas receiving the least effort are usually the candidates to improve on immediately.
Index cards, stickies and posters are used throughout the workshop.
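The grouping-and-influence steps above can be sketched in a few lines of code. This is only a minimal illustration: the variable names and influence links are made-up examples of mine, not the actual workshop output.

```python
# Hypothetical sketch of the sense-making steps: group ideas into named
# variables, record influence links from the inter-relationship diagram,
# then score each variable by how many others it drives.
from collections import defaultdict

# (driver, driven) pairs: the first variable impacts the second.
influences = [
    ("Quality of Planning", "Level of Quality"),
    ("Quality of Planning", "Level of Control"),
    ("Efficiency of Team Structure", "Quality of Planning"),
    ("Efficiency of Team Structure", "Level of Control"),
    ("Level of Control", "Level of Quality"),
]

drives = defaultdict(int)     # arrows going out of a variable
driven_by = defaultdict(int)  # arrows coming in

for driver, driven in influences:
    drives[driver] += 1
    driven_by[driven] += 1

# Variables that drive more than they are driven are the core drivers -
# the candidates the scorecard asks you to spend effort on.
for var in sorted(set(drives) | set(driven_by)):
    role = "driver" if drives[var] > driven_by[var] else "outcome"
    print(f"{var}: out={drives[var]} in={driven_by[var]} -> {role}")
```

Run as-is, this flags "Efficiency of Team Structure" and "Quality of Planning" as drivers, while "Level of Quality" falls out as a pure outcome, which mirrors how the scorecard separates driving variables from results.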

Sunday 9 March 2014

Will the real Product Owner please stand up?

I repeat, will the real Product Owner please stand up?? Hello, I can't hear you…we're gonna have a problem here...where's the real product owner, the pig in the team that has influence and ownership across the board (represents the development team-on-the-ground and at the same time influences senior management)??

What's that then?? Huh, That's not how you roll? The Exec, king-of-the-throne speaks, and we scramble to deliver, deadlines can't be changed, we mere chickens are not empowered to influence at that level dude…Hey man, we operate Agile/Scrum localised to just our development team, we're not cross-functional yet, and we still have to respect the notion of downstream SI/QA teams. We manage our backlog as best as we can, but we know the roadmap can change in any quarter, so we adapt and go with the flow, despite our best intentions of reaching a steady velocity that can be used to better track, commit, and help influence the roadmap. We are like blades of grass in the wind, we bend wherever the wind blows; more often than not, the team hardly has a chance to catch their breath before the next roadmap change puts pressure on the existing team structures... Sound familiar??

A roadmap is not a commitment for delivery then. It's just an indication of ideas to consider; sure, the team understands that, but when the grand executive sees the roadmap, all they see is committed dates. It's next to impossible to get this mindset changed, because it is only us in our little development world who are intoxicated with Agile/Scrum, but nobody in the upper echelons of management seems to get it! What to do...

Instead of writing a long blog post on this - especially on how the typical young outfit experimenting with Agile/Scrum adoption deals with the ambiguity of having just too many people involved with product management, making it hard to see who the real Product Owner is - I've sketched it below. It should trigger conversations around really understanding the worlds of product ownership, and whether there is hope of achieving the nirvana of the Agile Product Owner: the one and only, unambiguous owner and driver, champion for the product itself, but also protector of the foot soldiers (the dev team)...

I'm experimenting with visual notes as an alternative to writing long articles. This is just take one, first draft. I'm a little rusty drawing freehand (but I used to be quite creative), so I used the PC to knock this poster together. I have set myself a goal for 2014 to sketch visual notes proper...
Will the Real Product Owner please stand up? Download full poster

Sunday 2 March 2014

Applying Google's Test Analytics ACC model to Generic Set Top Box Product

Early this year I completed what I felt was the best software book I've read in a really long time: How Google Tests Software by James A. Whittaker, Jason Arbon and Jeff Carollo (2012) (HGTS).

Although a couple of years old now, and even though the main driver, James Whittaker, has left Google and gone back to Microsoft, this book is really a jewel that belongs in the annals of great software testing books.

The book is not filled with theory and academic nonsense - it speaks directly to practitioners of all types and levels involved in software engineering. It gives a straightforward, realistic breakdown of how things go down in software product teams: the challenges of fitting in Test/QA streams (actually there's no such thing as fitting in QA/Test - it should always be there, but the reality is that developers don't always get it), balancing the needs of the business in terms of delivering on time against meeting the needs of the customer (will real customers use the product, is it valuable?), and so on.

Please get your hands on this book; it will open your eyes to real-world, massively scaling software systems and teams. Honestly, it felt quite refreshing to learn that Google's mindset is similar to mine, as I'd been promoting similar testing styles for much of my management career: a firm advocate that developers should do as much testing upfront as possible, with Test/QA supplementing development by providing test tools and frameworks, and employing a QA Program Manager (known as a Test Engineer at Google) to oversee and co-ordinate the overall test strategy for a product. I have written about QA/Testing topics in the past; the ideas there don't stray too far from the core message of HGTS.

The book shares a wealth of insider information on the challenges Google faced with product development and management as the company scaled, both in the use of its products and in the number of people employed, with explosive growth and large development centres across continents. It is filled with interviews with influential Googlers that give insight into the challenges faced and the solutions adopted. You will also find information on the internal organisational structures Google implements in its product development teams, along with some useful references to open source tools born out of Google's own internal testing renaissance, now shared with the rest of the world for free - thanks, Google!

One such tool is Google Test Analytics, built around the ACC (Attribute, Component, Capability) analysis model - which is the topic I want to focus on. In the Set-Top-Box projects that I run, I look to the QA Manager/Tech Leads to know the product inside out, and to develop an overall test strategy that outlines the ways and methods of testing that will achieve our go-to-market in a realistic and efficient manner (and not adopt process for process's sake). I generally look towards usability- and risk-based test coverage, seeking a heat map that shows me, in broad strokes, the high-level breakdown of the product's feature set (i.e. what is it about the product that makes it so compelling to sell), what the product is made of (i.e. the building blocks of the product), and what the product does (i.e. the capabilities or core feature properties). We generally do this in a Microsoft Excel spreadsheet, manually importing test data from clunky test repositories. What I look out for, as a classic Program Manager, is the RAG (Red, Amber, Green) status that gives me the fifty-thousand-foot view: what the overall quality of the product is, and how far away we are from a healthy launch candidate.
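The spreadsheet roll-up described above can be sketched as a tiny script. The thresholds, feature areas and pass rates below are illustrative assumptions of mine, not figures from any real STB project or from the Google tool:

```python
# Minimal sketch of a RAG (Red/Amber/Green) roll-up: classify each
# feature area by its test pass rate. Thresholds are arbitrary examples.
def rag_status(passed: int, total: int,
               amber: float = 0.75, green: float = 0.95) -> str:
    """Classify a feature area from its test pass rate."""
    rate = passed / total if total else 0.0
    if rate >= green:
        return "GREEN"
    if rate >= amber:
        return "AMBER"
    return "RED"

# Hypothetical (passed, total) counts imported from a test repository.
results = {
    "EPG / Guide": (92, 100),
    "PVR Recording": (60, 80),
    "Video Playback": (198, 200),
}

for area, (passed, total) in results.items():
    print(f"{area:15s} {passed}/{total} -> {rag_status(passed, total)}")
```

With these numbers, Video Playback comes out GREEN while the other two areas sit at AMBER, which is exactly the kind of broad-strokes launch-readiness picture the heat map is meant to give.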

It turns out that Google does pretty much the same thing, but is way more analytical about it - they call it ACC. According to the book, written in 2011/12, "ACC has passed its early adopter's phase and is being exported to other companies and enjoying the attention of tool developers who automate it under the 'Google Test Analytics' label".

So I've created my own generic Set-Top-Box project based on Google's ACC model, aiming to share it with like-minded people working on STB projects. It is not complete, but it offers the basic building blocks for fully applying ACC in a real project. I think it will be useful learning for anyone in a QA/Testing role.

My generic-STB-project is currently hosted on the Google Test Analytics site, along with a complete ACC profile. It took me about four hours to do the full detail (spread over a few days of 10-minute tea breaks between meetings), and about one hour to get a first draft of the high-level test plan with at least one ACC triple filled in. The book challenges you to get to a high-level test plan in just 10 minutes, which I actually think is quite doable for STB projects as well!

In about four hours, which is about half a working day, I was able to create a test matrix of 254 core high-level test scenarios (note I'm really not a QA/Test expert, so imagine what a full-time QA/Test Engineer could knock up in a day!) that looks like this:

Capabilities & Risk Overview of a Generic Set Top Box


Google currently uses this model in its high-level test plans for various products, like ChromiumOS, the Chrome browser and Google+. It was developed by pulling together the best processes from the various Google test teams and product owners, with the implementation pioneered by the authors Whittaker, Arbon and Carollo. As the book puts it, test planning really stops at the point you know what tests you need to write (it is about the high-level cases, not the detailed test scripting). Knowing what to test kicks off the detailed test case writing, and this is where ACC comes in (quoting a snippet from Chapter 3, The Test Engineer):
ACC accomplishes this by guiding the planner through three views of a product, corresponding to 1) adjectives and adverbs that describe the product's purpose and goals, 2) nouns that identify the various parts and features of the product, and 3) verbs that indicate what the product actually does.
  • A is for "Attribute"
    • Attributes are the adjectives of the system. They are the qualities and characteristics that promote the product and distinguish it from the competition. Attributes are the reasons people would choose to use the product over a competitor.
  • C is for "Component"
    •  Components are the building blocks that together constitute the system in question. They are the core components and chunks of code that make the software what it is.
  • C is for "Capability"
    • Capabilities are the actions the system performs at the command of the user. They are the responses to input, answers to queries, and activities accomplished on behalf of the user.
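To make the three views concrete, here is a hedged sketch of how ACC triples could be captured and rolled up into a heat-map-style matrix. The attributes, components and capabilities below are placeholder examples of my own for a generic STB, not data from Google's tool or from my hosted project:

```python
# Each capability (verb) links one Attribute (adjective) to one
# Component (noun). Counting capabilities per (attribute, component)
# cell gives the risk heat map: more capabilities in a cell means more
# behaviour that needs test coverage there.
from collections import Counter

# Illustrative (attribute, component, capability) triples.
acc_triples = [
    ("Reliable",  "PVR",   "Resumes a recording after power loss"),
    ("Reliable",  "PVR",   "Keeps the recording schedule across a reboot"),
    ("Fast",      "EPG",   "Renders the guide grid within 2 seconds"),
    ("Reliable",  "Tuner", "Recovers from transient signal loss"),
    ("Intuitive", "EPG",   "Lets the user filter the guide by genre"),
    ("Fast",      "PVR",   "Starts playback of a recording instantly"),
]

matrix = Counter((attr, comp) for attr, comp, _ in acc_triples)

for (attr, comp), n in sorted(matrix.items()):
    print(f"{attr:10s} x {comp:6s}: {n} capability(ies)")
```

The "Reliable x PVR" cell scores highest here, so under a risk-based strategy that is where detailed test case writing would start; the real Test Analytics tool layers test results and bug counts on top of the same grid.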
Read on to see what my ACC model for a generic Set-Top-Box looks like (first draft!)...