Monday, 10 March 2014

High Level Agile Retrospectives: Sense making with the management team


In my previous post "Will the real product owner please stand up?", I sketched the scenario where a
development team is pressed to flex and adapt to product roadmaps that tend to be aggressive. The roadmap is accepted as the template for driving the delivery plan, the expectation is set at the highest levels of the business, and so the team must simply get on and deal with it.

This scenario is typical of many software product teams: the success of the initial product launch was due to a Herculean effort by the entire team, the product shipped on time, and that original pace of delivery set the scene and the tone. A precedent is established, such that people expect the team to pull it off again and again.

This is all well and good, since the team has inspired confidence that it is committed and can indeed deliver. Little of the legwork and thrashing that went on inside the development team is known to the business stakeholders; the team would still love to commit to the roadmap, but is quite edgy about the reality and implications of doing so.

We are faced with the classic scaling problem, the notion that "What got you here won't get you there!" unless something changes. But people are nervous about abrupt changes, because the teams are currently working through their own internal improvement plans, which first need to be seeded for a period of time, then reviewed and evaluated, before subsequent changes are committed into the feedback loop.

As an Enterprise Program Manager, it is my job to put together a high-level plan that shows the stages of delivering on the product's roadmap. I am comfortable doing this planning as long as I have confidence in the development and execution processes of the impacted delivery team. Unless I have confidence that the teams understand what's expected, and that they have a reasonable process or template for execution, I will not put my name against a delivery plan that I know is close to impossible to achieve.

So I roll up my sleeves, don my consulting hat, and try to facilitate conversations with the development and delivery team. In this post, and the series that will follow, I share my journey with a management team that faces interesting challenges in balancing the delivery expectations of the business on the one hand, and growing their young agile development team on the other, by applying systems thinking tools for sense-making to better understand the challenges that await.

I work with the mid-senior managers only, expecting each manager to have held internal team conversations beforehand and to bring all their team's issues to the table with their fellow management peers. Despite the team adopting an Agile/Scrum mindset, it is quite interesting to note that the "one-team approach" is really quite difficult to achieve in practice.

I share the results of the first sense-making workshop, which was guided principally by these two questions:
  • Given your current way of working, how much confidence do you have in achieving the roadmap?
  • What ideas can you suggest that we should look into, that if we implemented now, could help us to get close to achieving the roadmap?
Interestingly enough, and to my surprise, the answer to the first question was a resounding NO! The team had little confidence that the roadmap could be delivered on; there was not even a Yes from the product team! The second question generated some interesting ideas that sparked healthy dialogue. On reflection, it turns out that our problems aren't that unique, sharing quite a lot in common with most development stories you read about. More interesting is that the ideas were not new topics for the teams themselves, showing how teams can get stuck in a rut, dealing with everyday pressures that take priority over implementing and driving changes. Despite many internal retrospectives, the same topics kept coming up, clearly showing a lack of championing and initiative from within the various teams. Without such initiatives emanating from within the teams, how can management be expected to ignore this and not take a top-down approach to managing??

Technique used in the workshop
I facilitated the retrospective workshop, playing the impartial, neutral party. The people involved were a mix of middle and senior managers spanning product management, software development, systems integration, agile management, business analysis and quality engineering. The idea-generation workshop lasted two hours, followed by an hour-long session the next day for sense-making.
I used the concept of "Rounds", where each person gets a chance to speak and offer comment without interruption. The rule is that everyone must respect each other: no interruptions, and strictly no challenging of views. The space must be a safe one where people can speak freely without being afraid of fellow peers or line managers (tricky, since it was performance review time!). Following the initial ideation workshop, the second session was about applying systems thinking tools to unpack the ideas.
Systems thinking starts with gathering the data into themes, looking for common traits, and naming each main topic by describing it as a variable. For example, topics touching on the subject of "Quality" are grouped under the variable "Level of Quality". Once these variables are identified, we look for patterns of influence: how does each variable impact the others? We map the relationships between the variables using what is called an interrelationship (or network) diagram. From that we generate a scorecard that shows us the core driving variables. From the scorecard, we assess and estimate the amount of effort we're spending in each area. Once that is known, the areas receiving the least effort are usually the candidates to improve on immediately.
Index cards, stickies and posters are used throughout the workshop.
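To make the scorecard step concrete, here is a small sketch in Python of how the interrelationship counting can work. The variable names and influence arrows below are illustrative stand-ins, not the workshop's actual data; the idea is simply that variables with many outgoing arrows and few incoming ones are the core drivers:

```python
from collections import defaultdict

# Directed influences gathered from the interrelationship diagram.
# Each pair (a, b) reads "variable a drives variable b".
# These variable names are made up for illustration.
influences = [
    ("Level of Quality", "Confidence in Roadmap"),
    ("Test Automation", "Level of Quality"),
    ("Team Capacity", "Test Automation"),
    ("Team Capacity", "Level of Quality"),
    ("Roadmap Pressure", "Team Capacity"),
]

def scorecard(edges):
    """Return {variable: (outgoing, incoming)} arrow counts."""
    out_count = defaultdict(int)
    in_count = defaultdict(int)
    for src, dst in edges:
        out_count[src] += 1
        in_count[dst] += 1
    variables = set(out_count) | set(in_count)
    return {v: (out_count[v], in_count[v]) for v in variables}

# Variables that drive more than they are driven are candidate
# leverage points: improving them propagates through the system.
for var, (out_n, in_n) in sorted(scorecard(influences).items(),
                                 key=lambda kv: kv[1][0] - kv[1][1],
                                 reverse=True):
    print(f"{var}: drives {out_n}, driven-by {in_n}")
```

In the workshop itself this is done on a poster with stickies, of course; the code just shows that the scorecard is nothing more than arrow-counting over the network diagram.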

Sunday, 9 March 2014

Will the real Product Owner please stand up?

I repeat, will the real Product Owner please stand up?? Hello, I can't hear you… we're gonna have a problem here... where's the real product owner, the pig in the team with influence and ownership across the board (representing the development team on the ground while at the same time influencing senior management)??

What's that then?? Huh, that's not how you roll? The Exec, king-of-the-throne, speaks, and we scramble to deliver; deadlines can't be changed; we mere chickens are not empowered to influence at that level, dude… Hey man, we operate Agile/Scrum localised to just our development team, we're not cross-functional yet, and we still have to respect the notion of downstream SI/QA teams. We manage our backlog as best we can, but we know the roadmap can change in any quarter, so we adapt and go with the flow, despite our best intentions of reaching a steady velocity that could be used to better track, commit, and help influence the roadmap. We are like blades of grass in the wind, bending wherever the wind blows; more often than not, the team hardly has a chance to catch their breath before the next roadmap change puts pressure on the existing team structures... Sound familiar??

A roadmap is not a commitment for delivery, then. It's just an indication of ideas to consider. Sure, the team understands that, but when the grand executive sees the roadmap, all they see is committed dates. It's next to impossible to get this mindset changed, because it is only us, in our little development world, who are intoxicated with Agile/Scrum; nobody in the upper echelons of management seems to get it! What to do...

Instead of writing a long blog post on this, I've put together a poster. The typical challenge is that a young outfit experimenting with Agile/Scrum adoption deals with the ambiguity of having just too many people involved in product management, making it hard to see who the real Product Owner is. That should trigger conversations around really understanding the worlds of product ownership, and whether there is hope of achieving the nirvana of the Agile Product Owner: the one and only, unambiguous owner and driver, champion of the product itself, but also protector of the foot soldiers (the dev team)...

I'm experimenting with visual notes as an alternative to writing long articles. This is just take one, a first draft. I'm a little rusty drawing freehand (though I used to be quite creative), so I used the PC to knock this poster together. I have set myself a goal for 2014 to do visual notes properly...
Will the Real Product Owner please stand up? Download full poster

Sunday, 2 March 2014

Applying Google's Test Analytics ACC model to Generic Set Top Box Product

Early this year I completed what I felt was the best software book I've read in a really long time: How Google Tests Software by James A. Whittaker, Jason Arbon and Jeff Carollo (2012) (HGTS).

Although it's a couple of years old now, and even though the main driver, James Whittaker, has left Google and gone back to Microsoft, this book is really a jewel that belongs in the annals of great software testing books.

The book is not filled with theory and academic nonsense; it speaks directly to practitioners of all types and levels involved in software engineering. It offers a straightforward, realistic breakdown of how things go down in software product teams: the challenges of fitting in Test/QA streams (actually, there's no such thing as fitting in QA/Test, it should always be there, but the reality is that developers don't always get it), balancing the business's need to deliver on time against meeting the needs of the customer (will real customers use the product, is it valuable), and so on.

Please get your hands on this book; it will open your eyes to real-world, massively scaling software systems and teams. Honestly, it felt quite refreshing to learn that Google's mindset is similar to mine. I'd been promoting similar testing styles for much of my management career, being a firm advocate that developers should do as much testing upfront as possible, with Test/QA supplementing development by providing test tools and frameworks, and employing a QA Program Manager (known as a Test Engineer at Google) to oversee and coordinate the overall test strategy for a product. I have written about QA/Testing topics in the past; the ideas there don't stray too far from the core message of HGTS.

The book shares a wealth of insider information on the challenges Google faced with product development and management as the company scaled, in both the use of its products and the number of people employed, with explosive growth and large development centres across continents. It is filled with interviews with influential Googlers that give insight into the challenges and the solutions adopted. You will also find information on the internal organisational structures Google implements in its product development teams, along with useful references to open-source tools born out of Google's own internal Testing Renaissance, now shared with the rest of the world for free. Thanks, Google, for sharing!

One such tool is Google's Test Analytics ACC (Attribute, Component, Capability) analysis model, which is the topic I want to focus on. In the set-top-box projects that I run, I look to the QA Manager/Tech Leads to know the product inside out and to develop an overall test strategy that outlines the ways and methods of testing that will achieve our go-to-market in a realistic and efficient manner (and not adopt process for process's sake). I generally look toward usability and risk-based test coverage, seeking a heat map that shows me, in broad strokes, the high-level breakdown of the product's feature set (i.e. what is it about the product that makes it so compelling to sell), what the product is made of (i.e. the building blocks of the product), and what the product does (i.e. the capabilities or core feature properties). We generally do this in a Microsoft Excel spreadsheet, manually importing test data from clunky test repositories. What I look out for, as a classic Program Manager, is the RAG (Red, Amber, Green) status that gives me the fifty-thousand-foot view: what the overall quality of this product is, and how far away we are from a healthy launch-candidate system.
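To illustrate the RAG roll-up, here is a minimal sketch of the kind of logic the spreadsheet encodes. The pass-rate thresholds and feature names are assumptions for illustration, not our actual project data:

```python
def rag_status(passed, total, green_at=0.95, amber_at=0.80):
    """Collapse a feature area's test results into a Red/Amber/Green flag.

    Thresholds are illustrative; real projects tune them per risk area.
    """
    if total == 0:
        return "Red"  # no coverage at all is treated as highest risk
    rate = passed / total
    if rate >= green_at:
        return "Green"
    if rate >= amber_at:
        return "Amber"
    return "Red"

# Example roll-up across a handful of hypothetical STB feature areas,
# as (tests passed, tests run) pairs imported from the test repository.
results = {
    "EPG": (48, 50),
    "PVR Recording": (41, 50),
    "HDMI Output": (30, 50),
}
heat_map = {feature: rag_status(p, t) for feature, (p, t) in results.items()}
```

The heat map is then just a colour-coded view of `heat_map`, giving the fifty-thousand-foot answer to "how close are we to a launch candidate?" at a glance.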

It turns out that Google does pretty much the same thing, but is far more analytical about it. They call it ACC; according to the book, written in 2011/12, "ACC has passed its early adopter's phase and is being exported to other companies and enjoying the attention of tool developers who automate it under the 'Google Test Analytics' label".

So I've created my own generic set-top-box project based on Google's ACC model, aiming to share it with like-minded people working on STB projects. It is not complete, but it offers the basic building blocks to fully apply ACC in a real project. I think this will be useful learning for anyone in a QA/Testing role.

My generic-STB-Project is currently hosted on the Google Test Analytics site, along with a complete ACC profile. It took me about four hours to do the full detail (spread over a few days of 10-minute tea breaks between meetings), and about one hour to get a first draft of the high-level test plan with at least one ACC triple filled in. The book challenges you to get to a high-level test plan in just 10 minutes, which I actually think is quite doable for STB projects as well!!

In about four hours, which is about half a working day, I was able to create a test matrix of 254 core high-level test scenarios (note I'm really not a QA/Test expert, so imagine what a full-time QA/Test Engineer could knock up in one day!) that looks like this:

Capabilities & Risk Overview of a Generic Set Top Box


Google currently uses this model in its high-level test plans for various products, like ChromiumOS, Chrome Browser and Google+. It was developed by pulling the best processes from the various Google test teams and product owners, with the implementation pioneered by the authors Whittaker, Arbon and Carollo. As the book puts it, test planning really stops at the point you know what tests you need to write (it is about the high-level cases, not the detailed test scripting). Knowing what to test kicks off the detailed test case writing, and this is where ACC comes in (quoting a snippet from Chapter 3, The Test Engineer):
ACC accomplishes this by guiding the planner through three views of a product, corresponding to 1) adjectives and adverbs that describe the product's purpose and goals, 2) nouns that identify the various parts and features of the product, and 3) verbs that indicate what the product actually does.
  • A is for "Attribute"
    • Attributes are the adjectives of the system. They are the qualities and characteristics that promote the product and distinguish it from the competition. Attributes are the reasons people would choose to use the product over a competitor.
  • C is for "Component"
    •  Components are the building blocks that together constitute the system in question. They are the core components and chunks of code that make the software what it is.
  • C is for "Capability"
    • Capabilities are the actions the system performs at the command of the user. They are the responses to input, answers to queries, and activities accomplished on behalf of the user.
Read more to find out how/what my ACC model for Generic Set-Top-Box looks like (first draft!)...
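To give a flavour of the structure before diving into the full profile, an ACC model is essentially a list of attribute/component/capability triples, each of which can carry a risk rating to drive test prioritisation. The entries below are made-up stand-ins, not my actual generic-STB profile:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """One ACC triple: a component delivering an attribute via a capability."""
    attribute: str   # adjective: why users would choose the product
    component: str   # noun: a building block of the system
    action: str      # verb: what the system actually does
    risk: int        # 1 (low) .. 10 (high), drives test prioritisation

# Illustrative entries for a generic set-top box.
acc_model = [
    Capability("Fast", "EPG", "renders the programme grid within a second", 7),
    Capability("Reliable", "PVR", "resumes a recording after power loss", 9),
    Capability("Secure", "CA Module", "blocks playback of unentitled channels", 10),
    Capability("Fast", "Channel Tuner", "changes channel without visible delay", 6),
]

def riskiest(model, top_n=2):
    """Highest-risk capabilities first: where test effort should go."""
    return sorted(model, key=lambda c: c.risk, reverse=True)[:top_n]
```

Sorting the triples by risk is exactly what turns the ACC grid into a heat map: the hottest cells tell the test team where to write detailed cases first.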


Saturday, 15 February 2014

Sticky Metaphors for Software Projects: Bankable Builds, Surgical Mode

In this post I share an experience from a recent project where I influenced the senior management team to change strategy and tactics: moving away from the standard Development->Integration->QA Team 1->QA Team n approach (which lasted anywhere between 2-4 weeks) to a more controlled Agile, iterative/lean approach of weekly build-and-test cycles, and then to a build-every-three-days cadence, in order to make the deadline (which was fixed). We did this entirely manually; our Agile processes were still coming of age, and we had little in the way of real-world continuous integration. What I really wanted to share is the way in which I influenced people and got them to change their mindset, by using simple metaphors that were actually quite powerful, and can be used to sell similar stories:
  • Bankable Builds - Reusing concepts from The Weakest Link
  • Surgical Mode - Taking a page from medicine - Surgery is about precision, don't do more than what's required
It is interesting that these concepts really stuck with the entire department, so much so that it's become part of the common vocabulary - almost every project now talks about Bankable Builds & Surgical mode!

The post is presented as follows:
Note: Like any high-profile project with intense management focus, the decision to make these changes came from senior management, which put a spanner in the works with respect to granting our "Agile" team autonomy and empowerment. Although some resistance was felt at the lower levels, the teams nevertheless remained committed to do what it takes to ship the product. The teams and vendors did share concerns in the retrospective feedback that the one-week release cycle was problematic to the point of chaotic, with large overheads.

The facts can't be disputed, though: our agility was indeed flawed from the get-go, because we did not have a solid continuous build, integration and regression test environment (we lacked automation). It was a struggle, with a lot of manual overhead, but it was a necessary intervention that got us to launch on time. Not making it would have meant missing the target deadline, and the project was too high-stakes for that.

Going forward, however, the principle of bankable builds is still a solid one, well known to teams with mature software processes. I can't emphasise enough the value of investing up front in automation: continuous builds, daily regression testing and automated component/system tests make life so much easier for software development teams.

Invest the time and effort to get these foundations set up, even at the expense of churning out code. Failing to do so builds a huge backlog for QA down the line, with teams ending up spinning their wheels putting out fires that would have been avoided had the right discipline been in place.

Monday, 10 February 2014

Custom Voice Recorder App: Instant Recorda


OK, before I get labelled as lazy for not searching enough on the iOS App Store or Google Play store: I have searched, but couldn't find the specific features I need in the various voice recorders out there. Perhaps you might have come across a suitable app, so please get in touch with me!

Basic Idea / Elevator Pitch
The idea was born from the many commutes to work and back, listening to the car radio or reading big billboards that caught my attention. When listening to adverts on the radio, or plugs during a radio interview, I would try to make a mental note (especially of new websites to check out); when spotting a billboard, I'd try to remember the site or key terms to follow up on later in the day, or whenever I had some free time.

What happens often, though, is that by the time my journey is over, I've forgotten most of the stuff I wanted to follow up on! What was that URL again??

So I thought: wouldn't it be nice if, whenever I came across something interesting to follow up on, there were something I could just trigger, or tag, that would record the last stretch of whatever audio it heard? If the radio was playing and I gave the signal, like "Tag", then the last ten seconds would be recorded. If I saw a billboard whilst driving, I'd speak it out, say "Tag", and presto, it gets recorded. At the end of the day, I go back and review all the things I tagged (no need to forget anymore). You could also have some other way of triggering the device to record, hitting the screen or a button, or something; the trigger just instructs the device to save the audio buffer into a playlist.

Then I thought about it a bit more and realised this could be useful in the workplace as well, during meetings or keynotes, to just tag stuff for following up on (as an audio stream). So I thought there should be an app for this. Call it Instant Recorda :-)

Basically the app listens in the background, constantly recording the audio stream, waits for the trigger, and saves the last ten seconds. The file is tagged by date and time, and could be stored in a playlist of some sort. And the opportunities grow from there… I have discussed this idea with colleagues and friends, and they seemed quite positive about it, so hopefully there is merit in pursuing it further.
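The core mechanism is a rolling buffer: record continuously, keep only the last ten seconds in memory, and snapshot the buffer when the trigger fires. A minimal sketch of that buffer logic (real audio capture would come from a platform audio API; the chunk sizes and integer "chunks" here are stand-ins for audio frames):

```python
import time
from collections import deque

class RollingRecorder:
    """Keeps only the most recent `keep_seconds` of audio chunks in memory."""

    def __init__(self, keep_seconds=10, chunks_per_second=10):
        self.max_chunks = keep_seconds * chunks_per_second
        self.buffer = deque(maxlen=self.max_chunks)  # old chunks fall off

    def feed(self, chunk):
        """Called continuously by the audio capture callback."""
        self.buffer.append(chunk)

    def tag(self):
        """On the 'Tag' trigger: snapshot the last N seconds for the playlist."""
        clip = list(self.buffer)
        timestamp = time.strftime("%Y-%m-%d_%H-%M-%S")  # names the saved file
        return timestamp, clip

# Simulated capture: feed 200 chunks, then tag; only the last 100 survive,
# i.e. the most recent ten seconds at ten chunks per second.
rec = RollingRecorder(keep_seconds=10, chunks_per_second=10)
for i in range(200):
    rec.feed(i)
stamp, clip = rec.tag()
```

The nice property of the bounded deque is that memory stays constant no matter how long the app listens, which matters for an always-on background recorder on a phone.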

Idea is shared Public on Trello
I've started a draft backlog of the work to do, shared publicly with the rest of the world. If any app developer comes across this, please get in touch! Check it out here: https://trello.com/b/fEv5GaWt/instant-recorda-app