This week I was invited by @Farid from Crossbolt to tour the Nissan Production Line in Pretoria, to see first-hand the Lean / Kanban / Kaizen production model in action: cars come off the line every four minutes (the cycle time), with a zero-defect straight-through rate.
I jumped at the opportunity without hesitation. Although I've run software projects on a factory-line model before, I had never actually been inside a real car production factory. Fareed had decided to take his IT Operations team on this tour after doing the theoretical knowledge-share on Lean.
He felt it would make more sense for his team to witness first hand the theory in action, where the metaphors can be easily visualised and applied to an IT Ops / Systems Integration space. Not a bad idea...
This post is just a brief dump of my personal take-aways from the visit. I will try to expand on them in later posts once I've let the ideas play around in my head for a while and morph into shape. Some people will say that software is a creative art that can't be rushed and boxed into a production line. When it comes to pushing out consumer production software, I beg to differ: there are indeed many parallels from the manufacturing sector (which is considerably more mature than software development) that can be drawn upon and applied to software teams - incidentally, this is where Scrum / Lean have their roots anyway: taking the same concepts and applying them to software teams...
There's plenty of info on Kaizen / Kanban / Lean / Poka Yoke / Scrum - I won't go into detail here. For context though, the line we visited was a multi-model line. This means that the line produces more than one type of vehicle in a continuous flow. Any team on the line could be working on a different model at the same time. Some car production lines specialise in just one model, but this line builds at least four different models. Because the flow is continuous, a car comes off the production line every four minutes, like clockwork. The teams working on the line are therefore cross-functional and cross-skilled.
- Continuous Flow - the line is continuously flowing, with a cycle time of 4 minutes. Each team on the line has 4 minutes to finish the task at hand. If one team takes longer, the whole flow is interrupted.
- Thoughts: Can software release cycles maintain a constant flow? Can defects flowing into and out of System Integration move smoothly? Can all the players feeding into the release planning process maintain a steady state of deliverables? (A rough takt-time sketch follows below.)
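To make the cycle-time idea concrete, here is a minimal sketch of the classic takt-time calculation a line like this is paced around: available production time divided by demand gives the rhythm every team must keep. The numbers are illustrative, not Nissan's actual figures.

```python
# Takt time: the rhythm the whole line must keep to meet demand.
# All numbers below are illustrative, not Nissan's actual figures.

def takt_time_minutes(available_minutes_per_day: float, units_demanded_per_day: float) -> float:
    """Available production time divided by demand = time allowed per unit."""
    return available_minutes_per_day / units_demanded_per_day

if __name__ == "__main__":
    # Example: two 7.5-hour shifts (900 minutes) and demand of 225 cars per day
    # gives a takt time of 4 minutes -- the same rhythm as the line we toured.
    takt = takt_time_minutes(available_minutes_per_day=900, units_demanded_per_day=225)
    print(f"Takt time: {takt:.1f} minutes per car")  # -> 4.0
```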
- Straight-through Rate: Zero Defects - the aim is to strive for quality from the beginning, right upstream, from the first component / team in the supply (value) chain. Each downstream team expects the output from the upstream team to be of high quality. Any defect passed downstream slows down the line, slows down production and compromises quality, as well as confidence / trust in the upstream team.
- Thoughts: Imagine zero defects from software component suppliers. Trust is built up-front, quality is enforced from the beginning. This is where test-driven development, continuous integration, regression testing and other code / software quality practices become important. Don't pass on dodgy components or builds to downstream teams (QA or even Integration). In fact, aiming for zero defects upstream will challenge the need for separate QA stages. (A toy quality-gate sketch follows below.)
- If a release is deemed OK for public consumption, then just release it and archive the remaining known defects - if customers pick up the issues, focus on those customer issues and ignore the rest. The rest is really waste - fix what is needed just-in-time. Software releases often contain bloat and feature enhancements from the product team that most customers wouldn't use anyway.
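As a small illustration of "don't pass dodgy builds downstream", here is a sketch of a pre-delivery quality gate: the upstream team runs its own checks and refuses to hand anything to the next team unless they all pass. The specific commands (pytest, flake8) are assumptions; substitute whatever your own toolchain uses.

```python
# A toy "quality gate" for a component delivery: the upstream team runs its
# own checks before anything is handed downstream. The commands below are
# illustrative -- substitute your real test / lint / build steps.
import subprocess
import sys

def gate(checks: list[list[str]]) -> bool:
    """Run each check command; refuse delivery if any one of them fails."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Gate failed on: {' '.join(cmd)} -- do not deliver downstream")
            return False
    print("All checks passed -- component is fit to hand to the next team")
    return True

if __name__ == "__main__":
    ok = gate([
        ["python", "-m", "pytest", "-q"],   # unit / regression tests
        ["python", "-m", "flake8"],         # static checks
    ])
    sys.exit(0 if ok else 1)
```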
- Who is the Customer? On the production line, the customer is not the end-user driving the car. No, the customer is the next team down the line. So produce high-quality output to make life easier for the next team to do their job. If the next team downstream finds defects, that is frowned upon; the team is cross-functional and can still sort the defects out, but the productivity flow is disrupted.
- Thoughts: Software developers need to think hard about who their customer is. If you still have distinct Development / QA / Integration teams, then consider the downstream team's requirements. It is not fair on QA/Integration to receive poor-quality components. Do your bit to make sure you satisfy your next customer.
- Every unit/team down the line trusts the output from upstream - no need to check "is this the right engine, has the engine been tested?". In the same way, why can't software teams build the same trust: don't suspect the build output, ensure the upstream engines have been tested thoroughly as fit-for-purpose, otherwise you wouldn't have received the component delivery in the first place.
- Multi-Model Assembly Line - this is possible because the platforms the cars are based on are standardised, or share a common configuration that doesn't stress the line. So different cars can be made, some even requiring different engine sizes, wheels, lights, even different colours. But the assembly team are the same people. The team knows how to assemble model A versus model B and model C, even with only four minutes to switch their minds between different models!
- Thoughts: Software products usually share common core architectures that differ based on user features. For example, take a Set-Top-Box decoder: PVR versus a simple Zapper. The software components are essentially the same - so why would you want separate specialist teams looking after them as separate products? Have one common team versatile enough to build any model. (A small sketch of this idea follows below.)
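To illustrate "one platform, many models" in software terms, here is a minimal sketch of a single core component assembled into different products through configuration, rather than forked into a codebase per product. The STB feature names are my own invention.

```python
# One common "platform" assembled into different products via configuration,
# rather than a separate codebase per model. Feature names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProductConfig:
    name: str
    has_recording: bool   # PVR-style local recording
    tuner_count: int

class SetTopBox:
    """Common core; behaviour varies only with the configuration it is built with."""
    def __init__(self, config: ProductConfig):
        self.config = config

    def features(self) -> list[str]:
        feats = ["live-tv", f"tuners:{self.config.tuner_count}"]
        if self.config.has_recording:
            feats.append("pvr-recording")
        return feats

# The same "assembly team" (codebase) builds every model.
ZAPPER = ProductConfig(name="Zapper", has_recording=False, tuner_count=1)
PVR = ProductConfig(name="PVR", has_recording=True, tuner_count=2)

if __name__ == "__main__":
    for cfg in (ZAPPER, PVR):
        print(cfg.name, SetTopBox(cfg).features())
```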
- Cross-Functional Teams - We read a lot of this in the literature / hear in training courses, etc. On the production line, if a defect is passed from an upstream team to a downstream team, the team downstream are competent to fix the communicated defect.
- Thoughts: Imagine if QA/Integration teams actually included Developers as well. They would fix the defects in-line, without passing them on, waiting for the next release, assigning the defects to a bug-tracker, waiting for priority calls etc. What a waste! Applying a zero-defect mindset implies the software product is ready to ship as it comes off the production line. If cars can be manufactured within four days, ready to ship for real use (safety critical), why do we allow dodgy software (not even life- or safety-critical) through?? Why can't all upstream teams (software vendors) maintain high levels of quality that ensure a fit-for-purpose product is assembled at the end of the release cycle??
- Need for defect trackers disappears - this is why some schools of Agile/Scrum say defect trackers are bad and should be binned
- Eliminate waste by building quality up-front, at all levels with all teams - mindset change!
- Toolsmith In-house - In the factory, there's a toolsmith team that re-uses waste material, repurposing it and other artefacts into useful tools that feed back into the production line, adding value to the production team and improving efficiencies.
- Thoughts: How many software teams just make do with useless tools, simply because they've inherited them, or because some manager decided the tool was good? So teams just accept their frustrations and do nothing to improve their efficiency. Some teams are so busy that they don't even have the space to create debugging tools that would add value - sticking to manual, laborious routines of working.
- Having an in-house tools team that works with the product team is a useful way to share the load and feed optimisations back to development teams. How often do people sit sifting through thousands of lines of log files, when a tool could've been written to parse, search & index them? The same goes for memory leaks, performance issues, defect management, reporting, etc. (A small example follows below.)
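As a small example of the kind of toolsmith artefact meant here, a few lines of script can replace hours of manually scrolling through logs. This is only a sketch: the directory layout, file extension and search pattern are assumptions, so adapt them to whatever your components actually emit.

```python
# A tiny toolsmith-style helper: scan log files for a pattern and index the
# hits by file and line number. The *.log layout and the regex are
# assumptions -- adapt them to your own components' output.
import re
import sys
from pathlib import Path

def index_matches(log_dir: str, pattern: str) -> dict[str, list[tuple[int, str]]]:
    """Return {file: [(line_no, line), ...]} for every line matching pattern."""
    regex = re.compile(pattern)
    index: dict[str, list[tuple[int, str]]] = {}
    for path in Path(log_dir).rglob("*.log"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if regex.search(line):
                index.setdefault(str(path), []).append((line_no, line.strip()))
    return index

if __name__ == "__main__":
    # Usage: python logscan.py ./logs "ERROR|OutOfMemory"
    hits = index_matches(sys.argv[1], sys.argv[2])
    for file, lines in hits.items():
        print(f"{file}: {len(lines)} hits")
        for line_no, text in lines[:5]:   # show the first few hits per file
            print(f"  {line_no}: {text}")
```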
- Always maintain a clean, neat & tidy workspace - As simple as this may seem, keeping a workspace clear, neat and tidy - containing only the material relevant to the task at hand - improves efficiency and makes you more productive. An untidy workplace slows you down and also leads to unnecessary accidents.
- Thoughts: A similar approach can be applied to software teams - keep only the artefacts needed for the task at hand. Bin or archive old, irrelevant design/architecture documents. Make sure the whiteboard is cleaned regularly, post-its make sense, and don't build up untidy meeting rooms / war rooms / stand-up areas.
- Close out old chat / collaboration / wiki spaces.
- No Build-up of Buffer / Stock - on the line, there are just enough parts / supplies to last for the planned assembly time. Buffer stock is not built up at stations, but is replenished on demand as and when required. Having too much buffer / stock is inefficient and slows down production - having just enough supply at the right time improves efficiency, makes the workplace less cluttered, and doesn't stress people out psychologically.
- Thoughts: Consider what "buffers" exist in software projects. Building up a pile of TODOs on the backlog, with no direct impact on the imminent release, can be seen as a distraction. Consider what happens with the System Integration queue for defect triage: a build-up of non-critical defects ("P1s") might communicate a trend of poor quality, or not enough bandwidth. If those defects are not deemed critical, why track them at all?
- Be transparent about the rate of productivity - the backlog & burn-down should reflect the true or real velocity; think twice before adding too much buffer to estimations. (A small velocity sketch follows below.)
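On the "real velocity" point, here is a minimal sketch of forecasting from completed work only, with no padding. The sprint numbers are invented.

```python
# Velocity from completed work only -- no buffer baked into the forecast.
# The sprint history below is invented for illustration.

def velocity(completed_points_per_sprint: list[int]) -> float:
    """Average story points actually finished per sprint."""
    return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

def sprints_remaining(backlog_points: int, completed_points_per_sprint: list[int]) -> float:
    """Honest forecast: remaining backlog divided by real (not hoped-for) velocity."""
    return backlog_points / velocity(completed_points_per_sprint)

if __name__ == "__main__":
    history = [21, 18, 24, 19]   # points actually completed in the last four sprints
    print(f"Real velocity: {velocity(history):.1f} points/sprint")
    print(f"Sprints to clear a 120-point backlog: {sprints_remaining(120, history):.1f}")
```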
- Planning & Design is very Important - the whole layout of the line is well thought out, designed end-to-end to maintain efficient cycle times and smooth flow. Whilst senior engineering management may have played a part in the initial layout, it is well understood that operational efficiency improvements & enhancements come from the people on the floor.
- Thoughts: Processes are important, input should be sought from experienced managers who've delivered process improvements before - use experienced consultants to get the initial framework going. Management intervention will disappear once trust is earned - operational improvements come from the on-the-ground technical team, and should be respected. Management should not intervene in operational improvements if they haven't been working the floor.
- Progress against Targets Highly Visible almost Everywhere - everyone on the floor is aware of the targets for the day / week / month, and can track progress against targets (are we behind, ahead, etc).
- Thoughts: Publish clear targets, and progress against those targets, where everyone on the project team can access them. This could be through whiteboards, or display terminals mounted at each team's workspace. Being clear about targets maintains focus.
- Even if your team uses an ALM tool, take time out to maintain high visibility of progress-against-targets.
- Metrics are visible at each major station - At nearly every workstation, teams maintain some metrics that are relevant to the overall production rate. This helps maintain additional focus, on top of the highly visible target displays.
- Thoughts: Software teams should share and openly communicate the metrics that drive their production goals. This could be anything from defect counts, quality / performance metrics, new requests, blockages / impediments to the overall burn-down or burn-up. Managers can walk the floor and easily see progress. (A minimal station-board sketch follows below.)
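A visible metrics display doesn't need to be elaborate. Here is a bare-bones sketch of rendering a team's station board as plain text for a wall-mounted terminal; the metric names and figures are placeholders.

```python
# A bare-bones "station board": render the handful of metrics a team cares
# about as plain text for a wall-mounted display. All figures are placeholders.
from datetime import date

def render_board(metrics: dict[str, str]) -> str:
    width = max(len(name) for name in metrics) + 2
    lines = [f"TEAM BOARD -- {date.today().isoformat()}", "-" * 40]
    lines += [f"{name:<{width}} {value}" for name, value in metrics.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_board({
        "Open defects": "7 (target: 0)",
        "Burn-down": "34 of 120 points remaining",
        "Blocked items": "2",
        "Builds green today": "11 / 12",
    }))
```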
- Suppliers synchronise component deliveries with the production rate / schedule - Stock is never built up and waiting in the warehouse to be used, wasting space. Instead suppliers are synchronised with the production line, resources are tracked (supply, demand, stock-rates) - stock is replenished just-in-time. It is useful to have your suppliers located nearby, to deliver on demand.
- Thoughts: Software release planning must be synchronised with all component vendors and system integrators. Aiming for just-in-time, but ensuring the deliveries received are fully qualified and fit-for-purpose. Just-in-time doesn't mean a release fresh from the developer's tree, without any testing. Releases must be fit-for-purpose.
- Co-located suppliers make sense in terms of efficiency gains. Try to start new projects by insisting on on-site, co-located teams, even if the vendors are from overseas (depending on the scale of the project). (A toy pull-signal sketch follows below.)
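The just-in-time replenishment idea can be sketched as a simple pull signal: a station asks its supplier for more only when its small local bin drops to a reorder point, instead of warehousing a big batch up front. The part and quantities below are made up.

```python
# A toy kanban-style pull signal: replenish a station's small bin only when it
# drops to the reorder point, instead of stockpiling. Quantities are invented.
class StationBin:
    def __init__(self, part: str, bin_size: int, reorder_point: int):
        self.part = part
        self.bin_size = bin_size
        self.reorder_point = reorder_point
        self.on_hand = bin_size

    def consume(self) -> None:
        """Use one part; signal the supplier only when the bin runs low."""
        self.on_hand -= 1
        if self.on_hand <= self.reorder_point:
            self.replenish()

    def replenish(self) -> None:
        print(f"Pull signal: top up '{self.part}' ({self.on_hand} -> {self.bin_size})")
        self.on_hand = self.bin_size

if __name__ == "__main__":
    headlights = StationBin(part="headlight", bin_size=8, reorder_point=2)
    for _ in range(20):   # simulate 20 cars passing the station
        headlights.consume()
```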
- Team breakaway stations - The production facility is huge, with over a thousand workers. Dotted around the plant are breakaway stations for the teams, to meet every morning or at break time, discuss work over coffee, etc. The line can't stop for group-wide announcements, so each team catches up on company-wide briefings at its own pace. It's also impractical to get everyone into one room for a briefing. Teams meet at their breakaway stations each morning to discuss the events of the past day, plan the current day, talk about optimisations, etc.
- Thoughts: Make liberal use of free space; breakaway stations are valuable - they are where all the important (water-cooler) discussions happen - and they reduce the time spent scheduling meetings. Equip the space with terminals for company announcements and visual displays of news or even progress information. Preferably, each team should have their own breakout area (if possible).
Very interesting. We can learn a lot from this, especially getting it right up front and not passing junk down the line. This requires testing at source, and in-line fixing without burdening the bug tracker and rating processes. We spend an inordinate amount of time debating and ranking bugs.
I agree with the comment above and in the blog concerning testing at source and not allowing problems to propagate through downstream processes. I believe that this is one of the major challenges affecting some software efforts, i.e. the highly modularised nature of the production line does not exist in all software systems.
I also feel it is critical to publish real-time metrics, both for the team concerned to know how they are doing and for the rest of the business to gain an idea of the progress of the overall process. Real-time metrics also eliminate the need for admin-intensive, repetitive reporting. Reporting of metrics should be a mandatory part of the process.
The need for a bug tracker should also be investigated - is it really necessary? What happens when an issue interrupts the process on the assembly line? It's all hands on deck to resolve it, because an issue anywhere on the line results in the whole factory's downtime.