Tuesday, 6 November 2012

ORITs: One Roof Integration Teams for DTV/STB Projects


In my previous post I introduced the concept of Release Campaigns: a series of cycles executed to reach stability and maturity of the STB product, most of which happens during the Integration/Stabilisation phase of the project's life cycle.

During these campaigns, expect to uncover a large number of bugs or defects, especially during the first release campaign, when the various system components are being stressed end-to-end for the first time. Expect a reasonable avalanche of defects reported against STB functionality, stability and performance. The team that faces the brunt of these defects is usually the STB SI team, who need to investigate and reproduce the problems, characterise each defect, and assign it to the relevant vendor to fix. This process is known as "triage" in some circles.

How does the project team manage dealing with these defects? How do we control the quality of the system, ensuring focus is maintained and that we're not spinning our wheels? How do we protect the project from further slippage while removing the burden from the SI team? How do we ensure we focus on the real pain points, the core features and functionality that add value and affect the bottom line: customer experience?

Even though the project is likely to have defined clear guidelines for acceptance criteria, defect severity and priorities, as discussed in this post, it will still have to be directed in a way that maintains a sense of urgency to get the burning issues fixed.

Two things are almost certain to happen on every STB/DTV delivery project:
  • Business / Product owners have their own way of describing the product, as a separate list of feature areas, despite what is defined in all the product documentation
  • The STB SI team will become the bottleneck and land on the critical path if the standard processes are maintained - the project will slip unless some other practical solution is found
Enter the Hit-Squad or ORIT Teams
So we set up dedicated teams to address the key functional areas that cause us pain. Each team is focused on a particular area, with the sole remit of clearing away all problems. Typical areas that cause pain in a STB PVR product are shown below:

Common Areas to Focus Early during Stabilisation Phase of STB PVR
STB SI Team Bandwidth is Limited
The idea behind the one roof / hit squad is to get everyone in the same room, forming one team, focused and dedicated to closing down issues. Typically you have vendors supplying different components of the software stack, each with their own processes and priorities. To expedite issues much faster, removing the delays that come from communications and the ping-pong of emails and phone calls, it's best to get the engineers on-site and deal with the issues face-to-face. This sounds like a no-brainer to those from the Agile or XP camp, but in reality some companies can be pretty rigid in their support service agreements. Hence it is critical this is driven quite hard by the customer, and pains should be taken to agree up-front that projects will call for people (resources) to be available on-site during Integration/Stabilisation or even final Acceptance Testing.

Advantages of ORITs
  • ORITs definitely add focus and bring a sense of urgency to the project, across the board. They get the attention of senior management, and commitment from teams to resolving the issues. In essence, we have the full participation of the vendors.
  • Communication paths are reduced significantly by being under one roof, in the same room. Cross-vendor communication is otherwise time-consuming and is generally hampered by timezones and geographically dispersed vendors.
  • Ensures the right level of support is provided - i.e. all required technical experts are available to support, and easily accessible. You have their full attention, no distractions.
  • Improved turnaround time for fixing issues - the ability to fix code on-site, in real time, providing engineering builds, adds tremendous value and cuts a lot of the red tape associated with vendors' build/release processes.
  • The customer is kept happy - all stakeholders have confidence that there is focus, attention and drive. Having a dedicated owner for each area helps build confidence and maintains pace and momentum. Of course, interventions can be applied sooner rather than later - no delay in management decisions.
Some Challenges with ORITs
  • Almost certain to disrupt existing teams and structures. People will have to be seconded to other, often virtual, teams and will generally be interacting with each other for the first time, which means these teams might not gel well initially.
  • Requires focus, participation and attention from vendors. Vendors are generally hard pressed to support multiple project deliveries; taking a key engineer out and dedicating them to one customer for several weeks stresses the vendor's other commitments.
  • Engineers allocated to ORITs need to be sufficiently skilled and competent. We are looking not only for senior, hard-core technical experts but for people who can deal with stress and pressure from senior management, and whose communication and reporting are excellent. People of this calibre are very hard to find.
  • Logistics challenges: The premises for the one roof sessions must support the physical & personal needs of engineers (Equipment setup, kitchen, personal workspace, etc).
Concept of an ORIT Owner is Imperative
For ORITs to return value and actually improve the outcome of the project, you have to assign a strong technical leader overall to manage the various ORITs holistically, as well as strong technical leads per ORIT area, a.k.a. Owners.
  • An Owner is essentially responsible and accountable for ensuring all critical issues for a particular area (e.g. Recordings) are resolved to closure, proven on-site, and tracked through completion in a formal release, all in a timely manner.
  • This means not only investigating and triaging, but also working closely with - and driving - the component vendor engineers and teams for fixes or patches, including committing to target fix dates for final implementation. If progress is not going as expected, escalate to senior management as appropriate.
  • Requires sound technical knowledge, domain experience and an appreciation of the problems, ideally having worked on or experienced similar issues on past projects of a similar nature.
  • An Owner has the freedom to prioritise issues depending on severity without having to wait for approval from the Product Owner.
  • An Owner also takes responsibility for leading the team assigned, dealing with people as well as performance-related issues.
Does it work?
As with all the other topics I've shared in my PM Toolbox, I wouldn't share anything fluffy and woolly that I haven't implemented or experienced first-hand, directly. So, yes - ORITs do work, if managed correctly. It is no easy feat; it requires mindsets and personalities that can maintain a steady pace of rigour, fortitude and relentless determination to get to the root cause of problems, seeking out the best practical solutions in the time available. I have managed ORIT teams myself, observed other projects from afar, and been an engineer on the ground as a Hit Squad member. It can work well, or it can be a complete failure. There needs to be a unifying voice from the top, harmony across the entire project team, and complete focus and attention... It is not an easy or straightforward intervention...

A little more detail behind the mechanics
In the attached slide pack, I try to summarise the key points of this process, including notes on ideal team sizes and reporting expectations, with templates.

Template for Set-Top-Box Release Campaigns

In this post, I'd like to share a concept I've coined as "STB SI Release Campaigns: The Climb to Launch", which is something a STB SI (Systems Integration) Project Manager, Development Manager or Delivery Manager can use to manage the process of stabilizing the product leading up to the final Launch / Deployment campaign.

No matter what flavour your DTV project is, if the STB (Set-Top-Box) is impacted, whether it's new hardware, new software, or existing hardware with brand new features, the last mile of any DTV launch product is the STB testing, acceptance and deployment. In general, it is widely accepted practice that the building blocks and fundamental backend or headend systems required to drive the STB end-user experience are in place well in advance of the STB.

This isn't always possible, though, especially when changes cut across the entire end-to-end system. In such cases, some prefer to do big-bang integration testing; however, I much prefer component-based testing using test harnesses and simulation, leaving the rest to integration testing. This is a topic for another post altogether...

So the last mile of the majority of DTV deployments is the STB system testing, including user experience and final acceptance testing. The team that is usually the custodian of the STB delivery is the STB System Integration team (STBSI). It is this team that carries the burden of producing a stable build, ironing out all the critical issues, and ultimately producing release candidates to recommend for launch.

Typical Milestones in a STB Delivery Project
I've written previously about clearly defining objectives and goals for your STB project, so much so that progress can be quantified and qualified through a measured process. To recap, a STB project will typically include the following Major Milestones:
  1. Start of Closed-User-Group (CUG) Field Trials
    • Headend is available in advance and operational on the live broadcast
    • STB SI have created a build that is functionally complete to all intents and purposes (FC)
    • We are in a position to release this build to enter formalized acceptance testing (ATP QA)
    • We are ready to start the path to Release Candidates, the climb to Launch - i.e. iterative cycles that will be repeated to reach final Product Launch
  2. Start Wider Field Trials
    • Signifies the build is stable, nearing completion and ready for an external audience
    • STB passes all certifications (HDMI, CA, Macrovision, etc.)
    • STB Hardware passes all hardware testing (is the hardware fit-for-purpose, Consumer Acceptance, Safe, Green, etc.)
    • We are getting closer to producing a final Production Build
  3. Produce Launch Build
    • Field Trials drawing to a close
    • STB SI issues are clearing away, last few hurdles to pass through, but not critically blocking launch
    • Finalised Release Candidate almost in hand (RC3)
  4. Clean Run
    • STB SI Produces Launch Build
    • Final ATP QA expected to complete with no Showstoppers
    • Final build sent out to Field Trials to verify critical issues
    • Soft download done to select group of subscribers
      • No critical issues found from the soft launch
  5. Deploy
    • Final Deployment build created by STB SI and provided to Launch Delivery Team
    • Image is broadcast for software upgrade or launch of decoder STB to market (go public in retail stores)
    • Process continues for some time, initially about 2 months
    • Next release is being planned (new features & bug fixes)
For each of the above milestones, the project will have to define entry and exit criteria. See the previous post or the attached presentation. This might seem simple and logical, but STB projects are often complicated by lots of legacy rules, a history of business and project decisions, and frequently more than one target STB/Decoder hardware platform. In cases where more than one STB is involved, typically one chooses a lead STB for launch, with a quick follower - it depends on business objectives, of course.
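To make such entry and exit criteria checkable rather than aspirational, they can be captured as a simple checklist per milestone. Here is a minimal Python sketch of the idea (my own illustration, not the attached presentation; the criteria shown are assumptions):

from dataclasses import dataclass, field

@dataclass
class Milestone:
    name: str
    entry_criteria: dict = field(default_factory=dict)   # criterion -> met?
    exit_criteria: dict = field(default_factory=dict)

    def ready_to_enter(self):
        return all(self.entry_criteria.values())

    def ready_to_exit(self):
        return all(self.exit_criteria.values())

cug_start = Milestone(
    name="Start of Closed-User-Group (CUG) Field Trials",
    entry_criteria={
        "Headend operational on the live broadcast": True,
        "STB SI build is Functionally Complete (FC)": True,
        "Build released into formal ATP QA": False,
    },
)

print(cug_start.ready_to_enter())   # False - the ATP QA release is still outstanding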

Introducing the Release Campaign Concept
I use this concept to bring structure to a STB Delivery Project. Having clearly defined milestones, as well as realistic, clear and unambiguous goals/objectives for each milestone, naturally allows for a simple process that can be executed repeatedly until all the objectives are met. The key points:

  • Structured, Repeatable & Easily Measurable
  • Unambiguous & clear focus on defined Launch Criteria
  • Sequence of repeatable activities to execute almost automatically to produce Release Candidate (RC) builds
    • Planned up-front: Release Campaigns can target a specific milestone, and aim for a fixed duration: Example: I would like to achieve a CUG FC build within Eight (8) weeks of reaching code complete.
    • The release cycle, production of the RC build for each milestone is Incremental
      • Current practice is to use a cycle of 10 working days (2 work-weeks)
    • Involves the buy-in & full participation of the whole team
      • Project & Programme Managers
      • Product Owners
      • STB SI Team
      • Component Vendors (Middleware, Drivers, Application)
      • ATP QA Team
      • SI QA Team
      • Field Trials & Operational Support team
Associated with this is a simple process that Delivery Project Managers can use to plan, based on a set of macro variables (a small worked sketch follows the list):
  • Duration for SI to produce an incremental build (I use 10 days)
  • Amount of time to allow vendors to fix Showstopper defects in pre-candidate builds (can vary between 1-10 days depending on service level agreements)
  • Duration of the Release Campaign (How many SI cycles of build increments to allocate for specific milestone?)
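As a rough illustration of how these three variables combine, here is a minimal Python sketch (my own, not the accompanying tool); treating the Showstopper-fix window as a single block at the end of the campaign is an assumption:

CYCLE_DAYS = 10          # one SI cycle: an incremental build every two work-weeks
VENDOR_FIX_DAYS = 5      # window for vendors to fix Showstoppers in the pre-candidate build
SI_CYCLES = 4            # SI cycles allocated to this Release Campaign

campaign_days = SI_CYCLES * CYCLE_DAYS + VENDOR_FIX_DAYS
print(campaign_days, "working days, i.e. roughly", campaign_days / 5, "work-weeks")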
Below is an illustration of the core variables & typical timeline milestones:
Overview of Time Milestones in a Release Campaign (RC)
The process is repeatable, one Release Campaign flowing to another, with feedback. It is a continuous process of defect fixing, incremental builds & continuous verification leading up to the production of the next candidate build for the next release campaign, as illustrated by the pictures below:
Flow of Release Campaigns
Example Flow from One Release Campaign to Another
As the quality of the builds improves over time (which is expected), the length of campaigns should reduce. This assumes, of course, that the project is in real delivery mode, that all feature development is complete (or the project implements a very strict stability regime), and that the fundamental foundations of the stack have reached an acceptable maturity point.

Release Campaign Model - Tool for the Project Manager
The model allows the Delivery Project Manager to play with a few scenarios, especially the length of the release campaigns, adjusting for bug-fixing and stabilization problems. The PM can then use a tool that, based on fine-tuning a few variables, comes pretty close to producing realistic plans. The fine-tuning can of course also be used to show your stakeholders that the project still has a long way to go (which in general is quite true for almost all STB projects: they are usually too ambitious and poorly planned from inception). The variables are listed below (a scenario sketch follows the list):

  • #Days STBSI need to create & sanitize a build. The STB SI team usually integrate a few components from third parties, put a stack together and run basic sanity, smoke and reliability testing. This usually takes on the order of 3-5 days depending on the complexity of the stack, as well as the maturity and competence of the SI engineers.
  • #SI cycles required to stabilize core software on final hardware. Generally some time is required to get a software stack stable enough on the target hardware (whether it's a mature software stack being ported from previous hardware to new hardware, or a brand new stack on first-time hardware).
  • #Cycles to allow vendors to meet the FC build CUG Criteria. How much time to allocate for the initial Release Campaign to produce a Functionally Complete build, according to the agreed acceptance criteria. This is generally the first time all components are delivered to STBSI as ready for functional integration, and is expected to take the most time.
  • #Cycles from Release Campaign X -> Release Campaign Y. Repeat until happy - the number of RC attempts will of course vary according to the nature of your project. I tend to go with three major campaigns; you can have as many as you like.
  • #Cycles to run & manage Field Trial Testing. As above, typically the field trial testing happens in parallel, and should not be on the critical path. The business might decide to let these test phases run for much longer though, thus ultimately impacting your launch date.
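Below is a rough scenario sketch in Python (again my own illustration, not the accompanying spreadsheet) that simply sums the cycle counts for best, likely and worst cases; all the figures are made up, and field-trial cycles are excluded because they run in parallel:

CYCLE_DAYS = 10   # one SI cycle = two work-weeks

scenarios = {
    # stabilisation cycles, cycles to FC/CUG build, cycles per later campaign, later campaigns
    "best":   dict(stabilise=2, to_fc=3, per_campaign=2, campaigns=2),
    "likely": dict(stabilise=3, to_fc=4, per_campaign=3, campaigns=3),
    "worst":  dict(stabilise=5, to_fc=6, per_campaign=4, campaigns=3),
}

for name, s in scenarios.items():
    cycles = s["stabilise"] + s["to_fc"] + s["per_campaign"] * s["campaigns"]
    days = cycles * CYCLE_DAYS
    print(f"{name:>6}: {cycles} SI cycles ~ {days} working days ~ {days // 5} weeks")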
If you want more detail, please refer to the accompanying presentation.

You're doing it wrong - with Agile there is no need for a separate Stabilization Phase!
There are a lot of people jumping on the Agile/Scrum bandwagon, advocating its use in every software or systems delivery project. Whilst I'm quite a firm supporter of Agile/Scrum, I have to admit that adopting Agile/Scrum in keeping with its true essence is next to impossible in the real world, especially when typical Digital TV Systems Projects involve players from a multitude of vendors, with the PayTV Operator being the end customer and the real Product Owner. Bear in mind also that these independent software vendors have their very own products to support, delivering to multiple customers. To implement a truly Agile STB project requires a significant investment on the part of the PayTV operator, and exceptional buy-in from the vendors; so realistically it is practically undoable - unless the PayTV operator either owns the project in-house, or has a Project Charter or Contract in place that clearly stipulates that the main leadership, drive and overall planning are under the ownership of the PayTV operator... more on this debate later...

So what happens in practice is that software vendors and systems integrators work in relative isolation, and are free to adopt whatever approach is required. The commonly accepted best practice for STB software projects, if managed properly, is a short-lived Integration, Stabilization & Bug-Fixing phase, solely focused on stabilizing the product leading up to launch. We are not saying that the Stabilization phase only appears towards the end of the project, or that component vendors are unable to release stable components prior to this phase: no, not at all. In reality, a STB project depends on a number of pieces of the puzzle fitting together, and often the first time this really happens is closer to the last mile of Component Integration (a.k.a. Systems Integration & Stabilization).

In complex projects that impact the end-to-end DTV value chain, where the Headend and the full STB stack are brand new or fundamentally changed, it's really difficult to avoid a separate stabilization phase. Components are expected to be stable and performant as part of their independent component testing and release processes, but the system only begins to be stressed when all components come together.

The typical, best practice of Systems Integration Management & Delivery is shown below:
Industry Best Practice of STB SI (Courtesy/Permission granted by S3)
The above model follows a structured approach to a STB delivery project: ideally SI (System Integration) is kicked off from stable foundations, then becomes a series of cycles that is executed until the system is fit for further Field Trials / Certification. Essentially these SI cycles are the Integration & Stabilization phase of the project. What also happens in reality is that Certifications / Field Trials kick off at a suitable SI cycle and run in parallel, so the process is not strictly sequential - i.e. not Waterfall.

As much as I'd like to see a STB project implemented from start to finish using Agile/Scrum, I've been involved in enough DTV Systems Projects to know it is extremely difficult and challenging, incurs a huge management and administration overhead, is costly and requires mammoth coordination - but it is not impossible! I've seen some projects come close to doing end-to-end development and integration incrementally, adopting as many Agile principles as possible, but not really wearing the badge of pukka Agile (see earlier posts).

In the real world, when benchmarking and signing off a system that's going to generate millions of dollars in revenue, one cannot avoid controlling the acceptance of the product by implementing a fairly strict process for stabilization, testing and final acceptance.

Conclusion
I have shared a simple model for managing and planning STB software release schedules that a STB Delivery Project Manager can use. This is not a theoretical model devoid of real-world experience or case studies. I have seen this model used in the past and, though it is not publicly available, I've seen many good PMs use the technique naturally. I am using this model to manage some of the projects I'm currently running. I believe it is probably the first time somebody has ventured to share this model, and also the first time a free tool has been provided to assist with planning and modeling release scenarios. I have worked with, and learnt from, brilliant managers who apply this planning model instinctively without depending on tools; they have excellent instincts born of delivering many real-world projects - I'm grateful to have worked with such giants (BK, DD, NT, ST, JC, MK)...

Some hardcore engineers might retort with the classic "How long is a piece of string?" argument. Sorry, but we poor Project Managers need some way of measuring and predicting the output of a project, and therefore must rely not only on analytics (like defect trends/prediction) but also on tools based on insight, intuition, wisdom and lessons learnt from past projects. This tool is born of that experience - I am confident in its use and the value it could bring to your project. The recurring, iterative cycles of the model make it easier to answer the length-of-string problem: we have to provide an expectation based on some reasoning, disclosing all the risks and uncertainties upfront, of course...

If you found this info useful or would like to learn more, or bounce ideas, or even share your own experiences from your own PM toolbox, do drop me a line or two! :-)

Download the Free Planning Excel Tool!
If you've read this far, that's great! In my next post I'll share a powerful planning tool based on this release campaign model. You can use this tool to model Best Case, Most Likely (Realistic), Worst Case and the resulting 3-Point Estimate - to help you not only plan, but visualize the Climb to Launch, offering you many options to tweak your planning. Stay tuned...
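For reference, a common way to fold the Best Case, Most Likely and Worst Case figures into a single 3-Point Estimate is the PERT-style weighted mean. I'm assuming the tool uses a similar weighting, so treat this Python fragment as a sketch rather than the tool's actual formula:

def three_point_estimate(best, likely, worst):
    # PERT-style weighted mean: (O + 4M + P) / 6
    return (best + 4 * likely + worst) / 6

# e.g. 90, 160 and 230 working days to launch, as in the scenario sketch earlier:
print(round(three_point_estimate(90, 160, 230)))   # ~160 working days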

Wednesday, 10 October 2012

Model for Software Release Schedule Process


In keeping with my wish to share the tools I've come to use in managing my projects, in this post I plan to share a simple yet effective template for planning, coordinating and managing the Software Release Schedule (a.k.a. Bill of Materials, BoM).

This process is typically owned by the party responsible for System Integration or for producing the final software deliverable. Again, my context is a Set-Top-Box (STB) software project, where the system basically consists of a number of software components, typically provided by independent component providers. For instance, the common building blocks in a STB software stack are listed below (a small illustrative listing follows the breakdown):
  • Physical Target STB Hardware Device (Decoder) with Platform Software
    • Platform Software is basically:
      • Firmware - Bootloader
      • Operating System - nowadays mostly Linux Kernel
        • Customised by Chipset Provider, e.g. Broadcom's own Linux Toolchain
      • Device Specific Drivers complying with a Middleware Interface (Macro - OS)
  • Middleware Component
  • Virtual Machine Component (VM Engine)
  • Electronic Programme Guide (Primary User Interface / Application component)
  • Interactive Applications
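Purely as an illustration, the stack above can be captured as a versioned component list - the raw material a Bill of Materials is built from. The suppliers shown are the placeholder roles used on this blog and the version numbers are made up:

stb_stack = [
    {"component": "Bootloader firmware",      "supplier": "[STB MANUFACTURER]",    "version": "1.2.0"},
    {"component": "Linux kernel + toolchain", "supplier": "Chipset provider",      "version": "2.6.x-bcm"},
    {"component": "Platform drivers",         "supplier": "[STB MANUFACTURER]",    "version": "3.4.1"},
    {"component": "Middleware",               "supplier": "[MIDDLEWARE PROVIDER]", "version": "5.0.7"},
    {"component": "VM engine",                "supplier": "[MIDDLEWARE PROVIDER]", "version": "2.1.0"},
    {"component": "EPG / primary UI",         "supplier": "[EPG UI DEVELOPER]",    "version": "0.9.3"},
    {"component": "Interactive applications", "supplier": "[INTERACTIVE APPLICATIONS DEVELOPER]", "version": "0.4.0"},
]

for entry in stb_stack:
    print(f'{entry["component"]:<26} {entry["supplier"]:<40} {entry["version"]}')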
If you would like more information on what goes on in DTV Software Projects, just search for "DTV" on this site. I have previously written about DTV Architecture, STBSI and Agile Development Methods.

Because these components may come from a variety of sources, there is usually a need for assigning the role of [STBSI], or Set-Top-Box System Integration. This team is responsible for producing the system software build that will eventually be deployed by the customer, the [PAYTV OPERATOR].

[STBSI], in my opinion, is core to the success of any project. They have ultimate responsibility and accountability for ensuring a fit-for-purpose product is assembled and deployed. They have the freedom to drive and manage the independent [COMPONENT VENDORS], and are often delegated the role of customer, i.e. [STBSI] has the [PAYTV OPERATOR]'s best interests at heart, and often makes the necessary judgement and priority calls on the customer's behalf. However, this is not always the case, as some [PAYTV OPERATOR]s have a strong Product Management team, where the [PRODUCT OWNER] works with [STBSI] in determining the priorities, etc.

So what's a Release Schedule or BoM anyway?
I've seen projects executed through the classic methods of Waterfall and Spiral/V-Model, and more recently through Agile, iterative models of project delivery. Once the early development/integration phase is nearing completion, STB projects in particular (despite using Agile) generally face a massive effort of further integration and stabilisation. In this phase, the component releases need to be carefully controlled, and clear communications must be maintained with all parties, to ensure the correct issues are worked on, the priorities are made clear, and everyone understands the goals [STBSI] is ultimately aiming for.

So there is a need to maintain a Software Release Schedule or Bill of Materials that is executed in a timeous, methodical manner to produce software component releases. There needs to be some structure and method that all parties understand and agree on as a way of working. This then becomes a blueprint, a regular heartbeat that is executed timeously; it removes ambiguity and provides a sense of clarity and control.

Brief Walk through of the Tool
I wish I could take credit for conceiving the original visualisation, but this is actually something I picked up from a previous project four years back; I've since modified it and added my own enhancements to produce a generic model for my PM Toolbox, for reuse on all future projects:
Generic Model for Software Release Schedule (BoM) Process
This visualization should be self-explanatory. If you don't think it is, then I've failed in communicating the essence of the process, which will make it difficult for people to pick up and reuse. If you need more info, please get in touch.

This model assumes a timeline, as shown by the main arrow in the centre - showing a timeline of working days for the period leading up to a specific release iteration point (a.k.a. sprint or release cycle), during the cycle, and post the release cycle.

More projects are adopting some form of Agile, the basis being iterative, incremental release points. The typical STBSI release cycle is [TWO] weeks; it can be more, say three weeks, or less, about a week. It all depends on the strength of your STBSI team, the maturity and stability of your components, and the willingness of your component vendors to be flexible in producing regular release drops. Traditionally DTV component providers, especially Middleware providers, have been very poor at delivering continuous incremental releases - which has been the bane of many PayTV Operators' projects, because the Middleware providers lock the customer into their own rigid and slow processes rather than bend to current norms of flexibility. Times are changing now, and these component vendors are realising their mistakes because PayTV operators are seeking alternatives - features must be deployed in the order of a couple of months rather than at least 6-12 months...

Coming back to the process, in a nutshell (a minimal bookkeeping sketch follows the list):
  • Assumes a Release Cycle is in the order of TWO working weeks, or 10 working days
  • The work or backlog for the next release cycle is governed by the Release Schedule or BoM
  • The BoM is produced leading up to the Next Cycle, a few days in advance to allow Vendors to plan
  • The BoM is baselined at the start of the Release Cycle
  • The BoM is subject to change based on any new Showstopping defects the Project uncovers
  • Vendors are continually monitored to track the progress with burning down the issues assigned
  • Any deviation to the agreed BoM is subject to Programme/Management interventions
  • Vendors release their components at the end of the Release Cycle subject to meeting the Acceptance Criteria as defined by SI
  • [DTT] - is a placeholder for your own Defect Tracking Tool
  • [STBSI] - is a placeholder for your Integration Team
  • [Vendor] - can be used as a placeholder to replace if you only have one vendor, otherwise keep it generic
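The bookkeeping behind this can be kept very simple. Here is a minimal Python sketch of a baselined BoM line and its Showstopper burn-down (my own illustration, not the actual process template; the [DTT] defect IDs and versions are invented):

from dataclasses import dataclass, field

@dataclass
class BomLine:
    component: str
    vendor: str
    target_version: str
    showstoppers: set = field(default_factory=set)   # open [DTT] defect IDs assigned
    fixed: set = field(default_factory=set)

    def meets_acceptance(self):
        # acceptance criterion used for this sketch: no open Showstoppers remain
        return self.showstoppers <= self.fixed

# BoM baselined at the start of the 10-day Release Cycle
bom = [
    BomLine("Middleware", "[Vendor]", "5.0.8", showstoppers={"DTT-1201", "DTT-1244"}),
    BomLine("EPG / UI",   "[Vendor]", "0.9.4", showstoppers={"DTT-1302"}),
]

bom[0].showstoppers.add("DTT-1290")          # new Showstopper uncovered mid-cycle
bom[0].fixed |= {"DTT-1201", "DTT-1244"}     # vendor burns down assigned defects
bom[1].fixed |= {"DTT-1302"}

for line in bom:
    status = "releasable" if line.meets_acceptance() else "blocked"
    print(f"{line.component:<12} {line.target_version:<8} {status}")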

Is the Tool/Process Effective? Surely this goes against Agile? Is this Not PM Overkill?
I believe the complex, variable and unpredictable nature of a STB project calls for some processes that provide a sense of control. This is especially true for STBSI teams who are new to the SI process and are in a state of chaos. I've seen this on past projects, and on introducing the tool, it brought a sigh of relief to the project as it allowed all players to settle on a common understanding and goal. So yes, I believe the tool, as a visual representation, is always a useful way of communicating a method of working.

Is the process effective? It is as effective as the team responsible for executing it. Like any process, it requires some management, administration and diligent participation from all impacted players. It certainly can make a Release Manager's life much easier, and planning more predictable. There is a time and a place for instigating this process: used prematurely, the BoM process might not be very effective, and could become a hindrance more than anything else.

Agile: First, I don't believe Agile or Scrum works well in the context of a STB project delivery without a massive investment in management, contractual agreements between vendors, etc., such that the delivery is managed end-to-end in an Agile manner. I will expand in a future post on why I think Agile will not work in Digital TV Systems projects. The BoM process is iterative, and is aimed at producing incremental releases of sound functionality and stability. There is a backlog, called the release schedule. Work is broken down and executed in release cycles, two-week sprints. So the concepts of Agile are present, but the fundamental practice of leaving bug-fixing/stabilisation toward the end of the project cycle, after the initial development/integration cycle, indeed disobeys a fundamental rule of Agile/Scrum: delivering a shippable, stable product at the end of each iteration - which, if achieved, would not call for a separate stabilisation/bug-fix stage... again, this is a topic for another post.

PM Overkill? To some yes, it might be. But to others, this brings a sense of clarity and purpose to a project delivery team. It sets the stage for Release Management, communicates the rules and behaviours expected from all parties, and establishes a baseline for communication and project escalation. So no, I don't believe this is Project Management Overkill!

Monday, 8 October 2012

Meeting Tom Gilb & his take on Competitive Engineering


Last week I attended a two-day masterclass workshop on Competitive Engineering by the world-renowned, and much respected, Tom Gilb. Taking a paragraph out of the training brochure from Secolo Consulting (on the book Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage):

Tom Gilb is a competitive engineering consultant mainly serving multi-national clients in improving their organizations and methods. He works with major multinationals such as Boeing, Bosch, Qualcomm, HP, IBM, Nokia, Ericsson, Credit Suisse, Sun Open Office, Microsoft, US DOD, UK MOD, Symbian, Philips, BAe, Intel, Citigroup, Telenor, BAA, Den norske Veritas, Schlumberger, Tektronix, Thales, GE, GSK, and many others - including smaller organizations such as Confirmit, University of Trondheim IT, Enlight, Avenir, Clearstream, UECC... Gilb is the author of 9 books; his latest book is "Competitive Engineering: A Handbook for Systems Engineering"...

I had heard of Gilb before, and have had, for a very long time, a few of his books on my Amazon wishlist, namely Software Metrics and Software Inspection, as these books come well recommended by other famous people in software. But admittedly I had held off buying Gilb's books because they were written way back, pre-1990, before my time - surely there can't be anything of relevance in those books?? They must be outdated, filled with stuff about structured and sequential programming techniques and the classic, traditional Waterfall methodology of project estimation, implementation and delivery... why bother with these old relics when you can learn from what's current and trending...


How naive was I? Just as my eyes were opened when I read The Mythical Man-Month, realizing that the challenges faced in the early days are still very much relevant today, my eyes were opened again on being enlightened by the ideas presented by Gilb in person - not through reading his work, but through face-to-face, personal human interaction. It was truly an honor and a privilege to have spent two days with Tom...


So what did I take from this course then?
The workshop was really about navigating through the book, Competitive Engineering, discussing ideas, and spending time on a few sample case studies and Tom's war stories. It was basically a seeder, an introduction to share the concept, to instigate new ideas and ways of thinking, and to introduce the fundamental mindset changes required of people and organizations alike.

We had no time to do any practical tutorials - it was very much a one-directional push from Gilb; we just had to listen, absorb and catch any ideas worth following up.

Of course, I need to give the book at least two reads; it is very dense and probably requires a third read to be fully grounded. At the same time, I need to choose a few concepts and start applying them on my current projects. Whilst the ideas are fresh in my head, my takeaways are:
  • Value-driven project delivery. The idea of delivering incremental value is not new; I've always tried to work in this fashion, especially with my uptake of Scrum and Agile. However, the fact that Gilb started this technique back in the seventies or eighties was enlightening: Agile is not a new concept.
    • The tools and methods used to kick off a value-based delivery project are really interesting and require more focus and experimentation.
    • Impact Estimation Table - seems to be a very powerful, multi-dimensional view for establishing the core values of the project/architecture/design. Everything starts off with the Impact Estimation Table (a much-simplified sketch follows this list).
    • I need to look at the synergies of applying some of the rigor and discipline of generating the Impact Estimation Table whilst simultaneously mapping it onto existing Scrum/Agile methods of creating the Product Backlog.
  • Quantification is Key. Everything can be quantified. You're either too lazy or too stupid, or really ignorant (or can afford not to) to spend the time and energy in quantifying your core project requirements / architecture / design. If a thing is important and of value, then it must be Quantified
  • Evolutionary Project Management or Evo. This is the concept invented by Gilb which really implements the techniques for iterating on value-based deliveries. Deliver incremental value that can be easily measured on a week-by-week basis. Involves clearly defining the definition of done, which again is borne from the Impact Estimation Table.
  • Concepts such as Credibility, Evidence, Past Metrics. Gilb's tools force you to dig deep, to really drill down into the details, quantifying everything as much as possible. I was going to share the generic Risks Register template that I use for most of my DTV projects; I will share the first version, and then follow up with an updated version that introduces concepts in the Risks Register such as: How credible is this risk? Do you have proof this risk happened before? What was the past project's benchmark, if any? All of these factor into determining the overall weight of a risk. Isn't it obvious the project should do all it takes to mitigate a risk if it was raised by someone who is an expert in the field and hence very credible, has enough evidence to back it up, and has a sample of data supporting it??
  • Planguage. Planguage (Planning Language), pronounced "Plan-guage", is an invention of Gilb's that is really a culmination of his 50+ years of experience in the industry. He has created a language, complete with a syntax for both words and pictures, that is used to describe his method of planning and is used in all the related tools. There are a lot of handbooks out there, standards and configurations, templates and processes - Gilb firmly believes these are all OK, some might be downright dismal, but they all lack one fundamental attribute: quantifying quality...
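To make the Impact Estimation idea slightly more concrete, here is a much-simplified sketch of how I understood it from the workshop - not Gilb's full Planguage notation, and the objectives, design ideas and numbers are all invented:

objectives = ["Boot time under 30s", "Channel change under 1.5s"]
print("Quantified objectives:", ", ".join(objectives))

design_ideas = {
    # idea: ([estimated % of each objective's improvement target achieved], relative cost)
    "Lazy-load interactive apps": ([60, 10], 3),
    "Driver-level channel cache": ([5, 70], 5),
}

for idea, (impacts, cost) in design_ideas.items():
    total = sum(impacts)
    print(f"{idea:<28} total impact {total}%  cost {cost}  value/cost {total / cost:.1f}")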
I have a lot more background work to do now, but I believe there's enough value and merit in testing these ideas out. During the course I googled, in real time, for any expert critics refuting Gilb's principles and methods - and came up short. Gilb has an unbelievable track record and tenure, he is well respected in the industry, and his work has pioneered much of the quality process in Systems/Software Engineering - if you're serious about engineering principles and about delivering value across the board, then you're stupid not to give this guy's methods a chance...

My tweets during the course...






Monday, 1 October 2012

Milestones Template for Managing & Tracking a Digital TV Programme


Do you keep a PM Toolbox close to hand?
One of the most common questions I ask candidates interviewing for Project Management or Programme Management positions is about maintaining a personal set of tried-and-tested tools you've come to depend on in your career of managing projects: Do you have any tools or templates that you keep with you as a standard reference, such that you can hit the ground running on your new project? If so, what kind of tools do you most value and cherish, and why?

Surprisingly though, most of the people I've interviewed settle for the generic tools promoted by the various PM methodologies (PRINCE2, PMI/PMBOK, Scrum) - I've come across very few people who keep an arsenal of personal tools ready to hand to pick up and use on their next project. I too am guilty as charged, very often relying on memory and previous project documents as a starting point. I have, however, started to collect my own set of tools, which I'm willing to share with everyone in the interest of knowledge-sharing and collaboration, starting with this post on a Template for Tracking Milestones on a Digital TV Programme.

When I was a developer back in the day, I had my own set of code samples that were generic enough to be reused, my recommended debugging tools, libraries, parsers, open source projects, and so on - along with trusted books on algorithm design, software architecture patterns and system utilities: necessary tools for the job, like any other trade. So too, people involved in the various aspects of Project Management could keep their very own toolbox to help speed up the much-repeated but necessary aspects of the job.

A Preview of my PM Tools Backlog
Because my background really is in Digital TV Systems Projects - an area that still has a lot of life left in it over the next 10-15 years, especially since Digital TV (DTV) is really only just beginning to fire up in the emerging markets of Africa, India and China - there will still be a need and demand for such Project Managers. I aim to share as much knowledge as I've gained through practical experience by making available the tools I've come to trust and use in both past and current projects alike:
  • Generic Model for Tracking Milestones in a DTV Programme (this post)
  • Programme Model for Staged Deliveries for a DTV Launch
  • A Modeling tool to Estimate & Predict Realistic Time to Market (Climb to Launch)
  • Set-Top-Box Software Integration Release Schedule Process
  • A Generic Risk Register for a Typical DTV Programme
  • A Template for a typical DTV Project Charter or Software Development Plan
  • A Template for Managing a DTV Project based on an Agile Product Backlog
  • RAG Template Options
  • Models for Programme Organizational Structure
  • Methods for Controlling Quality: Effective Defect Management
  • Methods for Managing STB Software Development Processes
  • Template for Defining Bootloader Development Work-Stream
  • DTV Projects Matrix - A guide to understanding the permutations & combinations of system impacted components in a variety of different projects, arranged by complexity and rough estimations of project life-cycles
So I intend to release a tool as often as I can, in the hope that other Project Managers can use, learn, benefit and adapt these tools in their own day-to-day projects...

PM Toolbox #1: Generic Milestones Template for DTV Programme Management
This tool is all about summarizing and tracking the various Milestones that must be met in order to achieve a successful DTV project delivery. It is essentially the fifty-thousand-foot view of the high-level work packages (or Work Breakdown / Sub-Projects) that make up the various streams of delivery in the overall Programme.

The tool serves the purpose of communicating clearly the status of the overall Programme, which is what senior stakeholders are interested in. It also serves as the master project plan for the entire programme, to which dependent sub-projects or work streams must deliver. Whilst it might come across as a simple table - a list with columns - it is nevertheless a powerful tool for managing and tracking the overall project.

Download the Tool!
Please click here to download the tool. It is simply a Microsoft Excel spreadsheet that provides a list of milestones; for each milestone, we track the following criteria (a minimal sketch of how one such row might be represented follows the list):

  • Milestone Name / Title - The actual event that manifests as an identifiable achievement of a goal
  • Company Responsible - The company or vendor who is assigned responsibility and accountability for delivering the result
  • Actual Plan - Tracks the current forecasted planned delivery date
  • Original Plan - Tracks the baselined original dates at the inception of the project
  • Acceptance Criteria - For each milestone, the agreed acceptance criteria (at high level)
  • Dependencies - For each milestone, detail the associated dependencies the milestone has, preventing successful completion
  • RAG Status - An agreed RAG convention to be used to summarize the status of the milestone
  • Recovery / Mitigation Actions - A set of actions to be taken to implement recovery scenarios
  • RAG Comments - A note on the current state of affairs
  • Project Work-Stream Owner - The name of the Project Manager or Owner assigned responsibility and accountability for delivering on that milestone
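A minimal sketch of how one row of this tracker might be represented (an assumption for illustration, not the actual spreadsheet; the dates and values are invented):

from dataclasses import dataclass
from datetime import date

@dataclass
class MilestoneRow:
    title: str
    company_responsible: str
    original_plan: date
    actual_plan: date
    acceptance_criteria: str
    dependencies: str
    rag_status: str            # "R", "A" or "G"
    recovery_actions: str
    rag_comments: str
    workstream_owner: str

row = MilestoneRow(
    title="Middleware Component Functionally Complete",
    company_responsible="[MIDDLEWARE PROVIDER]",
    original_plan=date(2012, 11, 30),
    actual_plan=date(2013, 1, 15),
    acceptance_criteria="All FC features pass SI smoke tests",
    dependencies="Driver release from [STB MANUFACTURER]",
    rag_status="A",
    recovery_actions="On-site ORIT support from the vendor",
    rag_comments="Slipping against baseline; recovery plan agreed",
    workstream_owner="STB SI Project Manager",
)

slip = (row.actual_plan - row.original_plan).days
print(f"{row.title}: RAG {row.rag_status}, slip of {slip} days against baseline")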

The table is pre-populated with the standard set of milestones I usually expect from a typical DTV project. I have also pre-filled example scenarios for Acceptance Criteria and Dependencies, as well as listing typical Recovery Scenarios. The tool comes with an information page that explains how to replace the template content with actual project data.

What is a Digital TV Programme: Players, Concepts, Milestones?
In order to appreciate the nature of a Digital TV Systems Project, you have to first indulge in a brief overview of what a Digital TV System entails, the players involved and the typical project configurations expected:

So now that you've read this background material, we can start exploring the make-up of the Milestones Tracking for a typical DTV Project:

The template assumes a DTV project that impacts the entire end-to-end system of a PayTV Operator's value chain. Not all DTV projects encompass such massive changes across the board: some projects involve just Set-Top-Box (STB) software-only changes, others might only involve introducing a new STB hardware device (decoder) with no changes to software. There are also headend projects dealing with routine maintenance and enhancements. I plan to create and share a matrix of typical DTV project scenarios one day...

The tool assumes a big-bang project that involves the following players:

  • [PAYTV OPERATOR] - This is generally the PayTV Operator that is implementing the project to deliver or enhance their existing systems (e.g. BSkyB, DirecTV, Multichoice, etc.).
  • [STB MANUFACTURER] - The designated company responsible for the hardware design and manufacture of the client Set-Top-Box (STB) (a.k.a. Decoder), supplying related device Software (Drivers).
  • [MIDDLEWARE PROVIDER] - The company or service provider responsible for developing the software operating system for the STB, including providing value added services that enables rich Application Development for STB User Experience (i.e. the Electronic Programme Guide / EPG).
  • [EPG UI DEVELOPER] - The company or service provider a [PAYTV OPERATOR] usually employs to develop the STB client application or user experience. There is a growing trend with PayTV Operators electing to do this development in-house.
  • [INTERACTIVE APPLICATIONS DEVELOPER] - The company or service provider a [PAYTV OPERATOR] employs to implement value-add interactive applications that are built on the [MIDDLEWARE PROVIDER]'s platforms. Increasingly, PayTV operators are electing to do this development in-house, to contract the work out to the same [EPG UI DEVELOPER], or indeed to contract a third-party application development house.
  • [HEADEND PROVIDER] - The company or service provider responsible for developing the backend or headend systems. Typically this is the same as [MIDDLEWARE PROVIDER] since the technologies are inter-related, but there are some PayTV operators that prefer open standards and choose to use a variety of suppliers to manage their risks (i.e. not to have all their eggs in one basket).
  • [SYSTEMS INTEGRATOR] - The company or service provider with sole responsibility and ownership for proving that components and systems are well integrated and deliver on the desired functionality and expectations of the programme. There are many levels of System Integration, depending on the [PAYTV OPERATOR]'s budget, etc. Typically though, it seems that [PAYTV OPERATOR]s are aiming to be more independent from their service providers, taking the plunge of owning not only the System Architecture but also the System Integration delivery aspects of the programme.
With these players in mind, the Programme can then be broken down into the following high-level concepts, which translate into detailed deliverables or milestones (the detailed breakdown of the plan is a topic for another post):
  • STB Manufacturer Hardware Delivery
    • Hardware Design Specification
    • Remote Control Design Specification
    • Packaging & Artwork Specification
    • Hardware Acceptance Testing
  • STB Software Delivery
    • Device Firmware Bootloader Software Ready
    • Device Hardware Platform Driver Software Ready
    • Middleware Component Functionally Complete
    • UI / EPG Component Functionally Complete
    • Interactive Applications Functionally Complete
  • Backend / Headend Delivery Functionally Complete
    • Broadcast Components Operational
  • Systems Integration
    • STB Integration Functionally Complete
    • Headend Components Integrated
  • Launch / Deploy Product
    • All Acceptance Criteria Met
    • System has acceptable level of defects
    • Business decision granted for Launch
This is just a summary of the deliverables one can track (sketched below as a simple breakdown). The tool provides a sample of 34 Milestones that drill down into the finer details.
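Purely as an illustration of how such a breakdown expands into individual milestones, here is a small Python sketch that mirrors the summary above (not the actual 34-milestone spreadsheet):

work_breakdown = {
    "STB Manufacturer Hardware Delivery": [
        "Hardware Design Specification", "Remote Control Design Specification",
        "Packaging & Artwork Specification", "Hardware Acceptance Testing",
    ],
    "STB Software Delivery": [
        "Bootloader Software Ready", "Platform Driver Software Ready",
        "Middleware Functionally Complete", "UI / EPG Functionally Complete",
        "Interactive Applications Functionally Complete",
    ],
    "Backend / Headend Delivery": ["Broadcast Components Operational"],
    "Systems Integration": ["STB Integration Functionally Complete", "Headend Components Integrated"],
    "Launch / Deploy Product": ["All Acceptance Criteria Met", "Acceptable defect level", "Business decision to launch"],
}

milestones = [f"{stream}: {item}" for stream, items in work_breakdown.items() for item in items]
print(len(milestones), "high-level milestones")   # the real spreadsheet drills down to 34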

In the next post I will spend some time explaining what some of these milestones mean, and how one should realistically go about planning the various work-streams, such that dependencies are minimized whilst simultaneously running multiple sub-projects in parallel.


Summary
This is a first draft, a brief attempt at sharing what I think is a powerful tool to aid Project Managers in managing Digital TV projects. I have, for now, assumed the reader is familiar with the territory and have therefore excluded explanations of what the different sub-projects entail, etc. My aim was to share this tool with Project Managers already practicing in the DTV trade, to solicit feedback and share new ways of working.
Some people might think I'm giving away some of my crown jewels by sharing this knowledge, that I'm forsaking my competitive advantage, etc., but I don't think so. I've laid out a simple framework that can be useful to others. There are many templates and tools out there to aid Project Managers; the success of a project delivery lies not in the templates, but in the project manager's knowledge, wisdom and tenacity in execution and implementation...