Wednesday, 28 March 2012

Effective Defect/Quality Management...


Continuing with the theme of highlighting the core tenets of managing a Digital TV Programme, having previously discussed several ingredients of a typical delivery in earlier posts.
In this post, I'll share my experiences with yet another stream of activity that, in my opinion, is of critical importance to guaranteeing a successful project delivery: Effective Defect/Quality Management. Viewed through the lens of Software Quality Assurance/Management best practice as described by SWEBOK (Chapter 11, Software Quality Management), this is an area that could be managed by a dedicated Quality Assurance Group. More often than not, though, it falls to the Programme Delivery Team to monitor, control, track and manage the quality of component deliveries from a variety of vendors into the programme. It typically comes with the role of a QA Programme Manager if you can afford one; otherwise it is collectively owned by the Project Team.

Depending on the nature, size & complexity of the product/programme, the programme manager, apart from managing the programme's planning and delivery schedule, also has to factor in defect management, thinking about the following: 

  • Do you have someone own the Defect Management activity as a sole activity?
  • Is this activity shared between all the project managers on the programme?
  • Do you treat defect management as mostly a co-ordination and monitoring/reporting activity?
  • When is the right time to introduce Defect Management and Tracking?
  • How hard do we push for Quality Metrics throughout the life-cycle of the programme?
  • What systems are in place to make defect management less of an administration overhead?
  • How do you ensure that all vendors (internal & external) speak the same language w.r.t. defects?
  • Do your vendors factor in defect fixing in their planning, and how do they go about doing it?
  • How much time should your programme schedule allow for defect fixing and stabilisation?
  • How can you manage the expectations upwards to the executive sponsors using predictive analysis and statistical probabilities to target a more realistic launch date based on defect metrics?
  • How do you convince your vendors that instrumenting defect management metrics & reporting is useful as a delivery/schedule tracking mechanism?
  • How do you as a programme manager influence the development/release processes of your development / SI vendors to more effectively deal with quality issues?
  • How do you as the programme manager influence the product owners in accepting your quality delivery criteria?
  • How can you re-use metrics from past projects as inputs to your delivery schedule?
I will not attempt to answer all of the above questions; instead I will focus on an overall strategy that seems to work in the real world, at least from my experience. There is the theory you can find in all the software books on Quality Management and in the institutes and bodies of knowledge (SEI, SWEBOK, CMM, ISO 9xxx, Wikipedia) -- and then there is the reality of doing what it takes to get a Digital TV project delivered. Essentially this boils down to doing what is sufficient, good enough and fit-for-purpose as defined by your business objectives. In this post I will highlight real-world practices that exercise the quality principles outlined in the theory and are practical enough to implement in delivering a quality DTV product.

Regardless of whether your business is a software vendor delivering components to an Operator, or an Operator managing deliveries from several component vendors, the topics discussed in this post are relevant to both.  As a software component vendor it is in your best interest to prove to your customer, the PayTV Operator, that your products are built on sound engineering principles, your quality assurance is second to none, and you can guarantee a level of component quality that more than meets your customer's expectations.

As an Operator, you are so busy managing your operations -- your business is much larger than just managing new technical projects -- that you're forced to impose quality criteria on your projects.  If this is not something you're already used to doing, then IMHO you're overlooking a crucial element of your project's success.  Where relationships with vendors are good, and proven through past experience and deployments of past products, you can rely on an element of trust.  But where you're taking a risk in adopting new technologies, it is your responsibility to ensure that Quality Criteria are well defined, acceptable - and ultimately accepted by your vendors.  If you haven't done this already, then you need to reconsider, especially when expectations are high.

This is a massive post, broken up into several topics, starting with real-world project experience.

Battle-scars (Real-world Project Experience)
Taking a chapter from a past project experience around 2007-2010: an Operator embarked on a programme to replace the HDPVR software deployed across its existing subscriber base with new technology from a different Middleware vendor (the company I worked for; I was Development Owner for the Middleware). This new technology was not yet proven in the market and boasted advanced concepts and radically short time-to-market promises.  Over 2 million people would be affected by this migration, so there was no room for error. Think of the project as replacing all four wheels of your car and overhauling the engine whilst the car is still in motion: a seamless transition from one operating system to another, a new user interface, with all users' recordings & preferences restored.  To the user, the migration would be unnoticeable apart from an upgraded UI. There was an additional requirement that roll-back to the previous Middleware must also work in the event of disaster recovery.  (Suffice to say, the project was a success and no roll-back was required.)

The stakes were high on both the operator side and the vendor side: the migration posed a significant business risk to the Operator and a massive reputation risk to the vendor.  Both Operator & Vendor worked very closely together, with complete transparency.  It was recognised during the early stages of the project that significant effort and investment in processes and tools was required from the start.  There was also significant upfront investment in resolving the end-to-end architecture and in detailed planning (based on Agile principles, though a massive departure from the classic texts on Agile).

Both vendor and operator recognised the drive for better quality; the project's contract included the following goals for defects, to be measured at each output stage of the project:

  • Zero Showstoppers per Component 
  • No more than 3 Major Defects per Component
  • No more than 10 Minor Defects per Component 

The above criteria were imposed on the STB system alone. Recall that the STB is a system that largely consists of a User Interface (EPG), a Middleware (the operating system of the STB) & Platform Drivers.  The defect criteria applied to those STB sub-systems right down to individual component level.  Take, for example, the Middleware sub-system: the Middleware was componentized, broken down into more than seventy components, each owned by a component team. Each component had to be tracked against discrete quality deliverables - metrics were mined on a continuous basis and trends reported as part of the Defect & Quality Management process.  The same diligence was equally applied to Headend & Backend components.

The Operator went further, imposing the following on each component - read each bullet below as "Each component..." (a minimal sketch of automating such a gate check follows the list):
  • Must have a Component Requirements Specification Document 
    • Each component requirement must map into a high level Product Use Case
  • Must be testable
    • Requirements-to-test-mapping matrix document
    • Automatic component unit tests must cover component requirements
    • 100% Test coverage of Requirements-to-tests
    • Must be testable in isolation - component unit test
    • Must be testable in a group - component group testing
    • Must be testable as a subsystem -  Middleware  tests
    • Must be testable as a system - full stack system tests
    • Test results must be attached to release notes for each release
    • Must have no Regressions - 100% Component Tests Must Pass
  • Must have an up-to-date API Interface Control Document
    • APIs must be testable and map to a component requirement
    • Higher level APIs must map to high level product use cases
  • Must have a Component Design Specification Document
  • Must have an Open Source Scan Summary Report
  • Must have Zero Compiler Warnings
  • Must have Zero MISRA Warnings
  • Must exercise API coverage 100%
  • Must test Source Code Branch Coverage, results at least 65%
  • Must test Source Code Line Coverage, results at least 65% 
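To make criteria like these enforceable rather than aspirational, it helps to automate the gate check as part of the release process. Below is a minimal sketch of how a per-component quality gate could be evaluated from a release report; the field names, thresholds and component data are illustrative assumptions, not the project's actual tooling.

```python
# Hypothetical per-component quality gate check (illustrative only).
# Field names and the example report are assumptions, not the project's actual tooling.

from dataclasses import dataclass

@dataclass
class ComponentReport:
    name: str
    compiler_warnings: int
    misra_warnings: int
    api_coverage_pct: float      # % of public APIs exercised by tests
    branch_coverage_pct: float   # source code branch coverage
    line_coverage_pct: float     # source code line coverage
    tests_passed: int
    tests_total: int

def gate_failures(r: ComponentReport) -> list[str]:
    """Return the list of quality-gate criteria this component fails."""
    failures = []
    if r.compiler_warnings > 0:
        failures.append(f"{r.compiler_warnings} compiler warnings (target: 0)")
    if r.misra_warnings > 0:
        failures.append(f"{r.misra_warnings} MISRA warnings (target: 0)")
    if r.api_coverage_pct < 100.0:
        failures.append(f"API coverage {r.api_coverage_pct:.1f}% (target: 100%)")
    if r.branch_coverage_pct < 65.0:
        failures.append(f"branch coverage {r.branch_coverage_pct:.1f}% (target: >=65%)")
    if r.line_coverage_pct < 65.0:
        failures.append(f"line coverage {r.line_coverage_pct:.1f}% (target: >=65%)")
    if r.tests_passed < r.tests_total:
        failures.append(f"{r.tests_total - r.tests_passed} failing tests (target: 100% pass)")
    return failures

if __name__ == "__main__":
    report = ComponentReport("streaming_manager", 0, 3, 100.0, 71.2, 68.4, 498, 500)
    for problem in gate_failures(report):
        print(f"[GATE FAIL] {report.name}: {problem}")
```

Run against each component's release report, this turns the contractual criteria into a pass/fail list that can be attached to the release notes.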
With this degree of focus on quality, which set out the expectations of the delivery in clear terms, the vendors no doubt had to invest significant time, effort and energy in installing quality management processes early in the project.  The Operator had taken sufficient precautionary measures to ensure quality deliverables from its suppliers, so tracking quality became a natural part of the programme management process.

As we were on the receiving end of this quality expectation and had to ensure our deliveries met it, our project team was partitioned such that Quality & Defects were managed by extending the project team. We employed a quality evangelist, a seasoned technical expert who focused on the software quality deliverables. A project manager was brought in at key points in the project to focus specifically on managing the defect backlog.  A project administrator supported the project team in the areas of data mining and management, managing the internal defect processes (ensuring the defect tracking tool was used correctly, information was correctly applied, etc.), and generating the defect metrics reports that were presented weekly to the customer (the Operator).  The defect metrics were one way of measuring the completeness and maturity of the project, beyond being software-development complete.  We had to report on every single deviation from the agreed criteria, trying to satisfy the customer's (Operator's) every demand.

To give you an idea of the amount of tracking, administration and management involved in just ensuring we met the quality criteria, you can view a sample template of the tracking spreadsheet here - see picture below:
Sample Tracking Tool (A lot of Project Management Effort)
It's been genericized for privacy reasons, but it gives you an overview of the complexity of just this one Middleware sub-system alone.  We had to do this for key milestones; typical milestones for a STB project are Zapper complete, Basic PVR, Advanced PVR, Progressive Download and Interactive Applications (not sequential activities).  Just so you appreciate the context: this Middleware consisted of 80-100 components during the life-cycle of the project, spread across 3 continents and 4 countries, with a development team of 200 people. At the project's peak, there were around 350 people directly involved in the STB development activity, supported by a project team of 20-30 people, also geographically dispersed.  My part as development owner was overall management of the development backlog, planning, issues and delivery, working through regional project interfaces...

In the remaining sections of this post, I'll attempt to describe what I've come to learn & consider as good practices for Quality/Defect Management to ensure a successful outcome for most DTV STB projects (based solely on past project experiences).

In a typical STB project, there is usually more than one vendor involved. It is in vendors' best interests to support multiple customer projects at the same time, generally building upon a common software platform; they can be viewed as traditional software vendors. The following players are involved: Chipset Manufacturer, STB Manufacturer, Middleware Vendor, Virtual Machine Provider, Application Vendor and Systems Integrator (SI).  The SI of course doesn't own any software components, but is primarily responsible for pulling the components together to provide a coherent STB build.  Each vendor will have its own internal processes to manage quality. They will also have their own internal defect tracking tool, with its own well-understood terminology.  Some of these vendors operate quite independently, but when the STB stack is pulled together, resolving issues often involves collaboration between multiple parties (facilitated by the SI).

Because there is no internationally accepted standard for defining defects, and taking the disparity between vendors into account, it becomes necessary for the DTV project team to ensure consistency in terminology across all parties. For example: a Showstopper means exactly what the project defines it to mean, and nothing else; a Priority P0 is assigned by one authority only, and no one else; and so on.

Imposing common terminology upfront sets the stage for the programme, ensures consistency, avoids unnecessary noise (which wastes time) and promotes a common understanding and set of values across the board.  With clearly defined criteria, teams can use the template as a handy look-up; it also empowers people to challenge certain defects and enables everyone on the project to raise and classify defects with reasonable consistency.

The most important feature however is consistent representation. All vendors must understand the project's objectives and make the necessary adjustments to their internal defect tools.  When reporting defect metrics for the entire project, there is thus some confidence in the integrity of the data.

There is often confusion around Defect Severity versus Priority.  Priority calls are often subjective and usually at the discretion of the Product Owner or Delivery Manager accountable for delivery.  Severity, on the other hand, can be defined more precisely and can take advantage of a useful look-up table.  This helps developers, testers, integrators and managers alike in setting severity consistently.

In most of the past projects I've worked on, there are usually the following defect Severity definitions: Showstopper, Major, Minor. In terms of Priority, some projects use separate Priority definitions (P0, P1, P2), whilst others imply priority by overloading the defect severity, i.e. a Showstopper is by definition assumed to be priority P0.  As the project matures, the Product Owner/Delivery Manager gets more involved in defining priorities, which may contradict the look-up definitions.  At launch stage, this is solely at the discretion of the senior stakeholder accountable for Launch.

Nevertheless, I strongly support the argument for consistency and common language. I have over time collected some useful tools for managing DTV projects, one tool is a useful Severity & Priority Matrix template:

Defect Severity Template
Defect Priorities Template
Download the Excel Template
Download the PDF Template

Recall a previous post that discussed auditing your project's processes. In a recent audit carried out on a project I wasn't primarily involved in, the recommendations around defect management from the auditor, S3, echoed much of the above templates, which I'd already instituted in my own project prior to the audit exercise.

There was an additional recommendation to consider Service Level Agreements for defect-resolution response times, to be agreed with your various suppliers (a small sketch of computing these deadlines follows the list):

  • P0: Acknowledgement within 1 hour, Initial Response within 24 hours, Fix/Workaround within 48 hours
  • P1:  Acknowledgement within 1 hour, Initial Response within 2 Working Days, Fix/Workaround in next release or within 1 month whichever is earlier
  • P2:  Acknowledgement within 1 hour, Initial Response within 5 Working Days, Fix/Workaround before final release according to agreed schedule
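If you do agree SLAs like these, it is worth making the clock explicit so that breaches are flagged automatically rather than argued about. Here is a minimal sketch, assuming a Monday-to-Friday working week and using the response targets above; the field names and the simplified working-day handling are illustrative assumptions.

```python
# Sketch of computing SLA deadlines from the agreed response targets above.
# Working-day handling is simplified (Mon-Fri, no public holidays) - an assumption.

from datetime import datetime, timedelta

SLA = {
    # priority: (acknowledge, initial response, fix/workaround)
    "P0": (timedelta(hours=1), timedelta(hours=24), timedelta(hours=48)),
    "P1": (timedelta(hours=1), timedelta(days=2),   timedelta(days=30)),   # or next release, whichever is earlier
    "P2": (timedelta(hours=1), timedelta(days=5),   None),                 # fix before final release per schedule
}

def add_working_days(start: datetime, days: int) -> datetime:
    """Advance a timestamp by whole working days (Mon-Fri)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:      # 0-4 = Mon-Fri
            remaining -= 1
    return current

def deadlines(priority: str, raised_at: datetime) -> dict:
    ack, initial, fix = SLA[priority]
    result = {"acknowledge_by": raised_at + ack}
    # P1/P2 initial-response targets above are expressed in working days
    if priority in ("P1", "P2"):
        result["respond_by"] = add_working_days(raised_at, initial.days)
    else:
        result["respond_by"] = raised_at + initial
    result["fix_by"] = raised_at + fix if fix else None
    return result

if __name__ == "__main__":
    raised = datetime(2012, 3, 23, 16, 30)   # a Friday afternoon
    for prio in ("P0", "P1", "P2"):
        print(prio, deadlines(prio, raised))
```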

You can also reduce the defect administration load on architects (as the programme depends on architects assessing the severity of defects) by providing guidelines for prioritisation; for example, the following ordering is generally used for STB projects:

  • STB SI Stability Showstoppers
  • STB SI Smoke/Soak Test Regressions
  • Customer Sanity Regressions
  • Customer Acceptance Tests Showstoppers
  • Existing Showstoppers (oldest first)
  • Component Stability Showstoppers
  • Showstopper Feature defects
  • Major Defects
  • Features broken by Major defects

I've used variations of the above in past projects, and I am using it in my present project. It works - but it requires some administration and consistent diligence!
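One way to reduce the daily argument over "what next" is to encode the ordering above as a simple ranking function over the defect backlog. The sketch below is an illustration of that idea only; the category labels and defect fields are assumptions, not a real tracker schema.

```python
# Sketch: rank a defect backlog according to the priority guidelines above.
# Category names and defect fields are illustrative assumptions.

from datetime import date

# Lower index = higher priority, following the guideline ordering in the post.
PRIORITY_ORDER = [
    "stb_si_stability_showstopper",
    "stb_si_smoke_soak_regression",
    "customer_sanity_regression",
    "customer_acceptance_showstopper",
    "existing_showstopper",
    "component_stability_showstopper",
    "showstopper_feature_defect",
    "major_defect",
    "feature_broken_by_major",
]

def rank_key(defect: dict) -> tuple:
    """Sort key: guideline category first, then oldest-first within a category."""
    try:
        category_rank = PRIORITY_ORDER.index(defect["category"])
    except ValueError:
        category_rank = len(PRIORITY_ORDER)   # unknown categories go last
    return (category_rank, defect["raised_on"])

backlog = [
    {"id": "CQ-1042", "category": "major_defect",                 "raised_on": date(2012, 2, 1)},
    {"id": "CQ-0988", "category": "existing_showstopper",         "raised_on": date(2012, 1, 12)},
    {"id": "CQ-1101", "category": "stb_si_stability_showstopper", "raised_on": date(2012, 3, 5)},
]

for defect in sorted(backlog, key=rank_key):
    print(defect["id"], defect["category"])
```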

Earlier I recounted some of the quality criteria imposed in a past project. Some might say that was quite a heavy-handed project; given the context and expectations, I would say the criteria were not that unreasonable - if anything, they were missing a few quality targets (such as Code Complexity/Cyclomatic Complexity, Function Points, Defects per Lines of Code, Defects per Function, Defects per Region).

Not all projects need to define quality criteria down to that level of detail; it essentially depends on the nature of the project. Other factors include the maturity of your component suppliers. If your suppliers have earned a reputation for quality and have audit results to prove it - perhaps they're a CMM level 3/4/5 company, or ISO 9001 certified - then that naturally instils some confidence in assuming quality deliverables. Another factor is maturity in the marketplace: if your project includes mature, proven components with a well-known track record, then it's relatively unlikely that things will go pear-shaped.  However, if you're dealing with the somewhat unknown and unclear, where the level of uncertainty and risk is quite high, then you cannot ignore quality criteria.  You should have some way of defining your quality objectives for the programme.  At the very least, you should set quality expectations building upon best practices, and should go to the extent of auditing your suppliers' processes to increase the chances of your project's success.

Settling on a common terminology for defects, as described above, is the first step towards specifying clear, measurable, specific and timely quality criteria.  You could go down the software quality metrics route discussed above if you are serious about your software.

Realistically, however, if you're an Operator, your end goal is defining the success criteria for the project delivery. Generally, no matter what development methodologies your component vendors adopt, or what integration strategy you define in terms of continuous integration & continuous delivery -- it is natural to approach STB projects using clearly defined stages, or in project management terms, Stage Gates.  At a high level, the stages represent the feature completeness of the product:
Zapper > Basic PVR > Advanced PVR > Push VOD > IP VOD > PDL > Interactive Games > Launch.

In order to reach the Launch milestone, there are a few testing/stabilisation gates that need to be passed.  This is where the common Defect Severity & Priority definitions and Delivery Milestones go hand-in-hand; most of the concepts below were used in the project I mentioned previously (a minimal sketch of mechanically checking such gate criteria follows the list):

  • Milestone: Start Closed User Group (CUG) Field Trials
    • Criteria:  All Components Must be Functionally Complete:
      • Verified acceptable by component tests, system testing, and pre-field trial testing
      • Zero Showstopper defects from frequent use (e.g. normal end user, installer or customer-care user day-to-day usage) of product functionality in Functionality, Performance or Stability.
      • Product to be stable in normal end user, installer, customer-care user usage, such that it can continuously run without rebooting for >72 hours at a time.
      • Less than 5 Major defects per functional area in Functionality, Performance and Stability
      • Any Showstopper graphical bugs by exception.
  • Milestone: Start Wider Field Trials
    • Criteria: CUG Ends,  Release Candidate build is available
      • All product functionality available with Zero Showstopper defects in any area.
      • Product to be stable in normal usage, such that it can continuously run without rebooting for > 1 week at a time.
      • Less than 20 Major defects across Functional, Stability & Performance in all Functional areas - across the entire product.
      • All such defects to have clearly identified and agreed action plans associated with them and a forecast resolution of < 3 weeks.
      • Zero Showstopper defects.
      • Major graphical defects by exception.
  • Milestone: Launch - Go Live!
    • Criteria: Field Trial Exits, Launch Build available
      • Zero Showstoppers or Critical issues whatsoever - any defect affecting revenue is deemed critical.
      • A level of minor defects that have no negative impact on customer satisfaction, which does not represent a regression of what is in the field already.
      •  Stability issues must be sufficiently rare as not to be expected to be encountered under conditions found in normal end user, installer or customer-care user usage.
      • Product should not crash or need to be rebooted.
      • Graphical and Performance issues should be of a nature such that they are not noticeable to most users using the box in normal end user conditions, installer or customer-care usage patterns.
      • Both of the above to be established through field trial surveys, with 95% of respondents confirming the software to be equal to or better than the current in-field software in terms of stability and performance.
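To make entry/exit decisions at each gate less subjective, the countable parts of these criteria can be checked mechanically (the soak-time and survey criteria obviously still need trial data). A minimal sketch follows; the thresholds mirror the CUG criteria above, while the defect records and field names are illustrative assumptions.

```python
# Sketch: mechanical part of the CUG milestone gate check described above.
# Defect records and functional-area names are illustrative assumptions.

from collections import Counter

def cug_gate(defects: list[dict], soak_hours: float) -> list[str]:
    """Check the 'Start Closed User Group Field Trials' criteria that can be counted."""
    issues = []
    showstoppers = [d for d in defects if d["severity"] == "Showstopper" and d["state"] == "Open"]
    if showstoppers:
        issues.append(f"{len(showstoppers)} open Showstoppers (target: 0)")
    majors_per_area = Counter(
        d["functional_area"] for d in defects
        if d["severity"] == "Major" and d["state"] == "Open"
    )
    for area, count in majors_per_area.items():
        if count >= 5:
            issues.append(f"{count} open Majors in '{area}' (target: <5 per functional area)")
    if soak_hours < 72:
        issues.append(f"longest soak without reboot {soak_hours}h (target: >72h)")
    return issues

defects = [
    {"id": "CQ-2001", "severity": "Major",       "state": "Open", "functional_area": "Live TV Viewing"},
    {"id": "CQ-2002", "severity": "Showstopper", "state": "Open", "functional_area": "Playback Trick Modes"},
]
for issue in cug_gate(defects, soak_hours=60):
    print("[CUG GATE]", issue)
```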
In addition to the above set of criteria, it is also quite useful to break the STB product down into key functional areas to track the impact of defects. By breaking the product into functional areas and tracking the spread of defects across them, the programme can gauge the overall health of the product.  The product owner is best placed to prioritise the areas, but STB products are quite constrained and already have a well-known feature set.  For instance, the following breakdown can be used:
  1. User Experience / Usability
  2. Finding Content via Service Information
  3. Finding Content via On-Demand
  4. Live TV Viewing
  5. Booking Recordings and Reminders
  6. Broadcast Recording - Schedule and Current
  7. PushVOD
  8. On-Demand Downloading
  9. Purchasing Content
  10. Parental Control & PIN Management
  11. Using the Planner / Playlist Management
  12. Live TV Trick Modes
  13. Playback Trick Modes
  14. Managing Resources (Tuner Conflicts)
  15. Interactive Applications
  16. Usage Monitoring
  17. System Operations
  18. Diagnostics / Health Check
  19. CA Callback
  20. Internet Connectivity / Browsing
For each of the above functional areas, you can define more targeted quality criteria.  This helps the project team direct focus and attention intelligently, working towards clear, specific, measurable and timely objectives.  The functional areas gain increasing focus as you approach launch, and it is generally within this phase of the project that we start making launch-critical decisions, concessions and waivers.  So understanding the critical criteria for business decisions is helpful.


DTV projects are becoming increasingly complicated. Traditional Operators are taking risks and doing more enhancements than perhaps five years ago, when the broadcast system was fairly stable and most major enhancements were more STB-centric than headend.  Features like PushVOD, IP-VOD, Progressive Download, Recommendations and Audience Measurement for Dynamic Advertising are big-impact features that touch the entire value chain: STB, Headend, Backend and Infrastructure components are all impacted.  Overall, your programme needs to track progress for the entire value chain, and quality needs to be managed effectively.

From an Operator's perspective, this largely becomes the responsibility of the Systems Integrator (SI) assigned to your programme.  From an individual product vendor's perspective, you need to align your existing internal processes to match the requirements and expectations of the programme you're delivering to.

Have a clear Defect Management Policy that not only defines Defect Severities & Priorities as described above, but also describes the processes for raising defects, assigning them to component owners, the required information, and so on.  This document must be published and distributed to all vendors in the programme.

It is quite obvious from an SI perspective why you need to define a clear Defect Process and Workflows, but if you're a vendor delivering Middleware or a UI, with your own established tools and processes, you might question the need for creating a separate process for a project in the first place.  Generally the reasons are the following:

  • In some customer projects, general product processes will not apply; there will be a mismatch between internal definitions and customer expectations
  • Defects will be raised internally by your own development/test teams, as well as by the customer and the system integrator. Your team needs to understand where different defects come from, and their priorities
  • Customers will generally use a different defect tracking system to manage all their supplier deliverables.  For example, S3's Engage Portal is increasingly being used as the central hub for release and defect management, whilst vendors may choose to use Rational ClearQuest, Bugzilla, Spira, Jira or TFS.  Your team needs to understand the mechanics around this: how do the customer and vendor ensure the trackers are synchronised? This generally requires a custom process or tool.
  • In terms of integration, release and branch management - as a vendor you will be supporting different branches of your product.  Your product team needs to understand the policy for fixing defects on your main product trunk versus customer branches - a customer might be given higher priority than your product development, and this needs to be made clear to your development/test team. Do all your component teams understand how to manage defect fixes across multiple branches?
  • The quality of a release is measured according to the number of open defects - defect metrics are important to the customer. Your processes will have to change to support the customer's metrics needs, and your teams need to learn about maintaining accuracy of data; hence a clear defect process is required
If you have never created a Defect Process/Policy document before, here are a handful of questions to guide you along:
  • Why do you need a separate Project Defect Process anyway?
  • Can you take me through a simple workflow for a defect in my project?
  • Who can raise defects in my project?
  • What are the project's definitions for severity?
  • How does the project define defect priorities?
  • Are there any tools being used outside of our internal defect tracking tool?
  • What should you do if your defect is lacking in detail and you need to find more info?
  • Who is allowed to postpone a defect?
  • Who is allowed to reject a defect?
  • Who is allowed to close a defect?
  • Who is allowed to duplicate a defect?
  • Who is allowed to clone a defect?
  • Who is allowed to move a defect to another component?
  • Who is allowed to open enhancement requests?
  • Who should test and verify a defect is fixed - i.e. where is the ownership?
  • How do we handle defects from an external tracking system not our own?
  • How do we handle multiple projects for the same customer?
  • What do you do when you find a defect that is common to multiple customers?
  • What do you do when you find a defect on one customer branch that is relevant for another customer branch?
  • What is the priority for fixing defects on customer branches?
  • What do you do if you moved forward in your sprint but the customer requires an urgent fix?
  • How to use your defect tracking tool - mandatory fields, information required, requirements for log files, etc.?

If everyone in your project can answer all of the above without needing it documented, then great - but I can guarantee there's bound to be confusion and ambiguity in this area, which is why the programme management team should dedicate time and energy to bedding these processes down.  It will help your operation in the long run!

Ensure consistency of data
Earlier I mentioned the disparity of bug tracking systems that may be in use by different vendors (if you're the customer) or by your customer (if you're a vendor delivering to a customer). Additionally, the System Integrator assigned might opt to use a completely different system as well.

Just take a look at what the market has to offer: comparison of different defect tracking tools. There is no perfect tool that will meet all your needs; typically these tools provide enough of a foundation for best practices, but things get complicated as the project's reporting needs mature, eventually resorting to custom scripts and tools that sit on top of the defect tool to provide the information in the format and view you need.  I've got a deep background in IBM Rational ClearQuest, having used it for almost my whole career to date, which makes me pretty biased towards the Rational development/quality philosophy.  Yes, that tool can be pretty heavy-handed, requires a lot of administration, and depending on the size of your organisation and the maturity of your products, the centralised versus distributed deployment models can be somewhat of a pain.  Nevertheless, with an accessible API, one can create reports and dashboards relatively easily, and automation tools integrate well too.  Coupled with Rational ClearCase, you then have quite a solid toolset for your software management policy. I have used other lightweight tools like Bugzilla, Redmine & Roundup -- but I found those to be featherweight contenders, not fit for the heavyweight requirements of a serious DTV Programme.

In my current project we have a System Integrator using Spira, one Application development team using Spira as well, another team using Pivotal, a team using Jira, one vendor using TFS, and another vendor using IMS.

So the challenge then is: how do we maintain consistency of data? We cannot force our vendors to change tools, but we do have to ensure that the mapping of defects reflects the common terminology noted previously. That's the first point: ensure all tools comply with the project's definitions for the different types of defects.
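In practice, "ensure all tools comply" usually means an explicit mapping table per vendor tool, agreed once and then enforced whenever data is aggregated. A minimal sketch of that idea follows; the vendor severity label sets are made up for illustration and are not the real schemas of those tools.

```python
# Sketch: normalise vendor-specific severity labels to the project's common terminology.
# The vendor label sets below are made up for illustration.

PROJECT_SEVERITIES = ("Showstopper", "Major", "Minor")

# Per-tool mapping, agreed with each vendor up front.
SEVERITY_MAP = {
    "vendor_a_tool": {"Blocker": "Showstopper", "Critical": "Showstopper",
                      "Major": "Major", "Minor": "Minor", "Trivial": "Minor"},
    "vendor_b_tool": {"1 - Critical": "Showstopper", "2 - High": "Major",
                      "3 - Medium": "Minor", "4 - Low": "Minor"},
}

def normalise(tool: str, raw_severity: str) -> str:
    """Map a tool-specific severity onto the project's common scale, failing loudly on gaps."""
    try:
        severity = SEVERITY_MAP[tool][raw_severity]
    except KeyError:
        raise ValueError(f"No agreed mapping for {tool!r} severity {raw_severity!r}")
    assert severity in PROJECT_SEVERITIES
    return severity

print(normalise("vendor_a_tool", "Blocker"))     # -> Showstopper
print(normalise("vendor_b_tool", "2 - High"))    # -> Major
```

The point of failing loudly on unmapped values is that a silent default is exactly how inconsistent metrics creep back in.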

The next challenge is ensuring data synchronicity. As a vendor, how do you ensure that defects raised by SI find their way back to your internal defect tracking system? As a customer or SI, how do you ensure that your vendors are not hiding crucial defects in their internal tracking systems that should be exposed to the customer?

Maintaining consistent data is key to producing more accurate defect metrics. It also allows the project to have a unified view of the end-to-end system, confident that all parties are using the same terminology and that this is reflected in the data being reported.  A critical phase of reaching the Launch stage of a DTV project is a period of focused defect resolution and fixing.  Based on the rate of defects being submitted and fixed, we can better predict the closure of the project.  Ensuring consistency of data also prevents us from missing the true picture - the classic tip-of-the-iceberg scenario: the customer/SI might have only a shallow view of the true defect status, and there may be hidden defects your vendors are not publishing that could influence the outcome of the project.

S3 have developed a portal, called Engage, intended to better manage the system integration and global management of an operator's delivery. It is more than just an issue tracking system, offering a repository for documents (not a document management system) as well as a release repository (instead of having separate FTP sites floating around).

In the example past project I mentioned earlier, Engage was the primary tool used by the customer to manage all their suppliers. We started off providing just the Middleware sub-system, but later also provided the EPG, and then took over system integration as well. During my early days managing the Middleware, we realised there was a disaster brewing because of the mismatch between Engage and ClearQuest: the customer kept asking about issues we weren't really focusing on. We did have one person assigned to tracking Engage, but it was not a full-time activity; the process was manual and quite tedious and error-prone (manual copy-and-paste from one system to another). This clearly wasn't efficient. We introduced automatic importing from Engage to ClearQuest, and also exported from ClearQuest back to Engage, ensuring both systems stayed in sync. Being on the same page with the customer makes a whole lot of difference!

Take a look at a typical workflow required to maintain synchronicity - this is based on how we solved the Engage-Clearquest problem (genericized for privacy reasons) for our Middleware component (PDF, Visio):
Syncing Engage-Clearquest Workflow

Because the Middleware is a fundamental component of an STB stack, offering services to many consumers, defects arrive from various sources. The Engage portal was used to track the other component vendors too, typically the EPG developers, SI & test teams.  Every day the customer would raise issues on the Engage Middleware tracker, as well as on the other trackers (SI, xEPG, ATP). Issues arrive on the Engage tracker after an internal review process by the customer. We enforced a daily defect scrub/review process, involving our lead architect, component technical leads, the customer's test analyst lead, the customer's system integration lead, and technical project managers. Every day at 10AM, the newly submitted defects would undergo a review, or "scrub".  For each issue assigned to our Middleware tracker, the customer had to ensure sufficient detail was captured in the description to allow us to start our investigations.  We created a custom tool that ran every night like clockwork, at 19:00, importing all new Engage issues and automatically raising new ClearQuest defects (CQs). The tool enforced strict rules according to the status of the Engage item; one rule was that only Engage items assigned to us (the Vendor Owner) would generate a new vendor CQ, assigned to a generic owner called ENGAGE_MW for Middleware issues.  There were cases where old Engage items were re-opened and a new CQ would be auto-generated, even though the previously assigned CQ was still in the Open state and assigned.  Every morning, the scrub would agree to accept an issue as requiring further investigation in the Middleware and move the CQ from ENGAGE_MW to the respective Middleware component assigned by the scrub for initial investigation, adding comments like "This defect was reviewed in the daily scrub call and we think component XYZ is best placed to start the investigation" or "There is insufficient information to assign to a component; this needs to be triaged further by STB SI".

To maintain consistency of data, we had to ensure that the states of ClearQuest defects were mapped to the states in Engage.  The customer generated reports from Engage, which cross-referenced our internal CQs from ClearQuest.  We maintained ClearQuest reports which were also communicated back to the customer in the form of weekly Defect Metrics reporting.  Hence the need for specifically defining a workflow that established mappings between the different defect tracking tools to maintain consistency of data.
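The nightly import we built was a bespoke tool, but the heart of it was exactly this kind of state and ownership mapping. Below is a much-simplified sketch of the idea; the states, fields and record shapes are illustrative and are not Engage's or ClearQuest's actual data models.

```python
# Much-simplified sketch of a one-way nightly import from a customer tracker
# into an internal tracker. States, fields and record shapes are illustrative;
# this is not Engage's or ClearQuest's actual data model.

CUSTOMER_TO_INTERNAL_STATE = {
    "Assigned to Vendor":  "Submitted",
    "Under Investigation": "Opened",
    "Fix Delivered":       "Resolved",
    "Verified":            "Closed",
}

def nightly_import(customer_items: list[dict], internal_index: dict) -> list[dict]:
    """Create internal defects for newly assigned customer items; skip those already imported."""
    created = []
    for item in customer_items:
        if item["vendor_owner"] != "Middleware":
            continue                              # only items assigned to us generate a defect
        if item["id"] in internal_index:
            continue                              # already imported on a previous night
        created.append({
            "external_ref": item["id"],
            "state": CUSTOMER_TO_INTERNAL_STATE.get(item["state"], "Submitted"),
            "owner": "ENGAGE_MW",                 # generic owner until the daily scrub reassigns it
            "summary": item["summary"],
        })
    return created

customer_items = [
    {"id": "ENG-5531", "vendor_owner": "Middleware", "state": "Assigned to Vendor", "summary": "Reboot during trick mode"},
    {"id": "ENG-5532", "vendor_owner": "EPG",        "state": "Assigned to Vendor", "summary": "Focus lost in planner"},
]
for defect in nightly_import(customer_items, internal_index={}):
    print("creating internal defect for", defect["external_ref"], "->", defect["owner"])
```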

I am quite certain this problem will creep up on almost every major STB project, so hopefully I've given you an idea of the work involved.  We had a project administrator dedicated full time to ensuring the defect tracking systems were synchronised, that information was accurate and that information integrity was maintained.  This same administrator also ensured that the Showstopper/Major/Minor scale was correctly mapped to the Critical/Major/Low translations of the different tools.  More importantly, when there were discrepancies in the metrics reporting, this project administrator was the first port of call for clarification.

Defect metrics is not just yet-another-administrative-task-by-project-management-to-keep-themselves-busy-and-create-annoyance for development teams.  Defect metrics is not just about reporting defect counts using visually appealing graphs and figures to keep senior management happy.  Defect metrics is also not an over-the-top management intervention that goes against Agile development philosophy.

Defect metrics provide valuable information on the health of the overall project. They can provide useful insight into problematic areas, highlight bottlenecks and identify areas for improvement -- all useful pieces of information for the project management team to make informed decisions and take corrective action.

Applied correctly, defect metrics can be an invaluable tool, providing input into the discussions around a project's success criteria and probability of launch.  Recall the earlier delivery milestone criteria for an STB project launch: Functionally Complete, Start of Closed-User-Group Trials, Start of Field Trials, through to Final Launch. To reach each of these milestones, your project needs to progressively improve the quality and stability of the system.  Without useful metrics to hand that can highlight the current defect trend in your system, versus a projected view (extrapolating trends from previous performance) of the likelihood of actual project completion -- you have no scientific or factual basis to justify confidence in reaching your milestone deliverables, apart from your hunch-base.

This is not just my opinion - there is a vast body of knowledge around the subject of Software Quality Measurement (some supporting evidence is presented below, followed by a real-world example):

A DTV project is a naturally software-intensive system, and as such, a key measurement of software systems is the latent state of bugs or defects. Taking a few words out of Steve McConnell's Software Project Survival Guide: How to be Sure Your First Important Project isn't Your Last (Pro -- Best Practices) (Chapter 16 on Software Release):

At the most basic level, defect counts give you a quantitative handle on how much work the project team has to do before it can release the software. By comparing the number of defects resolved each week, you can determine how close the project is to completion. If the number of new defects in a particular week exceeds the number of defects resolved that week, the project still has miles to go... 


Generic Open VS Closed Defect Trend (Shows ideal - fixed overtakes open, project in good shape)


If the project’s quality level is under control and the project is making progress toward completion, the number of open defects should generally trend downward after the middle of the project, and then remain low. The point at which the “fixed” defects line crosses the “open” defects line is psychologically significant because it indicates that defects are being corrected faster than they are being found. If the project’s quality level is out of control and the project is thrashing (not making any real progress toward completion), you might see a steadily increasing number of open defects. This suggests that steps need to be taken to improve the quality of the existing designs and code before adding more new functionality…


According to the Software Engineering Body of Knowledge (SWEBOK, Chapter 11 on Quality), this is what it has to say about metrics and measurement:

The models of software product quality often include measures to determine the degree of each quality characteristic attained by the product.  If they are selected properly, measures can support software quality (among other aspects of the software life cycle processes) in multiple ways. They can help in the management decision-making process. They can find problematic areas and bottlenecks in the software process; and they can help the software engineers assess the quality of their work for SQA purposes and for longer-term process quality improvement... Finally, the SQM reports themselves provide valuable information not only on these processes, but also on how all the software life cycle processes can be improved...While the measures for quality characteristics and product features may be useful in themselves (for example, the number of defective requirements or the proportion of defective requirements), mathematical and graphical techniques can be applied to aid in the interpretation of the measures. These fit into the following categories: 
Statistically based (for example, Pareto analysis, runcharts, scatter plots, normal distribution); Statistical tests (for example, the binomial test, chi-squared test); Trend analysis & Prediction (for example, reliability models).
The statistically based techniques and tests often provide a snapshot of the more troublesome areas of the software product under examination. The resulting charts and graphs are visualization aids which the decision-makers can use to focus resources where they appear most needed. Results from trend analysis may indicate that a schedule has not been respected, such as in testing, or that certain classes of faults will become more intense unless some corrective action is taken in development. The predictive techniques assist in planning test time and in predicting failure....They also aid in understanding the trends and how well detection techniques are working, and how well the development and maintenance processes are progressing. Measurement of test coverage helps to estimate how much test effort remains to be done, and to predict possible remaining defects. From these measurement methods, defect profiles can be developed for a specific application domain. Then, for the next software system within that organisation, the profiles can be used to guide the SQM processes, that is, to expend the effort where the problems are most likely to occur. Similarly, benchmarks, or defect counts typical of that domain, may serve as one aid in determining when the product is ready for delivery.


And last, but not least, there is actually a website dedicated to the Defect Management process: www.defectmanagement.com  which basically says much of the same as above in more palatable English:
Information collected during the defect management process has a number of purposes:
To report on the status of individual defects.
To provide tactical information and metrics to help project management make more informed decisions -- e.g., redesign of error prone modules, the need for more testing, etc.
To provide strategic information and metrics to senior management -- defect trends, problem systems, etc.
To provide insight into areas where the process could be improved to either prevent defects or minimize their impact.
To provide insight into the likelihood that target dates and cost estimates will be achieved.


Management reporting is a necessary and critically important aspect of the defect management process, but it is also important to avoid overkill and ensure that the reports that are produced have a purpose and advance the defect management process.

Real-world examples of Metrics Reporting
Drawing again from the example project highlighted at the beginning, there were clear milestone deliveries communicated at the project's initiation.  We had to start measuring our defect status early in the project, despite not having reached functionally complete status.  Some people are of the opinion that defect metrics should only start once the development phase is fairly mature and we're into the integration & test / stability phase, as doing this any earlier would cause undue noise for the development teams and unnecessary management administration.  Others take the view that defects should be monitored early, from the development phase right through to the end.  I share the latter view, as I believe that early monitoring of defects during the development phase is bound to highlight problematic areas in requirements, design, architecture and coding that, if left unresolved until much later in the project, would be costly to fix.  I also believe that continuous integration and early testing surfaces defects sooner rather than later.

Going back to the example project, we had to supply our customer with weekly defect metrics reports.  We also had to extrapolate past trends to predict the future outlook.  We were then measured and tracked on our predictions, and taken to task if we were woefully out!  I was not the one producing the reports to the customer - that was handled by the delivery team - however my Middleware component was the major supplier to the programme, so I took it upon myself to maintain my own internal view of the metrics.  This was a contentious issue at first because there were discrepancies between what was being reported to the customer and my own internal calculations.  It struck a chord with the senior account holder, who based his metrics on sound judgement and experience, whereas I wanted to be more academic -- suffice to say, I did eventually come around to the senior manager's way of thinking. I've come to respect his hunch-base, wisdom earned from a great track record of delivering successful projects in the past (he's an inspiration to many!).  I have also come to appreciate that if you've delivered projects before, that is valuable experience: based on your instinct alone, without requiring a detailed plan, you can fairly accurately predict when or how long a project will take.  I recently had to stand my ground and base my own project's predicted outcome solely on past experience, or hunch-base.  The term hunch-base is from Tom DeMarco's novel The Deadline: A Novel About Project Management, which every software manager should read, by the way...

Manager's crude way
We relied on a very basic approach (not overkill) to defect prediction. The metrics were based on defect counts alone, using a history of the previous four weeks.  The number of open/closed defects was tracked over a rolling four-week period, and based on the number of defects closed in that period, the prediction showed the number of days remaining to fix all open defects.  Using the milestone criteria, the defect metrics report tried to establish predictions around those dates.
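As a sketch, the arithmetic behind that crude approach is simply: take the closure throughput of the last four weeks and divide the open backlog by it. The numbers below are illustrative, not the project's actual figures.

```python
# Sketch of the 'crude' rolling-four-week prediction described above.
# Numbers are illustrative.

open_defects = 180                      # currently open (all severities)
closed_per_week = [35, 42, 28, 40]      # defects closed in each of the last four weeks

weekly_closure_rate = sum(closed_per_week) / len(closed_per_week)
weeks_remaining = open_defects / weekly_closure_rate

print(f"average closure rate: {weekly_closure_rate:.1f} defects/week")
print(f"predicted time to clear backlog: {weeks_remaining:.1f} weeks "
      f"(~{weeks_remaining * 7:.0f} calendar days)")
# Note: this ignores newly submitted defects, which is exactly why the model
# in the next section also tracks the submission rate.
```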


My own internal metrics predictions
- Download My Excel Tool
There are deeper analysis and statistical techniques for mining data and making defect predictions, even down to using a mathematical model.  I did not go down that route, even though I was really tempted to.  Instead, I went a step further than the crude approach and based my predictions on the trends of submitted versus closed rates for all defects, with a focus on Showstoppers & Majors. Taking into account not only the defects closed in the last week but also those submitted, if you mine enough data you can rely on a statistical average for open/closure rates to predict when the project or component is bound to start the downward trend towards zero.

I went to the extent of creating a model in Excel that basically played with two variables: Submit Rate vs Closure Rate.  To get to launch, and to reach the ideal where your closure rate surpasses your submit rate, two things can happen: your submit rate decreases over time and/or your closure rate increases over time.  The chances of your submission rate decreasing over time are much higher than your closure rate increasing over time.  Why?  Because over time we have to assume that the quality of the system is improving and that testing will find fewer and fewer defects.  It is unrealistic to assume your closure rate will continuously increase, because once you've established your team's capability for closing defects, i.e. their maximum throughput, closing a Showstopper/Major defect will take as long as it takes.  It is unlikely your team size will continuously change or that your team will keep exceeding expectations. Generally, by the time we reach the launch phase the teams will have been working at full throttle, and getting any more performance out of them is going to be a tall order.  Hence it is best to assume the closure rate is constant, and hope for a decline in submission rate over time.  The model was still crude in that it assumed a uniform reduction in submission rates -- nevertheless it could be used to output a prediction based on some assumptions.  Looking back, the model did come pretty close to reflecting the eventual end-date.  The snippet below shows the front page of the tool: it can be used to track separate components or the Middleware as a whole.  In the figure below, it shows the status & prediction for reaching the launch milestone for the entire Middleware component. It also showed the current trend at the time: the Middleware was looking in good shape, as we started to see at last that we were closing more defects than were being submitted:

Defect Prediction Tool (Download the Excel Template)
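The Excel model essentially iterated those two variables week by week. A minimal re-creation of the idea in code follows, assuming, as the model did, a constant closure rate and a uniformly decaying submission rate; all starting numbers are illustrative.

```python
# Sketch re-creating the two-variable Excel model: constant closure rate,
# uniformly decaying submission rate. All starting numbers are illustrative.

def weeks_to_zero_open(open_now, submit_rate, closure_rate,
                       submit_decay_per_week, max_weeks=104):
    """Simulate week by week until the open count reaches zero, or give up."""
    open_count = float(open_now)
    for week in range(1, max_weeks + 1):
        open_count += submit_rate - closure_rate
        if open_count <= 0:
            return week
        # Assumption of the model: quality improves, so fewer new defects each week.
        submit_rate = max(0.0, submit_rate - submit_decay_per_week)
    return None     # never converges within the horizon: closure rate too low

prediction = weeks_to_zero_open(open_now=220,
                                submit_rate=45,
                                closure_rate=40,
                                submit_decay_per_week=2.5)
print(f"predicted weeks to reach zero open defects: {prediction}")
```

Playing with the decay and closure parameters is exactly the "what if" exercise the spreadsheet supported: it shows how sensitive the end-date is to the submission rate actually declining.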

Some example metrics I like to report in my projects (a small sketch of computing a few of them follows the list):


Defects Dashboard Summary:
  • Total Open
    • Submitted
    • Under Investigation
    • Implemented
    • Ready for Test
    • Verified
  • Total Closed
    • Rejected
    • Duplicated
 Age of open defects:
  • By age, calculate the duration the defect has been open (from submit date to present date), along with the owner, and the state of the defect (submitted, assigned, etc.)
  • Also include views on severity & priorities  
Trends:
  • Week by Week view (ability to generate a table if needed) preferably a graphical view of the following:
    • Total Open trend (delta between current week & previous)
    • Total Closed (deltas)
    • Submission rate VERSUS Closure Rate
      • This graph is helpful as it’ll show you whether the project is thrashing or not. When you're ready for launch, the trend should clearly show the closure rate consistently higher than the submit rate
      • The data must be readily available, because it’ll help you predict the defect rate leading up to launch. We should be able to predict, based on past history of defects submit/close rate, the likelihood of reaching a stable build
 View of defects by Feature:
  • A Pie-chart giving us the breakdown of defects by feature / user story. This will show us how complete we are functionally
  • This depends on whether there are appropriate fields being used in your defect tool to track these
  • Feature priority must also be taken into account – probably a separate pie chart. This will show you which areas to concentrate on
View of defects by submitter:  Need a way of checking where the defects are coming from:
  • UI developers
  • UI Testers
  • SI Integrators
  • SI-QA Testers
  • Customer ATP
  • Field Trials
Time it takes to close defects:
  • You need to answer: How long does it take to close a Showstopper/Major P0/P1/P2 on average?
For the SI projects:
  • View defects by Component: Drivers, Middleware, UI
  • For each component, apply the same metrics as above (Age/Trend of Submit versus Close, etc)
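A few of the metrics above reduce to straightforward queries over the defect records once the data is consistent. A minimal sketch follows; the record fields and example defects are illustrative assumptions about a tracker export.

```python
# Sketch: computing a few of the dashboard metrics listed above from defect records.
# Record fields and example data are illustrative assumptions about the tracker export.

from collections import Counter
from datetime import date
from statistics import mean

defects = [
    {"id": "CQ-1", "state": "Open",   "severity": "Showstopper", "submitter": "SI-QA",
     "submitted": date(2012, 2, 20), "closed": None},
    {"id": "CQ-2", "state": "Closed", "severity": "Major",       "submitter": "Field Trials",
     "submitted": date(2012, 2, 1),  "closed": date(2012, 2, 15)},
    {"id": "CQ-3", "state": "Open",   "severity": "Major",       "submitter": "UI Testers",
     "submitted": date(2012, 3, 10), "closed": None},
]

today = date(2012, 3, 28)

open_defects = [d for d in defects if d["state"] == "Open"]
closed_defects = [d for d in defects if d["state"] == "Closed"]

# Age of open defects (submit date to present date)
ages = {d["id"]: (today - d["submitted"]).days for d in open_defects}

# Breakdown by submitter - where are the defects coming from?
by_submitter = Counter(d["submitter"] for d in defects)

# Average time to close, for the defects that have been closed
avg_days_to_close = mean((d["closed"] - d["submitted"]).days for d in closed_defects)

print("total open:", len(open_defects), "| total closed:", len(closed_defects))
print("age of open defects (days):", ages)
print("defects by submitter:", dict(by_submitter))
print(f"average time to close: {avg_days_to_close:.1f} days")
```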
Measuring the development schedule also contributes to overall Defect Metrics reporting
There is usually a development phase for one major component of the STB project - it could be a Middleware component or the EPG Application / UI.  The completion of the development phase will obviously impact the project's launch criteria, so it is a natural requirement to track development as part of the overall metrics reporting.  These metrics focus not only on the defects resulting from development, but also on the productivity of the team itself.

In the example project, we had to track and report our Middleware's current status against the planned schedule, and include predictions of time to completion, including a prediction of the defects likely to remain at the end of the development phase, followed by a trajectory or glide-path for the time to complete those defects.

The following topics can be considered useful reflection points for metrics, regardless of the software component or development methodology (the list assumes Agile; a crude burndown sketch follows it):

  • What is the feature breakdown?
    • How many features development complete? 
      • How many user stories implemented with no outstanding bugs?
      • How many user stories implemented with outstanding bugs?
      • How many user stories to be planned?
      • How many user stories blocked?
      • What is the prediction for completing a feature?
    • Predictions for completing remaining features?
  • Measuring Sprint Progress in more detail:
    • Planned versus Actuals being reported for each user story
      • How many stories were completed on time – developed and tested with no bugs?
      • How many stories missed the planned completion date but still delivered within the sprint?
      • How many stories missed the sprint completely, despite not being blocked by any dependencies?
      • How many stories were blocked by unforeseen issues?
        • What issues were these?
        • How can we take corrective action to prevent blocks from happening again?
  • Measuring Risk-Mitigation stories:
    • How many risk mitigation stories are on the backlog?
    • When is it predicted to close down these risk mitigation stories?
    • What are the consequent stories dependent on the outcome of risk mitigation stories?
  • Adding Other stories to Sprint Backlog:
    • STB SI will be influencing the objectives of the sprint – SI will provide a list of defects that must be fixed for the next release. This needs to show up as a story
    • Defect fixes must also be allocated a story – accounts for time
  • Overall burndown of Component completion estimations
    • Given the following things can happen during a sprint:
      • Developing new features
      • Risk mitigating features for future sprints
      • Fixing of defects raised by dev testers / developers
      • Resolving SI objectives
      • Supporting release process
      • Pre-planning
    • Can we quantify more clearly – even though it's an estimate – the likely burndown trend if everything above is accommodated, and where the end date comes out?
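That last bullet is essentially the same arithmetic as the defect prediction earlier, applied to story points. A minimal sketch, with velocity and backlog figures as illustrative assumptions:

```python
# Sketch: crude burndown projection for the remaining component backlog.
# Velocity and backlog figures are illustrative assumptions.

import math

remaining_story_points = 340          # new features + risk-mitigation + SI objectives + defect-fix stories
recent_velocities = [42, 38, 45, 40]  # story points completed in the last four sprints
sprint_length_weeks = 2

average_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_left = math.ceil(remaining_story_points / average_velocity)

print(f"average velocity: {average_velocity:.1f} points/sprint")
print(f"projected sprints to complete: {sprints_left} "
      f"(~{sprints_left * sprint_length_weeks} weeks)")
```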
Other Useful Example Metrics
The picture below illustrates a snapshot in time of how we used metrics reporting over the life-cycle of the example project. This should give you some idea of what it involves:
Tracking Metrics through different phases of a project (12 months to Launch)
The first row in the above dashboard tracked the Middleware test coverage in terms of total test cases defined, written, run, passing & failing. The point of those curves is to illustrate the gaps between the various areas (a small sketch of the arithmetic follows the list):

  • a gap between defined and written test cases highlights the backlog of work remaining to implement the test cases as planned [ideal: defined equals written]
  • a gap between written and run indicates a bottleneck in executing test cases (the test system was fully automated and required to complete within 48 hours) [ideal: run equals written - all implemented tests must be run]
  • a gap between run and passing highlights the quality of the test cases or the quality of the Middleware [ideal: all tests run must pass]
  • the number of failing tests indicates how far the component is from meeting the quality criteria [ideal: zero failures]
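The gaps themselves are trivial to compute once the counts are collected per build; the value is in trending them week on week. A minimal sketch with illustrative counts:

```python
# Sketch: the test-coverage gaps described above, computed per build.
# Counts are illustrative.

counts = {"defined": 2000, "written": 1850, "run": 1790, "passing": 1700}
counts["failing"] = counts["run"] - counts["passing"]

gaps = {
    "backlog of tests still to implement (defined - written)": counts["defined"] - counts["written"],
    "implemented tests not being executed (written - run)":    counts["written"] - counts["run"],
    "tests run but failing (run - passing)":                   counts["run"] - counts["passing"],
}

pass_rate = 100.0 * counts["passing"] / counts["run"]
for label, gap in gaps.items():
    print(f"{label}: {gap}")
print(f"pass rate: {pass_rate:.1f}%")
```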
Over time, as the project reached its end, we had close to 2000 Middleware tests alone, with a pass rate of 95%. We extended the tests to system-level testing - basically, those Middleware tests that failed but didn't manifest at system level were given low priority.

The second & third rows focused on All Open Defects and Showstopper & Major defect trends, with the aim of highlighting the gaps between reality and the planned targets.  We also had to predict a "glide-path" to reaching the target, which is what the curves in the second column highlight. These curves were getting incredibly complicated to produce through manual administration, so we dropped those prediction curves and focused our efforts on burning down as many Showstoppers/Majors as we could.

In reality, your defect metrics reporting will undergo several iterations before the project settles on what is realistic, simple, understandable, and easy to maintain and track. That is why the senior stakeholder went down the practical route and established predictions based on the performance of the last iteration, projecting the burn-down over the remaining days to reach the target.
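As a rough illustration of that simple approach (the figures below are made up), the prediction boils down to a last-iteration closure rate projected over the remaining defects:

# Minimal sketch (hypothetical figures) of the simple prediction described above:
# take the closure rate of the last iteration and project how many working days
# it takes to burn the open Showstoppers/Majors down to the target.

open_showstoppers_majors = 180
target_at_launch = 20
closed_last_iteration = 45           # defects closed in the previous iteration
iteration_working_days = 15

closure_rate_per_day = closed_last_iteration / iteration_working_days
days_to_target = (open_showstoppers_majors - target_at_launch) / closure_rate_per_day

print(f"Closure rate: {closure_rate_per_day:.1f} defects/day")
print(f"Predicted working days to reach the launch target: {days_to_target:.0f}")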

Below is another example of how to communicate the progress of your software components in reaching the agreed targets:
Metrics reporting/predicting time to complete defects to meet Launch Criteria
In the above picture, we focused on the top components requiring attention - those currently failing the launch criteria.  Management used this information to implement recovery actions where possible (e.g. beefing up a team with developers from other component teams, or running focused one-roof integration (ORIT) sessions); but for some components, like the Streaming Manager, we just had to accept the status as-is: it was one of the most complicated components in the Middleware, responsible for all media streaming features, with a core group of technical experts that was very difficult to clone (as evidenced by the small number of open defects yet the longest predicted time to complete).

We also tracked the classic defect counts of Submitted VS Closed:

Classic Defect Submit/Open/Closed Trends


The main focus of these curves, as mentioned above when citing Steve McConnell, is to measure how the project is managing its defect closure rates: essentially, are we in control of our defects? The curves should highlight this quite nicely, the goal being to keep the closure line as close to the total submitted/open line as possible - or, at best, to close more defects than are submitted:

  • The implemented curve is the current total of all defects which have been implemented or closed.
  • The gap between Implemented and Submitted reflects the amount of development work yet to be done.
  • The gap between Implemented and Closed reflects the amount of work to be done by Integration / Test teams for verification.
The trend did highlight the classic creep in defects during the early stages of the project, as we were continuously developing and integrating/testing at the same time.  Eventually, though, the trend matched the classic outcome: a peak in submissions followed by a decline, then overtaken by the increase in closure rate.
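A minimal sketch of how these curves and gaps can be derived from weekly counts (hypothetical numbers, Python purely for illustration):

# Minimal sketch (hypothetical weekly counts): cumulative Submitted / Implemented /
# Closed curves and the two gaps discussed above.

from itertools import accumulate

weekly_submitted   = [30, 42, 55, 60, 48, 35, 22, 15]
weekly_implemented = [10, 20, 35, 50, 55, 45, 38, 30]
weekly_closed      = [5, 12, 25, 40, 50, 48, 42, 35]

cum_submitted   = list(accumulate(weekly_submitted))
cum_implemented = list(accumulate(weekly_implemented))
cum_closed      = list(accumulate(weekly_closed))

for week, (s, i, c) in enumerate(zip(cum_submitted, cum_implemented, cum_closed), start=1):
    dev_backlog = s - i          # development work yet to be done
    verify_backlog = i - c       # fixes awaiting Integration/Test verification
    print(f"Week {week}: submitted={s} implemented={i} closed={c} "
          f"dev backlog={dev_backlog} verification backlog={verify_backlog}")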

Measuring Rejected and Duplicates
A significant amount of time can be wasted in development and system integration investigating issues that turn out to be non-issues, investigating defect reports with poor quality information, or investigating problem reports that have previously been reported (duplicates).  To the project manager this lost time cannot be regained - it is simply waste - and for all intents and purposes you have to aim for maximum efficiency by eliminating it.  Having access to metrics reporting the incidence of Rejected/Duplicate reports is definitely useful in highlighting problem areas, without having to resort to detailed root cause analysis.

Take a look at the picture below: it is immediately apparent that a high proportion of defects raised by the customer test teams were rejected or duplicated. At one point in the project, this amounted to more than half the total of customer-raised defects.  This prompted us to instigate stricter controls over the quality of information being reported, which resulted in us enforcing the daily defect scrubbing process discussed in the opening sections of this post.  Over time the quality of the defect reports did improve and the number of false defects reduced, saving us valuable developer and integrator time.
Rejected/Duplicated Trends - Aim is to minimize false defects
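A minimal sketch of how the false-defect rate can be pulled out of a defect-tracker export (the field names and records below are assumptions, not the actual tool we used):

# Minimal sketch (hypothetical export): measure the proportion of customer-raised
# defects that end up Rejected or Duplicate - the "false defect" rate.

from collections import Counter

# Assumed shape of a defect-tracker export: (id, raised_by, resolution)
defects = [
    (1, "customer", "Fixed"), (2, "customer", "Rejected"), (3, "customer", "Duplicate"),
    (4, "internal", "Fixed"), (5, "customer", "Rejected"), (6, "customer", "Fixed"),
]

customer_defects = [d for d in defects if d[1] == "customer"]
resolutions = Counter(d[2] for d in customer_defects)
false_defects = resolutions["Rejected"] + resolutions["Duplicate"]
false_rate = 100.0 * false_defects / len(customer_defects)

print(f"Customer-raised defects: {len(customer_defects)}, false defects: {false_defects} ({false_rate:.0f}%)")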
Other Reference Examples

Report Metrics to the Executive Committee

Communicating metrics to the senior executive committee on your project is of crucial importance to ensure that the key decision-makers have the correct visibility and understanding of the true status on the ground.  Typically you would report the high-level metrics across the Programme, drilling into specific vendors and the overall trends. Below are a few samples of interesting metrics that can be digested by the ExCo.  Whilst I've not read the complete book myself, I have interacted with many senior managers who highly recommend The Visual Display of Quantitative Information; and I've been around many executives who insist on visual information without the detail - so much so that I myself have become quite the visualisation guy: pictures convey the message far more clearly than having someone explain and show you raw numbers!

Here are some example metrics I've used in the past that have helped in conveying the message of the programme - The intent should be self-explanatory:
Defect Metrics Report for Exec Committee
Test Coverage Metrics for Exec (How far away are we from ATP/Launch?)


Implement Root Cause Analysis (RCA)
Again, there is a vast body of knowledge on RCA: Wikipedia, SWEBOK, CMMI. This site provides useful background information on tackling root cause analysis that I won't detail here. Measuring quality and defects go hand-in-hand: if the trends reflect an unusual increase in defects, if your test results fluctuate (e.g. constant but erratic regressions), or if you detect general ambivalence from various teams, those are your typical clues to instigate further investigation - to root out the causes of the fundamental problems.

RCA is all about learning from previous problems and seeking out ways to prevent similar problems from recurring in future, or at least to spot problems sooner rather than later.  By carrying out RCA the project teams (developers, testers, integrators) have an opportunity to learn some very specific lessons relevant to the way they work, whilst the management team takes the big-picture view in identifying where the processes need to improve.

In the area of RCA for defect management, there is a large reliance on the availability of data. The data is your starting point for the first-pass analysis.  By mining data from your defect tracking tool you can look for clues - provided the defect tool is being used correctly.  Hence the further emphasis on a clear Defect Management Policy that imposes rules on how the defect tool should be used, the mandatory fields of information, guidelines on what engineers/testers should communicate, and so on.  Many technical people (developers/integrators/testers) make the mistake of treating defect tracking systems as an annoyance - an administrative burden - and seek ways to reduce the time spent on them, for example by automating the import of information. Whilst there's nothing wrong with automation, RCA seeks out worthwhile information: what went wrong, how it was fixed, what the inherent cause of the defect was, and so on. Spending a little time explaining the situation in the defect record not only provides valuable information to the RCA administrator, but also saves time for people investigating similar defects in future.
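As a rough illustration of that first-pass mining (the field names below are assumptions; your defect tool will have its own schema), a simple tally of the root-cause field plus a check for missing mandatory information already tells you a lot:

# Minimal sketch (assumed field names): a first-pass RCA query over a defect
# export - tally the root-cause field and flag records missing the mandatory
# information the Defect Management Policy asks for.

from collections import Counter

defects = [
    {"id": 101, "root_cause": "Coding error", "fix_description": "Off-by-one in loop"},
    {"id": 102, "root_cause": "Requirements ambiguity", "fix_description": ""},
    {"id": 103, "root_cause": "", "fix_description": "Race condition on tuner lock"},
    {"id": 104, "root_cause": "Coding error", "fix_description": "Null check added"},
]

cause_tally = Counter(d["root_cause"] or "Not captured" for d in defects)
incomplete = [d["id"] for d in defects if not d["root_cause"] or not d["fix_description"]]

print("Root-cause breakdown:", dict(cause_tally))
print("Defects with missing mandatory RCA fields:", incomplete)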

Assuming you have a mature Defect Management System in place, the next step is to ensure your project or organisation is receptive to doing RCA. It is a fairly intensive process, requiring access to your project's database and to engineers/testers, as much of the interaction is personal and through meetings (there is only so much one can infer from data metrics; to get to the bottom of a problem, a face-to-face, heart-to-heart conversation is required).  You also need to be ready to implement corrective actions that, if implemented correctly, will prevent certain defects from recurring.

I've not actively managed an RCA process myself, although I have instigated the activity in the past and contributed information to facilitate the process. My part was more that of an observer; as mentioned previously, we had a specialist quality expert who managed RCA for the example project. I'll share a few snippets of information here, since I'm sure all DTV projects experience much the same.

The picture below is just one example of RCA metrics worth reporting:
RCA Trends: (Top) Quality of Defect Info - (Bottom) Root Causes Assignments
The above example illustrates the tracking of RCA over a period of time (11 SI releases, each on a 3-week cycle, so 33 weeks of data). Based on the analysis and feedback, we unearthed the following problems and resolutions:

  • Quality of Defect Information captured in problem reports needed much improvement
    • Instigated stricter defect reviews: the daily scrub process rejected poor quality information, assigning defects back to the submitter with a request for the missing information.
  • Component Design Flaws
    • Project management team enforced stricter controls over the "Definition-of-Done": Design reviews and sign-off required
  • Coding Mistakes / Errors
    • Project management team drove through code review training exercises
    • Code Review training became mandatory learning
    • Introduced further code quality checking into the continuous integration system (measuring complexity) - a rough sketch of such a gate follows this list
    • Introduced Static Analysis Coding Tools to identify and trap hard-to-find defects; consequently the static analysis tool in itself was used to train engineers
    • Ensure test harnesses and tools are free of compiler and MISRA warnings themselves
    • Mandatory memory leak testing & tools introduced as part of release process
    • Better use of Code Review techniques
      • Pair programming
      • Face-to-Face review
      • Use online code review tool to facilitate collaboration
      • Open the code review to teams outside project (worked well to a point)
    • Too much pressure from project management
      • Defects take time to investigate and resolve, something not always appreciated by the management team.
      • PMs acted as deflectors for the customer, buffering as much as possible and leaving the engineers alone to focus on fixing critical issues
  • Architecture / Requirements
    • As part of architecture design, introduced section to detail specific error handling conditions
    • Improve system-wide knowledge across the project team
      • Reduce the component-silo approach to working
    • Reduce conflicting requirements by ensuring cross project architecture reviews
    • Aim to reduce ambiguity or "open to interpretation of implementer" in interface specifications
    • Prevent requirements defects from reaching the coding/testing phases, because fixing them there is a costly affair - instilled stricter requirements reviews, often involving multi-project architects to avoid conflicting requirements
  • Other / Unreproducible
    • Ensure test bit-streams match the desired specification and don't defer to the live stream
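Regarding the complexity checking mentioned in the list above: our production code was C and checked with commercial tooling, but purely to illustrate the shape of such a CI gate, here is a rough Python sketch that counts branch points per function and fails the build above an agreed threshold:

# Minimal sketch (not the tooling used on the project): a rough cyclomatic-
# complexity gate that could sit in a continuous integration job, counting
# branch points per function with the standard-library ast module and failing
# the build above an agreed threshold.

import ast
import sys

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)
THRESHOLD = 10

def rough_complexity(func: ast.FunctionDef) -> int:
    # 1 for the function itself plus 1 per branch point - a crude proxy only.
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

def check_file(path: str) -> bool:
    tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
    ok = True
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            score = rough_complexity(node)
            if score > THRESHOLD:
                print(f"{path}:{node.lineno} {node.name} complexity {score} > {THRESHOLD}")
                ok = False
    return ok

if __name__ == "__main__":
    results = [check_file(path) for path in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)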
So if you're delivering software components into a STB/DTV programme, and your components have a significant development phase, it is worth planning in strategic points for RCA, as a way to improve the overall quality of your deliverable!

This piece would of course not be complete without a mention of the classic, oft-repeated and self-explanatory model of the cost of fixing defects through the different stages of a project:
Classic Defect Cost/Impact

Similar Cost Model from IBM


Conclusions

Defect Management is a key component of managing a DTV programme.  Measuring quality by means of defect metrics targeting clear milestone criteria is an effective way of tracking the progress of the project, and the information can be used in various ways to communicate how close you are to meeting the launch requirements.  There is a wealth of information around Software Quality Engineering; I've shared some experiences that have proven to be effective in real-world projects.  Always keep a realistic hat on, and avoid overkill and the pitfall of applying process for process' sake. Instead focus on the measurements that count, and, depending on the growth or strategy of your organisation, think about applying some of the best practices in the background, without jeopardising your current project commitments - treating it as an investment to improve your future projects....

I've presented a somewhat detailed approach to Quality Control for DTV Projects. I hope you can take away from this the following points, in no particular order:

  • Walk-the-Talk  (Implement the theory practically and see real results)
  • Have Accountability (Assign a custodian to enforce the quality of the deliverable)
  • Enforce Process at All Stages (Find the right balance of process and be diligent about it)
  • Ensure Feedback and Lessons Learnt (Metrics reporting and Root Cause Analysis)
  • Get buy-in throughout the value chain (Your Programme team must accept this approach to Quality)
  • Make sure you measure the right aspects (The value-add)
  • Don't forget the business objectives (NFRs), adjusting the process based on the level of quality required
  • If you get it wrong it's going to cost you time and money - and possibly your position in the market (Lean on people who've done this before)



Disclaimer: The views expressed in this post are based solely on my own professional experience and opinion (drawing upon both past and present projects). Whilst I've based the information on a real-world project, the data was normalised and kept generic-enough not to implicate/breach any company-sensitive policies (in the interest of professional knowledge-sharing).  

Tuesday, 6 March 2012

Tag Channels for Follow-up / Sticky EPG (2007)


This was an idea I had back in 2007 which, coincidentally, was shared by two other colleagues. We collaborated on the concepts and jointly submitted the idea through the patents process. Unfortunately, having invested a lot of time and energy into fleshing out the patent proposal and reviewing counter-claims, it was decided that there was enough prior art to nullify the patent application, i.e. the concept was considered a natural evolution of existing features.

Nevertheless, five years down the line, I have yet to see this feature implemented in any form of user experience in the real world, particularly on satellite Set-Top-Box decoders. In the IP and Internet TV space there is the concept of bookmarks, but I still haven't seen the concept implemented as I envision it.  I've worked with EPG UI application development for some years now, and from the basic to the most advanced state-of-the-art, the closest feature remotely related to this idea is that of Favourite Channels.  There may well be an IPTV app around that exposes this idea, but to be honest I've not done an in-depth search to verify...

Here's the idea in a nutshell:
Posted: 18 July 2007 Tag Channels for Follow-up
Some users start channel surfing from the first available channel till the last channel, spanning all the channels - in the hope of finding something useful to watch. For each channel that seems interesting to follow up, the user tries to remember it - to get back to it - after completing the search.

It's often the case that the user forgets which channels were interesting; and at times, by the time the search is complete and the user jumps back to what seemed interesting, the event has already finished.

Wouldn't it be nice to allow a channel to be tagged as ''interesting to follow up'', similar to Outlook's ''Mark for follow-up'' on email? When the user finishes the search, he/she could then bring up the list of channels for ''follow-up''. This would be the subset of channels he/she may be interested in (yes, you have a way of setting favourites - but this usage scenario is transient - just for one instance).

Wouldn't it be nice if all these channels were being recorded in the background, so that you don't miss out on the content you wanted to follow up?

Realistically the box is limited by the number of tuners and by the transponders carrying the channels - but don't limit this to broadcast: the feature is also possible in the IP domain, where there is no restriction on physical tuners and you can tune and record as many channels as you like.

The end result could be a mosaic generated for the tagged channels - allowing the user to switch between channels, etc. - all with confidence that your items for follow-up are being recorded and will be available for you to consume at your leisure...


Snippets of Patent Application

Background including Known Relevant References:

The invention is in the area of digital TV and the use of the Electronic Program Guide (EPG) and the problem of channel selection from a set of hundreds of available services. The area includes using the following EPG features:
1.      Selecting a television service to watch from a list that is greater than can be shown on a screen at once.
2.      Selecting a television service to watch after having browsed each channel in turn through channel surfing from the first channel, to the last available channel.

Most set top boxes provide a mechanism whereby a list of favourite services is maintained from which the user may select one that could offer entertainment. The user is left to make his selection based on historic performance, i.e. what has been shown on that service in the past – it does not show what that service is currently showing now, or indeed over the next few hours.

In addition, the use of a favourite list restricts the user to a choice of previously selected services – a subset of what is currently available. Thus, favourite lists are static, related to services, although those services themselves may not be showing something interesting that would be worth watching.

The Problem:

When settling down to watch television, or listen to the radio over digital TV, there is a lot of choice (just over a thousand channels nowadays) - certainly more services than can be examined at a time.
The typical scenario for searching for something to watch now is to use one of the following:
  1. Using the EPG feature of channel listings (for example, a Programme Guide), one starts from the beginning (say channel 101) and browses to the end (say channel 999) to find a channel that is interesting and worth following up.
  2. By channel changing in turn (from 101 to 999), spending about 1-3 minutes on the channel, to gauge the channel’s content.

As I am unlikely to see a service that I *must* watch, I will want to search through all available channels first, and then make a selection from those services that I *might* watch (or follow-up on). However, by the time I have reached the end of the search of services, I can no longer remember what the acceptable choices were, and the associated channel numbers!

For example, when looking for something to watch, a possible candidate is the BBC 1 News. But I’ll keep looking, and see if there is a Schwarzenegger film with a high body count. While looking for that, I’ll see a re-run of the definitive Sherlock Holmes series with Jeremy Brett. But I’ve seen them all, so it’s only a ‘maybe’. And so on, until I reach the end of my search, by which time I have no idea of what the choices were, or where to find them, or indeed what the channels or channel numbers were.

The Solution in Brief:

Two approaches, one for each of the two use cases, must be considered; both ultimately solve the problem of getting back to the list of channels that should be followed up:
  1. Using the EPG grid, that usually displays a program guide consisting of channel listings: By selecting a service in a particular manner (e.g. pressing the purple button while the service is highlighted), it becomes ‘sticky’, always staying on the screen. The other services scroll around it, as the list is navigated. Further services can be selected in the same manner, also remaining on the screen, up to a limit imposed by the number of services that can be displayed on a single screen. Once I reach the end of the list of services, I have a list of services, all of which I might want to watch. I can now select what I consider to be the most interesting.
  2. During channel surfing (from first to last channel), a special button is used (e.g. pressing the purple button whilst tuned to the channel) so that the STB "tags" the channel for follow-up, adding it to its "sticky" list in memory. Once I have channel-changed to the last channel, I use another special button (e.g. pressing the yellow key) that presents the list of channels I've marked for follow-up. The presentation can be similar to the list generated in 1).

The Solution in Detail:

1) Solution for navigating the EPG listings

When the EPG displays a list of services to the user, make one of the remote control keys (e.g. the blue button) into the ‘sticky select’ key.
Consider the list of services displayed on screen to be made up of two lists, displayed one after another on the screen. The second list is scrolled up and down as normal; the first list always occupies the top N locations on the screen. Initially, the first list has zero entries, and the second list has as many entries as can be displayed on the screen (say 8). Behaviour is thus exactly the same as the normal EPG.
When the ‘sticky select’ key is pressed, the currently highlighted service is moved to the first free position in the first list (currently top of screen), and the services in the list above this moved down one. This implementation causes the selected service to appear to be shuffled to the top of screen. The first list now has 1 entry, the second list 7. As the user scrolls through more services, only the 7 lower services scroll – the top service stays put.
‘Sticky select’ another service. This service moves to the second position in the first list, other services move down one line. The first list now has two elements, which stay on the screen, while the user scrolls through the rest of the service list, displaying 6 at a time. Repeat as necessary.
Once the user has viewed the list of all services, he has a selection of services that he might like to watch. Pressing another key on the remote control (say the yellow button), moves the highlight to the sticky list. The user moves up/down in the list by the up/down keys, selects a service by means of the ‘select’ button (which then tunes the STB to the service in the usual manner), or unsticks the service by pressing the blue button, to drop this service from the sticky list. Pressing the yellow button takes the user back to the non-sticky scrolling list.
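Purely as an illustration of the two-list model described above (a hypothetical sketch, not the patented implementation; Python is used only for brevity):

# Minimal sketch (hypothetical): the two-list model behind the "sticky select"
# grid - a pinned list that stays at the top of the screen and a scrolling
# list for everything else.

class StickyServiceList:
    def __init__(self, services, screen_rows=8):
        self.sticky = []                    # first list: pinned to the top of screen
        self.scrolling = list(services)     # second list: scrolls as normal
        self.screen_rows = screen_rows

    def sticky_select(self, service):
        """Move the highlighted service into the next free sticky slot."""
        if service in self.scrolling and len(self.sticky) < self.screen_rows:
            self.scrolling.remove(service)
            self.sticky.append(service)

    def unstick(self, service):
        """Drop a service from the sticky list back into the scrolling list."""
        if service in self.sticky:
            self.sticky.remove(service)
            self.scrolling.insert(0, service)

    def visible(self, scroll_offset=0):
        """Services shown on screen: sticky entries first, then the scrolled window."""
        rows_left = self.screen_rows - len(self.sticky)
        return self.sticky + self.scrolling[scroll_offset:scroll_offset + rows_left]

guide = StickyServiceList([f"Channel {n}" for n in range(101, 121)])
guide.sticky_select("Channel 103")
guide.sticky_select("Channel 110")
print(guide.visible(scroll_offset=5))   # two pinned channels plus six scrolled rows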

2) Solution for channel changing (ie. Tune to channel, watch for a short while, tag for follow-up)
The scenario is that one would start from the first channel, surf to the last available channel, tagging a channel (by pressing a special key e.g. purple key) for follow up as desired. Once finished, one then hits another key (e.g. yellow key) to view the list of channels that were tagged or “sticky”.
Because this process is dynamic and the generation of the sticky list is transient, the STB would maintain a cache in volatile memory of all the channels that were tagged. When the STB is powered on, the cache is empty. The cache is built and updated each time a channel is tagged. If the channel has been tagged before, the STB could either ignore this channel (as it's already in the cache) or remove the channel from the cache. Alternatively, the user could be asked to confirm deletion; or, by pressing the blue button, the user could untag or "unstick" the service. This usage scenario should be easily customised via one of the EPG setup menus.
The cache is basically a structure that contains all the information required for the STB to remember the channel details. Usually these details are easily obtained through the STB middleware API, and may vary according to the type of digital TV environment. For example, for systems in the DVB domain, this would typically include all Transport Stream data relevant for channel tuning (network triple, delivery descriptor, service descriptors) - and for DirecTV DSS systems in the US, typically APG data. The cache would typically be controlled by the underlying middleware, and could be realised using the existing mechanism of the service list database, with the addition of a new flag that marks the service as Sticky. A typical EPG application could then use a simple interface like "Get number of Sticky Services", to which the middleware returns the number of sticky services marked. The EPG would then iterate through the sticky services and, for each service returned, display the result in the list accordingly. This mechanism of list generation and display is already widely used in STB EPG/Middleware code, and this would be a relatively straightforward enhancement to it - making it possible to realise in past, present and future STB EPGs.
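A minimal sketch of the volatile tag cache and the "Get number of Sticky Services" style of interface described above (the names and tuning fields are hypothetical, not an actual middleware API):

# Minimal sketch (hypothetical API names): the volatile "tag for follow-up"
# cache - tuning details are whatever the middleware needs to retune (e.g. the
# DVB tuning triple), and the cache is rebuilt from empty on every power cycle.

class TagCache:
    def __init__(self):
        self._tagged = {}               # service_id -> tuning details, volatile only

    def toggle_tag(self, service_id, tuning_details):
        """Purple key: tag the current channel, or untag it if already tagged."""
        if service_id in self._tagged:
            del self._tagged[service_id]
        else:
            self._tagged[service_id] = tuning_details

    def sticky_count(self):
        """Equivalent of a 'Get number of Sticky Services' middleware call."""
        return len(self._tagged)

    def sticky_services(self):
        """Yellow key: the follow-up list the EPG iterates over and displays."""
        return list(self._tagged.items())

cache = TagCache()
cache.toggle_tag(103, {"frequency_mhz": 11778, "polarisation": "V", "symbol_rate": 27500})
cache.toggle_tag(110, {"frequency_mhz": 12207, "polarisation": "H", "symbol_rate": 27500})
print(cache.sticky_count(), cache.sticky_services())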


Points of Novelty that go Beyond the Known Relevant References:

This mechanism permits the user to make an informed decision from a small number of nearly equally acceptable choices, in a manner easily integrated into the normal functioning of the STB user interface.
The novelty includes the generation and display of the "sticky list". The basic solution would be a typical list displayed on screen, similar to the usual EPG listings.

Possible Novelty Enhancements linked to the basic idea of Sticky Lists
A more advanced novelty is that the STB now has knowledge of the channels of interest and can decide to intelligently record those services in the background. Recording the services in the background ensures the viewer doesn't miss out on anything whilst searching for channels to watch. Moreover, it offers enormous benefits to the user: if two channels are interesting, the user no longer has to choose between the two, as both are being recorded. If the channels are being recorded, then the "sticky list" generated by pressing the said key could be displayed as a mosaic of services, with video thumbnails of the subset of "sticky services" - not dissimilar to a mosaic channel, except that this mosaic is dynamic and generated by the STB itself in real time, rather than composed by the headend.

Monday, 27 February 2012

EPG for the Blind / Talking EPG / Speaking EPG (2006)


In late 2006, I reminisced about a very special interaction I had with a blind person who left quite an impression on my life at the time. Whilst I was still at university I used to work weekends and holidays for a major national clothing retail store, called Asmalls.  The head-office reception desk, together with the role of personal assistant to the CEO, was held by a lady who was completely blind. She operated a computer, answered the telephones, listened to the radio and read books all by herself; she even lived on her own and braved the walk to work by herself....she was absolutely amazing. She inspired me to raise the idea of a Talking TV and to learn about accessibility to the point of becoming evangelical about it - so much so that I took the concept from idea to prototype, in under a year, all in my spare time - back in 2006/2007 when absolutely no other company or research body was publicly demonstrating this technology on Set-Top-Boxes.  Alas, what should've been my chance at innovating for humanity was beaten by another company who partnered with the leading blind organisation in the UK and produced the world's first Talking TV product, 3-4 years later...

Again, why am I posting this now - almost 6 years later??
If you've been reading my other posts on Ideas, you'll know that I want to use my blog as a platform to make public the ideas I've had to date as well as my current ideas, in the hope that it will lead to making connections and other opportunities.  With this particular idea, I want to show you that I've done real work, with real organisations and people, and have what it takes to take something from a concept through to completion, despite the resistance from a corporate that saw no real monetising value in the concept, although it fully supported the idea as being socially beneficial.

In 2006 I set out on this pet project to prove I could build a talking TV product, using existing set-top-box hardware and software, without the need for additional and complicated hardware.  I sought out real people to gather the requirements, and iterated a few prototypes until I had a working demo of the concept running on a BSkyB platform, which is currently powered by NDS software (the fact that NDS powers BSkyB products is already in the public domain).  Working outside normal work hours I built up relationships with other industry players and engaged in workshops.  I found out at the time that the RNIB was indeed engaged with OceanBlue on a similar project, but they shared their requirements with me in the spirit of collaboration.

What set my project apart was that my implementation was based entirely on software speech synthesis, using an Open Source technology called Flite (Festival-Lite), kindly offered free for commercial use by the kind people at Carnegie Mellon University.  The other projects took a different approach of enhancing the physical hardware chipset with speech processing functions, adding cost to the set-top-box hardware, and the quality of the sound was questionable at the time as well.  What eventually transpired was that the end product that was released was based on a commercial speech engine offering better quality voices...
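For the curious, here is a minimal sketch of the basic principle (this is not the original NDS prototype; it assumes the open-source flite binary and its "-t <text>" option are installed on the target, and the function names are illustrative only):

# Minimal sketch: hooking EPG navigation events to software speech synthesis
# via the Flite command-line synthesiser.

import subprocess

def speak(text: str) -> None:
    """Read a UI label aloud through the Flite synthesiser."""
    subprocess.run(["flite", "-t", text], check=False)

def on_highlight_changed(menu_item: str, synopsis: str = "") -> None:
    """Called by the EPG whenever the highlight moves to a new menu item or event."""
    speak(menu_item)
    if synopsis:
        speak(synopsis)

on_highlight_changed("Planner")
on_highlight_changed("Channel 101, 9 PM", "Evening news bulletin followed by the weather.")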

In short, I was able to showcase my demo to prospective customers and had a real working prototype built in 2007 - I've got a video that proves it as well.  My work was mentioned in a press release in January 2009, but we didn't go public with any official demonstration of the technology. Instead, the PR noted that:


NDS is also researching new technologies that will help sight-impaired people to access Electronic Program Guide information and other digital TV services. One such NDS innovation enables people who are sight-impaired, and others who have difficulty using text-based EPGs, to hear EPG listings and content read aloud in real time in a natural sounding voice.

Getting my work mentioned in a press release felt great - my fifteen minutes of fame.  Alas, despite all my efforts at self-promoting the project, it lacked monetising appeal and wasn't in line with the product roadmap and customer expectations at the time. People were more interested in home networking and advanced 3D graphics and animation than in promoting accessibility. The feeling was that, from a technology-provider point of view, it was easy to implement but there wasn't demand. The onus was more on operators and broadcasters, and depended on regulation. With little regulation to enforce this upon operators, the feature fell right to the bottom of the priority queue.  Even six years later, in 2012, despite headway being made in the EU/UK/US, it is still not a mandatory requirement to implement accessible EPGs using speech synthesis. It is rather an optional, much-desired nice-to-have, and the market demand is small...So to cut a long story short, my project was put on the back-burner, giving other companies enough time to catch up and occupy this niche space that had yet to be fulfilled - and so OceanBlue went public instead, in September 2009...

Take a look at this video:

Suffice to say, I took that a little personally, because I felt we'd missed out on the chance of becoming the first company to announce to the world that STBs can become more accessible; my Speaking EPG project died a quiet death!  I had even pursued the patent route years earlier, but there was enough prior art in the PC domain to render the concept void for patenting...

The options a typical technology provider faces
STB Middleware providers are usually focused on providing compelling, exciting features and pay little regard to promoting accessibility. It is also evident in most of the advanced user interfaces being published today: full of flashy animation and graphics, using 3D images and video clips, and paying little attention to usability requirements, especially for the rather overlooked segment of the market, the visually impaired audience. Instead, the focus is on maximising the screen real-estate offered by HD resolutions - typically usability / accessibility is added as an afterthought.  Why can't these providers learn from Apple, who've designed their products with accessibility in mind from the get-go? The operating system does not feel clunky at all, and enabling VoiceOver or any other accessibility feature feels almost natural to the interface...If only Digital TV Middleware providers and EPG Application vendors could support this mindset... So some of the options (as shared by an ex-colleague, Paul Jackson) available to these vendors are:
  • Do nothing except what customers (operators) specifically request
  • Assume that this is a niche product / service area, thus provide APIs etc as below to let third parties sort it all out, as you don't believe it can make enough money or produce enough innovation in this area 
  • Assume that this is a niche product / service area, but try to make money and encourage innovation from within, leaving room to change direction if it turns out not to be such a niche area as is assumed
  • Realise that easy to use accessibility features and functionality are a key part of good general product and service design and will have positive, possibly unexpected and potentially lucrative spin-offs in the general mass market if designed in from the outset, and choose to develop accessibility solutions first, then encourage mass market for them (top down approach)
  • Realise that easy to use accessibility features and functionality are a key part of good general product and service design and will have positive, possibly unexpected and potentially lucrative spin-offs in the general mass market if designed in from the outset, and choose to develop mass market accessibility related solutions first, then provide more niche / specialised accessibility solutions once volumes of suitably enabled devices are in the market (bottom up approach)
Current Real-world Products Promoting Accessible Talking TVs
The RNIB has a good summary of the current technology offerings to date - see Accessible TV Devices promoting the Smart Talk TV (from RNIB/OceanBlue partnership), Sky Talker (BSkyB's own proprietary standalone device) & Apple TV (promoting Apple's VoiceOver technology).

I can offer consultation on STB/TV Accessibility Topics & Technology
Based on my close experience and hands-on implementation of the technology itself, I can offer consultancy on accessibility issues around TV. This has become a personal passion of mine.  If you're looking to implement speech processing in software, or to prototype and experiment with speech technology options, I can advise on the options available.  I am planning to write a white paper that describes this topic in a fair amount of detail....


White Paper In Progress
In my upcoming white paper, I will introduce the need and demand for Speaking Programme Guides using data from a variety of sources, will explain the high level requirements from a user point of view, provide an industry update as well as benchmark the technology options available today. I will also highlight a template project that one can use based on Open Source technology....Please get in touch with me if you can't wait for this white paper....

Other Snippets of Industry activity proving I wasn't all that nuts

Proof of what the market is doing:



                                                            oOo
                                         P r o o f s     of       I d e a 
                                                             oOo        
Original Idea Posted: 18 October 2006 EPG for the Blind

I have a friend who is blind - she is employed as a personal secretary (able to email, fax, draft letters, etc.), lives on her own, cooks her own food, travels to work alone, listens to music, reads books and is an amateur musician - which is quite amazing really.

She is able to do her work (and this is going back to Windows 3.1 days) by using a specialised Microsoft product for the blind (hard-of-seeing?) - i.e. every key stroke is read out, etc. She is so good she can even touch-type without any braille on the keyboard...This is great for the PC, but what about TV, especially the EPG?

Certain channels broadcast events with audio descriptions - this is great. But what about pressing the remote control, the menu keys, the menu options - grid, planner, etc. - how is a blind person to navigate these? Also, how is one to find all the channels/events that have audio descriptions, or even the radio channels?

Basic idea: Embed audio as part of the navigation of the EPG, e.g. Pressing menu, would do something like read out ''Menu'' and as one navigates through the menu, each option in turn is read out...
That is only the beginning - the next step would be to embed audio description into the event information/synopsis.
And a further step would be to provide a personalised guide that is filtered for the person's specific needs - so that as she navigates through a grid, for example, the content of the grid block is read out aloud...

Embedding built-in EPG audio samples/voice-overs shouldn't be too much of a problem - a few K of memory probably - which modern STBs can accommodate. Broadcasting audio descriptions of events may add a little to the bandwidth overhead, but only a few Kbps is likely...

Where is the money to be made? None - I hope - think of the service this would do to the community/society... 

Comments Received...
 An alternative approach which eliminates the cost of producing and transmitting spoken EPG data and synopsis information would be to simply enable the text highlighted / presented by navigation to be delivered as a data string out of a serial / USB port on the STB at the same time as it is rendered on the screen. External devices which are capable of text to speech are readily available (including as a standard feature of Microsoft OS) and offer multi language ''dictionaries''. The same methods can be extended to provide subtitle to speech capability.

One commercial downside is that it would be possible for third party devices to ''harvest'' this EPG and Synopsis data for other purposes. If this is considered to be a problem then the serial data could be encrypted using NDS solutions so only approved / paired devices could decrypt and text to speech these strings.
---------2006--------
That is interesting information indeed - but on the humane side, I'm against add-on peripheral features (leading to bloat, just look at the X-Box as an example!) - if a system can be made to natively support a feature, then it should...it may even be considered discriminatory - why should someone who's blind have to incur extra setup costs to experience TV compared to someone who isn't blind?

If the technology exists, it may well be worth considering implementing/porting this text-to-speech stack directly in a middleware itself...
--------2006---------
Thinking about this a little more: the feature to support the blind should not be obtrusive or invasive, and should not disrupt the viewing of the other, sighted members of the household. As such, it should be discreet and could possibly use the following:

- The middleware could provide separate accessibility features like an independent remote command set - that allows one to seamlessly browse the EPG data without disrupting the live viewing at all. The EPG information can be streamed wirelessly to a headset and thus the audio of the TV program currently being listened to (by others), is not disrupted. Even helpful audio description could be sent in this format.

So moving forward, possible solutions for an EPG for the blind include:
1. Embedding audio to each remote control command
2. Embedding audio into the TS stream
3. Using a third-party add-on that does Text-to-Speech translation
4. Building a Text-To-Speech translator as a component of the middleware
5. Streaming this EPG info in the least disruptive way, e.g. wireless headset
6. The service provider to provide a separate channel (like info/help) dedicated to serving the needs of the blind (like commentary on what's on, etc - perhaps even through an interactive app?)
--------2010---------
I just wanted to share with you some real-life feedback on the accessibility of the UK's Sky+ HD box. For the original post, you can read Damon Rose's blog.

Quote:
Hi there.
Sadly the new Sky HD box isn't accessible.
The elements that I most want to use, and that I could use on their previous boxes, have now been re-designed in such a way that they're impossible to navigate.
So whereas I could previously go into the list of programmes I've recorded on Sky + PVR, arrow up and down and have a good idea where my programme is ... I now can't. Arrowing up and down no longer is confined to the 'programmes you've recorded' list ... it also takes the control left and right, in a manner which is hard to understand if you can't see. It goes into sub menus of 'series linked' programmes ... and takes you off round the screen if you arrow too far.
Very disappointing, a disappointment that I now spend 10 pounds a month extra on.

So one lesson to learn here is a well-known one, but worth reminding ourselves of for any development project that has to deal with a legacy of users: be careful not to break features that people have become accustomed to, consider all your users no matter how small the market is, and remember that these customers are paying for the service and contributing to the bottom line...

Like all other improvements to software products, accessibility needs to be built in from the start, and not added as an afterthought.

Monday, 20 February 2012

Subscription-based Computing (Raised March 2007)


As mentioned in a previous post, another aim of this blog is to share my ideas...I have finally mustered up the courage and guts to start showcasing some of the ideas I've had in the past, as well as ideas currently germinating -- opening up to public scrutiny, in the hope that people leave constructive feedback or get in touch to jointly pursue ideas going forward -- the ultimate dream is to latch onto an idea with start-up possibilities.  This is partly due to Jeff Jarvis's influence, as I'm currently reading his recent book Public Parts, having first read What Would Google Do, which inspired me to create this blog in the first place!

I am also wary that my ideas at the time were raised whilst working for a company that is quite secretive and security-conscious. But I'm taking the risk in so far as only sharing ideas that led to nothing, zero, zilch: the topic was raised, discussed and not taken forward. Hence I'm confident there are no IP infringements in what I'm about to disclose now, or in future posts.  After all, these were in fact my own ideas, submitted for review to an ideas committee, and nothing really materialised, mostly because the ideas were not relevant to the core business at the time... I had personally kept a record of all the ideas I submitted, for reasons such as this post...

Why am I still taking the risk then??  Well, I have a bee in my bonnet really - but I'm not stupid, I've edited out company-sensitive info.  I know that deep down inside of me I've got the entrepreneurial spirit, the mind of a thinker, a rebel, a desire to take risks and be disruptive -- although I've not really taken most of my ideas past the initial stages, i.e. moving from idea to concept to prototype (I have actually pushed one through to completion, which you'll get to know of in a future post, "Talking TV"). You might think me crazy, but the more I dabble with ideas, and the more I read about start-ups and entrepreneurship, the closer I feel I am to latching on to something for real...it's a feeling I can't shake off, call it instinct, time will tell - I'm only 34, there are years in me yet to be inspirational...

I also want to prove to people out there, especially companies seeking like-minded-people such as myself, that my ideas are not just nuts, because there are companies that are indeed making similar products.  So unlike some folks who say "well I wish I would've thought of that", I am saying that "Dude, I did have the very same idea!" and can show you the proof as well...I want to be noticed (Google, please find me). I want to work for Tech Giants, the likes of Google / Apple / Microsoft - that's no secret, I would love to go to Silicon Valley / Redmond one day to experience the excitement of managing a start-up or working with a start-up...what's wrong with dreaming hey??

You don't have to believe me - it's the truth - that's about it.  I want to share the ideas I had in the past that are actually becoming real now, to not only prove to you that I do have innovative ideas but also to strengthen my case for offering professional technical consultancy...

Here goes:  I'm kicking off my first public idea by provoking some possible contention.  I had the notion of a WebOS, replacing the classic PC OS with an all-in-one simple device, back in 2007 - before Google or HP mentioned ChromeOS or HP/Palm's WebOS.  Of course those giants might've seeded the ideas around the same time as me, but according to Wikipedia, Google first started work on ChromeOS in early 2009; HP also took forward Palm's WebOS around the same time in 2009.  So I keep telling myself I was two years ahead of these guys....

My idea went further in that Service Providers would have provided boxed, branded units that basically just worked, were always online and met the basic technical/PC needs of the average PC consumer...Basically I predicted exactly what Google themselves now portray (see video below).   Personally, I still think there is mileage in pursuing this going forward - most probably building a business on the back of ChromeOS - why? Because people still need to customise their applications and environment, and ChromeOS doesn't yet offer stress-free, seamless subscription-based computing. My bet is people will still want something that works out-of-the-box, first time, without fuss (no wiring, no wireless contracts - it just works)....

                                                                      oOo                         

Below is a thread of discussions of how the idea unfolded and was left dangling in the air:
                                                                         oOo   


Subscription-Based Computing, idea raised on 21 March 2007

I am wondering why a subscription-based model to PC computing hasn't taken off, or perhaps has it even been considered?
I'm questioning the need for having a PC in households. The majority of people are not techies, and will never be techies. People want things that just work, without needing any special knowledge, etc. With these assumptions, the requirements are simple:
1. I want to connect to the internet (and I don't care how it happens - it should just work)
2. I want to send email (and I don't want to know anything about POP, SMTP, User name, password, etc)
3. I want to write a document (and I don't care about integrated features)
4. I want to maintain a simple spreadsheet to manage my accounts
5. I want to be able to share photos (and I don't care about drivers, photo software, etc. My service provider can sort this out)
6. I want to watch TV (anything I fancy and no fixed schedule please)
7. I want to play games
8. I don't want to worry about firewall protection, antivirus, anti-spyware, malware, etc.
9. I want access to my data anywhere, any time.
10. I just want my pc to work out-of-the-box, hassle-free.
And the most interesting part is: I am willing to pay a subscription to a service provider (BSkyB in 30 years' time maybe?). I just don't care about the cheapest broadband service, the best AV, the best hardware; I don't want to manage OS upgrades, etc, etc - I want my service provider to do all of this for me - hassle-free computing!
A PC is supposed to be general-purpose - but is it really that general? What do the majority of users do with their PCs? It would be interesting to learn if any surveys are available in this area...
Can such a system be realised? The consumer just subscribes for the services, and the service provider just serves? Can this semi-closed system survive? Will people be interested in this?
What I'm questioning is the need for people to buy PCs and for people to have the know-how to configure and operate them. The majority of people just want to use the internet, send email, view pictures and watch movies.
So, just as we have a subscription model for mobile phones, digital tv, etc - could the future provide a similar service for computers? For example, I purchase a subscription from Sky. A SkyPC is delivered, it may need installation as usual. But once installed, I have all my basic computing needs satisfied.
This device is still a PC, but a Sky-branded one - not a STB.
The device could be so dumb, it wouldn't even have a hard drive - a simple client connecting to a terminal. The terminal serves the needs of all clients. Ignore the physical limitations of this idea right now, and assume the infrastructure is present in 30-50 years time....
If this can fly, then the winner will truly dominate house-holds... 
Threads of discussion:


_____________________________________________
Sorry for bringing up this old thread again – but I was wondering if we were continuing to do anything in this area? Vodacom SA has released an offering that is not dissimilar to my original idea: http://www.linkbook.co.za/


Basically it looks to capitalise on the cloud, web apps and a web OS, offering a tightly controlled interface (as we do for EPGs) but still providing a (monitored) portal to the internet, offering the basic necessities of a browser, messenger, Skype, etc. Everything is plug and play; the user basically doesn't have to be technical.


Have we looked into the idea of being a Data Service Aggregator/Mediator? Why are devices exposing yet another interface for connectivity – 3g/4g sim cards? What if there was a mediator that deals with the likes of Virgin, O2, T-mobile, Orange, etc. I purchase a huge data bundle from all these providers. Consumers sign up with me for data services, they don’t care about the underlying platform offering, they just want to be online. I have a setup, just as current mobile operators do with their management systems, but my system intelligently handles handover of the data service from one operator to another, example – when a user is in an area known for poor coverage from Orange, but excellent coverage by Vodafone – because of my relationship with these operators, the user will seamlessly switch from one operator to the other, guaranteeing a sustained level of quality.
With this relationship and system in place, I can offer always on connectivity to subscribers without them worrying about where they get the data from. This could be extended to hot-spots as well, my system could negotiate deals automatically with hotspot providers without consumers having to worry about local payments, and configuring the connectivity…


_____________________________________________
This is interesting news indeed. I've tried to get the gist of the EasyNeuf (using Babelfish translation - why didn't the French colonise South Africa so I could've learnt French instead :-))


I am interested to learn more about that company; and if that project has had much success. Looking at the press releases, there seems to be some recent publications.


Although the idea of having 3 versions of the interface, taking the user from beginner to intermediate and then to advanced level, sounds good (as it empowers the users) - that's not the approach I had in mind originally, as I was targeting the segment of people who just want a box to do the basic stuff - non-technical users: parents, grandparents, or even younger middle-aged individuals who just can't be bothered with all the fancy applications, but want to use their email, surf the net, shop online, socialise, do a bit of budgeting, keep shopping lists, download content, play movies, listen to music, access TV, etc, etc. That would be the entry level. The moment we grant freedom at the advanced level, people would want to experiment, tweak, configure, enhance, etc. That shouldn't be allowed - get a PC to do that :-)


Besides, after paying a subscription of 40 euros (in addition to a 150 euro down payment) for 2-3 years, the amount spent on subscriptions may well have amounted to the cost of a new laptop, without the hassle of a subscription. Having said that, the value-add must be so good that consumers stick to their subscriptions because of the peace of mind, ease of use, flexibility, security, maintenance-free operation and mobility that such a device can offer.


Yes, such a system presents a plethora of technical challenges: network infrastructure (routing, wi-fi, adsl, 3g, etc.), Operating System (local OS or thin client), packaged applications, interoperability, etc, etc.


But I share Ronnie's view in that I just cannot see this not happening in the medium- to long-term future; it is quite certainly a disruptive technology.


_____________________________________________
I will point to a French offering that is exactly what Muhammad is talking about: it was announced last year by Neuf Telecom, and has since been launched.


They call it ‘’easy neuf’’ and came with a surprising microwave-ish form factor !


Below is a related article in French: http://www.clubic.com/actualite-38705-easy-gate-box-routeur-pc.html


The commercial site is  http://www.easyneuf.fr/


It is based on an Intel M600 + 852GM, 512 MB RAM, 512 MB Flash, Linux 2.6.17, Firefox, the GIMP image editor, a multimedia player and a small productivity suite (spreadsheet, text …).


No internal HDD (to be extended with external USB2 disc or memory stick)


Note that the product offers 3 levels of UI: easy, ergo and expert (see picture above).


I have no idea where they are today and if they are successful. The price starts at 149 euros for the central unit (oven). See the ‘’tarif’’ (pricing) section on the commercial site.


Funnily enough, I share the feeling that there is still something to dig into here.


____________________________________________
Yes  - of course! The more comments the better…I really think subscription-based computing is not that impossible - especially when remote storage is becoming cheaper, web technologies providing not only online office apps, but video editing tools as well…there will come a time when the need for a monolithic local OS will disappear - and everything will be managed remotely and distributed…And assuming the problem of getting TV over the internet is solved, you not only have your personalised mobile "PC" but also access to TV…The best part is, no pains of security alerts, upgrades, etc…complete trust in your service provider! Who knows, we may travel full circle going back to the days of thin terminals….
_____________________________________________
Thanks again for replying. Perhaps the word "PC" is conveying the wrong meaning - I envision a closed device (could be a laptop, could be your STB Gateway, or STB itself) with a wireless keyboard (so you could use your HD tele) or indeed - a laptop-like device.


The device can only be tweaked and configured by the service provider, like [ServiceProvider]. Call it the "[ServiceProvider] Explorer" box - that meets all your internet or computer needs…


Think about it - how many people really use their PCs to their full potential? There should be some research to back this up: Mostly browse the internet, check email, online shopping, a bit of Word processing, and a bit of spreadsheets. No more than that. So a box that is always on, does not need tweaking, can interconnect with other devices (printers, cameras, etc) - and provides not only the security for online shopping - but also peace of mind because of [ServiceProvider]'s reputation with parental controls ( XXX's business)…


[ServiceProvider] could even have their own secure payment system ( XXX to provide the technology of course)
[ServiceProvider] would have to have their own OS (Linux or Fusion) - ( XXX to provide of course)…
[ServiceProvider] will need headend control -  XXX to provide of course… :-))


What a stir this would cause to Microsoft and other OEMS :-))


_____________________________________________ 
I'm sure you're probably getting loads of emails asking for reviews of ideas - I just thought I'd share one more idea with you, and if you could provide your comments/feedback/gut-feel, that'd be great :-)


At a very high level: provide a box-standard, packaged solution - a personal computer - providing:
- always on access to the internet (email, chat, browser, etc)
- office applications
- security (viruses, malware, etc)
- parental controls (filtered view of the internet) - using some sort of entitlements
- unlimited remote storage
- free maintenance, support, upgrades, etc
- usual share photos, content, etc - USB, plug and play, etc…
[Nick>]  Don't you think that people like HP, Dell and others already aspire to this?


Picture this: [ServiceProvider] offering the [ServiceProvider]PC. Engineer comes home, and installs the box. You don't need to worry about broadband tariffs, ISPS, Viruses, Software, Phone line, etc, etc. You have a box standard, all the tools one needs to meet the basic computing needs…offered by service provider.
[Nick>]  I agree completely with your proposition, and I think this is where [ServiceProvider] would like to go. The reason that they might shy away from it just now is a) most PCs are very user configurable, and this can lead to some very nasty support call headaches. With PCs (and even Macs), there isn't a viable business in supporting home users.


The closest I have seen anyone come to this is the One Laptop per Child project (OLPC). If we could reproduce PC functionality with STB levels of reliability (you may laugh, but they are much more reliable in practice), then you might get close to your vision.


It works for TV, mobile phones - why can't this model work for the PCs.
[Nick>]  Because it's very easy to mess them up as a user.


The majority of people just want something that works - not technical gurus - they don't want to control their pc environment - it's a headache really…
[Nick>]  So how about something which is more like a PC laptop then, or more like an embedded device, and heavily cost-reduced. Maybe very carbon-efficient too.


What about parental controls for the internet?? Surely  XXX can offer something in this space?
[Nick>]  Looks like a good idea


I've blogged about this here: http://XXX


What do you think?
[Nick>]  We need to think about what the functionality as a user would really need to be, to provide something that would need little or no user support.


Cheers,
Muhammad