
Saturday, 27 April 2013

Worlds of QA Testing in Digital TV Systems Projects

I have previously written about the Digital TV ecosystem with its complexities and challenges in defining the Architecture & Systems Integration spaces. In this post I will share some thoughts around the QA (Quality Assurance) / Testing problem space, as well as generally accepted strategies for solving these problems.

What really triggered this post (which is centred around E2E QA) was a recent conversation with a QA Manager who in passing commented that he's facing a challenge with the "End-to-End QA" (E2E QA) team, in that it takes up to six weeks to complete a full QA cycle. Now some might think this is expected, as E2E QA is the last mile of QA and should take as long as it takes. My response to this is that it depends...

It depends on which phase your project is in: is it still in early development / integration testing - where components are being put together for the first time to test out a feature across the value chain? Or is the project well ahead in its iterations, having already executed much of the E2E testing? It also depends on the scope and layers of testing applied to the systems and sub-systems before the project gets to E2E Testing. Has a version of the system already been promoted to live? If so, what are the deltas between the already live, deployed system and the current system under test?

[--- Side note:
This post is one of many to follow that will encompass some of the following topics:
  • Worlds of testing / QA
  • Is ATP / E2E testing a longer cycle than other system test cycles - or should it be the shortest?
  • High level block diagrams of architecture / interfaces / system integration
  • Visualisation of the technical skills required across the various areas of the testing environment: a heat map of technical depth
    • E2E QA is highly technical and should be considered as part of Integration, not a separate QA function
  • Which came first: the chicken or the egg? Headend first, then STB, or do both together - big bang?
    • How to solve complex systems integration/test in DTV?
  • What is end-to-end testing, and what areas should you focus on?
  • What is required from a Systems ATP?
  • What kind of QA team structure is generally accepted practice? How does Agile fit in with this?
  • Can E2E Testing be executed using Agile - iterate on features End-to-End?
Side note --- end ---]

It also depends on the complexity of the system, the range of features to be tested, and the revenue-generating and customer-service impact the system has. Certain systems, such as the Conditional Access System (CAS), are of vital significance - the PayTV operator's core revenue stream is significantly impacted by this system, so it's perfectly understandable that migrating or upgrading CAS versions is limited to perhaps two releases a year, or even fewer - unless some significant new feature (Recommendations or Audience Measurement, perhaps) is released.

My take on the subject is that there are many worlds of QA testing in the DTV ecosystem. Just as there are worlds of System Integration, which I've written about in a previous post (see figure below), there are equally worlds of QA.
[Figure: Worlds of SI]
The project team must carefully assess the levels of QA required from the various components that make up this ecosystem. I expect that by the time testing reaches the E2E QA stage, the amount of testing at that stage is the least, as confidence has already been gained from the testing stages before it. E2E QA should basically consist of proving the high-level business and product feature scenarios as expected by the end-user, rather than going into too much detail at the system level. For example, as a user I expect the system to deal with the following scenarios (a sketch of how one such scenario might be automated follows the list):
  • I am a new subscriber and joined the PayTV subscription for the first time, please enable my account
  • I am an existing subscriber and wish to upgrade to the latest product
  • I was on the latest product, but I wish to downgrade my subscription for now
  • Test various upgrade / downgrade scenarios
  • Test software download and disaster recovery
  • Test purchase of VOD content
  • Test out the parental guidance flow
  • Test out the TV Program Guide
  • Test out Signal broadcasting scenarios
  • Test out how the STB works with changes to broadcast (Add/Remove channels, events, etc.)
  • Test out security aspects
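
To make this concrete, here is a minimal, hypothetical sketch of how the first of these business scenarios might be expressed as an automated E2E test. The SubscriberManagement, Headend and StbClient helpers (and the e2e_helpers module) are illustrative assumptions, not a real vendor API:

```python
# Hypothetical sketch only: the helpers below stand in for the real
# CRM/billing, Headend control and STB test harness APIs.
from e2e_helpers import SubscriberManagement, Headend, StbClient  # assumed module


def test_new_subscriber_account_is_enabled():
    """Scenario: I am a new subscriber joining PayTV for the first time."""
    crm = SubscriberManagement()
    headend = Headend()
    stb = StbClient(serial="TEST-STB-001")

    # Business action: create the subscription in the subscriber management system.
    account = crm.create_subscriber(name="Test User", product="Basic")

    # The CA system should push an entitlement to the subscriber's smartcard.
    headend.await_entitlement_sent(account.smartcard_id, timeout_s=120)

    # End-user observable outcome: an entitled channel now decodes and plays.
    stb.tune(channel=101)
    assert stb.video_playing(), "new subscriber should see entitled channels"
```

Note how the test stays at the level of the business scenario - it asserts what the end-user would observe, not the internal mechanics of each component.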
I expect, by the time the test stage reaches E2E QA, that the detailed component testing - which deals with all sorts of technical scenarios, including the STB EPG User Experience - will already have been covered by some form of testing or another. In the Systems Integration space, time and the expectation of a quality deliverable are essential - we want to iterate and quickly get to the stage where acceptable component versions are identified end-to-end that are suitable candidates for formal release into live deployment.

Depending on the project, you might define a stage called "Acceptance Test Phase" or ATP - this can be anything from 2 weeks to 3 months. One reaches this stage in the project when, in addition to passing "E2E QA", the customer wants to make sure that all the I's are dotted and T's crossed before going into live deployment. Some broadcasters might be extra sensitive and precede the Field Trials stage by signing off a formal ATP (proven in a quasi-lab environment), whilst other operators might be confident enough to forego the ATP and consume it as part of the overall Field Trials stage. Field trials are about testing the system with real users - for example, real people in their homes use the STB/decoder, report issues and provide feedback to the project team. Once enough feedback has been received, and the team feels that enough fixes have gone into the system for it to be ready for a mass audience, the system will officially go live, or a new product launch is publicised in earnest.

Generally accepted model of Worlds of QA
Looking back at the earlier picture showing the worlds of SI, each world can itself be considered a system in its own right. For example, the Set Top Box is collectively a system consisting of a group of core components, typically: OS/Drivers, Middleware, EPG, CA & Interactive engine. These STB components would themselves be architected and implemented in a modular fashion, and would be modularised internally. Similarly, the Headend System is a collection of discrete service components, each specialising in a certain feature, for example: Broadcast Signalling, EPG Guide data carouselling, the CA System, the VOD system, etc. Collectively, the Headend system refers to all of these components put together. And thus, End-to-End represents this massive Headend system connected to the STB, along with the complementary systems required to manage billing and subscriber management.

To test such a large end-to-end system in one big-bang approach is really asking for trouble. Systems Engineering calls for testing the systems in as independent a manner as possible, building trust from the preceding test areas, then carefully integrating the system components together and performing the integration testing.

Integration testing is really about ensuring that the flow of data from Source to Sink follows the correct paths, that the end-user experience functions as expected, and that the system deals with edge cases and exceptions in a reasonably consistent way. Integration testing would exercise the core System or Product use cases.
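
As an illustration of the source-to-sink idea, the sketch below injects a guide event at the Headend and asserts that it surfaces on the STB. The helper objects are the same hypothetical ones as in the earlier sketch - assumptions for illustration, not a real API:

```python
# Illustrative source-to-sink flow check; all helper names are assumptions.
from datetime import datetime, timedelta

from e2e_helpers import Headend, StbClient  # assumed module


def test_guide_event_flows_from_headend_to_stb():
    headend = Headend()
    stb = StbClient(serial="TEST-STB-001")

    start = datetime.utcnow() + timedelta(hours=1)
    # Source: publish an EIT event into the guide-data carousel.
    headend.publish_eit_event(service_id=1001, title="Integration Test Movie",
                              start=start, duration_minutes=90)

    # Sink: the same event should appear in the STB's programme guide.
    event = stb.guide.wait_for_event(service_id=1001,
                                     title="Integration Test Movie",
                                     timeout_s=300)
    assert event.start == start
```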

Visually, a system can be represented as follows, and this layered representation can apply to any "System":
[Figure: A Model for System Testing in Layers]
Consider, for example, a well-architected STB system - it would take the following approach: layered tests, starting from low-level drivers up to the full-stack application. Confidence is gained bottom-up, so when it comes to the final stages of UI/EPG integration and testing, the expectation is that the test cycles are rather high level, not too technical, and that the duration of the test cycles themselves is shortened... (Automation is key - one way of encoding the layering is sketched below.)
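
One simple way to encode such layering, assuming a pytest-based harness, is to mark tests by layer and run each cycle selectively; the marker names here are illustrative only:

```python
# Layered test selection sketch: mark each test with its layer, then run only
# that layer per cycle, e.g. "pytest -m driver" or "pytest -m epg".
import pytest


@pytest.mark.driver
def test_tuner_locks_to_known_frequency():
    ...  # low-level driver check, run on every firmware drop


@pytest.mark.middleware
def test_section_filter_delivers_eit_sections():
    ...  # middleware API check, run once the drivers are trusted


@pytest.mark.epg
def test_guide_grid_renders_current_events():
    ...  # full-stack UI check, kept high level and short
```
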
[Figure: Real World Application of Layered Model Approach (MediaHighway Fusion / BSkyB / UPC / Snowflake)]
E2E Testing in Digital TV Systems should really be the shortest QA cycle
I maintain the view that E2E QA should be the last mile of testing, and should either be the shortest test cycle compared to the preceding system test cycles, OR should contain the FEWEST test cases compared to the preceding system test layers. This is how I see it:
[Figure: The E2E QA World as I see It]
If a project team tells me they need six weeks to run an E2E QA cycle, then I get worried. If you tell me that you need six weeks to do an ATP before migrating from one system to another, and that you're concerned about regression on live deployments, then that is more acceptable - heck, I would say, in that instance take eight weeks or more!

I have also seen this flipped on its head: E2E QA or ATP as the longest test cycle. All I can say is that that approach is rather old school - to gain the best efficiencies and speed up time to market and delivery, a more flexible, agile approach to testing is required: a layered approach, with a strong reliance on automated testing and full regression traceability.

How do you test an EPG User Interface if the Middleware or STB/Decoder is not Ready?
This answer requires a post in its own right - with a well-defined software architecture and strong design practices, for example a good process for API Interface Control Definitions for the entire stack, an EPG UI application can be written in relative isolation from the core Middleware, through the use of simulators and mock or stubbed code. I have had personal experience where we wrote an EPG almost entirely using a Windows Simulator: we emulated the STB decoder hardware on a Windows PC, implemented the middleware as a stubbed component, simulated the entire broadcast SI tables and Guide information, and even emulated CA Events - completely freeing the EPG development team from the Middleware. We had to do this because the Middleware was still under development, but it was really important for us to reach code complete for the EPG. It then becomes a classic Integration problem as and when the dependent software components, like the Middleware or the actual decoder hardware, are ready.
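
A minimal sketch of that stubbing idea, with illustrative names rather than the actual project's interfaces: the EPG codes against a middleware interface, and a PC-hosted stub stands in until the real port is ready.

```python
# Sketch only: the interface and stub names are assumptions for illustration.
from abc import ABC, abstractmethod


class Middleware(ABC):
    """Interface Control Definition the EPG is written against."""

    @abstractmethod
    def tune(self, service_id: int) -> bool: ...

    @abstractmethod
    def current_event(self, service_id: int) -> str: ...


class StubMiddleware(Middleware):
    """PC-hosted stub: canned SI and guide data, no broadcast required."""

    def __init__(self):
        self._guide = {1001: "News at Ten", 1002: "Football Highlights"}

    def tune(self, service_id: int) -> bool:
        return service_id in self._guide  # pretend lock succeeds for known services

    def current_event(self, service_id: int) -> str:
        return self._guide.get(service_id, "No information")


def render_now_playing(mw: Middleware, service_id: int) -> str:
    """EPG code path: identical whether mw is the stub or the real middleware."""
    if not mw.tune(service_id):
        return "Channel unavailable"
    return f"Now playing: {mw.current_event(service_id)}"


print(render_now_playing(StubMiddleware(), 1001))  # Now playing: News at Ten
```

The design point is that the EPG never imports the stub directly - it depends only on the interface, so swapping in the real middleware later is a pure integration exercise.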

How do you test a STB Application if the Headend is not ready?
There is almost always a headend system available by way of the live broadcast, so recorded bit streams are your answer: cut a recorded stream, play it through a stream player, and connect the STB to the stream player. This assumes the STB application is not dependent on new features from the Headend. If new feature development is required from the Headend to enable a new feature on the STB Application, then again, mocks can be applied - the STB team can simulate whatever Headend stimulus is required to create the experience. For example, VOD catalogues can be simulated, EPG Guide data can be simulated, and CA events can be simulated. It is generally not recommended to create false dependencies on other system components that prevent one component from progressing its development.
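
As a hedged example of simulating Headend stimulus, the sketch below serves a canned VOD catalogue over HTTP so that an STB application (or emulator) can be pointed at a fake endpoint. The catalogue shape and URL are assumptions for illustration only:

```python
# Sketch: serve a canned VOD catalogue so the STB app needs no real Headend.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_CATALOGUE = {
    "assets": [
        {"id": "vod-001", "title": "Test Movie", "price": 3.99},
        {"id": "vod-002", "title": "Another Title", "price": 2.99},
    ]
}


class CatalogueHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/catalogue":
            body = json.dumps(FAKE_CATALOGUE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Point the STB application's catalogue URL at http://<pc-ip>:8080/catalogue
    HTTPServer(("", 8080), CatalogueHandler).serve_forever()
```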

Should STB SIQA Testers know how to manipulate Transport Streams, or How Technical should your QA Engineers be?
It is my view that varying degrees of technical competency are required across the different worlds of QA testing. In another post I will attempt to present a heat map that shows the varying degrees of technicality required of a QA engineer. In the case of a STB System Integration QA engineer, for example, I would expect a strong technical mindset, with the ability to understand the broadcast ecosystem down to the transport stream level. If test cases call for cutting specific bit streams, or crafting user-generated bit streams that test, say, DVB SI Signalling, then yes, I would expect the STB SIQA Engineer to be able to specify such a test case, right down to transport stream behaviour. However, I don't expect the STB SI QA engineer to actually cut the bitstream - that requires a separate broadcast engineering / Headend skillset. Contrast this with an EPG UX QA person, who is primarily focused on UI specification, Usability and User Experience, and less so on technical functionality.
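
To give a feel for the level of transport-stream literacy meant here, the following sketch walks a recorded .ts file and builds a packet-count histogram per PID from the 4-byte TS packet header - the kind of detail an SIQA engineer should be comfortable specifying in a test case:

```python
# Sketch: count TS packets per PID; assumes a plain 188-byte packet .ts file.
import sys
from collections import Counter

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47


def pid_histogram(path: str) -> Counter:
    pids = Counter()
    with open(path, "rb") as f:
        while packet := f.read(TS_PACKET_SIZE):
            if len(packet) < TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
                break  # lost sync; real tools would resynchronise
            # PID is the low 5 bits of byte 1 plus all of byte 2 (13 bits).
            pid = ((packet[1] & 0x1F) << 8) | packet[2]
            pids[pid] += 1
    return pids


if __name__ == "__main__":
    for pid, count in pid_histogram(sys.argv[1]).most_common(10):
        print(f"PID 0x{pid:04X}: {count} packets")
```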
