
Friday 20 January 2012

Appreciating the Role of System Integration


Last year I started my series of posts around the different aspects of DTV system projects. I started out by describing a typical end-to-end system architecture and the importance of clearly identifying the roles and responsibilities of the architect team (see post: Architect Roles), followed by a write-up on considerations for setting up a programme organisation structure (see post: Managing a DTV Programme) and, more recently, a brief overview of auditing the STB System Integration process (see post: SI/Dev Process Audits).

I still have a few more topics in the pipeline; the aim of this post, however, is to highlight the different worlds of System Integration. This area is often overlooked and left open to interpretation, which introduces dangerous assumptions and automatically puts the programme at risk. With a large-scale DTV deployment, the programme must, at the outset, clearly define the boundaries of system testing and establish clear roles for the system integration effort.

This post is especially relevant to TV Operators deciding to become more involved with technical ownership of their value chain. As previously highlighted, more and more companies are taking the plunge into doing their own Software Development and System Integration. This post is another humble attempt at raising awareness that such operators should proceed with caution, ensuring the full extent and gravity of the undertaking is understood, because System Integration is a significant investment in people (highly experienced technical experts are required), equipment (costly infrastructure) and time (starting from scratch, building this competency from the ground up takes at least five years to reach maturity).

Recall the end-to-end architecture originally presented a few posts back:
As highlighted in previous posts, the world of Digital TV consists of Broadcasters / PayTV operators employing one or more component vendors or Technology Service Providers, offering specialist services: software development for Headend/STB components, Conditional Access, Billing, Scheduling & Play-out systems, and System Integration services. This is made possible by basing the various technologies on open standards (MPEG/DVB/IETF/IEEE), which allows an end-to-end system to be created from a variety of components from different, independent vendors. However, some TV operators opt to use one supplier, often referred to as the Primary System Integrator, tasked not only with ensuring the end-to-end system is architected correctly, but quite often also with providing the bulk of the system components end-to-end (i.e. the dual role of component supplier and integrator).

System Integration and Architecture are linked; they go hand-in-hand. Although in theory the act of architecting is different from the act of integrating, they are not separate entities: integration drives the architecture. Looking at the above figure showing the different architectural streams, the following becomes clear:
  • Solutions Architect is replaced with Solutions Integrator
  • Headend Architect is replaced with Headend Integrator
  • STB Architect is replaced with STB Integrator

This now exposes us to the worlds and levels of System Integration. A programme needs to define its System Integration strategy at the outset; the success of the programme depends on a solid System Integration strategy.

If you're a broadcaster taking ownership of SI, you need to decide in what capacity you will be involved:
  • STB SI
    • This generally means taking responsibility for integrating the various components of the STB stack (typically Platform Drivers, CA Component, Middleware, UI & Interactive Engines) to provide a stable build (STB image).
    • Interfacing with a broadcast/backend is part of this responsibility, but STB SI assumes the necessary backend/headend infrastructure is in place
    • Responsible for managing and driving through component deliveries in terms of quality expectations and to some extent project time-lines
    • It is always useful to have access to component source code to aid in investigating and characterising bugs, but this isn't always possible as it has commercial implications with IPR, etc.
  • Headend SI
    • Within the Headend system itself, there are a number of component systems that must come together to implement specific macro-component features, like Conditional Access, Scheduling, On-Demand Catalogue/Playout and IPTV streaming.
    • Generally though, headend SI is about integrating these headend components either individually or collectively, proving the interfaces are working as designed (i.e. verifying the input/output is as specified by the protocols, essentially obeying the Interface Control Definitions, or ICDs) – a minimal sketch of such a check follows this list
    • Test tools might be used as test harnesses, or the STB itself can be used depending on the situation
    • There is a fine line between Headend SI and End-to-End SI or Solution SI, which can often lead to confusion and false assumptions
  • End-to-End SI or Solution Integration
    • This is about testing the solution in a representative environment, as close to real life operations as possible
    • It brings together proven systems at the component level, i.e. STB, Headend and other dependent infrastructure systems like Transport (Encoders, multiplexors, IP networking), Billing & Subscriber Management
    • There is often a dependency or interface with systems that are currently live (live components are considered crown jewels, usually managed by a separate delivery/operations team).
    • Failover, Recovery and Upgrade scenarios are performed as part of this exercise
    • Requires investment in one or two end-to-end labs (Some broadcasters impose this on their major vendors to ensure appropriate end-to-end testing is setup locally at the vendor premises; and some vendors have end-to-end labs set-up on the operator's site)
    • There is often confusion around how this activity differs from System Delivery/Operations – think of this as pre-delivery, the last mile of the programme's development phase.
    • It becomes really important to define an end-to-end SI strategy where the overall DTV value chain is a conglomerate of components provided by different business units offering different services, e.g. a Broadcaster might have separate departments, sometimes independent units, each offering a slice of the system (Broadcast, Interactive, VOD, Guide, Operations, R&D) - all these business units must come into alignment at the outset of planning the programme...
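To make the ICD point above concrete: below is a minimal sketch (Python, purely for illustration) of the kind of automated interface check a Headend or E2E integrator might run – validating the header of a captured MPEG-2/DVB PAT section against the values mandated by ISO/IEC 13818-1. The field layout comes from the standard; the harness and the captured bytes are hypothetical.

# Minimal sketch of an ICD conformance check, of the kind used in
# Headend/E2E SI. Illustration only: validates a DVB/MPEG-2 PAT section
# header against ISO/IEC 13818-1. A real harness would also verify the
# CRC32 and the programme loop in the payload.

def check_pat_header(section):
    """Return a list of ICD violations found in a PAT section header."""
    faults = []
    if len(section) < 8:
        return ["section too short to contain a PSI header"]

    if section[0] != 0x00:                    # PAT is always table_id 0x00
        faults.append("table_id is 0x%02X, expected 0x00" % section[0])

    if not section[1] & 0x80:                 # section_syntax_indicator must be 1
        faults.append("section_syntax_indicator not set")

    section_length = ((section[1] & 0x0F) << 8) | section[2]
    if section_length != len(section) - 3:    # counts bytes after this field
        faults.append("section_length %d does not match actual %d bytes"
                      % (section_length, len(section) - 3))
    return faults

# Hypothetical usage against a section captured off the mux:
if __name__ == "__main__":
    captured = bytes.fromhex("00b00d0001c100000000e010f3f3f3f3")
    for fault in check_pat_header(captured):
        print("ICD violation:", fault)
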
In a nutshell, the following diagram tries to illustrate the worlds of SI; the shaded intersections show the typical integration points required. The worlds are interconnected, and can't be expected to be tested and qualified in isolation of other systems (during the development phase, unit/component testing is expected, but as soon as we have stable elements to make up the E2E system, it's time to kick off your End-to-End SI activity). E2E testing must start well in advance of launch/go-live; 6-8 months of E2E testing is not unheard of...
Worlds of SI


The world of Delivery/Operations
The absolute last mile in completing a DTV Programme is the actual launch - go live!  This is usually under the control of an Operational entity whose sole purpose is to ensure the smooth operational deployment of the systems in the live environment - 99.9999% reliability is expected, so the transition from End-to-End SI to Live Deployment must have the highest probability of success.  In parallel to the E2E SI activity, there is usually a Delivery/Operations team mirroring the E2E SI effort, making preparations for launch.  This delivery team must be included in the Programme Schedule from the start, so they can make the necessary equipment procurements, network configurations and deployment schedules in advance to ensure a smooth transition.  Practically though, the delivery/operations team might re-use the same people from E2E SI, but it is important to ensure that everyone understands the milestones and phases of this activity.  Forward planning is crucial to avoid ambiguity and confusion down the line, which goes back to my earlier point that the boundaries and scope of each Testing/Integration activity must be clearly understood.

As always, I'm interested in your comments and feedback. 



Thursday 15 December 2011

SI/Development Project/Process Audits...


Earlier this year I wrote about setting up the right Roles & Responsibilities in managing a typical DTV project, covering the Architecture & Project Team requirements.  In this post, I'd like to share my experiences with Project Audits.

The scope of Project Audits is in itself a vast topic, mostly around auditing the Project Management aspects of the project, along with ensuring effective Quality Management processes are in place, proper Project Governance is enforced, etc. The IEEE SWEBOK dedicates a chapter to Software Quality, which counts Project Audits as part of this knowledge area, summarising the process as:
“The purpose of a software audit is to provide an independent evaluation of the conformance of software products and processes to applicable regulations, standards, guidelines, plans, and procedures” [IEEE1028-97]. The audit is a formally organized activity, with participants having specific roles, such as lead auditor, another auditor, a recorder, or an initiator, and includes a representative of the audited organization. The audit will identify instances of nonconformance and produce a report requiring the team to take corrective action.

While there may be many formal names for reviews and audits such as those identified in the standard (IEEE1028- 97), the important point is that they can occur on almost any product at any stage of the development or maintenance process.

What to audit in DTV Projects?
Again, with a bias towards the STB Software Development delivery project, audits are introduced to review two main areas:
  • STB Development / Test Processes across the stack (Drivers, Middleware, Applications)
  • STB System Integration Processes
If you're a product vendor supplying any of the STB components, you might want to review and assess your own development/test processes to gauge how well you're doing against industry best practices. It's also good for your portfolio when pitching for new business to potential operators.

If you're a TV operator dependent on product vendors supplying components to your value chain, you need to ensure that your vendors are indeed delivering quality components & systems. Typically (and this can vary with the type of contract in place), the Operator (customer) provides the Product/System requirements to Vendors; the Vendors implement and test their components according to the spec and produce test results (self-assessments). This makes it difficult for the Operator to dig too deeply into the detail, to find out exactly how the results came about, or to gauge the quality of the underlying development processes - typically until it's too late in the project life-cycle.  This can of course be remedied by building into the contracts the requirement for auditing the vendor's various processes, at any stage of the project life-cycle.

Some operators outsource the whole testing and verification effort to third-party test houses. These test houses, depending on the strength of the commercial contracts in place, could very well take complete ownership of test & quality management, and enforce audits on incumbent component providers.

As noted in my previous posts, more and more Operators/Broadcasters are taking ownership over product development, testing and integration. Take, for instance, the task of STB System Integration.  Whilst there are well-known companies like S3 & Red Embedded that specialise in STB SI, operators like Sky, DirecTV, Foxtel and Multichoice have dipped their toes in the water, attempting to own the STB SI activity internally, in-house. Incidentally, there are more players on the STB SI front: some STB manufacturers like Pace and UEC, as well as service houses in India like Wipro & Tata Elxsi. NDS Professional Services is also a well-known player in the System Integration field.

In my rather short stint of 11 years on STB projects, I've found the world of DTV to be quite small; the popular companies of choice for SI, in my experience, are S3, Red Embedded & NDS.

In my current company, a DTV Operator/Broadcaster, we run our own development & STB Integration projects. We're relatively new to this game, but we've built up a good team of people, making every effort to source the right people with the right skill-set (which is really challenging in South Africa), and we've been through a few rounds of Development & SI using third-party vendors supplying Middleware & Drivers. We decided now was as good a time as any to perform a self-diagnostic: a review and audit of our current processes, focussing specifically on STB System Integration and incoming Supplier Deliverables. Although we have enough knowledge in the team to perform a self-assessment ourselves, and despite having learnt and corrected a lot of mistakes from past experience, we opted for a professional consultancy to do the audit, and chose S3.

Why did we choose S3?
S3, despite being a relatively small company in Ireland, has been involved with TV technology for many years and is well respected in the DTV community for its specialist design services & system integration knowledge. They've leveraged the wealth of past project experience in producing useful products like S3 StormTest & Decision Line, an automated STB testing system that has only increased in popularity since its inception (and which we're currently using, by the way). The S3 Engage Portal is another useful product for managing and co-ordinating a complicated DTV SI project.  S3 are also known to be frank & objective, offering feedback and analysis that is open, without hiding the nasty details. S3 have been chosen as SI by some big players in Europe, and they're also being used on some other projects of ours. The choice was ultimately based on the strength of previous relationships – despite the fact that I previously worked at S3! The world of DTV is small; the other companies mentioned above are also competent and capable.

What areas do you focus on for an SI audit?
Of course I can't share with you specifics of the S3 Audit process, but if you're interested in learning more please contact S3, or speak to this guy.
In a nutshell, S3 shares a model of SI that is well known to the industry:
  • Incremental / Iterative approach, built on a strong foundation of domain expertise, using experienced System Integrators.  The foundation cornerstones that deliver an effective SI process rely on the following:
    • Strong Technical Expertise
    • Strong Project Management
    • Usage of Advanced Tools & Infrastructure to support the process
    • Strong Quality Assurance Practices
S3 have been kind enough to allow me to publish a slide giving their overview of the SI process:
S3's Integration Model: SI is a continuous cycle once all Components are in a suitable state of completeness

Based on the above cornerstones, the audit focuses on 48 topics of concern, which largely fall into the following categories. Again, these are generally the best practices set out in IEEE SWEBOK Chapter 11 on Software Quality, which shows that S3 appreciate industry best practices:
  • Project Management
  • Co-ordination & Communications
  • Requirements Management
  • Defect Management
  • Testing
  • Quality Assurance
  • Supplier Management
  • Release Delivery
  • Resourcing
  • Software Architecture
  • Integration & Configuration Management
  • Software Development

My Own List of Audit Criteria
Since this knowledge comes from my own past experience, I've no problem sharing it with you. If you want more details, feel free to contact me, or leave a comment. The list below is something I put together in advance of the audit we've just undertaken.

If I were doing an STB audit, this is what I'd be interested in:
  • Foundations of the STB Component (Middleware/UI) architecture design
    • Design philosophy – apart from being MHP/Java/MHEG/etc. compliant, what is the design ethos behind the architecture, i.e. does it scale?
    • Design patterns – what abstractions are used to ensure flexibility with changing requirements without major impact to design?
    • Performance requirements – is the product being architected with performance considerations in mind?
    • Security considerations – are security checks in place, to prevent buffer overruns, etc.?
    • Stability considerations – how do we ensure stability checks are enforced in design / implementation?
    • Open Source components usage – if open source is being used, is there a process for choosing components, etc?
    • Inter-component Interface design and control – assuming the middleware is a collection of components, how are component interfaces managed?
    • Documentation – Requirements, Design – is there a documentation standard in place? Middleware design, component design, etc?
    • Process for introducing change to architecture? New components to implement new features? – How are new components introduced to the system?
    • API ICD Controls – how do we manage API changes? (see the deprecation sketch after this list)
      • Is there an SDK?
      • Quality of API Documentation? Is it usable, self-explanatory, auto-generated?
      • New APIs must be communicated to application developers
      • Changes in APIs must be controlled
      • APIs must be backwards compatible
      • API Deprecation rules
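To illustrate the API change-control and deprecation points above, here's a minimal sketch of one way to keep a deprecated API working (backwards compatibility) while actively warning integrators to migrate. Python is used purely for brevity – a native STB SDK would more likely rely on compiler facilities such as GCC's deprecated attribute – and all function names are hypothetical.

# Minimal sketch of API deprecation control: the old entry point keeps
# working, but emits a warning pointing callers at its successor.
# Hypothetical names; a C SDK would use __attribute__((deprecated)).
import functools
import warnings

def deprecated(replacement):
    """Mark an SDK call as deprecated, naming its successor."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated; use %s instead" % (func.__name__, replacement),
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

def tune_by_triplet(onid, tsid, sid):
    """New, preferred entry point (body omitted in this sketch)."""

def lookup_triplet(channel):
    return (1, 1, channel)  # placeholder mapping for the sketch

@deprecated(replacement="tune_by_triplet")
def tune_by_channel_number(channel):
    # Old API kept alive for existing applications.
    return tune_by_triplet(*lookup_triplet(channel))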

  • Testing the STB (Middleware/UI/Drivers) stack – what is the testing strategy employed?
    • Depending on the architecture, assuming it's a component-based architecture – do we have a continuous build and integration system in place, essentially for regression testing:
      • Component-based testing – unit tests on the component, using mocks for other components, assuming interfaces are controlled (see the sketch after this list)
      • Component-group testing – where you collect related components together and test them as a unit, e.g. the PVR engine can be tested in isolation
      • Middleware stack testing – test the middleware as a whole, i.e. the collection of components forming the MW
    • Regression Suite – does one exist, how is it tested, and how often is it tested?
    • Spread of STB hardware platforms used for testing
    • Use of a Simulator
    • How often do we check for regression – daily, weekly, just before release?
    • How is new feature development controlled – i.e. new features to be delivered without causing regression in previously working functionality
    • Is there a concept of a middleware integration team?
    • Middleware development is separate to middleware integration, separate teams
    • Development teams do component development, and component unit testing
    • Middleware integration team takes development components and integrates them – testing new features against the last stable middleware, as well as testing new functionality
    • Middleware integration team consists of technical people who can code – best practice is to use a middleware test harness that exercises the middleware, not just through the external Application APIs, but by speaking directly to the component APIs themselves
    • Is there a separate team owning the typical STB Hardware Abstraction Layer / Driver Interface?
    • Who defines the HAL spec / interface?
    • Change control procedures in place to support HAL spec?
    • Testing of the HAL?
      • Is there a test harness that validates new platform ports?
      • Is there a separate Platform Integration team?
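As a concrete illustration of the component-based testing point in the list above: the component under test is exercised in isolation, with its neighbouring components replaced by mocks – which is only possible if the inter-component interfaces are controlled. The sketch below uses Python's unittest.mock to keep it short (for native C components a framework like CUnit plays the same role); the PVR scheduler and tuner components are hypothetical.

# Minimal sketch of component-based testing with mocked neighbours.
# Hypothetical example: a PVR scheduler is tested in isolation by
# mocking the tuner component it depends on.
import unittest
from unittest import mock


class PvrScheduler:
    """Toy component: books a recording if a tuner can be reserved."""
    def __init__(self, tuner):
        self.tuner = tuner

    def book_recording(self, service_id):
        if not self.tuner.reserve(service_id):
            return False
        self.tuner.start_streaming(service_id)
        return True


class PvrSchedulerTest(unittest.TestCase):
    def test_booking_fails_when_no_tuner_free(self):
        tuner = mock.Mock()
        tuner.reserve.return_value = False   # simulate a tuner conflict
        self.assertFalse(PvrScheduler(tuner).book_recording(service_id=101))
        tuner.start_streaming.assert_not_called()

    def test_booking_streams_when_tuner_reserved(self):
        tuner = mock.Mock()
        tuner.reserve.return_value = True
        self.assertTrue(PvrScheduler(tuner).book_recording(service_id=101))
        tuner.start_streaming.assert_called_once_with(101)


if __name__ == "__main__":
    unittest.main()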

  • Supporting multiple customers and build profiles
    • How does the architecture support the needs of different customer profiles: Satellite / DTT / Cable / IP ?
    • For the same customer with different project requirements, how is the codebase managed?
    • Build, Release, Configuration Management and branching philosophy?

  • Multi-site development
    • How is multi-site development managed?
    • What configuration management system is used? Does it support multi-site development well?
    • Are off-shore teams used as support arms, or do actual development?
    • Do off-shore teams have any ownership over components?
    • How is communication managed with off-shore teams?
  • Planning releases
    • How are new releases planned?
    • How is work assigned to development teams?
    • What is the usual process for planning / starting new development?
    • How much time is spent designing new features?
    • Is testing included as part of the initial design review?
    • Are new features delivered back to the main product code tree in isolation or in big chunks?
    • How much time is allocated to testing on reference platform?
    • How much time is allocated to testing on customer production platforms?
    • How is the backlog prioritized?
    • How long is the development cycle?
    • What are the exit criteria for development?
    • How long is the test cycle?
    • What are the exit criteria for testing?
    • Is Middleware integration part of planning?
    • Is full-stack integration part of planning?
    • How are defects planned?
    • Does the plan cater for stabilization period for new features?
    • Who decides on the branching strategy in the run-up to release?
    • Are new tests identified as part of the planning process?

  • Defect Management
    • Is there a defect management process in place?
    • How are defects managed and factored into the development process?
    • How often is root cause analysis performed?
    • How is quality being managed and tracked internally?
    • What defect tracking tool is being used?
    • What metrics can be generated to gauge the quality of all components in the stack? (a sketch follows this list)
      • Average submission rate
      • Closure rate
      • Time it takes to close the average Showstopper
      • Most suspect components
      • Most complex components
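Most of the metrics above are cheap to derive once defects can be exported from the tracking tool. Below is a minimal sketch of how the first few might be computed from a CSV export; the column names (opened, closed, severity, component) are hypothetical and would need adapting to whatever your defect tracker actually produces.

# Minimal sketch: derive defect-quality metrics from a tracker export.
# Column names are hypothetical; adapt to your defect tracking tool.
import csv
from collections import Counter
from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d") if ts else None

with open("defects.csv", newline="") as f:
    defects = list(csv.DictReader(f))

weeks = 26  # reporting window, e.g. half a year

opened = [d for d in defects if parse(d["opened"])]
closed = [d for d in defects if parse(d["closed"])]
print("avg submission rate: %.1f defects/week" % (len(opened) / weeks))
print("avg closure rate:    %.1f defects/week" % (len(closed) / weeks))

# Mean time-to-close, for showstoppers only.
ttc = [(parse(d["closed"]) - parse(d["opened"])).days
       for d in closed if d["severity"] == "showstopper"]
if ttc:
    print("mean showstopper time-to-close: %.1f days" % (sum(ttc) / len(ttc)))

# "Most suspect components" = those accumulating the most defects.
by_component = Counter(d["component"] for d in defects)
print("most suspect components:", by_component.most_common(3))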

  • Development practices – industry-standard coding practices should be in place
    • Coding standard
    • Architecture rules – e.g. system calls to avoid, avoiding strcpy, etc.? (a sketch of a simple rule scanner follows this list)
    • Is the CUnit test framework, or another suitable framework, used for native component testing?
    • Is JUnit being used?
    • Do component unit tests map to component requirements?
    • Do middleware tests map to system requirements or use cases?
    • Are any of the following code quality metrics being looked at:
      • Zero compiler warnings
      • Zero MISRA warnings for native C components
      • Which Java best practices are being used?
      • Code Coverage:
        • Line coverage tools
        • Branch Coverage
      • Code complexity
      • Function points
      • Other static analysis tools like Klocwork
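For the architecture-rules item above, one cheap audit check is a scanner that flags banned calls across the codebase. The sketch below is illustrative only – real enforcement belongs in static analysis (MISRA checkers, Klocwork, etc.), which understands the code rather than just matching text – and the banned list here is just an example.

# Minimal sketch: audit a C codebase for banned calls ("architecture
# rules" such as avoiding strcpy/sprintf). Illustration only; a real
# audit would lean on proper static analysis tools.
import pathlib
import re

BANNED = {
    "strcpy":  "unbounded copy; prefer a bounded copy such as strncpy",
    "sprintf": "unbounded format; prefer snprintf",
    "gets":    "no bounds checking at all",
}
pattern = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(root):
    for path in pathlib.Path(root).rglob("*.[ch]"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            for call in pattern.findall(line):
                print("%s:%d: %s() - %s" % (path, lineno, call, BANNED[call]))

if __name__ == "__main__":
    scan("src")  # hypothetical source tree
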
  • Development organisational structure
    • Size of development teams
    • Structure of development teams – is it based on architecture components, i.e. notion of component teams, with component owners?
    • If using multisite development, what is the line management structure – is the team a matrix, with virtual management teams?
    • Do you have separate groups for:
      • Development: Separate UI development, Separate Middleware team?
      • Integration – is there a concept of middleware integration team?
      • Platform – is there a concept of a team specialising in low level HAL/Kernel/Drivers/Platform
    • What are the defined roles and responsibilities:
      • Product Architect – the person owning the product architecture, maintaining the integrity of the philosophy, keeping everyone in check?
      • Middleware Component Architect – do you have people owning specific components in the middleware, ensuring the interfaces are respected? E.g. the PVR engine usually has a separate architect
      • Lead Developer / Component Owner – someone responsible for delivering component to spec, owns the internal design and implementation of the component?
      • Engineers – what is the mix of people and skill-sets across the organisation?