Thursday, 15 December 2011

SI/Development Project/Process Audits...


Earlier this year I wrote about setting up the right Roles & Responsibilities for managing a typical DTV project, covering the Architecture & Project Team requirements. In this post, I'd like to share my experiences with Project Audits.

The scope of Project Audits is in itself a vast topic, mostly covering the Project Management aspects of the project, along with ensuring that effective Quality Management processes are in place, that proper Project Governance is enforced, and so on. The IEEE SWEBOK dedicates a chapter to Software Quality, a knowledge area that includes Project Audits, and summarises the process as:
“The purpose of a software audit is to provide an independent evaluation of the conformance of software products and processes to applicable regulations, standards, guidelines, plans, and procedures” [IEEE1028-97]. The audit is a formally organized activity, with participants having specific roles, such as lead auditor, another auditor, a recorder, or an initiator, and includes a representative of the audited organization. The audit will identify instances of nonconformance and produce a report requiring the team to take corrective action.

While there may be many formal names for reviews and audits such as those identified in the standard (IEEE1028-97), the important point is that they can occur on almost any product at any stage of the development or maintenance process.

What to audit in DTV Projects?
Again, with a bias towards STB Software Development delivery projects, audits are introduced to review two main areas:
  • STB Development / Test Processes across the stack (Drivers, Middleware, Applications)
  • STB System Integration Processes
If you're a product vendor supplying any of the STB components, you might want to review and assess your own development/test processes to gauge how well you're doing against industry best practices. It's also good for your portfolio when pitching for new business to potential operators.

If you're a TV operator dependent on product vendors supplying components into your value chain, you need to ensure that those vendors are indeed delivering quality components & systems. Typically (and this can vary with the type of contract in place), the Operator (customer) provides the Product/System requirements to Vendors; the Vendors implement and test their components against the spec and produce test results (self-assessments). This makes it difficult for the Operator to dig into the detail, find out exactly how the results came about, or gauge the quality of the underlying development processes, typically until it's too late in the project life-cycle. This can of course be remedied by building into the contracts the right to audit the vendor's various processes at any stage of the project life-cycle.

Some operators outsource the whole testing and verification effort to third-party test houses. These test houses, depending on the strength of the commercial contracts in place, could very well take complete ownership of test & quality management, and enforce audits on incumbent component providers.

As noted in my previous posts, more and more Operators/Broadcasters are taking ownership of product development, testing and integration. Take, for instance, the task of STB System Integration. Whilst there are well-known companies like S3 & Red Embedded that specialise in STB SI, operators like Sky, DirecTV, Foxtel and Multichoice have dipped their toes into the water, attempting to own the STB SI activity in-house. Incidentally, there are more players on the STB SI front: STB manufacturers like Pace and UEC, as well as service houses in India like Wipro & Tata Elxsi. NDS Professional Services is also a well-known player in the System Integration field.

In my rather short stint of 11 years on STB projects, I've found the world of DTV to be quite small; from my experience, the popular companies of choice for SI are S3, Red Embedded & NDS.

My current company, being a DTV Operator/Broadcaster, runs its own development & STB Integration projects. We're relatively new in this game: we've built up a good team of people, making every effort to source the right people with the right skill-set (which is really challenging in South Africa), and we've been through a few rounds of Development & SI using third-party vendors supplying Middleware & Drivers. So we decided now was as good a time as any to perform a self-diagnostic: a review and audit of our current processes, focussing specifically on STB System Integration and incoming Supplier Deliverables. Although we have enough knowledge in the team to perform a self-assessment ourselves, and despite having learnt from and corrected a lot of mistakes in past projects, we opted for a professional consultancy to do the audit, and chose S3.

Why we chose S3
S3, despite being a relatively small company based in Ireland, has been involved with TV technology for many years and is well respected in the DTV community for its specialist design services & system integration knowledge. They've leveraged the wealth of past project experience to produce useful products like S3 StormTest & Decision Line, an automated STB testing system that has only increased in popularity since its inception (and which we're currently using, by the way). The S3 Engage Portal is another useful product for managing and co-ordinating a complicated DTV SI project. S3 are also known to be frank & objective, offering feedback and analysis that is open, without hiding the nasty details. S3 have been chosen as SI by some big players in Europe, and they're also being used on some other projects of ours. The choice was also based on the strength of previous relationships, despite the fact that I previously worked at S3! The world of DTV is small, and the other companies mentioned above are also competent and capable.

What areas do you focus on for an SI audit?
Of course I can't share the specifics of the S3 Audit process with you, but if you're interested in learning more, please contact S3, or speak to this guy.
In a nutshell, S3 shares a model of SI that is well known to the industry:
  • Incremental / Iterative approach, built on a strong foundation of domain expertise, using experienced System Integrators. The cornerstones that deliver an effective SI process rely on the following:
    • Strong Technical Expertise
    • Strong Project Management
    • Usage of Advanced Tools & Infrastructure to support the process
    • Strong Quality Assurance Practices
S3 have been kind enough to allow me to publish a slide giving their overview of the SI process:
S3's Integration Model: SI is a continuous cycle once all Components are in a suitable state of completeness

Based on the above cornerstones, the audit focuses on 48 topics of concern that largely fall into the following categories. Again, these are generally the best practices advocated by IEEE SWEBOK Chapter 11 on Software Quality, which shows that S3 appreciate industry best practices:
  • Project Management
  • Co-ordination & Communications
  • Requirements Management
  • Defect Management
  • Testing
  • Quality Assurance
  • Supplier Management
  • Release Delivery
  • Resourcing
  • Software Architecture
  • Integration & Configuration Management
  • Software Development

My Own List of Audit Criteria
Since this knowledge comes from my own past experience, I've no problem sharing the information with you. If you want more details, feel free to contact me, or leave a comment. The list below is something I put together in advance of the audit we've just undertaken.

If I were doing an STB audit, this is what I'd be interested in:
  • Foundations of the STB Component (Middleware/UI) architecture design
    • Design philosophy – apart from being MHP/Java/MHEG/etc compliant, what is the design ethos behind the architecture, i.e. does it scale?
    • Design patterns – what abstractions are used to ensure flexibility with changing requirements without major impact to design?
    • Performance requirements – is the product being architected with performance considerations in mind?
    • Security considerations – are security checks in place to prevent buffer overruns, etc.?
    • Stability considerations – how do we ensure stability checks are enforced in design / implementation?
    • Open Source components usage – if open source is being used, is there a process for choosing components, etc?
    • Inter-component Interface design and control – assuming the middleware is a collection of components, how are component interfaces managed?
    • Documentation – Requirements, Design – is there a documentation standard in place? Middleware design, component design, etc?
    • Process for introducing change to the architecture – how are new components introduced to the system to implement new features?
    • API ICD Controls – how do we manage API changes? (a small sketch follows this list)
      • Is there an SDK?
      • Quality of API Documentation? Is it usable, self-explanatory, auto-generated?
      • New APIs must be communicated to application developers
      • Changes in APIs must be controlled
      • APIs must be backwards compatible
      • API Deprecation rules
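
To make the API/ICD control questions above a little more concrete, below is a minimal sketch of the sort of interface header I'd expect a middleware component to publish. All the names (pvr_engine.h, pvr_record_start and so on) are purely illustrative, not taken from any particular stack:

/* pvr_engine.h - hypothetical inter-component interface (ICD) header.
 * Version macros let integrators detect incompatible changes at build time,
 * and a deprecation attribute warns application developers before removal. */
#ifndef PVR_ENGINE_H
#define PVR_ENGINE_H

#include <stdint.h>

/* Bump MAJOR on breaking changes, MINOR on backwards-compatible additions. */
#define PVR_ENGINE_API_MAJOR 2
#define PVR_ENGINE_API_MINOR 1

typedef struct pvr_session pvr_session_t;   /* opaque handle keeps the ABI stable */

/* Current API: start a recording on the given tuner. */
int pvr_record_start(pvr_session_t *session, uint32_t tuner_id);

/* Old API kept for one release for backwards compatibility, flagged so that
 * callers get a compile-time warning (GCC/Clang attribute). */
int pvr_record_begin(pvr_session_t *session) __attribute__((deprecated));

#endif /* PVR_ENGINE_H */

An SDK would bundle headers like this together with generated documentation and a change log, which is what the backwards-compatibility and deprecation questions above are really probing.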

  • Testing the STB (Middleware/UI/Drivers) stack – what is the testing strategy employed?
    • Depending on the architecture, assuming it’s a component-based architecture – do we have a continuous build and integration system in place, essentially for regression testing:
      • Component-based testing – unit tests on the component, using mocks for other components, assuming interfaces are controlled (see the sketch after this list)
      • Component group testing – where you collect related components together and test as unit, e.g. PVR engine can be tested in isolation
      • Middleware stack testing – test the middleware as a whole, i.e. collection of components forming the MW
    • Regression Suite – does one exist, how is it tested, and how often is it tested?
    • Spread of STB hardware platforms used for testing
    • Use of a Simulator
    • How often do we check for regression – daily, weekly, just before release?
    • How is new feature development controlled – i.e. new features to be delivered without causing regression in previous working functionality
    • Is there a concept of a middleware integration team?
    • Middleware development is separate to middleware integration, separate teams
    • Development teams do component development, and component unit testing
    • Middleware integration team takes development components and integrate – test new features against last stable middleware as well as test new functionality
    • Middleware integration team consists of technical people who can code – best practice is to use a middleware test harness that exercises the middleware, not just through the external Application APIs, but speaking directly to the component APIs themselves
    • Is there a separate team owning the typical STB Hardware Abstraction Layer / Driver Interface?
    • Who defines the HAL spec / interface?
    • Change control procedures in place to support HAL spec?
    • Testing of the HAL?
      • Is there a test harness that validates new platform ports?
      • Is there a separate Platform Integration team?
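
As an illustration of the component-based testing point above, here's a minimal CUnit sketch in which the component under test is linked against a mock of a dependency, behind its controlled interface. All of the names (epg_cache_has_events, si_get_event_count, the canned values) are hypothetical:

/* epg_cache_test.c - sketch of component-level unit testing with CUnit,
 * mocking a dependency behind its controlled interface. */
#include <CUnit/Basic.h>

/* Interface the component under test depends on (normally in si_filter.h). */
int si_get_event_count(int service_id);

/* Mock implementation linked into the test build instead of the real
 * SI filter component; it returns a canned value we can assert against. */
static int mock_event_count = 0;
int si_get_event_count(int service_id)
{
    (void)service_id;
    return mock_event_count;
}

/* Component under test (normally in epg_cache.c): reports whether a
 * service has any schedule data available. */
static int epg_cache_has_events(int service_id)
{
    return si_get_event_count(service_id) > 0;
}

static void test_has_events_when_si_reports_some(void)
{
    mock_event_count = 12;
    CU_ASSERT_TRUE(epg_cache_has_events(101));
}

static void test_no_events_when_si_reports_none(void)
{
    mock_event_count = 0;
    CU_ASSERT_FALSE(epg_cache_has_events(101));
}

int main(void)
{
    CU_initialize_registry();
    CU_pSuite suite = CU_add_suite("epg_cache", NULL, NULL);
    CU_add_test(suite, "events present", test_has_events_when_si_reports_some);
    CU_add_test(suite, "no events", test_no_events_when_si_reports_none);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}

The same pattern scales up to component-group and middleware-stack testing: swap the mocks for the real components and drive the same suites from the continuous build.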

  • Supporting multiple customers and build profiles
    • How does the architecture support the needs of different customer profiles: Satellite / DTT / Cable / IP? (a build-profile sketch follows this list)
    • For the same customer with different project requirements, how is the codebase managed?
    • Build, Release, Configuration Management and branching philosophy?
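
On the build-profile question, one common approach (sketched below with made-up PROFILE_* flag names, so treat it as an assumption rather than a description of any particular product) is to keep a single codebase and have the build system select the customer profile at compile time, rather than branching the source per customer:

/* frontend_init.c - hypothetical sketch of one codebase serving several
 * customer profiles (satellite / DTT / cable / IP) through build-time
 * feature flags. A flag such as PROFILE_DVB_S2 would be supplied by the
 * build system, e.g.  cc -DPROFILE_DVB_S2 frontend_init.c */
#include <stdio.h>

void frontend_init(void)
{
#if defined(PROFILE_DVB_S2)
    puts("init: DVB-S2 satellite tuner");
#elif defined(PROFILE_DVB_T2)
    puts("init: DVB-T2 terrestrial tuner");
#elif defined(PROFILE_DVB_C)
    puts("init: DVB-C cable tuner");
#elif defined(PROFILE_IPTV)
    puts("init: IP streaming front-end, no tuner");
#else
#error "No customer profile selected at build time"
#endif
}

int main(void)
{
    frontend_init();
    return 0;
}

Keeping per-customer differences in build configuration rather than in long-lived branches keeps merges small, which is really what the branching-philosophy question is getting at.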

  • Multi-site development
    • How is multi-site development managed?
    • What configuration management system is used? Does it support multi-site development well?
    • Are off-shore teams used as support arms, or do actual development?
    • Do off-shore teams have any ownership of components?
    • How is communication managed with off-shore teams?
  • Planning releases
    • How are new releases planned?
    • How is work assigned to development teams?
    • What is the usual process for planning / starting new development?
    • How much time is spent designing new features?
    • Is testing included as part of the initial design review?
    • Are new features delivered back to the main product code tree in isolation or in big chunks?
    • How much time is allocated to testing on reference platform?
    • How much time is allocated to testing on customer production platforms?
    • How is the backlog prioritized?
    • How long is the development cycle?
    • What are the exit criteria for development?
    • How long is the test cycle?
    • What are the exit criteria for testing?
    • Is Middleware integration part of planning?
    • Is full-stack integration part of planning?
    • How are defects planned?
    • Does the plan cater for a stabilization period for new features?
    • Who decides on the branching strategy in the run-up to release?
    • Are new tests identified as part of the planning process?

  • Defect Management
    • Is there a defect management process in place?
    • How are defects managed and factored into the development process?
    • How often is root cause analysis performed?
    • How is quality being managed and tracked internally?
    • What defect tracking tool is being used?
    • What metrics can be generated to gauge the quality of all components in the stack? (a rough sketch follows this list)
      • Average submission rate
      • Closure rate
      • Time it takes to close the average Showstopper
      • Most suspect components
      • Most complex components
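
To illustrate the kind of metrics I have in mind, here's a rough sketch that derives a closure rate and an average days-to-close for showstoppers from a handful of made-up defect records; in practice the raw data would come from whatever defect tracking tool is in use:

/* defect_metrics.c - sketch of simple quality metrics pulled from a defect
 * tracker export; the record layout and figures are illustrative only. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *severity;   /* "showstopper", "major", "minor", ... */
    int days_open;          /* days between submission and closure */
    int closed;             /* 1 if closed, 0 if still open */
} defect_t;

int main(void)
{
    const defect_t defects[] = {
        { "showstopper", 3,  1 },
        { "major",       10, 1 },
        { "showstopper", 7,  0 },
        { "minor",       2,  1 },
    };
    const int total = (int)(sizeof defects / sizeof defects[0]);

    int closed = 0, ss_closed = 0, ss_days = 0;
    for (int i = 0; i < total; i++) {
        if (!defects[i].closed)
            continue;
        closed++;
        if (strcmp(defects[i].severity, "showstopper") == 0) {
            ss_closed++;
            ss_days += defects[i].days_open;
        }
    }

    printf("closure rate: %d of %d defects closed\n", closed, total);
    if (ss_closed > 0)
        printf("average days to close a showstopper: %.1f\n",
               (double)ss_days / ss_closed);
    return 0;
}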

  • Development practices – industry-standard coding practices should be in place
    • Coding standard
    • Architecture rules – e.g. system calls to avoid, avoid using strcpy, etc.? (a small sketch follows this list)
    • Is the CUnit test framework, or another suitable framework, used for native component testing?
    • Is JUnit being used?
    • Do component unit tests map to component requirements?
    • Do middleware tests map to system requirements or use cases?
    • Are any of the following code quality metrics being looked at:
      • Zero compiler warnings
      • Zero MISRA warnings for native C components
      • Which Java best practices are being used?
      • Code Coverage:
        • Line coverage tools
        • Branch Coverage
      • Code complexity
      • Function points
      • Other static analysis tools like Klocwork
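
As a tiny example of the "architecture rules" point above, the sketch below shows the sort of thing a coding-standard audit looks for: an unbounded strcpy() replaced with an explicitly bounded copy (snprintf is just one illustrative alternative; the function and buffer names are made up):

/* copy_name.c - bounded string copy instead of strcpy(). */
#include <stdio.h>
#include <string.h>

#define NAME_LEN 32

/* Risky pattern a coding standard would ban: strcpy(dest, src) overruns
 * dest whenever src is longer than the buffer. */

/* Bounded copy: always NUL-terminated, never writes past NAME_LEN bytes. */
static void set_name(char dest[NAME_LEN], const char *src)
{
    snprintf(dest, NAME_LEN, "%s", src);
}

int main(void)
{
    char name[NAME_LEN];
    set_name(name, "A service name that could be longer than the buffer allows");
    printf("stored: \"%s\" (length %zu)\n", name, strlen(name));
    return 0;
}
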
  • Development organisational structure
    • Size of development teams
    • Structure of development teams – is it based on architecture components, i.e. a notion of component teams with component owners?
    • If using multisite development, what is the line management structure – is the team a matrix, with virtual management teams?
    • Do you have separate groups for:
      • Development: Separate UI development, Separate Middleware team?
      • Integration – is there a concept of middleware integration team?
      • Platform – is there a concept of a team specialising in low-level HAL/Kernel/Drivers/Platform?
    • What are the defined roles and responsibilities:
      • Product Architect – the person owning the product architecture, maintaining the integrity of the design philosophy and keeping everyone in check?
      • Middleware Component Architect – do you have people owning specific components in the middleware, ensuring the interfaces are respected? For example, the PVR engine usually has its own architect.
      • Lead Developer / Component Owner – someone responsible for delivering component to spec, owns the internal design and implementation of the component?
      • Engineers – what is the mix of people and skillsets across the organisation?



