Tuesday, 7 February 2012

White Paper Preview: Technologies for Internet TV



I am working with a friend on a joint white paper covering popular technologies around Internet TV. We've been working on this on & off for a good few months, in between work, family commitments and what little spare time we have. The idea is to share our skills & experiences through white papers, and hopefully attract consulting offers along the way. I'm releasing this draft work-in-progress to act as a motivator for us to actually finish the white paper in 2012:

If you stumble across this post and are interested in this topic, please get in touch.

-----------------------------------------oOo-------------------------------------------------

General Overview of IPTV Technologies, covering Online Portals with media consumption via PC, Internet Streaming technologies, Client implementation (PC, STB, TabletPC), Open Source tools & Home Networking


Increasingly, Telcos and Broadcast operators are looking to grow their revenue streams by providing multiple mechanisms for customers to consume entertainment content over the Internet. These businesses usually conclude that the way to achieve this is to establish a web (or online on-demand) presence by creating a platform for delivering content over the Internet.

The road to success lies in solving a multitude of issues in a variety of domains including marketing, sales, legal, and technical. This white paper focuses on the technology choices and challenges for implementation, providing a discussion point around a high-level, simplistic solution intended to highlight the challenges these businesses face when creating the solutions.

So, starting from the premise that a business decides to establish an online presence and wishes to provide an entertainment video streaming solution to augment the bottom line, the following issues need to be resolved:
  • Rights to Content for distribution
  • Marketing the solution against other vendors
  • Technical solution to implement the portal and distribution mechanism
  • Support for customers
Rights to Content for distribution
This is key to the success of the project: if the telco is unwilling to acquire the right mix of content for its targeted subscriber base, the project is likely to fail.
The fundamental issue in this area is persuading content providers to allow distribution of their content over the Internet, against a backdrop of security concerns where it seems nothing is safe from hacking attacks. Content providers look for assurance that their content will not be compromised and redistributed without them realising revenue.

This usually requires integrating a Digital Rights Management (DRM) solution into the product, such that the content provider is satisfied the protection mechanism is secure enough for them to entrust their content to the solution.

Choosing a DRM solution is an activity that involves thorough planning right at the outset of designing the architecture, as it is a fundamental component of the system architecture, from back-end distribution through to consumption on the end-consumer device. A number of DRM solutions exist in the marketplace; this paper will focus on the popular (not necessarily most secure) solutions including Microsoft Windows Media DRM, Apple Streaming DRM, NDS Secure VG DRM, …

Marketing the solution against other vendors

Since businesses need to differentiate in the competition space, focus is made on a variety of factors such as price, product features (integration with devices, social networks, widgets etc) and type of content (niche versus premium) amongst others.
Companies with existing Portal solutions (TODO: Give example references) generally adopt a common marketing theme, centred around on-demand, instant access to a variety of content and the consumer’s freedom to watch “whatever you want, whenever you want, on any device, anywhere”. Of course this is music to the consumer’s ears, but the proof of the pudding lies in the actual implementation - realising this marketing objective means overcoming a number of tough technical challenges, as we’ll discuss later in this paper.

Technical solution to implement the portal and distribution mechanism

The technical challenge, in contrast to the above two areas, is a simpler problem to resolve. It relates to creating the platform, the infrastructure and the know-how across a wide variety of issues that encompass the following areas:
  • Online Portal
  • Subscriber Management
  • Payment System
  • Content Management including Acquisition, Processing and Distribution
  • Content Consumption by devices
  • Support Systems
  • Regional Infrastructure

Online Portal

This is the gateway into the offering by the business; the user experience greatly drives the success or failure of the system. The Online Portal must provide features for:
  • authenticating the user;
  • managing the user’s profile;
  • managing billing information;
  • integrating with social networks;
  • allowing the user to search for content;
  • placing recommended content for the user based on their previous usage; and
  • YouTube-style features that let the user follow trends and see what other users of the system are watching, have recommended or commented upon, etc. In essence this is the public face of the content management system, integrated with the billing and distribution systems.
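As a toy illustration of the recommendation placement mentioned above, the sketch below ranks unwatched titles by the genres a user has watched most often - a deliberately naive heuristic; all titles, genres and field shapes are invented for illustration:

```python
from collections import Counter

def recommend(history, catalogue, limit=3):
    """Rank unwatched catalogue titles by how often their genre
    appears in the user's viewing history (naive popularity-by-genre)."""
    watched = {title for title, _ in history}
    genre_weight = Counter(genre for _, genre in history)
    candidates = [(t, g) for t, g in catalogue if t not in watched]
    candidates.sort(key=lambda tg: genre_weight[tg[1]], reverse=True)
    return [t for t, _ in candidates[:limit]]

history = [("Film A", "drama"), ("Film B", "drama"), ("Film C", "sport")]
catalogue = [("Film D", "drama"), ("Film E", "sport"), ("Film F", "news")]
print(recommend(history, catalogue))  # -> ['Film D', 'Film E', 'Film F']
```

A real recommender would of course weigh recency, collaborative signals and editorial promotion, but the placement logic in the portal follows this same shape: score the catalogue per user, render the top few.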


This is an intensive area requiring expertise in creating a very usable interface for all types of users. Along with this, the main feature is to allow consumption of the content, typically based around a Flash player (TODO Reference to Flash), providing a compelling experience in comparison to the user’s TV.
There is a tremendous amount of integration work required, typically focused on:
  • adaptation of the user experience across all web-browser platforms;
  • the content player that renders the content based on the content’s format;
  • what audio/video players are available;
  • what devices are required to be supported;
  • what features are to be supported, such as the protocol for delivery of content;
  • support for graceful degradation of video quality as the bandwidth available to the user fluctuates with usage patterns in the user’s home;
  • the bandwidth available to the user;
  • the pathway between the source of the content on the Internet and the user’s consumption device;
  • additionally, other factors, including the DRM technology in use, dictate how the interaction occurs.
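To illustrate the bandwidth-degradation point above, here is a minimal sketch of the bitrate-selection decision an adaptive player makes; the profile ladder and the 80% safety margin are illustrative assumptions, not recommendations:

```python
def pick_profile(measured_kbps, profiles, headroom=0.8):
    """Choose the highest-bitrate rendition that fits within a safety
    margin of the measured throughput; fall back to the lowest."""
    affordable = [p for p in profiles if p <= measured_kbps * headroom]
    return max(affordable) if affordable else min(profiles)

PROFILES = [400, 800, 1500, 3000, 6000]  # kbps renditions (illustrative)
print(pick_profile(5000, PROFILES))  # -> 3000
print(pick_profile(300, PROFILES))   # -> 400
```

Real players re-run a decision like this continuously per segment, also weighing buffer level and switch-down hysteresis, but the core trade-off (quality versus sustainable throughput) is the same.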


We shall aim to discuss each of the above points in further detail in future sections.

Subscriber Management

This domain needs to be well understood by the business. It broadly encompasses the following areas:
  • uniquely identifying the user, the device (including the limitations of distribution profiles) being used for consumption;
  • business model restrictions based on subscription tier, promotions, discounts, billing information etc;
  • the management system integrates with the online portal, the content management system and the payment system to realise the revenue.

Payment System
This is typically a legacy or third-party system that the operator utilises, and it needs to be integrated with the online system.
<TODO add more info>

Content Management including Acquisition, Processing and Distribution
This is a significant aspect of setting up the process flow and delivery chain for content consumption. Depending on the investment, the operator budgets for a solution that needs to be defined to encompass the following:
  • Setting up infrastructure to acquire content from diverse content providers
  • Automating the ingest of content in to the system
  • Preparing the content for consumption on the variety of devices the platform intends to support, ensuring the correct DRM aspects are catered for on each type of end-user device
  • Managing the content through its life cycle which may entail marketing aspects such as promotions
  • Delivering the content through a variety of delivery mechanisms requiring integration with third party vendors
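As a rough sketch of the content-preparation step, the snippet below builds ffmpeg command lines for a hypothetical three-rendition encoding ladder; the bitrates, resolutions and file names are assumptions for illustration, and a production transcode farm involves much more (audio tracks, subtitles, DRM packaging, QC):

```python
def transcode_command(source, profile):
    """Build one ffmpeg command line for a single output rendition."""
    return [
        "ffmpeg", "-i", source,
        "-c:v", "libx264", "-b:v", f"{profile['kbps']}k",
        "-vf", f"scale=-2:{profile['height']}",
        "-c:a", "aac", "-b:a", "128k",
        f"{profile['name']}.mp4",
    ]

# Illustrative ladder: one command per device-facing rendition
LADDER = [
    {"name": "low",  "height": 360,  "kbps": 800},
    {"name": "mid",  "height": 720,  "kbps": 2500},
    {"name": "high", "height": 1080, "kbps": 5000},
]

for p in LADDER:
    print(" ".join(transcode_command("master.mov", p)))
```

The point of generating commands rather than hand-running them is automation: the ingest workflow fires one job per rendition per asset, which is exactly the "automating the ingest" bullet above.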


Content Acquisition

This area involves a plethora of decisions that need to be made about the content and how it becomes available on the platform and ultimately how it gets delivered. Issues to resolve include:
  • What format the content (audio, video) is provided in by the content provider?
  • What metadata is required for presenting to the user on the online portal or during playback and how that is transformed from the input all the way to the point of consumption?
  • Whether encoding services (software and or hardware) are required to transcode the content to the end device format?
  • Whether features such as adaptive bitrate streaming are required?
  • Whether support for live, restart, catch-up, on-demand playback of content is required?
  • What third parties to select and what level of integration is required?
  • What licensing issues need to be resolved with content owners?
  • What licensing is required for encoding to different device formats?
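On the metadata question above, here is a sketch of the kind of transformation that maps a provider's feed onto the portal's internal schema; the field names on both sides are invented for illustration, and real feeds (and their edge cases) are far messier:

```python
def normalise(provider_record):
    """Map one hypothetical provider metadata record onto an internal
    schema the portal and players consume."""
    return {
        "title": provider_record["Title"].strip(),
        "synopsis": provider_record.get("ShortDescription", ""),
        "duration_s": int(provider_record["RunTimeMinutes"]) * 60,
        "rating": provider_record.get("ParentalRating", "unrated"),
    }

raw = {"Title": " Example Film ", "RunTimeMinutes": "95",
       "ParentalRating": "PG"}
print(normalise(raw))
```

Each provider typically needs its own adapter of this shape, which is why the ingest automation bullet earlier matters: the transformation must run unattended for every delivery.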


Content Processing

Once the content has been ingested into the platform, decisions around the following areas need to be resolved that include:
  • What is the life-cycle of the content?
  • What promotion mechanisms are required?
  • What DRM(s) need to be applied and what business models will be applicable?
  • How is the content encrypted (dictated by the DRM(s) in use)?
  • How will the variety of DRM(s) work seamlessly as the user begins consuming content from a variety of devices?
  • What extra metadata needs to be added to support different types of playback and interactivity during consumption?
  • What support is required by the systems administrators to manage outages and issues with the process flow during deployment?
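The content life-cycle question above can be sketched as a simple state machine; the states and allowed transitions here are illustrative, not a prescribed workflow:

```python
# Allowed life-cycle transitions for a content item (illustrative)
TRANSITIONS = {
    "ingested":   {"transcoded"},
    "transcoded": {"encrypted"},
    "encrypted":  {"published"},
    "published":  {"expired"},
    "expired":    set(),
}

class ContentItem:
    def __init__(self, title):
        self.title = title
        self.state = "ingested"

    def advance(self, new_state):
        """Move to a new state, refusing illegal jumps (e.g. publishing
        content that was never encrypted)."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

item = ContentItem("Example Film")
for step in ("transcoded", "encrypted", "published"):
    item.advance(step)
print(item.state)  # -> published
```

Encoding the workflow explicitly like this is also what gives the systems administrators (last bullet above) something to monitor: items stuck in a state are immediately visible.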


Content Distribution

Once the content is available for distribution it needs to be made available for consumption over a variety of mechanisms; issues that needs resolution include:
  • Protocol for delivery on the network (RTP, RTSP, HTTP, RTMP)
  • CDN (Content Delivery Network) support dependent on protocol choice
  • Commissioning of hardware resources to serve content to devices
  • Integration with DRM system(s)
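To make the protocol point concrete, here is a simplified sketch of parsing an HLS-style master playlist to discover the available variant streams. Real playlists carry richer attribute lists (e.g. quoted CODECS values with embedded commas) that this naive parser does not handle:

```python
SAMPLE = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_master(text):
    """Extract (bandwidth, uri) pairs from an HLS master playlist;
    the URI follows each #EXT-X-STREAM-INF line."""
    variants, lines = [], text.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(kv.split("=", 1)
                         for kv in line.split(":", 1)[1].split(","))
            variants.append((int(attrs["BANDWIDTH"]), lines[i + 1]))
    return variants

for bw, uri in parse_master(SAMPLE):
    print(bw, uri)
```

HTTP-based delivery of this kind is what makes commodity CDNs usable (second bullet above): the segments and playlists are plain cacheable HTTP objects, unlike RTP/RTSP streams.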



Content Consumption by devices

Depending on the number of devices being supported, a large amount of effort goes into integrating the Content Player into the OS (Operating System) framework of the end device. This entails a variety of issues:
  • Choice of devices (PCs, Macs, Tablets, Smartphones, Set-Top-Box, SmartTVs)
  • Choice of player (such as Flash/Silverlight/QuickTime/Windows Media Player/Custom), using bespoke or open source software; this typically involves providing plug-ins for web browsers on desktop (Windows/OSX/Linux) platforms and integrating with the mobile/tablet device’s OS framework
  • Integration and maintaining integrity of DRM(s) on devices
  • Fine-tuning user experience of UI and play-back
  • Supporting integration between devices via UPnP (Universal Plug & Play) and DLNA (Digital Living Network Alliance) to support download and play-back between devices


Support for customers

This area revolves around trouble-shooting: resolving issues arising from the plurality of devices and the software versions they run, and creating baselines so the operator knows which issues cause outages and how to rectify them.

Infrastructure capacity

Successfully provisioning an online content portal, with capacity for streaming to multiple devices in the home, relies on fundamental components of infrastructure support being in place. Not only does the business need to ensure that it can deal with system loading (hundreds of thousands of subscribers accessing content simultaneously; protection from attackers intent on crippling the service through denial-of-service attacks), but more importantly the business should have carried out an analysis covering the following areas:
  • Number of subscribers with online access, i.e. able to access the Internet?
  • Intended delivery mechanism for streaming (xDSL, WiFi, 3G, etc)
  • Solutions for supporting a number of different delivery profiles
  • Regulatory compliance - what relationships or conditions are in place to allow/prevent delivery of content over third-party networks (e.g. not all ISPs will allow traffic through)
  • Content Delivery Networks - is enough infrastructure in place to support the demand (Edge servers, etc)
  • Reliability of the underlying delivery network?
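A back-of-envelope capacity calculation of the kind implied above - the subscriber count, peak concurrency and average bitrate here are illustrative assumptions, not figures from any real deployment:

```python
def peak_bandwidth_gbps(subscribers, concurrency, avg_kbps):
    """Rough egress estimate: subscribers * fraction streaming at
    peak * average stream bitrate, converted from kbps to Gbps."""
    return subscribers * concurrency * avg_kbps / 1e6

# 500k subscribers, 10% concurrent at peak, 3 Mbps average stream
print(round(peak_bandwidth_gbps(500_000, 0.10, 3_000), 1))  # -> 150.0 Gbps
```

Even this crude sum makes the CDN bullet above obvious: no single origin serves 150 Gbps, so edge capacity has to be planned against projected peak concurrency, not subscriber count.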

Infrastructure and regulatory concerns might not be urgent topics if the business operates in a developed country where Internet access is now considered a basic utility. In other parts of the world, however, especially in India and many African countries, Internet adoption and the underlying network infrastructure have not reached the maturity levels of, say, the United Kingdom or Germany, where there is high penetration of cable and xDSL. In South Africa, for example, xDSL is not the main means of Internet connectivity; wireless 3G dominates instead - and for such markets the business must have a solution that caters for the variety of access mechanisms in front of the user.

<TODO Example Case Study>

We’ll now walk through a general architectural overview of the components of an end-to-end system, assuming the required infrastructure exists...

Friday, 20 January 2012

Appreciating the Role of System Integration


Last year I started my series of posts around the different aspects of DTV system projects, starting out by describing a typical end-to-end system architecture and the importance of clearly identifying the roles and responsibilities of the architect team (see post: Architect Roles), followed by a write-up on considerations for setting up a programme organisation structure (see post: Managing a DTV Programme) and, more recently, a brief overview of auditing the STB System Integration process (see post: SI/Dev Process Audits).

I still have a few more topics in the pipeline; however, the aim of this post is to highlight the worlds of System Integration. This area can often be overlooked and left open to interpretation, which introduces dangerous assumptions and automatically puts the programme at risk. With a large-scale DTV deployment, the programme must, at the outset, clearly define the boundaries of system testing and establish clear roles for the system integration effort.

This post is especially relevant to TV Operators deciding to become more involved with technical ownership of their value chain, as previously highlighted, more and more companies are taking the plunge into doing their own Software Development, and System Integration. This post is another humble attempt at raising awareness that such operators should proceed with caution, ensuring the full extent and gravity of the undertaking is understood, because System Integration is a significant investment in people (highly experienced technical experts required), equipment (investment in costly infrastructure) and time (starting from scratch, building this competency from the ground up is at least a 5 year maturity period).

Recall the end-to-end architecture originally presented a few posts back:
As highlighted in previous posts, the world of Digital TV consists of Broadcasters / PayTV operators employing one or more component vendors or Technology Service Providers offering specialist services: software development for Headend/STB components, Conditional Access, Billing, Scheduling & Play-out Systems and System Integration services. This is made possible by basing the various technologies on open standards (MPEG/DVB/IETF/IEEE), which allows for creating an end-to-end system consisting of a variety of components from different, independent vendors. However, some TV operators opt to use one supplier, often referred to as the Primary System Integrator, tasked not only with ensuring the end-to-end system is architected correctly, but quite often also with providing the bulk of the system components end-to-end (i.e. the dual role of component supplier and integrator).

System Integration and Architecture are linked; they go hand-in-hand. Although in theory the act of architecting is different from the act of integrating, they are not separate entities: integration drives the architecture. Looking at the above figure showing the different architectural streams, the following becomes clear:
  • Solutions Architect is replaced with Solutions Integrator
  • Headend Architect is replaced with Headend Integrator
  • STB Architect is replaced with STB Integrator

This now exposes us to the worlds and levels of System Integration. A programme needs to define its System Integration strategy at the outset; the success of the programme depends on it.

If you're a broadcaster taking ownership of SI, you need to decide in what capacity you will be involved:
  • STB SI
    • This is generally taking responsibility for integrating the various components of the STB stack (typically Platform Drivers, CA Component, Middleware, UI & Interactive Engines) providing a stable build (STB image).
    • Interfacing with a broadcast/backend is part of this responsibility but STB SI assumes the necessary backend/headend infrastructure is in place
    • Responsible for managing and driving through component deliveries in terms of quality expectations and to some extent project time-lines
    • It is always useful to have access to component source code to aid in investigating and characterising bugs, but this isn't always possible as it has commercial implications with IPR, etc.
  • Headend SI
    • Within the Headend system itself, there are a number of component systems that must come together to implement specific macro-component features, like Conditional Access, Scheduling, On-Demand Catalogue/Playout and IPTV streaming.
    • Generally though, Headend SI is about integrating these headend components, either individually or collectively, proving the interfaces work as designed (i.e. verifying the input/output is as specified by the protocols, essentially obeying the Interface Control Definitions, or ICDs)
    • Test tools might be used as test harnesses, or the STB itself can be used depending on the situation
    • There is a fine line between Headend SI and End-to-End SI or Solution SI, which can often lead to confusion and false assumptions
  • End-to-End SI or Solution Integration
    • This is about testing the solution in a representative environment, as close to real life operations as possible
    • It brings together proven systems at the component level, i.e. STB, Headend and other dependent infrastructure systems like Transport (Encoders, multiplexors, IP networking), Billing & Subscriber Management
    • There is often a dependency or interface with systems that are currently live (live components are considered crown jewels, usually managed by a separate delivery/operations team).
    • Failover, Recovery and Upgrade scenarios are performed as part of this exercise
    • Requires investment in one or two end-to-end labs (Some broadcasters impose this on their major vendors to ensure appropriate end-to-end testing is setup locally at the vendor premises; and some vendors have end-to-end labs set-up on the operator's site)
    • There is often confusion around how this activity differs from System Delivery/Operations - think of it as pre-delivery, the last mile of the programme's development phase.
    • It becomes really important to define an end-to-end SI strategy where the overall DTV value chain is a conglomerate of components provided by different business units offering different services; e.g. a broadcaster might have separate departments, sometimes independent units, each offering a slice of the system (Broadcast, Interactive, VOD, Guide, Operations, R&D) - all these business units must come into alignment at the outset of planning the programme...
In a nutshell, the following diagram tries to illustrate the worlds of SI; the shaded intersections show the typical integration points required. The worlds are interconnected and can't be expected to be tested and qualified in isolation from other systems (during the development phase, unit/component testing is expected, but as soon as we have stable elements to make up the E2E system, it's time to kick off your End-to-End SI activity). E2E testing must start well in advance of launch/go-live - 6-8 months of E2E testing is not unheard of...
Worlds of SI


The world of Delivery/Operations
The absolute last mile in completing a DTV Programme is the actual launch - go live!  This is usually under the control of an operational entity whose sole purpose is to ensure the smooth operational deployment of the systems on live - 99.9999% reliability is expected, so the transition from End-to-End SI to live deployment must have the highest probability of success.  In parallel to the E2E SI activity, there is usually a Delivery/Operations team mirroring the E2E SI effort, making preparations for launch.  This delivery team must be included in the Programme Schedule from the start, so they can make the necessary equipment procurements, network configurations and deployment schedules in advance to ensure a smooth transition.  Practically though, the delivery/operations team might re-use the same people from E2E SI, but it is important to ensure that everyone understands the milestones and phases of this activity.   Forward planning is crucial to avoid ambiguity and confusion down the line, which goes back to my earlier point that the boundaries and scope of each Testing/Integration activity must be clearly understood.

As always, I'm interested in your comments and feedback. 



Thursday, 15 December 2011

SI/Development Project/Process Audits...


Earlier this year I wrote about setting up the right Roles & Responsibilities in managing a typical DTV project, covering the Architecture & Project Team requirements.  In this post, I'd like to share my experiences with the Project Audits.

The scope of Project Audits is in itself a vast topic, mostly around auditing the Project Management aspects of the project, along with ensuring effective Quality Management processes are in place, proper Project Governance is enforced, etc. The IEEE SWEBOK dedicates a chapter to Software Quality, of which Project Audits are considered part of the knowledge area, summarising the process as:
“The purpose of a software audit is to provide an independent evaluation of the conformance of software products and processes to applicable regulations, standards, guidelines, plans, and procedures” [IEEE1028-97]. The audit is a formally organized activity, with participants having specific roles, such as lead auditor, another auditor, a recorder, or an initiator, and includes a representative of the audited organization. The audit will identify instances of nonconformance and produce a report requiring the team to take corrective action.

While there may be many formal names for reviews and audits such as those identified in the standard (IEEE1028- 97), the important point is that they can occur on almost any product at any stage of the development or maintenance process.

What to audit in DTV Projects?
Again, with a bias towards the STB Software Development delivery project, audits are introduced to review two main areas:
  • STB Development / Test Processes across the stack (Drivers, Middleware, Applications)
  • STB System Integration Processes
If you're a product vendor supplying any of the STB components, you might want to review and assess your own development/test processes to gauge how well you're doing against industry best practices. It's also good for your portfolio when pitching for new business to potential operators.

If you're a TV operator dependent on product vendors supplying components to your value chain, you need to ensure that your vendors are indeed delivering quality components & systems. Typically (and this can vary with the type of contract in place), the Operator (customer) provides the Product/System requirements to vendors; vendors implement and test their components according to the spec and produce test results (self-assessments). This makes it difficult for the Operator to dig into the detail, find out exactly how the results came about, or gauge the quality of the underlying development processes - typically until it's too late in the project life-cycle.  This can of course be remedied by building into the contracts the requirement to audit the vendor's various processes at any stage of the project life-cycle.

Some operators outsource the whole testing and verification effort to third-party test houses. These test houses, depending on the strength of the commercial contracts in place, could very well take complete ownership of test & quality management, and enforce audits on incumbent component providers.

As noted in my previous posts, more and more Operators/Broadcasters are taking more ownership over the product development, testing and integration. Take for instance, the task of STB System Integration.  Whilst there are well known companies like S3 & Red Embedded, that specialise in STB SI, operators like Sky, DirecTV, Foxtel, Multichoice have dipped their feet into the waters; attempting to own STB SI activity internally, in-house. Incidentally, there are more players on the STB SI front, some STB manufacturers like Pace, UEC; as well as service houses in India like Wipro & Tata Elxsi. NDS Professional Services is also a well known player in the System Integration field.

In my rather short stint of 11 years with STB projects, the world of DTV is quite small; the popular companies of choice for SI, from my experience are:  S3, Red Embedded & NDS.

In my current company, a DTV Operator/Broadcaster, we run our own development & STB Integration projects. We're relatively new in this game, but have built up a good team of people, making every effort to source the right people with the right skill-set (which is really challenging in South Africa), and we'd been through a few rounds of Development & SI using third-party vendors supplying Middleware & Drivers. We decided now was as good a time as any to perform a self-diagnostic - a review and audit of our current processes, focusing specifically on STB System Integration and incoming Supplier Deliverables. Although we have enough knowledge in the team to perform a self-assessment ourselves, and despite having learnt from and corrected a lot of mistakes in the past, we opted for a professional consultancy to do the audit, and chose S3.

Why we chose S3
S3, despite being a relatively small company in Ireland, has been involved with TV technology for many years and is well respected in the DTV community for its specialist design services & system integration knowledge. They've leveraged the wealth of past project experience to produce useful products like S3 StormTest & Decision Line, an STB automated testing system that has only increased in popularity since its inception (and which we're currently using, by the way). The S3 Engage Portal is another useful product for managing and co-ordinating a complicated DTV SI project.  S3 are also known to be frank & objective, offering feedback and analysis that is open, without hiding the nasty details. S3 were chosen as SI by some big players in Europe, and they're also being used on some other projects of ours.  Ultimately the choice came down to the strength of previous relationships - despite the fact that I previously worked at S3! The world of DTV is small, and the other companies mentioned above are also competent and capable.

What areas do you focus on for an SI audit?
Of course I can't share with you specifics of the S3 Audit process, but if you're interested in learning more please contact S3, or speak to this guy.
In a nutshell, S3 shares a model of SI that is well known to the industry:
  • Incremental / Iterative approach, built on a strong foundation of domain expertise, using experienced System Integrators.  The foundation cornerstones that deliver an effective SI process, rely on the following:
    • Strong Technical Expertise
    • Strong Project Management
    • Usage of Advanced Tools & Infrastructure to support the process
    • Strong Quality Assurance Practices
S3 have been kind enough to allow me to publish a slide on their overview of SI process:
S3's Integration Model: SI is a continuous cycle once all Components are in a suitable state of completeness

Based on the above cornerstones, the audit focuses on 48 topics of concern, which largely fall into the following categories. Again, these are generally best practices as evoked by IEEE SWEBOK Chapter 11 on Software Quality, which shows that S3 appreciate industry best practices:
  • Project Management
  • Co-ordination & Communications
  • Requirements Management
  • Defect Management
  • Testing
  • Quality Assurance
  • Supplier Management
  • Release Delivery
  • Resourcing
  • Software Architecture
  • Integration & Configuration Management
  • Software Development

My Own List of Audit Criteria
Since this knowledge comes from my own past experience, I've no problem sharing it with you. If you want more details, feel free to contact me, or leave a comment. The list below is something I put together in advance of the audit we've just undertaken.

If I were doing an STB audit, this is what I'd be interested in:
  • Foundations of the STB Component (Middleware/UI) architecture design
    • Design philosophy – apart from being MHP/Java/MHEG/etc compliant, what is the design ethos behind the architecture, i.e. does it scale?
    • Design patterns – what abstractions are used to ensure flexibility with changing requirements without major impact to design?
    • Performance requirements – is the product being architected with performance considerations in mind?
    • Security considerations – are security checks in place to prevent buffer overruns, etc?
    • Stability considerations – how do we ensure stability checks are enforced in design / implementation?
    • Open Source components usage – if open source is being used, is there a process for choosing components, etc?
    • Inter-component Interface design and control – assuming the middleware is a collection of components, how are component interfaces managed?
    • Documentation – Requirements, Design – is there a documentation standard in place? Middleware design, component design, etc?
    • Process for introducing change to architecture? New components to implement new features? – How are new components introduced to the system?
    • API ICD Controls – how do we manage API changes?
      • Is there an SDK?
      • Quality of API documentation – is it usable, self-explanatory, auto-generated?
      • New APIs must be communicated to application developers
      • Changes in APIs must be controlled
      • APIs must be backwards compatible
      • API Deprecation rules
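The backwards-compatibility and deprecation rules above can be sketched in code; the decorator and the API names below are invented for illustration:

```python
import functools
import warnings

def deprecated(replacement):
    """Wrap an old API so it keeps working (backwards compatible)
    while warning callers to migrate to the replacement."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            warnings.warn(
                f"{fn.__name__} is deprecated, use {replacement}",
                DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return inner
    return wrap

def play_media(uri):  # the new API
    return f"playing {uri}"

@deprecated("play_media")
def play_video(uri):  # old API kept for existing applications
    return play_media(uri)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    print(play_video("vod://example"))   # -> playing vod://example
    print(caught[0].category.__name__)   # -> DeprecationWarning
```

The key property for an SDK is exactly what this shows: existing applications keep running unchanged, while the warning gives application developers a controlled window to migrate before removal.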

  • Testing the STB (Middleware/UI/Drivers) stack – what is the testing strategy employed?
    • Depending on the architecture, assuming it’s a component-based architecture – do we have a continuous build and integration system in place, essentially for regression testing:
      • Component based testing – unit tests on the component, using mocks for other components, assuming interfaces are controlled
      • Component group testing – where you collect related components together and test as unit, e.g. PVR engine can be tested in isolation
      • Middleware stack testing – test the middleware as a whole, i.e. collection of components forming the MW
    • Regression Suite – does one exist, how is it tested, and how often is it tested?
    • Spread of STB hardware platforms used for testing
    • Use of a Simulator
    • How often do we check for regression – daily, weekly, just before release?
    • How is new feature development controlled – i.e. new features to be delivered without causing regression in previous working functionality
    • Is there a concept of a middleware integration team?
    • Middleware development is separate from middleware integration, with separate teams
    • Development teams do component development and component unit testing
    • The middleware integration team takes developed components and integrates them – testing new features against the last stable middleware as well as the new functionality
    • The middleware integration team consists of technical people who can code – best practice is to use a middleware test harness that exercises the middleware not just through the external application APIs, but by speaking directly to the component APIs themselves
    • Is there a separate team owning the typical STB Hardware Abstraction Layer / Driver Interface?
    • Who defines the HAL spec / interface?
    • Change control procedures in place to support HAL spec?
    • Testing of the HAL?
      • Is there a test harness that validates new platform ports?
      • Is there a separate Platform Integration team?
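The component-based testing strategy above, where each component is unit-tested against mocks of its peers, can be sketched as follows. This is a hypothetical, heavily simplified PVR component in Python, with the tuner interface mocked out so the component is tested in isolation:

```python
from unittest.mock import Mock

class PVREngine:
    """Toy PVR component; its only dependency is a tuner interface."""
    def __init__(self, tuner):
        self.tuner = tuner
        self.recordings = []

    def record(self, service_id):
        # The PVR engine refuses to record without a signal lock.
        if not self.tuner.lock(service_id):
            raise RuntimeError("no signal lock")
        self.recordings.append(service_id)
        return True

# Component-level test: the real tuner is replaced by a mock, so the
# PVR engine can be regression-tested in isolation, provided the
# inter-component interface (here, tuner.lock) is controlled.
mock_tuner = Mock()
mock_tuner.lock.return_value = True

pvr = PVREngine(mock_tuner)
assert pvr.record("svc-101") is True
mock_tuner.lock.assert_called_once_with("svc-101")
```

For native C components the same pattern is achieved with stub implementations of the peer components' headers; the principle of a controlled interface is what makes it possible.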

  • Supporting multiple customers and build profiles
    • How does the architecture support the needs of different customer profiles: Satellite / DTT / Cable / IP ?
    • For the same customer with different project requirements, how is the codebase managed?
    • Build, Release, Configuration Management and branching philosophy?
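A common answer to the multi-customer question is a single codebase with per-customer feature overrides merged onto a common base profile at build time. A hypothetical sketch in Python; all profile names and feature flags here are invented for illustration:

```python
# One codebase, many delivery profiles: per-customer overrides are
# merged onto a common base at build-configuration time.
BASE_PROFILE = {"pvr": True, "ci_plus": False, "ip_streaming": False}

CUSTOMER_OVERRIDES = {
    "satellite": {"ci_plus": True},
    "dtt":       {"pvr": False},
    "cable":     {"ci_plus": True},
    "ip":        {"ip_streaming": True},
}

def build_profile(customer):
    """Return the effective feature set for one customer build."""
    profile = dict(BASE_PROFILE)
    profile.update(CUSTOMER_OVERRIDES.get(customer, {}))
    return profile

assert build_profile("ip")["ip_streaming"] is True
assert build_profile("dtt")["pvr"] is False
```

The design choice worth probing is whether such configuration lives in one place (as here) or is scattered across per-customer branches, which is where codebase management usually goes wrong.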

  • Multi-site development
    • How is multi-site development managed?
    • What configuration management system is used? Does it support multi-site development well?
    • Are off-shore teams used as support arms, or do actual development?
    • Do off-shore teams have any ownership over components?
    • How is communication managed with off-shore teams?
  • Planning releases
    • How are new releases planned?
    • How is work assigned to development teams?
    • What is the usual process for planning / starting new development?
    • How much time is spent designing new features?
    • Is testing included as part of the initial design review?
    • Are new features delivered back to the main product code tree incrementally or in big chunks?
    • How much time is allocated to testing on reference platform?
    • How much time is allocated to testing on customer production platforms?
    • How is the backlog prioritized?
    • How long is the development cycle?
    • What are the exit criteria for development?
    • How long is the test cycle?
    • What are the exit criteria for testing?
    • Is Middleware integration part of planning?
    • Is full-stack integration part of planning?
    • How is defect-fixing work planned?
    • Does the plan cater for stabilization period for new features?
    • Who decides on the branching strategy in the run-up to release?
    • Are new tests identified as part of the planning process?

  • Defect Management
    • Is there a defect management process in place?
    • How are defects managed and factored into the development process?
    • How often is root cause analysis performed?
    • How is quality being managed and tracked internally?
    • What defect tracking tool is being used?
    • What metrics can be generated to gauge the quality of all components in the stack?
      • Average submission rate
      • Closure rate
      • Time it takes to close the average Showstopper
      • Most suspect components
      • Most complex components
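Metrics like these are straightforward to derive once defects are recorded with a component, a severity and open/close dates. A hypothetical sketch in Python, using invented defect records purely to show the calculations:

```python
from datetime import date

# Hypothetical defect records: (component, severity, opened, closed-or-None)
defects = [
    ("pvr", "showstopper", date(2011, 10, 3),  date(2011, 10, 7)),
    ("epg", "major",       date(2011, 10, 5),  None),
    ("pvr", "showstopper", date(2011, 10, 10), date(2011, 10, 12)),
    ("ui",  "minor",       date(2011, 10, 11), date(2011, 10, 11)),
]

def closure_rate(records):
    """Fraction of reported defects that have been closed."""
    closed = sum(1 for *_, end in records if end is not None)
    return closed / len(records)

def avg_showstopper_days(records):
    """Average days taken to close a showstopper-severity defect."""
    spans = [(end - start).days for _, sev, start, end in records
             if sev == "showstopper" and end is not None]
    return sum(spans) / len(spans)

def most_suspect_component(records):
    """Component with the highest defect count."""
    counts = {}
    for comp, *_ in records:
        counts[comp] = counts.get(comp, 0) + 1
    return max(counts, key=counts.get)

assert closure_rate(defects) == 0.75
assert avg_showstopper_days(defects) == 3.0
assert most_suspect_component(defects) == "pvr"
```

Any serious defect tracking tool exports this data; the point is that the questions above should be answerable from the tool, not from gut feel.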

  • Development practices – industry-standard coding practices should be in place
    • Coding standard
    • Architecture rules – e.g. system calls to avoid, avoiding unsafe functions such as strcpy, etc.?
    • Is CUnit test framework or any other suitable framework used for native component testing?
    • Is JUnit being used?
    • Do component unit tests map to component requirements?
    • Do middleware tests map to system requirements or use cases?
    • Are any of the following code quality metrics being looked at:
      • Zero compiler warnings
      • Zero MISRA warnings for native C components
      • Which Java best practices are being used?
      • Code Coverage:
        • Line coverage tools
        • Branch Coverage
      • Code complexity
      • Function points
      • Other static analysis tools like Klocwork
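The question of whether unit tests map to component requirements can be answered automatically if tests are tagged with requirement IDs and a traceability check runs in the build. A hypothetical sketch in Python; the requirement IDs and test names are invented:

```python
# Every component requirement must be covered by at least one unit test
# tagged with its requirement ID; uncovered requirements fail the build.
REQUIREMENTS = {"REQ-PVR-001", "REQ-PVR-002", "REQ-EPG-001"}

def requirement(req_id):
    """Decorator tagging a test with the requirement it verifies."""
    def decorator(func):
        func.req_id = req_id
        return func
    return decorator

@requirement("REQ-PVR-001")
def test_record_starts():
    assert True  # real assertions would exercise the component here

@requirement("REQ-PVR-002")
def test_record_stops():
    assert True

def uncovered(tests, requirements):
    """Return the requirements with no tagged test."""
    covered = {t.req_id for t in tests if hasattr(t, "req_id")}
    return requirements - covered

missing = uncovered([test_record_starts, test_record_stops], REQUIREMENTS)
assert missing == {"REQ-EPG-001"}  # the EPG requirement has no test yet
```

The same tagging idea works with CUnit or JUnit (e.g. annotations or naming conventions) and feeds directly into the coverage metrics listed above.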
  • Development organisational structure
    • Size of development teams
    • Structure of development teams – is it based on architecture components, i.e. notion of component teams, with component owners?
    • If using multisite development, what is the line management structure – is the team a matrix, with virtual management teams?
    • Do you have separate groups for:
      • Development: Separate UI development, Separate Middleware team?
      • Integration – is there a concept of middleware integration team?
      • Platform – is there a concept of a team specialising in low level HAL/Kernel/Drivers/Platform
    • What are the defined roles and responsibilities:
      • Product Architect  - the person owning the product architecture, maintains the integrity of the philosophy, keeping everyone in check?
      • Middleware Component Architect – do you have people owning specific components in the middleware, ensuring the interfaces are respected? e.g. the PVR engine usually has a dedicated architect
      • Lead Developer / Component Owner – someone responsible for delivering component to spec, owns the internal design and implementation of the component?
      • Engineers – what is the mix of people and skillsets across the organisation?




Tuesday, 22 November 2011

Meet Calvin, the security guard with Hope...



Calvin, on the day he passed his Drivers Licence
Meet Calvin, the Fidelity security guard onsite at my company.


A few months ago I would not have thought I'd help someone out by providing a renewed sense of Hope, Trust and Faith in fellow citizens of South Africa...but somehow I did. I am certainly not feeling overly proud, wishing to boast this to the world; in fact, we're taught to keep the good that we do a secret, as this promotes humility and gratitude to God, that it was only by God's will that some good came out of one's actions. There's a saying that if you do a good deed, it should be such that your left hand doesn't learn what your right hand has done, so keep it secret...

But in this day and age, it's necessary to share the little good that has been done; with the power of social media, perhaps the same act could be repeated by others, creating a stimulus for good and further joy in this world. So I feel compelled to publish a humble success that I hope can lead to other successes. For now I've helped someone by creating hope and instilling self-confidence: no matter what one's situation is, given the right motivation and support, if you're willing to try to change your life for the better, you will succeed, even at the risk of failure. As one of my mantras that I repeat often goes, "If at first you don't succeed, try, try and try again"

Calvin is the security guard at my company who manages entry/exit of vehicles into the car park. We got to know each other in the first couple of months as I was using a hire car at the time, and used the visitor car park quite frequently.  We only got to striking up a good conversation when I braved the walk to the local Civic Centre offices in Randburg to enquire about my SA Drivers licence.  I say "braved the walk" because nobody in Joburg really walks anywhere, and I, returning from the UK after 10 years, had grown accustomed to walking everywhere.  Against the concerns of some colleagues, I decided to walk the 800 metres to the office, to get a sense of the security myself....suffice to say, it's safe and I've repeated the walk several times since.

On my way back from the Licence office, I spotted Calvin and waited for him, a) so I could chat and b) to have a security guard as company :-) On this brief walk back to the office, I learnt that Calvin spoke very well and had strong opinions about corruption and the state of the country; he was quite curious to find out about London. He was in awe of the underground trains, and more so when I told him about the level of transparency and accountability the politicians have with their constituents: there is strict control over corruption and personal expenditures, so much so that politicians publicly resign...he wished the same could be done in SA.

I would then stop at his post now and again for a chat, and one day in August I found him really stressed about his situation. He told me how he'd been a security guard for the last 6 years, how he had wanted to study engineering but was forced to leave school at an early age when his father passed away...and now he had his Truck Drivers Learners certificate that would soon expire (Dec 2011), with no money to do what was required to get his Drivers Licence before it expired. I knew in my heart immediately that I should help this guy out, and told him not to worry, these things have a way of working out, and left him with that thought - went for my meetings, and returned in the afternoon to continue the conversation.

I learnt that Calvin was serious about changing his life for the better. As a security guard, he'd become friends with many of the truck drivers the company uses, and so he was aiming to get his Code 14 drivers licence with the aim of working for a trucking company. Truck drivers certainly earn much more than a security guard. He'd started the process and had passed his Learners test over a year ago, but just didn't have the funds to go for driving lessons and do the exam.  He also wants to study engine mechanics -- I recommended he focus on one goal at a time: first get the licence, change jobs and then think about the next step...

Without consulting my wife, I told Calvin that I'd help him get his licence.  Obviously, to a seasoned South African, all alarm bells would be going off at that moment, around trust, being taken for a ride, being taken advantage of, etc... Dismissing all of the negativity, I proposed to Calvin that I would contribute the majority of the costs towards the licensing, provided he contributed some of his hard-earned money to the cause.

I didn't want to do everything (although I had the means to fully sponsor it), but I felt that if Calvin contributed some of his own money, he'd be compelled and motivated to work hard, and succeed... I also grew up in tough times; I value commitment and dedication. Nothing comes for Mahala...

With this contract in place, we worked together on a plan of action, aiming that by the end of November Calvin would have had enough driving lessons to be comfortable taking the final exam.  Calvin surpassed expectations and, with five lessons, managed to pass his Code 14/C1 Drivers exam!  We were both beaming with joy that day - you can see it in the photo!

Proof!
So with what cost me just under R2000-00, which I could easily have used up on toys and take-aways, I helped Calvin step out of what was a depressing situation, and instilled a renewed sense of hope.

It's not over yet...
Getting his licence is the first step. My aim is for Calvin to land a job by the start of the new year.  He must leave the security guard job he's held for the last 6 years, and take the plunge into a world full of possibilities.  I will continue to look out for Calvin; my company has a campaign called "Be More" and I'm going to send this blog post to senior management and HR to see if the company can reciprocate or think about setting up similar initiatives...
I am also going to contact Talk Radio 702 to see if this is an example for Lead SA...

Doing more for South Africa...
When I left SA to work overseas, one of my ambitions was to return home and make a valuable contribution socially and professionally. I've made a humble start on the social aspect, but I do have bigger ambitions for helping the professional outlook.

I'll post about this topic later, but I have a bee in my bonnet about the lack of skills/competencies in the IT/software development field...I was really surprised that so much of the workforce in my current company is outsourced to contractors from India, as if there isn't any talent in our country. I blame the schools and tertiary institutions for not doing enough - so much so that I strongly believe it's a waste of time and money to go to a SA university: if you're interested in IT or programming, teach yourself, become self-taught, work on Open Source projects; that's your ticket to landing a decent job and earning a salary...I've been mulling over setting up an Open Source Software academy where people can learn, from high school, real-world software engineering, contributing to real-world projects without needing to attend university...

Saturday, 19 November 2011

Agile Project Management with Scrum by Ken Schwaber




Agile Project Management with Scrum (Microsoft Professional) is a well-written, easy-flowing book, clearly borne of first-hand, real-world experience of running and managing a variety of software projects over a number of years. Ken Schwaber is considered the father of Scrum, a title he humbly declines, instead pointing to the much earlier endeavours of many people: he credits Babatunde's Process Dynamics, Modeling & Control for the first oral presentation of the theory behind Scrum, and cites DeGrace's Wicked Problems, Righteous Solutions as the first to call Scrum "Scrum".
Regardless of who is credited with formalising the Scrum process, it is a software development methodology that is here to stay, and anyone working on software projects who is not familiar with Scrum should indeed rethink their strategy. Still, with so many books out there claiming to distil the secrets of Agile/Scrum, finding the right book is challenging -- so many focus on the theory without expounding on the practice: the real-world experiences and encounters that matter to a manager, especially someone transitioning from classic project management principles (not necessarily connected to the Waterfall process of software development).

This book is definitely different, one that I recommend and fully endorse 100% - so if you're reading this and don't have a copy, get one now!

Why do I like this book so much?
Well, as a developer I'd worked on many projects of different styles, and more recently explored Agile/XP concepts. I've been on training courses and presentations, and participated in practical workshops such as the agile lego game. I've contributed as a developer, led certain development efforts with a few engineers, then later went on to managing development projects using Waterfall as well as Agile; and more recently came off a massive project with 300+ people, where at the macro level we applied Agile/Scrum principles across the programme, though I didn't have much involvement in the day-to-day planning with individual component teams, where we'd assigned team leaders to act as Scrum Masters. We maintained one huge Product Backlog, not so different from what Ken describes in Chapter 9, Scaling Projects Using Scrum.  We were developing a product that would deliver to multiple customers, each customer adding unique features contributing to the final feature set; the product itself would service tens of millions of homes -- so we had a strong Product Management group maintaining the heartbeat of the multiple projects using Sprint time-boxing. We had macro daily planning and status updates, call it Scrum of Scrums.  So I was pretty much focussed on high-level programme and product management rather than the low-level activities that a Scrum Master would normally be dealing with; I'd instruct as necessary....

As I was moving to a new job that placed me in a Scrum Master role, I needed to brush up on my Scrum Master knowledge and to translate my heavy Project Management experience into Scrum principles. This book was certainly written with that purpose in mind. I was looking for real-world use cases and reflections on actual projects, something that Ken describes well.

Ken touches on the key concepts of Scrum by using example projects as references, with frank and honest feedback. He covers many of the areas that would trip up someone coming in cold to Agile, and does not dismiss the rigour of classic project management dogma (reporting, tracking, metrics, predictions) as old-school, out-with-the-old, in-with-the-new.  In fact, Ken advocates entirely the opposite. It certainly takes a while to master Scrum as a Scrum Master, and it takes much longer for a company with seasoned managers who know no better to transition to and accept Scrum from day one.  As important stakeholders in the project and business, these people have a right to be given information in a format that is understandable, and this too can still be achieved using Scrum.

What really resonated with me was...
Ken has re-affirmed much of my own understanding that Agile/Scrum is no excuse to cut corners. Scrum is not a short-cut around rigour and due diligence.  Way too often, people who are either new to Scrum or have been previously burned by management fall in love with the concepts of Scrum, especially the tenets of autonomy, freedom and a do-what-it-takes attitude to getting the job done, and immediately put up barriers when someone starts talking about processes...

  • "Oh that's the waterfall approach, we are young developers, we don't do it like that anymore. We are light on process, and don't need documentation. We don't need to have a process for teaching new engineers, it's a collective, the engineer will be absorbed into the team and learn on the job. We don't have release notes any more, it's automated by Git. We don't need a version numbering system, Git labels are fine. We don't need pre-planning, we do just enough because it's going to change anyway...."

Principles I connected with...

  • The importance of taking time out to produce a Product Backlog early in the project. The argument of not knowing what you're creating because it's unknown is not good enough. Even if you're aiming to brainstorm to produce an initial Proof-of-Concept (POC), that POC is driven by meeting the high-level needs of the Product Owner, so it is not an impossible activity to put thoughts onto paper; call it your wishlist, to-do list, whatever - there must be a Product Backlog to start with...how else do you determine the goals, and measure your progress accordingly?  So if you think you're doing Scrum and don't have a Product Backlog, then please go back and reconsider what you're doing...
  • Ensuring the Roles/Responsibilities are clearly identified - especially ownership of the Product Backlog. Is your project big enough to have its own Product Owner, or is this also something the Scrum Master can absorb? The Product Owner's role is pivotal to setting feature priorities only, not to driving scheduling or people management - that is the Scrum Master's job...so spend time clearly outlining the roles and responsibilities...
  • Spend enough time Planning - the planning process described by Ken is a useful starting point. Spending a full day on planning and getting the team to uncover the tasks before the Sprint begins is definitely valuable.
  • Due diligence must be followed by the Scrum Master - measuring and reporting on progress, generating metrics and predicting the future are essential aspects of Scrum management.  Don't be fooled by Scrum being lighter than classic project management. Scrum teams still have a responsibility to meet business objectives; how you go about doing it is the Scrum team's own business, but when asked business-type questions, the Scrum team had better have enough data to provide professional responses
  • Don't misuse Waterfall defence mechanisms - who says you don't do requirements/design/testing - Scrum says you ensure enough is done during the sprint such that at the end of the Sprint you produce a release that is shippable - so a Sprint must support requirements/design/testing/validation/release - same steps as waterfall, but it's constrained to happen within the sprint itself. Of course, one doesn't have to have heavy processes, but it is about applying core engineering principles.
  • Self-organising teams are difficult to create, and take time to master.  As a Scrum Master you have to continuously monitor the team's performance and interactions.  The hard truth is that in the real world, with distributed development or even multicultural teams of mixed permanent/contractor staff, a truly self-organising team is a rare find.
  • Common sense - a large part of Scrum is essentially maintaining a common sense and pragmatic approach to things.  One doesn't have to be a certified Scrum Master to manage Scrum projects, but care should be taken in obeying & applying the rules of Scrum, which Ken concludes in Appendix A:
    • The ScrumMaster is responsible for ensuring that everyone related to a project, whether chickens or pigs, follows the rules of Scrum.  These rules hold the Scrum process together so that everyone knows how to play. If the rules aren't enforced, people waste time figuring out what to do.  If the rules are disputed, time is lost while everyone waits for a resolution. These rules have worked in literally thousands of successful projects. If someone wants to change the rules, use the Sprint retrospective meeting as a forum for discussion. Rule changes should originate from the Team, not management.  Rule changes should be entertained if and only if the Scrum Master is convinced that the Team and everyone involved understands how Scrum works in enough depth that they will be skillful and mindful in changing the rules. No rules can be changed until the Scrum Master has determined that this state has been reached.

Watch the man himself here @ GoogleTechTalks: