Thursday, 7 March 2013

Overview of Open Source Software Governance in Projects

In an earlier post, I introduced the topic of the increasing use of Free & Open Source Software (FOSS) in Digital TV projects. This post provides a brief overview of the various areas to consider as part of managing open source software in projects.

I am pleased to share my first guest blog post, written by Steven Comfort. Steve works as a Software Compliance Analyst; we crossed paths when I started searching for support and consultancy on implementing an end-to-end FOSS governance programme. That programme is still a work in progress, but we'd like to share our experience and learning to date, in the hope it helps others who might be thinking of doing the same...

Open Source Compliance
With the partial exception of mobile phones, the embedded device operating system wars can realistically be said to be over, with Linux in its various flavours emerging as the clear winner. Coupled with the proliferation of open source software stacks and applications, it is highly unlikely that any device you purchase will be devoid of open source software.

When Free and Open Source Software first started penetrating traditionally proprietary software solutions, many people were sceptical, and the cliche "There is no such thing as a free lunch" was commonly heard. That scepticism was not entirely misplaced: there is in fact a cost associated with using open source software. Put simply, that cost lies in fulfilling the licence obligations that come with its use.
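To make those obligations concrete, here is a minimal sketch of how a governance programme might map the licences in a product's bill of materials to the duties they trigger on distribution. The component names, licence identifiers and obligation lists below are illustrative assumptions, not a real compliance database:

```python
# Map licence identifier -> key obligations triggered on distribution.
# The entries are simplified illustrations, not legal advice.
OBLIGATIONS = {
    "GPL-2.0": ["provide corresponding source", "retain notices"],
    "LGPL-2.1": ["allow relinking", "retain notices"],
    "MIT": ["retain notices"],
    "Apache-2.0": ["retain notices", "state changes"],
}

def compliance_report(bom):
    """Return the obligations owed for each component in a bill of materials."""
    report = {}
    for component, licence in bom.items():
        report[component] = OBLIGATIONS.get(
            licence, ["unknown licence: manual review required"])
    return report

# Hypothetical STB software bill of materials.
bom = {"linux-kernel": "GPL-2.0", "libcurl": "MIT", "busybox": "GPL-2.0"}
for component, duties in compliance_report(bom).items():
    print(f"{component}: {', '.join(duties)}")
```

In practice this mapping would come from a proper compliance tool or legal review; the point of the sketch is only that the "cost" of FOSS is enumerable per component and per licence.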

Using Open Source in Set Top Box Software Stacks

Set-Top-Box (STB) software stacks have come a long way since their inception back in the early 1990s, when custom and bespoke software vendors provided proprietary components for most of the stack. Even today, most of the higher-level STB components remain proprietary - the Middleware providers (NDS, OpenTV, Irdeto, DirecTV, EchoStar, etc.) are still pretty much closed. The real game-changer was the advent of the Linux Kernel, especially its foray into the embedded space; if I recall correctly, it started to take off and gain critical mass in Set-Top-Box stacks from early 2003. More recently - and this is something I'd forewarned my previous company (NDS) about - came the Android stack. When I first downloaded the Beta Android Core SDK back in 2007, I immediately saw the similarities between that stack and set-top-box middleware stacks; lo and behold, Android STBs appeared in early 2009/2010 and are already becoming the mainstream choice for quick, low-cost-to-market projects.

I am not sure how many STBs (apart from legacy ones) still use the likes of uC/OS, VxWorks, Nucleus, ST OS/20, etc. (in a previous life I worked on all of these OSes), but nowadays the OS of choice is definitely a variant of the Linux Kernel, available for free! A Linux system opens up so much more: in terms of utilities, components and application engines, almost any functionality that exists in the PC/systems world is just a port away from the embedded device world.

PayTV Operators and Middleware System vendors alike would spend months, even years, re-inventing the wheel for functionality and features that were readily available in the PC world. Before Linux, these vendors would literally re-create software components from scratch: compression engines, Point-to-Point Protocol, the TCP/IP stack, HTML engines, UPnP stacks, and so on. Today, time-to-market is much shorter, and proofs-of-concept can also be done in record time. In past projects I've seen teams use free, off-the-shelf components - libupnp, pppd, httpd, dhcp, libcurl, xbmc, ffmpeg, webkit, directFB and gstreamer, to name a few - to build some impressive demos. I have personally used other open source software, such as the Festival Speech Engine, to bring real-time Text-To-Speech to set-top-boxes.

I attended a session recently where Broadcom's (BCM) STB team presented their chipset roadmap and demonstrated their Trellis Software Framework, built on open source. It's really interesting to see that BCM is quite well established in the Open Source community, consuming as well as reciprocating. Their proposal of an open interface defies the old ways of traditional Middleware stacks, and it's an interesting (most likely disruptive) trend worth keeping an eye on...

Tuesday, 26 February 2013

Modeling Software Defect Predictions


In this post I will share yet another tool from my PM Toolbox: a simple method any Software Project Manager can use to predict the likelihood of project completion, based on a model for defect prediction. The tool only scratches the surface of real software engineering defect management, but it has nevertheless proved useful in my experience of managing a large software stack. It was born from my direct experience as Development Owner for a large STB Middleware software stack comprising more than eighty (80) software components, each independently owned and subject to specific defect and software code quality requirements. I needed a tool to help me predict when the software was likely to be ready for launch based on the health of the open defects, and to provide evidence to motivate senior management to implement recovery scenarios. It proved quite useful (and relatively accurate, within acceptable tolerances) because it offered perspective on a reality that is routinely underestimated: the work required to get defects under control, and the need to predict project completion dates from defect resolution rates - something that depends on the maturity of the team and is often misunderstood.
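The core idea behind such a prediction can be sketched very simply: take the recent net defect closure rate and extrapolate to the point where open defects reach zero. This is an illustrative sketch under assumed weekly data, not the author's actual spreadsheet tool, and the numbers are hypothetical:

```python
from datetime import date, timedelta

def predict_completion(open_counts, as_of, weeks_window=4):
    """Project when open defects reach zero from the recent weekly trend.

    open_counts: weekly history of open defect counts, oldest first.
    Returns a predicted date, or None if defects are not trending down.
    """
    window = open_counts[-weeks_window:]
    # Average net reduction (closures minus new arrivals) per week.
    rate = (window[0] - window[-1]) / (len(window) - 1)
    if rate <= 0:
        return None  # defect count flat or rising: no credible end date
    weeks_left = open_counts[-1] / rate
    return as_of + timedelta(weeks=weeks_left)

# Example: open defect counts over the last six weekly snapshots.
history = [120, 112, 101, 95, 88, 80]
print(predict_completion(history, date(2013, 2, 26)))
```

A real model would also weight defects by severity and account for the find rate accelerating as test coverage grows, but even this linear burndown is enough to confront a team with the gap between its closure rate and its launch date.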

I created the first instance of this tool back in 2008/2009; the first public version was shared in this post on Effective Defect & Quality Management. Since then, I've upgraded the tool to be more generic and significantly cut down the number of software components. I still use the tool in my ongoing projects, and have recently convinced senior management on my current project to pay heed to defect predictions.

Friday, 8 February 2013

Risk Management - A Generic Risk Register for DTV Projects


In this post, I share a generic Risk Register that can be tailored for almost any Digital TV Systems project, especially projects around Set-Top-Box product development. The register can be used by Programme and Project Managers on the customer side (Pay TV Operator), the vendor side (Middleware / EPG / Drivers / CA) or by a Systems Integrator, and tailored to meet the specifics of a project.

You are free to download this tool and use it as a template for your own projects, as long as you retain attribution to me as the original author. All my work on this site is licensed under Creative Commons Attribution-ShareAlike.

Over the past decade of working on Digital TV projects, especially Set-Top-Box projects, I've come to experience common themes that repeat themselves one project after another. Whilst it is true that no two projects are the same, I do believe that projects are ultimately exposed to similar risks, issues and opportunities - besides, we can no longer say that Digital TV is a new and emerging field. We've been making Set-Top-Box (STB) hardware and developing STB software for just over two decades now, and as with all manufacturing and development processes, we have probably reached a point where some standard blueprints can be put together, forming templates to guide such projects going forward.

So the aim of this post is to share a Generic Risk Register, born out of my own personal experience - one that I myself use as a template for managing risk on every new project I embark on.
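The shape of such a register can be sketched in a few lines: each entry carries a description, a likelihood and impact score, and a mitigation, and the register is reviewed highest-exposure first. The 1-5 scoring scheme, field names and example risks below are hypothetical illustrations, not the contents of the downloadable template:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def score(self):
        # Simple exposure score: likelihood x impact.
        return self.likelihood * self.impact

register = [
    Risk("Middleware vendor slips delivery", 4, 5, "Stagger integration drops"),
    Risk("STB chipset driver instability", 3, 4, "Early driver sanity suite"),
    Risk("CA integration rework", 2, 3, "Joint test plan with CA vendor"),
]

# Review the register highest-exposure first at each project checkpoint.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description} -> {risk.mitigation}")
```

A spreadsheet works just as well; the value is in the discipline of scoring, owning and revisiting each entry, not in the tooling.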

Wednesday, 23 January 2013

SI QA Sanity Tests debate...round one

Today, 19 months after joining the company and about 18 months after getting involved with its ongoing projects, we finally decided to have a discussion on the topic of "Sanity testing". I had a tendency to question why we continued with downstream testing even though "sanity" had failed, but I was always overridden: it was deemed acceptable for the current stage of the project. Since I wasn't directly involved in managing that project, I let it be - the timing wasn't right yet...

But even today, on a project that I am directly involved in as the overall programme manager - having influenced and steered much of the development / integration / test process improvements to bring the project back on track - there is still confusion around what "sanity testing" really means. The technical director managing the launch, whose position in the corporate hierarchy is one level above mine (and the QA manager's), has maintained a different view of Sanity Testing than I have. But it seems I'm not alone in my view, having recently gained the support of the QA Manager as well as the Product Owner - so we decided it was time to meet and discuss the topic in an open forum, to try to reach an understanding, avoid confusing the downstream teams, and establish a strategy for possible improvements going forward...easier said than done!

One of the challenges I face almost daily is that of enabling change across the project team: sticking my nose into the Development / Integration / Test departments with the aim of steering them in the right direction, to help increase the chances of delivering the project. I do this even though, strictly speaking, I am positioned within the organisation as a "Programme Manager or Strategic Planner" wearing the hat of Project Management. As if applying PM pressure across the team weren't a thorny enough subject already, I still go and push for process changes in Dev / Int / QA - am I a sucker for punishment or what?? But I can't help myself, really, because of my technical background: I've been through all the stages of software engineering as both engineer and manager, and have worked for some of the best companies in the industry, companies that evolved over the years towards best practices. So when I see things being done that deviate from what I consider the industry norm, I just can't help but intervene and provide recommendations, because I can short-circuit the learning curve and help the team avoid repeating expensive mistakes!

And so to the topic of QA Sanity: we've never respected the term, and despite failing sanity we continue to test an unstable product in the hope of weeding out more failures. Which raises the questions:

  • What does Sanity mean? 
  • What do we do when Sanity fails?
  • Do we forsake quality until later in the project?
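My own answer to the first two questions is that sanity should act as a gate: a small, fixed suite of smoke checks that must all pass before any downstream testing is allowed to start. A minimal sketch of that gating logic, with hypothetical check names standing in for real STB sanity tests (boot, tune, EPG load):

```python
def run_sanity(checks):
    """Run each (name, test_fn) pair; return the names of the failures."""
    failures = []
    for name, test in checks:
        try:
            ok = test()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return failures

def gate(checks):
    """Return True only if every sanity check passed (gate is open)."""
    failures = run_sanity(checks)
    if failures:
        print("SANITY FAILED - halt downstream testing:", ", ".join(failures))
        return False
    print("Sanity passed - proceed to full regression.")
    return True

# Example build drop: two passing checks and one failing one.
checks = [
    ("boot_to_home_screen", lambda: True),
    ("tune_to_channel", lambda: True),
    ("epg_loads", lambda: False),
]
gate(checks)
```

The design point under debate is exactly the `return False` branch: whether a failed sanity run blocks the regression cycle, or merely gets logged while testing continues on an unstable build.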