Monday 4 June 2012

Managing Large-Scale Projects using Agile, Part 2 - Organisational Challenges


Organisational Challenges of Large-Scale SDPs
In this post I describe the organisational challenges of supporting large-scale software projects, especially when the management philosophy is one that promotes agility, or has its roots in adopting Agile/Scrum early with a small team before ramping up to a globally distributed project team. Specifically, I describe the approach we took in managing the example project introduced in the previous post. As with any organisation adopting Agile, there must be management support. In some cases organisations have transformed through a bottom-up approach, but an LSSDP demands some initial strategic planning, so it's imperative that management support is won at the outset (more about this later...).

Whether you're running a classic, relatively small project (say a team of 15) locally or multiple teams (250+ people), the essential elements of team/project management remain the same; the only difference is the scaling factor as the team expands. In my opinion, the following factors are integral to any software development initiative:
  • Communications: The classic example is the communication model Brooks provides: with n being the size of your team, the number of communication paths is n*(n-1)/2 - so a team of 10 has 45 communication paths, while a team of at least 200 creates 19,900 paths!  With this in mind, pains must be taken to put a suitable organisational structure in place to keep communications manageable.
  • Organisational Team Structures: Flowing from the above problem of communications, each team needs to have an identity - roles and responsibilities need to be understood. This is a trait common to small and large teams alike. Defining the structure for the project/organisation through the classic org chart does help in clarifying roles.  Even though Agile promotes a flat team structure with collaborative decision making and participation, it is still useful to ensure the roles are identified and understood.  On a large-scale project, collaborative discussion and decision making can still happen, but it is extremely challenging. Large-scale projects call for a really strong management structure (one that involves technical people as well) to ensure momentum in decision making.
  • Workspace Environments:  A challenge common to large and small teams alike: the team's physical working environment must not only be comfortable from a personal-space perspective, but must also support the needs of the work. For example, Set-Top-Box development/testing requires desk space large enough to cater for a few screens (monitors/TVs), ample power supplies, space for mobile stream players, etc.  Agile/Scrum promotes the use of whiteboards for tracking work, for example: so do you have mobile whiteboards, fixed boards, or something else? How do you solve this problem when your team is distributed? How do you ensure that your people in all geographic locations have a similar setup to the rest of the world?
  • People Skills / Training Needs: Whether you have a small or large team, you will experience the same people challenges: Does the team have the knowledge and skill-set to do the task at hand effectively and efficiently? How do you build competencies across your team base?  What are the fundamental prerequisite skills and knowledge you need? How do you promote transfer of knowledge and training?
  • Peopleware Issues & Cross-Cultural Challenges: Building software is more about people dynamics than implementing technology. If you don't have a well-formed team (Agile/Scrum promotes self-organising, well-formed teams), you will experience problems that inhibit the smooth flow of development, impacting your project's delivery - and this is true for large and small teams alike.  In a small team, the Team Leader, Scrum Master or Development Manager has more control over this, and has the time to see his/her team through the various stages of Forming, Storming, Norming & Performing. Even with small teams this process takes time.  With a large contingent of 200+ people, the process is amplified, becoming very difficult to manage and assess due to the spatial and temporal differences of distributed teams.  In our project we had people from the UK (two locations), India, Israel (Russian, Polish, American, S. African), France & Korea, with third parties from Poland, the UK, Ireland & India.   How can you achieve a well-jelled team of international players? How do you solve the cultural issues? How do you avoid the frequent Lost In Translation moments?
  • Maintaining Integrity of Software Architecture: All teams need a custodian of the software architecture - typically this is your architect; in our context, the STB architect. Where the team is small, you can have just one STB architect. On a team of 200+ people, all implementing a complex architecture, how do you ensure the technical integrity of the architecture is maintained?
  • Tools/Infrastructure/Configuration Management: Like any other trade, a software team needs the right tools for the job. When managing a local team it's easy to notice gaps and provide tools; it's far more complicated with distributed teams, since the challenge is in maintaining consistent adoption of tools - otherwise tools proliferate and differ from team to team (defect-tracking tools spring to mind). Establishing a common infrastructure plan helps create consistency for the programme, and maintains a sense of harmony within the distributed team itself.
In the remaining sections I'll touch on how we addressed each of the above areas, referring to the example case study mentioned in my starting post.

Workspace Environment
The company's thought leaders had convinced the business to adopt Agile practices globally and embarked on a massive upgrade of the office workspaces. In the UK we upgraded the office space to include floor-to-ceiling whiteboards, a 4m x 4m x 4m space for stand-ups, and ad-hoc break-out areas. The development team were kept on the same floor, grouped logically by the components they owned.  Desk space was large enough to accommodate a number of displays (TVs & monitors), as well as portable stream players. We also fitted all phones with headsets, as people spent a fair amount of time on teleconferences.
All types of cable feeds were installed directly to people's desks, enabling access to all the infrastructure required.  An ample supply of meeting rooms was available, although we primarily used one meeting room for most discussions; we never really had an official war room.  There was a lot of white space that allowed for impromptu meetings. We also installed separate, but nearby, labs dedicated to third-party support (e.g. driver support). Break-out areas included comfortable couches/sofas, as well as a games room (Xbox 360, PS3, etc.).
We also encouraged one-roof-integration-teams (a.k.a. ORITs) that catered for 20 hot-desks where we'd tackle key integration activities with key people on-site: developers, integrators, testers, vendors - all in one-site tackling the problems at hand. These ORIT-stations were kitted such that anyone from any location could come in and be productive from the first day.
The company had also installed HD video conferencing units world-wide, enabling us to have video conferences almost daily.  As the project team was huge, we had an auditorium that could hold all 200 people for regular group-wide updates.

People Skills & Training
IMHO a major challenge with a distributed team is trusting that your team has similar or equivalent skills and knowledge to do the job; it is especially challenging when it comes to distributing tasks and managing accountability.  We had a massive engineering base of at least 200 developers, integrators and testers. The nature of the work was complicated: we were using new technology, processes and tools. So training was high on the agenda.
All members of the project team were sent on training, standardised across the organisation to maintain consistency: a week-long course in Embedded Linux programming, 3-5 day workshops on Agile & Scrum, two-day workshops on negotiating/facilitating/listening skills, and Rational ClearCase/ClearQuest training. Managers went on separate Agile/Scrum Master courses. People were encouraged to join Agile forums & groups.
Project managers also went on training to establish common management practices. We had a strong body of knowledge within the organisation itself, and got people to run training courses on coding best practices, debugging techniques, architecture and design patterns. Just about everything that required a knowledge-share had training associated with it.
On the people and cultural issues - there were regular workshops such as: Working with the Israelis, Working with the Americans, Working with the French, Working with the Indians and, of course, Working with the British. The company also offered an online service for cross-cultural awareness called Cultural Navigator. Each employee had a cultural profile created; whenever you were going to deal with someone from a different country, you could view that region's profile to prepare for your upcoming interactions. It went to the extent of profiling each region's cultural index, so we could compare different regions, to the point of observing the profiles of senior management. It also had features covering cultural holidays in each region, pointers about what to say, etc. -- I found this tool extremely helpful.
We also encouraged frequent travel and working on-site, even got people to learn other region's components - we had to break the view of working in component silos.
Each local region organised team-building events, since the team was too large to convene centrally. However, senior management did travel to each region to maintain a presence, harmony and commitment.
The company also hosted annual global developer conferences, with a large contingent of the attendees coming from the Fusion team. These were great for fostering knowledge-sharing, with workshops and tutorials on relevant technology.
We encouraged frequent communications, especially the use of the phone over email, instant messenger, etc. In the section on tools, I'll describe what we put in place to encourage communications and collaboration.

Organisational Structure
The company was a large international corporate with engineering development centres in the US, Canada, UK, France, India and Israel, employing close to 5000 people worldwide. Specialising primarily in conditional access, we also provided much-needed peripheral services from broadcast scheduling control, IPTV streaming and content management, as well as STB Middleware (120 million deployments worldwide, including 40 million+ DVRs) – offering an end-to-end solution for broadcasters to deliver services with a very short time-to-market. With over 100 customers globally, including tier 1 operators – BSkyB, DirecTV, TataSky, SkyItalia, Foxtel & Viasat – the business had to maintain an efficient organisational structure to realise the delivery of its projects, consisting of the following macro structures:
  • Sales & Marketing department deals with customer proposals and new projects, working with Delivery and R&D on tenders, requirements, etc.
  • Delivery department deals directly with the customer offering Systems Architect, Integration and Test services. Generally R&D projects deliver into Delivery, who handle the deployment of the end-to-end system.
  • R&D is the research and development organisation that owns product development of Headend and STB components. It consists of a pool of architects, development and test teams.
  • Advanced Technologies / New Initiatives department – deals with new ideas before they reach R&D, kicking off projects that either enhance existing products or create new offerings for marketing to entice new and existing customers alike.
Depending on the project, a “new concept” will have had several rounds of review before entering R&D, via the Delivery group, where customer architects will have worked through the end-to-end system requirements and behaviour with the customer and agreed on the macro-architecture going forward. R&D has a pool of product and component architects who then break down the problem and design component solutions and, depending on the component, may produce detailed component designs. Development teams work closely with the architects and implement the solution. The freedom in implementing a component solution varies: sometimes the component owner has complete freedom and ownership of the internal design mechanics of the component and its subsystems, as long as the external interfaces defined by the architects are adhered to; sometimes the architects are involved down to the code and algorithm level. If you’re familiar with the writings of Frederick Brooks, renowned for the essay collection “The Mythical Man-Month”, the organisational structure NDS utilises is largely based on the team described by Brooks as “The Surgical Team”, albeit on a highly distributed model...

Our project, however, was somewhat different in that it originated from R&D product development and then transformed into a customer delivery project, so the R&D team actually fulfilled the roles of the macro-departments mentioned above. Our project team essentially handled the end-to-end marketing, product management, development and project delivery of the STB system. Split across the UK (2 sites), France, India, Israel & Korea, the team consisted roughly of the following structure, as highlighted in the previous post:

Generic Org Structure Template

Rough Org Structure of Case Study Project showing Regions/Groups
Click here to view in Scalable Vector Graphics format.

The main players were (Each group will be expanded upon as necessary in this and upcoming posts):
  • Project & Product Management
  • Architecture
  • Development: Middleware & UI Application Development
  • Integration: Middleware & System Integration
  • Testers: Middleware, Application & End-to-End Testing
  • Tools, Infrastructure, Build & Config Management
Communications
With a large project team of 200+ people situated across the globe, maintaining effective communications is indeed a challenge.  To compound things further, this project was about creating a product from the ground up; we weren't developing on top of existing, mature technologies. The project was both a product development initiative and a delivery project.

So we needed a platform that promoted transparency and collaboration; we chose Microsoft's SharePoint platform.  Despite what others might say about Microsoft products, or your developers moaning about being forced to use Windows (when their main platform for development is Linux), SharePoint, if installed and architected correctly, can really work well when measured against platform stability, availability and performance.  Given that we were split across continents and countries, SharePoint fitted naturally into the IT infrastructure, homogeneous across networks and owned by the corporate IT teams.  People only needed a browser.

SharePoint in itself is useless without a compelling portal.  We had a portal dedicated to the Fusion product that was the epicentre for communicating everything about the product, as well as the multiple customer delivery projects active at any given time.  The master product backlog was maintained on SharePoint. People could raise new requests for work packages (a.k.a. user stories), which would trigger a workflow notifying the product management team to review them. New work packages would be approved or rejected, notifying the submitter as appropriate.  Behind the scenes was an Excel spreadsheet that contained the raw, physical backlog.

Remember, this project included 100+ software components, with multiple teams owning different components that had to interface with each other. Every component team maintained a page on SharePoint where they effectively managed their component: details of the latest release, release notes, links to release binaries, component test results, component defect metrics, etc.  Component owners would instigate a workflow to publish the availability of their component to the rest of the world.  We had a similar portal for the Integration team.

Component interfaces (APIs) were strictly controlled - again, the mechanism was SharePoint. Using workflows, component owners would submit API change requests for review by the architects and peer developers. A review process was triggered, and on approval the new API details would be forwarded to a holding area for collating. During the first week of the development cycle, all API changes would be collected, packaged and sent to the customer for review (the customer was developing an application built on these APIs).

A lot was managed through SharePoint, and with it came a lot of email traffic. There was too much email, but it was a necessary evil.  Mailing lists were set up for all key groups.

All technical discussions, API changes and new APIs were sent to the architects group for review.  All technical issues were tracked using a tool called Roundup, which mirrored them on SharePoint for audit tracking.

We used other forums to discuss issues found in the field; we were dogfooding as we developed. There was extreme collaboration.

I mentioned earlier that the business had kitted out state-of-the-art video conference suites with interactive, networked whiteboard capabilities. The project and development teams relied heavily on VCs, especially when collaborating over design and architecture decisions.  The personal presence element of HD video conferencing was helpful, and sometimes it felt as if the people were actually around the same table. The company had installed a dedicated MPLS network backbone with guaranteed quality-of-service levels.

In terms of project planning, management and control - there was daily interaction at the management level, a sort of macro stand-up that happened every day, without fail, from 9AM-10AM GMT. That set the tone for the day and the expectations for the week across all active work streams. The message would be filtered down to engineers on the ground through their respective line managers, who also ran daily stand-ups with their teams.  Progress was communicated daily at the macro level: deliveries planned today, expected tomorrow, slippages, etc.  This continuous communication ensured everyone had the correct information and knowledge of what was going on with the project.  These updates were of course synchronised with the SharePoint project portal.

We also relied heavily on regular reporting, presentations and status updates.  The planning methodology centred on agile and defined a strict template - a cycle of 6 weeks - that executed like clockwork. Leading up to each iteration, communications would be sent out regarding the latest status of the project/product backlog. Enough notice was given for the team to prepare for the Iteration Planning meetings. Regular checks and balances were in place during the course of the iteration, making use of daily emails, messaging and discussion forums.

Individual component teams maintained internal wikis that were all searchable and accessible globally. Frequent newsletters were published, highlighting interesting topics worth sharing with the wider project team. Some people published videos/webcasts/training material to further enhance collaboration.

Maintaining Architectural Integrity
Software architecture is integral to any software product; no matter how complex the product is, there is always an underlying philosophy of design, or method of approach, used for implementation.  The product we were building was essentially the operating system for the set-top-box: the Middleware. This Middleware provided services that enabled application development - for example your TV guide (EPG), as well as interactive TV applications (News, Weather, Sport).  The software ran on multiple hardware platforms, requiring adherence to a very strict hardware abstraction layer/interface. The Middleware consisted of a number of different modules (around 80 software components at last count), logically grouped according to the specialisation of the functionality being supported. These components were again distributed across the globe, with a development team of 150+ people working on them, in teams of 8-12 people owning at least four components each.  Each component was assigned a component owner, a.k.a. technical lead, as well as an architect. Collecting all these together, we had a team of 20-30 architects supporting the architecture.  There was a single lead architect for the project who ensured the team of architects was synchronised and maintained coherency with the architecture. We also had a chief architect who ensured the fundamental product architecture wasn't being compromised by custom project requirements. The chief architect played the dual role of project architect as well, since this was our first customer. When additional customer projects sprang up, the chief architect handed over control to a lead project architect, focusing instead on maintaining the architectural principles needed to support multiple customer projects simultaneously - the main focus being generic implementations that avoided the need for branching or forking to meet specific customer demands.

This all sounds complicated, but in the world of product development - where the goal was always to maintain a single product platform that supported multiple customer profiles and features, without maintaining separate customer branches - this arrangement had to be maintained.  To this day, there are at least 4 major customer projects all building on this common Middleware architecture, the development teams using the same code base to deliver on multiple customer projects.  This is indeed possible when the architecture is tightly controlled, especially when a lot of thought goes into designing a common code framework such that it can be configured and tailored to suit the needs of multiple customers, using reusable software components that can themselves be individually tailored and customised.

So how exactly did we go about maintaining Architectural Integrity then?
  • Maintaining a strong sense of ownership and control - Design Authority
    • There was clear and unambiguous instruction as to who owned the architecture. Although the group of architects were themselves globally distributed and were specialists in specific areas, there was still a central figure who held ultimate accountability for the architecture - who had final say on architectural matters, especially when consensus between architects couldn't be reached.  Respect the oracle, our very own BDFL.
  • Establishing strong Foundation Principles
    • The architectural design took several iterations to get right before any real development work was instigated. Pains were taken to ensure the model would be sustainable and scalable enough to support the desired longevity of the platform, i.e. a shelf life of at least another 10 years. To maintain strong architectural integrity, one has to first ensure the foundations are solid.
    • Software Model - inception, maintenance and continuous improvement
      • Architectural principles were clearly documented, along with a set of rules and coding guidelines, for example:
        • Inter-Process communications - all components to use Client/Server IPC model
        • Inter-Thread communications model
        • Memory management model and rules
        • Logging Library
        • Design patterns for APIs, e.g. Property Bags that allowed functional signatures to remain the same without breaking the interfaces
        • Strict coding standards - applied the MISRA C coding standard
        • The UML model was always updated to reflect changes
        • Everything was abstracted - strict rules preventing direct calls to native libraries, unless approved or through the proper components
        • Memory usage was strictly controlled - memory pools for each component were budgeted and required architect approval
        • Thread and process priorities were also subject to architect approval
      • Controlling change
        • Component owners were not allowed to implement changes without prior approval from architects
        • Interface changes were tightly controlled through ICD Change process - followed a strict review process
        • Strict code review and approval processes - generally had a few layers of code review. Architects themselves could look at code if they wanted to. Everything was made public for anyone in the company to review and pass comments.
      • Continuous improvement
        • Reusability of software components and other in-house products was always promoted (don't re-invent the wheel)
        • New components had to be proposed and approval sought for acceptance into stack
      • Customer Support
        • Application APIs were documented in detail to support the needs of application developers. The Middleware supported a number of interfaces, e.g. Native-C interface, Java API and Flash API. Each interface had a complete Developers Reference, sometimes packaged with a Simulator to allow customers to develop applications
        • We had a team of technical writers who maintained the accuracy of the architectural documentation: Product Requirements, Design, SDKs, etc.
  • Communications
    • As discussed before, communications is always challenging, especially with a cross-cultural team, and quite fundamental when it comes to maintaining architectural integrity.
    • As highlighted earlier, there was a common product backlog that was available to all. Any new work had to be requested through the product backlog, with the chief architect approving/rejecting requests.
    • Regular architecture meetings were held to sync up on all the work going on across different projects and architects throughout the world. This was much needed to avoid duplication, as well as to keep a handle on consistency of solutions
    • Architects had to submit their designs for peer review before being passed on to development teams
    • Heavy use of video conferences, mailing lists, and discussion forums
    • Project managers had to intervene at some points when the architects were just being stubborn or playing the priority game
    • There was a well defined process for interacting with the customers who were developing their own application. As users of our API, we had to ensure the APIs were acceptable, so had API review sessions before development would start (more on this later).
    • All changes to the architecture were clearly communicated to customers in reasonable time
Needless to say, there is a lot of administration overhead in maintaining the integrity of a complex architecture.  The problems are not new; they just get more difficult to control when the team is not located in the same building, nor of a small, manageable size of around 12 people -- there is so much that can go wrong with a distributed team, so for high-profile projects you have to keep a firm rein on all the streams, or else your architecture will crumble, dying a quick death.  I will illustrate the visual model of an example STB software architecture in another post, but like all software stacks, the typical construction is not so different to the example below:
Example STB Software Stack (Intended for Illustration purposes only)

Consistent Tools, Infrastructure & Configuration Management
Again, this is no different to managing a small team or a simple project - maintaining strong tools, infrastructure and a process for configuration management are universal concepts in managing software development projects. It just gets a little more complicated for a large, complex project involving 200+ people, with a team that is geographically separated across continents and countries :-) For LSSDPs, a lot of planning and attention goes into these areas, because you want to make sure that whatever you introduce has a low barrier to entry, is realistic, and has merit and longevity, i.e. is scalable, extensible and not overly disruptive.  You don't want constant churn or too many trials; if this is not managed effectively it will result in disruption and unhappiness, and ultimately you lose your chance to influence - no one will listen to you, opting instead to manage their own shops.  So careful attention and thought must be invested up-front, in a spirit of collaboration & feedback, to set the project on the right footing.

There is certainly a lot to talk about in this area, I'll try to be brief, highlighting the key points that cemented the foundations to maintain & support many LSSDPs to come:

  • Tools
    • Earlier in this post I mentioned the strong reliance on communications/collaboration tools that supported the project. The remaining tools centred on supporting development & integration efforts.
    • Rational Tool Suite: We were creating a software product that was complex, complicated, and worked on by hundreds of people. The aim was for this product to become world number one in the STB Middleware market. To sustain a solid future, after much investigation, we decided to use the suite of Rational products: Rational ClearQuest for defect management and Rational ClearCase for software configuration management.  We used the multi-site versions of these products (the project had to transition from centralised to distributed control, which was a pain but had to be done).
      • Whilst there are indeed open source offerings out there, in terms of support, meeting urgent business needs, and the strong management component, these tools were mandated by the business, not just the project team. ClearQuest was a corporate-wide roll-out, whilst ClearCase was a product management decision, as it allowed the development and delivery process to be controlled in minute detail.
      • Some people might contend these tools are overkill, impose too much administration overhead on the part of developers and integrators, but when it comes to quality and controlled management of releases to support strong product development and industry best practices, then you can't really go wrong with this.
    • RequisitePro was the requirements tool of choice during the early stages, but we evolved towards Rational DOORS for common-product, multi-project requirements management. We also managed requirements in Microsoft Word documents for a while, but that was quite labour-intensive.
    • Mercury Quality Center / TestDirector was the tool of choice for our System Testing/QA team, as it handled requirements traceability and distributed teams quite nicely.
    • Black Duck was used to manage our Open Source Software policy.
    • Klocwork was used as the static analysis tool of choice to manage the quality of the software code.
    • Bespoke In-House Tools: We had over time instigated many in-house development tools that provided added value to maintaining software quality standards and productivity enhancements:
      • Continuous Integration (CI) powered by CruiseControl - this can be considered as the mother of all CI systems. This in itself is a separate topic on its own. Suffice to say, we had a CI system using real Set-Top-Box decoders, racks and racks of them, literally hundreds of networked STBs, controlled by CI-bots and processes...
      • Regression Tracker - Tightly integrated with the CI system & ClearQuest, this was a real-time system that provided a dashboard and detailed views into the daily builds and regression status of all software components
      • Defect Metrics - We created a web front-end to ClearQuest to provide easy, at-a-glance dashboard metrics
      • Code Review - This was an in-house tool that supported collaborative code reviews, with comment bubbles on the code and review/approval workflows integrated with ClearCase and ClearQuest
      • Simulators / IDEs - Not so dissimilar to Android's emulation environment (and long before Android), we had a simulator that not only emulated a Set-Top-Box for middleware and application development, but also evolved to provide a live debugging environment: a user-friendly interface to the GNU Debugger that allowed us to debug on target hardware. The standard JTAG/Wind River debugger at the time didn't meet all our needs, hence we extended the platform, creating our own environment built on Eclipse. 
      • Change Request (CR) Workflow tool - We re-used a tool from a previous project that implemented a CR workflow tightly integrated with ClearQuest and the Time Recording System, as CRs are charged on a Time & Materials basis. 
      • Documentation Management - Whilst the project portal was hosted on SharePoint, the company had created its own Document Management system, supporting the typical requirements of configuration management.
    • Excel - Agile Backlog & Planning - We searched high and low for a tool to manage the product backlog and the resulting detailed plans, but in the end settled on Microsoft Excel.  Although SharePoint managed the backlog requests, the master planning document was an Excel spreadsheet.  Most Agile project management tools support typical small-scale projects, focused on managing the direct development team; we had a collection of development teams to manage, plus our definition of done encompassed cross-functional and cross-department testing, which NO tool on the market catered for.  To give an idea of the kind of backlog we had in mind, we used the example below to present to the likes of Hansoft and other vendors (even in my current project, the development team started with SpiraTeam, then switched to Excel/whiteboard, and is now using Agilefant - and Agilefant doesn't come close to meeting that need). Our model:
Product Backlog Concept
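To make the backlog concept above concrete, here is a minimal sketch of the underlying idea: a single backlog item that spans several development teams, where "done" covers cross-functional and cross-department activities, not just development. The class and field names are my own invention for illustration, not the actual Excel column layout we used.

```python
from dataclasses import dataclass, field

# Illustrative sketch only - names are hypothetical, not the real backlog schema.
@dataclass
class BacklogItem:
    item_id: str
    title: str
    teams: list                 # every development team touched by this item
    done_criteria: dict = field(default_factory=dict)  # activity -> completed?

    def is_done(self):
        # "Done" only when development AND the cross-department
        # activities (integration, system test, documentation...) are complete.
        return all(self.done_criteria.values())

item = BacklogItem(
    item_id="PB-042",
    title="PVR trick-mode playback",
    teams=["Middleware", "Drivers", "UI"],
    done_criteria={"development": True, "integration": True,
                   "system_test": False, "documentation": False},
)
print(item.is_done())  # False - cross-department testing still outstanding
```

The point the sketch makes is the one the vendors struggled with: completion is a property of the whole cross-department workflow, not of one team's task board.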
  • Infrastructure
    • Continuous integration - I touched briefly on the CI system we'd developed. This system was quite complex, with big demands on lab space and physical infrastructure (decoders, servers, networking, cabling), and it was supported by its own team of system developers. The CI product supported multiple customer projects, and hence was a separate entity managed by the Infrastructure Team.
    • Broadcast Engineering - Specifically relevant to Digital TV projects, there is a dependency on broadcast systems; not everything could be tested using the existing, live broadcast stream. Developers, integrators & testers needed to develop and test the system to respond to a variety of inputs and stimuli.  When it came to field-trialling the system, the project team relied upon someone building and deploying the system components in a lab or other setting - it therefore became necessary to set up a dedicated support team, the TIG (Technical Infrastructure Group), to support the technical needs of the project. 
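To give a flavour of what a CI run over a rack of real decoders involves, here is a hypothetical sketch: a CI bot flashes a candidate build onto a pool of networked STBs in parallel and collects per-test results from each. Every class, address and test name here is invented for illustration; the real system's orchestration (CruiseControl plus bespoke bots) was far more involved.

```python
from concurrent.futures import ThreadPoolExecutor

class StbNode:
    """One networked set-top-box decoder in the CI rack (illustrative stub)."""
    def __init__(self, address):
        self.address = address

    def flash(self, build_id):
        # Placeholder for deploying the build image to the decoder over the network.
        return True

    def run_suite(self, suite):
        # Placeholder: the real system drives the decoder and collects results;
        # this stub simply reports every test as passing.
        return {test: "PASS" for test in suite}

def run_regression(build_id, nodes, suite):
    """Flash a build to every STB in parallel and gather results per node."""
    def worker(node):
        node.flash(build_id)
        return node.address, node.run_suite(suite)

    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return dict(pool.map(worker, nodes))

nodes = [StbNode(f"10.0.0.{i}") for i in range(1, 4)]
results = run_regression("build-1234", nodes, ["boot", "epg", "pvr"])
print(results["10.0.0.1"]["boot"])  # PASS
```

Scaled up to hundreds of decoders, the aggregated `results` map is essentially what the Regression Tracker dashboard rendered in real time.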
  • Configuration Management 
    • Centred around the Documentation Team, Rational tools support and ClearCase release management - we had a dedicated group that managed the various aspects of configuration management.  
    • As I will explain in upcoming posts on the development, integration & release management philosophy, we employed the notion of Build Masters, with the Project Integration team being the only team responsible for generating the build for release - including managing the results of the CI system, influencing Go/No-Go decisions, and preparing the final release candidate builds.
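A Go/No-Go decision of the kind the Build Masters influenced can be sketched as a simple gate over the CI regression results and the open blocking defects. The function, threshold and field names below are purely illustrative assumptions, not the project's actual release criteria.

```python
# Hypothetical sketch of a release-candidate gate; thresholds are invented.
def go_no_go(regression_results, open_blockers, pass_threshold=0.98):
    """Decide whether a candidate build may be promoted for release.

    regression_results: mapping of test name -> "PASS"/"FAIL" from the CI system.
    open_blockers: count of open blocking defects (e.g. queried from ClearQuest).
    """
    if open_blockers > 0:
        return "NO-GO"                      # any blocker vetoes the release
    passed = sum(1 for r in regression_results.values() if r == "PASS")
    rate = passed / len(regression_results)
    return "GO" if rate >= pass_threshold else "NO-GO"

results = {"boot": "PASS", "epg": "PASS", "pvr": "FAIL"}
print(go_no_go(results, open_blockers=0))          # NO-GO (only 2/3 passed)
print(go_no_go({"boot": "PASS"}, open_blockers=0)) # GO
```

In practice the decision was of course a human judgement informed by the dashboards, but encoding the inputs this way shows why the Project Integration team needed sole ownership of the release build.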
I will expand on the above topics in future posts as I dig deeper into this topic of Large-Scale Software Development Projects, a large part of which is the Software Development Plan or Project Methodology... Read Part 3: Disciplined Agile Model for Product Management...
