Started in 2011 as my outlet to share my experiences on personal and work topics. My handle is khanmjk. My friends & colleagues call me "Mo".
This blog is an experiment, a work-in-progress.
Life: Self-awareness, personal development, growth hacking (quantified self).
Work: Software and Systems Engineering, Leadership, Management, New Ideas and OpEds.
I spent the last 48 hours pair-programming with Antigravity, an advanced AI coding assistant. We took a legacy
JavaScript codebase—riddled with global variables, monolithic files, and "spaghetti" dependencies—and transformed it
into a modern, service-oriented architecture.
The results go beyond just "helper" status. The metrics suggest a level of productivity that warps the traditional
time-cost equation of software development.
This app started out in 2024 as a scrappy concept MVP, rapidly coded as a single index.html file with inline vanilla JavaScript & CSS - messy spaghetti code that grew and grew, until this weekend, when I decided to just stop, refactor, and clobber all the technical debt. The goal? To experience the full journey of coding with AI and improve the code over time, to eventually get a "modern" app deployed. This code was touched by many AI hands: OpenAI o1, Gemini, Claude, Codex 5.1, and my manual tweaks here and there.
The Analytics: Friday to Sunday
Let's look at the raw data from git. This represents activity from Friday, December 5th, to Sunday
evening, December 7th.
Commits: 60 (non-merges)
Files Changed: 89
Lines Added: 10,468
Lines Deleted: 6,755
Total Code Churn: ~17,223 lines
17,000 lines of code touched in one weekend.
The Narrative: Deconstructing the Monolith
When we started on Friday, the application was held together by window objects. Functions were exposed
globally, state was scattered across main.js and utils.js, and components were tightly
coupled to the DOM via inline onclick handlers.
Here is the story of our weekend, told through the commit log:
The Purge of Globals: We began by hunting down window.* dependencies. One by one, we replaced
fragile global state with robust, testable Singletons and Services.
Breaking main.js: We took the 1,000+ line main.js file and chopped it up. We
extracted logic into dedicated domains: PlanningService, AIService,
SystemService, and NotificationManager.
Refactoring the UI: We implemented a VIEW_REGISTRY pattern, moving away from ad-hoc function
calls to structured, class-based component management (a sketch of both patterns follows this list).
Safety Check-ins: You see this pattern in the logs constantly: "Safety checkin - refactoring...".
We moved fast, but we didn't break things. The AI allowed me to make sweeping architectural changes with the
confidence that I wasn't leaving behind broken references.
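To make the two patterns above concrete, here is a minimal sketch in the app's vanilla JavaScript. The names (PlanningService, VIEW_REGISTRY, PlanningView) follow the post, but the shapes are illustrative assumptions, not the repo's actual code:

```javascript
// Before: state and functions hung off window, e.g. window.currentPlan
// and inline onclick="showPlanningView()" handlers.

// After: a lazily-created singleton service owns the state...
class PlanningService {
  static #instance = null;
  static getInstance() {
    return (PlanningService.#instance ??= new PlanningService());
  }
  constructor() {
    this.currentPlan = null; // state that previously lived on window
  }
  savePlan(plan) {
    this.currentPlan = plan;
    localStorage.setItem('plan', JSON.stringify(plan)); // illustrative persistence
  }
}

// ...and a VIEW_REGISTRY maps view names to class-based components,
// replacing ad-hoc global function calls from the DOM.
class PlanningView {
  render(container) {
    const plan = PlanningService.getInstance().currentPlan;
    container.textContent = plan ? `Plan: ${plan.name}` : 'No plan loaded';
  }
}

const VIEW_REGISTRY = new Map([
  ['planning', () => new PlanningView()],
]);

function showView(name, container) {
  const factory = VIEW_REGISTRY.get(name);
  if (!factory) throw new Error(`Unknown view: ${name}`);
  factory().render(container);
}
```

The payoff is that every usage site goes through one import path, so a sweeping rename touches one service instead of twenty files of window lookups.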
The "Antigravity" Effect: AI vs. Human Effort
How long would this have taken a human developer working alone?
The Human Estimate:
To touch 89 files and rewrite the core architecture of an application requires immense cognitive load.
Day 1-3: Reading code, mapping dependencies, planning the refactor safe-zones.
Day 4-8: Executing the refactor file-by-file. Manually checking imports. Wrangling with loads of ReferenceErrors.
Day 9-10: Regression testing. Realizing you broke the "Save" button because it relied on a global
variable you deleted three days ago.
Conservative Estimate: 2 Weeks (1 Sprint) of full-time, focused work.
The Antigravity Reality:
We did it in 14.7 hours.
I analyzed the timestamps of every commit. Our total active coding time—the actual time spent typing, refactoring,
and verifying—was less than 15 hours.
Saturday: ~5 hours of strategic refactoring.
Sunday: ~9.5 hours of high-intensity execution.
The AI didn't just "autocomplete" lines. It understood the architectural intent. When I said "Refactor this to use
a Singleton pattern", it didn't just write a class; it found every usage of the old global variable across
20 files and updated them instantly. It remembered the context I would have forgotten.
This wasn't just faster coding. It was higher leverage coding. I spent my weekend making high-level
architectural decisions, while Antigravity handled the thousands of lines of execution required to make those
decisions reality.
Conclusion
If this weekend proved anything, it's that the metric for "developer productivity" is changing. We aren't limited by
typing speed or syntax recall anymore. We are limited only by our ability to describe the architecture we want.
My small experiment suggests a software engineer could compress two weeks of work into 15 hours!
Did I waste my weekend coding with AI? Maybe...but I'm learning a ton and having fun coding after a very long time...it's like I'm back to my early teenage days of losing myself inside the machine...
My test subject? A "Franken-app" I started building back in the stone age of September 2024. Back then, I was using GPT-4o and the early, limited versions of Claude Sonnet. At the time, those models felt like over-eager interns: they generated code fast, without thinking, and prioritized "showing it works" over best practices.
One+ years later, the landscape has shifted. The "Vibe Coding" hype suggested that software engineering was dead. I’m here to tell you that it’s not dead—but it has mutated into something entirely different.
Here is what I learned re-wiring a year-old messy codebase into an enterprise-grade web app in 48 hours.
The Antigravity Experience: 13 Days In
As I write this, Google Antigravity is less than two weeks old. I installed it on Day 2, and honestly? It was rough. It froze, crashed, and felt heavy. But after the stability patch earlier this week, something clicked.
The "Killer App" isn't just the code generation; it’s the Conversational Context.
With Gemini 3 Pro integrated directly into the IDE, I found myself managing a large-scale refactor not by typing code, but by debating architecture. I could propose ideas, ask Gemini to critique them, and even generate UI mockups right there in the console—no context switching to Figma or Photoshop.
However, it’s not perfect.
The "Human Bottleneck": Antigravity encourages you to run parallel workspaces with multiple agents. But I found that while the machine is multi-threaded, I am not. Context switching between three different agents causing chaos in three different branches is mentally exhausting.
The "Browser Agent": Promising, but slow. It’s fascinating to watch an AI navigate your app to test it, essentially replacing the QA role, but right now, it struggles to "drive" complex applications without crashing.
The Model Showdown: December 2025 Edition
Since Antigravity lets you swap models, I played them against each other. Here is the reality check:
1. Claude Opus 4.5 (The "Academic"):
Supposedly the best coding model in the world. It provides the best analysis and writes detailed, "Senior Engineer" style impact assessments. However, it still makes junior mistakes. It would change a CSS class to fix one page, completely forgetting the cascading impact on the rest of the site. High IQ, low spatial awareness.
2. Gemini 3 Pro (The "Collaborator"):
The best conversationalist. It holds the "big picture" context better than the others, but it struggles with long-running threads. Eventually, the context gets so heavy that it stalls or starts hallucinating old patterns.
3. OpenAI Codex 5.1 Max (The "Brute Force"):
When I hit a wall, I called in Codex. It lacks foresight and nuance, but it is 100% confident and usually finds a way to "just make it work."
The Big Question: Is Spaghetti Code Actually Bad??
This brings me to my most controversial observation.
We spend so much time prompting these models to respect encapsulation, avoid global namespaces, and write "Clean Code." But why? We are forcing AI to mimic human limitations.
I successfully transformed a 90s-style interface into a Jira-level Enterprise UX in one weekend. The code under the hood? It’s a mess. It is clearly "vibe coded." A human engineer would weep looking at the dependency graph.
But I ask you: So What?
If the AI writes the code, maintains the code, and fixes the bugs, why does a human need to understand it?
Machines are better than us at reading large datasets and understanding complex, spaghetti-like relationships.
If the application is secure and performant, the "readability" of the code is purely a vanity metric for human ego.
We are moving toward a future where we trust the machine to do the job. The role of the software engineer is shifting from "Author" to "Architect" and "Reviewer." You still need a technical background to know what to ask for—you need to know what a "race condition" is to spot when the agent creates one—but you no longer need to care about how the syntax is structured.
The Missing Pieces: How Antigravity Can Win
While Antigravity is impressive, my weekend of deep work exposed three critical gaps where the "Agent" promise falls short of the reality. If Google wants this to replace VS Code, here is what needs to happen next.
1. We Need a "Time Machine" for Code
Right now, Antigravity operates in a linear flow. If an Agent goes down a rabbit hole for 20 minutes and ruins the codebase, your only option is to manually git revert or undo endlessly.
The Fix: We need Conversation Forking and System Restore Points.
Imagine a "DVR" slider for your development session. I want to scrub back to 2:00 PM, right before I asked Gemini to "refactor the auth module," and fork the reality there to try a different prompt.
We need "Save States" for the Agent’s brain, not just the code. I want to revert the context, not just the files.
2. The Context "Amnesia" Problem
I found that even with massive context windows, I am still the only thread tying everything together.
The Problem: If I start a new workspace for a "UI refresh," that agent has no idea I just spent three hours discussing database schema changes in a different thread.
The Fix: Cross-Conversation Memory. Agents need a shared "hive mind" or a persistent Project Knowledge Graph. If Agent A makes a decision in Thread 1, Agent B in Thread 2 should instantly know about it without me having to copy-paste the summary.
3. "Architect Mode" vs. "Intern Mode"
It is baffling that in late 2025, I still have to explicitly prompt a state-of-the-art model to "avoid global namespaces" or "separate concerns."
The Reality: These models are still trained on the average of all GitHub code—which means they default to "average" (spaghetti) quality. They prioritize working code over maintainable code.
The Fix: I shouldn't have to prompt-engineer basic competence. Antigravity needs a toggle for "Strict Architect Mode"—a system-level guardrail that forces the Agent to adhere to SOLID principles and Design Patterns before it writes a single line of code. I don't want an eager intern; I want a Senior Engineer who pushes back when I ask for something stupid.
Observations So Far: The Paradox of Productivity
I have successfully rewired a 90s-style web app into a modern enterprise platform in 48 hours. The result is undeniable, but the process felt like herding cats.
The "Vibe Coding" Reality: The app looks great, but under the hood, it is a "Vibe Coded" mess. The dependency graph looks like a bowl of spaghetti.
The Stability Tax: I found myself having to "close-and-restart" Antigravity multiple times to clear out "ghosts" in the machine.
The Human Bottleneck: The promise of agents is parallelism, but I am still single-threaded. Trying to manage three agents simultaneously is mentally exhausting. Instead of writing code, I am now a Project Manager constantly unblocking three different junior developers who type at the speed of light but forget instructions every hour.
The value proposition is compelling, but we are not there yet. We have replaced "syntax errors" with "context errors," and I am not sure which one is harder to debug.
The Verdict
Antigravity is buggy, resource-hungry, and requires a "close-and-restart" every few hours. But it is undeniable that the tide has turned. Frontend engineering as we knew it in 2024 is gone.
The future isn't about writing code. It's about managing the agents that write it for you—and perhaps, learning to love the spaghetti.
Note: This blog post was initially written by me and later tidied up with Gemini Pro.
In a previous post, I shared some of my experience adding AI features into this proof of concept application I've been building entirely with AI coding assistants. This weekend, I found a way to break through the obstacle I faced when attempting to use Gemini to generate images through its Imagen API. A much simpler way to realise my goal of getting an AI to generate architecture diagrams from a simple prompt is to instruct the LLM to produce Mermaid diagrams.
All it needed was a carefully crafted system prompt, instructing the AI to create the Mermaid diagram syntax from the data model, and voila! We now have a way to generate almost any diagram one can think of.
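For the curious, here is a minimal sketch of the idea in vanilla JavaScript, using the public Gemini generateContent REST endpoint. The system prompt here is a toy version; the app's real prompt is far more detailed:

```javascript
// Sketch only: ask the model for Mermaid syntax, not images.
const SYSTEM_PROMPT =
  'You are an architecture diagram generator. Given a JSON data model of ' +
  'systems, services and APIs, respond ONLY with valid Mermaid "graph TD" ' +
  'syntax describing the architecture. No prose, no code fences.';

async function generateDiagram(dataModel, apiKey) {
  const url =
    'https://generativelanguage.googleapis.com/v1beta/models/' +
    'gemini-2.5-flash:generateContent?key=' + apiKey;
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      systemInstruction: { parts: [{ text: SYSTEM_PROMPT }] },
      contents: [{ parts: [{ text: JSON.stringify(dataModel) }] }],
    }),
  });
  const data = await res.json();
  // The returned Mermaid source can then be handed to the Mermaid
  // library (e.g. mermaid.render()) to draw the diagram client-side.
  return data.candidates[0].content.parts[0].text;
}
```

Because the model returns text rather than pixels, the diagram stays editable, diffable, and re-renderable: no image API required.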
Take a look at this video:
And here are the app's built-in native features showing different Mermaid diagrams for the system under inspection:
Have you ever been curious to learn how popular systems like Excel, Visual Studio, Adobe Photoshop, Call of Duty, Amazon, Uber, AirBnB, ChatGPT, Netflix, the Apple App Store, etc. are architected and built?
If you're a software manager designing a technical organisation and want to learn how the best technology teams are organised, wouldn't you like insights into how to organise your teams, drawing on how the big tech companies do it?
What if you're stepping into product and program management, and need to create a 3-year roadmap, with business goals, ROI metrics, and a single-page dashboard for overall status?
What if you're a software delivery manager who needs to manage multiple delivery teams, coordinate the plan, and identify bottlenecks?
If you're a senior software leader who wants to analyse where your engineering resources are being consumed, and to model capacity and hiring scenarios, wouldn't you like a tool to provide those insights?
What if you received a top-down instruction to reduce headcount by 25% and had to model the impact on your roadmap, or needed to optimise your delivery plan across various funding scenarios - wouldn't it be nice to have a tool that does this quickly?
If you could run scenarios and see the impact in near real-time, wouldn't that save you a lot of time?
Wouldn't you like a tool that generates product and strategic narratives for you, without you writing a single sentence?
And if you're preparing for interviews in domains you're not familiar with, wouldn't you like a tool that generates a simulated mock system, giving you rich context so you show up prepared and informed?
If you've answered yes to at least one of these questions, then you might just be interested in an app I'm POC'ing :-) Click here to learn about the app, or just click here to start playing with it.
I had some good fun this past weekend building out AI features for my app "Software Management Tools", which is currently a single-page web application built entirely in vanilla JavaScript/HTML. It is hosted on GitHub and the source code is freely available here. This tool basically allows you to do everything above and much more. All you need to get started and experimenting is a Gemini API key (free tier). What's more interesting is that I didn't write a single line of code myself; thanks to Google's Gemini, OpenAI's Codex, GitHub, and Visual Studio Code, I've become a copy-and-paste developer. Seriously though, I do believe frontend engineers are at risk of losing their jobs to AI, and with the concepts my tool demonstrates, the need for technical managers is also questionable!
AI Features Implemented (Using Gemini 2.5 Flash model)
Users can create a mock system, organisational structure and 3-year plan from a single simple prompt like "Create a spreadsheet platform like Microsoft Excel"
Users now have access to an integrated chat assistant that has context of the data model and the screen currently selected, where they can:
Learn more about the system architecture
Find out about platform dependencies like AWS, GCP services
Inspect bottlenecks in the architecture
View, Edit, Inspect and modify organisational and team structures through the chat
Inspect and Optimise the year plan using AI assistant
Write up business plans and narratives for a 3 year plan
Critique the engineering skills set, look for risks
Tweak the capacity constraints and model different capacity scenarios - with AI-generated suggestions (a sketch of how the chat assistant assembles its context follows this list)
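Here's a hedged sketch of what "context of the data model and the screen currently selected" can look like in practice. The function and field names are hypothetical, not the app's actual API:

```javascript
// Assemble a single request the chat can send to the model: the current
// screen plus the relevant slice of the data model go into the system
// instruction, and prior turns are replayed to give the assistant memory.
function buildChatRequest(userMessage, appState, history) {
  const context = {
    currentScreen: appState.currentScreen, // e.g. 'yearPlanning'
    dataModel: appState.dataModel,         // systems, teams, plan, etc.
  };
  return {
    systemInstruction: {
      parts: [{
        text:
          'You are an assistant embedded in a software planning tool. ' +
          'Answer using ONLY the JSON context provided. Context: ' +
          JSON.stringify(context),
      }],
    },
    contents: [
      ...history.map(turn => ({
        role: turn.role, // 'user' or 'model'
        parts: [{ text: turn.text }],
      })),
      { role: 'user', parts: [{ text: userMessage }] },
    ],
  };
}
```

Scoping the context to the selected screen keeps the prompt small while still letting the assistant answer questions about the architecture, the plan, or the org chart currently in view.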
Where am I going next?
The possibilities are near endless - there's so much more to explore, but I've hit the classic stage of deciding to refactor the codebase to a more modern architecture, scrap the MVP and build again. Here are some ideas I'll progress next weekend:
Enable self-generating code contributions to the app itself
Build richer agents to perform complex operations from the chat like
Reduce all teams capacity by 20% and show the updated ATL/BTL plan
Move all initiatives under the Goal XYZ to BTL to free up capacity
Create a new system from the chat panel - an AI-wizard
Export the plan to Excel or Project
Create a detailed quarterly release plan for team XYZ
Merge these two teams into one team, reporting to a new SDM, move these services to the new team, keep remaining services in unallocated team
etc.
Build an appstore feature for users to share and publish their generated systems
Build an internal app API for the app to make agent creation more accessible
Refactor the codebase to cleanly separate out concerns - and make future porting easier
Start building out detailed planning features
Explore a backend-as-a-service to make simpler API key integrations possible for folks who are not software engineers
What have I learnt?
In just two days, I learnt a lot:
How to integrate AI-features into an app, navigating the API Key challenges
How to build a model router
How to fine-tune a system prompt for the task at hand
How to pass a JSON schema to the LLM describing the data structure you need it to return
How to validate generated data for compliance
How to build a chat assistant, with memory, history and context
How to build an agent framework
How to structure your code to make AI integrations portable and extensible
How to build effective retry and backoff mechanisms for when the AI provider is overloaded (see the sketch after this list)
How clunky the AI provider APIs are - whilst apps like ChatGPT and Gemini are multimodal, it is not as simple or clean to do this from an API; there should be a simple multimodal API for developers, one endpoint to call, without overly complex security concerns
How powerful and fun coding can be again!
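On the retry-and-backoff point, here is a minimal sketch of the pattern. The retry counts, delays, and status codes are illustrative, and `fn` is assumed to throw an error carrying the HTTP status:

```javascript
// Retry an async call with exponential backoff plus jitter whenever the
// provider signals overload (HTTP 429/503).
async function callWithBackoff(fn, { retries = 5, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const overloaded = err.status === 429 || err.status === 503;
      if (!overloaded || attempt === retries) throw err;
      // Exponential backoff with jitter: ~0.5s, 1s, 2s, 4s, ...
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

Any model call can then be wrapped, e.g. `callWithBackoff(() => generateDiagram(dataModel, apiKey))`, so overload errors degrade into a short wait instead of a failed feature.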
Do you have some demos? I can't access the tool.
I thought you'd never ask.
This video shows the AI creating a new system from a prompt, using the "Create with AI" feature:
This video shows using AI to inspect system architecture...
This video shows how AI can help manage product roadmaps...
This video shows how AI can assist with inspecting software delivery plans...
This video shows how managers can have fun using AI to manage their org structures...
I decided to feed my last blog post on leaving AWS into NotebookLM:
The provided text is an extended, introspective essay written by a senior engineering leader detailing the reasons for his resignation from Amazon Web Services (AWS) after four and a half years at the Cape Town headquarters. The author structures his departure around three core concerns: Spirituality and Conscience, particularly regarding the company's involvement in projects like Project Nimbus amid the Middle East conflict; Lack of Professional Growth, driven by structural changes within AWS and limited opportunities for promotion to director level (L8) in South Africa; and Negative Return on Investment (ROI) in Time, exemplified by increasing work hours, extensive weekend work, and compromised work-life balance, despite the financial success of his stock options. The essay concludes that the job no longer provided satisfactory personal, professional, or spiritual alignment, leading him to pursue a new role in Intelligent Traffic Systems with a local South African company.
Here's the audio generated (it got my RAGE model wrong; it stands for Reality, Aspirations, Goals, Expectations):
Here's the video generated (not bad, but it got 2023 mixed up with 2024) - but a good summary nonetheless:
Earlier this year, on 05/30 (that's how Amazon writes dates: 30th May), I walked out of the amazing Amazon Africa headquarters in Cape Town after four years and six months as a senior engineering leader. I'm writing this post after a three-month sabbatical break, and a week into my new job (started 09/01) in a totally different field: Intelligent Traffic Systems...
Here's a screenshot from my LinkedIn profile on my time with AWS:
Here's a video of me from 2022:
And here are some memories from my time there. Listen, I did enjoy my time with Kumo/AWS. I met some fantastic, highly intelligent, self-driven, motivated and resilient people from diverse backgrounds, from all over the world. AWS is indeed a special place. Their motto - Work Hard. Have Fun. Make History - is very real! For anyone considering working at AWS: go for it! But be sure to do your homework first. This is just my story, a single datapoint from 1.5 million Amazonians and 290,000+ AWS employees worldwide - and, in the small pond of South Africa (SA), a sample of one L7 engineering leader from a pool of ±7-9 managers servicing 250-300 software engineers across Kumo and EC2 (this alone hints that the opportunity for an L8 promo in SA is very small...)
Memorable moments with my teams...
I left AWS in good standing, should I ever decide to boomerang back. Here's the goodbye email my manager sent to 1,000+ people:
Why did you leave AWS? You seemed to be on a good trajectory, yes?
My AWS colleagues, family, friends, and now my new coworkers too, have asked me why I left AWS. Thoughts of exiting AWS started back in Q3 2024, triggered by the events unfolding in the Middle East. They caused me to reflect deeply on my personal and professional aspirations, critically assessing my value system, borne from my RAGE model, which has been my north star for over a decade.
It turned out that the key metric is Return on Investment (ROI) across these three dimensions: 1/ Spirituality; 2/ Professional Growth and 3/ Work-Life balance.
A walkthrough of an app I decided to build to help me with my work as a software engineering leader. Core thesis: I wanted a planning management tool centered around core system architecture, working backwards from there to model the organizational/team structures that feed the planning process. I found most project management tools lacking support for software architecture inputs - so I decided to build something to close this gap.
My journey started in late 2024 (October), when LLM models first started exposing coding capabilities. I wasn't too impressed by the likes of ChatGPT and Claude at the time. But with Gemini 2.5 entering the scene, I picked this project up again in April 2025 and slowly built out the MVP: first as a single-page application (HTML/JavaScript) with a basic old-skool UI (this video), then with a refreshed UI and refactored code (next video, "Building apps with AI (2) - MVP..."). Since I stopped coding professionally in 2010, this entire journey has been quite the learning experience. I'm going back to school again, but I've learnt so much about how to engage with the LLM that I'm increasingly confident I will find a place in the new world of AI. AI-assisted coding is definitely the future, but it is not that straightforward yet, even with the latest craze in vibe coding.
Just take a look at my document sharing my conversation with Gemini here.
50 pages and counting!!! Essentially you need to be very clear with articulating the requirements and interacting with the model as a true assistant, like you would with any human engineer, business analyst, or UX designer.
Listen to the podcast generated by NotebookLM here. You can interact with my Google NotebookLM here.
The rest of this blog post was written by my partner, Gemini >>>>
From Vision to MVP: Building a Software Management & Planning Powerhouse (with an AI Co-Pilot!)
For any software engineering leader, manager, or even a senior individual contributor, the landscape of modern software development is a complex beast. We juggle intricate system architectures, dynamic team structures, ever-shifting priorities, and the constant challenge of aligning finite resources with ambitious goals. What if there was a tool that not only helped model this complexity but also empowered smarter, data-driven planning?
That's the question that sparked the creation of the Software Management & Planning Tool, an MVP we've been passionately building. This isn't just about another planning application; it's about a foundational belief: effective planning starts by working backwards from your systems and organizational architecture.
This blog post shares the journey of bringing this vision to life – from a core idea to a feature-rich Minimum Viable Product – and the unique experience of co-developing it with a generative AI coding partner, Gemini.
The Core Idea: Architecture-First Planning
The tool is built upon a hierarchical understanding of a software organization:
Systems: The overarching products or platforms.
Services & APIs: The functional components within those systems, each with defined interfaces.
Teams (2-Pizza Teams): Agile squads owning services, each with software development managers (SDMs) and engineers.
Dependencies: Critical platform components and inter-service relationships.
This rich data model allows the application to provide a holistic view, forming the bedrock for all subsequent planning and visualization.
Our MVP Journey: An Iterative Evolution with an AI Partner
What started as a concept rapidly took shape, built incrementally feature by feature. The initial scaffolding was a single HTML/JavaScript file – a testament to getting ideas off the ground quickly. But the vision was always much larger.
Key Milestones in Our Development:
System & Org Modeling: The first step was enabling the definition of systems, services, APIs, teams, and their intricate relationships. This involved creating UI elements for adding, editing, and connecting these entities.
Visualizing Complexity: Static data isn't enough. We introduced dynamic visualizations:
System Architecture Diagrams: Showing services, APIs, and platform dependencies.
Team Relationship Graphs: Illustrating how teams connect based on service ownership.
Service Dependency Views: Drilling down into specific service connections.
Interactive Org Charts & Team Breakdowns: Providing clear views of the organizational structure and team compositions.
Initially, these were separate views, but based on user experience feedback, we evolved the system overview into a slick carousel navigation allowing users to seamlessly glide through these critical perspectives.
The Game Changer: Yearly Planning: This became a cornerstone. The goal was an interactive table to manage initiatives against team capacity, showing what's "Above The Line" (ATL) and "Below The Line" (BTL). We implemented:
Dynamic team columns and SDE Year estimations.
Drag-and-drop prioritization.
"Protected" initiatives that lock to the top.
The crucial concepts of "Funded Headcount" vs. "Builders In Seats" (BIS).
Introducing "Away Teams": Recognizing that real-world capacity isn't static, we enhanced the model to include "Away Team Members" – borrowed resources. This led to the "Effective BIS" metric, providing a more realistic picture of available capacity.
Deep Dive into Capacity Tuning: This was a pivotal enhancement. We built a dedicated "Tune Capacity Constraints" module where users can define:
Global defaults (working days, holidays, org-wide events).
Team-specific adjustments (leave uptake, variable leave, team activities, recurring overhead).
The output? A Calculated Net Project Capacity for each team, which can then feed back into the Yearly Planning view for a much more accurate ATL/BTL assessment. This module also includes a detailed narrative explanation and a waterfall chart to visualize capacity erosion. (A sketch of these calculations follows this milestones list.)
SDM Resource Forecasting: Leveraging the detailed capacity tuning, we integrated a forecasting model. SDMs can now select their team, input hiring and ramp-up estimates, and see a 52-week projection of their headcount and effective SDE availability, all adjusted by the previously configured capacity constraints.
UI/UX Modernization: More recently, we've focused on enhancing the user experience, refactoring the codebase into modular files, and adopting a cleaner, card-based UI reminiscent of modern web applications.
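To ground the capacity and ATL/BTL mechanics described above, here is a simplified sketch. Every field name and constant is an illustrative assumption, not the tool's real schema:

```javascript
// Net project capacity in SDE-years, after deducting leave, holidays,
// org events and recurring overhead (the "capacity erosion" waterfall).
function netProjectCapacity(team) {
  const workingDays = 261; // ~52 weeks x 5 days; an assumption
  const grossDays = team.effectiveBIS * workingDays;
  const deductions =
    team.effectiveBIS * (team.avgLeaveDays + team.publicHolidays) +
    team.orgEventDays +
    team.recurringOverheadDays;
  return Math.max(0, grossDays - deductions) / workingDays;
}

// Walk the priority-ordered initiatives, funding until capacity runs out:
// funded items are Above The Line (ATL), the rest Below The Line (BTL).
// "Protected" initiatives are funded first, regardless of remaining capacity.
function cutLine(initiatives, capacitySdeYears) {
  let remaining = capacitySdeYears;
  return initiatives.map(init => {
    const funded = init.protected || init.estimateSdeYears <= remaining;
    if (funded) remaining -= init.estimateSdeYears;
    return { ...init, status: funded ? 'ATL' : 'BTL' };
  });
}
```

Swapping the capacity input between Funded HC, Team BIS, Effective BIS, or Net Project Capacity is what makes the "what-if" scenarios in the planning view essentially free to compute.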
The Gemini Co-Pilot Experience:
This journey wasn't a solo flight. As the human visionary and lead developer, I partnered extensively with Gemini, my AI coding assistant. This collaboration was an experiment in itself:
Acceleration: Gemini was instrumental in rapidly generating boilerplate code, implementing UI components based on descriptions, and refactoring complex functions. This significantly sped up the development of the MVP.
Brainstorming & Ideation: We bounced ideas for UI/UX improvements, data model structures, and feature refinements (like the Team Load Summary table in the planning view).
The Learning Curve (for both of us!): It wasn't always smooth sailing. There were moments where context was lost between sessions, leading to regressions or suggestions that didn't quite fit the current state. Clarifying requirements meticulously, sometimes repeatedly, became key. Debugging involved a human-AI partnership – I'd often describe the unexpected behavior, and Gemini would help pinpoint potential issues in the code it (or we) had written. It was a process of learning how to "prompt" effectively and how to guide the AI when it drifted.
This experience has been a powerful demonstration of how GenAI can be a force multiplier for developers, turning ambitious ideas into working software at an accelerated pace, even if it requires a new kind of collaborative patience and precision.
Key Features That Shine
While the tool is packed with interconnected features, some of the "wow" factors that have emerged include:
Holistic System Modeling: The ability to define not just services, but also the teams that own them, the engineers within those teams (including AI SWEs and their attributes like skills and experience!), and the management structure, all in one place.
Insightful Visualizations: The carousel of system, team, and service dependency graphs provides powerful, at-a-glance insights into complex relationships.
Realistic Capacity Tuning: Moving beyond simple headcount, the "Tune Capacity Constraints" feature allows for granular adjustments, factoring in everything from public holidays and company events to team-specific leave uptake and meeting overhead. The resulting "Net Project Capacity" and its narrative explanation offer a truer picture of what teams can achieve.
Interactive Yearly Planning: The drag-and-drop planning table, combined with dynamic ATL/BTL calculations based on selectable scenarios (Funded HC, Team BIS, Effective BIS) and the option to apply the "Net Capacity Constraints," makes for a powerful "what-if" analysis tool. The Team Load Summary provides immediate feedback on team-level impacts.
Integrated SDM Resource Forecasting: This module closes the loop by allowing managers to see how their hiring plans and attrition realities, combined with the tuned capacity constraints, will affect their team's SDE availability throughout the year.
An Evolving, Modern UI: The recent shift towards a cleaner, card-based design with a persistent top navigation bar significantly improves usability.
The Journey Continues: The Power of Iteration
This MVP is a significant milestone, but it's also a foundation. The backlog, as detailed in our project's README.md, is rich with possibilities: enhanced yearly planning with ROI tracking and versioning, a dedicated roadmap and backlog management module, deeper detailed planning with work packages, and, most excitingly, integrating more AI-powered enhancements for analysis and interaction.
The development of this tool underscores the power of iterative development and the transformative potential of collaborating with AI coding assistants. It shows how a complex vision can be broken down and built, piece by piece, learning and refining along the way. While the path of AI-assisted development has its unique challenges (like ensuring context retention and guiding the AI through ambiguities), the ability to rapidly prototype, generate code, and explore solutions is undeniable.
For any engineering leader wrestling with the complexities of software planning, or any developer curious about the potential of AI partnerships, we hope this journey provides some inspiration. With the right vision and the right tools (both human and AI!), ambitious ideas can indeed be brought to life.
Man, Richard Hamming's talk is doing a number on me. So much of what he said back in 1986 is still relevant today! Calling out Hamming's "Seven Year Shift" policy, which resonated so much with me... this talk should go down as one of the best career talks in history...
Hamming's suggestion of a "significant, if not complete, shift in your field" every seven years might seem counterintuitive at first. Why change when you've built expertise and a reputation in a particular area? Here's a closer look at his reasoning, drawing on insights from the talk:
1. Combating Stagnation and Maintaining Originality:
Hamming argues that staying in one area for too long can lead to stagnation. You become overly reliant on familiar methods, your thinking becomes less flexible, and you may miss out on new and emerging opportunities.
He observes: "What happens to the old fellows is that they get a technique going; they keep on using it. They were marching in that direction which was right then, but the world changes. There's the new direction; but the old fellows are still marching in their former direction."
This idea emphasizes the importance of actively seeking new challenges and perspectives to keep your thinking fresh and continue growing.
2. The Power of "Beginner's Mind"
By shifting to a new area, you force yourself back into a "beginner's mind" state. You're no longer the expert, which can be liberating. This allows you to approach challenges with fresh eyes, ask fundamental questions, and potentially make connections that you wouldn't have seen otherwise.
Hamming, reflecting on his own career choices, says: "...when error correcting codes were well launched, having these theories, I said, 'Hamming, you are going to quit reading papers in the field; you are going to ignore it completely; you are going to try and do something else other than coast on that.' I deliberately refused to go on in that field. I wouldn't even read papers to try to force myself to have a chance to do something else."
He actively chose to become a "beginner" again, opening himself up to new possibilities.
3. Planting New "Acorns" for Future Growth
Hamming uses the metaphor of planting acorns that grow into mighty oaks to illustrate the importance of consistently nurturing new ideas and areas of expertise.
He advises: "You have to change. You get tired after a while; you use up your originality in one field. You need to get something nearby... so that you don't go stale... You can do something about this, but it takes effort and energy. It takes courage to say, 'Yes, I will give up my great reputation.'"
This suggests that even giving up a degree of established expertise can be beneficial in the long run, as it creates space for new growth and potentially even greater accomplishments.
4. The "Seven Year Shift" in Today's Context
While the specific timeframe of seven years might not be a rigid rule, the underlying principle of embracing change and seeking new challenges is highly relevant for professionals today:
Rapid Technological Advancements: In fields like technology, where change is constant and skills can become obsolete quickly, actively seeking new challenges and updating your knowledge base is essential.
The Evolving Nature of Work: The rise of the gig economy, remote work, and the increasing demand for adaptable skillsets mean that professionals need to be prepared to navigate career transitions more frequently.
Personal Growth and Fulfillment: Beyond external success, seeking new challenges can contribute to personal growth, prevent boredom, and keep you engaged and motivated throughout your career.
Hamming's "seven year shift" isn't about abandoning your expertise or making drastic career changes for the sake of it. It's about recognizing the dangers of complacency, embracing a mindset of continuous learning, and actively seeking out new challenges that can keep you engaged, adaptable, and at the forefront of your field.
Seed Prompt for tool (click here to play with it): I am a software engineering manager. A large part of my job is understanding the engineering resource utilization for my engineering team. I manage multiple engineering teams of varying sizes from 5 people to 20 people in a team. When estimating software projects, understanding availability of productive engineers is important for planning purposes. Help me build an interactive resource forecasting tool, that can be fully self-contained within browser app, embedded in a google blogspot page. The tool must include inputs for: 1\ Funded team size; 2\ Current Available engineers (i.e. actual current size of team). The tool must account for the time it takes for hiring new engineers, ramp-up time, learning, etc. before they can be deemed as productive for committing to project work. The tool must show a table the resource plan over 52 weeks in the year, and include a visual graph to show how the team size grows to meet funded size. Research the best methods in software engineering management, and create the tool that can help software managers have a good model for resource utilization. The tool can store information in the local user browser storage as needed. Remember, I need the full code to be fully self-contained so I can run the tool from within a google blogspot page.
Prompt for this article: Great I think we have good baseline version to start with. Now using your PhD capabilities, write up an accompanying blog post that details the model, the thinking behind it, and call out the limitations using a critical eye, suggesting improvements for the next version.
Prompt for Image using Dall-E: "An isometric illustration of a diverse engineering team collaborating in a modern office space. They are gathered around a large table filled with charts, graphs, and laptops, discussing and planning. Behind them, a wall-sized calendar and a whiteboard display timelines, Gantt charts, and upward-trending graphs. Include elements like gears and clock icons subtly integrated into the design to symbolize engineering and time management. Use a professional, clean art style with vibrant yet muted colors to convey a serious but engaging atmosphere."
Engineering Resource Forecasting: A Comprehensive Model for Team Planning
By ChatGPT o1-Preview (took about 4 hours to get the tool to version 1)
Effective resource planning is crucial for engineering teams aiming to meet project deadlines, optimize productivity, and manage budgets efficiently. The Engineering Resource Forecasting Tool is designed to help managers and team leads forecast their engineering resources over a year, considering various real-world factors such as hiring rates, attrition, ramp-up times, efficiency, leave days, and public holidays.
In this blog post, we'll delve into the model behind the tool, explore the thinking that shaped its development, critically analyze its limitations, and suggest improvements for future iterations.
Understanding the Model
1. Objectives of the Tool
The primary goals of the Engineering Resource Forecasting Tool are to:
Forecast Team Size: Predict how the engineering team will grow or shrink over time.
Calculate Effective Capacity: Determine the actual productive capacity of the team, considering factors that reduce availability.
Assist in Planning: Provide insights for hiring strategies, resource allocation, and project scheduling.
2. Key Components of the Model
The model incorporates several essential elements:
Funded Team Size: The maximum number of engineers that the budget allows.
Current Available Engineers: The starting point for the forecasting, representing the existing team.
Hiring Rate and Time: The rate at which new engineers are hired and the average time it takes for them to join.
Ramp-Up Time: The period new hires require to reach full productivity.
Attrition Rate: The expected percentage of engineers who will leave the team annually.
Efficiency Factor: The proportion of time engineers spend on productive work.
Leave Days and Public Holidays: Time when engineers are not available due to vacations and holidays.
3. Model Calculations
a. Effective Engineers
The number of effective engineers is calculated weekly by adjusting the total number of fully ramped-up engineers for efficiency and availability:
Effective Engineers = Total Ramped-Up Engineers × Efficiency Factor × (Weekly Available Days / 5)
b. Weekly Available Days
Engineers aren't available all five working days each week due to leave and public holidays:
Weekly Available Days = 5 - ((Annual Leave Days + Public Holidays) / 52)
c. Attrition
Cumulative Attrition: The total number of engineers who have left the team up to a given week.
d. Hiring and Ramp-Up Process
Hiring Pipeline: Represents engineers who have accepted offers but haven't started yet.
Ramping Engineers: New hires who are in the process of ramping up to full productivity.
e. SDE-Weeks and SDE-Days
SDE-Weeks: Sum of effective engineer contributions per week within a month.
SDE-Days: Sum of effective engineer contributions per day within a month.
These metrics provide a more granular view of the team's productive capacity.
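Translated into code, the weekly loop looks roughly like this. It is a sketch that deliberately keeps the simplifications (uniform leave, steady attrition, binary ramp-up) that the Limitations section below critiques:

```javascript
// Example: forecast({ currentEngineers: 8, fundedSize: 12, hiresPerWeek: 1,
//   rampUpWeeks: 8, annualAttritionRate: 0.1, efficiency: 0.7,
//   annualLeaveDays: 20, publicHolidays: 12 })
function forecast({
  currentEngineers, fundedSize, hiresPerWeek, rampUpWeeks,
  annualAttritionRate, efficiency, annualLeaveDays, publicHolidays,
}) {
  // (b) Weekly Available Days = 5 - ((Annual Leave + Public Holidays) / 52)
  const weeklyAvailableDays = 5 - (annualLeaveDays + publicHolidays) / 52;
  const weeklyAttrition = (annualAttritionRate * fundedSize) / 52; // steady rate
  let rampedUp = currentEngineers;
  const ramping = []; // start weeks of hires still ramping up
  const weeks = [];
  for (let week = 1; week <= 52; week++) {
    // Hires graduate to fully productive after rampUpWeeks (binary ramp-up).
    while (ramping.length && week - ramping[0] >= rampUpWeeks) {
      ramping.shift();
      rampedUp += 1;
    }
    // Keep hiring while headcount (ramped + ramping) is below funded size;
    // a batch may overshoot slightly, which is fine for a sketch.
    if (rampedUp + ramping.length < fundedSize) {
      for (let h = 0; h < hiresPerWeek; h++) ramping.push(week);
    }
    rampedUp = Math.max(0, rampedUp - weeklyAttrition);
    // (a) Effective = Ramped-Up x Efficiency x (Available Days / 5)
    const effective = rampedUp * efficiency * (weeklyAvailableDays / 5);
    weeks.push({
      week,
      rampedUp: +rampedUp.toFixed(2),
      effective: +effective.toFixed(2),
    });
  }
  return weeks;
}
```

Summing `effective` per month gives the SDE-Weeks figure; multiplying by `weeklyAvailableDays` gives SDE-Days.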
Thinking Behind the Model
1. Realistic Representation
The model aims to mirror real-world scenarios by considering:
Time Delays: Hiring and ramp-up times introduce delays between initiating a hire and gaining full productivity.
Non-Productive Time: Leave, holidays, and non-productive activities reduce actual working time.
Attrition Effects: Engineers may leave the team, impacting capacity and necessitating replacement hires.
2. Balancing Simplicity and Complexity
While the model incorporates multiple factors, it maintains usability by:
Using Average Rates: Distributes leave, holidays, and attrition evenly over the year for simplicity.
Assuming Linear Processes: Hiring and ramp-up are modeled as linear processes without accounting for fluctuations.
3. Providing Actionable Insights
By visualizing the forecast through charts and tables, the tool helps managers:
Identify Hiring Needs: Determine the hiring rate required to meet team size goals.
Plan for Attrition: Anticipate capacity reductions due to attrition.
Adjust Strategies: Modify inputs to explore different scenarios and their outcomes.
Limitations and Critical Analysis
Despite its usefulness, the model has several limitations:
1. Uniform Distribution Assumptions
Leave and Holidays: The model assumes that leave days and public holidays are evenly distributed throughout the year. In reality, leave may cluster around certain periods (e.g., summer vacations, year-end holidays), leading to temporary drops in capacity.
Attrition Timing: Attrition is modeled as a steady rate, but actual departures may occur unpredictably and in clusters.
2. Lack of Granular Ramp-Up Modeling
Binary Productivity: The model assumes engineers are either ramping up or fully productive after the ramp-up period. It doesn't account for gradual increases in productivity during ramp-up.
Individual Variations: All new hires are treated identically in terms of ramp-up time and efficiency, ignoring individual differences.
3. Hiring Constraints and Real-World Challenges
Hiring Rate Feasibility: The model calculates the hiring rate needed to meet targets but doesn't consider the practical challenges of achieving that rate, such as market conditions or recruitment resource limitations.
Candidate Pipeline Variability: It doesn't account for fluctuations in candidate availability or acceptance rates.
4. Simplified Efficiency Factor
Static Efficiency: The efficiency factor is constant over time, not accounting for potential improvements or declines due to training, process changes, or team dynamics.
Team Composition: It doesn't differentiate between engineers with different skill levels or specializations.
5. Absence of Project Dependencies
No Task Modeling: The model focuses on resource capacity without linking it to specific project tasks or timelines.
Overtime and Workload Peaks: It doesn't account for periods when engineers might work overtime or when workload exceeds capacity.
6. Economic and External Factors
Budget Changes: The funded team size is assumed constant, ignoring potential budget cuts or increases.
External Events: Unforeseen events like economic downturns, pandemics, or natural disasters are not modeled.
Suggestions for Improvement
To enhance the model's accuracy and applicability, future versions could consider:
1. Incorporating Seasonal Variations
Variable Leave and Holiday Distribution: Model leave and public holidays based on historical patterns to reflect periods of low and high availability.
Attrition Peaks: Adjust attrition rates to account for known periods of higher turnover.
2. Enhanced Ramp-Up Modeling
Gradual Productivity Increase: Implement a ramp-up curve where productivity increases incrementally over the ramp-up period (see the sketch below).
Individualized Ramp-Up: Allow different ramp-up times for different roles or experience levels.
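A minimal sketch of such a ramp-up curve; the 25% starting productivity and linear shape are assumptions for illustration:

```javascript
// Productivity grows linearly from 25% to 100% over the ramp-up window,
// instead of jumping from 0 to 1 at the end of it.
function rampUpFactor(weeksSinceStart, rampUpWeeks) {
  if (weeksSinceStart >= rampUpWeeks) return 1;
  return 0.25 + 0.75 * (weeksSinceStart / rampUpWeeks);
}

// A team's effective headcount then sums each engineer's current factor:
// team.map(e => rampUpFactor(week - e.startWeek, rampUpWeeks))
//     .reduce((a, b) => a + b, 0)
```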
3. Realistic Hiring Constraints
Recruitment Capacity Limits: Introduce maximum hiring rates based on recruitment team capacity.
Candidate Pipeline Dynamics: Factor in time-to-fill variations, candidate drop-off rates, and competition in the job market.
4. Dynamic Efficiency Factor
Efficiency Over Time: Allow the efficiency factor to change over time, reflecting process improvements or team fatigue.
Role-Based Efficiency: Differentiate efficiency factors for various roles or seniority levels.
5. Integration with Project Planning
Task-Level Planning: Link resource capacity to project tasks and milestones to assess whether capacity meets project demands.
Risk Modeling: Incorporate risks and uncertainties that could impact timelines and capacity.
6. Economic Scenario Planning
Budget Flexibility: Allow for adjustments in the funded team size based on potential budget changes.
External Shocks: Introduce scenarios for external events that could affect hiring, attrition, or productivity.
7. User Input Flexibility
Custom Time Frames: Enable users to adjust the time frame of the forecast beyond a single year.
Data Import: Allow users to import historical data to better tailor the model to their organization's patterns.
Conclusion
The Engineering Resource Forecasting Tool provides a solid foundation for planning and visualizing engineering team capacity over time. By accounting for critical factors like hiring, ramp-up, attrition, efficiency, leave, and holidays, it offers valuable insights for managers.
However, like any model, it simplifies complex real-world dynamics to remain practical and user-friendly. Recognizing its limitations is essential for interpreting the results accurately and making informed decisions. By incorporating suggested improvements, future versions can offer even more robust and precise forecasting, aiding organizations in navigating the challenges of resource planning.
About the Author
[Your Name] holds a PhD in [Your Field] and has extensive experience in project management, resource planning, and organizational development. Passionate about leveraging data-driven models to solve practical problems, [Your Name] contributes to advancing tools and methodologies that enhance operational efficiency.
Note: The Engineering Resource Forecasting Tool is intended as a planning aid and should be used in conjunction with professional judgment and other planning tools. Always consider your organization's specific circumstances when making staffing decisions.
It's been ages since I last shared a metaphor on this blog. This one hit me during my morning walk today, 8th August 2024. As I was trudging through the early morning mist, my brain made an odd connection: life is kinda like a game of Jenga.
You know Jenga, right? That game where you're trying to build the tallest tower possible by yanking out blocks and stacking them on top. It's all fun and games until someone knocks the whole thing over. But today, I saw it differently.
The Baseline: Your Value System
Picture this: you start with a perfect Jenga tower. All the blocks neatly stacked, everything solid and stable. That's you. That's your life, your values, everything you stand for. It's rock solid, built on all the principles you've picked up over the years.
The Dangerous Dance of Compromise
Now, here's where it gets interesting. Every time you play Jenga, you're taking a risk, right? You pull out a block, the tower gets taller, but it's wobbling more. In life, it's kinda the same deal:
Maybe you tell a little white lie. No big deal, right? But they start to add up...
Or you cut corners on a business deal. Just this once, you tell yourself.
Perhaps you're racking up some debt. One credit card becomes two, becomes three...
Or you let slide something that goes against your values. It's just a small thing, a papercut. But man, those papercuts sting after a while.
Every time you look the other way, every time you compromise, you're pulling out another block. Your tower might be getting taller, but it's getting shakier too.
The Inevitable Collapse
We all know how Jenga ends - with a crash and blocks all over the floor. Real life isn't always that dramatic, but the consequences can be way worse. All those little compromises? They add up:
Those white lies might snowball into you being known as "that guy who can't be trusted".
That shady business deal? It might come back to bite you in the butt, big time.
And that debt? Before you know it, you're drowning in it.
As for those "minor" infractions? They chip away at your self-respect, bit by bit.
The Lesson: Handle with Care
So what's the point of all this Jenga talk? It's pretty simple: be careful with your values, folks. Every choice you make, everything you do, it's like moving a block in your personal Jenga tower. Some moves might seem like no big deal at the time, but they all add up to how stable (or unstable) your life is.
This doesn't mean you should never take risks or change your mind about stuff. Heck, growing as a person often means shaking things up a bit. But it does mean being careful about the choices we make and thinking about where they might lead us down the road.
Building a Stable Tower
So, how do we use this Jenga metaphor to build a life that doesn't topple over at the slightest breeze?
Know your foundation: Take some time to think about what really matters to you. What are the blocks at the bottom of your tower?
Think before you pull: Before you do something that goes against your values, ask yourself if it's really worth it. Is that short-term gain worth making your whole tower wobble?
Shore up your base: Work on making your core values stronger. Read books, hang out with people who inspire you, practice what you preach in the little day-to-day stuff.
It's okay to rebuild: If you realize you've compromised too much, don't be afraid to start over. Sometimes you gotta go back to basics to build something that'll last.
Play the long game: Remember, life isn't a race to build the tallest tower as fast as you can. It's about building something that'll stand up to whatever life throws at it.
As we're all fumbling our way through this game called life, let's keep this Jenga thing in mind. Every decision, every compromise, every value we ignore - they're all blocks in our tower. Handle 'em carefully, think about your moves, and try to build a life that stands tall not just in height, but in being rock-solid and true to who you are.
So, what blocks are you moving around in your life's Jenga tower? And the million-dollar question: how stable is your foundation? Something to chew on, isn't it?