Started in 2011 as my outlet for sharing experiences on personal life and work topics. My handle is khanmjk; my friends & colleagues call me "Mo". This blog is an experiment, a work-in-progress.
Life: Self-awareness, personal development, growth hacking (quantified self).
Work: Software and Systems Engineering, Leadership, Management, New Ideas and OpEds.
So I had to test-drive the new Firebase Studio, powered by the latest Gemini 2.5 model that has the whole world talking and is currently leading the webdev LLM arena.
For my test, I simply prompted it to build a game like the classic Battleship, where the player plays against the computer. The game didn't work on the first try (what the industry calls a "one-shot" build), but it had a lot of potential, so I guided the assistant through fixing bugs, improving the user experience of the game layout, and tweaking the AI rules. Gemini was smart enough to use GenAI itself as the computer opponent, which I think was pretty neat. Historically, before AI, we would have needed to code the computer's moves and strategy, keep track of moves, and so on. Instead, we initiate an AI agent, task it with playing the opponent, and provide just enough context and guidelines for its moves (like: don't target a previously played cell).
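To make that concrete, here's a minimal sketch of the pattern in plain JavaScript. The helpers callGemini() and randomUntriedCell() are hypothetical stand-ins (the actual game's code differs); the point is that the opponent's "strategy" is just a prompt plus a validation step:

```javascript
// Minimal sketch of an LLM as game opponent (not the actual game code):
// give the model the board size and every previous shot as context, then
// validate its reply. callGemini() and randomUntriedCell() are hypothetical
// helpers standing in for the real model call and fallback logic.
async function getOpponentMove(previousMoves, boardSize = 10) {
  const prompt =
    `You are my opponent in Battleship on a ${boardSize}x${boardSize} grid.\n` +
    `Cells you have already targeted: ${JSON.stringify(previousMoves)}.\n` +
    `Pick your next shot. Never repeat a previous cell.\n` +
    `Reply with JSON only, e.g. {"row": 3, "col": 7}.`;

  try {
    const move = JSON.parse(await callGemini(prompt)); // hypothetical model call
    const repeat = previousMoves.some(m => m.row === move.row && m.col === move.col);
    const offBoard = move.row < 0 || move.row >= boardSize ||
                     move.col < 0 || move.col >= boardSize;
    if (!repeat && !offBoard) return move;
  } catch (_) { /* model replied with something unparseable */ }
  return randomUntriedCell(previousMoves, boardSize); // hypothetical fallback
}
```

The validation step matters: the model can still make illegal moves, so the guardrails live both in the prompt and in code.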
A walkthrough of an app I decided to build to help me with my work as a software engineering leader. Core thesis: I wanted a planning management tool centered on the core system architecture, working backwards from there to model the organizational and team structures that feed the planning process. I found most project management tools lacking support for software architecture inputs, so I decided to build something to close this gap.
My journey started in late 2024 (October), when LLM models started exposing coding capabilities. I wasn't too impressed by the likes of ChatGPT and Claude (at the time). But with Gemini 2.5 entering the scene, I picked this project up again in April 2025 and slowly built out the MVP: first as a single-page application (HTML/JavaScript) with a basic old-skool UI (this video), then with a refreshed UI and refactored code (next video, "Building apps with AI (2) - MVP..."). Since I stopped coding professionally in 2010, this entire journey was quite the learning experience. I'm going back to school again, but I've learnt so much about how to engage with the LLM that I'm increasingly confident I will find a place in the new world of AI. AI-assisted coding is definitely the future, but it is not that straightforward yet, even with the latest craze in vibe coding.
Just take a look at my document sharing my conversation with Gemini here.
50 pages and counting!!! Essentially, you need to be very clear in articulating the requirements and interact with the model as a true assistant, just as you would with any human engineer, business analyst, or UX designer.
Listen to the podcast generated by NotebookLM here. You can also interact with my Google NotebookLM here.
The rest of this blog post was written by my partner, Gemini >>>>
From Vision to MVP: Building a Software Management & Planning Powerhouse (with an AI Co-Pilot!)
For any software engineering leader, manager, or even a senior individual contributor, the landscape of modern software development is a complex beast. We juggle intricate system architectures, dynamic team structures, ever-shifting priorities, and the constant challenge of aligning finite resources with ambitious goals. What if there was a tool that not only helped model this complexity but also empowered smarter, data-driven planning?
That's the question that sparked the creation of the Software Management & Planning Tool, an MVP we've been passionately building. This isn't just about another planning application; it's about a foundational belief: effective planning starts by working backwards from your systems and organizational architecture.
This blog post shares the journey of bringing this vision to life – from a core idea to a feature-rich Minimum Viable Product – and the unique experience of co-developing it with a generative AI coding partner, Gemini.
The Core Idea: Architecture-First Planning
The tool is built upon a hierarchical understanding of a software organization:
Systems: The overarching products or platforms.
Services & APIs: The functional components within those systems, each with defined interfaces.
Teams (2-Pizza Teams): Agile squads owning services, each with software development managers (SDMs) and engineers.
Dependencies: Critical platform components and inter-service relationships.
This rich data model allows the application to provide a holistic view, forming the bedrock for all subsequent planning and visualization.
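To illustrate, the hierarchy could be captured in a structure along these lines (a sketch with made-up names and fields, not the tool's actual schema):

```javascript
// Illustrative shape of the architecture-first data model (names are assumptions).
const organization = {
  systems: [{
    id: "sys-payments",
    name: "Payments Platform",                      // an overarching product/platform
    services: [{
      id: "svc-billing",
      name: "Billing Service",
      apis: [{ name: "CreateInvoice", interface: "REST" }],
      owningTeamId: "team-atlas",
      platformDependencies: ["DynamoDB", "SQS"],    // critical platform components
      serviceDependencies: ["svc-ledger"]           // inter-service relationships
    }]
  }],
  teams: [{
    id: "team-atlas",                               // a 2-pizza team
    name: "Team Atlas",
    sdmId: "sdm-01",
    engineers: [{ name: "Sam", level: "SDE2" }]
  }]
};
```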
Our MVP Journey: An Iterative Evolution with an AI Partner
What started as a concept rapidly took shape, built incrementally feature by feature. The initial scaffolding was a single HTML/JavaScript file – a testament to getting ideas off the ground quickly. But the vision was always much larger.
Key Milestones in Our Development:
System & Org Modeling: The first step was enabling the definition of systems, services, APIs, teams, and their intricate relationships. This involved creating UI elements for adding, editing, and connecting these entities.
Visualizing Complexity: Static data isn't enough. We introduced dynamic visualizations:
System Architecture Diagrams: Showing services, APIs, and platform dependencies.
Team Relationship Graphs: Illustrating how teams connect based on service ownership.
Service Dependency Views: Drilling down into specific service connections.
Interactive Org Charts & Team Breakdowns: Providing clear views of the organizational structure and team compositions.
Initially, these were separate views, but based on user experience feedback, we evolved the system overview into a slick carousel navigation allowing users to seamlessly glide through these critical perspectives.
The Game Changer: Yearly Planning: This became a cornerstone. The goal was an interactive table to manage initiatives against team capacity, showing what's "Above The Line" (ATL) and "Below The Line" (BTL). We implemented:
Dynamic team columns and SDE Year estimations.
Drag-and-drop prioritization.
"Protected" initiatives that lock to the top.
The crucial concepts of "Funded Headcount" vs. "Builders In Seats" (BIS).
Introducing "Away Teams": Recognizing that real-world capacity isn't static, we enhanced the model to include "Away Team Members" – borrowed resources. This led to the "Effective BIS" metric, providing a more realistic picture of available capacity.
Deep Dive into Capacity Tuning: This was a pivotal enhancement. We built a dedicated "Tune Capacity Constraints" module where users can define:
Global defaults (working days, holidays, org-wide events).
Team-specific adjustments (leave uptake, variable leave, team activities, recurring overhead).
The output? A Calculated Net Project Capacity for each team, which can then feed back into the Yearly Planning view for a much more accurate ATL/BTL assessment. This module also includes a detailed narrative explanation and a waterfall chart to visualize capacity erosion. (A simplified sketch of this arithmetic, including the ATL/BTL cut, follows this list.)
SDM Resource Forecasting: Leveraging the detailed capacity tuning, we integrated a forecasting model. SDMs can now select their team, input hiring and ramp-up estimates, and see a 52-week projection of their headcount and effective SDE availability, all adjusted by the previously configured capacity constraints. (The ramp-up idea is also sketched below.)
UI/UX Modernization: More recently, we've focused on enhancing the user experience, refactoring the codebase into modular files, and adopting a cleaner, card-based UI reminiscent of modern web applications.
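To make the planning arithmetic above concrete, here is a simplified sketch in JavaScript. The field names and formulas are illustrative assumptions rather than the tool's actual code; the real deduction categories and rounding differ:

```javascript
// Illustrative sketch of the capacity math (assumed formulas, not the tool's).
// Effective BIS counts borrowed "away team" members on top of Builders In Seats,
// and Net Project Capacity erodes gross capacity by each constraint category.
function effectiveBIS(team) {
  return team.buildersInSeats + team.awayTeamMembers.length;
}

function netProjectCapacitySDEYears(team, globals) {
  const gross = effectiveBIS(team);                  // SDE-years before deductions
  const deductionDaysPerSDE =
    globals.publicHolidayDays +
    globals.orgWideEventDays +
    team.avgLeaveDays +                              // leave uptake
    team.teamActivityDays +
    team.recurringOverheadDays;                      // meetings, on-call, etc.
  return gross * (1 - deductionDaysPerSDE / globals.workingDaysPerYear);
}

// ATL/BTL cut: walk the prioritized initiatives, accumulating estimates until
// the selected capacity runs out. "Protected" initiatives are pinned to the top.
function splitATLBTL(initiatives, capacitySDEYears) {
  const ordered = [...initiatives]                   // stable sort keeps drag order
    .sort((a, b) => (b.protected ? 1 : 0) - (a.protected ? 1 : 0));
  let used = 0;
  return ordered.map(init => {
    used += init.estimateSDEYears;
    return { ...init, aboveTheLine: init.protected || used <= capacitySDEYears };
  });
}
```

In the tool itself, the capacity figure fed into the cut comes from whichever scenario is selected (Funded HC, Team BIS, or Effective BIS, optionally with the Net Capacity Constraints applied).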
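And here is a guess at how the 52-week forecast could work under a simple linear ramp-up model (again an assumption; the tool's actual model also accounts for attrition and the tuned capacity constraints):

```javascript
// Sketch of a 52-week effective-SDE projection: each hire starts contributing
// on startWeek and ramps linearly to full productivity over rampUpWeeks.
function forecastEffectiveSDEs(team, hires, weeks = 52) {
  const projection = [];
  for (let week = 1; week <= weeks; week++) {
    let sdes = team.buildersInSeats;
    for (const hire of hires) {
      if (week >= hire.startWeek) {
        const ramp = Math.min(1, (week - hire.startWeek + 1) / hire.rampUpWeeks);
        sdes += ramp;                                // fractional until fully ramped
      }
    }
    projection.push({ week, effectiveSDEs: sdes });
  }
  return projection;
}
```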
The Gemini Co-Pilot Experience:
This journey wasn't a solo flight. As the human visionary and lead developer, I partnered extensively with Gemini, my AI coding assistant. This collaboration was an experiment in itself:
Acceleration: Gemini was instrumental in rapidly generating boilerplate code, implementing UI components based on descriptions, and refactoring complex functions. This significantly sped up the development of the MVP.
Brainstorming & Ideation: We bounced ideas for UI/UX improvements, data model structures, and feature refinements (like the Team Load Summary table in the planning view).
The Learning Curve (for both of us!): It wasn't always smooth sailing. There were moments where context was lost between sessions, leading to regressions or suggestions that didn't quite fit the current state. Clarifying requirements meticulously, sometimes repeatedly, became key. Debugging involved a human-AI partnership – I'd often describe the unexpected behavior, and Gemini would help pinpoint potential issues in the code it (or we) had written. It was a process of learning how to "prompt" effectively and how to guide the AI when it drifted.
This experience has been a powerful demonstration of how GenAI can be a force multiplier for developers, turning ambitious ideas into working software at an accelerated pace, even if it requires a new kind of collaborative patience and precision.
Key Features That Shine
While the tool is packed with interconnected features, some of the "wow" factors that have emerged include:
Holistic System Modeling: The ability to define not just services, but also the teams that own them, the engineers within those teams (including AI SWEs and their attributes like skills and experience!), and the management structure, all in one place.
Insightful Visualizations: The carousel of system, team, and service dependency graphs provides powerful, at-a-glance insights into complex relationships.
Realistic Capacity Tuning: Moving beyond simple headcount, the "Tune Capacity Constraints" feature allows for granular adjustments, factoring in everything from public holidays and company events to team-specific leave uptake and meeting overhead. The resulting "Net Project Capacity" and its narrative explanation offer a truer picture of what teams can achieve.
Interactive Yearly Planning: The drag-and-drop planning table, combined with dynamic ATL/BTL calculations based on selectable scenarios (Funded HC, Team BIS, Effective BIS) and the option to apply the "Net Capacity Constraints," makes for a powerful "what-if" analysis tool. The Team Load Summary provides immediate feedback on team-level impacts.
Integrated SDM Resource Forecasting: This module closes the loop by allowing managers to see how their hiring plans and attrition realities, combined with the tuned capacity constraints, will affect their team's SDE availability throughout the year.
An Evolving, Modern UI: The recent shift towards a cleaner, card-based design with a persistent top navigation bar significantly improves usability.
The Journey Continues: The Power of Iteration
This MVP is a significant milestone, but it's also a foundation. The backlog, as detailed in our project's README.md, is rich with possibilities: enhanced yearly planning with ROI tracking and versioning, a dedicated roadmap and backlog management module, deeper detailed planning with work packages, and, most excitingly, integrating more AI-powered enhancements for analysis and interaction.
The development of this tool underscores the power of iterative development and the transformative potential of collaborating with AI coding assistants. It shows how a complex vision can be broken down and built, piece by piece, learning and refining along the way. While the path of AI-assisted development has its unique challenges (like ensuring context retention and guiding the AI through ambiguities), the ability to rapidly prototype, generate code, and explore solutions is undeniable.
For any engineering leader wrestling with the complexities of software planning, or any developer curious about the potential of AI partnerships, we hope this journey provides some inspiration. With the right vision and the right tools (both human and AI!), ambitious ideas can indeed be brought to life.
Okay, I still can't help myself - here's a fully functional Minesweeper game, coded one-shot by Google Gemini in less than 2 minutes with no issues. Granted, it's been years since I played this game!
So I couldn't help myself - getting Gemini to bring back some memories of classic apps from the past, this time Microsoft's disk defrag tool. I remember spending hours with this tool; just watching the blocks move was so satisfying!
Norton Antivirus - created one-shot, worked first time. Here's the prompt for nortonGPT; source code is on GitHub.
I'm going through a phase of nostalgia, thinking about the classic tools and applications I used in the 90s and 2000s. Can you help me create a simulation of the classic MSDOS antivirus program for Norton Antivirus? The simulation must run for 2 minutes - it should show scanning the hard disk (simulated) but referring to actual files on local disk, using the classic progress indicator of a spinning "\-/-/" sequence, and simulating some viruses and quarantining effects. Implement the simulation as a single page web application that can be served on any browser.
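For anyone curious, the spinner effect in a simulation like this boils down to cycling a few characters on a timer. A minimal sketch (the spinner element and frame order are my assumptions, not the actual nortonGPT markup):

```javascript
// Classic text spinner: cycle through frames on a timer, then stop after the
// simulated 2-minute scan. Assumes a <span id="spinner"></span> in the page.
const frames = ["|", "/", "-", "\\"];
let frame = 0;
const spinner = document.getElementById("spinner");
const timer = setInterval(() => {
  spinner.textContent = frames[frame];
  frame = (frame + 1) % frames.length;
}, 100); // ~10 updates per second

setTimeout(() => clearInterval(timer), 2 * 60 * 1000); // end of the "scan"
```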
Windows Defrag - took 5 iterations and less than 30 minutes to generate. It can still be improved, but the idea is to show what can be done (and the fun that can be had) with these new AI coding assistants!
I asked Gemini to summarize our interaction journey in building the Pac-Man game. Roughly 6 hours of my time, well spent learning how to use this powerful piece of technology! I can definitely say the tech has come a long way: not much hallucinating, very large input windows, no running out of memory, no lost context, no running out of tokens, no expired sessions... And even though I am a paid subscriber, I got the same experience using the free version in Google's AI Studio! This is unbelievable!
Our Pac-Man Project: A Development Journey
This post summarizes the collaborative and iterative process of building a classic-style Pac-Man game using HTML, CSS, and JavaScript, culminating in a functional tile-based version and a detailed tutorial explaining its mechanics.
Goal: Create a fully functioning, static Pac-Man game suitable for a webpage or blog.
(Estimated Time: This collaborative project spanned numerous iterations. While AI interaction time differs from human development, the back-and-forth debugging, code generation, and refinement represent an estimated equivalent of 4-6 hours of focused development effort.)
(Approximate Timeline & Key Stages)
Initial Request & Pixel-Based Attempt:
Prompt: User requested a "fully functioning classic style pacman game that can be served in a static webpage...".
Action: I generated the first version (pacman_game_v1) using HTML Canvas and JavaScript, aiming for pixel-based movement for smoothness.
Outcome: User reported issues: "not working, not interactive, there is no gameplay".
Debugging the Pixel-Based Version:
Prompt: User reported Pac-Man stuck in walls, ghosts making one move and stopping, game stalling after first key press (with screenshots).
Action: Multiple attempts were made to refine the initialization, game loop activation, movement logic (movePacman), and especially the collision detection (canMove) in the pixel-based version. Collision detection with pixel-perfect movement proved very tricky to get right remotely.
Outcome: Persistent issues with characters getting stuck indicated the pixel-based approach was too fragile to debug effectively in this environment.
Strategy Shift: Tile-Based Logic:
Prompt: User suggested alternatives (WASM); I proposed simplifying to a tile-based system first. User agreed: "ok, let's give javascript one more try!".
Action: Created a new version (pacman_game_tile_based) from scratch, focusing on grid coordinates, discrete tile-to-tile movement, and simplified collision checks (isPassable), sketched below.
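The appeal of the tile-based approach is how simple movement and collision become. A sketch of the idea (illustrative, not the game's exact code; the real isPassable also handles tunnels and the ghost pen):

```javascript
// Tile-based movement sketch: positions are grid coordinates, and a move is
// legal if the destination tile isn't a wall.
const WALL = 1;

function isPassable(maze, gridX, gridY) {
  const row = maze[gridY];
  if (!row || row[gridX] === undefined) return false; // off the grid
  return row[gridX] !== WALL;
}

function step(entity, dir, maze) {
  const nextX = entity.gridX + dir.dx;
  const nextY = entity.gridY + dir.dy;
  if (isPassable(maze, nextX, nextY)) {
    entity.gridX = nextX;
    entity.gridY = nextY;
  }
}
```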
Debugging the Tile-Based Version - Core Mechanics:
Issue: User reported a "black screen with some text", worse than before.
Action: Identified and fixed an initialization error where the maze was drawn before its data (dynamicMaze) was ready.
Issue: User reported Pac-Man still stuck (with screenshot).
Action: Refined movePacman logic for input buffering.
Issue: User reported Pac-Man still stuck, drawn inside a wall (with screenshot). User asked about starting coordinates relative to wall.
Action: Realized Pac-Man's initial gridY was set to 17, which was a wall tile in mazeLayout1. Corrected start gridY to 16.
Issue: User confirmed Pac-Man moved but reported the tunnels weren't working visually – looked like solid walls blocked entry (with screenshots).
Action: Verified mazeLayout data did have paths (5) leading to the tunnel. Changed drawMaze to explicitly draw paths black, then pink for clarity upon user request.
Issue: User insisted entry was still blocked by a wall before the edge, providing specific coordinates and maze analysis ("No, you are in error...").
Action: Acknowledged user was correct – I had missed blocking walls (1) in the mazeLayout data itself preventing access to the tunnel path tiles. Corrected the mazeLayout data for levels 1, 2, and 3. Removed pink path drawing.
Issue: User reported unresponsive keys ("My last gripe...").
Action: Refined input buffering logic in movePacman to only clear the buffer after a turn was successfully applied, making it remember intended turns (see the sketch below).
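The buffering fix is worth seeing in code. Roughly (an illustrative reconstruction, reusing isPassable from the earlier sketch, not the game's exact code):

```javascript
// Buffering sketch: remember the player's intended turn and clear it only once
// the turn is actually applied, so a key pressed just before a junction isn't lost.
let bufferedDir = null;

const KEY_DIRS = {
  ArrowUp:    { dx: 0,  dy: -1 },
  ArrowDown:  { dx: 0,  dy: 1  },
  ArrowLeft:  { dx: -1, dy: 0  },
  ArrowRight: { dx: 1,  dy: 0  },
};

document.addEventListener("keydown", e => {
  if (KEY_DIRS[e.key]) bufferedDir = KEY_DIRS[e.key]; // keep only the latest intent
});

function movePacman(pacman, maze) {
  // Try the buffered turn first; clear the buffer only if the turn succeeds.
  if (bufferedDir &&
      isPassable(maze, pacman.gridX + bufferedDir.dx, pacman.gridY + bufferedDir.dy)) {
    pacman.dir = bufferedDir;
    bufferedDir = null;
  }
  // Otherwise keep going in the current direction, if possible.
  const nx = pacman.gridX + pacman.dir.dx;
  const ny = pacman.gridY + pacman.dir.dy;
  if (isPassable(maze, nx, ny)) {
    pacman.gridX = nx;
    pacman.gridY = ny;
  }
}
```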
Debugging Tile-Based Version - Ghost & Game Logic:
Issue: User reported only one ghost leaving the pen, others stuck; eaten ghosts not respawning correctly.
Action: Identified that the initial maze layout blocked upward movement for some ghosts in the pen. Corrected mazeLayout above the pen. Implemented staggered ghost exit logic using gameSteps and canLeavePen flags. Modified respawn logic to allow immediate exit attempts. (This included fixing the flawed Level 3 maze pen).
Issue: User reported ghosts getting stuck again after Pac-Man loses a life.
Action: Realized the gameSteps counter wasn't being reset on life loss, causing all ghosts to get permission to leave simultaneously. Added gameSteps = 0; to resetPositionsAfterLifeLost (sketched below).
Outcome: User confirmed basic mechanics seemed functional ("excellent. I think we are getting close...").
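Roughly, the staggered-exit and reset logic looks like this (an illustrative reconstruction; gameSteps, canLeavePen, and resetPositionsAfterLifeLost are from the game, while exitAfterSteps and the thresholds are my stand-ins):

```javascript
// Staggered-exit sketch: each ghost gets permission to leave the pen after a
// different number of game steps. The counter must be reset on life loss, or
// every ghost gets permission to leave at once (the bug described above).
let gameSteps = 0;

const ghosts = [
  { name: "blinky", exitAfterSteps: 0,  canLeavePen: false },
  { name: "pinky",  exitAfterSteps: 30, canLeavePen: false },
  { name: "inky",   exitAfterSteps: 60, canLeavePen: false },
  { name: "clyde",  exitAfterSteps: 90, canLeavePen: false },
];

function updateGhostPermissions() {
  gameSteps++;
  for (const g of ghosts) {
    if (gameSteps >= g.exitAfterSteps) g.canLeavePen = true;
  }
}

function resetPositionsAfterLifeLost() {
  gameSteps = 0; // the missing reset that caused the stuck-ghost bug
  for (const g of ghosts) g.canLeavePen = false;
  // ...also move Pac-Man and the ghosts back to their start tiles
}
```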
Adding Features:
Prompt: User asked for missing key features.
Action: Identified Bonus Fruit. Implemented fruit logic (appearance based on dotsEaten, timer, collection, drawing).
Issue: User reported fruit appearing inside a wall.
Action: Corrected the FRUIT_SPAWN_TILE coordinates.
Outcome: User confirmed fruit worked. User asked for multiple levels and music.
Action: Added multiple levels (storing layouts, modifying game flow). Deferred music request.
Issue: User identified flawed Level 3 ghost pen design trapping ghosts ("haha, I'm really having fun...").
Action: Corrected mazeLayout3 pen entry walls.
Issue: User reported subsequent version was broken ("Nothing is loading").
Action: Reverted code back to the last known good state requested by the user (the version incorporating the fix for resetting ghost timers on life loss).
Documentation:
Prompt: User requested documentation in the form of a blog post/tutorial for students.
Action: Generated the Markdown tutorial (pacman_tutorial_blog_post).
Prompt: User requested conversion to publishable HTML.
Action: Converted the Markdown to a single HTML file with embedded CSS (pacman_tutorial_html).
Prompt: User requested FAQs be added to the tutorial.
Action: Added FAQ section to the Markdown tutorial artifact.
Final Summary Request:
Prompt: User requested this summary, including timeline and prompts.
Action: Generating the initial summary.
Prompt: User requested adding estimated time.
Action: Added time estimate to the summary.
Prompt: User requested adding LLM statistics.
Action: Adding this section now.
Notes on the AI Collaboration & Model:
This project was developed collaboratively with an AI assistant based on Google's Gemini models. Here are a few points regarding the AI's capabilities relevant to this project:
Context Window: Models like the one used here often have large context windows (the amount of previous conversation and code they can "remember" and consider). This capability, ranging from tens of thousands to potentially over a million tokens (pieces of words), was crucial for maintaining consistency and iterating on the code across multiple turns and complex debugging steps. However, the exact amount of context used in this specific session isn't tracked.
Code Generation: The AI generated substantial amounts of HTML, CSS, and JavaScript code. The final baseline game artifact (pacman_game_tile_based) contains roughly 750-800 lines of code (including HTML structure, CSS rules, and JavaScript logic). This doesn't count the code generated in earlier, discarded versions or the separate documentation artifacts.
Iterative Refinement: The process heavily relied on the AI's ability to understand bug reports (often accompanied by screenshots analyzed via multimodal capabilities), analyze existing code, identify potential errors, propose fixes, and generate updated code versions incorporating those fixes.
Challenges & Learnings:
Debugging complex interactions (collision, state timing) without direct execution environment access is challenging and requires careful logic tracing and clear user feedback.
Visual feedback (screenshots, explicit drawing changes like the pink tunnel) is crucial but can sometimes be misinterpreted if the underlying data or logic is flawed (as seen with the tunnel entry walls).
Clear communication and persistence from the user were essential in identifying subtle bugs (like the maze layout errors and start position).
Iterative refinement is key – starting simple (tile-based) often leads to a working solution faster than tackling complex approaches (pixel-based) immediately. Fixing one bug can sometimes reveal or cause another!
Final Artifacts:
Game Code: pacman_game_tile_based (specifically, the version titled "Pac-Man Game (Tile-Based - Life Loss Restart Fix)", which was established as the baseline before documentation).
Tutorial: pacman_tutorial_html (HTML blog post format explaining the game code).
This covers the main points of our collaboration on this project! It was a great example of iterative development and debugging facilitated by AI.