Started in 2011 as my outlet to share my experiences on personal life and work topics. My handle is khanmjk. My friends & colleagues call me "Mo".
This blog is an experiment, work-in-progress.
Life: Self-awareness, personal development, growth hacking (quantified self).
Work: Software and Systems Engineering, Leadership, Management, New Ideas and OpEds.
So I've added some bulk operations to the SMT app.
Bulk AI Agent Scenarios (Macro Operations)
Empower the agent to perform complex, multi-entity updates that are tedious to do manually. Each plan highlights the scope before execution (e.g., “This will affect 12 teams...”).
1) Capacity & Resourcing Manager
User intent: “Reduce all teams’ capacity by 20% to account for burnout.” / “Set everyone’s AI productivity gain to 15%.”
Tool: bulkUpdateTeamCapacity (supports capacityReductionPercent, aiProductivityGainPercent, avgOverheadHoursPerWeekPerSDE, plus filters by teamIds or orgIdentifier).
2) Strategic Portfolio Manager
User intent: “Move all ‘Low ROI’ initiatives to the Backlog.” / “Approve all initiatives under the ‘Cloud Migration’ goal.”
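For flavour, a tool like bulkUpdateTeamCapacity could be declared against Gemini's generateContent REST API roughly like the sketch below. The parameter names mirror the tool described above; the descriptions and the fetch wrapper are illustrative, not the app's actual code:

```javascript
// Sketch of a Gemini function declaration for the bulk capacity tool.
// Parameter names come from the post; the schema shape follows the
// public generateContent REST API (tools -> functionDeclarations).
const bulkCapacityTool = {
  functionDeclarations: [{
    name: "bulkUpdateTeamCapacity",
    description: "Apply capacity changes across many teams at once.",
    parameters: {
      type: "OBJECT",
      properties: {
        capacityReductionPercent: { type: "NUMBER", description: "e.g. 20 for a 20% cut" },
        aiProductivityGainPercent: { type: "NUMBER" },
        avgOverheadHoursPerWeekPerSDE: { type: "NUMBER" },
        teamIds: { type: "ARRAY", items: { type: "STRING" } },
        orgIdentifier: { type: "STRING" }
      }
    }
  }]
};

// The agent loop then passes the tool alongside the user's chat turn:
async function askAgent(apiKey, userText) {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{ role: "user", parts: [{ text: userText }] }],
        tools: [bulkCapacityTool]
      })
    }
  );
  return res.json(); // inspect candidates[0] for a functionCall part
}
```

When the model decides the user's request matches the tool, it returns a functionCall with the filled-in arguments, which the app can preview ("This will affect 12 teams...") before executing.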
Have you ever been curious to learn how popular systems like Excel, Visual Studio, Adobe Photoshop, Call of Duty, Amazon, Uber, AirBnB, ChatGPT, Netflix, or the Apple App Store are architected and built?
If you're a software manager designing a technical organisation, wouldn't you like some insights into how the best technology teams are organised, drawing on how the big tech companies do it?
What if you're stepping into product and program management, need to create a 3-year roadmap with business goals and Return on Investment metrics, and want a single-page dashboard of overall status?
What if you're a software delivery manager who needs to manage multiple delivery teams, co-ordinate the plan, and identify bottlenecks?
If you're a senior software leader who wishes to analyse where your engineering resources are being consumed and model capacity and hiring scenarios, wouldn't you like a tool to provide those insights?
What if you received a top-down instruction to reduce headcount by 25% and had to model the impact on your roadmap, or needed to optimise your delivery plan against various funding scenarios - wouldn't it be nice to have a tool to do this quickly?
If you could run scenarios and see the impact in near real-time, wouldn't that save you a lot of time?
Wouldn't you like a tool that generates product and strategic narratives for you without you writing a single sentence?
And if you're preparing for interviews in domains you're not familiar with, wouldn't you like a tool that generates a simulated mock system, giving you rich context so you show up prepared and more informed?
If you've answered Yes to at least one of these questions, then you might just be interested in an app I'm POC'ing :-) Click here to learn about the app, or just click here to start playing with it.
I had some good fun this past weekend building out AI features for my app "Software Management Tools", currently a single-page web application built entirely in vanilla JavaScript/HTML. It is hosted on GitHub and the source code is freely available here. The tool lets you do everything above and much more; all you need to get started and experiment is a Gemini API key (free tier). What's more interesting is that I didn't write a single line of code myself. Thanks to Google's Gemini, OpenAI's Codex, GitHub and Visual Studio Code, I've become a copy-and-paste developer. Seriously though, I do believe frontend engineers are at risk of losing their jobs to AI, and with the concepts my tool demonstrates, the need for technical managers is also questionable!
AI Features Implemented (Using Gemini 2.5 Flash model)
Users can create a mock system, organisational structure and 3-year plan from a single simple prompt like "Create a spreadsheet platform like Microsoft Excel"
Users now have access to an integrated chat assistant that has context of the data model and the currently selected screen, where they can:
Learn more about the system architecture
Find out about platform dependencies like AWS, GCP services
Inspect bottlenecks in the architecture
View, Edit, Inspect and modify organisational and team structures through the chat
Inspect and Optimise the year plan using AI assistant
Write up business plans and narratives for a 3 year plan
Critique the engineering skills set, look for risks
Tweak the capacity constraints and model different capacity scenarios - with AI-generated suggestions
Where am I going next?
The possibilities are near endless - there's so much more to explore, but I've hit the classic stage of deciding to refactor the codebase to a more modern architecture, scrap the MVP and build again. Here are some ideas I'll progress next weekend:
Enable self-generating code contributions to the app itself
Build richer agents to perform complex operations from the chat like
Reduce all teams' capacity by 20% and show the updated ATL/BTL plan
Move all initiatives under the Goal XYZ to BTL to free up capacity
Create a new system from the chat panel - an AI-wizard
Export the plan to Excel or Project
Create a detailed quarterly release plan for team XYZ
Merge these two teams into one team, reporting to a new SDM, move these services to the new team, keep remaining services in unallocated team
etc.
Build an appstore feature for users to share and publish their generated systems
Build an internal app API for the app to make agent creation more accessible
Refactor the codebase to cleanly separate out concerns - and make future porting easier
Start building out detailed planning features
Explore a backend-as-a-service to make simpler API key integrations possible for folks who are not software engineers
What have I learnt?
In just two days, I learnt a lot:
How to integrate AI-features into an app, navigating the API Key challenges
How to build a model router
How to fine tune a system prompt for the task at hand
How to pass JSON syntax to the LLM to model your data structure you need the LLM to return
How to validate generated data for compliance
How to build a chat assistant, with memory, history and context
How to build an agent framework
How to structure your code
How clunky the AI provider APIs are: whilst apps like ChatGPT and Gemini are multimodal, it is not as simple or clean to do this from an API. There should be a simple multimodal API for developers, one endpoint to call, without overly complex security concerns
How powerful and fun coding can be again!
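One of the learnings above, passing JSON syntax so the LLM returns data matching your model, can be made concrete with a sketch like the following. This uses Gemini's generationConfig.responseMimeType and responseSchema fields; the plan fields (systemName, teams, fundedHeadcount) are illustrative stand-ins, not the app's real data model:

```javascript
// Sketch: constraining Gemini to return JSON matching a data model.
// The schema below is illustrative - swap in your own entities.
const planSchema = {
  type: "OBJECT",
  properties: {
    systemName: { type: "STRING" },
    teams: {
      type: "ARRAY",
      items: {
        type: "OBJECT",
        properties: {
          name: { type: "STRING" },
          fundedHeadcount: { type: "NUMBER" },
          services: { type: "ARRAY", items: { type: "STRING" } }
        },
        required: ["name"]
      }
    }
  },
  required: ["systemName", "teams"]
};

// Build a generateContent request body that pins the response to the schema.
function buildRequest(prompt) {
  return {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: planSchema
    }
  };
}

// Still validate whatever comes back before loading it into the app -
// schema-constrained output reduces, but does not eliminate, surprises.
function isValidPlan(obj) {
  return !!obj && typeof obj.systemName === "string" && Array.isArray(obj.teams);
}
```

The validation step matters: treat the model's output as untrusted input, the same way you would a form submission.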
Do you have some demos? I can't access the tool.
I thought you'd never ask.
This video shows the AI creating a new system from prompt, using "Create with AI" feature:
This video shows using AI to inspect system architecture...
This video shows how AI can help manage product roadmaps...
This video shows how AI can assist with inspecting software delivery plans...
This video shows how managers can have fun using AI to manage their org structures...
I decided to feed my last blog post on leaving AWS into NotebookLM:
The provided text is an extended, introspective essay written by a senior engineering leader detailing the reasons for his resignation from Amazon Web Services (AWS) after four and a half years at the Cape Town headquarters. The author structures his departure around three core concerns: Spirituality and Conscience, particularly regarding the company's involvement in projects like Project Nimbus amid the Middle East conflict; Lack of Professional Growth, driven by structural changes within AWS and limited opportunities for promotion to director level (L8) in South Africa; and Negative Return on Investment (ROI) in Time, exemplified by increasing work hours, extensive weekend work, and compromised work-life balance, despite the financial success of his stock options. The essay concludes that the job no longer provided satisfactory personal, professional, or spiritual alignment, leading him to pursue a new role in Intelligent Traffic Systems with a local South African company.
Here's the audio generated (it got my RAGE model wrong, it stands for Reality, Aspirations, Goals, Expectations):
Here's the video generated (not bad, but it got 2023 mixed up with 2024) - but a good summary nonetheless:
Earlier this year, on 05/30 (this is how Amazon writes dates: 30th May), I walked out of the amazing Amazon Africa headquarters in Cape Town after spending four years and six months there as a senior engineering leader. I'm writing this post three months later, after a sabbatical break, and a week into my new job (started 09/01) in a totally different field of Intelligent Traffic Systems...
Here's a screenshot from my LinkedIn profile on my time with AWS:
Here's a video of me from 2022:
And here are some memories from my time there. Listen, I did enjoy my time with Kumo/AWS. I met some fantastic, highly intelligent, self-driven, motivated and resilient people from diverse backgrounds, from all over the world. AWS is indeed a special place. Their motto - Work Hard. Have Fun. Make History - is very real! For anyone considering working at AWS: go for it! But be sure to do your homework first. This is just my story, a single datapoint from 1.5 million Amazonians and 290,000+ AWS employees worldwide - and in the small pond of South Africa (SA), a sample of 1 L7 engineering leader from a pool of ±7-9 managers servicing 250-300 software engineers across Kumo and EC2 (this alone hints that the opportunity for an L8 promo in SA is very small...)
Memorable moments with my teams...
I left AWS in good standing, should I decide to boomerang back. Here's a goodbye email sent by my manager to 1000+ people:
Why did you leave AWS, you seem to have been on a good trajectory, yes?
My AWS colleagues, family, friends, and now my new coworkers too, have asked me why I left AWS. Feelings about exiting AWS started back in Q3 2024, triggered by the events unfolding in the Middle East. This caused me to reflect deeply on my personal and professional aspirations, critically assessing my value system, borne from my RAGE model, which has been my north star for over a decade.
It turned out that the key metric is Return on Investment (ROI) across these three dimensions: 1/ Spirituality; 2/ Professional Growth and 3/ Work-Life balance.
So I had to test drive the new Firebase Studio, powered by the latest Gemini 2.5 model, which has the whole world talking and is leading the webdev LLM arena.
For my test, I simply prompted it to build a game like the classic Battleship, where the player plays against the computer. The game didn't work on the first try (what the industry refers to as a "one-shot" build), but it had a lot of potential, so I guided the assistant through addressing bugs, improving the user experience of the game layout, and tweaking the AI rules. Gemini was smart enough to use GenAI itself as the computer opponent, which I think was pretty neat. Historically, before AI, we would have needed to code the computer's moves and strategy, keep track of moves, etc. Instead, we initiate an AI agent, task it to play as the opponent in the game, and provide just enough context and guidelines for moves (like don't place a move on a previous move).
A walkthrough of an app I decided to build to help me with my work as a software engineering leader. Core thesis: I wanted a planning management tool that is centered around core system architecture, working backwards from there to model the organizational / team structures that feeds the planning process. I found most project management tools lacking support of including software architecture inputs - so I decided to build something to close this gap.
My journey started in late 2024 (October), when LLM models started exposing coding capabilities. I wasn't too impressed by the likes of ChatGPT and Claude (at the time). But with Gemini 2.5 entering the scene, I picked this project up again in April 2025 and slowly built out the MVP: first as a single-page application (HTML/JavaScript) with a basic old-skool UI (this video), then refreshed the UI and refactored the code (next video, "Building apps with AI (2) - MVP..."). Since I stopped coding professionally in 2010, this entire journey was quite the learning experience. I'm going back to school again, but I've learnt so much about how to engage with the LLM that I'm increasingly confident I will find a place in the new world of AI. AI-assisted coding is definitely the future, but it is not that straightforward yet, even with the latest craze in vibe coding.
Just take a look at my document sharing my conversation with Gemini here.
50 pages and counting!!! Essentially, you need to be very clear in articulating the requirements, interacting with the model as a true assistant, like you would with any human engineer, business analyst, or UX designer.
Listen to the podcast generated by NotebookLM here. You can interact with my Google NotebookLM here.
The rest of this blog post was written by my partner, Gemini >>>>
From Vision to MVP: Building a Software Management & Planning Powerhouse (with an AI Co-Pilot!)
For any software engineering leader, manager, or even a senior individual contributor, the landscape of modern software development is a complex beast. We juggle intricate system architectures, dynamic team structures, ever-shifting priorities, and the constant challenge of aligning finite resources with ambitious goals. What if there was a tool that not only helped model this complexity but also empowered smarter, data-driven planning?
That's the question that sparked the creation of the Software Management & Planning Tool, an MVP we've been passionately building. This isn't just about another planning application; it's about a foundational belief: effective planning starts by working backwards from your systems and organizational architecture.
This blog post shares the journey of bringing this vision to life – from a core idea to a feature-rich Minimum Viable Product – and the unique experience of co-developing it with a generative AI coding partner, Gemini.
The Core Idea: Architecture-First Planning
The tool is built upon a hierarchical understanding of a software organization:
Systems: The overarching products or platforms.
Services & APIs: The functional components within those systems, each with defined interfaces.
Teams (2-Pizza Teams): Agile squads owning services, each with software development managers (SDMs) and engineers.
Dependencies: Critical platform components and inter-service relationships.
This rich data model allows the application to provide a holistic view, forming the bedrock for all subsequent planning and visualization.
Our MVP Journey: An Iterative Evolution with an AI Partner
What started as a concept rapidly took shape, built incrementally feature by feature. The initial scaffolding was a single HTML/JavaScript file – a testament to getting ideas off the ground quickly. But the vision was always much larger.
Key Milestones in Our Development:
System & Org Modeling: The first step was enabling the definition of systems, services, APIs, teams, and their intricate relationships. This involved creating UI elements for adding, editing, and connecting these entities.
Visualizing Complexity: Static data isn't enough. We introduced dynamic visualizations:
System Architecture Diagrams: Showing services, APIs, and platform dependencies.
Team Relationship Graphs: Illustrating how teams connect based on service ownership.
Service Dependency Views: Drilling down into specific service connections.
Interactive Org Charts & Team Breakdowns: Providing clear views of the organizational structure and team compositions.
Initially, these were separate views, but based on user experience feedback, we evolved the system overview into a slick carousel navigation allowing users to seamlessly glide through these critical perspectives.
The Game Changer: Yearly Planning: This became a cornerstone. The goal was an interactive table to manage initiatives against team capacity, showing what's "Above The Line" (ATL) and "Below The Line" (BTL). We implemented:
Dynamic team columns and SDE Year estimations.
Drag-and-drop prioritization.
"Protected" initiatives that lock to the top.
The crucial concepts of "Funded Headcount" vs. "Builders In Seats" (BIS).
Introducing "Away Teams": Recognizing that real-world capacity isn't static, we enhanced the model to include "Away Team Members" – borrowed resources. This led to the "Effective BIS" metric, providing a more realistic picture of available capacity.
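The Effective BIS idea can be sketched in a few lines of JavaScript. The field names here are illustrative stand-ins, not the app's actual model:

```javascript
// Sketch of Effective BIS: funded headcount is what finance pays for,
// Builders In Seats (BIS) is who is actually present, and away-team
// members add borrowed capacity on top.
function effectiveBIS(team) {
  const present = team.engineers.length;                // team BIS
  const borrowed = (team.awayTeamMembers || []).length; // loaned in
  return present + borrowed;
}

const team = {
  fundedHeadcount: 10,                             // 3 seats unfilled
  engineers: ["a", "b", "c", "d", "e", "f", "g"],  // BIS = 7
  awayTeamMembers: ["x", "y"]                      // 2 borrowed
};
// funded 10, BIS 7, effective BIS 9
```

Planning against funded headcount would over-commit this team by three engineers; planning against Effective BIS reflects who will actually be doing the work.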
Deep Dive into Capacity Tuning: This was a pivotal enhancement. We built a dedicated "Tune Capacity Constraints" module where users can define:
Global defaults (working days, holidays, org-wide events).
Team-specific adjustments (leave uptake, variable leave, team activities, recurring overhead).
The output? A Calculated Net Project Capacity for each team, which can then feed back into the Yearly Planning view for a much more accurate ATL/BTL assessment. This module also includes a detailed narrative explanation and a waterfall chart to visualize capacity erosion.
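A minimal sketch of the capacity-erosion calculation behind that waterfall chart, assuming illustrative constraint names and default values (the tool's real model is richer):

```javascript
// Sketch: start from gross working days per engineer and subtract each
// constraint in turn. Each step becomes one bar of the waterfall chart.
function netProjectCapacity(teamSize, c) {
  const steps = [
    { label: "Gross working days", delta: c.workingDaysPerYear * teamSize },
    { label: "Public holidays",    delta: -c.publicHolidays * teamSize },
    { label: "Org-wide events",    delta: -c.orgEventDays * teamSize },
    { label: "Estimated leave",    delta: -c.avgLeaveDays * teamSize },
    { label: "Recurring overhead", delta: -c.overheadDaysPerYear * teamSize }
  ];
  const netDays = steps.reduce((sum, s) => sum + s.delta, 0);
  return { steps, netDays, netSDEYears: netDays / c.workingDaysPerYear };
}

const result = netProjectCapacity(8, {
  workingDaysPerYear: 261,
  publicHolidays: 12,
  orgEventDays: 5,
  avgLeaveDays: 20,
  overheadDaysPerYear: 26 // ~0.5 day/week of meetings etc.
});
// result.steps feeds the waterfall; result.netSDEYears feeds the plan
```

With these example numbers, an 8-person team's nominal 8 SDE-years erodes to roughly 6, which is exactly the gap between optimistic and realistic ATL/BTL cuts.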
SDM Resource Forecasting: Leveraging the detailed capacity tuning, we integrated a forecasting model. SDMs can now select their team, input hiring and ramp-up estimates, and see a 52-week projection of their headcount and effective SDE availability, all adjusted by the previously configured capacity constraints.
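The forecasting model might be sketched like this, assuming a simple linear ramp-up for new hires and a single "net factor" from the tuned capacity constraints (the real tool's model may differ):

```javascript
// Sketch of a 52-week effective-SDE projection: each hire lands on a
// start week and contributes fractional capacity while ramping up.
function forecastEffectiveSDEs(currentBIS, hireStartWeeks, rampUpWeeks, netFactor) {
  const weeks = [];
  for (let w = 1; w <= 52; w++) {
    let sdes = currentBIS;
    for (const startWeek of hireStartWeeks) {
      if (w >= startWeek) {
        const tenure = w - startWeek;
        sdes += Math.min(1, tenure / rampUpWeeks); // linear ramp to full
      }
    }
    weeks.push(sdes * netFactor); // apply tuned capacity constraints
  }
  return weeks;
}

// Team of 6, two hires starting weeks 10 and 30, 8-week ramp,
// and 75% of raw time available after capacity tuning:
const projection = forecastEffectiveSDEs(6, [10, 30], 8, 0.75);
```

Plotting the 52 values shows the headline insight: a hire approved today only becomes a full effective SDE a couple of months after their start date.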
UI/UX Modernization: More recently, we've focused on enhancing the user experience, refactoring the codebase into modular files, and adopting a cleaner, card-based UI reminiscent of modern web applications.
The Gemini Co-Pilot Experience:
This journey wasn't a solo flight. As the human visionary and lead developer, I partnered extensively with Gemini, my AI coding assistant. This collaboration was an experiment in itself:
Acceleration: Gemini was instrumental in rapidly generating boilerplate code, implementing UI components based on descriptions, and refactoring complex functions. This significantly sped up the development of the MVP.
Brainstorming & Ideation: We bounced ideas for UI/UX improvements, data model structures, and feature refinements (like the Team Load Summary table in the planning view).
The Learning Curve (for both of us!): It wasn't always smooth sailing. There were moments where context was lost between sessions, leading to regressions or suggestions that didn't quite fit the current state. Clarifying requirements meticulously, sometimes repeatedly, became key. Debugging involved a human-AI partnership – I'd often describe the unexpected behavior, and Gemini would help pinpoint potential issues in the code it (or we) had written. It was a process of learning how to "prompt" effectively and how to guide the AI when it drifted.
This experience has been a powerful demonstration of how GenAI can be a force multiplier for developers, turning ambitious ideas into working software at an accelerated pace, even if it requires a new kind of collaborative patience and precision.
Key Features That Shine
While the tool is packed with interconnected features, some of the "wow" factors that have emerged include:
Holistic System Modeling: The ability to define not just services, but also the teams that own them, the engineers within those teams (including AI SWEs and their attributes like skills and experience!), and the management structure, all in one place.
Insightful Visualizations: The carousel of system, team, and service dependency graphs provides powerful, at-a-glance insights into complex relationships.
Realistic Capacity Tuning: Moving beyond simple headcount, the "Tune Capacity Constraints" feature allows for granular adjustments, factoring in everything from public holidays and company events to team-specific leave uptake and meeting overhead. The resulting "Net Project Capacity" and its narrative explanation offer a truer picture of what teams can achieve.
Interactive Yearly Planning: The drag-and-drop planning table, combined with dynamic ATL/BTL calculations based on selectable scenarios (Funded HC, Team BIS, Effective BIS) and the option to apply the "Net Capacity Constraints," makes for a powerful "what-if" analysis tool. The Team Load Summary provides immediate feedback on team-level impacts.
Integrated SDM Resource Forecasting: This module closes the loop by allowing managers to see how their hiring plans and attrition realities, combined with the tuned capacity constraints, will affect their team's SDE availability throughout the year.
An Evolving, Modern UI: The recent shift towards a cleaner, card-based design with a persistent top navigation bar significantly improves usability.
The Journey Continues: The Power of Iteration
This MVP is a significant milestone, but it's also a foundation. The backlog, as detailed in our project's README.md, is rich with possibilities: enhanced yearly planning with ROI tracking and versioning, a dedicated roadmap and backlog management module, deeper detailed planning with work packages, and, most excitingly, integrating more AI-powered enhancements for analysis and interaction.
The development of this tool underscores the power of iterative development and the transformative potential of collaborating with AI coding assistants. It shows how a complex vision can be broken down and built, piece by piece, learning and refining along the way. While the path of AI-assisted development has its unique challenges (like ensuring context retention and guiding the AI through ambiguities), the ability to rapidly prototype, generate code, and explore solutions is undeniable.
For any engineering leader wrestling with the complexities of software planning, or any developer curious about the potential of AI partnerships, we hope this journey provides some inspiration. With the right vision and the right tools (both human and AI!), ambitious ideas can indeed be brought to life.
Okay, I still can't help myself - here's a fully functional Minesweeper game implementation, coded in less than 2 minutes, one-shot with no issues, by Google Gemini. Granted, it's been years since I played this game!
So I couldn't help myself - getting Gemini to build and bring back some memories of classic apps from the past, this time Microsoft's disk defrag tool. I remember spending hours using this tool, just watching the blocks move was so satisfying!
Norton Antivirus - created one-shot, worked first time. Here's the prompt for nortonGPT; source code on GitHub.
I'm going through a phase of nostalgia, thinking about the classic tools and applications I used in the 90s and 2000s. Can you help me create a simulation of the classic MSDOS antivirus program for Norton Antivirus? The simulation must run for 2 minutes - it should show scanning the hard disk (simulated) but referring to actual files on local disk, using the classic progress indicator of a spinning "\-/-/" sequence, and simulating some viruses and quarantining effects. Implement the simulation as a single page web application that can be served on any browser.
Windows Defrag - took 5 iterations - less than 30 minutes to generate. Still can be improved but the idea is to show what can be done (and the fun that can be had) with these new AI coding assistants!
I asked Gemini to summarize our interaction journey in building the pacman game. Roughly 6 hours of my time well spent in learning how to use this powerful piece of technology! I can definitely say the tech has come a long way - not much hallucinating, very large input windows, did not run out of memory, did not lose context, did not run out of tokens, session did not expire... and even though I am a paid subscriber, I got the same experience using the free version from Google's AI studio! This is unbelievable!
Our Pac-Man Project: A Development Journey
This post summarizes the collaborative and iterative process of building a classic-style Pac-Man game using HTML, CSS, and JavaScript, culminating in a functional tile-based version and a detailed tutorial explaining its mechanics.
Goal: Create a fully functioning, static Pac-Man game suitable for a webpage or blog.
(Estimated Time: This collaborative project spanned numerous iterations. While AI interaction time differs from human development, the back-and-forth debugging, code generation, and refinement represent an estimated equivalent of 4-6 hours of focused development effort.)
(Approximate Timeline & Key Stages)
Initial Request & Pixel-Based Attempt:
Prompt: User requested a "fully functioning classic style pacman game that can be served in a static webpage...".
Action: I generated the first version (pacman_game_v1) using HTML Canvas and JavaScript, aiming for pixel-based movement for smoothness.
Outcome: User reported issues: "not working, not interactive, there is no gameplay".
Debugging the Pixel-Based Version:
Prompt: User reported Pac-Man stuck in walls, ghosts making one move and stopping, game stalling after first key press (with screenshots).
Action: Multiple attempts were made to refine the initialization, game loop activation, movement logic (movePacman), and especially the collision detection (canMove) in the pixel-based version. Collision detection with pixel-perfect movement proved very tricky to get right remotely.
Outcome: Persistent issues with characters getting stuck indicated the pixel-based approach was too fragile to debug effectively in this environment.
Strategy Shift: Tile-Based Logic:
Prompt: User suggested alternatives (WASM), I proposed simplifying to a tile-based system first. User agreed: "ok, let's give javascript one more try!".
Action: Created a new version (pacman_game_tile_based) from scratch, focusing on grid coordinates, discrete tile-to-tile movement, and simplified collision checks (isPassable).
Debugging the Tile-Based Version - Core Mechanics:
Issue: User reported a "black screen with some text", worse than before.
Action: Identified and fixed an initialization error where the maze was drawn before its data (dynamicMaze) was ready.
Issue: User reported Pac-Man still stuck (with screenshot).
Action: Refined movePacman logic for input buffering.
Issue: User reported Pac-Man still stuck, drawn inside a wall (with screenshot). User asked about starting coordinates relative to wall.
Action: Realized Pac-Man's initial gridY was set to 17, which was a wall tile in mazeLayout1. Corrected start gridY to 16.
Issue: User confirmed Pac-Man moved but reported the tunnels weren't working visually – looked like solid walls blocked entry (with screenshots).
Action: Verified mazeLayout data did have paths (5) leading to the tunnel. Changed drawMaze to explicitly draw paths black, then pink for clarity upon user request.
Issue: User insisted entry was still blocked by a wall before the edge, providing specific coordinates and maze analysis ("No, you are in error...").
Action: Acknowledged user was correct – I had missed blocking walls (1) in the mazeLayout data itself preventing access to the tunnel path tiles. Corrected the mazeLayout data for levels 1, 2, and 3. Removed pink path drawing.
Issue: User reported unresponsive keys ("My last gripe...").
Action: Refined input buffering logic in movePacman to only clear the buffer after a turn was successfully applied, making it remember intended turns.
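The buffered-turn fix described above can be sketched as follows. isPassable and the pacman object are simplified stand-ins for the game's real code:

```javascript
// Sketch of the buffered-input fix: remember the player's intended
// direction and only clear it once the turn is actually applied.
function movePacman(pacman, maze, isPassable) {
  // Try the buffered turn first...
  if (pacman.bufferedDir && isPassable(maze, pacman, pacman.bufferedDir)) {
    pacman.dir = pacman.bufferedDir;
    pacman.bufferedDir = null; // clear ONLY after a successful turn
  }
  // ...then keep moving in the current direction if possible.
  if (pacman.dir && isPassable(maze, pacman, pacman.dir)) {
    pacman.gridX += pacman.dir.dx;
    pacman.gridY += pacman.dir.dy;
  }
}
```

The key detail is where the buffer is cleared: clearing it on every tick (the earlier behaviour) forgets the intended turn before a junction is reached, which is exactly the "unresponsive keys" gripe.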
Debugging Tile-Based Version - Ghost & Game Logic:
Issue: User reported only one ghost leaving the pen, others stuck; eaten ghosts not respawning correctly.
Action: Identified that the initial maze layout blocked upward movement for some ghosts in the pen. Corrected mazeLayout above the pen. Implemented staggered ghost exit logic using gameSteps and canLeavePen flags. Modified respawn logic to allow immediate exit attempts. (This included fixing the flawed Level 3 maze pen).
Issue: User reported ghosts getting stuck again after Pac-Man loses a life.
Action: Realized the gameSteps counter wasn't being reset on life loss, causing all ghosts to get permission to leave simultaneously. Added gameSteps = 0; to resetPositionsAfterLifeLost.
Outcome: User confirmed basic mechanics seemed functional ("excellent. I think we are getting close...").
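The staggered-exit and life-loss fixes from this stage combine roughly like the sketch below. Names like exitAtStep are illustrative; the game's actual code differs:

```javascript
// Sketch: ghosts earn permission to leave the pen at increasing step
// counts, and gameSteps must reset on life loss - otherwise every ghost
// gets permission simultaneously on respawn (the bug described above).
function updateGhostPermissions(ghosts, gameSteps) {
  ghosts.forEach(g => {
    if (!g.canLeavePen && gameSteps >= g.exitAtStep) g.canLeavePen = true;
  });
}

function resetPositionsAfterLifeLost(state) {
  state.gameSteps = 0; // the missing reset that caused the bug
  state.ghosts.forEach(g => { g.canLeavePen = false; g.inPen = true; });
}

const state = {
  gameSteps: 0,
  ghosts: [
    { exitAtStep: 0,  canLeavePen: false, inPen: true },
    { exitAtStep: 30, canLeavePen: false, inPen: true },
    { exitAtStep: 60, canLeavePen: false, inPen: true }
  ]
};
```

Without the gameSteps reset, a respawn inherits a large step count, all three ghosts pass the threshold at once, and the staggered-exit behaviour silently disappears.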
Adding Features:
Prompt: User asked for missing key features.
Action: Identified Bonus Fruit. Implemented fruit logic (appearance based on dotsEaten, timer, collection, drawing).
Issue: User reported fruit appearing inside a wall.
Action: Corrected the FRUIT_SPAWN_TILE coordinates.
Outcome: User confirmed fruit worked. User asked for multiple levels and music.
Action: Added multiple levels (storing layouts, modifying game flow). Deferred music request.
Issue: User identified flawed Level 3 ghost pen design trapping ghosts ("haha, I'm really having fun...").
Action: Corrected mazeLayout3 pen entry walls.
Issue: User reported subsequent version was broken ("Nothing is loading").
Action: Reverted code back to the last known good state requested by the user (the version incorporating the fix for resetting ghost timers on life loss).
Documentation:
Prompt: User requested documentation in the form of a blog post/tutorial for students.
Action: Generated the Markdown tutorial (pacman_tutorial_blog_post).
Prompt: User requested conversion to publishable HTML.
Action: Converted the Markdown to a single HTML file with embedded CSS (pacman_tutorial_html).
Prompt: User requested FAQs be added to the tutorial.
Action: Added FAQ section to the Markdown tutorial artifact.
Final Summary Request:
Prompt: User requested this summary, including timeline and prompts.
Action: Generating the initial summary.
Prompt: User requested adding estimated time.
Action: Added time estimate to the summary.
Prompt: User requested adding LLM statistics.
Action: Adding this section now.
Notes on the AI Collaboration & Model:
This project was developed collaboratively with an AI assistant based on Google's Gemini models. Here are a few points regarding the AI's capabilities relevant to this project:
Context Window: Models like the one used here often have large context windows (the amount of previous conversation and code they can "remember" and consider). This capability, ranging from tens of thousands to potentially over a million tokens (pieces of words), was crucial for maintaining consistency and iterating on the code across multiple turns and complex debugging steps. However, the exact amount of context used in this specific session isn't tracked.
Code Generation: The AI generated substantial amounts of HTML, CSS, and JavaScript code. The final baseline game artifact (pacman_game_tile_based) contains roughly 750-800 lines of code (including HTML structure, CSS rules, and JavaScript logic). This doesn't count the code generated in earlier, discarded versions or the separate documentation artifacts.
Iterative Refinement: The process heavily relied on the AI's ability to understand bug reports (often accompanied by screenshots analyzed via multimodal capabilities), analyze existing code, identify potential errors, propose fixes, and generate updated code versions incorporating those fixes.
Challenges & Learnings:
Debugging complex interactions (collision, state timing) without direct execution environment access is challenging and requires careful logic tracing and clear user feedback.
Visual feedback (screenshots, explicit drawing changes like the pink tunnel) is crucial but can sometimes be misinterpreted if the underlying data or logic is flawed (as seen with the tunnel entry walls).
Clear communication and persistence from the user were essential in identifying subtle bugs (like the maze layout errors and start position).
Iterative refinement is key – starting simple (tile-based) often leads to a working solution faster than tackling complex approaches (pixel-based) immediately. Fixing one bug can sometimes reveal or cause another!
Final Artifacts:
Game Code: pacman_game_tile_based (specifically, the version titled "Pac-Man Game (Tile-Based - Life Loss Restart Fix)" which was established as the baseline before documentation).
Tutorial: pacman_tutorial_html (HTML blog post format explaining the game code).
This covers the main points of our collaboration on this project! It was a great example of iterative development and debugging facilitated by AI.