Monday, 15 December 2025

From "Vibe Coding" to Engineering: How I Tamed an AI-Generated Monolith

A survival guide for building software in the age of AI Agents.


If you've played with the latest AI coding tools, you know the feeling. It's intoxicating. You type "Build me a dashboard," and poof—code appears. You feel like a wizard. You are "Vibe Coding"—surfing on a wave of generated features, moving faster than you ever thought possible.

I know this feeling because I lived it. I spent October 2024 to June 2025 vibe-coding my way to a feature-rich management app called SMT (Software Management Tools). But then the wave crashed. I concluded the AI wasn't mature enough yet and took a break from vibe coding. I woke up again in November '25, and in December '25 I plunged right back in: the release of newer models and new tooling from Google, especially Antigravity, got me hooked solid. Wow, the landscape is changing dramatically!

This is the story of how I went from a chaotic, AI-generated monolith to a nearly professional, engineered codebase. More importantly, it's a guide on how you can avoid the trap of "Vibe Coding" and start using AI effectively to build serious software, not just demos.

The Visual Journey: The "Model War" Timeline

The Industry Context: Riding the Wave

To understand why the code changed, you have to understand how the tools changed. The explosion of Antigravity features coincided perfectly with the "Agentic Arms Race" of 2025.

Phase 1: The Trap of "Vibe Coding" & The Quota Shuffle (Winter 2025)

After a long summer working flat-out at Amazon, I found myself with a 3-month sabbatical in May 2025. It was a golden age for AI releases. Google had just dropped "Jules" (in Public Beta). Anthropic and OpenAI were trading blows with massive context windows.

I went fast. Recklessly fast.

I treated the AI models like a tag-team of interns.

  • Morning: I'd burn through my Gemini quota building the Gantt chart.
  • Afternoon: I'd switch to Claude for major technical debt refactoring, while using Gemini for UX refactoring and plumbing.
  • Evening: I'd use OpenAI to debug the logic and provide a third-party review.

The "Tower of Babel" Problem:

Each model had a different "personality." Because I wasn't providing governance, the codebase became an incoherent mess: main.js was a mix of three different coding styles fighting for dominance.

The Agent Blame Game (Whack-a-Mole)

My frustration peaked just yesterday (Dec 14) during the Gantt refactor. I entered a cycle of "Agent Whack-a-Mole":

  1. I was working with Claude 3.5 Opus to stylize the complex SVG dependency arrows. It was working perfectly.
  2. CRASH: "Quota Exceeded."
  3. I swapped to Gemini 2.0 to keep the momentum going. "Gemini, continue styling these arrows."
  4. Gemini, eager to help, rewrote the GanttRenderer class. It fixed the styling but deleted the logic for dependency calculation, breaking the core feature.
  5. My quota reset. I went back to Claude: "Claude, Gemini deleted the logic. Fix what it broke."

I spent hours just mediating disputes between AI models that couldn't see each other's work. It was clear: without a shared "Constitution" (The Contract), they were just tagging over each other's graffiti.

The Structural Evolution (Visualized)

The transformation wasn't just in the code; it was in the very shape of the project.

Phase 1: Inception (Oct 2024)

(Simple, clean, but incapable)


/root
├── index.html
├── LICENSE
└── README.md


Phase 2: The Sprawl (June 2025)

(Features added by "Vibe Coding" with no plan)


/root
├── index.html
├── js/
│   ├── main.js (3000+ lines)
│   ├── utils.js (Everything basket)
│   ├── data.js (Hardcoded state)
│   ├── visualizations.js (D3 spaghetti)
│   └── ... (20+ flat files)
└── css/
    ├── style.css (Global conflict zone)
    └── ...


Phase 3: The Engineered State (Dec 2025)

(Governed by Contract)


/root
├── ai/                 (Agentic Controllers)
├── docs/               (The Contracts)
├── css/
│   ├── components/     (Scoped styles)
│   └── themes/         (Variables)
└── js/
    ├── services/       (Logic Layer)
    ├── components/     (View Layer)
    ├── managers/       (State Layer)
    └── main.js         (Bootstrapper only)


Phase 2: The Pivot to Engineering (Summer 2025)

I realized that if I wanted this project to survive, I had to stop acting like a "prompter" and start acting like a "Principal Engineer."

The shift happened on December 3rd. I didn't ask the AI to write code. I asked it to read a Contract.

The Agent Contracts: A Rule of Law

We didn't just write one prompt; we established a constitutional framework for the AI. This came in the form of two critical documents that every agent was required to ingest before writing a single line of code.

1. The Coding Contract (coding-agent-contract.md)

This document outlawed the bad habits the AI had picked up during the "Vibe Coding" era. It established zero-tolerance policies:

  • "No Defensive Coding for Internal Code": Stop asking if (typeof SystemService !== 'undefined'). Trust the architecture.
  • "The Singleton Rule": All logic resides in Services. No ad-hoc functions in files.
  • "The Window Ban": Writing to window.myVariable was strictly prohibited. State must be encapsulated.
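Here is a minimal sketch of what contract-compliant code looks like. SystemService is the name the contract's own example uses; everything else below is illustrative, not SMT's actual implementation:

```javascript
// BANNED by the contract: defensive checks and window globals.
//   if (typeof SystemService !== 'undefined') { ... }  // "No Defensive Coding"
//   window.systems = [];                               // "The Window Ban"

// REQUIRED: logic lives in a Service singleton with encapsulated state.
class SystemService {
  static #instance = null;

  static getInstance() {
    if (!SystemService.#instance) {
      SystemService.#instance = new SystemService();
    }
    return SystemService.#instance;
  }

  #systems = []; // private state: never written to window

  add(system) { this.#systems.push(system); }
  count() { return this.#systems.length; }
}

SystemService.getInstance().add({ name: 'Gantt' });
// Every caller sees the same instance, so state is shared without globals.
const sameInstance = SystemService.getInstance() === SystemService.getInstance();
```

The singleton gives agents exactly one obvious place to put logic, which is the point: less room for "creative" placement.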

2. The Workspace Canvas Contract (workspace-canvas-contract.md)

This was the game-changer for UI. It stopped the AI from inventing new layouts for every page.

  • "The Semantic Color Rule": Hardcoded hex values like #fff or #000 were banned. The AI had to use semantic variables like var(--theme-bg-primary). This single rule made implementing Dark Mode instant.
  • "The Shell Protocol": Every view had to plug into a standard WorkspaceShell. No more rogue sidebars or inconsistent headers.
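To illustrate why the Semantic Color Rule made Dark Mode instant, here is a hedged sketch in plain JavaScript. The token and theme names are assumptions, not SMT's actual variables; the point is that components only ever reference semantic tokens, so a theme switch is one call:

```javascript
// Hypothetical semantic tokens (names are illustrative, not SMT's).
const themes = {
  light: { '--theme-bg-primary': '#ffffff', '--theme-text-primary': '#1a1a1a' },
  dark:  { '--theme-bg-primary': '#1e1e1e', '--theme-text-primary': '#eeeeee' },
};

// Components write var(--theme-bg-primary) in their CSS, so switching the
// whole app means rewriting the variables on the root element once.
function applyTheme(name, root = globalThis.document?.documentElement) {
  const tokens = themes[name];
  if (!tokens) throw new Error(`Unknown theme: ${name}`);
  if (root) {
    for (const [prop, value] of Object.entries(tokens)) {
      root.style.setProperty(prop, value); // only runs in a browser
    }
  }
  return tokens; // returned so the logic is testable outside a browser
}

const darkTokens = applyTheme('dark');
```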

Why this matters:

These contracts turned the AI from a chaotic creative into a disciplined engineer. When the AI encountered a problem, it didn't just "fix it"; it checked the Contract to see how it was allowed to fix it.

Pro Tip: The Self-Audit

I didn't just trust the AI to follow the rules. On Dec 11, I ran a "Compliance Report" task. I asked the AI: "Scan the entire codebase. List every file that violates the workspace-canvas-contract.md."

It found 15 violations I had missed. The AI became its own auditor.
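You can approximate the self-audit without an AI at all. This is a deliberately naive, hypothetical scanner (the rules and regexes are my own sketch, not the real Compliance Report prompt) that flags the two easiest contract violations in a source string:

```javascript
// Two toy rules: hardcoded hex colors and writes to window globals.
const rules = [
  { name: 'hardcoded-hex', re: /#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})\b/ },
  { name: 'window-global', re: /\bwindow\.\w+\s*=/ },
];

function auditSource(filename, source) {
  const violations = [];
  source.split('\n').forEach((line, i) => {
    for (const rule of rules) {
      if (rule.re.test(line)) {
        violations.push({ file: filename, line: i + 1, rule: rule.name });
      }
    }
  });
  return violations;
}

const report = auditSource('legacy.js', [
  "el.style.background = '#fff';",                    // hex literal: violation
  'window.appState = {};',                            // window write: violation
  "el.style.background = 'var(--theme-bg-primary)';", // clean
].join('\n'));
// report lists each violation with file, line number, and rule name
```

A mechanical scanner like this catches the easy stuff; the AI auditor earns its keep on the judgment calls the regexes can't express.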

Phase 3: The Payoff (The "Agentic Spike")

With the contract in place, I unleashed the new Antigravity agents in December.

In one weekend (Dec 7-8), we processed 47 commits that didn't add a single feature but completely re-wired the application.

  • The "Junk Drawer" must die: We deleted utils.js. In the "Vibe Coding" days, this file became a 2,000-line dumping ground for everything from date formatting to API calls. We exploded it into dedicated Services.
  • 117 spaghetti files became 169 modular classes.
  • Global variables vanished.
  • Theming became a single-line change.

The Discipline of Deferral

The hardest part wasn't the code; it was the patience.

I had a major feature ready to go: Year Planning. In the old "Vibe Coding" days, I would have just asked Gemini to "add it."

Instead, I deferred it.

I spent a week refactoring the Gantt view and the Org view to meet the new Contract first. Only when the architecture was solid did I allow the AI to touch YearPlanning.

  • Result: When I finally asked the AI to refactor YearPlanning (Commit 392ffcd), it got it 90% right on the first shot. Because it had "learned" from the codebase and I had enforced the patterns elsewhere, the most complex feature in the app became the smoothest refactor.

The "Audit Loop" Struggle

It wasn't magic. It was a fight.

Enforcing the contract was a recursive nightmare at first. I would ask the AI to "Fix the compliance issues in GanttPlanning.js."

  1. Run 1: AI removes global variables. (Good)
  2. Run 1 Side Effect: AI introduces a new inline HTML template. (Violation!)
  3. Run 2: I scold the AI: "No inline HTML! Check the Contract!"
  4. Run 2: AI fixes HTML.
  5. Run 2 Side Effect: AI adds a defensive check if (typeof SystemService...). (Violation!)

I often had to prompt the agent 5 or 6 times just to get one file clean. I had to explicitly remind it: "You are an expert engineer. Do not regress on the Contract. Check your own work against coding-agent-contract.md before submitting."

The 5 Deadly Blind Spots

Even with the contract, I learned that Agents have recurring "blind spots" you must watch for:

  1. Zombie Code: They love to write the new class GanttMvc.js but forget to delete the old ganttPlanning.js, leaving you with two conflicting sources of truth.
  2. The "Partial Success" Hallucination: An agent will refactor 3 out of 5 methods and happily announce, "I have completed the refactor!" You have to check the bottom of the file.
  3. The Compatibility Trap: When refactoring, agents often try to be "helpful" by keeping the old broken methods "just in case," creating a hybrid mess instead of a clean break.
  4. Phantom Functions: They assume utils.formatDate exists because it feels right, even if you deleted utils.js yesterday.
  5. The Rename Reflex: Instead of finding the existing method calculateDuration(), they will invent a new one called getTaskLength(), duplicating logic because they didn't index the context.

The "Sync": When It Just Clicks

To be fair, it wasn't all struggle. There were moments of absolute magic.

During the "Agentic Spike" (Dec 7-8), when the Contract was fresh and the Context was clean, I hit a Flow State with the AI that felt telepathic.

  • I'd say: "Refactor the OrgView to use the new Service pattern."
  • The Agent would output 4 perfect files.
  • I'd say: "Excellent. Now do the same for the Gantt view."
  • It would interpret "the same" correctly, applying complex patterns without needing them restated.
  • "You Rock", "Awesome Job", "Perfect" — my chat logs from that weekend are full of these praises.

When you treat the AI as a partner with a shared mental model (the Contract), the friction disappears. You aren't prompting; you're just... building.

The New Metric: CW/H (Curse Words per Hour)

Waleed Kadous recently proposed a new metric for AI quality: Curse Words per Hour (CW/H).

My project logs are the perfect validation of this theory.

  • High CW/H: During the "Blind Spots" and "Whack-a-Mole" phases, my prompts were full of CAPS and desperation: "STOP. YOU DELETED THE LOGIC AGAIN. REVERT. WHAT ARE YOU DOING??" This wasn't just anger; it was a signal of Context Drift. The model had lost the thread.
  • Negative CW/H: During "The Sync," my CW/H went negative (praise acts as -1). "Perfect," "You nail it," "Go ahead."

The Lesson: Your emotional reaction is a debugging tool. If you find yourself swearing at the AI, stop. Do not prompt harder. The Context has drifted. Reset the chat, paste the Contract, and start fresh.

But once it learned? The velocity was unstoppable.

The "Gantt Odyssey": A Case Study in Persistence

If you want to know what "refactoring hell" looks like, look at the git history for the Gantt chart.

  • Nov 23 (The False Start): I tried to "refactor" the monolithic ganttPlanning.js while keeping the "vibe" intact. It failed. The code was too entangled.
  • Nov 25 (The Feature Trap): Instead of fixing the foundation, I asked the AI to add more features (Frappe renderer, toggles). This was a mistake. The main.js file ballooned to 3,000 lines.
  • Dec 9 (The Realization): Commit f4c4845 "WIP - re-architect gantt to mvc." This was the moment I realized the old code had to die.
  • Dec 14 (The Victory): Commit 6c5b6f7 "Complete refactor of gantt and delete legacy ganttPlanning.js."

It took 4 distinct attempts and over 20 commits just to clean up this one feature. The lesson? You cannot refactor spaghetti into lasagna one noodle at a time. Sometimes, you have to throw the bowl away and start fresh with a new recipe (the Contract).

The "Git Worktree" Trap: Parallel Agents and the Danger Zone of "Merge Hell"

As I got comfortable with Antigravity, I got greedy. I thought: "Why wait for one agent to finish? I'll run TWO."

I used Git Worktrees to run multiple agents in parallel on different branches (feature/gantt and refactor/theming). Agents can make mistakes, some quite serious, like losing track of which branch they are supposed to be working on!

  • The Dream: Double the velocity.
  • The Reality: "Merge Hell" and Corruption.

On Dec 8th, I nearly lost the entire Theming refactor. One agent overwrote the css/variables.css file while another was trying to read it. I spent 4 hours manually piecing together "lost" commits (see commit 2ac3f34: "fix: merge ALL missing theming changes from worktree").

The Warning: Antigravity is powerful, but it does not sandbox your agents. If you run them in parallel without strict discipline, they will step on each other's toes. Until we have automatic sandboxing, treat parallel execution like handling uranium: powerful, but deadly if mishandled.

Lessons for the AI-Augmented Developer

If you really want to move from "Vibe Coding" to "AI Engineering," you need to fundamentally shift your mental model. Here's what I've learnt so far:

1. Shift from "Writer" to "Architect"

The era of the "10x Developer" writing code alone in a basement is dead. We are now in the era of the "10x Architect."

AI generates code at near-zero cost. This means code is no longer an asset; it is a liability. Your job is not to produce more of it; your job is to curate it.

  • The Trap: Asking "Write me a function that does X."
  • The Fix: Asking "Given this architecture pattern, implement interface X for component Y."

Insight: You must have the entire system map in your head because the AI only has the context window.

2. Context Engineering > Prompt Engineering

Stop trying to find the "perfect prompt." It doesn't exist. Instead, focus on Context Engineering.

An AI agent is only as good as the files you feed it. If you feed it spaghetti utils.js, it will write more spaghetti.

  • The Strategy: Create "Context Files" (like our Contracts) that exist solely to guide the AI.
  • The Tactic: Before asking for a feature, pause. Ask yourself: "Does the AI have the current definition of our state management?" If not, paste the Contract first.

3. The Next Frontier: TDD as the Ultimate Spec

I'll be honest: SMT doesn't have a test suite yet. That is my next goal, but the refactoring was time-consuming, laborious, and very frustrating, so it's a toss-up: introduce TDD next, or build some new features!

Why? Because looking back, I realize that Test-Driven Development (TDD) is the ultimate way to prompt an agent. A natural language prompt is ambiguous ("make it fast"). A failing test case is binary (Pass/Fail).

  • The Plan: We are going to implement a unit testing framework with the help of AI agents.
  • The Workflow: Write the test. Run it (Red). Feed the failure to the AI. Let it fix (Green).

4. Code Durability: Know What to Throw Away

Not all code deserves love. In an AI world, we must distinguish between Foundation Code and Disposable Code.

  • Foundation Code: Core business logic, data models, contracts. This must be reviewed by a human, typed strictly, and protected.
  • Disposable Code: UI prototypes, scripts, experimental features. Let the AI "vibe code" these. If they work, great. If not, delete them and regenerate. Do not fall in love with the prototype.

Final Words: The End of "Magic", The Beginning of Engineering

The journey of building SMT in 2025 taught me that AI is not a replacement for engineering; it is an amplifier of it. If you amplify chaos, you get "Vibe Coding"—a fast track to a tangled monolith. If you amplify structure, you get velocity.

We are entering a golden age of software development. Code is cheaper, faster, and more accessible than ever before. But let's not kid ourselves: this is not an easy journey. The "magic" of a one-shot prompt wears off the moment you need to maintain that code in production. The real work begins when the browser tab closes and the git commit hook runs.

Until AI agents evolve to possess the intuition of a Principal Engineer—knowing instinctively when to create a service vs. a utility, or when to refactor before building—human oversight remains critical. We are not just "prompters"; we are the guardians of the architecture. We provide the constraints (the Contracts) that allow the AI to be creative without being destructive.

My project SMT survived the "Model Wars" of 2025 not because I wrote better prompts, but because I stopped prioritizing speed and started prioritizing structure.

Don't just vibe. Build.

Sunday, 7 December 2025

A weekend with Antigravity: 17,000 Lines of Code in < 48 Hours

This weekend, I didn't just code. I accelerated.

I spent the last 48 hours pair-programming with Antigravity, an advanced AI coding assistant. We took a legacy JavaScript codebase—riddled with global variables, monolithic files, and "spaghetti" dependencies—and transformed it into a modern, service-oriented architecture.

The results go beyond just "helper" status. The metrics suggest a level of productivity that warps the traditional time-cost equation of software development.

This app started out in 2024 as a scrappy concept, an MVP rapidly coded as a single index.html file with inline vanilla JavaScript and CSS: messy spaghetti code that grew and grew, until this weekend, when I decided to just stop, refactor, and clobber all the technical debt. The goal? To experience the full journey of coding with AI and improve the code over time, eventually deploying a "modern" app. This code was touched by many AI hands: GPT-o1, Gemini, Claude, Codex 5.1, plus my own manual tweaks here and there.


The Analytics: Friday to Sunday

Let's look at the raw data from git. This represents activity from Friday, December 5th, to Sunday evening, December 7th.

  • Commits: 60 (non-merges)
  • Files Changed: 89
  • Lines Added: 10,468
  • Lines Deleted: 6,755
  • Total Code Churn: ~17,223 lines

17,000 lines of code touched in one weekend.

The Narrative: Deconstructing the Monolith

When we started on Friday, the application was held together by window objects. Functions were exposed globally, state was scattered across main.js and utils.js, and components were tightly coupled to the DOM via inline onclick handlers.

Here is the story of our weekend, told through the commit log:

  1. The Purge of Globals: We began by hunting down window.* dependencies. One by one, we replaced fragile global state with strict, testable Singletons and Services.
  2. Breaking main.js: We took the 1,000+ line main.js file and chopped it up. We extracted logic into dedicated domains: PlanningService, AIService, SystemService, and NotificationManager.
  3. Refactoring the UI: We implemented a VIEW_REGISTRY pattern, moving away from ad-hoc function calls to structured Class-based component management.
  4. Safety Check-ins: You see this pattern in the logs constantly: "Safety checkin - refactoring...". We moved fast, but we didn't break things. The AI allowed me to make sweeping architectural changes with the confidence that I wasn't leaving behind broken references.
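The VIEW_REGISTRY idea can be sketched like this (a hypothetical shape; the real registry in SMT may differ). Views are classes, and navigation becomes data-driven:

```javascript
// Class-based views registered under a key, replacing ad-hoc global calls.
class OrgView {
  render() { return '<section id="org">Org chart</section>'; }
}
class GanttView {
  render() { return '<section id="gantt">Gantt chart</section>'; }
}

const VIEW_REGISTRY = {
  org:   () => new OrgView(),
  gantt: () => new GanttView(),
};

// Replaces patterns like window.showOrgView() and inline onclick handlers.
function mountView(key) {
  const factory = VIEW_REGISTRY[key];
  if (!factory) throw new Error(`Unregistered view: ${key}`);
  return factory().render();
}

const ganttHtml = mountView('gantt');
```

The payoff is that adding a view is one registry entry, and an unknown view fails loudly instead of silently calling a missing global.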

The "Antigravity" Effect: AI vs. Human Effort

How long would this have taken a human developer working alone?

The Human Estimate:

To touch 89 files and rewrite the core architecture of an application requires immense cognitive load.

  • Day 1-3: Reading code, mapping dependencies, planning the refactor safe-zones.
  • Day 4-8: Executing the refactor file-by-file, manually checking imports, and wrangling with an endless stream of ReferenceErrors.
  • Day 9-10: Regression testing. Realizing you broke the "Save" button because it relied on a global variable you deleted three days ago.

Conservative Estimate: 2 Weeks (1 Sprint) of full-time, focused work.

The Antigravity Reality:

We did it in 14.7 hours.

I analyzed the timestamps of every commit. Our total active coding time—the actual time spent typing, refactoring, and verifying—was less than 15 hours.

  • Saturday: ~5 hours of strategic refactoring.
  • Sunday: ~9.5 hours of high-intensity execution.

The AI didn't just "autocomplete" lines. It understood the architectural intent. When I said "Refactor this to use a Singleton pattern", it didn't just write a class; it found every usage of the old global variable across 20 files and updated them instantly. It remembered the context I would have forgotten.

This wasn't just faster coding. It was higher leverage coding. I spent my weekend making high-level architectural decisions, while Antigravity handled the thousands of lines of execution required to make those decisions reality.


Conclusion

If this weekend proved anything, it's that the metric for "developer productivity" is changing. We aren't limited by typing speed or syntax recall anymore. We are limited only by our ability to describe the architecture we want.

My small experiment suggests that two weeks of a software engineer's work can be compressed into 15 hours!

Did I waste my weekend coding with AI? Maybe...but I'm learning a ton and having fun coding after a very long time...it's like I'm back to my early teenage days of losing myself inside the machine...


Tuesday, 2 December 2025

A week with Antigravity animated by Google Gemini

Codebase Evolution: Time-Lapse

(Animated stacked chart of the repo over time; series: CSS, JS, HTML, Other)

Fun animations with Google Gemini - my repo stats

Daily Throughput & Key Milestones

Having some fun with git stats dashboarding via Gemini Pro

Engineering DNA Report

Deep dive analytics of the "Antigravity" refactor

⚡ Vital Statistics

(Interactive counters: Files Touched, Net Lines of Code, Active Hours)

🕰️ Productivity Clock

📂 Architecture Heatmap

🔥 Churn Champions (Volatile Files)

🎨 The Monolith Buster: CSS Evolution

Tracking the deletion of legacy styles vs. the creation of modular view components.

The Antigravity Chronicles: A Week of Refactoring, Regret, and Redemption

In a previous post I shared my experience with Antigravity... here's what I was actually able to achieve in less than one week. For those of you still doubting the revolution and its impact on software developers, you need to start waking up: I believe AI coding agents will disrupt frontend software development.

The rest of this blog post was written by Gemini, from within the Antigravity IDE (another use case for Antigravity, you don't need to leave the console to write a blog post!).

I asked it to review our changes from the past week and write a blog post reflecting on the journey so far... As usual, AI models are ever the optimists -- I actually spent a lot of time coaxing, cursing, shouting, and fixing up both Claude's and Gemini's mistakes... sometimes undoing a lot of work and starting over again!

The Spark

It started with a simple frustration: Mermaid.js wasn't cutting it.

On November 24th, I looked at our Gantt charts and realized they were static, rigid, and frankly, boring. I wanted interaction. I wanted to drag, drop, and resize. I wanted "Antigravity"—a branch where we would defy the weight of our legacy code and build something modern.

I didn't know I was about to break everything.

The Good: High Velocity & The "Frappe" Win

The early days were a rush. We integrated Frappe Gantt, and suddenly, the application felt alive.

  • Dynamic Resizing: We built a split-pane view that finally let users control their workspace.
  • Smart Scrolling: We conquered the "sync scroll" beast, ensuring the task table and chart moved in perfect harmony.

The velocity was intoxicating. We were shipping features daily. But speed has a price.

The Bad: The Monolith & The Mess

As we pushed new features, the cracks in our foundation began to show.

  • The 2600-Line Monster: Our style.css had grown into a terrifying monolith. Changing a button color in the "Settings" view would inexplicably break the layout in "Roadmaps."
  • Dependency Hell: At one point, we were loading multiple versions of libraries, causing race conditions that made the Gantt chart flicker like a strobe light.
  • The "Safety" Check-ins: You can see the panic in the git logs.
    Commit c628846: "safety checkin - fixed and broke service dependencies caroussel"

We were fixing one thing and breaking two others. The codebase was fighting back.

The Ugly: "I've Lost a Feature"

The low point came on December 1st. In our zeal to refactor the OrgView, we got careless. We deleted a chunk of legacy code that generated the engineer table. It wasn't just a bug; it was a regression.

Commit d3ebd4f: "checkin even if I've lost a feature in roadmaps :-("

This was the "Ugly." The moment where you question if the refactor is worth it. We had to go dumpster diving into the git history (specifically commit a0020189) to retrieve the generateEngineerTable function. It was a humbling reminder that "legacy" often means "working."

The Redemption: The Monolith Buster

We couldn't keep patching holes. We needed a structural change.

On December 2nd, we launched Operation Monolith Buster.

  • The Strategy: Divide and conquer. We identified every view (Org, Gantt, Roadmap) and gave it its own CSS file.
  • The Execution: We slashed style.css from 2,600 lines down to a manageable core.
  • The Result: 12 new modular files. 100% component isolation. Peace of mind.

The AI Tag Team: Gemini & Claude

This week wasn't just a test of code; it was a test of the "Multi-Model Workflow." We switched between Gemini and Claude depending on the problem at hand, and the results were illuminating.

Where Claude Shined: The Architect

When it came to the CSS Monolith Buster, Claude was the surgeon.

  • Strength: Precision in refactoring. I could paste a 500-line CSS file and ask it to "extract the Gantt styles," and it would do so with surgical accuracy, rarely missing a bracket.
  • The "Aha" Moment: Claude suggested the folder structure reorganization that finally made sense of our chaotic js/ directory.

Where Gemini Shined: The Visionary & The Fixer

Gemini was our "big picture" thinker and our debugger.

  • Strength: Contextual awareness. When we broke the OrgView, Gemini was the one that helped us "time travel" through the git history to find the missing generateEngineerTable function. It understood the intent of the missing feature, not just the code.
  • The "Aha" Moment: The narrative you're reading right now? That's Gemini. Its ability to synthesize logs, commits, and user intent into a coherent story is unmatched.

The Trip-Ups (The "Hallucinations")

It wasn't all smooth sailing.

  • The Over-Confidence: Both models struggled when I asked for "blind" fixes without enough context. The regression in commit d3ebd4f happened because we trusted an AI to "clean up unused code" without verifying if it was actually unused.
  • The Lesson: AI is a powerful accelerator, but it lacks object permanence. It doesn't "know" you need that feature you haven't touched in 3 months unless you tell it.

Lessons Learned

  1. Respect the Legacy: Don't delete code you don't understand until you're sure you don't need it.
  2. Commit Often, Even the Broken Stuff: Those "broken" commits saved us. They gave us a checkpoint when we needed to backtrack.
  3. Use the Right Model for the Job: Use Claude for structural refactoring and strict logic. Use Gemini for creative synthesis, debugging, and "big context" reasoning.
  4. AI is a Co-pilot, Not an Autopilot: The AI helped generate code fast, but it took human oversight to spot that we were await-ing outside of an async function.
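For readers who haven't hit it, here is that class of bug in miniature (illustrative code, not SMT's). The broken form is a SyntaxError; the fix is simply marking the function async. The fetch stand-in is stubbed so the sketch runs anywhere:

```javascript
// BROKEN: SyntaxError, because await is only valid inside async functions.
//   function loadData(fetchImpl) {
//     const res = await fetchImpl();  // <- await in a non-async function
//     return res.json();
//   }

// FIXED: mark the function async so the await is legal.
async function loadData(fetchImpl = async () => ({ json: async () => ['ok'] })) {
  const res = await fetchImpl(); // stubbed fetch, so no network is needed
  return res.json();
}

// Any async function returns a Promise to the caller.
const pending = loadData();
```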

Conclusion

The antigravity branch lives up to its name. We aren't just weighed down by technical debt anymore. We have a modern navigation system, a modular CSS architecture, and a powerful new Gantt engine.

It wasn't a straight line. It was a messy, chaotic, beautiful week of coding. And I wouldn't have it any other way.


By The Numbers

  • Total Commits: 80+
  • Net LoC Change: -1,200 (We deleted more than we wrote!)
  • Panic Commits: ~3
  • Features Resurrected: 1
Before Refactor (Commit 0b82efa)

/root
├── css/
│   ├── components/
│   │   ├── Modal.css
│   │   ├── aiChatPanel.css
│   │   ├── enhancedTableWidget.css
│   │   ├── ganttChart.css
│   │   ├── header.css
│   │   ├── management.css
│   │   ├── notifications.css
│   │   ├── roadmapModal.css
│   │   └── settings.css
│   ├── layout/
│   │   ├── main-layout.css
│   │   └── sidebar.css
│   └── style.css              (2,600+ lines: MONOLITH)
└── js/
    ├── accomplishmentsView.js
    ├── capacityTuning.js
    ├── dashboard.js
    ├── data.js
    ├── documentation.js
    ├── editSystem.js
    ├── featureFlags.js
    ├── ganttPlanning.js
    ├── goalsView.js
    ├── impactView.js
    ├── main.js
    ├── mermaidGenerator.js
    ├── orgView.js
    ├── roadmap.js
    ├── roadmapTableView.js
    ├── sdmForecasting.js
    ├── utils.js
    ├── visualizations.js
    ├── yearPlanning.js
    ├── components/
    │   ├── HeaderComponent.js
    │   ├── ManagementView.js
    │   ├── RoadmapInitiativeModal.js
    │   ├── SettingsView.js
    │   ├── SidebarComponent.js
    │   ├── SystemsView.js
    │   ├── WorkspaceComponent.js
    │   └── enhancedTableWidget.js
    ├── gantt/
    │   ├── FrappeGanttRenderer.js
    │   ├── GanttFactory.js
    │   ├── GanttRenderer.js
    │   ├── MermaidGanttRenderer.js
    │   ├── ganttAdapter.js
    │   └── ganttGenerator.js
    └── managers/
        ├── NavigationManager.js
        └── NotificationManager.js
After Refactor (Current State)

/root
├── css/
│   ├── style.css              (core only)
│   ├── views/                 (NEW: modular view styles)
│   │   ├── accomplishments-view.css
│   │   ├── capacity-tuning-view.css
│   │   ├── dashboard-view.css
│   │   ├── documentation-view.css
│   │   ├── gantt-planning-view.css
│   │   ├── goals-view.css
│   │   ├── org-view.css
│   │   ├── roadmap-table-view.css
│   │   ├── roadmap-view.css
│   │   ├── sdm-forecasting-view.css
│   │   ├── system-edit-view.css
│   │   ├── visualizations-view.css
│   │   ├── welcome-view.css
│   │   └── year-planning-view.css
│   └── foundation-components/ (NEW: design system)
│       ├── ai-chat.css
│       ├── buttons.css
│       ├── cards.css
│       ├── collapsible.css
│       ├── d3-visualizations.css
│       ├── dependencies.css
│       ├── dropdowns.css
│       ├── dual-list.css
│       ├── forms.css
│       ├── legends.css
│       ├── loading.css
│       ├── modals.css
│       ├── system-selector.css
│       └── tooltips.css
└── js/
    ├── repositories/          (NEW: data layer)
    │   └── SystemRepository.js
    ├── state/                 (NEW: state layer)
    │   └── AppState.js
    └── components/
        ├── DashboardView.js
        ├── HeaderComponent.js
        ├── ManagementView.js
        ├── OrgView.js
        ├── RoadmapInitiativeModal.js
        ├── RoadmapView.js
        ├── SettingsView.js
        ├── SidebarComponent.js
        ├── SystemsView.js
        ├── WorkspaceComponent.js
        └── enhancedTableWidget.js