Sunday, 8 February 2026

I built a “Steam Workshop” for system architecture + product roadmaps + org blueprints (runs in your browser)

Have you ever wanted to install the architecture of a product the way you install an app?

Not just a diagram, but the whole system architecture blueprint:

  • services and dependencies
  • teams and ownership
  • goals → initiatives → work packages
  • a 3-year planning horizon for product managers
  • plus a “prompt pack” so you can remix it

That’s what I’ve been building as a hobby project: SMT (Software Management & Planning Tool).



And this week I shipped something I’m genuinely excited about:

The Community Blueprints Marketplace (with social features)

You can now publish blueprint packages publicly, and the community can:

  • browse and install them
  • star them
  • comment / discuss them
  • see what’s trending (social proof + discovery loops)

Think: Figma Community / Steam Workshop, but for product architecture + product/team organization.


What’s a “Blueprint” in SMT?

A blueprint is a portable package (JSON), sketched below, that contains:

  • manifest (title, summary, tags, trust label, etc.)
  • prompt pack (seed + variants like MVP/Scale)
  • full system snapshot (teams, services, goals, initiatives, work packages)
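
To make that concrete, here is a minimal sketch of the package shape as a Python dict that serializes straight to JSON. Only the three sections above come from SMT; every field name inside them is an illustrative assumption, not the actual schema.

```python
# Illustrative sketch of a blueprint package, as a Python dict that
# serializes 1:1 to JSON. Field names inside the three documented
# sections (manifest, prompt pack, system snapshot) are assumptions.
import json

blueprint = {
    "manifest": {
        "title": "B2B SaaS Starter",
        "summary": "Reference org + architecture for an early-stage B2B SaaS.",
        "tags": ["saas", "backend", "starter"],
        "trustLabel": "community",   # assumed label values
    },
    "promptPack": {
        "seed": "Design a multi-tenant B2B SaaS platform...",
        "variants": {"mvp": "...", "scale": "..."},
    },
    "system": {                      # full system snapshot
        "teams": [],
        "services": [],
        "goals": [],
        "initiatives": [],
        "workPackages": [],
    },
}

print(json.dumps(blueprint, indent=2)[:120])
```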

The goal is learning through interaction:

  • install a blueprint
  • explore its org + architecture + roadmap in the app
  • remix it into your constraints
  • publish your remix back to the marketplace

Why I’m doing this

Most “reference architectures” online are:

  • static
  • divorced from org/team realities
  • not easily remixable
  • missing the roadmap/execution story

SMT tries to make “how a product might actually run” tangible:

  • architecture + org design + planning are all connected
  • you can poke at it, not just read it

SMT makes it possible to inspect any kind of tech platform. Think LeetCode interview prep, but for system design, architecture, team topologies, product roadmaps, and software delivery planning.


Local-first by default (privacy + zero friction)

SMT is intentionally local-first:

  • it runs as a static app in the browser
  • your systems stay in your browser unless you explicitly publish a blueprint package

The cloud marketplace is optional and only powers:

  • public publishing
  • discovery
  • stars/comments

No “workspace sync” SaaS lock-in.


How the social marketplace works (simple + cost-free)

To keep this sustainable on the free tier, the backend is:

  • Cloudflare Workers + D1 (SQLite)
  • token-normalized search (no paid search / no vector DB; see the sketch below)
  • GitHub OAuth for identity (scope: read:user only)
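
“Token-normalized search” here just means: lower-case the text, split it into tokens, and match against an indexed token table, rather than paying for a search service. A minimal sketch of the idea in Python against SQLite (D1 is SQLite-compatible, but this illustrates the technique; it is not SMT's actual Worker code):

```python
# Minimal token-normalized search sketch against SQLite (D1 is
# SQLite-compatible; this is an illustration, not SMT's code).
import re
import sqlite3

def tokenize(text: str) -> set[str]:
    """Lower-case and split on non-alphanumerics."""
    return {t for t in re.split(r"[^a-z0-9]+", text.lower()) if t}

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE blueprints (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE blueprint_tokens (blueprint_id INTEGER, token TEXT);
    CREATE INDEX idx_tokens ON blueprint_tokens (token);
""")

def index_blueprint(bp_id: int, title: str, summary: str, tags: list[str]):
    conn.execute("INSERT INTO blueprints VALUES (?, ?)", (bp_id, title))
    for tok in tokenize(" ".join([title, summary, *tags])):
        conn.execute("INSERT INTO blueprint_tokens VALUES (?, ?)", (bp_id, tok))

def search(query: str) -> list[str]:
    toks = tokenize(query)
    if not toks:
        return []
    placeholders = ",".join("?" * len(toks))
    # Rank by how many distinct query tokens each blueprint matches.
    rows = conn.execute(f"""
        SELECT b.title, COUNT(DISTINCT t.token) AS hits
        FROM blueprint_tokens t JOIN blueprints b ON b.id = t.blueprint_id
        WHERE t.token IN ({placeholders})
        GROUP BY b.id ORDER BY hits DESC
    """, tuple(toks)).fetchall()
    return [title for title, _ in rows]

index_blueprint(1, "Ride-Hailing Platform", "Dispatch and pricing services", ["backend"])
index_blueprint(2, "B2B SaaS Starter", "Multi-tenant backend blueprint", ["saas", "backend"])
print(search("backend saas"))  # B2B SaaS Starter ranks first
```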

Important security bit:

  • public publishing runs secret scanning over the manifest and the full system payload, and blocks likely API keys/tokens.
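
The exact rules aren't published here, so as a sketch of the general technique: pattern-match well-known key formats and flag long high-entropy strings before accepting a publish. All patterns and thresholds below are my assumptions, not SMT's actual rules.

```python
# Sketch of publish-time secret scanning: pattern-match well-known key
# formats and flag long high-entropy strings. Patterns and the entropy
# threshold are illustrative assumptions.
import json
import math
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_secrets(payload: dict) -> list[str]:
    """Scan the JSON-serialized payload (manifest + full system)."""
    text = json.dumps(payload)
    hits = [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]
    # Crude fallback: long tokens with high per-character entropy.
    for token in re.findall(r"[A-Za-z0-9+/=_-]{32,}", text):
        if shannon_entropy(token) > 4.5:
            hits.append(token)
    return hits

if find_secrets({"manifest": {"apiKey": "AKIA" + "A" * 16}}):
    raise SystemExit("Blocked: payload appears to contain a secret.")
```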

Try it (and please break it, it's a WIP hobby project)

1) Open the app

2) Explore the “Community Blueprints” view

  • browse the curated catalog
  • click Preview on anything that looks interesting
  • install an Available blueprint and inspect it across:
    • System Architecture Overviews
    • Org Design
    • Roadmap & Backlog Management
    • Year Plan / Detailed Planning

3) Publish + socials

  • in the publish flow, use Publish Publicly
  • then open Preview on your published blueprint:
    • Star/Unstar it
    • Post a comment
    • sanity check that it’s discoverable via search

If anything fails, I want to know. Use the Feedback feature to log issues.


What I’d love feedback on (high signal)

  1. Does the blueprint concept actually help you understand a product faster?
  2. Are the “prompt packs” useful, or just noise?
  3. What should “trending” mean here: stars, downloads, recency, or something else?
  4. What social features would make this fun without turning it into a moderation nightmare?

If you want to contribute

This is open source (CC0) and I’m happy to collaborate.


Roadmap ideas (if the community likes this)

  • Remix lineage: “Forked from…” + remixes graph
  • Lightweight contributor reputation (badges, trust tiers)
  • Reporting/flagging + moderation queue
  • Curated collections (“Backends 101”, “B2B SaaS starters”)

If any part of this sparks your curiosity, I’d love for you to try it for 5 minutes and tell me what confused you, what felt magical, and what felt pointless.

Drop a comment here, or open an issue on GitHub.

Saturday, 7 February 2026

A Day Building new features on SMT using Codex App with Codex 5.3

AI Build Journal · February 7, 2026, written by Codex to me... OpenAI released the Codex app for Mac this week, so I decided to have a go, and boy, am I blown away! In just one day, Codex helped me clear much of my SMT backlog, after a month's break from my AI-coding frenzy of Dec '25.


This was not “prompt in, code out.” This was a full-day product session: strategy debates, UX corrections, contract audits, feature pivots, test hardening, documentation, and ship.

I started the day with one objective: execute the next phase plan for SMT Platform without losing quality. By the end of the day, we had shipped one of the most ambitious increments in the project so far: Goal Inspections + Community Blueprints Exchange, including end-to-end contribution flow, install lifecycle logic, catalog operations, and test coverage.


What We Shipped in One Day

  • Goal lifecycle + inspections system: owner status, weekly comments, PTG tracking, stale/mismatch detection, leadership report table.
  • Year Plan CSV/XLSX export: production export flow in toolbar with tested serialization and schema-aware payload handling.
  • Community Blueprints Exchange: Top-100 curated catalog, preview modal, search/filter, publish flow, package validation, and install lifecycle UX.
  • Launch package generation upgrade: moved to domain-authored-curated-v2 for the launch-25 package set.
  • Hardening + compliance: contract remediation pass, UX consistency fixes, event rebinding bug fix, and regression-proof e2e updates.

The Metrics That Matter

  • Session duration: 11h 56m 43s (10:08:30 → 22:05:13)
  • Timestamped worklog checkpoints: 117
  • Commits shipped: 4 (2671b68, a502106, c9d8d43, 97bec52)
  • Code delta (same-day commits): +129,166 / -7,770 (net +121,396)
  • Unique files touched: 77 (113 file-change events across commits)
  • New files created: 27
  • Unit test progression: 90 → 117 tests (+30%)
  • E2E test progression: 51 → 58 tests across 8 → 9 specs
  • Community blueprint footprint: Top-100 catalog + Launch-25 curated packages

Note: the large insertion volume includes generated blueprint catalog/package artifacts in addition to application code.

How This Compared to “Typical Solo Dev Pace”

A conservative estimate for this scope with one human engineer is 2–3 weeks: feature architecture, UI wiring, persistence, migration work, docs, and full regression coverage. Here, the value of Codex 5.3 was not just speed in typing code. The leverage came from:

  • Staying in implementation mode continuously while preserving test discipline.
  • Switching quickly between product decisions, coding, debugging, and documentation.
  • Keeping a verifiable trail (/docs/worklogjournal.md) so context did not get lost.

This Was Collaboration, Not Task Dispatch

The most important part of the day was the interaction pattern. We did not run a one-way backlog. We debated quality and credibility:

  • You challenged weak UX states (Install should be locked when unavailable), and we corrected behavior at both tile and preview levels.
  • You challenged data realism for “inspired-by” systems, and we replaced simplistic seed generation with richer domain-authored package generation.
  • You enforced coding contracts, and we ran an explicit compliance audit plus remediation pass before final push.
  • You required proof, not promises, so every major change ended with lint/unit/e2e verification.

The real unlock is not “AI writes code faster.” It is “human judgment + AI execution + strict verification” as one continuous loop.

Lessons Learned

  1. Contracts first, always: when contract rules are explicit, quality issues become detectable and fixable quickly.
  2. Feature credibility beats feature count: shipping a marketplace means realism, not placeholder parity.
  3. Tests are collaboration memory: every bug found late became a permanent test so the fix does not regress.
  4. Worklogs scale agentic development: detailed timestamped logs made long-session continuity possible.

What’s Next

The obvious next move is to raise the “real-world blueprint” bar further: richer domain fidelity, stronger package QA gates, and a true contribution-driven exchange loop where users generate, validate, publish, and learn from each other’s systems.

Built on SMT Platform using Codex 5.3 · evidence from /docs/worklogjournal.md and same-day git history.


Sunday, 4 January 2026

How I Built a 10-Year Life Analytics Platform in 5 Days (with Antigravity)

From 87,000 rows of raw spreadsheet data to a React/ML dashboard, powered by Antigravity and Gemini. Personametry.com


The Data Hoarder’s Dilemma

For the last 10 years, I’ve been obsessed with tracking my time. Every hour of work, every night of sleep, every family interaction—logged diligently in Harvest. By 2026, I had amassed 35,442 unique time entries covering 87,100 hours of my life.

I had the data. But I didn't have the truth.

Spreadsheets were too slow. Standard dashboards were too generic. I needed a custom "Life Operating System" that understood my specific contexts—Personas, "Deep Work" streaks, and Sleep hygiene.

In the past, building this would have been a 3-month side project handed to an outsourced developer, since I stopped coding back in 2010! This time, I decided to do it differently. I paired up with Antigravity (powered by Gemini/Opus, with Codex 5.2 used separately to validate) to see if we could build it in a week.

Here is the story of Personametry. The worklog journal is shared separately here.


The Co-Pilot Experience: "Senior Engineer" on Demand

Working with Antigravity wasn't like using a glorified autocomplete. It felt like pairing with a Senior Principal Engineer, Business Analyst, Data Engineer, UX Designer & Systems Engineer, all in one person, who never sleeps.

We established a Coding Contract early on: “No stubbing. Plan first. Self-validate.” This set the tone. The AI didn't just throw code at me; it wrote Implementation Plans and created “Artifacts” to visualize progress, and I explicitly instructed it to maintain its own worklog.md.

Day 1: The Foundation (Dec 31)

We first reviewed options for a suitable tech stack geared towards Business Intelligence dashboards. The choices were: 1) Google's Material UI, 2) AWS Cloudscape, and 3) Alibaba's Ant Design Pro suite. I chose Ant Pro because I wanted to learn something new, and because it apparently enjoys the most GitHub stars compared to the others. We defined a clean architecture: instead of hacking together a script, we set up a scalable ETL pipeline in Python. The AI understood the concept of “Personas” immediately, and used my Quicksight transformation logic to map my Harvest tags into clean P1 (Muslim), P3 (Professional), and P0 (Sleep) buckets, roughly as sketched below.
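
As a rough idea of that mapping step (the persona codes are from this post; the Harvest task names and the mapping table itself are made-up illustrations, not my real pipeline):

```python
# Sketch of the tag -> persona mapping step of the ETL pipeline.
# Persona codes (P0 Sleep, P1 Muslim, P3 Professional) are from the
# post; the Harvest task names below are illustrative assumptions.
PERSONA_MAP = {
    "Sleep": "P0",
    "Salah": "P1", "Quran": "P1",
    "Client Work": "P3", "Meetings": "P3",
}

def transform(rows: list[dict]) -> list[dict]:
    """Attach a persona bucket to each Harvest time entry."""
    return [{
        "date": row["spent_date"],
        "hours": float(row["hours"]),
        "persona": PERSONA_MAP.get(row["task"], "UNMAPPED"),
    } for row in rows]

sample = [
    {"spent_date": "2025-12-31", "hours": "7.5", "task": "Sleep"},
    {"spent_date": "2025-12-31", "hours": "6.0", "task": "Client Work"},
]
print(transform(sample))
```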

Day 2: Visualization & The "Wheel of Life"

We moved to the frontend (React 19 + Ant Design Pro). The goal: A "Wheel of Life" radar chart.

  • Challenge: Visualizing 10 years of balance without clutter.
  • Solution: The AI implemented interactive year-switching and diverging bar charts to show year-over-year trends.

Day 3: The "Data Nerds" Playground

I wanted to slice the data myself. We built a SQL-like Query Builder UI.

  • Me: "I want to filter by 'Deep Work' and see the trend."
  • AI: "Here’s a dynamic QueryFilter component backed by an in-memory aggregation engine."

Day 4: Machine Learning & "The Optimiser"

This was the turning point. I asked it to research best-practice ML techniques for leveraging my dataset and to build a forecasting and optimization engine. I didn't settle for its original design: I fed the design into Gemini Pro deep research, and fed the output back into the AI for review. It then settled on a revised design plan, ml-recommendation-design.md. The AI didn't just give me averages. It built an Optimization Service using Goal Programming: it took my “hard constraints” (work contracts) and “soft goals” (increase family time by 10%) and solved for the perfect daily schedule.
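
For readers unfamiliar with goal programming: hard constraints stay inviolable, while each soft goal gets a deviation variable that the solver minimizes. A toy sketch with scipy, where the personas, targets, and bounds are invented for illustration:

```python
# Toy goal-programming sketch with scipy: hard constraints are real
# constraints; the soft goal gets a deviation variable we minimize.
# Personas, targets, and weights are illustrative, not my real data.
from scipy.optimize import linprog

# Decision variables: hours/day for [sleep, work, family, me] plus d,
# the shortfall against the soft family-time goal.
target_family = 4.0

c = [0, 0, 0, 0, 1.0]             # minimize the shortfall d
A_eq = [[1, 1, 1, 1, 0]]          # hard: the day sums to 24h
b_eq = [24]
A_ub = [[0, 0, -1, 0, -1]]        # soft: family + d >= target
b_ub = [-target_family]
bounds = [(7, None),              # hard: sleep >= 7h
          (8, 8),                 # hard: work contract = 8h
          (0, None), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
sleep, work, family, me, shortfall = res.x
print(f"family={family:.1f}h, shortfall={shortfall:.1f}h")
```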

It also added Anomaly Detection (STL Decomposition) to scientifically prove when I was burning out.
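
STL splits a daily series into trend, seasonal, and residual components; flagging days whose residual sits far outside the norm is a standard anomaly test. A minimal sketch with statsmodels, where the weekly period, the synthetic data, and the 3-sigma threshold are my illustrative choices:

```python
# Minimal STL anomaly-detection sketch: decompose daily work hours,
# then flag days whose residual is more than 3 sigma from the mean.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
idx = pd.date_range("2025-01-01", periods=120, freq="D")
hours = pd.Series(8 + np.sin(np.arange(120) * 2 * np.pi / 7)
                  + rng.normal(0, 0.3, 120), index=idx)
hours.iloc[60] = 14.0             # inject a burnout-style spike

resid = STL(hours, period=7).fit().resid
anomalies = hours[np.abs(resid - resid.mean()) > 3 * resid.std()]
print(anomalies)                  # surfaces the injected spike
```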

Day 5: Sleep & V1.0.0 (Jan 4)

Final polish. We added Sleep Health Heatmaps (Red/Amber/Green based on hours) and analyzed my circadian rhythm to find my average bedtime (10:17 PM).
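
A subtle detail in averaging bedtimes: naive averaging breaks when times straddle midnight (23:50 and 00:10 should average to 00:00, not noon), so a circular mean is the right tool. A small sketch of that calculation, with made-up sample times:

```python
# Circular mean of bedtimes: map each time onto a 24h circle so that
# times straddling midnight average correctly. Sample times are made up.
import math

def mean_bedtime(times_hhmm: list[str]) -> str:
    angles = []
    for t in times_hhmm:
        h, m = map(int, t.split(":"))
        angles.append(2 * math.pi * (h * 60 + m) / 1440)
    a = math.atan2(sum(map(math.sin, angles)) / len(angles),
                   sum(map(math.cos, angles)) / len(angles))
    minutes = (a / (2 * math.pi) * 1440) % 1440
    return f"{int(minutes // 60):02d}:{int(minutes % 60):02d}"

print(mean_bedtime(["22:05", "22:40", "21:55", "23:10", "22:20"]))
print(mean_bedtime(["23:50", "00:10"]))   # -> "00:00", not "12:00"
```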

By 2:00 PM, we tagged Version 1.0.0.


The Result: Personametry V1

In 5 days, we built what would have taken me months alone:

  • Tech Stack: React 19, TypeScript, Python, Ant Design Charts.
  • Features: ETL Pipeline, ML Anomaly Detection, Goal Optimization, Interactive Dashboards.
  • LOC: Thousands of lines of clean, strictly typed, documented code.

The biggest lesson? The future of coding isn't about writing syntax. It's about orchestrating logic. With the right AI partner, the barrier between "Idea" and "Shipped Product" has never been thinner.

Links:



NotebookLM - Slides



NotebookLM - Audio Overview



NotebookLM - Video Overview



Personametry V1.0 Worklog with Antigravity

Personametry Development Journey

Personametry - a decade's journey of time tracking now enhanced with AI

A summary of 2025 performance and intro to personametry.com

It is that time of year when I share my performance metrics for the previous year, and this card pretty much sums it up: 8,700 hours logged across 3,006 time entries, tracking close to 24 hours per day:

Background

For the last ten years I've been running an experiment in logging my time spent on activities like Work, Family, Me Time, Sleep, Spirituality, etc. In 2015 I developed a model for personal development called RAGE (Reality, Aspiration, Goals, Expectations). In 2016, I got more serious by inspecting my time across all areas of my life against my RAGE model, which triggered deeper reflection on my aspirations versus reality. For the first three years, I maintained a rhythm of personal monthly performance reviews (PMPRs), and then transitioned to quarterly, mid-year and final-year reviews. At the start of each new year, I would dive deep into the previous year's data, build analytics and dashboards, and share them on this blog.

Context about my workflow - the early days

In the early days, my process for insights was quite manual. Logging my time was easy, using the Harvest app, which I'd been introduced to by a good friend, Farid, around the time I switched to professional consulting, servicing some contracts with Crossbolt that expected Harvest timesheets for billing. Incidentally, Farid was also the inspiration for me to think critically about Reality vs Aspirations, which led to me creating my RAGE model.

Generating reports initially meant exporting from Harvest, importing into Excel, and running pivot tables and charts, using the output in my blog posts. I needed a way to transform the Harvest data into higher-level constructs, so I transitioned to Amazon Quicksight (now Quick Suite), using an AWS Free Tier account. Quicksight was useful as a yearly dataset store, replacing the analysis I would have done in Excel, and produced the dashboards, which I'd then copy into this blog. A downside of Quicksight is that it's a closed system: it had no way of publishing dashboards to public sites (like Google Docs' embedded-pages mechanism). The free tier also prevented me from using its built-in insights features, and more recently Quicksight's AI analysis. I added Google Slides to my workflow, sharing my deep dives as in this post. As AI tooling emerged, I transitioned to AI analysis as described here.

Introducing my latest workflow - finally, the Personametry Dashboard is born - ZERO Workflows

I spent just under 5 days building my Personametry app with Google's Antigravity as my coding partner. What a journey (look out for a future post). Since November 2025, I've been learning how to build apps with Antigravity, at first building my SMT app, then building tools for work, and I had enough insights to get the Personametry app built.

What's my new workflow then? Everything is now automated, apart from my manual time logging. I've built a dashboard that syncs daily with Harvest data, through an automated GitHub Actions workflow that pulls time entries via the Harvest API (sketched below). Harvest is so cool that they allow even free users full access to their APIs. An automated data transformation job then cleans up the data and transforms it just the way I used to do the meta-level transforms in Quicksight. So no more Quicksight. All the dashboards refresh automatically, and I no longer need to create Google Slides.

At the start of each year, I'd usually spend about a week analysing, reflecting and creating dashboards. Now my analysis can happen anytime, with zero manual work, giving a week's time back! Yes, anyone has access to my data and dashboard; I don't mind sharing, because I believe other folks could benefit from my experiment, decide to start their own tracking journey, or build an app for themselves. The codebase is on GitHub.
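
In outline, the pull step of that sync job is small. Here's a sketch that pages through Harvest's documented v2 time-entries API; the env-var names are my own choice, and this is illustrative rather than the repo's actual script:

```python
# Sketch of the daily pull step: page through Harvest's v2
# time-entries API. HARVEST_TOKEN / HARVEST_ACCOUNT_ID env-var names
# are my choice; the endpoint and headers follow Harvest's v2 API docs.
import os
import requests

def fetch_time_entries(updated_since: str) -> list[dict]:
    headers = {
        "Authorization": f"Bearer {os.environ['HARVEST_TOKEN']}",
        "Harvest-Account-ID": os.environ["HARVEST_ACCOUNT_ID"],
        "User-Agent": "personametry-sync",
    }
    url = "https://api.harvestapp.com/v2/time_entries"
    params = {"updated_since": updated_since}
    entries = []
    while url:
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        entries.extend(page["time_entries"])
        url, params = page["links"]["next"], None  # follow pagination
    return entries

if __name__ == "__main__":
    print(len(fetch_time_entries("2026-01-01T00:00:00Z")))
```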


Personametry.com is more than just a dashboard - introducing Machine Learning

With my rich dataset, there are opportunities for applying machine-learning forecasting techniques and instrumenting goals. Check out the Machine Learning page. I can now tune my personas and see the effects in real time. For example: if I reduce my sleep hours, where would the gains go? If I reduce my work hours, subject to constraints, what can I do? If I invest in health and fitness, what's the impact on Family time? For me, this is a game changer. The app will evolve and learn as the dataset is updated, without having to change code or do manual imports! I might have to tweak the code just a little to cater for special years like sabbatical breaks, though.

What's next - where am I going with this?

Version 1.0.0 is now live! Depending on how much time I have in 2026, I will look at embedding AI-driven data analysis into Personametry.com and leveraging conversational analysis. Ultimately I'm still striving to build the perfect personal assistant that just "knows" me. I will look at bringing in additional data sources like Strava, Netflix, YouTube, even integrating Islamic and Gregorian calendars. And finally I'll hook in a RAGE scorecard to match my time against the RAGE model! I could also turn this into a paid platform service, creating a platform for anyone to sign up, build their own RAGE model personas, and track with Personametry.com!