Piers Fawkes · Fodda

Follow My Flow

How I work with Antigravity and AI agents to think, create, and ship — at a pace and depth that wasn't possible before.

The Issue

I have a demo — and my knowledge graph mentor just got the flu.

Postpone?

Or do I build?

And so I built.

And after the demo I had other questions.

And I asked my knowledge graph mentor if they knew some juniors.

And she said... "Ask AI."

And so I built.

So this is what I built.

A multi-agent system of intersecting layers

There isn't one "AI" that does everything. I use three distinct layers that work together to replace an entire team.

🧠

Thinking Partner

Copilot / ChatGPT / Gemini. Used for reasoning, planning, structuring ideas, and generating code briefs.

⚙️

Execution Layer

Antigravity / AI Studio. The agents that write the actual code, execute commands, modify files, and deploy to production.

🔍

Analysis Layer

Deep Research / Search. Performs deep competitive analysis and comparisons, and ingests macro trends for synthesis.

I don’t believe in agents.

I believe in agentic code that traverses steps, but it isn't alive. Every time you return to an agent, it runs a xerox of yesterday's code, and because Gen AI is probabilistic, it does it slightly differently. A new temp worker arrives every day to do the job the last one did yesterday.

There is no agent WAITING for you to return.

— Understand the constraints and play within the rules

The Fodda Stack

This is the system I use to build, run, and sell a product — with AI. It's a distributed ecosystem of related codebases.

🧠

1. AI & Agent Layer

Gemini (core/embeddings), Claude (testing), Antigravity (builder). Your brain + engineering team.

📊

2. Data & Knowledge

Neo4j (graph), Airtable (ops/ingestion), Firestore (logs), Feedly (signals). This is the product.

🔀

3. Pipeline Layer

Ingest → classify → categorize. Embeddings and macro trend synthesis. Data becomes insight.

📡

4. API & Distribution

Fodda API, MCP Server, Copilot Studio, npm packages. How agents and enterprises access it.

☁️

5. Cloud & Infra

Cloud Run, App Engine, Docker, Cloud Scheduler, Secret Manager.

🖥️

6. Frontend Layer

React, Vite, Next.js, Express. The demo / access layer, not the core value.

💳

7. Payments & Ops

Stripe, Streak (CRM), Nodemailer, Search Console. Running a business.

💻

8. Dev Stack

Node.js, TypeScript, Zod, Axios, GitHub.

9. Agent Operating System

CHANGELOG.md → memory
BACKBURNER.md → roadmap
Briefs → execution instructions
Workflows → automation
Agent handoffs → coordination.

— This is my real differentiator. The layer nobody else has.

Building a System that builds Fodda

For Fodda, I broke the project into workspaces, each with a different function. Fodda is a platform of eight interconnected codebases, and I operate the agents across them like a team.

| Workspace | Role |
| --- | --- |
| /Fodda | Main app — React/Vite frontend + Express server, deployed to Cloud Run (app.fodda.ai) |
| /Fodda API | Core graph query API — Neo4j, embeddings, semantic search, supplemental data |
| /Fodda MCP | MCP server — exposes Fodda to Claude, Gemini, Notion, Copilot, OpenAI |
| /Fodda PSFK | The PSFK/Retail knowledge graph — Neo4j data, signal ingestion, trend patterns |
| /Fodda Ben Dietz | Ben Dietz's expert graph — separate data pipeline |
| /Fodda CE | Consumer Electronics & Design expert graph — Piers' graph |
| /Fodda Sales | CRM / sales tooling — likely powers the Streak integration and pipeline tracking |
| /Fodda Website | Marketing site — www.fodda.ai / www.psfk.com portals, deployed to App Engine |

Ensuring Repeatability & Delegation

Because agents have no persistent memory, I created files and workflows to maintain consistency across 8 codebases.

Changelog & Backburner

Every deployment is logged in CHANGELOG.md with exact filenames changed. This serves as both audit trail and onboarding doc.

BACKBURNER.md captures deferred ideas so nothing falls through the cracks.
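As a sketch of how mechanical this logging can be, here is what appending such an entry might look like in Node/TypeScript. The entry format (a date heading plus one bullet per changed file) is an assumption for illustration, not Fodda's actual schema.

```typescript
import { appendFileSync } from "node:fs";

// Build a CHANGELOG.md entry so the next agent session can see
// exactly which files changed. The format here is hypothetical.
export function formatEntry(
  summary: string,
  filesChanged: string[],
  date: Date = new Date()
): string {
  const day = date.toISOString().slice(0, 10);
  return [`## ${day}: ${summary}`, ...filesChanged.map((f) => `- ${f}`), ""].join("\n");
}

// Append the entry to the workspace's changelog file.
export function logDeployment(
  changelogPath: string,
  summary: string,
  filesChanged: string[]
): void {
  appendFileSync(changelogPath, formatEntry(summary, filesChanged) + "\n");
}
```

Because the entry is generated, not hand-written, it doubles as an audit trail the agent itself can keep current.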

Deploy Workflows

Every workspace has a .agents/workflows/ directory with slash-command-style instructions (like /deploy) so Antigravity executes tasks reliably without re-explaining them.
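Resolving a slash command to its workflow file is mechanical. A minimal TypeScript sketch, assuming a `<name>.md` naming convention inside `.agents/workflows/` (the directory comes from the text; the naming convention is an assumption):

```typescript
import * as path from "node:path";

// Map a slash command like "/deploy" to the workflow file that
// holds its instructions. The ".md" naming is an assumed convention.
export function resolveWorkflow(workspaceRoot: string, command: string): string {
  if (!command.startsWith("/")) {
    throw new Error(`Not a slash command: ${command}`);
  }
  return path.posix.join(workspaceRoot, ".agents", "workflows", `${command.slice(1)}.md`);
}
```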

A Brief for Every Agent

When I work, I ask the agent in each folder to do the work that affects that repo. It picks up a tailored brief, finds the context, and goes — which stops me from making blind mistakes.

Note Passing & Coordination

When the API changes, that agent leaves a structured handoff note telling the App or MCP agent exactly what APIs changed schema so they can adapt.
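A handoff note is most useful when it is structured enough for the next agent to parse. A sketch of what such a note could look like as a TypeScript type; the field names here are hypothetical, not the actual format:

```typescript
// A structured handoff note left by one workspace agent for another.
// Field names are illustrative, not Fodda's actual schema.
export interface HandoffNote {
  from: string;              // workspace that changed, e.g. "/Fodda API"
  to: string[];              // workspaces that must adapt
  endpointsChanged: string[];
  summary: string;
}

// Minimal structural check an agent can run before acting on a note.
export function isValidHandoff(note: unknown): note is HandoffNote {
  if (typeof note !== "object" || note === null) return false;
  const n = note as Record<string, unknown>;
  return (
    typeof n.from === "string" &&
    Array.isArray(n.to) && n.to.every((t) => typeof t === "string") &&
    Array.isArray(n.endpointsChanged) &&
    n.endpointsChanged.every((e) => typeof e === "string") &&
    typeof n.summary === "string"
  );
}
```

A note that fails the check gets rejected up front instead of silently steering the next agent wrong.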

A Living Rosetta Stone

Every workspace maintains "Agent Bibles" — ecosystem_overview.md, product_and_system_reference.md, and system_clarifications.md. I copy these to other workspaces and feed them to Claude and ChatGPT to serve as co-pilots.
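Copying the bibles between workspaces can itself be a tiny script. A sketch in Node/TypeScript using the three filenames above; the `syncBibles` helper is hypothetical:

```typescript
import { copyFileSync, mkdirSync, existsSync } from "node:fs";
import * as path from "node:path";

// The three "Agent Bible" files named in the text.
const BIBLES = [
  "ecosystem_overview.md",
  "product_and_system_reference.md",
  "system_clarifications.md",
];

// Copy whichever bibles exist in one workspace into another,
// returning the list of files actually copied.
export function syncBibles(sourceWorkspace: string, targetWorkspace: string): string[] {
  mkdirSync(targetWorkspace, { recursive: true });
  const copied: string[] = [];
  for (const file of BIBLES) {
    const src = path.join(sourceWorkspace, file);
    if (!existsSync(src)) continue; // skip bibles the source doesn't have
    copyFileSync(src, path.join(targetWorkspace, file));
    copied.push(file);
  }
  return copied;
}
```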

Problem Framing as Key Process

What I actually focus on: turning messy inputs (calls, notes, decks) into structured prompts.

🎯

Understand Intent

Getting the AI to truly understand intent and the problem before writing any execution instructions.

🏛️

Extract Architecture

Loading inputs into a "project" and asking the LLM to design the architecture first.

💬

Generate Prompts

Iterating until it produces usable, structured prompts to feed the coding agents.

That's not dev work. That's problem framing.

If the problem is unclear, the output will be wrong.
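To make "structured prompts" concrete, here is one possible shape for an execution brief, sketched in TypeScript. The section headings (`Intent`, `Constraints`, `Success criteria`) are illustrative, not the actual template:

```typescript
// A structured execution brief for a workspace agent.
// Field names and headings are a hypothetical template.
export interface Brief {
  workspace: string;        // e.g. "/Fodda API"
  intent: string;           // what the change should achieve
  constraints: string[];    // rules the agent must not break
  successCriteria: string[];
}

// Render the brief as markdown the coding agent can pick up.
export function renderBrief(b: Brief): string {
  return [
    `# Brief for ${b.workspace}`,
    "",
    "## Intent",
    b.intent,
    "",
    "## Constraints",
    ...b.constraints.map((c) => `- ${c}`),
    "",
    "## Success criteria",
    ...b.successCriteria.map((s) => `- ${s}`),
  ].join("\n");
}
```

The point of the structure is the forcing function: if you cannot fill in the success criteria, the problem is not framed yet.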

Fragility

My workflow reveals the limitation: things break. I don't fully understand the stack, so I rely on another LLM to debug. Expect errors and avoid over-trusting outputs.

My Real Process (The Flow)

The pattern throughout is agentic speed with human product judgement — I define *what* Fodda should be and write the briefs, Antigravity handles the technical execution.

📥
Step 1

Gather Inputs

I pull together messy inputs: sales calls, meeting notes, product ideas, bugs, and external user feedback.

Observation · Insight
🧠
Step 2

Use AI to Interpret

I use a thinking system (Claude/ChatGPT) to interpret the problem, define success, and clarify the constraints.

Problem framing · Architecture generation
📋
Step 3

Generate Structured Prompts & Briefs

I write precise execution briefs for the specific workspace agents (e.g. "Update the API code to support Copilot").

Agent documentation · Instruction design
Step 4

Feed into Antigravity

I pass the brief to Antigravity inside the correct codebase. The agent inspects the code context and executes.

Agentic coding · Implementation
🔄
Step 5

Iterate & Debug via AI

I validate the output. Does it work? Is it what I wanted? If things break, I use another LLM to debug and refine.

Validation · Handoff note creation
🚀
Step 6

Deploy

I run automated workflows (like `/deploy`) to push to production, and update the memory layer (`CHANGELOG.md`).

Cloud Run / App Engine · Updating system memory

Automating Patterns of Success

Create Operators

This is advanced, but powerful. Turn repeated actions into named workflows and reusable operators.

I've started with /deploy and /update-changelog — the goal is to do more.
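A named operator is ultimately just a command mapped to a routine. A minimal TypeScript sketch of such a registry; the `defineOperator`/`run` helpers are hypothetical, with `/deploy` and `/update-changelog` stubbed in:

```typescript
// A registry of named operators: repeated actions promoted to
// slash-commands. The registry itself is a hypothetical sketch.
type Operator = (workspace: string) => string;

const operators = new Map<string, Operator>();

export function defineOperator(name: string, op: Operator): void {
  operators.set(name, op);
}

export function run(name: string, workspace: string): string {
  const op = operators.get(name);
  if (!op) throw new Error(`Unknown operator: ${name}`);
  return op(workspace);
}

// Stubs for the two operators named in the text.
defineOperator("/deploy", (ws) => `deploying ${ws} via .agents/workflows/deploy.md`);
defineOperator("/update-changelog", (ws) => `appending entry to ${ws}/CHANGELOG.md`);
```

Each new pattern of success gets a name, and the name becomes something an agent can invoke without you.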

Right now, I use AI to build faster.

The next step is using AI to build better — without me.