How I work with Antigravity and AI agents to think, create, and ship — at a pace and depth that wasn't possible before.
Postpone?
Or do I build?
And so I built.
And after the demo I had other questions.
And I asked my knowledge graph mentor if she knew some juniors.
And she said... "Ask AI."
And so I built.
So this is what I built.
There isn't one "AI" that does everything. I use three distinct layers that work together to replace an entire team.
Copilot / ChatGPT / Gemini. Used for reasoning, planning, structuring ideas, and generating code briefs.
Antigravity / AI Studio. The agents that write the actual code, execute commands, modify files, and deploy to production.
Deep Research / Search. Performs deep competitive analysis and comparisons, and ingests macro trends to synthesize insight.
I don’t believe in agents.
I believe in agentic code that traverses steps, but it isn't alive. Every time you return to an agent, it runs a xerox of yesterday's code; and because Gen AI is probabilistic, it does it slightly differently. A new temp worker arrives every day to do the job the last one did yesterday.
There is no agent WAITING for you to return.
— Understand the constraints and play within the rules
This is the system I use to build, run, and sell a product — with AI. It's a distributed ecosystem of related codebases.
Gemini (core/embeddings), Claude (testing), Antigravity (builder). Your brain + engineering team.
Neo4j (graph), Airtable (ops/ingestion), Firestore (logs), Feedly (signals). This is the product.
Ingest → classify → categorize. Embeddings and macro trend synthesis. Data becomes insight.
Fodda API, MCP Server, Copilot Studio, npm packages. How agents and enterprises access it.
Cloud Run, App Engine, Docker, Cloud Scheduler, Secret Manager.
React, Vite, Next.js, Express. The demo / access layer, not the core value.
Stripe, Streak (CRM), Nodemailer, Search Console. Running a business.
Node.js, TypeScript, Zod, Axios, GitHub.
Agent Operating System
CHANGELOG.md → memory
BACKBURNER.md → roadmap
Briefs → execution instructions
Workflows → automation
Agent handoffs → coordination
— This is my real differentiator. The layer nobody else has.
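Putting the files named throughout this post together, a single workspace's memory layer might be laid out like this. The tree is a sketch assembled from the filenames mentioned here; the exact structure is an assumption.

```text
/Fodda API
├── CHANGELOG.md                      # memory: every deploy, exact files changed
├── BACKBURNER.md                     # roadmap: deferred ideas
├── .agents/
│   └── workflows/
│       ├── deploy.md                 # the /deploy slash command
│       └── update-changelog.md       # the /update-changelog slash command
├── ecosystem_overview.md             # Agent Bible
├── product_and_system_reference.md   # Agent Bible
├── system_clarifications.md          # Agent Bible
└── src/                              # the actual code
```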
For Fodda, I broke the project into workspaces, each with a different function. Fodda is a platform of 8 interconnected codebases, and operating them is like running a team of agents.
| Workspace | Role |
|---|---|
| /Fodda | Main app — React/Vite frontend + Express server, deployed to Cloud Run (app.fodda.ai) |
| /Fodda API | Core graph query API — Neo4j, embeddings, semantic search, supplemental data |
| /Fodda MCP | MCP server — exposes Fodda to Claude, Gemini, Notion, Copilot, OpenAI |
| /Fodda PSFK | The PSFK/Retail knowledge graph — Neo4j data, signal ingestion, trend patterns |
| /Fodda Ben Dietz | Ben Dietz's expert graph — separate data pipeline |
| /Fodda CE | Consumer Electronics & Design expert graph — Piers' graph |
| /Fodda Sales | CRM / sales tooling — powers the Streak integration and pipeline tracking |
| /Fodda Website | Marketing site — www.fodda.ai / www.psfk.com portals, deployed to App Engine |
Because agents have no persistent memory, I created files and workflows to maintain consistency across 8 codebases.
Every deployment is logged in CHANGELOG.md with exact filenames changed. This serves as both audit trail and onboarding doc.
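As a sketch of how that logging could be automated, here's a small Node/TypeScript helper that appends a structured entry to CHANGELOG.md. The entry format and field names are my assumptions, not Fodda's actual schema.

```typescript
// Hypothetical helper: append a deployment record to CHANGELOG.md so the
// next agent session starts with an accurate memory of what changed.
import { appendFileSync } from "node:fs";

interface DeployEntry {
  date: string;      // ISO date of the deployment
  workspace: string; // e.g. "/Fodda API"
  summary: string;   // one-line description of the change
  files: string[];   // exact filenames changed
}

// Render one changelog entry; the heading and list shape are assumptions.
function formatEntry(e: DeployEntry): string {
  const fileList = e.files.map((f) => `- ${f}`).join("\n");
  return `## ${e.date} ${e.workspace}\n${e.summary}\n\nFiles changed:\n${fileList}\n\n`;
}

// Append the rendered entry to the workspace's CHANGELOG.md.
function logDeployment(changelogPath: string, entry: DeployEntry): void {
  appendFileSync(changelogPath, formatEntry(entry));
}
```

An agent's `/update-changelog` workflow could call `logDeployment("CHANGELOG.md", …)` as the final step of every deploy.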
BACKBURNER.md captures deferred ideas so nothing falls through the cracks.
Every workspace has a .agents/workflows/ directory with slash-command-style instructions (like /deploy) so Antigravity executes tasks reliably without re-explaining them.
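For illustration, a workflow file like /deploy might read something like this. The steps are hypothetical; only the `.agents/workflows/` convention and the `/deploy` name come from this post.

```markdown
<!-- .agents/workflows/deploy.md (hypothetical contents) -->
# /deploy
1. Run the test suite; stop and report if anything fails.
2. Build the production bundle.
3. Deploy the service to Cloud Run.
4. Append an entry to CHANGELOG.md listing the exact files changed.
5. Report the deployed revision and URL.
```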
When I work, I ask the agent in each folder to do the work that affects that repo. It picks up a tailored brief, finds the context, and goes — stopping me from making blind mistakes.
When the API changes, that agent leaves a structured handoff note telling the App or MCP agent exactly what APIs changed schema so they can adapt.
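A handoff note of that kind might look like this sketch; the format and the specific change are my invention.

```markdown
<!-- handoff note left by the /Fodda API agent (hypothetical format) -->
## Handoff: /Fodda API → /Fodda, /Fodda MCP
- Changed: GET /signals now returns `category` as an array instead of a string.
- Action for /Fodda: update the frontend wherever `category` is rendered.
- Action for /Fodda MCP: update the tool schema so connected assistants see the new shape.
```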
A Living Rosetta Stone
Every workspace maintains "Agent Bibles" — ecosystem_overview.md, product_and_system_reference.md, and system_clarifications.md. I copy these to other workspaces and feed them to Claude and ChatGPT to serve as co-pilots.
What I actually focus on: turning messy inputs (calls, notes, decks) into structured prompts.
Getting the AI to truly understand intent and the problem before writing any execution instructions.
Loading inputs into a "project" and asking the LLM to design the architecture first.
Iterating until it produces usable, structured prompts to feed the coding agents.
If the problem is unclear, the output will be wrong.
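For illustration, a finished execution brief for a workspace agent might look like this; the task and its details are hypothetical.

```markdown
# Brief for the /Fodda API agent (hypothetical)
Goal: expose semantic search to Copilot Studio.
Context: the MCP server already wraps this API; see ecosystem_overview.md.
Task: add an endpoint that takes a query string and returns the top-k graph
matches with similarity scores.
Constraints: no breaking changes to existing routes.
Done when: the endpoint returns the existing JSON envelope and tests pass;
log the deploy in CHANGELOG.md.
```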
My workflow exposes the limitation: things break, I don't fully understand the stack, and I rely on another LLM to debug. Expect errors and avoid over-trusting outputs.
The pattern throughout is agentic speed with human product judgement — I define *what* Fodda should be and write the briefs, Antigravity handles the technical execution.
I pull together messy inputs: sales calls, meeting notes, product ideas, bugs, and external user feedback.
I use a thinking system (Claude/ChatGPT) to interpret the problem, define success, and clarify the constraints.
I write precise execution briefs for the specific workspace agents (e.g. "Update the API code to support Copilot").
I pass the brief to Antigravity inside the correct codebase. The agent inspects the code context and executes.
I validate the output. Does it work? Is it what I wanted? If things break, I use another LLM to debug and refine.
I run automated workflows (like `/deploy`) to push to production, and update the memory layer (`CHANGELOG.md`).
This is advanced, but powerful. Turn repeated actions into named workflows and reusable operators.
I've started with /deploy and /update-changelog — the goal is to do more.
Right now, I use AI to build faster.
The next step is using AI to build better — without me.