What if an AI Agent Was Just a Git Repo?
Andy Smith
Every AI agent framework I’ve seen makes the same pitch: here’s our SDK, write your agent in our way, deploy on our infra, pay us money.
The agent becomes inseparable from the framework. Want to switch LLMs? Rewrite. Move to a different cloud? Rewrite. Version control your agent’s evolution? Well, you can version control the code that defines the agent, but the agent itself — its personality, its memory, its configuration — lives somewhere in a database you don’t control.
I think there’s a better way. I’m going to build it in public and document everything — the wins, the dead ends, the moments where I stare at a terminal wondering why nothing is happening.
The core idea
An AI agent is a thing that has:
- A name
- A personality (system prompt)
- Ways to talk to the world (transports)
- Memory
That’s it. Everything else — which LLM it runs on, how the container is built, how secrets are managed — is infrastructure. Important infrastructure, but not the agent’s identity.
So what if the agent’s identity lived in a Git repo? A config file that says who the agent is, and nothing more:
agent = {
  name = "Ada";
  system-prompt = "
    You are Ada, a helpful assistant.
    You respond in the same language
    the user writes to you.
  ";
};
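The "ways to talk to the world" from the list above would live in the same file. The `transports` field and the Telegram value here are my guess at a shape, not settled syntax:

```nix
agent = {
  name = "Ada";
  system-prompt = "...";
  # Hypothetical: declare where the agent can be reached,
  # without saying anything about how messages are handled.
  transports = [ "telegram" ];
};
```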
Why Git?
Git already solves half the problems we’re trying to solve with AI agents:
Version history is agent evolution. Every commit is a snapshot of who the agent was at that point. Changed the system prompt? That’s a commit. Added a new skill? That’s a commit. The agent’s entire life is in git log.
Branches are experiments. Want to try a different personality? Branch. If it works, merge. If it doesn’t, delete the branch. No one gets hurt.
Pull requests are change review. This gets interesting when the agent itself can open PRs. “Hey, I think my system prompt should be updated because…” — and a human reviews it before it goes live.
Forks are reproduction. Want a new agent based on an existing one? Fork the repo. Tweak the config. You now have a new agent with a shared ancestry, and you can even pull upstream improvements.
None of this requires inventing anything. Git has been doing this for 20 years.
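The whole lifecycle above can be sketched with everyday git commands. The file name `agent.nix`, the branch name, and the commit messages are mine, not settled conventions:

```shell
# A sketch of an agent's life as git history.
# agent.nix, the branch name, and commit messages are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -qb main
git config user.name "example"
git config user.email "example@example.com"

# Birth: the first commit is the agent's identity file.
cat > agent.nix <<'EOF'
agent = {
  name = "Ada";
  system-prompt = "You are Ada, a helpful assistant.";
};
EOF
git add agent.nix
git commit -qm "Ada is born"

# Branch = experiment: try a different personality.
git checkout -qb experiment/terse
sed -i 's/helpful/terse/' agent.nix
git commit -qam "Try a terser personality"

# It worked, so merge; if it hadn't, we'd just delete the branch.
git checkout -q main
git merge -q experiment/terse

git log --oneline   # the agent's entire life so far
```

Forking is the same move at repo scale: `git clone` an existing agent, edit the config, and keep pulling upstream when the original improves.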
Why declarative config?
Code says HOW. Config says WHAT.
“I am Ada, I speak via Telegram” is a statement of identity. It’s not an implementation detail. The moment you write def handle_message(msg): you’ve coupled the agent to a specific runtime, a specific language, a specific way of doing things.
A declarative config is a contract. It says what the agent needs without saying how to provide it. This means we can swap out everything underneath — today a bash script, tomorrow a proper agent runtime, next month a completely different LLM provider. Same config. Same agent. Different machinery.
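As a toy illustration of "same config, different machinery": a disposable shell runtime that knows nothing about the agent except how to read the contract. The file name `agent.nix` and the extraction logic are my own sketch, not the project's actual tooling:

```shell
# Hypothetical: the agent's identity file, checked out from its repo.
cat > agent.nix <<'EOF'
agent = {
  name = "Ada";
  system-prompt = "You are Ada, a helpful assistant.";
};
EOF

# A throwaway "runtime" that only understands the contract, not the machinery.
# Tomorrow this script can be replaced wholesale; agent.nix stays the same.
name=$(sed -n 's/^ *name = "\([^"]*\)";/\1/p' agent.nix)
echo "Starting agent: $name"   # → Starting agent: Ada
```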
What’s next
The first step is simple: define an agent as a config file and see if we can make it run. Everything after that — I don’t know yet. That’s the point.
Why build in public?
Because the interesting part isn’t the final product. It’s the decisions along the way.
Why did we choose X over Y? What broke when we tried Z? What assumption did we have that turned out to be wrong?
Every AI agent blog post I’ve read shows you the happy path. Here’s the framework, here’s the tutorial, here’s your agent in 5 minutes. Nobody shows you the 6 failed attempts to build a rootless container, or the 3 hours spent debugging why an LLM silently hangs when a directory isn’t writable.
I’ll show you all of it. Each post will follow the same structure:
- Hypothesis: what we think should work
- Increment: what we actually built to test it
- Result: what happened, including everything that went wrong
Let’s see what breaks.