Peel Is on GitHub. Now the Team Shares the Context
Peel is on GitHub: cloke/peel. Downloadable binaries are next.
This is the part where I'm supposed to say "yay, another AI app." You yawn. I get it. I'd yawn too.
I didn't build Peel because the world needed one more AI product with a clever name and a feature list. I built it because I kept watching agents burn tokens on bad retrieval, miss obvious code paths, and make the same dumb mistakes across every machine on a team. Each laptop rediscovering the codebase from scratch like it was the first day of school. Every time.
From Grep to Graph was about fixing that on one machine. Better chunking. Local embeddings. A dependency graph instead of grep. It worked. Agents stopped guessing so much. Context got smaller and more useful.
Then the obvious next problem walked in and sat down.
Every other machine on the team was still doing all of that work from zero.
One Machine Learns. The Rest Benefit.
Picture this. You have a Mac Studio in the office with 192GB of unified memory and nothing better to do at 2am. You have three MacBooks out in the world. Every single one of them is independently generating embeddings, running analysis, and enriching the same codebase. Same repo. Same files. Same expensive computation. On every box.
That's a waste of electricity and patience.
Let the Studio do the heavy lifting. Let the MacBooks pull the result and get on with their day.
Peel syncs RAG artifacts across machines. One machine indexes and analyzes the repo. The others get the useful parts without redoing any of the expensive work locally. Two sync modes.
Full sync moves everything: chunks, embeddings, analysis. Useful if you're setting up a new machine from nothing.
Overlay sync is the one that matters for teams. It moves embeddings and AI analysis without shipping chunk text, then matches that data onto a locally indexed repo by file content hash and line range. The local machine already has the code. It doesn't need the text again. It just needs the intelligence layer on top.
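The matching step is easy to sketch. In rough Python (the names, the hashing choice, and the dict shapes are mine for illustration, not Peel's actual internals): overlay artifacts are keyed by file content hash plus line range, and they only attach to local chunks whose key still matches.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class ChunkKey:
    file_hash: str   # hash of the file's content
    start_line: int
    end_line: int

def file_hash(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def apply_overlay(local_index: dict, overlay: dict) -> int:
    """Attach remote embeddings and analysis to locally indexed chunks.

    Only chunks whose (file content hash, line range) key exists locally
    receive overlay data. Everything else is skipped, so artifacts for a
    file that changed locally never land on the wrong code.
    """
    applied = 0
    for key, artifacts in overlay.items():
        chunk = local_index.get(key)
        if chunk is None:
            continue  # no local match: file edited or chunked differently
        chunk["embedding"] = artifacts.get("embedding")
        chunk["analysis"] = artifacts.get("analysis")
        applied += 1
    return applied
```

The point of keying on content hash rather than path alone: if you've edited a file since the Studio indexed it, the stale intelligence silently doesn't apply instead of mislabeling your new code.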
I just pulled an overlay sync for the Peel repo from my Mac Studio. 19.7MB transferred over WebRTC. Ten seconds. That covered 3,807 chunks across 444 files and 111,000 lines of code, with all the embeddings and analysis the Studio had computed. The MacBook didn't regenerate any of it.
So Tuesday morning, you open the MacBook. Peel picks up the overlay the Studio pushed overnight. Before you've finished coffee, agents on your machine are working with analysis you never waited for. You ask "where does the authentication flow actually start?" and get back results shaped by richer analysis that happened on a machine with ten times the headroom. No cloud. No API calls. Just 20MB of intelligence that somebody else's hardware already paid for.
One guardrail worth mentioning. If a machine already has embeddings from a different or better model, Peel skips the embedding write and only applies the analysis data. No silent downgrade because the wrong box happened to sync first. I got burned by that scenario early enough to make it paranoid about model provenance.
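The guardrail amounts to a merge rule. A minimal sketch (field names are hypothetical, not Peel's schema): analysis always applies, but embeddings only land when the local side has none from a different model.

```python
def merge_artifacts(local: dict, incoming: dict) -> dict:
    """Apply incoming analysis unconditionally; accept incoming
    embeddings only if the local side has no embeddings from a
    different model. A sync from a machine running a weaker
    embedding model must never overwrite better local vectors.
    """
    merged = dict(local)
    merged["analysis"] = incoming.get("analysis", local.get("analysis"))

    local_model = local.get("embedding_model")
    if local_model is None or local_model == incoming.get("embedding_model"):
        merged["embedding"] = incoming.get("embedding")
        merged["embedding_model"] = incoming.get("embedding_model")
    # else: keep local embeddings and skip the write entirely
    return merged
```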
The Swarm Is Not Hype Glue
"Distributed swarm" is the kind of phrase that gets stupid fast.
So here's the plain version.
Peel has a local network mode using Bonjour. Start it on one machine and other Peel instances on the LAN discover it. No config file. No central server. No YAML ceremony. Just mDNS doing what mDNS was always supposed to do.
It also has a Firestore-backed mode for coordination across networks, for when Bonjour can't reach.
Right now my swarm has seven machines across three locations. The Mac Studio ("Bender Bending Rodriguez") sits in the office with 499MB of RAG artifacts across 20 repos. Three MacBooks connect over LAN or WAN depending on where they are. Heartbeats every few seconds over WebRTC. The whole thing runs on Futurama names because if you're going to name your compute nodes, at least have some dignity about it.
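Liveness from periodic heartbeats is old, boring plumbing, which is the point. A sketch of the bookkeeping (the 15-second timeout and the class shape are my assumptions, not Peel's actual values): record the last beat per peer, and a peer is live if it beat recently enough.

```python
import time

HEARTBEAT_TIMEOUT = 15.0  # seconds; a few missed beats marks a peer offline

class SwarmRoster:
    """Track which peers are alive based on recent heartbeats."""

    def __init__(self):
        self.last_seen = {}  # peer name -> timestamp of last heartbeat

    def heartbeat(self, peer, now=None):
        """Record a heartbeat; `now` is injectable for testing."""
        self.last_seen[peer] = time.monotonic() if now is None else now

    def live_peers(self, now=None):
        """Peers whose last heartbeat is within the timeout window."""
        t = time.monotonic() if now is None else now
        return sorted(p for p, seen in self.last_seen.items()
                      if t - seen <= HEARTBEAT_TIMEOUT)
```

A peer that drops off the LAN just ages out of the roster; nothing has to tear it down, and it reappears the moment its next beat arrives.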
The swarm isn't there to hand-wave about parallelism. It's the plumbing that lets machines share work, share artifacts, and stay in sync. I don't care that I can say "swarm." I care that code understanding no longer lives and dies on one box.
Why GitHub. Why Now.
Because it already does real work.
This isn't a landing page with a waitlist and a dream. I use Peel every day. It's been dogfooded hard enough that the rough edges are at least honest rough edges, not hypothetical ones.
Being on GitHub means you can inspect the architecture, read the code, and follow the direction before downloadable builds land. It also means I can stop pretending this is a side project. It crossed into daily-driver territory months ago.
The repo is Fair Source. I want people to inspect it, use it, and understand how it works. I also want a path to make the work sustainable. Those two things aren't in conflict.
What's Next
Binaries. The repo is there if you want to read code. If you'd rather not open Xcode, downloadable builds are what I'm working on next.
The argument hasn't changed since the last post. Local code understanding got better when it stopped being grep. Team code understanding gets better when that work stops being trapped on one machine.
The repo is cloke/peel. Go look.
Last Updated: March 14, 2026
AI Transparency: This post was written with Claude Opus. Opus wrote substantial portions of the draft. I provided the technical details, directed the argument, and refined the voice. All product claims and swarm data come from the actual Peel codebase and live infrastructure.