# Your Agent Memory Is a Junk Drawer

2026-04-02 · George Moon
We were building the Forkzero marketing site with Claude Code when we saved a preference: "never run ad-hoc AWS commands — always script infrastructure changes."
Claude stored it as a markdown file. A name, a type, a rule, a reason, and a scope. Simple. Useful. And missing everything that would make it trustworthy over time.
## What's in the file
The memory captures a rule and a rationale. It works for the next conversation. If you squint, it looks like a lightweight specification.
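For concreteness, here is a hypothetical sketch of such a memory file, built from the fields listed above (the name, wording, and layout are illustrative, not the actual stored file):

```markdown
# Memory: script-infra-changes

- **Type:** preference
- **Rule:** Never run ad-hoc AWS commands; always script infrastructure changes.
- **Reason:** Scripted changes are reproducible and reviewable.
- **Scope:** infrastructure
```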
## What's not in the file
- **No link to the incident.** We saved "why" as prose, but there's no traceable reference to what happened. A month from now, we won't know whether the original incident still applies or whether it was a one-off.
- **No connection to related decisions.** This preference affects how we build infrastructure, which relates to deployment strategy, which relates to CI/CD design. Those relationships don't exist in the memory system.
- **No version.** The memory was written once. If our infrastructure approach changes — maybe we move to Terraform and ad-hoc scripts become irrelevant — nothing flags this memory as stale.
- **No drift detection.** If a related memory changes ("we now use CDK for all infrastructure"), this memory should be reviewed. But there are no edges between memories, so there's no way to know.
## The pattern is familiar
This is the same problem we built Lattice to solve for codebases. Replace "memory" with "requirement" and the gap is identical:
| Agent memory | Lattice |
|---|---|
| Flat file, no relationships | Typed nodes with versioned edges |
| Prose "why" inline | Source node with traceable evidence |
| No version tracking | Semantic versioning on every node |
| No staleness detection | Drift detection flags outdated edges |
| One rule per file | One rule per node, connected to the chain of reasoning |
If we had modeled that preference in Lattice, the trace would look like this: a source capturing the incident, a thesis that all infra changes must be scripted, and two requirements — "never run ad-hoc CLI commands" and "all setup scripts must be idempotent" — both derived from that thesis.
Now if the thesis changes — we adopt Terraform and the "scripts" thesis becomes "IaC modules" — both requirements flag for review. The preference doesn't silently rot.
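That trace can be sketched as data. The node and edge types below are a minimal illustration, not Lattice's actual schema or API; the drift check is deliberately naive — an edge is stale when its upstream node has moved past the version the edge was created against:

```python
from dataclasses import dataclass

@dataclass
class Node:
    id: str
    kind: str          # "source" | "thesis" | "requirement"
    text: str
    version: int = 1

@dataclass
class Edge:
    src: str           # upstream node id
    dst: str           # downstream node id
    src_version: int   # upstream version this edge was created against

nodes = {
    "incident-1": Node("incident-1", "source", "Ad-hoc AWS command incident"),
    "thesis-1":   Node("thesis-1", "thesis", "All infra changes must be scripted"),
    "req-1":      Node("req-1", "requirement", "Never run ad-hoc CLI commands"),
    "req-2":      Node("req-2", "requirement", "All setup scripts must be idempotent"),
}
edges = [
    Edge("incident-1", "thesis-1", 1),
    Edge("thesis-1", "req-1", 1),
    Edge("thesis-1", "req-2", 1),
]

def stale(nodes, edges):
    """Flag edges whose upstream node has changed since the edge was made."""
    return [e for e in edges if nodes[e.src].version > e.src_version]

# The thesis changes: "scripts" becomes "IaC modules".
nodes["thesis-1"].version = 2
nodes["thesis-1"].text = "All infra changes must be IaC modules"

# Both requirements sit downstream of the changed thesis and get flagged.
print([e.dst for e in stale(nodes, edges)])  # ['req-1', 'req-2']
```

The version pinned on each edge is what makes staleness detectable: without it, a change to the thesis is indistinguishable from no change at all.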
## This isn't just about agents
Every knowledge system that accumulates decisions over time has this problem:
- **Research workflows.** A literature review informs a hypothesis, which drives experiment design. When new papers challenge the hypothesis, which experiments need revisiting?
- **Compliance documentation.** A regulation informs a policy, which derives specific controls. When the regulation is updated, which controls need review?
- **Product strategy.** Market research informs positioning, which derives feature priorities. When the market shifts, which features are building on stale assumptions?
These are all graph problems. Decisions form chains from evidence to action. The chains have versions. When links go stale, downstream work needs to know.
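The general shape, sketched with hypothetical node names from the research example above: once decisions are edges in a graph, "what needs revisiting?" is a plain traversal of everything transitively downstream of the changed node.

```python
from collections import defaultdict, deque

# Edges point from evidence to the decisions built on it (illustrative chain).
edges = [
    ("lit-review", "hypothesis"),
    ("hypothesis", "experiment-a"),
    ("hypothesis", "experiment-b"),
    ("experiment-a", "analysis"),
]

def needs_review(changed, edges):
    """BFS: collect every node transitively downstream of a changed node."""
    children = defaultdict(list)
    for src, dst in edges:
        children[src].append(dst)
    seen, queue = set(), deque([changed])
    while queue:
        for nxt in children[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(needs_review("hypothesis", edges)))
# ['analysis', 'experiment-a', 'experiment-b']
```

Swap the node names for regulations and controls, or market research and features, and the traversal is unchanged.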
## The missing layer
Most knowledge tools optimize for storage and retrieval. Notion stores pages. Confluence stores docs. Agent memory stores preferences. RAG retrieves similar text.
None of them track **why a decision was made, what evidence supports it, and whether that evidence is still current.**
That's not a documentation problem. It's a coordination problem. And it's the problem Lattice was built to solve — whether the knowledge lives in a codebase, an agent's memory, or a team's decision log.
## Get started
```sh
curl -fsSL https://forkzero.ai/lattice/install.sh | sh
```
- [Lattice on GitHub](https://github.com/forkzero/lattice)
- [Live dashboard](https://forkzero.ai/reader?url=https://forkzero.github.io/lattice/lattice-data.json)
- [Forkzero](https://forkzero.ai)