AI Amnesia
The Amnesia Problem
Picture my day: I'm working away happily in a codebase and I need Claude to help me with something, so I spend ten minutes of the conversation explaining how the system works - how the digest pipeline delivers emails, why the metrics API has that weird caching layer, what the reconciliation script actually does and why we built it that way...
Claude gives a great answer. We get the work done. I close the session.
Next day, new session, new problem in the same codebase... and Claude has no idea what a digest is.
I have the memory of a goldfish, and it turns out Claude does too, cos AI coding assistants are stateless.
But codebases are not. Every project accumulates layers of context - architectural decisions, tribal knowledge, "we tried that and it broke production" - and none of it survives between conversations. The tool forgets, and I have no hope of remembering... and it happens every time.
You Are the Context Engine
I put up with this for a while because the alternative seemed like it would be worse - some overengineered RAG pipeline or a vector database I'd have to babysit. But at some point I realised the thing that was actually bothering me wasn't the AI's memory. It was that I had become the memory.
Every session, I was the one doing the retrieval work. Pulling context out of my own head (or worse, scrolling through old conversations) and re-injecting it into a new chat window. I was the knowledge retrieval system. I was doing the one job the machine should be doing for me.
The AI wasn't the bottleneck. I was. And I was the bottleneck because the AI couldn't remember anything.
Once I framed it that way, the solution felt obvious: what if I just started writing things down in a way the AI could find? Documentation, right. Who knew :D
But I am a busy person. Nah that's a lie - I am a lazy person. I don't want to spend time building and maintaining some fancy system or some startup-grade retrieval pipeline. Given the native language of AI is markdown, and I've been keeping markdown notes in Obsidian for years... Could we just use markdown files on disk?
If I saved the output every time I asked Claude to give me an ELI5 of how something works, I could organise it by repo and add something on top that makes it searchable... That sounds perfect, right? MCP turned out to be the easy answer: it lets you plug custom tools into Claude Code, so I could build a small server that indexes those files and exposes them as searchable context. And there are libraries that create MCP servers which index (and vectorise, if you're feeling fancy) markdown files.
What I Actually Built
The foundation is embarrassingly simple: a folder of markdown files, organised by repository name.
Each file explains something - how a system works, how to do a specific task, why a decision was made, how to debug a common problem. They use YAML frontmatter — title, category, tags, date — so they can be filtered and searched in my Obsidian vault. But they're just markdown, right - you can read them in any editor, on GitHub, in a terminal.
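For a sense of the shape, here's what a typical note looks like. The frontmatter keys (title, category, tags, date) are the ones described above; the values are invented for illustration:

```markdown
---
title: How the digest pipeline delivers emails
category: explanation
tags: [digest, email, pipeline]
date: 2025-01-15
---

The digest pipeline batches overnight events, renders them into an email
template, and hands them to the delivery service. The batching exists
because sending per-event mails melted the SMTP relay.
```

Nothing clever: any tool that understands YAML frontmatter (Obsidian, a static site generator, a ten-line parser) can filter on those fields.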
This is deliberate. The files live in three places at once:
- Obsidian — so I can browse, search, and edit them like normal notes
- GitHub — for backup, version history, and (eventually) sharing
- An MCP server — which indexes them into a searchable database and exposes tools like `search_docs` and `get_doc` to Claude Code
The MCP server is a small Python service that chunks the documents at paragraph boundaries, stores them with their metadata (I'm using gnosis-mcp), and lets Claude search by keyword (or, optionally, by semantic similarity using an embedding model - OpenAI for me). When a conversation starts, Claude has access to a `search_docs` tool, so before it goes exploring a codebase from scratch it can check whether someone's already written up how that system works.
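The indexing step is less work than it sounds. Here's a minimal sketch of the two core pieces - paragraph chunking and keyword search - in plain Python. The real server delegates this to gnosis-mcp, so the function names here are my own, not its API:

```python
import re

def chunk_markdown(text, metadata):
    """Split a document at blank-line paragraph boundaries,
    attaching the doc's metadata to every chunk."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    return [{"text": p, **metadata} for p in paragraphs]

def search_docs(chunks, query):
    """Naive keyword search: rank chunks by how many query terms appear,
    dropping chunks that match nothing."""
    terms = query.lower().split()
    scored = [(sum(t in c["text"].lower() for t in terms), c) for c in chunks]
    return [c for score, c in sorted(scored, key=lambda s: -s[0]) if score > 0]
```

The semantic-similarity variant just swaps the term-counting for a cosine distance between embeddings, but for a few hundred docs keyword matching gets you surprisingly far.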
But the thing that makes this actually work isn't the search, it's the feedback loop.
I ask Claude about something, it explores the code, builds an explanation, gives me a good answer. I save that explanation to the knowledge base. Next time I ask about the same thing, the answer is already there. The knowledge base grows as a side effect of doing normal work. I never sat down and did a documentation sprint. I just kept working and kept saving the good explanations. Over a few months, that turned into 105 documents across 18 repositories.
I also got lazy about the "save that explanation" bit pretty quickly, cos even that felt like too much work. So I built a couple of Claude Code slash commands - one that saves the last explanation from a conversation, and one that generates an explanation and saves it in one step. Then I added an instruction telling Claude to just do it automatically after any deep-dive or ELI5. Now the knowledge base basically populates itself. I ask a question, get an answer, and the explanation gets filed away without me lifting a finger. It's the kind of automation that works precisely because it requires zero discipline.
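For anyone unfamiliar: Claude Code slash commands are just markdown prompt files dropped into `.claude/commands/`, so the "save" command is little more than a canned instruction. A sketch - the filename and wording are hypothetical, and `$ARGUMENTS` is Claude Code's placeholder for anything typed after the command:

```markdown
<!-- .claude/commands/save-explanation.md -->
Take the most recent explanation you gave in this conversation and save it
to the knowledge base as a markdown file with YAML frontmatter
(title, category, tags, date), in the folder named after the current repo.
Topic hint, if any: $ARGUMENTS
```

The "do it automatically" version is just the same instruction moved into the project's standing instructions file, so it fires after every deep-dive without me typing anything.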
Practically speaking, I did cheat slightly too. All our teams use AI in one way or another, and most have started keeping a /docs folder in their project root, so I wrote a tiny cron job to sync each repo's docs into my knowledge base dir on the regular, which lets me take advantage of other people's work too. This is especially good for repos I work in rarely - the experts write the docs, and I can then ELI5 the specific interactions I need and add those to the knowledge base.
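The sync really is tiny. A sketch of the step the cron job runs for each repo - paths and the schedule are assumptions about my layout, not anything standard:

```shell
# Copy one repo's /docs folder into a per-repo directory in the knowledge base.
# Usage: sync_repo_docs /path/to/repo /path/to/knowledge-base
sync_repo_docs() {
  repo="$1"; kb="$2"
  [ -d "$repo/docs" ] || return 0        # skip repos that don't keep docs
  name=$(basename "$repo")
  mkdir -p "$kb/$name"
  cp -R "$repo/docs/." "$kb/$name/"      # mirror docs into kb/<repo-name>/
}

# A crontab entry to run the wrapper script hourly might look like:
#   0 * * * * /usr/local/bin/sync-docs.sh
```

`rsync -a --delete` would be the fancier choice (it removes docs that were deleted upstream), but plain `cp` was enough to get started.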
Does It Work?
Honestly, I haven't measured anything. I don't have a controlled experiment where I timed sessions with and without the knowledge base. I don't have before-and-after graphs showing a productivity improvement. I have no metrics.
What I have is vibes, and my vibey vibe is this: I have stopped having to explain things over and over, and I appreciate that a LOT.
Instead of ten minutes of context-setting, Claude searches the knowledge base, finds the relevant doc, and we're immediately into the actual problem. It's not that the AI is smarter, it's that it has access to the same context I do.
And here's the bit I didn't expect: the knowledge base became useful independently of the AI. When a colleague asks me how something works, I send them the markdown file. When I come back to a system after a few weeks away, I read my own docs before diving into the code. Writing things down for the AI turned out to be writing things down for myself. The best documentation system is the one you actually use, and it turns out "save this so I don't have to explain it to Claude again" is an infinitely better motivator than "we should really document this."
What's Next
The knowledge base works well for one person. But the question that keeps nagging me is - what if it wasn't just mine?
Imagine onboarding a new dev and their AI assistant already knows where the bodies are buried - not because someone wrote an onboarding guide that went stale six months ago, but because the team's working knowledge accumulated naturally, the same way mine did, just shared.
The technical pieces are there. The files are in GitHub. The MCP server supports PostgreSQL for multi-user setups. But the hard problems aren't technical, they're human. Who curates? How do you keep 200 docs from going stale? How do you make contributing feel like a natural part of working, not like a chore bolted on top?
I haven't figured this out yet. But I think the direction is obvious - the most useful thing I've built around AI tooling started as a hack to stop repeating myself, and it's pointing towards something that looks a lot like institutional memory. The kind that doesn't live in one person's head and doesn't disappear when someone leaves the team.
I don't know what that looks like at scale. But I know the amnesia problem isn't just mine, and markdown files in a folder got me further than I expected.