Keeping your AI code assistants under control...
As AI tools like Claude Code, Cursor, and Windsurf become integral to our workflows, they bring incredible productivity gains. But they also introduce new risks. Without proper guardrails, an AI tool might make unauthorised changes, ignore project-specific conventions, or create security vulnerabilities. Each project ends up with its own interpretation of "good code," making onboarding a nightmare and code reviews painful.
That's why I created my dev standards project — a centralised governance system that brings order to the chaos. Think of it as a rulebook for both humans and AI tools, ensuring everyone follows the same high-quality standards across every project. It's experimental at the moment, but I have introduced it to a few repos - both at work and in personal projects - and it seems to have helped with some of our common problems.
The beauty of the system lies in its simplicity. Projects can pull the latest standards, getting only what's relevant to their technology stack. A Python project gets Python-specific rules about FastAPI and Pydantic. A JavaScript project gets JavaScript guidelines. And crucially, all AI tools read a .ai-context file that tells them exactly what they can and cannot do.
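To make the "pull only what's relevant" idea concrete, here's a sketch of what a stack-aware sync step could look like. The repo layout, file names, and marker files below are all my invention for illustration, not the actual project's structure — it just demonstrates the principle of detecting the stack and copying only the matching rules.

```shell
#!/bin/sh
# Sketch of a stack-aware standards sync. All paths and layout are
# hypothetical, standing in for a clone of the central standards repo.
set -eu

# Stand-in for a local clone of the central standards repo.
STANDARDS=$(mktemp -d)
mkdir -p "$STANDARDS/common" "$STANDARDS/python" "$STANDARDS/javascript"
echo "# General rules" > "$STANDARDS/common/general.md"
echo "# FastAPI / Pydantic rules" > "$STANDARDS/python/python.md"
echo "# JS rules" > "$STANDARDS/javascript/javascript.md"

# Stand-in for the project being synced: a Python project,
# identified by the presence of a pyproject.toml.
PROJECT=$(mktemp -d)
touch "$PROJECT/pyproject.toml"
cd "$PROJECT"

# The sync itself: always take the common rules, then add only the
# rule sets whose stack markers exist in this project.
DEST=./.ai/rules
mkdir -p "$DEST"
cp "$STANDARDS"/common/*.md "$DEST"/
[ -f pyproject.toml ] && cp "$STANDARDS"/python/*.md "$DEST"/
[ -f package.json ] && cp "$STANDARDS"/javascript/*.md "$DEST"/ || true

ls "$DEST"
```

Run against the fixture above, the project ends up with the common and Python rules but no JavaScript ones — the same effect as a Python project pulling only Python-specific guidance.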
What makes this system powerful is that it establishes clear boundaries without killing productivity. AI tools know they should never auto-commit code, modify dependencies, or delete files without explicit human permission. They know which testing frameworks to use, how to structure logging, and what naming conventions to follow. A bonus feature is that this folder is intended to be committed into the repo, so devs (and AIs) can't ignore it. I've definitely had experiences with devs who don't install pre-commit and cause all manner of problems.
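For a flavour of what those boundaries might look like in practice, here's an illustrative sketch of the guardrail portion of such a file. The wording, structure, and specific conventions (pytest, snake_case, and so on) are my own made-up example, not a format any tool mandates:

```markdown
# .ai-context — rules for AI coding assistants (illustrative sketch)

## Hard boundaries — always ask a human first
- Never commit or push code automatically.
- Never add, remove, or upgrade dependencies.
- Never delete files.
- These rules apply to NEW code only; do not refactor existing code unprompted.

## Conventions
- Tests: pytest, placed under tests/, mirroring the source layout.
- Logging: use the standard library logger; no print() in library code.
- Naming: snake_case for functions and variables, PascalCase for classes.

Full standards: see ./dev-standards/ in this repo.
```

The point is less the specific rules than that they live in one committed file every tool and every human reads.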
Of course it's not completely foolproof. I have definitely been surprised by Claude Code creating a bunch of commits even though it's not supposed to (I authorised it one time...). I also had to be very clear that these instructions were for new code only, because left to its own devices Claude will refactor your whole codebase and break everything. I needed that to be strongly discouraged!!
This is the kind of thing that will definitely become standardised as time moves on. Right now I've not managed to find a "best practices" way to do this, hence experimenting with my own. I've had to do some fiddling around trying to make different tools pick up the same docs (at work we have users of Claude Code, Windsurf, Cursor and other more niche tools like OpenCode and Aider). I've got the .ai-context doc in the root that's supposedly picked up by most tools automatically and directs them towards the main folder. I also have a little chunk of text added to a project's readme that should hopefully help those tools that don't find the other files.
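That readme fallback can be as simple as a short, clearly labelled section. Something along these lines — the wording here is mine and hypothetical, but a plainly titled section gives tools that only ingest the readme a fighting chance of finding the real rules:

```markdown
## For AI coding assistants

Before making any changes, read `.ai-context` in the repository root and
follow the standards in `./dev-standards/`. Do not commit, change
dependencies, or delete files without explicit human approval.
```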
I want to add an extra feature that manages project specific docs (think CLAUDE.md or AGENTS.md) but I've not quite figured that out yet - they can be a little territorial about these files...!
Anyway. I would not recommend just blindly adding my docs to your project; you will have different ideas of "good" and of what sort of tools you want to use. I just wanted to share the idea. If you fancy it, fork the repo or copy it and try it for yourself.