Yury Selivanov recently released lat.md, a knowledge graph for your codebase, written in markdown. The tool itself sounds useful enough, but checking it out and working out what it would provide for your code proved more illuminating than the tool itself.
The problem is in how agent instructions are laid out. Typically there's a CLAUDE.md or an AGENTS.md, or something like that, depending on the LLM you choose to work with. (Obviously your mileage may vary in wildly amusing ways.) But this file is limited by virtue of being a single file: everything's in one place. Skills can help, by migrating specific agent functionality and instructions into their own scopes, but that doesn't cover design decisions or architectural constraints, so agents fill the gaps with confident, plausible, wrong code. That wastes time if a human catches the errors, wastes money if nobody does, and in the worst case deploys wrong or harmful code.
lat.md attempts to fix this by providing a deeper graph structure, cross-references, and - most usefully - a lat check command that can fail the build when documentation drifts out of sync with the source.
That last part is the genuinely new contribution. Everything else is discipline with better syntax.
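As a sketch of what that enforcement looks like in practice, here is a hypothetical CI step that gates the build on the lat check command. The command name comes from the tool; the workflow shape, step names, and the assumption that a drift failure surfaces as a nonzero exit code are all mine:

```yaml
# Hypothetical GitHub Actions job: fail the build when docs drift from source.
# Assumes lat check exits nonzero on drift - the installation step is a placeholder.
name: docs-drift
on: [pull_request]
jobs:
  lat-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install lat (placeholder command)
        run: pip install lat-md
      - name: Verify docs match source
        run: lat check   # nonzero exit here fails the whole job
```

The point is less the specific YAML than the shape: documentation drift becomes a red build, the same way a failing test does.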
Therein lies an actual problem definition, one that sheds more light than a tool's simple existence might.
Most agent configuration files are weak not because they lack the right format or the right tooling - they're just Markdown - but because few sit down and do the self-inventory required to write them well. The files that actually work - the ones where an agent stays in its lane, respects constraints, doesn't helpfully disable tests that are inconveniently failing - read less like documentation and more like what you'd tell a capable junior developer on their first day. Or week.
Let's say you demand tests. You demand the full suite over individual runs - it's not enough for a new test to pass; the entire suite has to pass. You do not disable functionality because it's inconvenient. You keep documentation current when you change behavior. This is how this codebase is structured, this is why, and these are the things that are not negotiable.
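Written down as a file, that paragraph might become something like the following hypothetical AGENTS.md section. The rules are the ones just stated; the wording, layout, and any command or path names are placeholders, not something any tool prescribes:

```markdown
## Non-negotiables

- Run the entire test suite, not just the new test. A change is done
  only when the full suite passes.
- Never disable, skip, or delete a failing test to make the build
  green. If you believe a test is wrong, say so and stop.
- When you change behavior, update the documentation that describes
  that behavior in the same change.
- Architectural boundaries are explained in the design docs. Read the
  "why" before proposing to move one.
```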
That's not a prompt. That's onboarding.
The reason most people don't write it that way is that it requires knowing, clearly enough to write down, what you actually care about, what your codebase's non-negotiable invariants are, and what a smart developer would need to know to work on it without breaking things you'd have to fix later. That self-inventory is harder than it sounds. It requires the same honest examination that effective mentorship requires. It requires choosing what you're doing before just doing it.
It also means that nobody else's configuration file works for you. The specific constraints that matter on your project are yours - artifacts of your architecture, your history, your accumulated decisions. A template gets you started; it doesn't get you there.
Tools like lat.md become useful downstream of this work. Once you know what matters and have written it down clearly, enforcement tooling and graph traversal and semantic search start to add real value. Before that, they're infrastructure for a foundation that hasn't been poured yet.
Start with what you'd tell the junior developer. Be specific. Be honest about what breaks when people don't know it. The file that results will probably be too personal to publish as a template - your author's has odd things like music and literature preferences - and that's how you'll know you did it right.
For the record, lat.md does work well - using an LLM on a codebase that's had lat.md applied to it does create an interaction with design documents that helps enforce compliance with requirements. Good job, Yury.