AI does not usually fail because the model is useless.
It fails because the model stays plausible for too long.
01
Place AGENTS.md in the root of your project repository.
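There is no required shape for the file itself. As a minimal sketch, assuming a contract built around the posture described below (the wording and structure here are illustrative, not prescribed):

```
# AGENTS.md

Before acting: read enough surrounding context to close the real
object, not just the obvious surface. Distinguish actual state from
intended state.

During the work: surface any move that expands scope, risk, or
structure before executing it. Stop local fixes when the underlying
state is already incoherent.

Closing: end with a task-state readout covering Touch, Ground,
State, and Convergence.
```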
02
Start a fresh session in an assistant that can read local workspace files. Tell the assistant to read AGENTS.md before acting.
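The exact wording does not matter; a hypothetical opening message might read:

```
Read AGENTS.md at the repository root before doing anything else,
and treat it as the governing contract for this session.
```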
03
Expect more discipline and stronger guardrails, not certainty or automatic correctness. The contract works best when it stays active as the frame for the work instead of being treated as a one-time setup step.
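One way to keep it active is to re-anchor mid-session when the work starts to drift; a hypothetical nudge:

```
Re-read AGENTS.md. Before the next move, state which invariant
applies and whether the current frame still holds.
```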
Without a governing contract, a typical AI session drifts in familiar ways.
The assistant recommends before it has really closed the target. It edits before reading enough surrounding context. It treats a file-level change as if it had no downstream consequence.
It keeps trying local fixes on a state that is already incoherent. It reports progress that sounds cleaner than the underlying reality.
It can also lock onto the first plausible path too early, treating an early answer as if it were already a conclusion.
Chaotic inputs enter on the left; the contract tries to force the session through regulation planes before output closes.
Push the assistant to close the real object before touching the obvious surface.
Moves that expand scope, risk, or structure are meant to be surfaced before execution, not after.
Push the return away from plausible narrative and closer to grounded task state.
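For the second of these, a sketch of what a surfaced move might look like before execution (the scenario is hypothetical):

```
Proposed move: renaming the config loader, which is imported by three
modules outside the reported bug. This expands scope and structure.
Confirm before I execute.
```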
The task is treated as a point inside a larger volume, not as an isolated request. The assistant is pushed to close the real object, distinguish actual state from intended state, and avoid converging too early on its first workable reading.
The invariants turn posture into repeatable constraints. They push the assistant to close the real object before acting, keep authority grounded in the right order, preserve operating mode, and avoid promoting one local need into permanent structure too early.
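Written down, the invariants might read something like this (hypothetical wording, assembled from the posture above):

```
INV-1  Close the real object before acting on the obvious surface.
INV-2  Keep authority grounded in the right order: actual state
       outranks intended state, and both outrank plausible narrative.
INV-3  Preserve the operating mode across moves.
INV-4  Do not promote one local need into permanent structure early.
```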
The contract does not reward plausible continuation when the frame is still materially wrong. It blocks continuation when context, object, authority, destination, path closure, or the contract frame itself is degraded in a way that changes the move.
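A blocked continuation is, in this frame, a legitimate output; a hypothetical example:

```
Blocked: the destination is unresolved. The request assumes a staging
config that I have not inspected. Closing that object before
continuing.
```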
The final response is pushed toward a compact task-state readout instead of a polished retrospective narrative. It closes with fields that keep pressure on what was touched, what grounded the move, what is actually known, and whether the task has really converged.
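Concretely, a closing readout might look like the sketch below (the file names and statuses are hypothetical); the four fields are defined after it:

```
Touch: edited src/retry.py; tests and config untouched
Ground: the failing job's traceback and the retry spec in docs/
State: fix verified by the failing test; timeout path not inspected
Convergence: converged for the reported failure; timeout path unresolved
```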
Touch - what changed and what stayed untouched within expected scope
Ground - what the move or conclusion was grounded on
State - what is verified, inferred, unresolved, or not inspected
Convergence - whether the task is converged, divergent, or blocked

Licensed under CC BY-SA 4.0