Runtime reliability for AI coding agents
Kete (Māori: bag, kit) — pronounced "keh-teh"
Every AI coding tool has an instruction file. Claude Code has CLAUDE.md. Cursor has .cursorrules. Copilot has copilot-instructions.md.
They all share the same problem:
The longer the session, the less the agent follows them.
Kete is a runtime reliability layer that fixes this.
It pre-computes a structural map of your codebase, extracts your rules into a queryable knowledge graph, and enforces them at prompt time — before the AI agent has a chance to act incorrectly.
Instruction files are loaded once into the context window. As the session progresses — the agent reads files, writes code, runs commands — the instruction file recedes into distant context.
The model's attention to those rules degrades. This is not a quality issue with any particular model. It is an architectural limitation of how transformer attention works across growing context windows.
The consequences are real:
Bigger context windows don't fix this. They make it worse — longer sessions mean more time for degradation to accumulate.
Kete addresses instruction degradation at three layers, each independently valuable:
Kete pre-computes a knowledge graph of your entire codebase — every definition, call relationship, import chain, and dependency. When the AI agent needs to understand what a change affects, it consults the map instead of reading dozens of files.
Your rules are extracted into structured, queryable facts. Before each prompt reaches the AI agent, Kete intercepts it, identifies which rules are relevant, and injects them directly into context — at the top, where they can't be missed.
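A minimal sketch of that pre-flight step, in Python. The `Rule` store, the tags, and the keyword-overlap relevance test below are illustrative stand-ins for Kete's actual knowledge-graph lookup, not its real data model:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    text: str
    tags: frozenset  # topics this rule applies to

# Hypothetical rule store; in Kete these come from the knowledge graph.
RULES = [
    Rule("Database migrations must be auto-generated, never hand-written.",
         frozenset({"migration", "database", "schema"})),
    Rule("API responses must use the ResponseEnvelope shape.",
         frozenset({"api", "endpoint", "response"})),
]

def inject_rules(prompt: str) -> str:
    """Pre-flight check: prepend the rules relevant to this prompt."""
    words = set(prompt.lower().split())
    relevant = [r for r in RULES if r.tags & words]
    if not relevant:
        return prompt
    header = "\n".join(f"RULE: {r.text}" for r in relevant)
    return f"{header}\n\n{prompt}"

print(inject_rules("create a migration for the users table"))
```

Because the injection runs on every prompt, the rule is at the top of context at the moment it matters, regardless of how long the session has run.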
Beyond structure, Kete uses LLM analysis to understand what your code means — what functions delegate to, what constraints they enforce, what patterns they follow. This semantic layer is what makes rule enforcement work beyond simple keyword matching.
Most AI coding tools discover your codebase by reading files one at a time. They grep, open a file, read it, follow an import, open another file. This works for small changes but breaks down for anything that touches multiple parts of the system.
Kete takes a different approach: it ingests your entire codebase once and builds a structured knowledge graph. Every function definition, every call relationship, every import chain is extracted and indexed.
When the AI agent asks "what breaks if I change this function?", the answer comes from the graph in milliseconds:
The agent still reads source code. But Kete tells it which source code to read, and why.
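The shape of that query can be sketched as a breadth-first walk over an inverted call graph. The `CALLS` map below is a made-up example for illustration; Kete's ingested graph holds real definitions and edges:

```python
from collections import deque

# Hypothetical call graph: caller -> callees.
CALLS = {
    "api.create_user": ["db.insert", "auth.hash_password"],
    "api.login":       ["auth.hash_password", "auth.issue_token"],
    "cli.seed":        ["db.insert"],
}

# Invert to callee -> callers so we can ask "who depends on this?"
CALLERS = {}
for caller, callees in CALLS.items():
    for callee in callees:
        CALLERS.setdefault(callee, []).append(caller)

def impact(symbol: str) -> dict:
    """Transitive callers of `symbol`, grouped by call depth."""
    seen, depths = {symbol}, {}
    queue = deque([(symbol, 0)])
    while queue:
        node, d = queue.popleft()
        for caller in CALLERS.get(node, []):
            if caller not in seen:
                seen.add(caller)
                depths.setdefault(d + 1, []).append(caller)
                queue.append((caller, d + 1))
    return depths

print(impact("auth.hash_password"))  # {1: ['api.create_user', 'api.login']}
```

The walk is pure graph traversal, which is why the answer comes back in milliseconds rather than after a chain of file reads.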
Instruction files are passive. They sit in context and hope the model pays attention. Kete makes rules active.
Here's how it works:
This is a pre-flight check, not documentation. The difference matters: documentation is read once and forgotten. A pre-flight check runs every time.
The system learns from your corrections.
When you fix a mistake the AI agent made, that correction becomes a high-confidence rule in the knowledge graph. Future prompts that touch the same area will have the rule injected automatically. The same mistake doesn't happen twice.
Static analysis tells you that function A calls function B. That's useful, but it's not enough for effective rule enforcement.
When your rule says "database migrations must be auto-generated", the system needs to connect "this prompt is about creating a migration" to "migration creation has constraints." That's a semantic link, not a keyword match.
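A toy contrast makes the gap concrete. The facts in `SEMANTIC_FACTS` below stand in for what the LLM layer might extract at ingest time; they are invented for this example:

```python
# Keyword matching misses the link: the prompt never says "migration".
prompt = "add a created_at column to the users table"
rule_keywords = {"migration", "alembic"}
assert not (set(prompt.split()) & rule_keywords)   # keyword match: no hit

# A semantic layer stores facts extracted at ingest time, e.g.
# "changing a table schema requires a migration". Lookup is then structural.
SEMANTIC_FACTS = {
    # intent -> constrained operation (hypothetical extracted facts)
    "modify table schema": "create database migration",
}
CONSTRAINTS = {
    "create database migration": "Migrations must be auto-generated.",
}

def rules_for(intent: str):
    op = SEMANTIC_FACTS.get(intent)
    return [CONSTRAINTS[op]] if op in CONSTRAINTS else []

print(rules_for("modify table schema"))  # ['Migrations must be auto-generated.']
```

The expensive inference (prompt-to-intent, intent-to-constraint) happens once at ingest; at prompt time the lookup is a plain dictionary hit.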
Kete uses a two-layer extraction approach. The structural layer is the backbone; the semantic layer is additive — use it when you want richer guardrails, skip it when you don't.
Primary
Deterministic AST parsing extracts definitions, imports, types, and call relationships. Free, instant, and verifiable from source. This is what you get on day one, and it already makes rule enforcement work.
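As an illustration of what the structural layer extracts, here is the same idea using Python's standard `ast` module. Kete's actual parser, supported languages, and fact schema may differ:

```python
import ast

SOURCE = '''
import os
from db import session

def save_user(user):
    validate(user)
    session.add(user)
'''

tree = ast.parse(SOURCE)
defs, imports, calls = [], [], []
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        defs.append(node.name)                       # definitions
    elif isinstance(node, ast.Import):
        imports.extend(a.name for a in node.names)   # plain imports
    elif isinstance(node, ast.ImportFrom):
        imports.append(node.module)                  # from-imports
    elif isinstance(node, ast.Call):
        f = node.func
        calls.append(f.id if isinstance(f, ast.Name) else ast.unparse(f))

print(defs, imports, calls)  # ['save_user'] ['os', 'db'] ['validate', 'session.add']
```

Everything here is deterministic and verifiable against the source, which is what makes the structural layer free to re-run and safe to trust.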
Optional
LLM analysis extracts what code does — what functions delegate to, what data they modify, what constraints they enforce, what patterns they follow. Runs in parallel and can be disabled for air-gapped or cost-sensitive setups.
Both layers produce structured facts stored in the same knowledge graph. Start with the structural layer; add the semantic layer when you want guardrails that reason about intent, not just shape.
Kete works as a CLI tool that integrates with your existing AI coding workflow.
$ kete ingest
Scans your codebase and builds the knowledge graph. Static extraction runs instantly; semantic enrichment runs in parallel via a fast, cheap LLM.
$ kete impact Agent.perceive
Query the knowledge graph directly. "What breaks if I change this function?" Returns affected files, callers by depth, and transitive dependencies — in milliseconds.
$ kete ask "how does the auth flow work?"
Ask questions about your codebase. Kete retrieves relevant facts from the graph, reads the actual source code for the most relevant definitions, and synthesizes an answer with citations back to file and line number.
Kete integrates with AI coding agents via hooks. When configured:
The knowledge graph is persistent across sessions. Rules, corrections, and structural knowledge survive session restarts — the agent starts informed, not from scratch.
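As one concrete example of the hook wiring: Claude Code exposes a UserPromptSubmit hook in its settings file that runs a command before each prompt is processed. The `kete inject` subcommand name below is an assumption for illustration, not a documented Kete command:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "kete inject" }
        ]
      }
    ]
  }
}
```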
Powered by the Taniwha Engine
Kete isn't a static index. For every repository it spins up five domain-specialist agents — engine, backend, frontend, world, and a meta-agent that knows Kete itself — each one a full instance of the Taniwha cognitive engine with its own belief graph, trust ledger, and working memory.
Around every query, each specialist runs the engine's three-part loop:
The loop is headless, so it adds no extra LLM calls per specialist. When the engine, backend, and kete-itself specialists all land on the same intent for a query, that's a consensus signal. When they split, the question is ambiguous and the answer reflects that.
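The consensus check itself can be sketched as a majority vote over the intents the specialists inferred. The function below is a simplified stand-in for the Taniwha engine's actual mechanism:

```python
from collections import Counter

def consensus(intents: dict) -> tuple:
    """Hypothetical consensus check across specialist agents.

    `intents` maps specialist name -> the intent it inferred for the query.
    Returns (intent, agreeing specialists) when a strict majority agrees,
    or (None, []) when the specialists split, i.e. the query is ambiguous.
    """
    counts = Counter(intents.values())
    top, n = counts.most_common(1)[0]
    if n <= len(intents) / 2:        # no strict majority: ambiguous
        return None, []
    return top, [s for s, i in intents.items() if i == top]

# Three specialists agree, one dissents: strong consensus signal.
print(consensus({
    "engine": "impact-analysis",
    "backend": "impact-analysis",
    "kete-itself": "impact-analysis",
    "frontend": "code-search",
}))  # ('impact-analysis', ['engine', 'backend', 'kete-itself'])
```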
Your codebase becomes a living graph that matures with use — not a snapshot that went stale the moment it was built.
Kete is valuable anywhere AI agents operate on codebases with rules that matter.
Enforce patterns your team has agreed on — database migrations must be auto-generated, API responses must follow a specific shape, certain modules must not import from certain others. Rules are checked before the agent writes code, not after.
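The import-boundary case above reduces to checking every edge in the graph against a list of forbidden module-prefix pairs. The `IMPORTS` edges and the `db.` / `core.` boundary below are invented for illustration:

```python
# Hypothetical import edges from the knowledge graph: module -> imports.
IMPORTS = {
    "web.views":   ["core.services", "web.forms"],
    "core.models": ["db.session"],
    "db.session":  ["core.models"],   # violates the boundary below
}

# Agreed rule: nothing in db.* may import from core.*
FORBIDDEN = [("db.", "core.")]

def boundary_violations():
    """Check every import edge against the forbidden-boundary rules."""
    return [
        (src, dst)
        for src, targets in IMPORTS.items()
        for dst in targets
        for bad_src, bad_dst in FORBIDDEN
        if src.startswith(bad_src) and dst.startswith(bad_dst)
    ]

print(boundary_violations())  # [('db.session', 'core.models')]
```

Because the edges are already in the graph, the check runs before any code is written, not as a post-hoc lint pass.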
Before the AI agent modifies a function, Kete shows every caller, every dependent, every file that will be affected. The agent starts with the full blast radius instead of discovering it file by file.
New team members (and their AI agents) can query the knowledge graph to understand how the codebase is structured, what depends on what, and what the established patterns are — before writing a single line.
In codebases where certain operations must never be run without specific safeguards, Kete provides a structural guarantee — not a hope that the model read the right paragraph in the instruction file.
Your instruction file is documentation. Kete turns it into enforcement.
Ārai (Māori: barrier, shield) — pronounced "ah-rye"
Ārai is the guardrail core of Kete, extracted as a standalone Apache-2.0 tool. Both share the same static extractor, the same rule model, and the same pre-flight injection approach. The difference is how much platform you want around it.
Open source · Apache 2.0 · Local
Pick Ārai if you want:
Platform · Semantic layer · Early access
Pick Kete if you also want:
Rule of thumb: start with Ārai. Upgrade to Kete when you want guardrails that reason about intent, or when you need the impact-analysis and query layers.
Kete is currently in active development and dogfooding. We're looking for teams with codebases where rules matter to help validate the approach.