Products

Three tools. One thesis.

AI agents that behave well over time need structured memory, not longer context windows. Our products apply that idea in different places — simulated populations, coding assistants, and the guardrail layer that makes either one trustworthy.

Cognitive engine for AI agents

Taniwha Engine

A four-layer cognitive architecture — scene tree, world memory, per-agent belief graphs, and composable decision procedures — that gives AI agents real beliefs, memory, trust, and identity. Agents form convictions, lose trust, consolidate memory overnight, and behave like individuals rather than chat turns.

  • Per-agent belief graphs with provenance and ambivalence
  • Working memory + sleep consolidation + forgetting
  • Trust/TSI per source, firewall gates, uncertainty (UQ) tracking
  • Event-seeded narrative pressure and group-mind crowd dynamics
  • DB-backed worlds, YAML-ingested content, live science dashboard
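To make the belief-graph idea concrete, here is a minimal sketch of a per-agent belief weighted by per-source trust. All names and fields here are illustrative assumptions, not the Taniwha Engine's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are hypothetical,
# not the Taniwha Engine's actual data model.
@dataclass
class Belief:
    claim: str            # the proposition the agent holds
    conviction: float     # 0.0-1.0 strength of belief
    ambivalence: float    # how strongly competing evidence pulls back
    sources: list[str] = field(default_factory=list)  # provenance chain

@dataclass
class Agent:
    name: str
    trust: dict[str, float] = field(default_factory=dict)  # per-source trust

    def hear(self, source: str, claim: str) -> Belief:
        # Weight a new claim by how much this agent trusts the source;
        # low trust leaves the agent ambivalent rather than convinced.
        weight = self.trust.get(source, 0.5)
        return Belief(claim=claim, conviction=weight,
                      ambivalence=1.0 - weight, sources=[source])

agent = Agent("kiri", trust={"radio": 0.9, "rumor": 0.2})
b = agent.hear("rumor", "the bridge is out")
print(round(b.conviction, 1))  # 0.2
```

The point of the sketch is the asymmetry: the same claim lands differently on different agents, because conviction is a function of the hearer's trust in the source, not of the claim itself.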

Runtime reliability for AI coding agents

Kete

A pre-computed structural map of your codebase plus runtime rule enforcement. Kete intercepts prompts, identifies relevant rules, and injects them at the top of context — so your AI agent follows the rules in hour twelve the same way it did in hour one.

  • Structural map of every definition, call, and import chain
  • Instruction-file rules extracted as typed, queryable constraints
  • Pre-flight rule injection via hooks and MCP
  • Optional semantic layer (LLM analysis of what code means)
  • Corrections persist across sessions as high-trust facts
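The pre-flight flow described above can be sketched in a few lines: match typed rules against the files a prompt touches, then pin the relevant ones at the top of context. The rule format and matching logic here are illustrative assumptions, not Kete's actual implementation:

```python
# Hypothetical sketch of pre-flight rule injection; the rule shape and
# path-prefix matching are illustrative, not Kete's real format.
RULES = [
    {"id": "no-raw-sql", "applies_to": "db/",
     "text": "Use the query builder, never raw SQL."},
    {"id": "tests-first", "applies_to": "src/",
     "text": "New modules need a matching test file."},
]

def inject_rules(prompt: str, touched_paths: list[str]) -> str:
    # Select rules whose scope matches any file the prompt touches,
    # then prepend them so they sit at the top of the context window.
    relevant = [r["text"] for r in RULES
                if any(p.startswith(r["applies_to"]) for p in touched_paths)]
    header = "\n".join(f"RULE: {t}" for t in relevant)
    return f"{header}\n\n{prompt}" if header else prompt

out = inject_rules("Add a users table migration", ["db/migrations/004.sql"])
print(out.splitlines()[0])  # RULE: Use the query builder, never raw SQL.
```

Because injection happens on every prompt rather than once at session start, the rules cannot drift out of the context window as the conversation grows.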

Open-source guardrails for AI coding agents

Ārai

The guardrail layer of Kete, extracted as a standalone Apache-2.0 tool. Ārai parses your instruction files into structured rules, runs as an MCP server so agents can author their own guardrails mid-session, and logs every firing locally so you can inspect which rules fired, on which tools, for which prompts. Everything runs on your machine; no hosted service required.

  • Static (AST/regex) code extraction — no LLM cost
  • Instruction files → structured, queryable rules
  • Pre-flight guardrail enforcement via Claude Code hooks
  • MCP server — agents register their own rules mid-session
  • Local audit log of every firing (JSONL, no network egress)
  • Apache 2.0 — free for internal and commercial use
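A local JSONL audit log of firings might look like the sketch below: one JSON object per line, appended on each firing. The field names are illustrative assumptions, not Ārai's actual log schema:

```python
import json
import time

# Illustrative only: these field names are hypothetical,
# not Ārai's real audit-log schema.
def log_firing(path: str, rule_id: str, tool: str, prompt_hash: str) -> None:
    entry = {
        "ts": time.time(),      # when the guardrail fired
        "rule": rule_id,        # which rule matched
        "tool": tool,           # which tool call it gated
        "prompt": prompt_hash,  # which prompt triggered it
    }
    # Append one JSON object per line: a JSONL file you can grep,
    # tail, or load line by line, with no network egress involved.
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_firing("arai-audit.jsonl", "no-force-push", "Bash", "sha256:ab12")
with open("arai-audit.jsonl") as f:
    last = json.loads(f.readlines()[-1])
print(last["rule"])  # no-force-push
```

Append-only JSONL keeps the log trivially inspectable with standard tools, which is the property that matters when the question is "what fired, and why?"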

How they relate

The Taniwha Engine is the research platform — simulated populations with real belief graphs, used to study how agents form convictions and lose trust over long horizons.

Kete applies the same architecture to a narrower problem: keeping an AI coding agent aligned with your codebase's rules for the duration of a session. It is a hosted platform with structural mapping plus optional semantic enrichment.

Ārai is the guardrail core of Kete, extracted as a standalone open-source tool. If you just need your instruction file to be enforced — no hosted service, no telemetry — start with Ārai.

Want to talk?

We're a small team and we read every message. Tell us what you're building.

Talk to us