Taniwha is a cognitive engine that gives AI agents real beliefs, memory, trust, and identity — so they behave like individuals, not chatbots. Everything emerges from a living graph, not a prompt.
AI agents that behave well over time need structured memory, not longer context windows. One cognitive engine powers all three tools below.
Cognitive engine for AI agents
The graph that gives agents beliefs, memory, trust, and identity. The foundation every other tool here sits on.
See how it works →
Runtime reliability for AI coding agents
Stops your coding agent from drifting off the rules in long sessions. Pre-computed codebase map plus rule enforcement at prompt time — powered by the engine above.
Read more →
Open-source guardrails for AI coding agents
The guardrail core of Kete, extracted as a standalone Apache-2.0 tool. Runs locally via hooks or MCP, no hosted service required.
Visit Ārai →
Taniwha is a general-purpose cognitive engine. To demonstrate it, we give it a theme, let it build agents with real minds, and watch a story emerge from their decisions — not from a script.
A single word — 'plague', 'deep sea', 'cold war' — seeds character and world generation. The engine creates agents with distinct personalities, builds the world they inhabit, and sets up a sequence of events calibrated to the theme.
Agents live through the scenario — forming beliefs, building and losing trust, making decisions from personality, not scripts. A director watches the arc and injects crises or ends the story when the moment is right. Emotional turning points are tracked automatically.
Every tick emits structured data — day chronicles, per-agent metrics, trust deltas, consolidation stats, falsifiable predictions against village baselines. The science dashboard reads it live; comic strips are one of several visualisations layered on top.
These comic strips are generated entirely by the engine as a showcase. Taniwha invents the agents, runs their minds through a scenario, selects the most dramatic moment, writes the dialogue, and draws the panels — with zero human direction.
Most AI generates text. Taniwha generates minds — and everything that comes with them.
"Most language models start fresh every conversation. Taniwha agents remember, scar, grieve, and die — and none of it is scripted."
For the technically curious: here is what makes Taniwha agents behave the way they do. Each mechanism below operates simultaneously, every turn.
Reading a lot is not the same as understanding. Agents only truly learn when connections between ideas strengthen faster than new information arrives.
Most AI systems count tokens. Taniwha measures whether new information is actually being integrated into an agent's worldview or just piling up. An agent can encounter hundreds of concepts and understand none of them. The Understanding Quotient only grows when ideas connect to what the agent already knows and cares about. Exposure without integration is noise.
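To make the distinction concrete, here is a minimal sketch of how an exposure-vs-integration metric could work. All names, structures, and the ratio formula are illustrative assumptions, not Taniwha's actual API.

```python
# Hypothetical sketch of an "Understanding Quotient": raw exposure does not
# raise it; only concepts that connect to what the agent already knows do.

class UnderstandingQuotient:
    def __init__(self):
        self.known = set()       # concepts already integrated
        self.exposed = 0         # raw concept encounters
        self.integrated = 0      # encounters that connected to prior knowledge

    def seed(self, concept):
        self.known.add(concept)

    def encounter(self, concept, related_to):
        """Register a concept; it only integrates if linked to known ideas."""
        self.exposed += 1
        if self.known & set(related_to):   # connects to the existing worldview?
            self.known.add(concept)
            self.integrated += 1
        # else: exposure without integration is noise -- no lasting trace

    @property
    def uq(self):
        return self.integrated / self.exposed if self.exposed else 0.0

agent = UnderstandingQuotient()
agent.seed("fishing")
agent.encounter("nets", related_to=["fishing"])            # integrates
agent.encounter("quantum theory", related_to=["physics"])  # exposure only
print(round(agent.uq, 2))  # 0.5: two encounters, one understood
```

The key property is that the score is a ratio, not a count: an agent can meet hundreds of concepts and still have a UQ near zero if none of them attach to its worldview.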
When something is too alien for an agent to process, it forgets the encounter — but the scar stays. Next time, it flinches faster without knowing why.
When a stimulus is too alien for an agent's identity to absorb, the firewall triggers. The concept is rejected and never becomes part of what the agent knows, but the encounter still leaves a scar. Repeated encounters provoke a faster recoil, without the agent ever knowing why. This is the prophetic pivot: belief is protected, identity persists, but exposure accumulates beneath the surface.
Before: gate intact
After (rejected): gate scarred
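A toy version of that firewall might look like the sketch below. The threshold, the scar counter, and the `recoil_speed` helper are all hypothetical names chosen for illustration.

```python
# Illustrative identity firewall: an alien stimulus is rejected (never
# learned), but each rejection leaves a scar that speeds future recoil.

class IdentityFirewall:
    ALIEN_THRESHOLD = 0.3   # resonance below this triggers rejection

    def __init__(self):
        self.beliefs = set()
        self.scars = {}      # stimulus -> rejection count (no content stored)

    def perceive(self, stimulus, resonance):
        if resonance < self.ALIEN_THRESHOLD:
            # Firewall fires: belief is protected, but the scar accumulates.
            self.scars[stimulus] = self.scars.get(stimulus, 0) + 1
            return "rejected"
        self.beliefs.add(stimulus)
        return "absorbed"

    def recoil_speed(self, stimulus):
        """Scarred stimuli provoke faster recoil, even though the concept
        itself was never absorbed into the agent's beliefs."""
        return 1 + self.scars.get(stimulus, 0)

fw = IdentityFirewall()
fw.perceive("the gods are dead", resonance=0.1)    # too alien: rejected
fw.perceive("the gods are dead", resonance=0.1)    # rejected again
print("the gods are dead" in fw.beliefs)           # False: never absorbed
print(fw.recoil_speed("the gods are dead"))        # 3: flinches faster now
```

The point of the two final prints: the agent has no memory of the content, yet reacts faster each time, which is exactly the "scar without knowledge" behaviour described above.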
Every agent tracks how much it trusts each source independently. Trust builds slowly and collapses fast.
Each agent maintains a per-source trust score. Concepts that resonate raise trust. Concepts that clash with the agent's identity erode it. At high trust, an agent accepts challenges to its worldview. At very low trust, the source is effectively silenced — even truths are ignored. Trust is not binary; it decays through absence, spikes on resonance, and rebuilds slowly through consistent interaction.
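The asymmetry described above (slow build, fast collapse, decay through absence) can be sketched in a few lines. The constants here are illustrative stand-ins, not Taniwha's tuned values.

```python
# Minimal per-source trust model: resonance nudges trust up slowly,
# clashes knock it down hard, and absence erodes it a little each turn.

class TrustLedger:
    GAIN, LOSS, DECAY = 0.05, 0.25, 0.01
    SILENCE_FLOOR = 0.1   # below this, even truths from the source are ignored

    def __init__(self):
        self.trust = {}   # source -> score in [0, 1], starting neutral at 0.5

    def interact(self, source, resonates):
        t = self.trust.get(source, 0.5)
        t = t + self.GAIN if resonates else t - self.LOSS  # slow up, fast down
        self.trust[source] = min(1.0, max(0.0, t))

    def tick(self):
        # Absence: every turn without interaction decays trust slightly.
        for s in self.trust:
            self.trust[s] = max(0.0, self.trust[s] - self.DECAY)

    def listens_to(self, source):
        return self.trust.get(source, 0.5) > self.SILENCE_FLOOR

ledger = TrustLedger()
for _ in range(4):
    ledger.interact("elder", resonates=True)     # four small gains
ledger.interact("stranger", resonates=False)     # one large collapse
print(round(ledger.trust["elder"], 2), round(ledger.trust["stranger"], 2))
```

Because the loss constant dwarfs the gain constant, a single betrayal undoes many interactions of rapport, which is what makes rebuilt trust feel earned.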
When mortality is enabled, agents have finite lifespans. Stress shortens them. Companionship extends them. They go through stages of awareness before death.
A finite lifespan budget depletes each turn. Stress and trauma accelerate it; isolation accelerates it further. Bonded companions slow it. Agents progress through five phases — ignorance, magical thinking, reveal, urgency, acceptance — before death. When the budget hits zero, the death protocol archives a final record, broadcasts legacy memories, and triggers grief in surviving agents.
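A depleting budget with stress, isolation, and companionship modifiers might be sketched like this. The drain rates and phase cutoffs are assumptions made for the example.

```python
# Toy lifespan budget: stress and isolation accelerate depletion, bonded
# companions slow it, and the agent moves through five awareness phases.

PHASES = ["ignorance", "magical thinking", "reveal", "urgency", "acceptance"]

class MortalAgent:
    def __init__(self, budget=100.0):
        self.budget = self.initial = budget
        self.alive = True

    def tick(self, stress=0.0, isolated=False, companions=0):
        drain = 1.0 + stress                  # stress accelerates depletion
        if isolated:
            drain += 0.5                      # isolation accelerates it further
        drain = max(0.1, drain - 0.2 * companions)  # bonds slow the drain
        self.budget -= drain
        if self.budget <= 0:
            self.alive = False                # the death protocol would fire here

    @property
    def phase(self):
        spent = 1 - self.budget / self.initial
        return PHASES[min(4, int(spent * 5))]

a = MortalAgent()
for _ in range(30):
    a.tick(stress=1.0, isolated=True)   # a hard, lonely life: 2.5 per turn
print(a.phase, a.alive)                 # urgency True
```

Note that the phase is derived from the fraction of budget spent, so two agents of the same age can sit in different phases purely because of how they have lived.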
Beliefs get stronger through repetition and weaker through neglect. Contradictions erode them. Memories that aren't revisited fade.
Every perceived stimulus creates or strengthens connections. Contradictions weaken them. Novel concepts leave no lasting trace until repeated and reinforced. Unused connections gradually fade — forgetting is not a bug, it is the architecture. Understanding only survives active reinforcement. Emotional experiences resist decay differently: trauma lingers while comfort fades, producing agents whose memories are shaped by what mattered, not just what happened.
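The reinforcement-and-decay loop, including the asymmetric decay of traumatic versus neutral memories, can be shown in miniature. All constants and structures here are illustrative.

```python
# Toy belief graph: repetition strengthens, neglect fades, and emotionally
# charged memories resist decay -- trauma lingers while neutral facts vanish.

class BeliefGraph:
    def __init__(self):
        self.strength = {}   # belief -> connection strength
        self.valence = {}    # belief -> emotional weight in [0, 1]

    def reinforce(self, belief, amount=0.2, trauma=0.0):
        self.strength[belief] = self.strength.get(belief, 0.0) + amount
        self.valence[belief] = max(self.valence.get(belief, 0.0), trauma)

    def tick(self):
        for b in list(self.strength):
            decay = 0.06 * (1 - self.valence[b])  # trauma slows decay
            self.strength[b] -= decay
            if self.strength[b] <= 0:
                del self.strength[b]              # forgetting by design
                del self.valence[b]

g = BeliefGraph()
g.reinforce("the river floods in spring")            # neutral fact
g.reinforce("the flood took my home", trauma=0.9)    # traumatic memory
for _ in range(4):
    g.tick()                                         # four turns of neglect
print("the river floods in spring" in g.strength)    # False: faded away
print("the flood took my home" in g.strength)        # True: trauma lingers
```

After the same four turns of neglect, the neutral fact is gone while the traumatic memory remains nearly intact, producing the "shaped by what mattered" effect described above.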
When two things can't both be true, the evidence decides which one survives. No rewriting, no overriding — the weaker belief fades.
Flat memory systems store everything and hope for the best. When an agent learns "the door is locked" and later observes "the door is open," most systems keep both. Taniwha detects the conflict automatically. The belief with weaker support decays; the stronger one persists. Agents don't carry contradictions forward — they resolve them through the weight of evidence. The same mechanism handles social conflicts: when two sources give incompatible accounts, the more trusted source wins. Over time, agents develop coherent worldviews rather than accumulating noise.
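A minimal sketch of evidence-weighted conflict resolution, using the door example above. The class, weights, and method names are hypothetical.

```python
# Sketch of contradiction resolution: when two beliefs cannot both be true,
# the one with weaker accumulated support decays; the stronger persists.

class ContradictionResolver:
    def __init__(self):
        self.support = {}     # belief -> accumulated evidence weight
        self.conflicts = []   # pairs that cannot both be true

    def observe(self, belief, weight=1.0):
        self.support[belief] = self.support.get(belief, 0.0) + weight

    def declare_conflict(self, a, b):
        self.conflicts.append((a, b))

    def resolve(self):
        # Nothing is rewritten or overridden: the weaker belief simply fades.
        for a, b in self.conflicts:
            weaker = a if self.support.get(a, 0) < self.support.get(b, 0) else b
            self.support.pop(weaker, None)
        self.conflicts.clear()

r = ContradictionResolver()
r.observe("the door is locked")               # one old observation
r.observe("the door is open", weight=2.0)     # fresher, stronger evidence
r.declare_conflict("the door is locked", "the door is open")
r.resolve()
print(sorted(r.support))                      # ['the door is open']
```

The same shape covers social conflict: weight observations by source trust, and the more trusted account automatically wins the resolution.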
When an agent is torn between two equally strong drives, it hesitates or freezes. This creates dramatic tension without any scripting.
When two opposing drives score within a threshold of each other (flee vs freeze vs investigate), the agent enters ambivalence. Mild ambivalence produces hesitation; strong ambivalence produces paralysis. This is not a failure state: it is how character, indecision, and dramatic conflict emerge naturally from the cognitive architecture, without any scripting.
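The threshold rule can be written as a tiny decision function. The band widths and drive names below are illustrative assumptions.

```python
# Sketch of ambivalence: the top two drive scores are compared, and a small
# gap produces hesitation or outright paralysis instead of a clean choice.

AMBIVALENCE_BAND = 0.15   # scores this close trigger hesitation
PARALYSIS_BAND = 0.05     # scores this close trigger paralysis

def choose(drives):
    """drives: dict of action -> urgency score in [0, 1]."""
    ranked = sorted(drives.items(), key=lambda kv: kv[1], reverse=True)
    (top, s1), (_, s2) = ranked[0], ranked[1]
    gap = s1 - s2
    if gap < PARALYSIS_BAND:
        return "paralysis"                  # strong ambivalence: freeze
    if gap < AMBIVALENCE_BAND:
        return f"hesitates, then {top}"     # mild ambivalence: delay
    return top                              # clear winner: act

print(choose({"flee": 0.80, "investigate": 0.30}))  # flee
print(choose({"flee": 0.62, "freeze": 0.52}))       # hesitates, then flee
print(choose({"flee": 0.61, "freeze": 0.60}))       # paralysis
```

Because the output depends only on the gap between competing drives, dramatic tension appears exactly when an agent's own history has made two pulls equally strong.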
pronounced tah-nee-fah
"(noun) water spirit, monster, dangerous water creature, powerful creature, chief, powerful leader, something or someone awesome — taniwha take many forms from logs to reptiles and whales and often live in lakes, rivers or the sea. They are often regarded as guardians by the people who live in their territory, but may also have a malign influence on human beings."
The name is not metaphorical. It is a design philosophy.
In Māori mythology, taniwha are primarily aquatic creatures — inhabiting the depths of lakes, rivers and the open sea. They are neither simply good nor simply dangerous; the same creature that acts as kaitiaki (guardian) for one community may be a predator to another. Their nature is singular and earned, shaped by the territory they inhabit and the history they accumulate.
Taniwha AI takes its name from that duality. The agents built on this platform are not interchangeable modules — they are distinct entities whose beliefs, trust, trauma and identity emerge from exposure to their world. A Taniwha agent that has lived through conflict responds differently to peace than one that has not. One that has been betrayed trusts more slowly. One that has witnessed death carries that knowledge forward, permanently.
Like the creatures of Māori tradition, each agent is something or someone awesome in their own right — not because we scripted it, but because the architecture made it inevitable.
Get in touch
Taniwha is a venture-stage platform building the foundational cognitive infrastructure for persistent, belief-driven AI agents. We are open to technical partnerships, aligned investment, and collaboration to help scale grounded agent architectures, including long-horizon simulations like Briarwatch.
Building with agents, LLMs, or cognitive architectures? We are looking for technical partners exploring persistent cognition in production systems.
hello@taniwha.ai →
Taniwha is building foundational cognitive infrastructure for the next generation of long-horizon AI agents. We welcome conversations with aligned investors and strategic partners.
hello@taniwha.ai →
Curious about the engine, interested in a use case, or just want to say hello? We are always happy to talk.
hello@taniwha.ai →