A different way of thinking about artificial intelligence
Most AI systems today are built to answer questions, predict outcomes, or optimise a task.
Taniwha is built to explore something else entirely:
How minds change over time.
Download technical overview (PDF) ↓
Questions about the architecture? hello@taniwha.ai
Taniwha is a cognitive architecture for simulating long-lived agents — entities that form beliefs, develop identity, build trust, refuse information, forget, adapt socially, and sometimes break.
It is not a prompt-response system. It is designed for correctness, inspectability, and persistence in long-running simulations. Taniwha treats cognition as a process, not an output.
In Taniwha, an agent's mind is represented as a living graph.
Understanding is not stored as isolated facts. It emerges from how strongly ideas connect to one another.
An agent can accumulate many concepts and still understand very little — or hold few ideas that are deeply integrated. What matters is structure, not volume.
The graph is persistent. It changes gradually. And it carries history.
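As an illustration only, the idea of "structure over volume" can be sketched in a few lines. Nothing here is Taniwha's actual API; the class and scoring rule are hypothetical, chosen to show how an agent with many concepts can still score lower than one with a few well-integrated ideas.

```python
# Hypothetical sketch: understanding measured by how strongly concepts
# connect, not by how many concepts exist.
class BeliefGraph:
    def __init__(self):
        self.concepts = set()
        self.links = {}  # undirected pair -> connection strength in [0, 1]

    def add_concept(self, name):
        self.concepts.add(name)

    def connect(self, a, b, strength):
        self.links[frozenset((a, b))] = strength

    def understanding(self):
        # Integration matters, not volume: total link strength,
        # normalised by the number of concepts held.
        if not self.concepts:
            return 0.0
        return sum(self.links.values()) / len(self.concepts)

# Many isolated facts score zero understanding...
sparse = BeliefGraph()
for c in ["a", "b", "c", "d", "e", "f"]:
    sparse.add_concept(c)

# ...while a few deeply integrated ideas score high.
dense = BeliefGraph()
for c in ["x", "y", "z"]:
    dense.add_concept(c)
dense.connect("x", "y", 0.9)
dense.connect("y", "z", 0.8)
dense.connect("x", "z", 0.7)
```

Here `sparse` holds six concepts and understands nothing; `dense` holds three and scores well.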
Each agent operates in cycles of perception followed by periods of consolidation.
When something new happens, the agent does not simply "record" it. Instead, the information is filtered through several layers before it can change the agent's graph.
At the center of this process is an identity gate — a cognitive firewall.
Sometimes information is too incompatible with an agent's identity to absorb.
When that happens, the identity gate rejects the information outright, before it can become part of what the agent knows.
But something still changes.
The rejection leaves a structural scar on the gate itself. Future encounters with similar information provoke faster resistance, even though the agent cannot explain why.
This is intentional.
Taniwha models a phenomenon common in human cognition: defensive refusal that protects identity without preserving the rejected belief. We sometimes call this trauma without memory.
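A minimal sketch of the idea, with entirely hypothetical names and thresholds (this is not Taniwha's implementation): incompatible information is rejected, and each rejection deepens a scar that makes future resistance faster, even when the rejected content itself is never stored.

```python
# Hypothetical sketch of an identity gate: incompatible information is
# rejected outright, but the rejection leaves a scar that lowers the
# effective compatibility of similar input in the future.
class IdentityGate:
    def __init__(self, identity_values, tolerance=0.5):
        self.identity = set(identity_values)
        self.tolerance = tolerance
        self.scars = {}  # topic -> accumulated resistance

    def compatibility(self, topic, claim_values):
        overlap = len(self.identity & set(claim_values))
        score = overlap / max(len(claim_values), 1)
        # Scars make resistance faster without preserving the claim.
        return score - self.scars.get(topic, 0.0)

    def admit(self, topic, claim_values):
        if self.compatibility(topic, claim_values) >= self.tolerance:
            return True
        # Rejected before it becomes knowledge; only the scar remains.
        self.scars[topic] = self.scars.get(topic, 0.0) + 0.2
        return False

gate = IdentityGate({"order", "duty"})
gate.admit("chaos-talk", ["chaos"])           # rejected; a scar forms
gate.admit("chaos-talk", ["chaos", "duty"])   # would have passed, but the
                                              # scar tips it into rejection
```

The second claim is half-compatible and would clear a fresh gate; the scar from the first rejection is what keeps it out. That is the "trauma without memory" pattern in miniature.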
Taniwha agents do not learn instantly.
Learning happens during consolidation phases, in which recent experience is gradually integrated into the graph.
Forgetting is not a failure mode. It is a core feature.
Beliefs that are not used or reinforced gradually decay. Understanding only survives active integration.
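A sketch of how decay-with-reinforcement might look, using invented rates and thresholds (the real consolidation process is richer than this): each cycle, reinforced beliefs strengthen, unused beliefs fade, and anything below a floor is pruned.

```python
# Hypothetical sketch of consolidation: beliefs decay each cycle unless
# reinforced, and beliefs that fall below a floor are forgotten entirely.
def consolidate(beliefs, reinforced, decay=0.8, floor=0.1):
    """beliefs: name -> strength; reinforced: names actively used this cycle."""
    updated = {}
    for name, strength in beliefs.items():
        if name in reinforced:
            # Active use integrates the belief more deeply.
            strength = min(1.0, strength + 0.1)
        else:
            strength *= decay  # unused understanding fades
        if strength >= floor:  # below the floor, the belief is pruned
            updated[name] = strength
    return updated

beliefs = {"fire burns": 0.9, "gossip heard once": 0.12}
for _ in range(3):
    beliefs = consolidate(beliefs, reinforced={"fire burns"})
```

After three cycles the reinforced belief is fully integrated and the unreinforced one is gone, not merely weak: forgetting as a feature, not a failure.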
Agents can form analogies.
They can detect structural similarities between situations, generate speculative bridges, and transfer expectations or rules from one domain to another when alignment is strong enough.
These inferences are imperfect by design — agents can over-generalise and must correct course through experience. This allows them to reason beyond what they've directly encountered, while still being grounded by what actually happens.
Inference in Taniwha is exploratory and abductive, not formally optimal. Agents can generate hypotheses that feel plausible, act on them, and revise over time.
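One way to picture a speculative bridge, with made-up situations and a made-up alignment measure (not Taniwha's actual mechanism): if enough of a familiar situation's relations reappear in a new one, the familiar rules are transferred as hypotheses, tagged with how strong the alignment was.

```python
# Hypothetical sketch of analogical transfer: shared structure licenses
# a speculative bridge; weak alignment licenses nothing.
def structural_alignment(known, novel):
    """Fraction of the known situation's relations found in the novel one."""
    shared = set(known) & set(novel)
    return len(shared) / max(len(known), 1)

def transfer_expectations(known_rules, known_relations, novel_relations,
                          threshold=0.6):
    score = structural_alignment(known_relations, novel_relations)
    if score < threshold:
        return []  # alignment too weak to build a bridge
    # Transferred rules are hypotheses, not certainties: they carry
    # their alignment score and must survive contact with experience.
    return [(rule, score) for rule in known_rules]

market = ["buyers compete", "prices signal scarcity", "trust lowers cost"]
auction = ["buyers compete", "prices signal scarcity", "time pressure"]
rules = transfer_expectations(["expect haggling"], market, auction)
```

The transferred rule may be an over-generalisation; that is the point. The agent acts on it, and correction comes from what actually happens.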
Identity is not a label in Taniwha. It is a structure.
Agents maintain identity lineage paths that influence what they accept, what they reject, and what they are willing to share.
Trust is not a single score. It is layered, contextual, and shaped by experience.
Importantly, disclosure decisions are made before communication happens. An agent may simply never offer certain knowledge to another agent, based on identity compatibility and trust.
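A toy illustration of layered trust and pre-communication disclosure, with hypothetical names and numbers throughout: trust is kept per agent and per context, and the disclosure decision happens before any message exists.

```python
# Hypothetical sketch: trust is layered by context, and disclosure is
# decided before communication happens.
class TrustModel:
    def __init__(self):
        self.layers = {}  # (other_agent, context) -> trust in [0, 1]

    def update(self, other, context, delta):
        key = (other, context)
        new = self.layers.get(key, 0.5) + delta
        self.layers[key] = max(0.0, min(1.0, new))

    def would_disclose(self, other, context, sensitivity):
        # Knowledge is simply never offered unless trust in this
        # specific context exceeds the knowledge's sensitivity.
        return self.layers.get((other, context), 0.5) > sensitivity

trust = TrustModel()
trust.update("kea", "trade", +0.3)    # a good trading history...
trust.update("kea", "secrets", -0.3)  # ...but a betrayed confidence
```

The same agent can be trusted with goods and not with secrets; the other party never learns what was withheld.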
Taniwha includes a built-in theory of mind system.
Agents observe the behaviour of others over time. As evidence accumulates, they form portraits — persistent beliefs about another agent's tendencies, intentions, or reliability.
These portraits influence trust, disclosure, and how agents choose to interact.
Social understanding is not assumed. It is constructed.
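A sketch of evidence accumulating into a portrait, with a hypothetical class and a deliberately simple estimator (not the system's actual model): repeated observations pull a neutral prior toward what the other agent has actually done.

```python
# Hypothetical sketch of a theory-of-mind portrait: observations of
# another agent accumulate into a persistent belief about a tendency.
class Portrait:
    def __init__(self, subject):
        self.subject = subject
        self.evidence = {}  # tendency -> list of 0/1 outcomes

    def observe(self, tendency, confirmed):
        self.evidence.setdefault(tendency, []).append(1 if confirmed else 0)

    def belief(self, tendency, prior=0.5):
        # A neutral prior, pulled toward the observed frequency
        # as evidence accumulates.
        obs = self.evidence.get(tendency, [])
        if not obs:
            return prior
        return (prior + sum(obs)) / (1 + len(obs))

portrait = Portrait("weka")
for kept_promise in (True, True, False, True):
    portrait.observe("reliable", kept_promise)
```

With no evidence, the belief stays at the prior; social understanding is constructed, not assumed.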
Agents act through a tactical planner.
Rather than pursuing rigid goal hierarchies, they balance competing motivations against the opportunities and constraints of the moment.
They can navigate, gather information, react to events, or act directly.
When actions are blocked by trust, identity, or context, the refusal is defensive: it feeds back into motivation and trust, rather than prompting curiosity by default.
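A minimal sketch of this kind of selection, with invented candidates and weights (the tactical planner itself is more involved): each candidate action is scored against current motivations, and blocked actions are refused rather than forced.

```python
# Hypothetical sketch of tactical action selection: score candidates
# against motivations; blocked actions are never even scored.
def choose_action(candidates, motivations, blocked):
    """candidates: name -> {motive: contribution}; motivations: motive -> weight."""
    best, best_score = None, float("-inf")
    for name, contributions in candidates.items():
        if name in blocked:
            continue  # a defensive refusal shapes what gets considered
        score = sum(motivations.get(m, 0.0) * v
                    for m, v in contributions.items())
        if score > best_score:
            best, best_score = name, score
    return best

candidates = {
    "share_map": {"social": 0.8, "safety": -0.2},
    "explore": {"curiosity": 0.9},
    "wait": {"safety": 0.5},
}
motivations = {"social": 0.2, "curiosity": 0.7, "safety": 0.6}
action = choose_action(candidates, motivations, blocked={"share_map"})
```

Change the motivation weights and the same agent, in the same situation, behaves differently: an agent driven purely by safety will wait.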
Taniwha agents exist in time.
They can be configured with finite lifespans that respond to how they live.
Stress, isolation, and repeated defensive responses can shorten an agent's life. Stable relationships and reinforcement can extend it.
When an agent dies, it leaves behind structured traces of what it learned and how it changed.
Ending is treated as a first-class phenomenon.
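As a purely illustrative sketch (hypothetical class, hypothetical rates): stress drains remaining life faster, stable bonds slow the drain, and death produces a structured trace rather than silent disappearance.

```python
# Hypothetical sketch of a finite lifespan: stress shortens life,
# stable relationships extend it, and ending leaves a trace.
class Lifespan:
    def __init__(self, remaining=100):
        self.remaining = remaining
        self.trace = None

    def tick(self, stress=0, stable_bonds=0):
        # Each cycle costs one unit, plus stress, minus stable bonds.
        self.remaining -= 1 + stress - stable_bonds
        return self.remaining > 0

    def die(self, learned):
        # Ending is first-class: what was learned is preserved, not lost.
        self.trace = {"learned": sorted(learned), "lived": True}

agent = Lifespan(remaining=5)
while agent.tick(stress=1):   # a stressed agent burns through its span
    pass
agent.die(learned={"fire burns", "weka is reliable"})
```

The trace is what outlives the agent: a structured record of what it learned and how it changed.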
Taniwha runs in two primary modes:
Multi-agent simulations with space, time, and social interaction, where agents form relationships and shared history.
A controlled single-agent environment for probing internal dynamics such as identity gating, trust evolution, consolidation, and forgetting.
Both modes run on the same underlying engine.
Taniwha is a general-purpose cognitive engine. Here are some of the domains where persistent, belief-driven agents offer something that stateless AI cannot.
Simulate realistic human-like agents for training environments — emergency response, corporate negotiation, medical triage. Because agents remember prior interactions and develop opinions, trainees face unpredictable, evolving scenarios rather than scripted ones.
Build assistants that genuinely learn your preferences over weeks and months — not by storing a list of rules, but by forming beliefs about what you value and adapting when those beliefs are challenged. Trust builds naturally with consistent use.
NPCs that hold grudges, form alliances, and change their minds. Characters in interactive fiction that remember what you said three chapters ago and act on it. The cognitive engine provides the personality — your game provides the world.
Model how beliefs spread through populations, how trust erodes under misinformation, or how group dynamics shift over time. Every cognitive step is logged and auditable — useful for research that needs to show its working.
Deploy agents that watch data streams, build beliefs about what is normal, and flag anomalies based on evolving understanding — not static rules. They get better at their job the longer they run, and can explain why they flagged something.
Tutoring agents that understand where a student is struggling — not from a test score, but from observing how their beliefs about a subject evolve. The tutor adapts its approach based on what seems to stick and what keeps fading.
Taniwha is an attempt to take cognition seriously, including its limits.
Want to see the technical detail?
Read the deep dive