A different way of thinking about artificial intelligence

Taniwha, Explained

Most AI systems today are built to answer questions, predict outcomes, or optimise a task.

Taniwha is built to explore something else entirely:

How minds change over time.

Download technical overview (PDF) ↓

Questions about the architecture? hello@taniwha.ai

Taniwha is a cognitive architecture for simulating long-lived agents — entities that form beliefs, develop identity, build trust, refuse information, forget, adapt socially, and sometimes break.

It is not a prompt-response system. It is designed for correctness, inspectability, and persistence in long-running simulations. Taniwha treats cognition as a process, not an output.

The core idea: the graph is the mind

In Taniwha, an agent's mind is represented as a living graph.

  • Nodes represent beliefs, motivations, memories, identities, and experiences.
  • Edges represent relationships: support, contradiction, influence, trust, resemblance.

Understanding is not stored as isolated facts. It emerges from how strongly ideas connect to one another.

An agent can accumulate many concepts and still understand very little — or hold few ideas that are deeply integrated. What matters is structure, not volume.

The graph is persistent. It changes gradually. And it carries history.
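As a rough sketch of the idea, here is what a minimal belief graph might look like. The class and method names (`Mind`, `relate`, `integration`) are illustrative assumptions, not Taniwha's actual API; the point is that "understanding" is measured by connection strength, not node count.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str  # e.g. "belief", "memory", "motivation", "identity"

@dataclass
class Mind:
    nodes: dict = field(default_factory=dict)   # label -> Node
    edges: dict = field(default_factory=dict)   # (a, b) -> (relation, weight)

    def add(self, label, kind="belief"):
        self.nodes[label] = Node(label, kind)

    def relate(self, a, b, relation, weight):
        # Relations such as "support", "contradiction", "influence", "trust"
        self.edges[(a, b)] = (relation, weight)

    def integration(self, label):
        """How strongly an idea connects: structure, not volume."""
        return sum(w for (x, y), (_, w) in self.edges.items() if label in (x, y))

mind = Mind()
mind.add("fire is dangerous")
mind.add("touched stove, got burned", kind="memory")
mind.relate("touched stove, got burned", "fire is dangerous", "support", 0.8)
print(mind.integration("fire is dangerous"))  # 0.8
```

An agent could hold hundreds of isolated nodes and still score low on `integration` for all of them — which is exactly the "many concepts, little understanding" case described above.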

How Taniwha agents perceive the world

Each agent operates in cycles of perception followed by periods of consolidation.

When something new happens, the agent does not simply "record" it. Instead, the information is filtered through several layers:

  • What the agent already believes
  • Who the information is coming from
  • How familiar or trusted the source is
  • What the agent cares about
  • Whether the information fits the agent's identity

At the center of this process is an identity gate — a cognitive firewall.

Refusal, identity, and "trauma without memory"

Sometimes information is too incompatible with an agent's identity to absorb.

When that happens, the identity gate rejects the information outright: it is turned away before it ever becomes part of what the agent knows.

But something still changes.

The rejection leaves a structural scar on the gate itself. Future encounters with similar information provoke faster resistance, even though the agent cannot explain why.

This is intentional.

Taniwha models a phenomenon common in human cognition: defensive refusal that protects identity without preserving the rejected belief. We sometimes call this trauma without memory.
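The gate-and-scar mechanism can be sketched in a few lines. Everything here is a hypothetical simplification — identity as a feature vector, compatibility as a dot product — chosen only to show the shape of the behaviour: rejected content is never stored, yet each rejection raises resistance to similar input.

```python
class IdentityGate:
    """Toy identity gate: admits information compatible with identity,
    rejects the rest, and scars on each rejection."""

    def __init__(self, identity_vector, threshold=0.0):
        self.identity = identity_vector
        self.threshold = threshold
        self.scars = []  # directions of past rejections; no content is kept

    def _dot(self, a, b):
        return sum(x * y for x, y in zip(a, b))

    def admit(self, info_vector):
        score = self._dot(self.identity, info_vector)
        # Prior scars add resistance to anything resembling them,
        # even though the rejected belief itself was never absorbed.
        for scar in self.scars:
            score -= max(0.0, self._dot(scar, info_vector))
        if score < self.threshold:
            self.scars.append(info_vector)  # "trauma without memory"
            return False
        return True

gate = IdentityGate(identity_vector=[1.0, 0.0])
print(gate.admit([0.9, 0.1]))    # True: compatible information is absorbed
print(gate.admit([-1.0, 0.2]))   # False: rejected, and a scar is left
print(gate.admit([0.05, 1.0]))   # False: would have passed before the scar
```

The third input is borderline-compatible on its own, but because it resembles the earlier rejection, the scarred gate now refuses it — resistance without a remembered reason.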

Learning, forgetting, and consolidation

Taniwha agents do not learn instantly.

Learning happens during consolidation phases, where the agent:

  • Commits selected short-term experiences into longer-term structure
  • Detects recurring patterns and crystallises them into stable beliefs
  • Weakens or removes beliefs that are no longer reinforced
  • Rebalances motivation and exploratory pressure

Forgetting is not a failure mode. It is a core feature.

Beliefs that are not used or reinforced gradually decay. Understanding only survives active integration.
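A consolidation pass with decay might look like the following sketch. The decay rate, pruning threshold, and reinforcement bonus are invented constants for illustration; the mechanism — reinforced beliefs strengthen, unreinforced ones fade and are eventually dropped — is the point.

```python
DECAY = 0.9          # per-cycle retention for unreinforced beliefs
PRUNE_BELOW = 0.2    # beliefs weaker than this are forgotten outright

def consolidate(beliefs, reinforced):
    """beliefs: dict of label -> weight; reinforced: labels used this cycle."""
    out = {}
    for label, w in beliefs.items():
        # Reinforced beliefs strengthen (capped); the rest decay.
        w = min(1.0, w + 0.1) if label in reinforced else w * DECAY
        if w >= PRUNE_BELOW:
            out[label] = w   # weak, unreinforced beliefs never make it here
    return out

beliefs = {"the market opens at dawn": 0.9, "storms come from the west": 0.25}
for _ in range(3):  # three cycles; only the first belief keeps being used
    beliefs = consolidate(beliefs, reinforced={"the market opens at dawn"})
print(beliefs)  # the unreinforced belief has decayed away entirely
```

After three cycles the weak, unused belief drops below the pruning threshold and disappears — forgetting as a feature, not a fault.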

Analogy and inference

Agents can form analogies.

They can detect structural similarities between situations, generate speculative bridges, and transfer expectations or rules from one domain to another when alignment is strong enough.

These inferences are imperfect by design — agents can over-generalise and must correct course through experience. Analogy lets them reason beyond what they've directly encountered, while keeping them grounded by what actually happens.

Inference in Taniwha is exploratory and abductive, not formally optimal. Agents can generate hypotheses that feel plausible, act on them, and revise over time.
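A deliberately crude sketch of analogical transfer: represent two situations as sets of structural features, and copy an expectation across only when the overlap clears a threshold. The features, threshold, and function name are assumptions for illustration only.

```python
def transfer(source, target, expectations, threshold=0.5):
    """Borrow expectations from source when target shares enough structure."""
    shared = len(source & target) / len(source | target)  # Jaccard overlap
    return list(expectations) if shared >= threshold else []

fire = {"emits heat", "spreads", "consumes fuel"}
rumour = {"spreads", "consumes fuel", "starts small"}  # "fuel" = attention

# The alignment is strong enough, so the expectation transfers.
print(transfer(fire, rumour, ["hard to stop once large"]))
```

Note that the transferred expectation is a speculative bridge, not a proven fact — the agent may be over-generalising, and only subsequent experience will confirm or correct it.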

Identity, trust, and social understanding

Identity is not a label in Taniwha. It is a structure.

Agents maintain identity lineage paths that influence what they accept, what they reject, and what they are willing to share.

Trust is not a single score. It is layered, contextual, and shaped by experience.

Importantly, disclosure decisions are made before communication happens. An agent may simply never offer certain knowledge to another agent, based on identity compatibility and trust.
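The layered-trust and up-front-disclosure ideas can be sketched as follows. The ledger keys, update rule, and thresholds are hypothetical; what matters is that trust is tracked per counterpart *and* per context, and that the disclosure decision happens before any message exists.

```python
class TrustLedger:
    def __init__(self):
        self.layers = {}  # (who, context) -> score in [0, 1]

    def observe(self, who, context, outcome, rate=0.3):
        """Nudge contextual trust toward an observed outcome (0 bad, 1 good)."""
        key = (who, context)
        prior = self.layers.get(key, 0.5)
        self.layers[key] = prior + rate * (outcome - prior)

    def would_disclose(self, who, context, sensitivity):
        # Decided before communication: below this bar, the knowledge
        # is simply never offered to the other agent.
        return self.layers.get((who, context), 0.5) >= sensitivity

ledger = TrustLedger()
ledger.observe("kiri", "trade", outcome=1.0)    # kept a bargain
ledger.observe("kiri", "secrets", outcome=0.0)  # leaked something once
print(ledger.would_disclose("kiri", "trade", sensitivity=0.6))    # True
print(ledger.would_disclose("kiri", "secrets", sensitivity=0.6))  # False
```

The same agent is trusted with trade talk but not with secrets — trust as a layered, contextual structure rather than a single score.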

Understanding other agents: portraits

Taniwha includes a built-in theory of mind system.

Agents observe the behaviour of others over time. As evidence accumulates, they form portraits — persistent beliefs about another agent's tendencies, intentions, or reliability.

These portraits influence:

  • Interpretation of future actions
  • Expectations during interaction
  • Willingness to cooperate or resist

Social understanding is not assumed. It is constructed.
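A portrait can be sketched as evidence that accumulates into a persistent impression. The class shape and the "three observations before judging" rule are illustrative assumptions, but they show the core move: no impression is assumed up front, and one data point is not a portrait.

```python
from collections import Counter

class Portrait:
    def __init__(self, subject):
        self.subject = subject
        self.evidence = Counter()  # observed behaviour tags -> counts

    def observe(self, tag):
        self.evidence[tag] += 1

    def tendency(self):
        """Dominant impression, once enough evidence has accumulated."""
        if sum(self.evidence.values()) < 3:
            return "unknown"
        tag, _ = self.evidence.most_common(1)[0]
        return tag

p = Portrait("tama")
p.observe("cooperative")
print(p.tendency())  # "unknown": a single observation is not a portrait
p.observe("cooperative")
p.observe("evasive")
print(p.tendency())  # "cooperative": the dominant pattern so far
```

The resulting tendency then colours how future actions are interpreted — a "cooperative" portrait and an "evasive" one lead to very different readings of the same ambiguous move.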

Planning and action

Agents act through a tactical planner.

Rather than pursuing rigid goal hierarchies, they balance:

  • Long-term background pressures
  • Short-term tactical concerns
  • Immediate operational actions

They can navigate, gather information, react to events, or act directly.
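A minimal sketch of the balancing act, with invented weights and action names: each candidate action is scored against the three layers of pressure, and the winner is whatever best serves the blend, not whatever sits atop a rigid goal hierarchy.

```python
def choose_action(candidates, weights=(0.2, 0.3, 0.5)):
    """candidates: list of (name, background, tactical, immediate) scores.
    Immediate pressures weigh most, but long-term ones never vanish."""
    def utility(candidate):
        _, background, tactical, immediate = candidate
        wb, wt, wi = weights
        return wb * background + wt * tactical + wi * immediate
    return max(candidates, key=utility)[0]

candidates = [
    ("explore the ridge", 0.9, 0.2, 0.1),  # serves long-term curiosity
    ("barter for food",   0.1, 0.8, 0.4),  # short-term tactical concern
    ("flee the fire",     0.0, 0.3, 1.0),  # immediate operational threat
]
print(choose_action(candidates))  # "flee the fire"
```

Shifting the weights shifts the agent's character: a more contemplative agent might weight background pressure higher and wander off to the ridge even while hungry.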

When actions are blocked — by trust, identity, or context — the refusal is defensive. It shapes future behaviour through shifts in motivation and trust, rather than prompting curiosity by default.

Time, development, and mortality

Taniwha agents exist in time.

They can be configured with:

  • Developmental stages
  • Shifting motivations
  • Finite lifespans (when mortality is enabled)

Stress, isolation, and repeated defensive responses can shorten an agent's life. Stable relationships and reinforcement can extend it.

When an agent dies, it leaves behind structured traces of what it learned and how it changed.

Ending is treated as a first-class phenomenon.

How Taniwha is used

Taniwha runs in two primary modes:

Persistent worlds

Multi-agent simulations with space, time, and social interaction, where agents form relationships and shared history.

Cognitive test harness

A controlled single-agent environment for probing internal dynamics such as identity gating, trust evolution, consolidation, and forgetting.

Both modes run on the same underlying engine.

What would I use it for?

Taniwha is a general-purpose cognitive engine. Here are some of the domains where persistent, belief-driven agents offer something that stateless AI cannot.

Training and simulation

Simulate realistic human-like agents for training environments — emergency response, corporate negotiation, medical triage. Because agents remember prior interactions and develop opinions, trainees face unpredictable, evolving scenarios rather than scripted ones.

Long-running digital assistants

Build assistants that genuinely learn your preferences over weeks and months — not by storing a list of rules, but by forming beliefs about what you value and adapting when those beliefs are challenged. Trust builds naturally with consistent use.

Interactive storytelling and games

NPCs that hold grudges, form alliances, and change their minds. Characters in interactive fiction that remember what you said three chapters ago and act on it. The cognitive engine provides the personality — your game provides the world.

Research and behavioural modelling

Model how beliefs spread through populations, how trust erodes under misinformation, or how group dynamics shift over time. Every cognitive step is logged and auditable — useful for research that needs to show its working.

Autonomous monitoring and decision-making

Deploy agents that watch data streams, build beliefs about what is normal, and flag anomalies based on evolving understanding — not static rules. They get better at their job the longer they run, and can explain why they flagged something.

Education and adaptive tutoring

Tutoring agents that understand where a student is struggling — not from a test score, but from observing how their beliefs about a subject evolve. The tutor adapts its approach based on what seems to stick and what keeps fading.

What Taniwha is — and is not

Taniwha is:

  • A cognitive engine for building agents that develop over time
  • A system that treats failure, refusal, and forgetting as meaningful
  • A system where identity shapes how agents learn, trust, and decide

Taniwha is not:

  • A prompt-response system — agents carry persistent state between every interaction
  • A general problem solver
  • An optimisation engine
  • A claim about human-level intelligence

It is an attempt to take cognition seriously — including its limits.

Want to see the technical detail?

Read the deep dive