Aquin is the research company using interpretability to design intelligence.

Models today are largely black boxes. We can prompt them, fine-tune them, and add guardrails, but our understanding of what they actually learned and why they behave the way they do remains limited.

[Interactive demo: the prompt "Eiffel Tower" flows through transformer layers L0–L15 to the output "Paris"; tap a node to edit that layer.]

Devtools for LLMs

SFT, LoRA, QLoRA — every mainstream method treats the model as a black box. You adjust data, tweak hyperparameters, hope the right behaviour emerges, and have no real idea what changed inside. When something breaks, you retrain from scratch. When someone asks you to explain a decision, you can't.

They're all just hammers. None of them let you look inside.

[Diagram: the model as you know it (SFT, LoRA, QLoRA) vs. what Aquin adds (inspect, locate, edit, no retraining), over an Input → Hidden → Output network.]

these are the weights — Aquin lets you see exactly what each one learned, where behaviour lives, and edit it directly

inspect

See inside any open-source model. Which layers store what, how specific weights connect to specific behaviours. Like inspect element, but for LLMs.

[Diagram: layers 1 through N, with each weight → behaviour mapped.]
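Aquin's own inspection API isn't shown on this page, so the snippet below is only a minimal sketch of the underlying idea, using plain PyTorch forward hooks on an open checkpoint (GPT-2 via Hugging Face transformers). The model choice and the per-layer norm printout are illustrative assumptions, not Aquin's interface.

```python
# Minimal sketch: capture every layer's hidden state with forward hooks,
# the raw material for mapping layers and weights to behaviours.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

activations = {}

def capture(layer_idx):
    def hook(module, inputs, output):
        # output[0] is the block's hidden state: (batch, seq, hidden)
        activations[layer_idx] = output[0].detach()
    return hook

# Register one hook per transformer block.
for i, block in enumerate(model.transformer.h):
    block.register_forward_hook(capture(i))

inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

for i, h in activations.items():
    print(f"layer {i:2d}: hidden-state norm {h.norm().item():.1f}")
```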

locate + edit

Find the exact weight responsible for a behaviour. Edit it directly. No retraining, no fine-tuning, no compute wasted. Based on ROME — rank-one model editing.

[Diagram: from all weights down to the one weight that matters.]
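ROME is published work (Meng et al., "Locating and Editing Factual Associations in GPT"), and its core move is a rank-one update: one outer product added to an MLP weight so that a chosen key vector maps to a new value. The sketch below makes that concrete under a simplifying assumption: real ROME scales the update by a covariance statistic estimated over many keys, which is replaced here by the identity to keep the example short.

```python
# Hedged sketch of a rank-one edit in the spirit of ROME.
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_new: torch.Tensor) -> torch.Tensor:
    """Return W' = W + delta with W' @ k == v_new, where delta has rank one.

    W:     (d_out, d_in) MLP projection weight
    k:     (d_in,) key vector selecting the behaviour (found by tracing)
    v_new: (d_out,) value vector encoding the edited association
    """
    residual = v_new - W @ k                    # what W currently gets wrong
    delta = torch.outer(residual, k) / (k @ k)  # rank-one correction
    return W + delta

# Toy check: after the edit, k maps exactly to v_new.
d_out, d_in = 8, 4
W = torch.randn(d_out, d_in)
k = torch.randn(d_in)
v_new = torch.randn(d_out)
assert torch.allclose(rank_one_edit(W, k, v_new) @ k, v_new, atol=1e-4)
```

Because the correction has rank one, any input direction orthogonal to k passes through unchanged, which is why the edit needs no retraining of the rest of the model.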

AMF

Aquin Model Format. A new weight format that stores behavioural metadata alongside the weights themselves. Models that are inspectable and editable by design, not as an afterthought.

[Diagram: weights + behavioural metadata (what it learned, where it lives, why it fires) = .amf]
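The .amf specification isn't published on this page, so the snippet below only illustrates the kind of sidecar metadata the format is described as carrying. Every field name here is a hypothetical placeholder, not the real schema.

```python
# Hypothetical sketch of behavioural metadata stored alongside weights.
import json

amf_metadata = {
    "model": "gpt2",                                   # placeholder
    "annotations": [
        {
            "behaviour": "capital-of relation",        # what it learned
            "location": {"layer": 9, "module": "mlp"}, # where it lives
            "evidence": "causal tracing + patching",   # why it fires
        }
    ],
}
print(json.dumps(amf_metadata, indent=2))
```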

aquin — use cases

Aquin is for every team.

| segment | the problem today | what aquin does | outcome |
| --- | --- | --- | --- |
| ML engineers | Retrain from scratch every time something breaks. No idea what changed or why. | Locate the exact weight causing the problem. Edit it directly without retraining. | fix in seconds · 0 retraining runs |
| AI researchers | Models reason one thing, say another. No way to verify, isolate, or prove it. | Causal tracing and activation patching (sketched below the table). Map exactly which circuits produce which behaviours. | real visibility · first public LLM debugger |
| startups | Deploying fine-tuned models with no way to audit what they learned or what broke. | Inspect any checkpoint. See what a fine-tune changed, what it broke, and why — before it ships. | ship with confidence · no surprise regressions |
| universities | Interpretability research is expensive, fragmented, and hard to reproduce. | A standard inspection layer. Run experiments, share results, reproduce on any open model. | research platform · built on monosemanticity |
| compliance vendors | The EU AI Act, AIDA, and the NIST AI RMF demand explainability, and nothing delivers it. | Show what a model learned and where a behaviour lives, and produce an audit trail for regulators. | compliance-ready · EU AI Act: Aug 2026 |
| AI consultancies | Building and deploying models for clients with no way to explain what they do or why. | Show clients exactly what a model learned. Demonstrate auditability. Reduce liability. | explainable delivery · ready for non-technical clients |
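For the causal tracing and activation patching referenced in the AI researchers row, here is a minimal sketch on an open model: cache one layer's hidden state from a clean prompt, splice it into a run on a corrupted prompt, and check whether the clean answer's logit recovers. The prompts, the patched layer, and the last-token position are illustrative assumptions, not Aquin's tooling.

```python
# Hedged sketch of activation patching with PyTorch forward hooks.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # which block to patch; chosen arbitrarily for this sketch

clean = tok("The Eiffel Tower is located in", return_tensors="pt")
corrupt = tok("The Colosseum is located in", return_tensors="pt")
paris_id = tok(" Paris")["input_ids"][0]  # first token id of " Paris"

cache = {}

def save_hook(module, inputs, output):
    cache["clean"] = output[0].detach()  # (batch, seq, hidden)

def patch_hook(module, inputs, output):
    patched = output[0].clone()
    patched[:, -1, :] = cache["clean"][:, -1, :]  # swap in the clean state
    return (patched,) + output[1:]

# 1. Clean run: record the hidden state at the chosen layer.
handle = model.transformer.h[LAYER].register_forward_hook(save_hook)
with torch.no_grad():
    model(**clean)
handle.remove()

# 2. Corrupted run with the clean state patched in at the same layer.
handle = model.transformer.h[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(**corrupt).logits
handle.remove()

print("logit for ' Paris' after patching:", logits[0, -1, paris_id].item())
```

If the Paris logit jumps when a particular layer is patched, that layer causally carries the fact; sweeping LAYER over all blocks is the basic causal-tracing loop.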

we didn't design these networks. we grew them. aquin is how we finally look inside.

inspect · locate · edit

Not sure if Aquin is right for you?
