archon-memory-core vs Mem0

Mem0 is the closest competitor to archon-memory-core — both are Apache 2.0 Python libraries that solve durable memory for LLM agents. The real difference is where contradiction resolution happens: write-time (Mem0) vs retrieval-time (archon-memory-core).

Short answer: If you want an LLM to actively merge and reconcile memories as they're written, Mem0 is mature and battle-tested. If you want write-time to be cheap and deterministic and contradiction handling to live in the retriever, use archon-memory-core.

Core design difference

Mem0 calls an LLM on the write path. When a new memory arrives, Mem0 asks a model whether it's new information, an update to an existing fact, a contradiction to be replaced, or a duplicate to drop. This produces a clean memory graph but ties every write to an LLM call.
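That write-path flow can be sketched roughly as follows. This is an illustrative reconstruction, not Mem0's actual code: `classify_write` is a trivial rule-based stand-in for the LLM judgment (the real library calls a model here), and the record shape is assumed.

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    id: int
    text: str

def classify_write(new_text, existing):
    """Stand-in for the LLM judgment made on every write.
    Returns one of: 'ADD', 'UPDATE', 'NONE' (duplicate), plus a target record.
    A trivial prefix rule replaces the model call for illustration."""
    for rec in existing:
        if rec.text == new_text:
            return "NONE", rec           # exact duplicate: drop
        if rec.text.split()[:3] == new_text.split()[:3]:
            return "UPDATE", rec         # same subject: treat as a revised fact
    return "ADD", None

def write(store, new_text):
    op, target = classify_write(new_text, store)
    if op == "ADD":
        store.append(MemoryRecord(len(store), new_text))
    elif op == "UPDATE":
        target.text = new_text           # old fact is overwritten at write time
    return op

store = []
write(store, "My dog's name is Rex")
op = write(store, "My dog's name is Max")  # classified as an update: Rex is replaced
```

The key property is that reconciliation happens before anything lands in storage: after the second write, only one record exists, and the judgment call (here a crude rule, in Mem0 a model) was paid on the write.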

archon-memory-core stores the write verbatim with a persistence class (ephemeral, session, long-term, canonical) and optional priority. Contradictions are allowed to coexist. The retriever uses persistence class, priority, and recency to score, so the "right" memory wins at query time without a model judging correctness up front.
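The retrieval-time resolution described above can be sketched as a pure scoring function. The class weights, priority scale, and one-week recency half-life below are illustrative assumptions, not archon-memory-core's actual constants:

```python
import math
import time
from dataclasses import dataclass, field

# Illustrative weights per persistence class (assumed, not the library's values).
CLASS_WEIGHT = {"ephemeral": 0.25, "session": 0.5, "long-term": 0.75, "canonical": 1.0}

@dataclass
class Memory:
    text: str
    persistence: str
    priority: int = 0                     # optional write-time priority
    written_at: float = field(default_factory=time.time)

def score(mem, now, half_life=7 * 86400):
    """Combine persistence class, priority, and recency into one score.
    Recency decays exponentially with an assumed one-week half-life."""
    age = now - mem.written_at
    recency = math.exp(-math.log(2) * age / half_life)
    return CLASS_WEIGHT[mem.persistence] + 0.1 * mem.priority + 0.2 * recency

def retrieve(memories, now=None):
    """Contradictory memories coexist; the highest-scoring one wins at query time."""
    now = now or time.time()
    return max(memories, key=lambda m: score(m, now))

now = time.time()
conflicting = [
    Memory("My dog's name is Rex", "session", written_at=now - 30 * 86400),
    Memory("My dog's name is Max", "canonical", written_at=now - 7 * 86400),
]
best = retrieve(conflicting, now)  # the canonical, more recent fact outscores the stale one
```

Note that both records survive in storage; no model judged correctness, and rerunning the query with the same inputs always returns the same winner.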

Feature comparison

| Capability | Mem0 | archon-memory-core |
|---|---|---|
| License | Apache 2.0 | Apache 2.0 |
| Install | `pip install mem0ai` | `pip install archon-memory-core` |
| Write-path LLM call | Yes (required) | No |
| Contradiction handling | Reconciled at write by LLM | Scored at retrieval |
| Hosted tier | Yes (mem0.ai cloud) | No (local-first) |
| Storage backend | Vector + graph store | SQLite / Postgres + embeddings |
| Benchmarks | LoCoMo, company-reported | AMB v2.3 preregistered, 99.2% top-1 |
| Determinism | LLM call adds variance | Deterministic writes |

When to pick Mem0

- You want an LLM to actively merge, update, and deduplicate memories as they're written, producing a clean memory graph.
- You want the option of a hosted tier (mem0.ai cloud) rather than running storage yourself.
- You value a mature, battle-tested reconciliation pipeline over cheap writes.

When to pick archon-memory-core

- You want writes to be cheap and deterministic, with no LLM call on the write path.
- You want contradictions to coexist in storage and be resolved by the retriever via persistence class, priority, and recency.
- You want a local-first deployment on SQLite or Postgres.

What the benchmarks actually measure

Mem0's public benchmarks focus on LoCoMo-style long-context retrieval. AMB v2.3 was designed specifically to probe contradictions: "My dog's name is Rex" followed weeks later by "My dog's name is Max." The test asks which name the retriever surfaces when the user asks. archon-memory-core scored 99.2% top-1 on the canonical fact. The point isn't that archon-memory-core beats Mem0 — neither team has run the other's benchmark on a fair harness yet — it's that the contradiction question has to be asked explicitly, and AMB exists to do that.

FAQ

Is archon-memory-core a fork of Mem0?
No — different designs. Mem0 reconciles at write time with an LLM. archon-memory-core keeps conflicting facts and resolves at retrieval using persistence class and priority.
Which handles contradictions better?
Depends on whether you want reconciliation to be an LLM judgment (Mem0) or a deterministic retrieval property (archon-memory-core). AMB v2.3 is the preregistered benchmark that measures this directly.
What are the deployment differences?
Both install via pip. Mem0 has an optional cloud tier. archon-memory-core is local-first with SQLite or Postgres.
Which is cheaper to operate?
archon-memory-core — no LLM on the write path means no per-write model cost.
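As a back-of-envelope illustration of that cost difference (every number below is hypothetical: write volume, token counts, and pricing are assumptions, not measured figures):

```python
# Hypothetical numbers for illustration only.
writes_per_day = 100_000
tokens_per_reconciliation = 800   # prompt + completion per write (assumed)
price_per_1k_tokens = 0.002       # USD, assumed model pricing

# Write-path LLM cost scales linearly with write volume.
llm_cost_per_day = writes_per_day * tokens_per_reconciliation / 1000 * price_per_1k_tokens

# With no model on the write path, this term is simply zero.
print(f"${llm_cost_per_day:.2f}/day")
```

Under these assumed numbers the write-path model spend is about $160/day; the retrieval-time design pays nothing here, though embedding and retrieval costs apply to both systems.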