
Releases: JohnnyFiv3r/Core-Memory

v1.1.0

09 May 11:56


What's Changed

Added

  • Deterministic OpenClaw bridge operations scripts (scripts/openclaw_bridge_install.sh, openclaw_bridge_doctor.sh, openclaw_bridge_ci_smoke.sh).
  • CI smoke workflow .github/workflows/openclaw-bridge-smoke.yml.

Changed

  • OpenClaw bridge plugin manifest schema now includes coreMemoryRepo.
  • OpenClaw integration docs now describe the canonical install/verify path and runtime verification signals.
  • Association inference v2.1 hardening: crawler/model-inferred association ingestion validates a strict canonical inference subset by default and quarantines malformed/non-canonical rows.
  • Causal grounding policy now requires at least one non-temporal structural relation for full grounding; follows/associated_with-only chains downgrade to partial grounding.
  • Canonical retrieval cleanup: removed deprecated public retrieval surfaces (/v1/memory/search-form, /v1/memory/reason, OpenClaw bridge search-form/reason); adapters/docs aligned to search/trace/execute.
  • Canonical planner authority cleanup: removed legacy retrieval/pipeline/execute.py; canonical planner authority is retrieval/pipeline/canonical.py.
  • Test contract cleanup: retired obsolete search-form/reason legacy test modules; rebaselined retrieval contract coverage on canonical search/trace/execute + hydration/tenant/v2.1 policies.
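The grounding downgrade rule in the list above can be sketched as a small predicate. This is an illustrative sketch only: the relation names and `grounding_level` function are assumptions, not the shipped Core Memory API.

```python
# Illustrative sketch of the v2.1 causal grounding rule; the function and
# relation vocabulary below are assumptions, not the shipped API.

# Temporal/weak relations that cannot fully ground a causal chain on their own.
NON_STRUCTURAL = {"follows", "associated_with"}

def grounding_level(chain):
    """Return 'full' if the chain contains at least one non-temporal
    structural relation; chains made only of follows/associated_with
    downgrade to 'partial' grounding."""
    if any(rel not in NON_STRUCTURAL for rel in chain):
        return "full"
    return "partial"
```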

Fixed

  • Bridge ingestion modules now read stdin fully to avoid truncated JSON payload parsing failures in large event envelopes.
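The fix pattern is roughly the following minimal sketch (the actual module and function names in the bridge differ): read stdin to EOF before handing anything to the JSON parser.

```python
import json
import sys

def read_event_envelope():
    """Read stdin fully (to EOF) before parsing, so a large event
    envelope is never parsed from a truncated partial read."""
    raw = sys.stdin.buffer.read()  # blocks until EOF; returns the whole payload
    return json.loads(raw)
```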

Packaging

  • License correctly recorded as Apache-2.0 (the 1.0.1 PyPI release showed MIT due to an automation error during initial publish).
  • Author email corrected to john@linelead.io.

Full Changelog: v1.0.1...v1.1.0

v1.0.1 – stabilize event-driven session-first runtime; reduce memoryFlush timeout risk

09 Mar 16:09


Summary

This release stabilizes the newer event-driven, session-first runtime architecture and is intended to reduce the risk of OpenClaw stalls caused by the older memoryFlush → transcript extraction → consolidate lifecycle.

The main architectural shift is that Core Memory now supports a finalize-hook / side-effect memory flow where memory work can happen after turn finalization or outside the active model turn, instead of asking the model to perform heavy memory construction during compaction.

This release is especially important for deployments where the older OpenClaw integration path has caused:

  • compaction timeouts
  • stale snapshot / duplicate reply behavior
  • restart loops tied to memoryFlush

Why this release matters

Older Core Memory integrations often depended on OpenClaw memoryFlush to run:

  • extract-beads.py
  • consolidate.py
  • promotion / rolling-window maintenance

inside the same bounded agent run used for compaction.

That architecture could brick the runtime when compaction timed out before memory work completed.

v1.0.1 moves the codebase much closer to the intended model:

  • session-first live memory
  • event-driven turn finalization
  • agent-reviewed association pass
  • rolling continuity as an explicit surface
  • retrieval from archive/association truth
  • MEMORY.md retained as a parallel OpenClaw-only semantic layer

Highlights

Event-driven runtime path

Core Memory's runtime is now clearly centered on memory_engine.py, making finalized-turn processing the intended entrypoint for memory creation and updates.

This supports:

  • emit_turn_finalized(...)
  • coordinator/finalize-hook integrations
  • async or side-effect memory execution after reply finalization
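A minimal sketch of that hand-off, assuming a simple in-process queue: only the emit_turn_finalized name comes from the release notes; the payload fields and the consumer are illustrative.

```python
import queue

# Illustrative queue bridging the finalize hook to off-turn memory work.
turn_events = queue.Queue()

def emit_turn_finalized(session_id, turn_id, transcript):
    """Enqueue a finalized-turn payload; heavy memory work runs off-turn."""
    turn_events.put({"session_id": session_id, "turn_id": turn_id,
                     "transcript": transcript})

def drain_turn_events(process):
    """Side-effect consumer: run bead extraction outside the model turn."""
    handled = 0
    while not turn_events.empty():
        process(turn_events.get())
        handled += 1
    return handled
```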

Session-first live authority

Live memory now more clearly centers on session-local append-only bead storage rather than treating index.json as the primary live source of truth.

This improves alignment with the intended architecture:

  • current session beads are written first
  • association and promotion happen against visible session memory
  • archive truth is established at flush/session transition
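Session-local append-only bead storage can be as simple as the following sketch; the directory layout, file name, and bead fields are assumptions, not the actual on-disk schema.

```python
import json
from pathlib import Path

def append_bead(session_dir, bead):
    """Append one bead to the session-local JSONL log. History is never
    rewritten, matching the append-only contract described above."""
    session_dir = Path(session_dir)
    session_dir.mkdir(parents=True, exist_ok=True)
    with (session_dir / "beads.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(bead) + "\n")
```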

Association pass contract

The association layer now better reflects the intended behavior:

  • review visible beads each turn
  • determine promotion / promotion-candidate state
  • append causal associations across current session and rolling continuity context
  • keep memory history append-only
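In outline, the contract above might look like the sketch below. The confidence threshold, field names, and the infer_causes callback are all assumptions for illustration.

```python
def association_pass(visible_beads, infer_causes):
    """One per-turn pass over visible session memory: pick promotion
    candidates and emit new causal association records. Existing beads
    are never mutated; results are appended as new records."""
    candidates = [b["id"] for b in visible_beads
                  if b.get("confidence", 0.0) >= 0.8]  # illustrative threshold
    associations = [{"from": b["id"], "to": target, "relation": "caused_by"}
                    for b in visible_beads
                    for target in infer_causes(b, visible_beads)]
    return candidates, associations
```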

Rolling continuity surface

Rolling-window continuity is now represented more explicitly as a surface/store rather than only as a rendered artifact. This improves the architecture for bounded prompt injection and continuity carryover between sessions.

Retrieval alignment

Retrieval form/catalog behavior is now more aligned with canonical association records and archive truth, improving the bridge between:

  • vague memory prompts
  • structured bead attributes
  • causal archive search

Recommended integration change for OpenClaw users

If your OpenClaw instance is still using the old model:

  • memoryFlush prompt runs extract-beads.py
  • memoryFlush prompt runs consolidate.py
  • memory is constructed during compaction

you should treat that as a legacy integration.

Preferred model

Use:

  • finalize hook
  • emit_turn_finalized(...)
  • async side-effect processing
  • or scheduled sidecar_sync_session.py as a bridge

Important guidance

memoryFlush should be treated as a lightweight session-transition signal, not a place to perform heavyweight memory construction inside the active agent turn.

This release helps support that transition.

Compatibility notes

Still supported

  • existing root scripts
  • compatibility wrappers
  • legacy transcript extraction path for backfill/replay
  • OpenClaw MEMORY.md as a parallel semantic memory surface

Transitional areas

Some compatibility layers and shims remain in the repo to avoid breaking current workflows. The architecture is now suitable for forward development, but some cleanup and consolidation may still happen in future releases.

Upgrade guidance

If you are using the older OpenClaw memoryFlush prompt flow

Recommended next step:

  • update to this release
  • stop relying on in-turn extract-beads.py + consolidate.py as the primary memory lifecycle
  • wire OpenClaw to emit finalized-turn payloads through the finalize hook / API path
  • if native finalize hook wiring is not yet done, run sidecar_sync_session.py on a schedule outside the agent turn
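Until native finalize-hook wiring is in place, the scheduled-sidecar bridge can be as simple as a loop like this sketch; the interval and the injectable runner are illustrative choices, not part of the project.

```python
import subprocess
import time

def run_sidecar_loop(interval_s=300.0, iterations=None, run_cmd=subprocess.run):
    """Invoke sidecar_sync_session.py on a fixed interval, always outside
    the agent turn. iterations=None runs forever; run_cmd is injectable
    so the loop can be exercised without spawning processes."""
    count = 0
    while iterations is None or count < iterations:
        run_cmd(["python", "sidecar_sync_session.py"], check=False)
        count += 1
        time.sleep(interval_s)
    return count
```

In production this would typically be driven by cron or a systemd timer instead of a long-running Python loop; the sketch just shows that the sync happens on a schedule, never inside a model turn.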

If you are already on the newer runtime path

This release should be a straightforward improvement and a better base for continued development.

Architecture notes

This release reflects the following core design direction:

  • memory_engine.py as canonical runtime entry
  • session-local full beads as live memory authority
  • per-turn association over visible memory
  • flush as session transition, not primary memory construction
  • rolling continuity as compressed injected context
  • archive graph as retrieval truth
  • MEMORY.md retained as OpenClaw-only semantic summary memory

Known limitations

  • Some transitional modules and compatibility wrappers remain.
  • Older OpenClaw deployments may still be wired to the legacy memoryFlush script path until manually updated.
  • Full elimination of legacy live-index fallback behavior may continue in follow-up work.

v0.1.0 — Initial Public Release

04 Mar 01:39



Core Memory is a deterministic causal memory layer for AI agents.

Instead of relying on chat log replay or vector similarity, Core Memory stores structured memory events ("beads") and explicit causal relationships between them. These events are persisted in an append-only JSONL log and used to construct bounded, deterministic context packets for agent prompts.

This release represents the first public version of the architecture and reference implementation.

Core Concepts

Core Memory introduces several primitives for agent memory:

  • Beads – structured memory events (lessons, outcomes, decisions, hypotheses)
  • Associations – explicit causal relationships between beads
  • Context Packets – bounded sets of relevant memory injected into prompts
  • Compaction – promotion and summarization of stable knowledge over time
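As a rough illustration of how the first two primitives might look in the append-only log, here is a hypothetical pair of records; the field names are assumptions, not the actual schema.

```python
import json

# Hypothetical shapes; the real bead/association schemas may differ.
bead = {"id": "bead-001", "kind": "lesson",
        "text": "Retries without backoff overloaded the upstream API."}
association = {"from": "bead-001", "to": "bead-000", "relation": "caused_by"}

# Append-only JSONL log: one event per line, never rewritten in place.
log_lines = [json.dumps(bead), json.dumps(association)]
```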

The system is designed to provide:

  • deterministic recall
  • causal reasoning over prior events
  • bounded prompt context
  • an inspectable append-only memory log

Features in v0.1.0

  • append-only JSONL memory store
  • bead schema and promotion workflow
  • causal association graph
  • deterministic context packet generation
  • sidecar extraction hooks for agent turns
  • "Dreamer" association analysis bot module
  • integration adapters (including OpenClaw, PydanticAI, and SpringAI)
  • CLI tooling and test suite
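Deterministic context packet generation reduces, in sketch form, to a stable ranking plus a hard budget. The ranking key and field names here are assumptions made for illustration.

```python
def build_context_packet(beads, budget):
    """Deterministic, bounded selection: a stable sort by (priority, id)
    yields the same packet for the same memory state, and the budget
    caps how much context is injected into the prompt."""
    ranked = sorted(beads, key=lambda b: (-b.get("priority", 0), b["id"]))
    return ranked[:budget]
```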

Project Status

This is an early public release intended for experimentation and feedback.

The architecture and core primitives are expected to evolve as the project is used with real agent systems.

Contributions, feedback, and design discussions are welcome.

https://github.com/core-memory