English | 中文版
A highly modular agentic file system framework that provides structured memory and operating environments for Large Language Models (LLMs). GAM supports text, video, and long-horizon (agent trajectory) modalities, offering four access levels: Python SDK, CLI, REST API, and Web Platform.
- Intelligent Chunking: LLM-based text segmentation that automatically identifies semantic boundaries.
- Memory Generation: Generates structured memory summaries (Memory + TLDR) for each text chunk.
- Hierarchical Organization: Automatically organizes memories into a hierarchical directory structure (Taxonomy).
- Incremental Addition: Append new content to existing GAMs without rebuilding.
- Multi-environment Support: Supports both local file systems and Docker container workspaces.
- Flexible LLM Backends: Compatible with OpenAI, SGLang, and other inference engines.
- Long Text: Hierarchical memory organization and exploratory QA for long documents.
- Long Video: Automated detection, segmentation, and description for building long video memory.
- Long-horizon (Agent Trajectory): Efficient compression and organization of long-sequence agent trajectories (e.g., complex reasoning steps, tool invocation logs), enabling agents to manage context across extensive operations.
- Python SDK: High-level Python SDK for easy integration into agentic workflows.
- CLI Tools: Unified `gam-add` and `gam-request` commands for command-line interaction.
- REST API: High-performance RESTful API (FastAPI + Uvicorn) with auto-generated OpenAPI docs, request validation, and CORS support.
- Web Platform: Flask-based visualization and management interface.
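The Memory + TLDR records and the taxonomy they are filed under can be pictured roughly like this (a minimal sketch; `MemoryNode`, `file_memory`, and the field names are illustrative, not GAM's actual classes):

```python
from dataclasses import dataclass

@dataclass
class MemoryNode:
    """One chunk's structured memory: a full summary plus a short TLDR."""
    memory: str   # detailed summary of the chunk
    tldr: str     # one-line gist used for fast scanning
    path: str     # position in the hierarchical taxonomy, e.g. "papers/memory"

# A tiny taxonomy: directory-like paths map to lists of memory nodes.
taxonomy: dict[str, list[MemoryNode]] = {}

def file_memory(node: MemoryNode) -> None:
    """Insert a node under its taxonomy path, creating the branch if needed."""
    taxonomy.setdefault(node.path, []).append(node)

file_memory(MemoryNode(
    memory="The paper proposes hierarchical memory for long documents.",
    tldr="Hierarchical memory for long docs.",
    path="papers/memory",
))
print(list(taxonomy))  # ['papers/memory']
```

Incremental addition then amounts to filing new nodes into an existing taxonomy rather than rebuilding it.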
```bash
# Full installation with all features
pip install -e ".[all]"
```

GAM can be used through the Python SDK, CLI, REST API, or Web interface.
```python
from gam import Workflow

wf = Workflow("text", gam_dir="./my_gam", model="gpt-4o-mini", api_key="sk-xxx")
wf.add(input_file="paper.pdf")
result = wf.request("What is the main conclusion?")
print(result.answer)
```

```bash
# Add content
gam-add --type text --gam-dir ./my_gam --input paper.pdf

# Query content
gam-request --type text --gam-dir ./my_gam --question "What is the main conclusion?"
```

```bash
# Start REST API server (FastAPI + Uvicorn)
python examples/run_api.py --port 5001
# Interactive docs available at http://localhost:5001/docs

# See usage example
python examples/rest_api_client.py
```

```bash
python examples/run_web.py --model gpt-4o-mini --api-key sk-xxx
```

Set up environment variables to avoid repeated parameter input. The GAM Agent (memory building) and the Chat Agent (Q&A) can be configured independently:
```bash
# GAM Agent (memory building)
export GAM_API_KEY="sk-your-api-key"
export GAM_MODEL="gpt-4o-mini"
export GAM_API_BASE="https://api.openai.com/v1"

# Chat Agent (Q&A) - falls back to GAM Agent config when not set
export GAM_CHAT_API_KEY="sk-your-chat-api-key"
export GAM_CHAT_MODEL="gpt-4o"
export GAM_CHAT_API_BASE="https://api.openai.com/v1"
```

Detailed usage instructions for each component can be found in the following guides:
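The fallback noted in the comment above (Chat Agent settings default to the GAM Agent's) can be sketched as follows; the `resolve_chat_config` helper is illustrative, not part of GAM:

```python
import os

def resolve_chat_config() -> dict:
    """Resolve Chat Agent settings, falling back to GAM Agent values when unset."""
    def pick(chat_var: str, gam_var: str):
        # Prefer the GAM_CHAT_* variable; fall back to the GAM_* one.
        return os.environ.get(chat_var) or os.environ.get(gam_var)

    return {
        "api_key": pick("GAM_CHAT_API_KEY", "GAM_API_KEY"),
        "model": pick("GAM_CHAT_MODEL", "GAM_MODEL"),
        "api_base": pick("GAM_CHAT_API_BASE", "GAM_API_BASE"),
    }

# With only the GAM Agent configured, the Chat Agent inherits its settings.
os.environ["GAM_API_KEY"] = "sk-xxx"
os.environ["GAM_MODEL"] = "gpt-4o-mini"
os.environ.pop("GAM_CHAT_MODEL", None)
print(resolve_chat_config()["model"])  # gpt-4o-mini
```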
- Python SDK Usage: `Workflow` API and advanced component usage.
- CLI Usage Guide: Detailed `gam-add` and `gam-request` commands.
- REST API Usage: RESTful API access and programmatic integration.
- Web Usage Guide: Setting up and running the visual management platform.
Check the examples/ directory for sample projects and usage guides:
| Example | Description |
|---|---|
| `long_text/` | Text GAM building and QA. |
| `long_video/` | Video GAM building and QA. |
| `long_horizon/` | Long-horizon agent trajectory compression with search/memorize/recall. |
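The memorize/recall loop behind the `long_horizon/` example can be illustrated with a toy store; `TrajectoryMemory` and its methods are a sketch of the idea, not GAM's actual API:

```python
class TrajectoryMemory:
    """Toy long-horizon store: memorize trajectory steps, recall them by keyword."""

    def __init__(self) -> None:
        self.steps: list[str] = []

    def memorize(self, step: str) -> None:
        """Append one trajectory step (e.g. a reasoning step or tool log)."""
        self.steps.append(step)

    def recall(self, query: str) -> list[str]:
        """Naive search: return every memorized step containing the query."""
        return [s for s in self.steps if query.lower() in s.lower()]

mem = TrajectoryMemory()
mem.memorize("Called search tool with query 'GAM paper'")
mem.memorize("Summarized retrieved chunk into a TLDR")
print(mem.recall("tldr"))  # ['Summarized retrieved chunk into a TLDR']
```

A real implementation would compress old steps and use semantic rather than substring search, but the interface shape is the same.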
The research/ directory contains the original research codebase for the GAM paper, including benchmark evaluation scripts (LoCoMo, HotpotQA, RULER, NarrativeQA) and the dual-agent (Memorizer + Researcher) implementation:
```bash
cd research
pip install -e .
```

```python
from gam_research import MemoryAgent, ResearchAgent
```

For more details, see the Research README.
This project is licensed under the MIT License - see the LICENSE file for details.