
Graphory Labs

How Graphory Works

Graphory is graph-based memory for AI agents. Your tools pour in, a knowledge graph comes out, and any AI you choose queries it. No vectors, no LLMs in the extraction path, full audit trail on every fact.

The data layer, not the reasoning layer

Graphory builds and serves a graph. It does not think for you. There are no LLM API keys in the production pipeline, no model routing, no prompt templates guessing at what your data means. Extraction is code. Storage is a typed graph. Retrieval is a query.

Your AI is the reasoning layer. Claude, ChatGPT, Gemini, Cursor, Windsurf, Codex, a custom agent, whatever you prefer. Graphory hands it tools through the Model Context Protocol and a REST API. The same graph answers every client, so switching models never means re-indexing your memory.

You bring the AI. We are the memory.

Why a graph, not vectors

Vector memory retrieves things that look similar. That works well for fuzzy recall and poorly for anything structured. A graph memory answers the questions similarity search cannot: who is connected to what, through which relationship, and since when.

Graph edges are typed and directional. They carry provenance. They can be audited. They do not hallucinate relationships, because nothing gets written unless a deterministic rule fired on a real source file. Mem0, Zep, and Graphiti opened this category; Graphory is the production-grade, deterministic alternative that ships without an LLM in the ingest loop.
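A typed, directional edge of the kind described above can be pictured as a small record: a subject, an object, a relationship type, and a pointer back to the source file and rule that produced it. The field names here are illustrative assumptions, not Graphory's actual schema.

```python
# Illustrative shape of a typed, directional, auditable edge.
# All field names are assumptions; only the concepts (typed relation,
# direction, provenance back to a real source file) come from the text.
edge = {
    "from": "person:jane-doe",          # subject node
    "to": "org:acme",                   # object node
    "type": "WORKS_AT",                 # typed, directional relationship
    "source_file": "crm/contacts/jane-doe.md",  # real file that justified it
    "rule_id": "crm.contact.employer.v1",       # deterministic rule that fired
}
print(edge["type"])
```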

Deterministic extraction

Every ingested file starts as Markdown with a YAML frontmatter block. The frontmatter is the structured contract: id, source, type, entity, domain, title, date. The universal extractor reads frontmatter with regex and heuristics, consults the master ontology for node and edge types, and writes typed nodes and edges to the per-org graph.
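A minimal sketch of that extraction step, assuming a simple key-value frontmatter block and the seven fields the contract names. The regex and helper function are illustrative, not Graphory's implementation.

```python
import re

# Hypothetical sketch of the frontmatter contract described above.
# Field names follow the document; everything else is illustrative.
FRONTMATTER_RE = re.compile(r"\A---\n(.*?)\n---\n", re.DOTALL)
REQUIRED_FIELDS = {"id", "source", "type", "entity", "domain", "title", "date"}

def parse_frontmatter(markdown: str) -> dict:
    """Deterministically read a simple key: value frontmatter block."""
    match = FRONTMATTER_RE.match(markdown)
    if not match:
        raise ValueError("missing frontmatter block")
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"frontmatter missing fields: {sorted(missing)}")
    return fields

doc = """---
id: act-001
source: crm
type: activity
entity: acme
domain: sales
title: Kickoff call
date: 2025-01-15
---
Call notes...
"""
print(parse_frontmatter(doc)["entity"])  # acme
```

The same file always yields the same fields, which is the point: a failed parse is a loud error, never a guess.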

What that buys you is determinism: identical input produces identical graph writes, and every node traces back to the rule that created it.

The ontology is a live document. New patterns, new sources, and new user corrections accumulate across orgs. The system gets sharper with use.

Temporal provenance

Every node and edge in the graph carries a provenance record. Source system, rule identifier, confidence score, authority level, ingest timestamp, the time the underlying event actually occurred, the time the fact was last seen, and the window over which it is considered valid.
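The provenance attributes listed above can be sketched as a record type. The field names below are assumptions mapped one-to-one onto the text, not Graphory's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shape only; field names are assumptions based on the
# provenance attributes listed above.
@dataclass(frozen=True)
class Provenance:
    source_system: str        # where the fact came from (e.g. "crm")
    rule_id: str              # deterministic rule that wrote it
    confidence: float         # extraction confidence score
    authority: str            # qualitative trust level
    ingested_at: str          # when the file was ingested
    occurred_at: str          # when the underlying event happened
    last_seen_at: str         # most recent observation of the fact
    valid_from: str           # start of validity window
    valid_to: Optional[str]   # end of validity window (None = still valid)

p = Provenance("crm", "crm.contact.employer.v1", 0.98, "code",
               "2025-01-15T09:00Z", "2025-01-14T16:00Z",
               "2025-01-15T09:00Z", "2025-01-14", None)
print(p.authority)  # code
```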

Authority is qualitative, not numeric. Facts written by code are the default. Facts written by AI tools are trusted above code. Facts written by user correction are trusted above AI. Admin overrides sit at the top. When two sources disagree about the same relationship, the higher-authority fact wins, and the loser is kept in the audit log rather than discarded.
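That conflict-resolution rule can be sketched in a few lines. Only the ordering (code < AI < user < admin) and the keep-the-loser behavior come from the text; the level names, dict shapes, and function are illustrative.

```python
# Sketch of the authority ordering described above.
AUTHORITY = {"code": 0, "ai": 1, "user": 2, "admin": 3}

def resolve(existing: dict, incoming: dict, audit_log: list) -> dict:
    """Keep the higher-authority fact; log the loser instead of dropping it.
    Ties go to the incoming (most recently seen) fact -- an assumption."""
    if AUTHORITY[incoming["authority"]] >= AUTHORITY[existing["authority"]]:
        audit_log.append(existing)
        return incoming
    audit_log.append(incoming)
    return existing

audit: list = []
code_fact = {"edge": "works_at", "value": "Acme", "authority": "code"}
user_fact = {"edge": "works_at", "value": "Acme Corp", "authority": "user"}
winner = resolve(code_fact, user_fact, audit)
print(winner["value"])  # Acme Corp
```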

This means a Graphory graph is not just a database. It is a defensible record of where every claim came from and when.

Two-phase async lifecycle

Ingest and extraction are intentionally decoupled. When a source file arrives, it lands on disk and is indexed into a full-text search sidecar immediately. That means search_graph and the ingest API are effectively synchronous: you can query the content you just pushed within seconds.

Graph extraction is deferred. The nightly pipeline processes new files, and you can trigger an on-demand pass with the sync_graph MCP tool. This is the phase that creates typed nodes and edges, resolves identities across sources, and runs community detection.

The practical consequence: body-text search returns fresh results right away, but get_entity and traverse reflect the state of the most recent extraction pass. Ingest fast, crystallize on a schedule.
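The two phases can be simulated with a toy store: ingest writes to a full-text index immediately, while the graph only fills in when extraction runs. Class and method names are illustrative; only `search_graph` and `sync_graph` are tool names taken from the text, and only the search-now / extract-later split is Graphory's behavior.

```python
# Toy simulation of the two-phase lifecycle; not Graphory's implementation.
class ToyStore:
    def __init__(self):
        self.fulltext = {}   # phase 1: synchronous full-text sidecar
        self.graph = {}      # phase 2: populated only by extraction

    def ingest(self, doc_id: str, body: str):
        self.fulltext[doc_id] = body  # searchable within seconds

    def search_graph(self, term: str) -> list:
        return [d for d, body in self.fulltext.items() if term in body]

    def sync_graph(self):
        # deferred pass: crystallize typed nodes from ingested files
        for doc_id in self.fulltext:
            self.graph[doc_id] = {"type": "Activity"}

store = ToyStore()
store.ingest("note-1", "Kickoff call with Acme")
print(store.search_graph("Acme"))   # fresh immediately
print(store.graph)                  # empty until extraction runs
store.sync_graph()
print("note-1" in store.graph)      # True
```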

Per-org graph isolation

One organization equals one named graph. Graphs are isolated at the database level, not just filtered at query time. API keys are scoped per org with the gs_ak_ prefix, and MCP sessions resolve to a single org through the Graph API.
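A sketch of that key-to-org resolution, assuming an in-memory lookup table. The `gs_ak_` prefix comes from the text; the table, key value, and function name are hypothetical.

```python
# Sketch of per-org API key scoping; not Graphory's implementation.
KEY_TABLE = {"gs_ak_demo123": "acme-corp"}  # key -> org (hypothetical)

def org_for_key(api_key: str) -> str:
    """Resolve an API key to exactly one org, or fail loudly."""
    if not api_key.startswith("gs_ak_"):
        raise ValueError("not a Graphory API key")
    try:
        return KEY_TABLE[api_key]
    except KeyError:
        raise ValueError("unknown or revoked key") from None

print(org_for_key("gs_ak_demo123"))  # acme-corp
```

Because every key resolves to a single named graph, isolation holds at the database level rather than relying on query-time filters.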

Users can belong to multiple orgs and switch contexts freely. Inside an org, all members currently share full access to the graph. Role-based access control is on the roadmap but deferred beyond the MVP.

The canonical node types used across every org are Person, Organization, Activity, Asset, Account, and Thread. Industry-specific flavors (an invoice, a deal, a vessel) are expressed as properties on these core types, not as new types. That keeps the ontology portable across industries.
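The "flavors as properties" rule can be sketched directly: an invoice is not a new node type, it is an Asset with invoice-specific properties. The six canonical type names come from the text; the property names and helper are illustrative.

```python
# Sketch of the canonical-type rule; property names are hypothetical.
CANONICAL_TYPES = {"Person", "Organization", "Activity", "Asset", "Account", "Thread"}

def make_node(node_type: str, **props) -> dict:
    """Refuse any node that is not one of the six canonical types."""
    if node_type not in CANONICAL_TYPES:
        raise ValueError(f"{node_type} is not a canonical type")
    return {"type": node_type, "properties": props}

invoice = make_node("Asset", kind="invoice", number="INV-042", amount=1200.0)
deal = make_node("Activity", kind="deal", stage="negotiation")
print(invoice["type"])  # Asset
```

Holding the type set fixed is what keeps one ontology portable across industries: a new vertical adds properties, never schema.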

BYOC AI (bring your own client)

Graphory exposes two surfaces: a REST API and an MCP server. The MCP server is the primary interface for AI clients. After signing up, you run graphory login to install a gs_ak_ API key locally, and your AI client discovers the tools automatically.

From that point on, your AI becomes the reasoning layer. It searches, traverses, builds timelines, writes insights back with confidence scores, triggers extraction, manages connections, and reads billing. All of it through tool calls against a graph that belongs to your org.

Your AI becomes smarter every session because the graph accumulates. Your AI does not become locked in, because the graph is portable and queryable by any other client you choose later.

Where to go next