LangGraph
How do I connect LangGraph to Areev?
Point LangGraph at the Areev A2A endpoint (POST /a2a) with a Bearer API key, wrap each skill you need as a LangGraph tool, and attach the tools to your agent. The agent card at /.well-known/agent.json lists all 15 skills.
Areev exposes its full memory surface — 15 skills including memory_remember, memory_recall, memory_cal, memory_graph, and session_bootstrap — via the Agent-to-Agent (A2A) JSON-RPC protocol. LangGraph agents consume these as tools. If your LangGraph stack already ships an A2A tool helper, use it. If not, a six-line HTTP wrapper around requests.post gives you the same thing with no extra dependencies.
The A2A contract is stable: every skill accepts a tasks/send JSON-RPC call with message.parts[] (text or data) and metadata.skill, and returns a result object. See the A2A reference for the complete skill catalog and input schemas.
```python
import requests
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

AREEV_URL = "https://your-areev-host"
AREEV_KEY = "ar_..."  # API key from the Areev app

def a2a_call(skill: str, text: str) -> dict:
    """Send one A2A tasks/send request and return the JSON-RPC result."""
    r = requests.post(
        f"{AREEV_URL}/a2a",
        headers={"Authorization": f"Bearer {AREEV_KEY}"},
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tasks/send",
            "params": {
                "id": f"task-{skill}",
                "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
                "metadata": {"skill": skill},
            },
        },
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["result"]

@tool
def remember(fact: str) -> dict:
    """Store a natural-language fact in Areev memory."""
    return a2a_call("memory_remember", fact)

@tool
def recall(question: str) -> dict:
    """Search Areev memory with a natural-language question."""
    return a2a_call("memory_recall", question)

# `model` is any LangChain chat model instance
agent = create_react_agent(model, tools=[remember, recall])
result = agent.invoke({"messages": [("user", "What does john prefer to drink?")]})
```
How do I authenticate?
Use an Authorization: Bearer ar_... header on every A2A request. API keys are created in the Areev app under Settings → API Keys.
Areev keys carry organization, memory, and role scope. The key determines which memories the tool can read and write — there is nothing to configure on the LangGraph side beyond setting the header. Keys starting with ar_ are admin-scoped; keys starting with ap_ are project-scoped. See authentication for the full key model.
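Because the `ar_` / `ap_` prefix encodes the scope, a startup check can catch a mis-pasted key before the first A2A call. This prefix classifier is an illustrative helper, not part of the Areev API:

```python
def key_scope(key: str) -> str:
    """Classify an Areev API key by prefix: ar_ is admin, ap_ is project."""
    if key.startswith("ar_"):
        return "admin"
    if key.startswith("ap_"):
        return "project"
    raise ValueError("unrecognized Areev key prefix")
```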
For cloud-hosted Areev, the base URL is your org subdomain (e.g. https://acme.areev.ai). For self-hosted, it is the host where areev serve --a2a is running. A2A is enabled by default in cloud deployments; self-hosted operators must pass --a2a to the server.
```python
import os

AREEV_URL = os.environ["AREEV_URL"]
AREEV_KEY = os.environ["AREEV_API_KEY"]
```
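To sanity-check the base URL before wiring any tools, you can fetch the agent card and inspect the skill catalog. A minimal sketch, assuming the card at `/.well-known/agent.json` is readable with your API key:

```python
import os
import requests

AREEV_URL = os.environ.get("AREEV_URL", "https://your-areev-host")
AREEV_KEY = os.environ.get("AREEV_API_KEY", "ar_...")

def fetch_agent_card() -> dict:
    """Fetch the agent card, which lists all 15 Areev skills."""
    r = requests.get(
        f"{AREEV_URL}/.well-known/agent.json",
        headers={"Authorization": f"Bearer {AREEV_KEY}"},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()
```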
How do I run a CAL query from LangGraph?
Call memory_cal with a CAL query string. CAL is Areev’s declarative Context Assembly Language — one query can fuse text search, structural filters, graph traversal, and formatting.
The memory_cal skill is text-input only. Pass the CAL source as the text part and Areev returns the assembled context ready for your agent to reason over. This is the highest-leverage Areev skill for LangGraph because a single tool call replaces several sequential recall + filter + rank steps. See CAL Queries for the full syntax.
```python
@tool
def assemble_context(cal_query: str) -> dict:
    """Run a CAL query against Areev and return the assembled context.

    Example: 'RECALL beliefs ABOUT john LIMIT 5 FORMAT markdown'
    """
    return a2a_call("memory_cal", cal_query)

agent = create_react_agent(model, tools=[remember, recall, assemble_context])
```
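If your application builds CAL queries programmatically rather than letting the model write them, a tiny composer keeps the query shape consistent. This helper is purely illustrative: it emits only the `RECALL … ABOUT … LIMIT … FORMAT …` shape shown in the docstring example above, not the full CAL grammar:

```python
def build_cal_query(kind: str, subject: str, limit: int = 5, fmt: str = "markdown") -> str:
    """Compose a simple CAL RECALL query string (illustrative helper)."""
    return f"RECALL {kind} ABOUT {subject} LIMIT {limit} FORMAT {fmt}"
```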
How do I use session bootstrap for long-running agents?
Call session_bootstrap at the start of a LangGraph run to hydrate the graph state with the session’s latest state grain, active goals, and recent actions.
A LangGraph state machine typically starts from a blank slate every invocation. Session bootstrap gives the agent a consistent memory of what was happening at the end of its last turn — the last State grain, any open Goal grains, and the last N Action grains. Bind the result into your graph’s initial state so downstream nodes can read it without another tool call.
```python
import json

def bootstrap_state(session_id: str) -> dict:
    """Hydrate the session's last State grain, open Goals, and recent Actions."""
    return a2a_call("session_bootstrap", json.dumps({"session_id": session_id}))

initial = bootstrap_state("sess_abc123")
result = agent.invoke({
    "messages": [("user", "Continue where we left off")],
    "areev_session": initial,
})
```
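To keep the state-assembly step testable without a live Areev host, the bootstrap call can be injected as a parameter. A hypothetical helper (the `bootstrap_fn` injection point and `make_initial_state` name are assumptions for illustration, not an Areev or LangGraph API):

```python
def make_initial_state(session_id: str, user_msg: str, bootstrap_fn) -> dict:
    """Assemble the agent.invoke() payload, hydrating memory via bootstrap_fn.

    bootstrap_fn is any callable returning a session_bootstrap result,
    e.g. the bootstrap_state helper above; injecting it lets tests pass
    a stub instead of hitting the A2A endpoint.
    """
    return {
        "messages": [("user", user_msg)],
        "areev_session": bootstrap_fn(session_id),
    }
```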
What skills should I expose as tools?
For most LangGraph agents, three tools cover 90% of cases: memory_remember (write), memory_recall or memory_cal (read), and memory_forget (delete on user request).
Exposing every Areev skill as a separate tool inflates the tool-choice prompt and slows reasoning. Pick the smallest set that matches your agent’s job. Add memory_graph for agents that need relationship traversal, compliance_verify for agents operating in regulated domains, and pii_scan for pre-storage PII filtering.
| Agent pattern | Tools to expose |
|---|---|
| Conversational memory | memory_remember, memory_recall |
| Research / RAG | memory_cal, memory_recall_chain |
| Relationship reasoning | memory_graph, memory_recall |
| Regulated-domain agents | compliance_verify, pii_scan + above |
| User-facing with “forget me” | memory_forget + above |
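If you expose more than two or three skills, a small factory avoids copy-pasting near-identical `@tool` functions. A sketch, assuming you wrap each returned function with LangChain's `tool()` decorator afterwards (`make_skill_fn` and `SKILL_DESCRIPTIONS` are illustrative names, not Areev APIs):

```python
SKILL_DESCRIPTIONS = {
    "memory_remember": "Store a natural-language fact in Areev memory.",
    "memory_recall": "Search Areev memory with a natural-language question.",
    "memory_forget": "Delete a stored memory on user request.",
}

def make_skill_fn(a2a_call, skill: str):
    """Return a plain function bound to one skill.

    Sets __name__ and __doc__ so that wrapping the result with tool()
    yields a correctly named and described entry in the tool-choice prompt.
    """
    def call(text: str) -> dict:
        return a2a_call(skill, text)
    call.__name__ = skill
    call.__doc__ = SKILL_DESCRIPTIONS.get(skill, f"Call the Areev {skill} skill.")
    return call
```

Then build the agent's tool list from the pattern table, e.g. `tools = [tool(make_skill_fn(a2a_call, s)) for s in ("memory_remember", "memory_recall")]`.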
Related
- A2A — full skill catalog, schemas, and agent card format
- CAL Queries — CAL syntax for memory_cal
- Authentication — API key model, scopes, rotation
- CrewAI — same pattern for CrewAI
- AutoGen — same pattern for AutoGen