AutoGen
How do I connect AutoGen to Areev?
Write a small function that POSTs to Areev’s A2A endpoint, register it with assistant.register_for_llm(...) and user_proxy.register_for_execution(...), and the assistant can read and write memory like any other tool.
AutoGen’s function-calling model maps one-to-one with Areev’s A2A skills. Each skill becomes a function the assistant can invoke; the user proxy executes it and returns the result. The wiring works the same across AutoGen Core and AutoGen AgentChat — no framework-specific A2A plugin is needed. See the A2A reference for the full 15-skill catalog.
Because AutoGen’s function registration is opinionated about signatures, the cleanest approach is one Python function per Areev skill with typed parameters. This gives the LLM a clean tool schema and lets the user proxy validate inputs before calling Areev.
import os, requests
from typing import Annotated
from autogen import AssistantAgent, UserProxyAgent

AREEV_URL = os.environ["AREEV_URL"]
AREEV_KEY = os.environ["AREEV_API_KEY"]

def _a2a(skill: str, text: str) -> dict:
    r = requests.post(
        f"{AREEV_URL}/a2a",
        headers={"Authorization": f"Bearer {AREEV_KEY}"},
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tasks/send",
            "params": {
                "id": f"task-{skill}",
                "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
                "metadata": {"skill": skill},
            },
        },
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["result"]

assistant = AssistantAgent("assistant", llm_config={...})
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER")

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Store a natural-language fact in Areev memory.")
def remember(fact: Annotated[str, "A short factual statement"]) -> dict:
    return _a2a("memory_remember", fact)

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Search Areev memory with a natural-language question.")
def recall(question: Annotated[str, "A natural-language question"]) -> dict:
    return _a2a("memory_recall", question)

user_proxy.initiate_chat(assistant, message="What does john prefer to drink?")
How do I authenticate?
Set the AREEV_URL and AREEV_API_KEY environment variables. Every A2A request carries an Authorization: Bearer <key> header.
Areev API keys come from the Areev app under Settings → API Keys. A key is scoped to an organization and optionally a memory and role. The assistant cannot elevate its scope; whatever the key permits is the upper bound of what AutoGen can read or write. Full key model: authentication.
For cloud Areev, the base URL is https://<org>.areev.ai. For self-hosted, use the host where areev serve --a2a is running (A2A must be enabled explicitly in self-hosted).
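Before wiring up the agents, you can verify the key with a direct call to the A2A endpoint. A minimal sketch, reusing the tasks/send envelope from the helper above (the skill and message text here are just placeholders for the check):

import os, requests

AREEV_URL = os.environ["AREEV_URL"]      # e.g. https://<org>.areev.ai, or your self-hosted host
AREEV_KEY = os.environ["AREEV_API_KEY"]  # created under Settings → API Keys

resp = requests.post(
    f"{AREEV_URL}/a2a",
    headers={"Authorization": f"Bearer {AREEV_KEY}"},
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",
        "params": {
            "id": "task-auth-check",
            "message": {"role": "user", "parts": [{"type": "text", "text": "ping"}]},
            "metadata": {"skill": "memory_recall"},
        },
    },
    timeout=30,
)
print(resp.status_code)  # a 401/403 status means the key is missing, wrong, or under-scoped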
How do I run a CAL query?
Register memory_cal as its own function. CAL is text-input only, so the AutoGen signature is a single string parameter that carries the CAL source.
CAL (Context Assembly Language) lets the assistant fetch a ranked, structured, formatted context in one tool call instead of chaining recall + filter + rerank. For AutoGen assistants this cuts tool-call overhead significantly — one cal_query can replace four recall calls. See CAL Queries for the full grammar.
@user_proxy.register_for_execution()
@assistant.register_for_llm(
    description=(
        "Run a CAL query against Areev and return the assembled context. "
        "Example: 'RECALL beliefs ABOUT john LIMIT 5 FORMAT markdown'"
    )
)
def cal_query(query: Annotated[str, "A CAL query source string"]) -> dict:
    return _a2a("memory_cal", query)
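In a conversation, the assistant picks cal_query on its own when a single structured fetch is enough; a usage sketch (the prompt wording is illustrative):

user_proxy.initiate_chat(
    assistant,
    message="Give me the top 5 beliefs about john, formatted as markdown, and summarize them.",
)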
How do I bootstrap a session?
Call session_bootstrap with a session ID to hydrate the assistant with the last State grain, active Goal grains, and recent Action grains from that session.
Long-running AutoGen workflows benefit from persistent session context. Rather than re-deriving “where we left off” from chat history (which gets truncated), session_bootstrap returns the durable grain-level picture of the session. Inject its result into the assistant’s system prompt or pass it as the first turn of the conversation.
import json

def bootstrap(session_id: str) -> dict:
    # session_bootstrap takes structured input; send it as a JSON text part.
    return _a2a("session_bootstrap", json.dumps({"session_id": session_id}))

state = bootstrap("sess_abc123")
user_proxy.initiate_chat(
    assistant,
    message=f"Resume this session. Prior context: {state}",
)
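If you prefer the system-prompt route mentioned above, construct the assistant after bootstrapping. A sketch (system_message is a standard AssistantAgent parameter; the prompt wording is illustrative):

assistant = AssistantAgent(
    "assistant",
    llm_config={...},  # same llm_config placeholder as above
    # Bake the bootstrapped grains into the system prompt instead of the first turn.
    system_message=f"You are resuming an ongoing session. Durable context from Areev:\n{state}",
)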
How do I add structured-input skills?
Some A2A skills (memory_forget, memory_supersede, memory_accumulate) require structured data. Pass a JSON string as the text part — Areev parses it on the server side.
Skills that need precise identifiers like grain hashes or user IDs declare data as their only input mode. The A2A wire format still accepts them as a text part containing JSON; Areev deserializes it. This keeps the AutoGen function signature simple (one string arg) while preserving the skill’s strict schema. The A2A reference documents the required fields per skill.
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Forget all memory for a user (GDPR crypto-erasure).")
def forget_user(user_id: Annotated[str, "User identifier to erase"]) -> dict:
    import json
    return _a2a("memory_forget", json.dumps({"user_id": user_id}))
See Crypto-Erasure for the erasure guarantee.
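The same JSON-as-text wrapping works for the other structured-input skills. A sketch for memory_supersede follows; the field names (old_hash, text) are assumptions for illustration, so check the A2A reference for the skill's actual schema:

import json

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Replace an existing memory grain with a corrected statement.")
def supersede(
    old_hash: Annotated[str, "Hash of the grain to supersede"],
    correction: Annotated[str, "The corrected statement"],
) -> dict:
    # Field names here are illustrative; see the A2A reference for memory_supersede's required fields.
    return _a2a("memory_supersede", json.dumps({"old_hash": old_hash, "text": correction}))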
Related
- A2A — full skill catalog, input modes, and agent card
- CAL Queries — syntax for the memory_cal tool
- Authentication — API key scopes and rotation
- LangGraph — same pattern for LangGraph
- CrewAI — same pattern for CrewAI