Hey y’all, we’re releasing an early preview of the Agentic Learning SDK!
The Learning SDK makes everything stateful!
The Learning SDK is designed to provide convenient wrappers that add memory to pretty much anything that calls a language model, in Python or TypeScript. This includes the Anthropic SDK, the Claude Agent SDK, OpenAI Completions, OpenAI Responses, Gemini, and the Vercel AI SDK.
The Learning SDK is essentially a thin wrapper around a Letta agent, and will intercept, inject, and remember anything that goes through an inference call.
The general pattern is to wrap an inference call with learning. In this example, we add a Letta agent’s state to a simple OpenAI completions request, the kind many codebases already have:
from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add continual learning with one line
with learning(agent="my_assistant"):
    # All LLM calls inside this block have learning enabled
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Wait a little bit for the Letta agent to remember stuff
    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"
Alternatively, you can use TypeScript with a similar pattern:
import { learning } from '@letta-ai/agentic-learning';
import OpenAI from 'openai';

const client = new OpenAI();

// Add continual learning with one line
await learning({ agent: "my_assistant" }, async () => {
  // All LLM calls inside this block have learning enabled
  const response = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "My name is Alice" }]
  });

  // Wait a little bit for the Letta agent to remember stuff
  // Agent remembers prior context
  const response2 = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "What's my name?" }]
  });
  // Returns: "Your name is Alice"
});
Features
- Supports the vast majority of tools you might want state in – use whatever your current stack is
- Simple learning contexts scoped to an agent name (support_agent, sales_bot, personal_information, etc.) – make anything stateful
- Automatic context injection, which you can disable if you only want logging/remembering without state being injected
- Semantic search across agent knowledge with learning_client.memory.search(agent="personal_info", query="how sad am I")
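To make the scoped-memory and search features above concrete, here's a toy illustration. The real SDK performs semantic search on the Letta server; this stand-in just scores keyword overlap, and every name in it is illustrative rather than the SDK's actual API:

```python
# Toy illustration of per-agent memory scopes and search.
# The real SDK searches semantically server-side; this stand-in
# just scores keyword overlap between the query and each memory.

memories = {
    "support_agent": ["User reported a login bug on 2024-01-03"],
    "personal_information": ["Alice prefers dark mode", "Alice lives in Berlin"],
}

def search(agent, query):
    # Score each memory for this agent by shared words with the query
    words = set(query.lower().split())
    scored = [(len(words & set(m.lower().split())), m)
              for m in memories.get(agent, [])]
    # Return matches, best score first, dropping non-matches
    return [m for score, m in sorted(scored, reverse=True) if score > 0]

print(search("personal_information", "which city does Alice live in"))
# -> ['Alice lives in Berlin', 'Alice prefers dark mode']
```

The point is the shape of the call: each agent name is its own isolated memory scope, and a query against one scope never sees another scope's data.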
How it works
It’s pretty straightforward. Basically, the learning SDK will:
- Intercept messages before and after they go to the inference provider
- Inject Letta’s memory context into the prompt (unless disabled)
- Learn from the exchange by sending it to the Letta agent
Rough diagram:
┌──────────────────┐
│    Your Code     │
│  client.create() │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ Agentic Learning │ ← Intercepts call
│   Interceptor    │ ← Injects context
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│     LLM API      │ ← Sees enriched prompt
│  (OpenAI, etc)   │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│   Letta Server   │ ← Stores conversation
│ (Persistent DB)  │ ← Learning update
└──────────────────┘
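The intercept → inject → learn flow above can be sketched as a toy wrapper. None of these names are the SDK's real internals (the actual SDK patches the provider client and talks to a Letta server); this is just a minimal sketch of the pattern:

```python
# Toy sketch of the intercept -> inject -> learn flow.
# All names here are illustrative, not the SDK's real internals.

memory_store = {}  # agent name -> list of remembered exchanges

def call_with_learning(agent, llm_call, messages):
    # 1. Intercept: look up what this agent already remembers
    context = memory_store.get(agent, [])

    # 2. Inject: prepend remembered context as a system message
    if context:
        messages = [{"role": "system",
                     "content": "Known context: " + "; ".join(context)}] + messages

    # 3. Forward the enriched prompt to the provider
    response = llm_call(messages)

    # 4. Learn: store the exchange so future calls can use it
    memory_store.setdefault(agent, []).append(
        messages[-1]["content"] + " -> " + response)
    return response

# Stand-in for a real provider call
def fake_llm(messages):
    return f"(reply to {len(messages)} messages)"

print(call_with_learning("my_assistant", fake_llm,
                         [{"role": "user", "content": "My name is Alice"}]))
# -> (reply to 1 messages)
print(call_with_learning("my_assistant", fake_llm,
                         [{"role": "user", "content": "What's my name?"}]))
# -> (reply to 2 messages)   <- a context message was injected this time
```

The second call sees two messages because the wrapper injected the remembered context; that's the same reason the earlier OpenAI example can answer "What's my name?".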
Installation
Install on Python with the following:
pip install agentic-learning --prerelease=allow
The pre-release flag is only required while the Letta v1 SDK is in its pre-release phase; you won’t need it in the future.
For Typescript, install with:
npm install @letta-ai/agentic-learning
Self-hosting
If you’re self-hosting, you’ll need Letta server version 0.14 or later, since this requires the Letta v1 SDK pre-release.
Why this is interesting
We have a lot of users who basically want state added to infrastructure they already have, and we hope this can help you add an automatic memory shim to whatever you’re currently building on.
Comments, questions, projects, etc are all welcome! We hope you’ll give it a shot.
Thanks very much to Caren on our team for putting the Learning SDK together. She did an excellent job.
Learn more about the SDK on GitHub at letta-ai/learning-sdk: a drop-in SDK for adding persistent memory and learning to any agent.
– Cameron