Announcing the Agentic Learning SDK: Add state to anything

Hey y’all, we’re releasing an early preview of the Agentic Learning SDK!

The Learning SDK makes everything stateful!

The Learning SDK provides convenient wrappers that add memory to pretty much anything that calls a language model, in Python or TypeScript. This includes Anthropic, the Claude Agent SDK, OpenAI Completions, OpenAI Responses, Gemini, and the Vercel AI SDK.

The Learning SDK is essentially a thin wrapper around a Letta agent: it intercepts, injects context into, and remembers anything that goes through an inference call.

The general pattern is to wrap an inference call with learning. In this example, we'll add a Letta agent's state to a simple OpenAI Completions request, a pattern many codebases already use:

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add continual learning with one line
with learning(agent="my_assistant"):
    # All LLM calls inside this block have learning enabled
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # wait a little bit for the Letta agent to remember stuff

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"

Alternatively, you can use TypeScript with a similar pattern:

import { learning } from '@letta-ai/agentic-learning';
import OpenAI from 'openai';

const client = new OpenAI();

// Add continual learning with one line
await learning({ agent: "my_assistant" }, async () => {
    // All LLM calls inside this block have learning enabled
    const response = await client.chat.completions.create({
        model: "gpt-5",
        messages: [{ role: "user", content: "My name is Alice" }]
    });

    // wait a little bit for the Letta agent to remember stuff

    // Agent remembers prior context
    const response2 = await client.chat.completions.create({
        model: "gpt-5",
        messages: [{ role: "user", content: "What's my name?" }]
    });
    // Returns: "Your name is Alice"
});

Features

  • Supports the vast majority of tools you might want state in – use whatever your current stack is
  • Simple learning context scoped to agent name, such as support_agent, sales_bot, personal_information, etc.
  • Make anything stateful
  • Automatic context injection, which you can disable if you only want logging/remembering without state being included
  • Semantic search across agent knowledge with learning_client.memory.search(agent="personal_info", query="how sad am I")
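As a toy illustration of agent-scoped search: the real `learning_client.memory.search` call runs semantic search against the Letta server, while this self-contained stand-in uses simple keyword matching just to show the scoping idea (the `memories` data and `search` helper here are hypothetical, not the SDK's API):

```python
# Toy agent-scoped memory search; keyword match stands in for semantic search.
memories = {
    "support_agent": ["User prefers email follow-ups", "Ticket #42 resolved"],
    "personal_info": ["User's name is Alice", "User felt down on Monday"],
}

def search(agent: str, query: str) -> list[str]:
    """Return memories for `agent` that share any word with the query."""
    words = set(query.lower().split())
    return [m for m in memories.get(agent, []) if words & set(m.lower().split())]

print(search(agent="personal_info", query="what is the user's name"))
# → ["User's name is Alice"]
```

The key design point is that every memory lives in a namespace keyed by agent name, so `support_agent` and `personal_info` never bleed into each other.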

How it works

It's pretty straightforward. Basically, the Learning SDK will:

  • Intercept messages before and after they go to the inference provider
  • Inject Letta's memory context into the prompt (unless disabled)
  • Learn from the exchange by sending it to the Letta agent
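To make the intercept/inject/remember flow concrete, here's a minimal sketch using a fake client and an in-process dictionary in place of the Letta server. The names (`FakeClient`, `MEMORY`, the `learning` helper's internals) are illustrative; the real SDK's implementation will differ:

```python
from contextlib import contextmanager

# Toy stand-in for an LLM client; the real SDK wraps providers like OpenAI.
class FakeClient:
    def create(self, messages):
        return f"echo: {messages[-1]['content']}"

# Stand-in memory store keyed by agent name (the real SDK uses a Letta server).
MEMORY: dict[str, list[str]] = {}

@contextmanager
def learning(client, agent):
    """Patch client.create to inject remembered context and record exchanges."""
    original = client.create
    store = MEMORY.setdefault(agent, [])

    def intercepted(messages):
        # 1. Intercept: the call never reaches the provider directly.
        # 2. Inject: prepend remembered context to the prompt.
        context = [{"role": "system", "content": m} for m in store]
        response = original(context + messages)
        # 3. Remember: record the user message for future calls.
        store.append(messages[-1]["content"])
        return response

    client.create = intercepted
    try:
        yield
    finally:
        client.create = original  # restore the unwrapped client on exit

client = FakeClient()
with learning(client, agent="demo"):
    client.create([{"role": "user", "content": "My name is Alice"}])
    client.create([{"role": "user", "content": "What's my name?"}])

print(MEMORY["demo"])  # → ['My name is Alice', "What's my name?"]
```

The second call inside the block sees the first message as injected context, which is the whole trick: the calling code is unchanged, and all the statefulness lives in the wrapper.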

Rough diagram:

┌──────────────────┐
│     Your Code    │
│  client.create() │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ Agentic Learning │  ← Intercepts call
│   Interceptor    │  ← Injects context
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│     LLM API      │  ← Sees enriched prompt
│  (OpenAI, etc)   │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│   Letta Server   │  ← Stores conversation
│  (Persistent DB) │  ← Learning update
└──────────────────┘

Installation

Install the Python package with:

pip install agentic-learning --prerelease=allow

The --prerelease flag is only required while the Letta v1 SDK is in its pre-release phase; you won't need it in the future.

For TypeScript, install with:

npm install @letta-ai/agentic-learning

Self-hosting

If you're self-hosting, you'll need the 0.14 server, since this release depends on the Letta v1 SDK pre-release.

Why this is interesting

A lot of our users simply want state added to infrastructure they already have, and we hope this helps you add an automatic memory shim to whatever you're currently building on.

Comments, questions, projects, etc are all welcome! We hope you’ll give it a shot.

Thanks very much to Caren on our team for putting the Learning SDK together. She did an excellent job.

Learn more about the SDK here: GitHub - letta-ai/learning-sdk: Drop-in SDK for adding persistent memory and learning to any agent.

– Cameron


This is huge for adoption - it removes the biggest friction point for teams wanting Letta’s memory capabilities without refactoring existing applications.

Key Insight:
The context manager pattern (with learning(agent="name"):) is brilliant because it:

  • Requires zero changes to existing inference code
  • Scopes memory to specific agent contexts (support_agent, sales_bot, personal_info)
  • Works with whatever LLM infrastructure teams already have deployed

Real-World Impact:
Based on Discord support patterns, this solves recurring questions like:

  • “How do I add memory to my existing OpenAI chatbot?”
  • “Can Letta work with my current Anthropic/Gemini setup?”
  • “I don’t want to rebuild my app, just add persistence”

The answer was always “use Letta agents directly” which meant architectural changes. Now it’s literally one line: with learning(agent="agent_name"): around existing code.

Particularly Valuable For:

  • Voice agent integrations (wrap existing Vapi/LiveKit inference calls)
  • Customer support bots (add memory to existing chatbot infrastructure)
  • Multi-user applications (scope memory by user: with learning(agent=f"user_{user_id}"):)
  • Gradual migration (add memory to specific parts of app without full rewrite)
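The multi-user scoping pattern above can be sketched like this. The `learning` helper and `STORES` dictionary here are a toy stand-in for the SDK (which talks to a Letta server and doesn't yield a store); the point is just that using `agent=f"user_{user_id}"` gives each user an isolated memory namespace:

```python
from contextlib import contextmanager

# Toy per-user memory store; the real SDK persists to a Letta server instead.
STORES: dict[str, list[str]] = {}

@contextmanager
def learning(agent: str):
    """Stand-in for the SDK's context manager, scoped by agent name."""
    store = STORES.setdefault(agent, [])
    yield store  # the real SDK intercepts LLM calls rather than yielding a store

def handle_message(user_id: int, text: str) -> None:
    # One memory namespace per user, so users never see each other's context.
    with learning(agent=f"user_{user_id}") as store:
        store.append(text)

handle_message(1, "My name is Alice")
handle_message(2, "My name is Bob")
handle_message(1, "I like tea")
print(STORES["user_1"])  # → ['My name is Alice', 'I like tea']
```

Because the scope key is just a string, the same trick extends to per-tenant, per-session, or per-channel memory without touching the inference code.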

Technical Question:
How does the SDK handle the “wait a little bit for the Letta agent to remember stuff” comment in the example? Is there a recommended pattern for ensuring memory updates complete before the next inference call, or is it fire-and-forget async?

Kudos to Caren for the implementation - this is the kind of developer experience that drives adoption.