Tool for writing JSON to a memory block

Hey @ezra. We’ve had a few requests for a tool that will allow an agent to write structured content to memory blocks.

For example, consider the tool remember_food_order, which might inject JSON like the following into a food_order memory block:

{
    "order_date": "2025-06-05",
    "item": "candy"
}

The goal is for this block to always contain structured content. My thinking is that we could have a tool that accepts typed arguments, structures the text, and then sets the block contents directly in code.

Could you take a look at how the memory_rethink tool is implemented? I’m curious if we could figure out a way to adapt it to create a tool to write structured content.

    async def memory_rethink(self, agent_state: AgentState, actor: User, label: str, new_memory: str) -> str:
        if agent_state.memory.get_block(label).read_only:
            raise ValueError(f"{READ_ONLY_BLOCK_EDIT_ERROR}")

        if bool(MEMORY_TOOLS_LINE_NUMBER_PREFIX_REGEX.search(new_memory)):
            raise ValueError(
                "new_memory contains a line number prefix, which is not allowed. Do not "
                "include line numbers when calling memory tools (line numbers are for "
                "display purposes only)."
            )
        if CORE_MEMORY_LINE_NUMBER_WARNING in new_memory:
            raise ValueError(
                "new_memory contains a line number warning, which is not allowed. Do not "
                "include line number information when calling memory tools (line numbers "
                "are for display purposes only)."
            )

        try:
            agent_state.memory.get_block(label)
        except KeyError:
            # Block doesn't exist, create it
            from letta.schemas.block import Block

            new_block = Block(label=label, value=new_memory)
            agent_state.memory.set_block(new_block)

        agent_state.memory.update_block_value(label=label, value=new_memory)

        await self.agent_manager.update_memory_if_changed_async(agent_id=agent_state.id, new_memory=agent_state.memory, actor=actor)

        # Prepare the success message
        success_msg = f"The core memory block with label `{label}` has been edited. "
        # success_msg += self._make_output(
        #     snippet, f"a snippet of {path}", start_line + 1
        # )
        # success_msg += f"A snippet of core memory block `{label}`:\n{snippet}\n"
        success_msg += (
            "Review the changes and make sure they are as expected (correct indentation, "
            "no duplicate lines, etc). Edit the memory block again if necessary."
        )

        # return None
        return success_msg

Great use case. The memory_rethink pattern is exactly the right starting point. Here’s how to adapt it for structured content:

Core Pattern

The key is combining typed tool parameters (schema enforcement at call time) with direct memory block writes (bypassing the memory tool’s text-based interface).

Example Implementation

def remember_food_order(
    order_date: str,
    item: str,
    quantity: int = 1,
    notes: str = ""
) -> str:
    """
    Store a structured food order in the food_order memory block.
    
    Args:
        order_date: Order date in YYYY-MM-DD format
        item: Food item ordered
        quantity: Number of items (default: 1)
        notes: Optional notes about the order
    """
    import json
    from datetime import datetime
    
    # Validation
    try:
        datetime.strptime(order_date, "%Y-%m-%d")
    except ValueError:
        return "Error: order_date must be in YYYY-MM-DD format"
    
    # Structure the data
    order_data = {
        "order_date": order_date,
        "item": item,
        "quantity": quantity,
        "notes": notes,
        "recorded_at": datetime.now().isoformat()
    }
    
    # Convert to formatted JSON string
    json_string = json.dumps(order_data, indent=2)
    
    # Return the JSON string - this will be the tool result
    # Agent or system can then write this to the memory block
    return json_string

Option 1: Tool Returns JSON, Agent Writes It

The tool returns structured JSON, then:

  • Agent sees the formatted result
  • Agent calls memory_rethink(label="food_order", new_memory=<tool_result>)
  • Memory block gets the structured content

Pros: Agent has visibility into what’s being written
Cons: Two-step process, agent might modify the JSON
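The two-step flow can be sketched end to end like this. remember_food_order is a trimmed version of the example above, and memory_rethink_stub is a stand-in for the built-in memory tool so the flow runs standalone:

```python
import json
from datetime import datetime

def remember_food_order(order_date: str, item: str) -> str:
    datetime.strptime(order_date, "%Y-%m-%d")  # raises on a bad date
    return json.dumps({"order_date": order_date, "item": item}, indent=2)

# Stand-in for the agent's memory; the real tool writes to a memory block.
memory_blocks: dict[str, str] = {}

def memory_rethink_stub(label: str, new_memory: str) -> None:
    memory_blocks[label] = new_memory

# Step 1: the tool call returns structured JSON
result = remember_food_order("2025-06-05", "candy")
# Step 2: the agent writes that result to the block
memory_rethink_stub("food_order", result)
```

The risk noted above is visible here: nothing stops the agent from altering `result` between the two calls.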

Option 2: Tool Writes Directly (Server-Side Custom Tool)

For guaranteed schema enforcement, implement as a server-side custom tool that accesses agent_state:

async def remember_food_order(
    self, 
    agent_state: AgentState,
    actor: User,
    order_date: str,
    item: str,
    quantity: int = 1
) -> str:
    """Store structured food order directly to memory block"""
    import json
    from datetime import datetime
    
    # Validation
    try:
        datetime.strptime(order_date, "%Y-%m-%d")
    except ValueError:
        raise ValueError("order_date must be in YYYY-MM-DD format")
    
    # Structure data
    order_data = {
        "order_date": order_date,
        "item": item,
        "quantity": quantity
    }
    
    json_string = json.dumps(order_data, indent=2)
    label = "food_order"
    
    # Write directly to memory block (same pattern as memory_rethink)
    try:
        agent_state.memory.get_block(label)
    except KeyError:
        from letta.schemas.block import Block
        new_block = Block(label=label, value=json_string)
        agent_state.memory.set_block(new_block)
    
    agent_state.memory.update_block_value(label=label, value=json_string)
    
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id, 
        new_memory=agent_state.memory, 
        actor=actor
    )
    
    return f"Food order stored: {item} (quantity: {quantity})"

Key Differences from memory_rethink

  1. No line number validation - you’re writing pure JSON, not editing text
  2. Typed parameters - schema enforcement happens at tool call time
  3. Validation logic - you control what’s valid (date formats, enums, ranges)
  4. JSON formatting - you control serialization (indentation, ordering)
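Point 3 is worth spelling out: the tool owns validation of formats, enums, and ranges. A minimal sketch (ALLOWED_ITEMS and the bounds are made-up examples, not part of Letta):

```python
from datetime import datetime

ALLOWED_ITEMS = {"candy", "pizza", "salad"}

def validate_order(order_date: str, item: str, quantity: int) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    try:
        datetime.strptime(order_date, "%Y-%m-%d")  # date format check
    except ValueError:
        errors.append("order_date must be in YYYY-MM-DD format")
    if item not in ALLOWED_ITEMS:  # enum check
        errors.append(f"item must be one of {sorted(ALLOWED_ITEMS)}")
    if not 1 <= quantity <= 100:  # range check
        errors.append("quantity must be between 1 and 100")
    return errors
```

Returning all errors at once, rather than failing on the first, gives the agent a single message it can act on.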

For Multiple Orders (List Pattern)

If you need to append orders rather than replace:

# Read existing block
try:
    existing_value = agent_state.memory.get_block("food_orders").value
    orders_list = json.loads(existing_value)
except (KeyError, json.JSONDecodeError):
    orders_list = []

# Append new order
orders_list.append(order_data)

# Write back
json_string = json.dumps(orders_list, indent=2)
agent_state.memory.update_block_value(label="food_orders", value=json_string)

Recommendation

For your use case (frictionless data schema → UI tables), I’d suggest:

  1. Server-side custom tool (Option 2) for guaranteed schema enforcement
  2. Typed parameters matching your data schema
  3. Validation inside the tool before writing
  4. Error messages that guide the agent to correct usage

This gives you:

  • Schema enforcement at tool call time
  • Server-side validation before persistence
  • Clean JSON in memory blocks for UI parsing
  • No risk of agents manually corrupting the structure

Would this pattern work for your use case? Happy to elaborate on any part of the implementation.

I can help design this! The key insight is to use typed tool parameters to enforce the schema, then write the validated JSON to the memory block.

Approach: Schema-Enforced Memory Tool

Instead of a generic memory_rethink that accepts arbitrary text, create domain-specific tools with typed parameters. The LLM must structure the data correctly when calling the tool - schema validation happens at tool call time.

Example Implementation: remember_food_order

import json
from typing import Optional

async def remember_food_order(
    self,
    agent_state: AgentState,
    actor: User,
    order_date: str,  # ISO format: "2025-06-05"
    item: str,
    quantity: int = 1,
    special_instructions: Optional[str] = None
) -> str:
    """
    Store a food order in the agent's memory with structured data.
    
    Args:
        order_date: Date of the order in ISO format (YYYY-MM-DD)
        item: Name of the food item ordered
        quantity: Number of items ordered (default: 1)
        special_instructions: Any special preparation instructions
        
    Returns:
        Success message confirming the order was stored
    """
    # Validate and structure the data
    order_data = {
        "order_date": order_date,
        "item": item,
        "quantity": quantity
    }
    
    if special_instructions:
        order_data["special_instructions"] = special_instructions
    
    # Convert to formatted JSON string
    json_content = json.dumps(order_data, indent=2)
    
    # Write to memory block (similar to memory_rethink)
    block_label = "food_order"
    
    try:
        agent_state.memory.get_block(block_label)
    except KeyError:
        # Block doesn't exist, create it
        from letta.schemas.block import Block
        new_block = Block(
            label=block_label, 
            value=json_content,
            description="Current food order in JSON format"
        )
        agent_state.memory.set_block(new_block)
    else:
        # Update existing block
        agent_state.memory.update_block_value(
            label=block_label, 
            value=json_content
        )
    
    # Persist to database
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id, 
        new_memory=agent_state.memory, 
        actor=actor
    )
    
    return f"Food order stored: {item} x{quantity} on {order_date}"

Why This Works

Schema enforcement via tool signature:

  • The tool parameters define the schema
  • LLM must provide correctly typed data when calling the tool
  • Invalid calls fail before reaching memory
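A rough sketch of the mechanism: a framework can introspect a tool's type hints to build the parameter schema the LLM sees. The mapping and helper below are illustrative, not Letta internals:

```python
from typing import get_type_hints

# Illustrative mapping from Python annotations to JSON schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def signature_schema(func) -> dict:
    """Derive a {param: json_type} schema from a function's type hints."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    return {name: PY_TO_JSON.get(tp, "string") for name, tp in hints.items()}

def remember_food_order(order_date: str, item: str, quantity: int = 1) -> str:
    ...
```

Because the schema is derived from the signature, a call with a wrong type is rejected before the tool body ever runs.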

Automatic JSON serialization:

  • Tool handles JSON formatting internally
  • Agent doesn’t need to construct JSON strings manually
  • Reduces agent errors from malformed JSON

Memory block isolation:

  • Each structured data type gets its own block
  • food_order block always contains valid JSON
  • UI can parse and display as tables reliably

Generic Version: store_structured_data

For multiple data types, you could create a more flexible tool:

def store_structured_data(
    agent_state: AgentState,
    actor: User,
    block_label: str,
    data: str  # JSON string
) -> str:
    """
    Store structured JSON data in a memory block.
    
    Args:
        block_label: Name of the memory block to write to
        data: Valid JSON string containing the structured data
    """
    # Validate JSON
    try:
        parsed_data = json.loads(data)
    except json.JSONDecodeError as e:
        return f"Error: Invalid JSON provided - {str(e)}"
    
    # Reformat for consistency
    json_content = json.dumps(parsed_data, indent=2)
    
    # Write to block (same pattern as above)
    ...

But I’d recommend domain-specific tools like remember_food_order for better schema enforcement. The typed parameters make it impossible for the agent to submit invalid data.

For @iam_robi’s Use Case

For frictionless data schema with table display:

async def store_dataset_row(
    self,
    agent_state: AgentState,
    actor: User,
    dataset_name: str,
    # Define your schema fields here
    field1: str,
    field2: int,
    field3: Optional[str] = None
) -> str:
    """Store a row in the dataset memory block."""
    
    # Get existing dataset or create new one
    block_label = f"dataset_{dataset_name}"
    
    try:
        existing_block = agent_state.memory.get_block(block_label)
        dataset = json.loads(existing_block.value)
    except (KeyError, json.JSONDecodeError):
        dataset = {"schema": ["field1", "field2", "field3"], "rows": []}
    
    # Append new row
    dataset["rows"].append({
        "field1": field1,
        "field2": field2,
        "field3": field3
    })
    
    # Write back to memory
    json_content = json.dumps(dataset, indent=2)
    agent_state.memory.update_block_value(
        label=block_label,
        value=json_content
    )
    
    # Persist
    await self.agent_manager.update_memory_if_changed_async(...)
    
    return f"Row added to {dataset_name}"

Your UI can then parse the JSON and render as tables.
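The UI side can be sketched like this, assuming the {"schema": [...], "rows": [...]} shape used above. A real UI would render proper table components; this just shows the parsing is trivial:

```python
import json

def render_dataset(block_value: str) -> str:
    """Turn the stored dataset JSON into a plain-text table."""
    dataset = json.loads(block_value)
    header = " | ".join(dataset["schema"])
    lines = [header, "-" * len(header)]
    for row in dataset["rows"]:
        lines.append(" | ".join(str(row.get(col, "")) for col in dataset["schema"]))
    return "\n".join(lines)
```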

Key Differences from memory_rethink

Feature              memory_rethink          Structured Tool
-------------------  ----------------------  --------------------
Input validation     None                    Typed parameters
Schema enforcement   No                      Yes
JSON handling        Agent responsibility    Tool responsibility
Error prevention     Relies on prompt        Enforced by types
UI parsing           Fragile                 Reliable

Implementation Notes

Tool creation:

  • Create as custom tool via SDK or ADE
  • Each data type = separate tool for best type safety
  • Or one generic tool with JSON string parameter (less safe)

Memory block management:

  • Use descriptive block labels (food_order, dataset_customer_info)
  • Set block descriptions to guide agent usage
  • Consider read-only blocks for reference data

Validation options:

  1. Basic: Type hints (int, str, bool)
  2. Advanced: Pydantic models for complex schemas
  3. Hybrid: Tool validates + returns error messages
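Options 2 and 3 combine naturally: a Pydantic model is the schema, and the tool validates before writing, returning a readable error for the agent. A sketch assuming Pydantic v2, with field names mirroring the food-order example:

```python
import json
from datetime import date
from typing import Optional
from pydantic import BaseModel, Field, ValidationError

class FoodOrder(BaseModel):
    order_date: date
    item: str = Field(min_length=1)
    quantity: int = Field(default=1, ge=1)
    special_instructions: Optional[str] = None

def validate_order_payload(payload: dict) -> str:
    """Return formatted JSON for the block, or an error message for the agent."""
    try:
        order = FoodOrder(**payload)
    except ValidationError as e:
        first = e.errors()[0]
        return f"Error in '{first['loc'][0]}': {first['msg']}"
    return order.model_dump_json(indent=2)
```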

Would this approach work for your use case? I can provide more specific implementation details if needed.

The server-side tool is the correct one. Can you explore a few other custom tools that might be useful, following that pattern?

Good call - the server-side pattern is more robust. Here are several useful custom tools following that approach:

1. Task Management: store_task

import json
from datetime import datetime
from enum import Enum
from typing import Optional

class Priority(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    URGENT = "urgent"

class Status(str, Enum):
    TODO = "todo"
    IN_PROGRESS = "in_progress"
    BLOCKED = "blocked"
    DONE = "done"

async def store_task(
    self,
    agent_state: AgentState,
    actor: User,
    task_id: str,
    title: str,
    priority: Priority,
    status: Status = Status.TODO,
    due_date: Optional[str] = None,
    assigned_to: Optional[str] = None,
    notes: Optional[str] = None
) -> str:
    """
    Store or update a task in the agent's task list.
    
    Args:
        task_id: Unique identifier for the task
        title: Brief description of the task
        priority: Task priority (low, medium, high, urgent)
        status: Current status (todo, in_progress, blocked, done)
        due_date: Optional due date in ISO format (YYYY-MM-DD)
        assigned_to: Optional person assigned to the task
        notes: Optional additional context
    """
    block_label = "tasks"
    
    # Get existing tasks or create new structure
    try:
        existing_block = agent_state.memory.get_block(block_label)
        tasks = json.loads(existing_block.value)
    except (KeyError, json.JSONDecodeError):
        tasks = {"tasks": {}}
    
    # Update or add task
    tasks["tasks"][task_id] = {
        "title": title,
        "priority": priority.value,
        "status": status.value,
        "due_date": due_date,
        "assigned_to": assigned_to,
        "notes": notes,
        "updated_at": datetime.now().isoformat()
    }
    
    # Write to memory
    json_content = json.dumps(tasks, indent=2)
    
    try:
        agent_state.memory.get_block(block_label)
        agent_state.memory.update_block_value(label=block_label, value=json_content)
    except KeyError:
        from letta.schemas.block import Block
        new_block = Block(
            label=block_label,
            value=json_content,
            description="Task list with priorities, statuses, and assignments"
        )
        agent_state.memory.set_block(new_block)
    
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id,
        new_memory=agent_state.memory,
        actor=actor
    )
    
    return f"Task '{title}' ({priority.value}) stored with status: {status.value}"

Use case: Project management agents, personal assistants, team coordination


2. Contact Tracking: store_contact_info

async def store_contact_info(
    self,
    agent_state: AgentState,
    actor: User,
    name: str,
    relationship: str,
    email: Optional[str] = None,
    phone: Optional[str] = None,
    company: Optional[str] = None,
    last_interaction: Optional[str] = None,
    notes: Optional[str] = None,
    tags: Optional[str] = None  # comma-separated
) -> str:
    """
    Store or update contact information and relationship context.
    
    Args:
        name: Full name of the contact
        relationship: Nature of relationship (colleague, client, friend, etc.)
        email: Email address
        phone: Phone number
        company: Company/organization affiliation
        last_interaction: Date of last interaction (ISO format)
        notes: Additional context about the person
        tags: Comma-separated tags (e.g., "technical,decision-maker")
    """
    block_label = "contacts"
    
    try:
        existing_block = agent_state.memory.get_block(block_label)
        contacts = json.loads(existing_block.value)
    except (KeyError, json.JSONDecodeError):
        contacts = {"contacts": {}}
    
    contact_data = {
        "relationship": relationship,
        "email": email,
        "phone": phone,
        "company": company,
        "last_interaction": last_interaction,
        "notes": notes,
        "tags": [tag.strip() for tag in tags.split(",")] if tags else [],
        "updated_at": datetime.now().isoformat()
    }
    
    contacts["contacts"][name] = contact_data
    
    json_content = json.dumps(contacts, indent=2)
    
    try:
        agent_state.memory.get_block(block_label)
        agent_state.memory.update_block_value(label=block_label, value=json_content)
    except KeyError:
        from letta.schemas.block import Block
        new_block = Block(
            label=block_label,
            value=json_content,
            description="Contact information and relationship context"
        )
        agent_state.memory.set_block(new_block)
    
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id,
        new_memory=agent_state.memory,
        actor=actor
    )
    
    return f"Contact info stored for {name} ({relationship})"

Use case: CRM agents, relationship managers, networking assistants


3. Meeting Notes: store_meeting_summary

async def store_meeting_summary(
    self,
    agent_state: AgentState,
    actor: User,
    meeting_date: str,  # ISO format
    title: str,
    attendees: str,  # comma-separated names
    key_decisions: str,
    action_items: str,  # JSON array string: [{"owner": "name", "task": "description"}]
    next_steps: Optional[str] = None
) -> str:
    """
    Store meeting summary with decisions and action items.
    
    Args:
        meeting_date: Date of meeting (ISO format YYYY-MM-DD)
        title: Meeting title or topic
        attendees: Comma-separated list of attendee names
        key_decisions: Major decisions made during the meeting
        action_items: JSON array of action items with owners
        next_steps: Optional follow-up actions or next meeting date
    """
    block_label = "meetings"
    
    try:
        existing_block = agent_state.memory.get_block(block_label)
        meetings = json.loads(existing_block.value)
    except (KeyError, json.JSONDecodeError):
        meetings = {"meetings": []}
    
    # Parse action items
    try:
        parsed_action_items = json.loads(action_items)
    except json.JSONDecodeError:
        parsed_action_items = []
    
    meeting_data = {
        "date": meeting_date,
        "title": title,
        "attendees": [name.strip() for name in attendees.split(",")],
        "key_decisions": key_decisions,
        "action_items": parsed_action_items,
        "next_steps": next_steps,
        "recorded_at": datetime.now().isoformat()
    }
    
    meetings["meetings"].append(meeting_data)
    
    json_content = json.dumps(meetings, indent=2)
    
    try:
        agent_state.memory.get_block(block_label)
        agent_state.memory.update_block_value(label=block_label, value=json_content)
    except KeyError:
        from letta.schemas.block import Block
        new_block = Block(
            label=block_label,
            value=json_content,
            description="Meeting summaries with decisions and action items"
        )
        agent_state.memory.set_block(new_block)
    
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id,
        new_memory=agent_state.memory,
        actor=actor
    )
    
    return f"Meeting summary stored: {title} on {meeting_date}"

Use case: Executive assistants, team coordination, project tracking


4. Preference Management: store_user_preference

async def store_user_preference(
    self,
    agent_state: AgentState,
    actor: User,
    category: str,
    preference_key: str,
    preference_value: str,
    notes: Optional[str] = None
) -> str:
    """
    Store user preferences in structured format.
    
    Args:
        category: Preference category (e.g., "communication", "workflow", "ui")
        preference_key: Specific preference identifier
        preference_value: The preference value
        notes: Optional context about why this preference exists
    """
    block_label = "preferences"
    
    try:
        existing_block = agent_state.memory.get_block(block_label)
        prefs = json.loads(existing_block.value)
    except (KeyError, json.JSONDecodeError):
        prefs = {"preferences": {}}
    
    if category not in prefs["preferences"]:
        prefs["preferences"][category] = {}
    
    prefs["preferences"][category][preference_key] = {
        "value": preference_value,
        "notes": notes,
        "updated_at": datetime.now().isoformat()
    }
    
    json_content = json.dumps(prefs, indent=2)
    
    try:
        agent_state.memory.get_block(block_label)
        agent_state.memory.update_block_value(label=block_label, value=json_content)
    except KeyError:
        from letta.schemas.block import Block
        new_block = Block(
            label=block_label,
            value=json_content,
            description="User preferences organized by category"
        )
        agent_state.memory.set_block(new_block)
    
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id,
        new_memory=agent_state.memory,
        actor=actor
    )
    
    return f"Preference stored: {category}.{preference_key} = {preference_value}"

Use case: Personalization agents, adaptive interfaces, user profiling


5. Time Series Data: store_metric

async def store_metric(
    self,
    agent_state: AgentState,
    actor: User,
    metric_name: str,
    value: float,
    unit: str,
    timestamp: Optional[str] = None,
    tags: Optional[str] = None  # comma-separated
) -> str:
    """
    Store time-series metric data.
    
    Args:
        metric_name: Name of the metric being tracked
        value: Numeric value of the metric
        unit: Unit of measurement (e.g., "USD", "hours", "count")
        timestamp: Optional ISO timestamp (defaults to now)
        tags: Optional comma-separated tags for filtering
    """
    block_label = f"metrics_{metric_name.replace(' ', '_').lower()}"
    
    if timestamp is None:
        timestamp = datetime.now().isoformat()
    
    try:
        existing_block = agent_state.memory.get_block(block_label)
        metrics = json.loads(existing_block.value)
    except (KeyError, json.JSONDecodeError):
        metrics = {
            "metric_name": metric_name,
            "unit": unit,
            "data_points": []
        }
    
    data_point = {
        "timestamp": timestamp,
        "value": value,
        "tags": [tag.strip() for tag in tags.split(",")] if tags else []
    }
    
    metrics["data_points"].append(data_point)
    
    json_content = json.dumps(metrics, indent=2)
    
    try:
        agent_state.memory.get_block(block_label)
        agent_state.memory.update_block_value(label=block_label, value=json_content)
    except KeyError:
        from letta.schemas.block import Block
        new_block = Block(
            label=block_label,
            value=json_content,
            description=f"Time series data for {metric_name} ({unit})"
        )
        agent_state.memory.set_block(new_block)
    
    await self.agent_manager.update_memory_if_changed_async(
        agent_id=agent_state.id,
        new_memory=agent_state.memory,
        actor=actor
    )
    
    return f"Metric recorded: {metric_name} = {value} {unit} at {timestamp}"

Use case: Analytics agents, performance tracking, habit monitoring


Common Patterns

All these tools follow the same structure:

  1. Typed parameters for schema enforcement
  2. Get or create memory block pattern
  3. Parse existing JSON if block exists
  4. Update data structure with new information
  5. Serialize to JSON with formatting
  6. Write to block using update or create
  7. Persist to database via agent_manager
  8. Return success message with key details
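Steps 2, 5, and 6 repeat verbatim in every tool above, so a small helper can factor them out. This sketch assumes `memory` exposes the same get_block / set_block / update_block_value methods seen in the memory_rethink source:

```python
import json

def upsert_json_block(memory, label: str, data: dict, description: str = "") -> str:
    """Serialize data and write it to the block, creating the block if needed."""
    json_content = json.dumps(data, indent=2)
    try:
        memory.get_block(label)
    except KeyError:
        from letta.schemas.block import Block  # only needed on first write
        memory.set_block(Block(label=label, value=json_content,
                               description=description))
    else:
        memory.update_block_value(label=label, value=json_content)
    return json_content
```

Each domain tool then reduces to: validate arguments, build the dict, call `upsert_json_block`, persist via agent_manager.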

Benefits

Reliability: Schema violations caught at tool call time, not during JSON parsing

Maintainability: Each tool is self-contained and testable

Extensibility: Easy to add new structured data types

UI Integration: Predictable JSON structure makes UI rendering straightforward

Agent Friendliness: Simple tool signatures are easier for LLMs to use correctly than freeform JSON construction

These patterns should cover most structured data storage needs. Want me to explore any specific domain in more detail?

What if this could be automated?
We could set a schema on the block:
```
client.blocks.create({
    label: 'store_metric',
    schema: ...
})
```
Then the memory tool could auto-discover the schema.

This is an interesting idea, but there are some challenges with automatic schema discovery:

Current Block API Structure

Blocks currently support these fields:

  • label: string identifier
  • value: string content
  • description: optional text description
  • tags: optional array of strings
  • template_name: optional template identifier

There’s no schema field in the blocks API currently.

Challenges with Auto-Schema Discovery

1. Schema enforcement happens at tool call time, not block write time

The memory tool doesn’t validate block content - it just writes strings. Schema validation would need to happen either:

  • In the tool that writes to the block (current pattern in my examples)
  • In a new block-level validation system (would require core Letta changes)

2. How would the agent discover the schema?

If you set schema: <json schema> on block creation, you’d need:

  • A way for tools to query block metadata (currently they only read value)
  • A mechanism to parse that schema and generate validation logic
  • A way to surface schema requirements to the agent

3. JSON Schema in context window

Adding JSON schemas to every block means:

  • More tokens consumed per block
  • Agents need to understand JSON Schema format
  • Risk of schema drift if block is modified outside the tool

Possible Implementation Approaches

Option A: Block Metadata (requires Letta core changes)

block = client.blocks.create(
    label="store_metric",
    value="{}",
    metadata={
        "schema": {
            "type": "object",
            "properties": {
                "metric_name": {"type": "string"},
                "value": {"type": "number"},
            },
        },
    },
)

Then memory tools could validate against block.metadata.schema before writing.
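What that validation hook could look like, covering only the "object with typed properties" subset from the example schema. A real implementation would use the jsonschema package rather than this hand-rolled check:

```python
import json

# Map JSON Schema type names to Python types (sketch; ignores edge cases
# like bool being a subclass of int).
TYPE_MAP = {"string": str, "number": (int, float), "integer": int,
            "boolean": bool, "array": list, "object": dict}

def validates_against(value: str, schema: dict) -> bool:
    """Check a candidate block value against a minimal JSON-schema subset."""
    try:
        data = json.loads(value)
    except json.JSONDecodeError:
        return False
    if schema.get("type") == "object" and not isinstance(data, dict):
        return False
    for prop, spec in schema.get("properties", {}).items():
        if prop in data and not isinstance(data[prop], TYPE_MAP[spec["type"]]):
            return False
    return all(req in data for req in schema.get("required", []))
```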

Option B: Schema in block description (works today)

block = client.blocks.create(
    label="store_metric",
    value="{}",
    description="""
    Stores metric data. Expected schema:
    {
      "metric_name": string,
      "value": number,
      "unit": string,
      "data_points": array
    }
    """,
)

Agents can see this in their context window, but there’s no automatic validation.

Option C: Dedicated structured tools per schema (current best practice)

Keep schema enforcement in custom tools (like my examples above). Benefits:

  • Type checking at tool definition
  • Clear error messages for agents
  • No core Letta changes needed
  • Works with current API

What Would Make Auto-Discovery Worth It?

If Letta added:

  1. Block-level schema validation (reject writes that don’t match schema)
  2. Schema introspection API (tools can query block schemas)
  3. Automatic tool generation from schemas (one schema → one tool)

Then yes, automation would be powerful. But that’s a significant architectural addition.
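Point 3 could look something like this. Purely speculative — Letta has no such generator today, and every name here is invented:

```python
import json

def make_writer(label: str, required: list[str]):
    """Generate a writer function that validates required fields before
    serializing them into a block value (one schema -> one tool)."""
    def writer(**fields) -> str:
        missing = [f for f in required if f not in fields]
        if missing:
            raise ValueError(f"{label}: missing required fields {missing}")
        return json.dumps(fields, indent=2)
    writer.__name__ = f"store_{label}"
    return writer

# One generated tool per schema:
store_metric = make_writer("metric", ["metric_name", "value"])
```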

For now, dedicated tools with typed parameters is the most reliable pattern. The schema lives in your code (version controlled), validation is explicit, and agents get clear signatures to work with.

Does this align with what you were imagining, or did you have a different automation approach in mind?