Guide to Building a ‘Golden Brain’: Integrating Letta AI Long-Term Memory into n8n

 

Hello everyone, Ticmiro here.

One of the biggest challenges when building virtual assistants or automated chatbots on platforms like n8n is the issue of “state.” n8n is incredibly powerful at stateless tasks: it receives a request, processes it, and finishes. But how do you build an assistant that can remember what you said in the previous turn, or even last week?

After a lot of experimentation and refinement, I have built a robust, stable architecture that turns n8n from a simple automation tool into a platform for creating AI agents with genuine memory: agents that develop their own personality and writing style over time and continuously refine their emotional understanding.

Today, I want to share the entire process and logic behind it, with the main character being Letta AI—our self-hosted, long-term memory solution.


 

The “Memory Loop” Architectural Model

 

For an AI to “remember,” we need to create a loop where every new piece of information is recorded, and old information can be retrieved when needed. My architecture includes three main components:

  • n8n (The Orchestrator): This acts as the central brain, coordinating the entire workflow. It receives user requests, decides what to do, and calls other services.
  • Letta AI on a VPS (The Long-Term Memory): This is the heart of the system. Every message from both the user and the AI is sent to Letta and stored securely in a PostgreSQL database with pgvector support, keeping it in sync with the RAG system we’ve built.
  • LLM (The Inference Engine): A service like OpenAI acts as the reasoning engine, taking the context (including memory retrieved from Letta) and generating an intelligent response.
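The loop formed by these three components can be sketched in plain JavaScript. This is purely illustrative (the in-memory `store` stands in for Letta’s database, `callLLM` is a stub, and all function names are my own):

```javascript
// Minimal in-memory sketch of the memory loop. The "store" stands in for
// Letta's PostgreSQL/pgvector storage; callLLM is a stub for the real model.
const store = new Map(); // sessionId -> array of { role, content }

function saveAndRecall(sessionId, userMessage) {
  // Record the new user message, then return the full updated history,
  // mirroring what Letta does in a single API call.
  const history = store.get(sessionId) || [];
  history.push({ role: "user", content: userMessage });
  store.set(sessionId, history);
  return history;
}

function callLLM(history) {
  // Stub: a real call would send the formatted history to an LLM service.
  return `echo: ${history[history.length - 1].content}`;
}

function handleTurn(sessionId, userMessage) {
  const history = saveAndRecall(sessionId, userMessage);
  const reply = callLLM(history);
  history.push({ role: "assistant", content: reply }); // save the AI response
  return reply;
}
```

Each turn writes the user message, recalls the history, generates a reply, and writes the reply back: exactly the Webhook -> Save & Recall -> Call LLM -> Save AI Response sequence of the workflow below.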

 

Guide to Integrating Letta AI into n8n

This architecture reduces the number of nodes and speeds up the response time of the virtual assistant.

The optimized workflow is as follows: Webhook -> Save & Recall Memory (1 node) -> Context Preparation -> Call LLM -> Save AI Response -> Send Response

 

Step 1: Setup and Preparation

Make sure you have a Letta AI infrastructure set up on a VPS and the following information:

  • Letta API URL: http://<YOUR_IP>:8283 (For self-hosting Letta AI with a single command, refer to this article).
  • Letta API Key: The secret key for authentication. You create it during the self-hosting process, or receive it from your provider if you use the Cloud version (which is paid, so self-hosting is recommended).

 

Step 2: Building the n8n Workflow Step-by-Step

Node 1: Webhook – “The Starting Point” (works with Zalo, Telegram, etc.)

 

  • Node Type: Webhook
  • Function: Receives requests from the user. Sample input data:

    JSON

    {
      "sessionId": "user-123",
      "chatInput": "Hello, what's your name?"
    }
    

     

 

Node 2: Record, Recall, and Filter Information (3 jobs in 1)

This is the most important node in the optimized workflow. It simultaneously saves the new user message to memory, retrieves the entire conversation history, and intelligently filters it (this is where Letta AI excels)—all in one operation.

  • Node Type: HTTP Request
  • Node Name (Suggested): Letta – Save User Msg & Get History
  • Method: POST
  • URL: {{ $env.LETTA_URL }}/v1/agents/{{ $json.body.sessionId }}/messages
  • Authentication: Bearer Token
  • Token: Use your Letta API Key.
  • Body (JSON):

    JSON

    {
      "messages": [
        {
          "content": "{{ $json.body.chatInput }}",
          "role": "user"
        }
      ]
    }
    

     

The output of this node will be a JSON object containing the complete, updated message history. We will use this output for the next step.
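Outside of n8n, the same call can be reproduced with `fetch`. Here is a sketch under the assumption that the endpoint and Bearer auth are exactly as configured above (the helper name is my own):

```javascript
// Build the request that n8n's HTTP Request node sends to Letta.
// baseUrl, apiKey, and agentId mirror LETTA_URL, the Letta API Key,
// and the sessionId used as the agent identifier.
function buildSaveMessageRequest(baseUrl, apiKey, agentId, role, content) {
  return {
    url: `${baseUrl}/v1/agents/${agentId}/messages`,
    options: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ messages: [{ role, content }] }),
    },
  };
}

// Usage (assumed environment variables):
// const { url, options } = buildSaveMessageRequest(
//   process.env.LETTA_URL, process.env.LETTA_API_KEY,
//   "user-123", "user", "Hello, what's your name?");
// const history = await fetch(url, options).then(r => r.json());
```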

 

Node 3: Prepare Context for the AI Agent

 

  • Node Type: Code

  • Node Name (Suggested): Format History for LLM

  • Input: Connect the input from the Letta - Save User Msg & Get History node.

  • Purpose: Convert the JSON message array from the previous node into a clean text string.

  • Code (JavaScript):

    JavaScript

    // Read the output of the previous node directly.
    // Note: inside a Code node you use expressions directly, without {{ }} templating.
    const messages = $('Letta - Save User Msg & Get History').item.json.messages || [];
    let formattedHistory = "This is the conversation history:\n";
    // Limit the history to prevent the prompt from getting too long
    const historyLimit = 10;
    const recentMessages = messages.slice(-historyLimit);
    for (const msg of recentMessages) {
      formattedHistory += `${msg.role}: ${msg.content}\n`;
    }
    // A Code node must return an array of items, each with a json key
    return [{ json: { formattedHistory } }];
    

 

Node 4: Call the AI Agent to Respond (n8n)

 

  • Node Type: AI Agent

  • Input: Connect the input from the Format History for LLM node.

  • Prompt:

    {{ $json.formattedHistory }}
    
    Based on the conversation history above, respond to the user's last question naturally.
    User's question: "{{ $('Webhook').item.json.body.chatInput }}"
    
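The prompt the AI Agent receives is just one string. Assembled outside n8n it would look like this (the helper name is my own):

```javascript
// Combine the formatted history and the latest user question into a single
// prompt string, mirroring the AI Agent node's prompt template above.
function buildPrompt(formattedHistory, chatInput) {
  return [
    formattedHistory,
    "Based on the conversation history above, respond to the user's last question naturally.",
    `User's question: "${chatInput}"`,
  ].join("\n");
}
```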

 

Node 5: Save the AI Agent’s Response to Letta and Send the Final Response to the User

 

  • Node Type: HTTP Request

  • Node Name (Suggested): Letta – Save AI Msg

  • Method: POST

  • URL: {{ $env.LETTA_URL }}/v1/agents/{{ $('Webhook').item.json.body.sessionId }}/messages

  • Authentication: Bearer Token

  • Token: Use your Letta API Key.

  • Body (JSON):

    JSON

    {
      "messages": [
        {
          "content": "{{ $('AI Agent').item.json.output }}",
          "role": "assistant"
        }
      ]
    }
    
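Which field holds the AI’s reply depends on how Node 4 was built: n8n’s AI Agent node typically returns it as `output`, while a raw HTTP call to OpenAI returns `choices[0].message.content`. A small defensive extractor (my own sketch) covers both shapes; the resulting string can then be sent back to the user with a Respond to Webhook node:

```javascript
// Pull the reply text out of the previous node's JSON, whichever shape it has.
// AI Agent node -> { output: "..." }; raw OpenAI response -> { choices: [...] }.
function extractReply(json) {
  if (typeof json.output === "string") return json.output;
  if (json.choices && json.choices[0]) return json.choices[0].message.content;
  throw new Error("No AI reply found in node output");
}
```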


 

The Result: An Assistant That Never Forgets

 

By combining these nodes, we’ve created a powerful system. Now, you can have a conversation like this:

You: Hi, I’m Ticmiro.

AI: Hello Ticmiro, it’s a pleasure to meet you! How can I help you today?

(A few minutes later)

You: Do you know my name?

AI: Of course, your name is Ticmiro. We just spoke a little while ago.

After talking about many topics, you might return a week later and ask:

You: What did we talk about last weekend?

AI: Oh! We talked about many topics last weekend, especially about a thoughtful girl you were texting on Tinder… Would you like me to suggest ways to flirt with her?

You: Maybe.

AI: Sure, I have a few ideas based on what you told me about her…

This is just a simple example. The potential is limitless: from building personalized customer service chatbots and long-term task management assistants to creating AI agents that can self-learn from previous interactions.

This journey has shown me that by separating “memory” from “reasoning” and connecting them intelligently, we can overcome the inherent limitations of AI and create truly groundbreaking applications.