Lola NLP AI v2

Basic Lola Sample Node

Overview

The Lola NLP AI v2 node integrates with the Lola AI platform to process natural language messages using various AI models (such as GPT-3, GPT-3.5 Turbo, and GPT-4). It is designed for conversational AI scenarios, such as chatbots or virtual assistants, where context, topic management, and mood detection are important. The node can operate in synchronous or asynchronous modes, allowing either immediate responses or deferred processing via a callback.

Common use cases:

  • Building intelligent chatbots that understand conversation context.
  • Automating customer support with advanced topic and mood detection.
  • Integrating AI-driven message processing into multi-channel communication workflows.

Practical examples:

  • A Telegram bot that uses Lola AI to answer user questions, detect topic shifts, and adjust its responses accordingly.
  • An internal helpdesk assistant that leverages mood detection to escalate conversations when users appear frustrated.

Properties

  • Knowledge Base: The knowledge base used to train the AI for understanding the conversation's context.
  • Mode: How the node processes requests:
    - Sync: Waits for Lola's response and outputs it directly.
    - Async: Returns immediately; the response will be available in the Lola Callback node.
  • Model: The AI model to use:
    - GPT3 4K
    - GPT-3.5 Turbo 4K
    - GPT-4 8K
    - GPT-4 32K
  • Execution ID: The n8n execution ID, used for tracking.
  • Load From Message Composer: Whether the message comes from the Message Composer. If true, some fields are auto-filled from incoming data.
  • Incoming Message: The user's text message (required if not loading from the Message Composer).
  • Chat Identifier: Unique identifier for the chat session (required if not loading from the Message Composer).
  • Message Identifier: Identifier used to track the specific message (shown if not loading from the Message Composer).
  • Source ID: The source of the message (e.g., "Telegram") (required if not loading from the Message Composer).
  • Main Scope: Main prompt scope (the number of previous messages considered for context).
  • Switch Topic: Whether to automatically switch topics based on the conversation flow.
  • Topic Shift Detection: Enable/disable automatic detection of topic changes.
  • User Shift Detection Number: Number of messages to consider when detecting user-initiated topic shifts.
  • Mood Detection: Enable/disable mood detection in the conversation.
  • Mood Detection Scope: Number of recent messages to analyze for mood detection.
  • Allow Retries: Whether to allow retries for NLP processing.
  • Maximun Retries: Maximum number of retries for NLP processing (shown if "Allow Retries" is enabled).
  • Allow Client Commands: Whether to allow client commands (e.g., /debug) in the conversation.
  • Skip Append To History On Topic Change: Whether to skip appending messages to history when the topic changes.
  • Prompt Params: Additional metadata parameters to pass to the AI, as key-value pairs.
  • SubPrompts: List of sub-prompts, each with a topic and associated text, used for topic-specific prompting.
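
The property values above surface in the settings block of the node's main output (see the Output section). As a sketch, in TypeScript, of how the properties map onto that structure — the field names come from the example output in this document, but the interface itself is illustrative, not the node's actual source:

```typescript
// Illustrative mapping of node properties to the output "settings" object.
// Field names match the example output below; the interface is a sketch.
interface LolaSettings {
  language: string;
  autoSwitchTopic: boolean;                 // "Switch Topic"
  topicShiftDetection: boolean;             // "Topic Shift Detection"
  topicShiftDetectionScope: number;         // "User Shift Detection Number"
  moodDetection: boolean;                   // "Mood Detection"
  moodDetectionScope: number;               // "Mood Detection Scope"
  retries: number;                          // "Maximun Retries" (0 when retries are disallowed)
  allowClientCommands: boolean;             // "Allow Client Commands"
  onTopicShiftSkipAppendToHistory: boolean; // "Skip Append To History On Topic Change"
  model: string;                            // "Model", e.g. "gpt3"
  mainPromptScope: number;                  // "Main Scope"
}

// The values from the example output in the Output section:
const exampleSettings: LolaSettings = {
  language: "en",
  autoSwitchTopic: true,
  topicShiftDetection: true,
  topicShiftDetectionScope: 5,
  moodDetection: true,
  moodDetectionScope: 5,
  retries: 0,
  allowClientCommands: true,
  onTopicShiftSkipAppendToHistory: false,
  model: "gpt3",
  mainPromptScope: 10,
};
```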

Output

The node produces up to three output branches:

  1. Main Output (msg):

    • Contains the processed response from Lola AI in json.lola_response.
    • Structure example:
      {
        "lola_response": {
          "header": {
            "executionId": "...",
            "workflowId": "...",
            "source": "...",
            "chatId": "...",
            "msgId": "..."
          },
          "content": {
            "type": "text",
            "text": "...",
            "subPrompts": [ { "topic": "...", "text": "..." } ],
            "params": { "key": "value" },
            "prompt": "...",
            "commands": [...]
          },
          "settings": {
            "language": "en",
            "autoSwitchTopic": true,
            "topicShiftDetection": true,
            "topicShiftDetectionScope": 5,
            "moodDetection": true,
            "moodDetectionScope": 5,
            "retries": 0,
            "allowClientCommands": true,
            "onTopicShiftSkipAppendToHistory": false,
            "model": "gpt3",
            "mainPromptScope": 10
          }
        }
      }
      
    • If the response type is "command", the item is routed to the second output.
  2. Command Output (cmd):

    • Items where the AI response is a command (i.e., lola_response.content.type === "command").
  3. Error Output (err):

    • Items where an error occurred during processing.
    • Example:
      {
        "lola_response": {
          "error": "Error message or object"
        }
      }
      

Note: The node does not output binary data.
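
A downstream Code node can branch on the same fields the node uses for routing. A sketch, based on the output examples above — the item shape follows those examples, and routeItem is a hypothetical helper, not part of the node:

```typescript
// Hypothetical helper mirroring the node's three-way routing,
// derived from the output examples above.
type Branch = "msg" | "cmd" | "err";

interface LolaItem {
  lola_response?: {
    error?: unknown;
    content?: { type?: string };
  };
}

function routeItem(item: LolaItem): Branch {
  const r = item.lola_response;
  if (!r || r.error !== undefined) return "err"; // third output: errors
  if (r.content?.type === "command") return "cmd"; // second output: commands
  return "msg"; // first output: normal responses
}
```

For example, routeItem({ lola_response: { content: { type: "text" } } }) returns "msg", while an item carrying lola_response.error lands on "err".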


Dependencies

  • External Service: Requires access to the Lola AI API endpoint.
  • API Key: Needs a valid Lola API key credential (lolaKeyApi), including token and uri.
  • n8n Configuration: Ensure the Lola API credentials are set up in n8n under the appropriate credential type.
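
The lolaKeyApi credential is described above as containing a token and a uri. A minimal sketch of that shape — the field names come from the list above, but the exact credential schema in n8n may differ, and the endpoint value is a placeholder:

```typescript
// Assumed shape of the lolaKeyApi credential (token + uri, per the
// dependency list above); illustrative only.
interface LolaKeyApiCredential {
  token: string; // API key used to authenticate against the Lola AI API
  uri: string;   // Base URI of the Lola AI API endpoint
}

const creds: LolaKeyApiCredential = {
  token: "YOUR_LOLA_API_KEY",          // placeholder value
  uri: "https://lola.example.com/api", // placeholder endpoint
};
```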

Troubleshooting

Common Issues:

  • Missing or invalid API credentials:
    Error may occur if the Lola API key is not configured or is incorrect.
  • Incorrect input property values:
    Required fields (like "Incoming Message" or "Chat Identifier") must be provided, especially if not using the Message Composer.
  • API errors or timeouts:
    If the Lola service is unavailable or returns an error, the error will be present in the third output branch.

Common Error Messages:

  • "lola_response": { "error": ... }
    Indicates an error occurred during the request to Lola AI. Check the error details for more information (e.g., authentication failure, network issues, or invalid parameters).

How to resolve:

  • Double-check your Lola API credentials in n8n.
  • Ensure all required properties are filled out.
  • Review the error message in the third output for clues about what went wrong.

Links and References

Discussion