7.3.4 Components of a chatbot in deployment

The chatbot system presented above consists of the following components:

  1. User Interface: This component represents the end-user-facing interface where users interact with the chatbot. It can be a web application, mobile app, or any other platform like Telegram, WhatsApp, or Facebook Messenger.

  2. Nginx: Nginx is a web server and reverse proxy server that acts as the entry point for incoming requests to the chatbot system. It handles SSL termination, load balancing, and routing of requests to the appropriate services.

  3. Rasa Production: Rasa Production is the service responsible for running the chatbot in production mode. It handles incoming user messages, triggers appropriate actions, and requests responses from the NLG server. Rasa Production runs the Rasa Core model.

  4. Rasa X: Rasa X is a toolset built on top of Rasa Open Source that provides a user interface for training and managing chatbots. It offers features such as conversation review and interactive chatbot testing.

  5. Duckling: Duckling is a language parsing server used for extracting structured information from text. It can understand and extract entities like dates, numbers, and durations from user messages. Duckling is integrated into the chatbot system to enhance NLU capabilities.

  6. Redis: Redis is an in-memory data structure store that acts as a cache within the chatbot system. It is used to store temporary data and session information and to queue messages for inter-service communication.

  7. RabbitMQ: RabbitMQ is a message broker that enables asynchronous communication between services. It provides reliable message delivery and keeps components decoupled. RabbitMQ is used as the event broker between Rasa Production and Rasa X.

  8. PostgreSQL: PostgreSQL is a powerful open-source relational database used for storing chatbot-related data. It is utilized by Rasa X for storing conversation logs, user information, and other metadata.

Docker Compose

The chatbot system is containerized using Docker and orchestrated using Docker Compose. The docker-compose.yml file defines the services and their configurations. Let's briefly review the key services:
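Before walking through the individual services, it helps to see the overall shape of the file. The following is a condensed sketch, not the exact file contents: the rasa/rasa:${RASA_VERSION}-full image and the port numbers come from this chapter, while the remaining image names and versions are illustrative placeholders.

```yaml
# Condensed, illustrative sketch of the docker-compose.yml layout.
version: "3.4"
services:
  rasa-production:
    image: "rasa/rasa:${RASA_VERSION}-full"   # from this chapter
    ports: ["5005:5005"]
  nlu-nlg-server:
    build: "${GIT_DIR}/nlu-nlg-server"        # placeholder build context
    ports: ["6001:6001"]
  app:
    build: "${GIT_DIR}/actions"               # placeholder build context
    ports: ["5055:5055"]
  duckling:
    image: "rasa/duckling:latest"
  redis:
    image: "redis:6"                          # placeholder version
  rabbit:
    image: "rabbitmq:3"                       # placeholder version
  db:
    image: "postgres:11"                      # placeholder version
```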

Rasa Production

This service runs Rasa in production mode and handles user interactions. It is based on the rasa/rasa:${RASA_VERSION}-full Docker image.

Rasa Production serves as the runtime environment for the Rasa Core model. It acts as the message processing unit for the chatbot app. Rasa Production interacts with the NLU-NLG server for natural language understanding (NLU) and natural language generation (NLG) capabilities.

To communicate with the NLU-NLG server, Rasa Production sends messages to the server's address specified in the endpoints.yml file:

nlg:
    url: "http://nlu-nlg-server:6001/nlg"
nlu:
    url: "http://nlu-nlg-server:6001"

It calls the nlu-nlg-server, which applies the NLU model tailored to the detected language to classify the intent. The nlu-nlg-server then relays this information back to Rasa Production, which determines the appropriate course of action based on the received intent.

If the decision involves executing a custom action, Rasa Production calls the 'app' container, which runs the Rasa action server, through port 5055 exposed on that container. The files the 'app' container needs are copied from the 'data' folder defined by the GIT_DIR variable.
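In a standard Rasa setup, this link to the action server is also declared in the endpoints.yml file; the service name 'app' matches the container name used in the compose file:

```yaml
action_endpoint:
    url: "http://app:5055/webhook"
```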

Similarly, for generating a response, Rasa Production interacts with the NLG server. It sends the message to the NLG server (running in the 'nlu-nlg-server' container), which, based on the detected language, returns the response to Rasa Production. Once the nlu-nlg-server generates the response, Rasa Production relays it to the connected channel (e.g., Telegram) as a reply to the user's query.

In summary, Rasa Production acts as the chatbot's message processing unit and decision maker, driven by the trained Rasa Core model. It orchestrates the processes involved in understanding the user's message and generating a response.

PORT: 5005
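For connectors that use Rasa's built-in REST channel, a message reaches Rasa Production as a small JSON payload POSTed to port 5005. The sketch below shows the shape of such a request; the sender id and message text are illustrative.

```python
import json

# Endpoint exposed by Rasa's REST input channel on Rasa Production.
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

# Minimal payload: a sender id (conversation id) and the user's message.
payload = {"sender": "user-42", "message": "Hi, book me a table for tomorrow"}
body = json.dumps(payload)

print(RASA_URL)
print(body)
```

A connector such as Telegram performs the equivalent call through its own channel endpoint; the payload structure above applies to the plain REST channel.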

nlu-nlg-server

The nlu-nlg-server is a container that plays a crucial role in enabling the chatbot's multilingual capabilities. Within this container, two servers are running: the NLU server and the NLG server. Both are reached through port 6001.

To achieve seamless language switching, the nlu-nlg-server utilizes the functionality provided by Rasa Helpers. The configuration for Rasa Helpers is defined in the 'rh_config.yml' file, located in the GIT_DIR directory.

Rasa Helpers offers a convenient way to handle language classification, loading NLU models, and generating NLG responses. With this functionality, you can define and provide models for language classification. Based on the classified language, the NLU-NLG server selects the appropriate NLU model specifically designed for that language.

Similarly, response files are specified and provided, allowing Rasa Helpers to generate NLG responses using the detected language. This integration of language classification, loading NLU models, and NLG generation is all facilitated by Rasa Helpers, ensuring efficient multilingual capabilities for the chatbot.
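The per-language response lookup can be pictured as follows. This is a hypothetical sketch, not Rasa Helpers' actual code: the response table, template names, and fallback rule are illustrative assumptions about how an NLG server of this kind selects text for the detected language.

```python
# Illustrative response table keyed by language, then by template name.
# The entries below are made-up examples, not the project's real responses.
RESPONSES = {
    "en": {"utter_greet": "Hello! How can I help you?"},
    "fr": {"utter_greet": "Bonjour ! Comment puis-je vous aider ?"},
}
DEFAULT_LANGUAGE = "en"  # assumed fallback when a language is unsupported


def select_response(template: str, language: str) -> str:
    """Return the text for `template` in `language`, falling back to the
    default language when no localized text exists."""
    table = RESPONSES.get(language, RESPONSES[DEFAULT_LANGUAGE])
    return table.get(template, RESPONSES[DEFAULT_LANGUAGE][template])


print(select_response("utter_greet", "fr"))  # French response
print(select_response("utter_greet", "de"))  # falls back to English
```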

PORT: 6001

app: The app container runs the Rasa action server on port 5055.

duckling: This service runs the Duckling language parsing server to extract entities from user messages.
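Duckling answers parse requests with a JSON list of extracted entities. The sketch below parses a sample of that output; the sentence and the extracted values are made up, but the field names (body, dim, value) follow Duckling's documented response format.

```python
import json

# Sample of the JSON Duckling returns for a message such as
# "book a table tomorrow at 8pm for 3 people" (values are illustrative).
sample = json.loads("""
[
  {"body": "tomorrow at 8pm", "start": 13, "end": 28, "dim": "time",
   "value": {"value": "2023-06-17T20:00:00.000+02:00",
             "grain": "hour", "type": "value"}},
  {"body": "3", "start": 33, "end": 34, "dim": "number",
   "value": {"value": 3, "type": "value"}}
]
""")

# Collapse the entity list into a {dimension: resolved value} mapping.
entities = {e["dim"]: e["value"]["value"] for e in sample}
print(entities)
```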

redis: This service provides an in-memory cache and message broker using the Redis server.

rasa-x: This service runs Rasa X and provides the user interface for managing the chatbot. It is based on the rasa/rasa-x Docker image.

rabbit: This service sets up RabbitMQ as the message broker for inter-service communication.

db: This service sets up the PostgreSQL database for storing chatbot-related data.
