Language AI Playbook
  • 1. Introduction
    • 1.1 How to use the partner playbook
    • 1.2 Chapter overviews
    • 1.3 Acknowledgements
  • 2. Overview of Language Technology
    • 2.1 Definition and uses of language technology
    • 2.2 How language technology helps with communication
    • 2.3 Areas where language technology can be used
    • 2.4 Key terminology and concepts
  • 3. Partner Opportunities
    • 3.1 Enabling Organizations with Language Technology
    • 3.2 Bridging the Technical Gap
    • 3.3 Dealing with language technology providers
  • 4. Identifying Impactful Use Cases
    • 4.1 Setting criteria to help choose the use case
    • 4.2 Conducting A Needs Assessment
    • 4.3 Evaluating What Can Be Done and What Works
  • 5 Communication and working together
    • 5.1 Communicating with Communities
    • 5.2 Communicating and working well with partners
  • 6. Language Technology Implementation
    • 6.1 Navigating the Language Technology Landscape
    • 6.2 Creating a Language-Specific Peculiarities (LSP) Document
    • 6.3 Open source data and models
    • 6.4 Assessing data and model maturity
      • 6.4.1 Assessing NLP Data Maturity
      • 6.4.2 Assessing NLP Model Maturity:
    • 6.5 Key Metrics for Evaluating Language Solutions
  • 7 Development and Deployment Guidelines
    • 7.1 Serving models through an API
    • 7.2 Machine translation
      • 7.2.1 Building your own MT models
      • 7.2.2 Deploying your own scalable Machine Translation API
      • 7.2.3 Evaluation and continuous improvement of machine translation
    • 7.3 Chatbots
      • 7.3.1 Overview of chatbot technologies and RASA framework
      • 7.3.2 Building data for a climate change resilience chatbot
      • 7.3.3 How to obtain multilinguality
      • 7.3.4 Components of a chatbot in deployment
      • 7.3.5 Deploying a RASA chatbot
      • 7.3.6 Channel integrations
        • 7.3.6.1 Facebook Messenger
        • 7.3.6.2 WhatsApp
        • 7.3.6.3 Telegram
      • 7.3.7 How to create effective NLU training data
      • 7.3.8 Evaluation and continuous improvement of chatbots
  • 8 Sources and further bibliography

7.3.3 How to obtain multilinguality


The training data we prepared in the previous section contained both English and Hindi, making the resulting bot multilingual. Multilinguality is not just an added feature; whether it is worthwhile depends on the local context, and it is sometimes the most practical choice. Multilingual chatbots, capable of understanding and responding in multiple languages, offer distinct advantages in certain scenarios. In regions with high linguistic diversity, or in areas where several languages are commonly spoken, a single chatbot that accommodates different languages can greatly improve accessibility and user engagement. Multilingual bots can also facilitate communication in cross-border or multicultural contexts, streamlining interactions and information exchange. In this section, we explore how to make your chatbot multilingual so it can serve a wider audience while maintaining a seamless conversational experience.

In terms of training data preparation, one might replicate intents and responses for each language to enable multilingual support in a chatbot. For example:

Intents:

  • greet_eng

  • greet_swh

  • greet_fra

Responses:

  • utter_greet_eng

  • utter_greet_swh

  • utter_greet_fra
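
In Rasa's NLU training-data format, this replication approach might look like the sketch below. The example utterances are illustrative, not taken from a real training set:

```yaml
version: "3.1"
nlu:
  - intent: greet_eng
    examples: |
      - hello
      - good morning
  - intent: greet_swh
    examples: |
      - habari
      - hujambo
  - intent: greet_fra
    examples: |
      - bonjour
      - salut
```

with matching language-specific responses declared in the domain file:

```yaml
responses:
  utter_greet_eng:
    - text: "Hello! How can I help you?"
  utter_greet_swh:
    - text: "Habari! Naweza kukusaidia vipi?"
  utter_greet_fra:
    - text: "Bonjour ! Comment puis-je vous aider ?"
```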

While this approach initially seems straightforward, it presents challenges as the number of intents grows. Training the model with a limited number of samples can lead to decreased performance and the potential misclassification of similar intents.

To overcome these challenges, we suggest a more robust multilingual architecture that uses a separate NLU model per language. A language classifier first identifies the language of each incoming query. Based on the detected language, the query is routed to the NLU model responsible for intent recognition in that language. Once the intent is recognized, it is passed to the core model, which predicts the appropriate action, such as generating a response or triggering a custom action.
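
The routing logic above can be sketched in Python. Everything here is a stand-in: `classify_language` is a toy keyword lookup where a real system would use a trained language-ID model (e.g. fastText's pretrained identifier), and each per-language "NLU model" is a simple callable where a real system would call a trained Rasa pipeline:

```python
def classify_language(text: str) -> str:
    """Toy language classifier: keyword lookup standing in for a real model."""
    swahili_markers = {"habari", "hujambo", "asante"}
    french_markers = {"bonjour", "salut", "merci"}
    words = set(text.lower().split())
    if words & swahili_markers:
        return "swh"
    if words & french_markers:
        return "fra"
    return "eng"  # fallback language when nothing matches

# One NLU "model" per language; here each is a placeholder callable
# returning an intent name, in place of a trained per-language Rasa model.
NLU_MODELS = {
    "eng": lambda t: "greet" if "hello" in t.lower() else "out_of_scope",
    "swh": lambda t: "greet" if "habari" in t.lower() else "out_of_scope",
    "fra": lambda t: "greet" if "bonjour" in t.lower() else "out_of_scope",
}

def route_query(text: str) -> dict:
    """Detect the language, run the matching NLU model, and return what
    would be handed to the core model for action prediction."""
    lang = classify_language(text)
    intent = NLU_MODELS[lang](text)
    return {"language": lang, "intent": intent}
```

For example, `route_query("habari yako")` is routed to the Swahili model and yields the `greet` intent, while an unmatched language falls back to English.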

The diagram below shows an example of this architecture as used in one of CLEAR Global's chatbot systems:

By implementing this multilingual architecture, you can ensure accurate intent recognition and provide language-specific responses for a seamless user experience. Throughout this documentation, we will explain the architecture, how to train the models, and how to run the chatbot.

Multilingual chatbot deployment architecture