Language AI Playbook

7.3.2 Building data for a climate change resilience chatbot



In this section, we provide a tutorial for developing all the files needed to create a RASA-based climate resilience FAQ chatbot. The tutorial is based on a collaborative project between CLEAR Global and Gram Vaani for farmers in the Bihar region of India. For more information, you can refer to CLEAR Global's blog post, "Using AI to support farmers to adapt to climate change," which explains the TILES project in India.

When preparing an FAQ-based bot, start by defining the following:

  1. The topics you want to cover

  2. The possible questions you can receive

  3. Proper answers to those questions

  4. The different ways your users might phrase those questions

When preparing chatbot content, it’s convenient to work in a format that both technical and non-technical team members can edit easily; a simple spreadsheet works well. Curating the data there lets your linguists, interaction designers, and content specialists collaborate, and its structured form also lets your developers pull the content automatically to create the specifically formatted files for training and testing chatbot models.
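The spreadsheet-to-training-files step can be sketched in a few lines of Python. Note that the column names (`intent`, `question_variants`, `answer`) and the `rows_to_nlu_yaml` helper are assumptions for illustration, not the columns or code of the actual project; a real script would read the exported sheet rather than an inline string.

```python
import csv
import io

# Inline stand-in for an exported sheet; real code would read the CSV export.
# Column names here are assumptions for illustration.
CSV_DATA = """\
intent,question_variants,answer
faq_climate_definition,What is climate change?;What does climate change mean?,Climate change refers to long-term shifts in temperatures and weather patterns.
"""

def rows_to_nlu_yaml(csv_text: str) -> str:
    """Turn FAQ rows (intent, ';'-separated variants, answer) into RASA NLU YAML."""
    lines = ['version: "3.1"', "nlu:"]
    for row in csv.DictReader(io.StringIO(csv_text)):
        lines.append(f"- intent: {row['intent']}")
        lines.append("  examples: |")
        for variant in row["question_variants"].split(";"):
            lines.append(f"    - {variant.strip()}")
    return "\n".join(lines)

print(rows_to_nlu_yaml(CSV_DATA))
```

Run against a real export, the same loop would also collect each answer column into the response definitions for the domain file.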

As part of this tutorial, we provide both a publicly accessible spreadsheet and a codebase that automatically pulls data from the spreadsheet to create RASA-format files. The spreadsheet contains three main topics, one per sheet:

  1. Climate Change T1, on definitions

  2. Climate Change T2, on impact

  3. Climate Change T3, on adaptation methods and government programs

Each topic has a list of FAQs. Let’s take the first FAQ from the first sheet:

[Figure: an example of the chatbot datasheet]

This data encapsulates all the information a chatbot needs to learn how to recognize a question about the definition of climate change and how to answer it. These encapsulations are also referred to as intents. We dive deeper into the technical definition of intents in 7.3.7 How to create effective NLU training data.
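As a minimal sketch of how such an FAQ becomes an intent, the RASA training files could look roughly like this (the intent name, example phrasings, and response text are illustrative, not copied from the project sheet):

```yaml
# nlu.yml -- ways users might ask for the definition of climate change
nlu:
- intent: faq_climate_definition
  examples: |
    - What is climate change?
    - What does climate change mean?
    - Can you explain climate change to me?

# domain.yml -- the answer the bot returns when this intent is recognized
responses:
  utter_faq_climate_definition:
  - text: "Climate change refers to long-term shifts in temperatures and weather patterns."
```

In a real project the `nlu` and `responses` sections live in separate files, and the answer text would come from the spreadsheet's answer column.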

In this particular tutorial, we are working with two languages, three main topics, and 25 FAQs in total.

To access the Climate Resilience FAQ sheet, click here.

To access the Python-based scripts and instructions on GitHub, click here.