Deploy a Rasa Chatbot on a Virtual Machine on Google Cloud Platform (GCP)

Zara Dana
Jan 30, 2022 · 3 min read

--

Deploying a Rasa Chatbot from a private GitHub repo on a Google Cloud VM

After spending hours on conversation design, training, and testing your Rasa chatbot, you are finally ready to deploy it as a backend API. You start looking at the deployment documentation, only to find there is no easy step-by-step guide for deploying your beloved chatbot. I know because I have been there! That’s why I am sharing how I deployed my chatbot here.

First, I built a custom connector to interact with my chatbot through a webhook. Next, I deployed my chatbot to a cloud virtual machine (VM). I used a Google Cloud Platform (GCP) Compute Engine instance; however, the steps outlined here apply to other cloud providers as well. Below is the step-by-step guide to how I deployed my Rasa chatbot on GCP.

Step 1: Create a Custom Connector

A custom connector is an interface between the chatbot users and the Rasa chatbot. Rasa has prebuilt integrations with many channel connectors such as Slack, Telegram, Facebook Messenger, etc. However, to build a custom connector, we need to implement a class that subclasses rasa.core.channels.channel.InputChannel and implements the name and blueprint methods. More details can be found in the Rasa documentation on custom connectors.

In my chatbot repo, I created a folder called addons that contains my custom connector class, custom_channel.py:
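A sketch of what such a connector can look like, following Rasa’s documented custom-connector pattern (the class name MyIO and the response fields are assumptions; this requires rasa and sanic to be installed):

```python
# addons/custom_channel.py — hypothetical custom connector sketch.
# The channel name "myio" determines the webhook path /webhooks/myio/webhook.
import inspect
from typing import Awaitable, Callable, Text

from sanic import Blueprint, response
from sanic.request import Request
from sanic.response import HTTPResponse

from rasa.core.channels.channel import (
    CollectingOutputChannel,
    InputChannel,
    UserMessage,
)


class MyIO(InputChannel):
    @classmethod
    def name(cls) -> Text:
        # This name is referenced in credentials.yml and in the webhook URL.
        return "myio"

    def blueprint(
        self, on_new_message: Callable[[UserMessage], Awaitable[None]]
    ) -> Blueprint:
        webhook = Blueprint(
            "custom_webhook_{}".format(type(self).__name__),
            inspect.getmodule(self).__name__,
        )

        @webhook.route("/", methods=["GET"])
        async def health(request: Request) -> HTTPResponse:
            # Simple health check endpoint.
            return response.json({"status": "ok"})

        @webhook.route("/webhook", methods=["POST"])
        async def receive(request: Request) -> HTTPResponse:
            # "sender" keys each user's session; "message" is the user's text.
            sender_id = request.json.get("sender")
            text = request.json.get("message")
            collector = CollectingOutputChannel()
            await on_new_message(
                UserMessage(text, collector, sender_id, input_channel=self.name())
            )
            # Return the bot's collected replies to the caller.
            return response.json(collector.messages)

        return webhook
```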

Additionally, I appended my credentials.yml file with:
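The new entry points Rasa at the connector class via its module path (assuming the class above lives at addons/custom_channel.py and is named MyIO; this custom channel needs no extra credentials, so the value stays empty):

```yaml
addons.custom_channel.MyIO:
# no credentials required for this custom channel
```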

Also, in the domain.yml file, I included the channel name myio after the first response, i.e., utter_greet:
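Rasa supports channel-specific response variations: a variant tagged with channel is served only to that channel, while an untagged variant acts as the fallback. A sketch, with placeholder response text:

```yaml
responses:
  utter_greet:
    # Served only to users connecting through the "myio" channel
    - text: "Hey! How can I help you?"
      channel: "myio"
    # Fallback for all other channels
    - text: "Hey! How can I help you?"
```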

And that is all we need to do on the chatbot side to enable the custom connection. To run the chatbot, we will need to specify the path to credentials.yml:
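The server listens on port 5005 by default:

```shell
rasa run --credentials credentials.yml
```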

Now that the Rasa chatbot server is running locally on port 5005, multiple users can communicate with it simultaneously, each within their own session, through the http://localhost:5005/webhooks/myio/webhook endpoint with a payload similar to the following:
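A minimal client call from the Python standard library (the sender id scopes each user’s session; the field names match the connector sketched above):

```python
import json
from urllib import request

# Each user gets their own session, keyed by the "sender" id.
payload = {"sender": "user-123", "message": "Hello"}

req = request.Request(
    "http://localhost:5005/webhooks/myio/webhook",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the server running, this returns the bot's replies as JSON:
# replies = json.loads(request.urlopen(req).read())
```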

Voilà! We have our custom webhook working. Let’s now move on to deploying the chatbot on a VM.

Step 2: Deploy on a VM

I used a private GitHub repository to store my Rasa chatbot. To enable access to this private repository from within the VM, I set up a GitHub repository SSH deploy key by following the steps below:

  • Created a pair of private and public ssh keys using the ssh-keygen command
  • Copied the public key (found in the file with the .pub extension) to the repository’s deploy keys. If you have never done this before, navigate to the repo > click on “Settings” on the top right > “Deploy keys” on the left menu > “Add deploy key”
  • Copied the private key to Google Secret Manager. To accomplish this, navigate to the secret manager > click on “+ CREATE SECRET” > simply assign a name to the secret and upload or copy the key to the secret value. To access the secret from within the VM,
    – the VM needs the cloud-platform access scope (see below), and
    – the service account needs to have the Secret Manager Secret Accessor permissions.
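The key-generation and secret-upload steps above can be sketched as follows (the key file and secret name github-deploy-key are assumptions):

```shell
# Generate a dedicated key pair for the deploy key (no passphrase)
ssh-keygen -t ed25519 -f ./deploy_key -N ""

# deploy_key.pub goes into the repo's "Deploy keys" in the GitHub UI;
# the private half is stored in Secret Manager:
gcloud secrets create github-deploy-key --data-file=./deploy_key
```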

To avoid retraining the chatbot on the VM, I stored my trained model weights in cloud storage. I also created a requirements.txt configuration file for my Python packages including rasa, google-cloud-core, google-cloud-storage, etc.
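A minimal requirements.txt along those lines (versions omitted here; pin them as needed for reproducible deploys):

```text
rasa
google-cloud-core
google-cloud-storage
```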

Finally, I used a startup bash script to deploy the Rasa chatbot remotely. The script

  1. downloads the Rasa chatbot from a private GitHub repository
  2. sets up the Python environment
  3. downloads the trained model weights from the cloud storage
  4. starts the Rasa server (and the action server, if one exists)

Below is my startup_script.sh:
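A sketch of what such a startup script can look like. The repository, bucket, and secret names are placeholders, and the secret name matches the hypothetical github-deploy-key from Step 2:

```shell
#!/bin/bash
# Hypothetical startup script; <USERNAME>, <CHATBOT-REPO>, and <BUCKET-NAME>
# are placeholders for your own values.

# 1. Fetch the private deploy key from Secret Manager and clone the private repo
mkdir -p /root/.ssh
gcloud secrets versions access latest --secret="github-deploy-key" > /root/.ssh/id_ed25519
chmod 600 /root/.ssh/id_ed25519
ssh-keyscan github.com >> /root/.ssh/known_hosts
git clone git@github.com:<USERNAME>/<CHATBOT-REPO>.git /opt/chatbot

# 2. Set up the Python environment
cd /opt/chatbot
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# 3. Download the trained model weights from Cloud Storage
mkdir -p models
gsutil cp "gs://<BUCKET-NAME>/models/model.tar.gz" models/

# 4. Start the action server (if the bot has custom actions), then the Rasa server
if [ -d actions ]; then
    rasa run actions &
fi
rasa run --credentials credentials.yml --port 5005
```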

Running the script below, I spun up the VM and opened port 5005 for the Rasa chatbot:
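A minimal sketch of those gcloud commands (the instance name, zone, machine type, network tag, and firewall rule name are all assumptions):

```shell
# Create the VM; the cloud-platform scope lets it read the Secret Manager secret,
# and the startup script runs automatically on first boot.
gcloud compute instances create rasa-chatbot \
  --zone=us-central1-a \
  --machine-type=e2-standard-2 \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --tags=rasa-server \
  --metadata-from-file=startup-script=startup_script.sh

# Open port 5005 for instances carrying the rasa-server tag
gcloud compute firewall-rules create allow-rasa-5005 \
  --allow=tcp:5005 \
  --target-tags=rasa-server
```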

If you made it this far with me, you now have your Rasa chatbot accessible at http://<VM-IP-ADDRESS>:5005/webhooks/myio/webhook. Enjoy chatting away with your in-cloud bot!
