Introduction to Chatbots – How Do Chatbots Work? (1/4)

Filed Under: Python Advanced
Introduction To Chatbots

Many companies today claim that their chatbots run on NLP and create responses on the fly. But how do chatbots actually work?

Chatbots aim to make automated interactions indistinguishable from human conversation, but how much of this is true? What goes into such a backend?

Let’s discuss.

How do Chatbots work?

Chatbots are built using several methodologies. Some are simple, based on keyword extraction; some simply respond with entries from an FAQ section.

Others, like Haptik, are more advanced: they run on NLP and respond in a much more human-like manner. Let's look at the different ways chatbots work. In the upcoming articles, we'll talk about creating your first chatbot.

1. Collection of Responses method

Many chatbots on the market today use a repository of predefined responses and an algorithm to select an acceptable answer based on feedback and context.

The matching criterion may be as simple as a rule-based expression match, or as complex as an ensemble of Machine Learning classifiers.

These systems do not produce any new text but only select an answer from a given set.

This may sound like an NLP or a deep learning machine, but it’s not.

Since the responses are pre-written, they don’t make grammatical mistakes.

They are also unable to handle unseen cases for which there is no suitable predefined answer. For the same reason, these models cannot refer back to contextual entity information, such as names mentioned earlier in the conversation.
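The selection step described above can be sketched in a few lines. This is a minimal, hypothetical example (the intents, keywords, and replies are all made up for illustration): the "algorithm" is just keyword overlap, and note how any unseen input falls through to a canned fallback, exactly the limitation described above.

```python
import re

# Hypothetical repository of predefined responses, keyed by trigger keywords.
RESPONSES = {
    frozenset({"hours", "open", "opening"}): "We are open 9am-6pm, Monday to Friday.",
    frozenset({"price", "cost", "pricing"}): "Our basic plan starts at $10/month.",
    frozenset({"refund", "return"}): "You can request a refund within 30 days.",
}

FALLBACK = "Sorry, I don't have an answer for that. A human agent will follow up."

def pick_response(message: str) -> str:
    """Select the predefined answer whose keywords best overlap the input."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    best_keys, best_score = None, 0
    for keys in RESPONSES:
        score = len(words & keys)
        if score > best_score:
            best_keys, best_score = keys, score
    # No keyword matched: the system cannot handle unseen cases.
    return RESPONSES[best_keys] if best_keys else FALLBACK
```

In a real product the overlap score would typically be replaced by a trained intent classifier, but the overall shape – score every canned answer, return the best one – stays the same.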

2. Natural Language Processing (NLP) Methods

These are much harder to create, and understandably so. They require one or more dedicated servers running an NLP model over the text in real time.

Generative models are not based on predefined answers: they produce new responses from scratch.

Usually, these models are based on machine translation techniques, but instead of translating from one language to another, we "translate" from an input to an output (the response).

Essentially, they’re “smarter”. They can refer back to entities in the input and give the impression that you’re talking to a human.

However, these models are hard to train and are quite likely to make grammatical mistakes (especially on longer sentences). They also typically require huge amounts of training data. Also, they can be much slower.

Deep learning architectures such as Seq2Seq are ideally suited to text generation, and researchers hope to make rapid progress in this field. For now, however, we are at the early stages of building generative models that work reasonably well.
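The "translation" framing above is easiest to see in the data preparation: each training example is an (input, response) pair, tokenized and padded to a fixed length before being fed to a Seq2Seq encoder-decoder, just as parallel sentences are in machine translation. The sketch below shows only that preparation step, with made-up example pairs; the encoder-decoder network itself is omitted.

```python
PAD, SOS, EOS = "<pad>", "<sos>", "<eos>"

# Hypothetical training pairs: input utterance -> desired response.
pairs = [
    ("how are you", "i am fine"),
    ("what is your name", "i am a bot"),
]

def build_vocab(sentences):
    """Map every token (plus special symbols) to an integer id."""
    vocab = {PAD: 0, SOS: 1, EOS: 2}
    for s in sentences:
        for tok in s.split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(sentence, vocab, max_len):
    """Wrap the sentence in <sos>/<eos> and pad it to a fixed length."""
    ids = [vocab[SOS]] + [vocab[t] for t in sentence.split()] + [vocab[EOS]]
    return ids + [vocab[PAD]] * (max_len - len(ids))

vocab = build_vocab([s for pair in pairs for s in pair])
max_len = max(len(s.split()) for pair in pairs for s in pair) + 2  # room for <sos>/<eos>
batch = [(encode(q, vocab, max_len), encode(a, vocab, max_len)) for q, a in pairs]
```

From here, the encoder would consume the input ids and the decoder would be trained to emit the response ids one token at a time – the same setup a translation model uses.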

3. Using Transformers

Models like GPT-2 and BERT have raised the bar immensely for human-like conversation. You can test this yourself at:

https://transformer.huggingface.co/doc/distil-gpt2

For example, the following was written entirely by the model.

[Figure: Text written by a transformer]

While these models are still in the testing phase and aren't production-ready per se, there is huge potential in using them for future language generation.

Who knows, you might one day talk to a bot on the phone without noticing anything strange in its responses.

Chatbots work better with short-text communication

The longer the dialogue, the harder it is to automate.

Short-text communications sit at one end of the spectrum, where the aim is to produce a single answer to a single input.

You can receive a particular question from a customer, for example, and respond with an acceptable response.

At the other end are lengthy discussions, where you need to keep track of what has already been said and follow the conversation through several twists and turns.

Customer support conversations, for example, are lengthy threads containing many questions.

Though Google has made significant progress with continuous conversation, and the Google Assistant can now keep track of what you said earlier, there is still a long way to go.

Chatbots can make sense of a question when it is short and to the point. But when we ramble around the topic and discuss many things before getting to the point, the model gets confused and cannot respond with the right answer.

Chatbots can learn and answer questions about your product/service

This is another important part of NLP conversations, and an obstacle to be overcome. There are two types of domains: open and closed.

Open domain: The topic of conversation can be anything (sports, news, health, celebrities, etc.), and the objective of the model is to keep the conversation going with relevance and meaning.

Closed domain: Only a limited range of questions can be asked, on particular subjects. The model answers based on predefined paths in the chatbot.

Open-domain chatbots are still a thing of the future, because there is an enormous amount of data to encode into the model.

Most chatbots, therefore, are trained on a closed domain, where there is only one topic, a set of features, or a niche. The chatbot is trained on that word set and learns to correlate different term usages.
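"Learning to correlate term usages" on a closed domain can be approximated very roughly with co-occurrence counts: terms that appear together in the same sentence are assumed to be related. The tiny support-desk corpus below is an illustrative stand-in, not real training data.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical closed-domain corpus (a support-desk niche).
corpus = [
    "reset your password from the login page",
    "the login page shows an error",
    "billing questions go to the billing team",
]

# Count how often each unordered pair of terms shares a sentence.
cooccur = defaultdict(int)
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        cooccur[(a, b)] += 1

def related(term_a: str, term_b: str) -> bool:
    """True if the two terms ever appeared in the same sentence."""
    key = tuple(sorted((term_a, term_b)))
    return cooccur[key] > 0
```

Modern chatbots learn these correlations as dense embeddings rather than raw counts, but the intuition carries over: within a narrow niche, far fewer term relationships need to be captured, which is why closed-domain bots are tractable today.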

Once the model is deployed for live use, people can talk to the chatbot and receive relevant responses based on the training material. Though this isn't the most capable kind of chatbot, it automates a lot of the basic queries, freeing a human to spend time on the more complex ones.

Let’s now move ahead to understanding intent and how chatbots can classify intent.

Next up: Retrieval-based Intent Classification in Chatbots (2/4)

Ending Note

If you liked reading this article, follow me as an author. Until then, keep coding!
