Machine learning vs rules-based chatbots


Aditya Challa

Your AI doesn’t need to think for itself yet; it just needs to work for your customers

Despite the hype around AI, machine learning and chatbots, we are a long, long way from developing a genuine artificial intelligence, like HAL 9000 from 2001: A Space Odyssey, that ‘thinks’ and understands context and intent like humans do.

On the other hand, we are at the beginning of an AI revolution that will forever change the way businesses interact with their customers. And the excitement around this is why chatbots, and the AI technologies that power them, are near the top of the Hype Cycle today.

With the right expectations and a good understanding of the current capabilities and limitations of today’s chatbot technology, businesses can use AI to solve a number of customer contact challenges.
Intelligence in modern chatbots

The chatbots that businesses are starting to deploy today come in two flavours. Those based on machine learning try to understand the context of, and the intent behind, an input before formulating a corresponding response. This relies on training a neural network to ‘think’ for itself by providing it with thousands or millions of examples of whatever it is you want it to be able to do.
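To make this concrete, here is a minimal, illustrative sketch of how such a bot might be trained to classify intent, using scikit-learn. The utterances, intent labels and library choice are assumptions made for illustration, not a production setup.

```python
# Minimal sketch of intent classification with scikit-learn.
# The utterances and labels are illustrative; a production model
# would need thousands of labelled examples per intent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "I still haven't received my order",
    "Where is my parcel?",
    "I'd like to return this item",
    "How do I send something back?",
]
training_intents = ["order_status", "order_status", "returns", "returns"]

# Vectorise the text and fit a simple classifier on the examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_utterances, training_intents)

print(model.predict(["my package never arrived"]))  # expected: ['order_status']
```

The catch, as discussed below, is that such a model is only as good as the examples it has seen, and it cannot explain why it picked one intent over another.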

The second approach we call rules-based. This type of AI simply takes an input from the customer, checks it against the rules that have been defined, and gives the pre-programmed response.
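By way of contrast, here is an equally minimal sketch of the rules-based idea. The keywords, responses and fallback below are purely illustrative.

```python
# Minimal sketch of a rules-based bot: check the customer's input
# against pre-defined rules and return the pre-programmed response.
# Keywords and responses are illustrative only.
RULES = [
    ({"refund", "return", "send back"},
     "To start a return, please share your order number."),
    ({"where", "order", "parcel", "delivery"},
     "You can track your order here: <tracking link>."),
]
FALLBACK = "Sorry, I didn't catch that. Let me connect you to an agent."

def respond(message: str) -> str:
    text = message.lower()
    for keywords, response in RULES:
        if any(keyword in text for keyword in keywords):
            return response
    return FALLBACK

print(respond("Where is my parcel?"))  # -> tracking response
```

The behaviour is entirely predictable: if no rule matches, the bot falls back to a safe default such as handing over to an agent.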

The reality is that machine learning chatbots are not yet advanced enough to be deployed on a mass scale by businesses, with a single chatbot solving multiple use cases and engaging customers at will. In fact, unless it is built for a very niche purpose, a machine learning chatbot can do more harm than good for a business, because it cannot guarantee the experience it will deliver.

The pitfalls of machine learning chatbots

Machine learning chatbots for customer service work by extracting intent from whatever a customer says to them. This is a complex skill that takes most humans years to develop, and we still get it wrong some of the time, especially when we don’t have visual and tone of voice cues to go on.

Even assuming the data sets used to train the bot are not biased (and in the absence of a large number of real-world examples they will be), they are unlikely to be exhaustive enough to cover every possible scenario the chatbot might encounter. Yes, a machine learning chatbot will get better with every interaction, because it learns, but it is doubtful that any bot based on current technology will get anywhere near a human’s hit rate when it comes to identifying intent.

And that’s a serious problem, because when a chatbot misclassifies a case the repercussions range from lost time for the customer to lost revenue and reputation. When misclassifications do happen, it is not usually possible to find out why: machine learning neural networks are ‘black boxes’ that don’t reveal how they applied their logic to arrive at a decision. You have to interpret or reverse engineer them, which makes correcting mistakes a difficult process.

For most businesses, the types of interactions they wish to automate are either too routine to require a machine learning chatbot, or they are too complex to automate at this moment in time.

Rules-based chatbots – our approach to what works best right now

Considering these current drawbacks to the machine learning approach, we choose to build chatbots using a rules engine that is fully functional and useful today, while investing in the infrastructure required for powering the chatbots of tomorrow.

Each bot has a knowledge representation, technically known as its ontology, based on the task it is built to solve. Our platform is armed with an ever-expanding library of state-of-the-art Natural Language Understanding (NLU) modules (open source, paid APIs and proprietary) covering spell check, parse trees, sentiment analysis, and so on.
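As an illustration only, the ontology of a delivery-tracking bot might be sketched as a plain data structure like the one below. The intent, entity and action names are hypothetical, not our actual schema.

```python
# Hypothetical sketch of a bot's ontology (its knowledge representation)
# for a delivery-tracking task. Names are illustrative only.
ORDER_TRACKING_ONTOLOGY = {
    "intents": ["track_order", "change_delivery_address", "report_missing_item"],
    "entities": {
        "order_number": {"type": "string", "required": True},
        "postcode": {"type": "string", "required": False},
        "delivery_date": {"type": "date", "required": False},
    },
    "actions": {
        "track_order": "lookup_order_status",
        "change_delivery_address": "update_delivery_address",
        "report_missing_item": "create_support_ticket",
    },
}
```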

With our NLP engine, any combination of NLP modules can be strung together to form an NLP pipeline suited to a specific use case. The pipeline’s NLU layer, combined with a custom rule engine, maps the semantics of a natural language sentence to the bot’s ontology. The bot’s current state is then matched to a pre-programmed intent corresponding to one of the customer service issues the bot has been designed to handle. Based on this intent, the bot generates pre-configured actions, which can be customised for each channel (Facebook Messenger, Twitter, web chat, etc.) as well as personalised for each user.
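The hypothetical sketch below shows the shape of that flow: NLU modules strung into a pipeline, a rule engine mapping the enriched message onto a pre-programmed intent, and a response template chosen per channel. The module functions, intent names and templates are invented for illustration and are not the platform’s actual API.

```python
# Hypothetical sketch of an NLP pipeline. Each module enriches the
# message, the rule engine maps the result to a known intent, and the
# response is customised per channel. Names are illustrative only.
def spell_check(msg):
    msg["text"] = msg["text"].replace("parcle", "parcel")  # stand-in for a real module
    return msg

def extract_entities(msg):
    msg["entities"] = {"order_number": "12345"} if "12345" in msg["text"] else {}
    return msg

PIPELINE = [spell_check, extract_entities]

RESPONSES = {
    ("track_order", "facebook_messenger"): "Tap below to see your delivery status.",
    ("track_order", "web_chat"): "Here is the latest status for order {order_number}.",
}

def rule_engine(msg):
    # Map the enriched message onto a pre-programmed intent.
    return "track_order" if "parcel" in msg["text"] or msg["entities"] else "unknown"

def handle(text, channel):
    msg = {"text": text.lower(), "entities": {}}
    for module in PIPELINE:  # run the NLU pipeline
        msg = module(msg)
    intent = rule_engine(msg)
    template = RESPONSES.get((intent, channel),
                             "Let me hand you over to an agent.")
    return template.format(**msg["entities"])

print(handle("Where is my parcle 12345?", "web_chat"))
# -> "Here is the latest status for order 12345."
```

Because every step is an explicit function or rule, the reasoning behind any response can be traced, which is what gives us the accountability described below.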

While flexible enough to allow customers to phrase essentially the same issue in many different ways, the bot does no self-learning to understand intent. Yet with this setup we are able to satisfy the key criteria our enterprise clients require: flexibility and modularity; compatibility with legacy systems and streamlined agent handover; accuracy despite low levels of systems integration, data isolation and high security; and accountability, as we always know how and why the bot made the choices it did.

Each technology has its time and place

Machine learning is a powerful technology and promises an exciting future where machines can come to understand our needs and our intent, perhaps better than we do ourselves. However, at this moment in time we only recommend machine learning for scenarios where there is little scope for ambiguity, and where vectorisation (converting non-numeric input to numeric inputs) is straightforward.
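For instance, a simple bag-of-words count is a common form of vectorisation: each sentence becomes a fixed-length numeric vector. The short sketch below uses scikit-learn’s CountVectorizer purely as an illustration.

```python
# Illustration of vectorisation: converting non-numeric text
# into numeric vectors with a bag-of-words count.
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["where is my order", "cancel my order please"]
vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out())  # vocabulary learned from the sentences
print(vectors.toarray())                   # one numeric row per sentence
```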

When it comes to chatbots, particularly for customer service, the jury is still out on whether the hype around machine learning will eventually be justified. To confidently deploy an enterprise-grade chatbot right now that your customers want to interact with, we recommend the rules-based approach.