We have more and more inquiries about the automation of business interactions and processes with intelligent technologies. Today’s tech landscape offers a wide range of services to implement such requirements, so I would like to highlight some introductory thoughts around bots I came across recently.
In the field of intelligent technologies, we see three main levels of engagement:
Bots are often seen as chat robots that simulate human conversation on websites and inside apps. A chatbot typically communicates with a real person, although there are also scenarios where two chatbots communicate with each other. The great thing about bots is that, once deployed, they require minimal support from IT, only business user engagement to keep the bot up to date in its domain. Microsoft serves this area with its Azure-based bot service and a set of advanced cognitive services.
Robotic process automation (RPA), on the other hand, emulates human interactions with systems such as business applications or databases. Quite similar to an Excel macro, it can follow rule-based decisions and copy, paste or exchange data between systems to automate routine, manual tasks with precision and speed. RPA typically does not have self-learning capabilities. A pioneer in that space is the UK-based software company Blue Prism.
And last but certainly not least, artificial intelligence (AI) consists of algorithms that can sense, comprehend, act and learn from large datasets. AI technology can combine and analyze large data sets to deliver outcomes and carry out tasks more efficiently than humans, and with higher and more reliable precision. It is also in the nature of AI to provide self-learning capabilities. In recent years, Microsoft has invested heavily in cloud-based AI solutions to increase business productivity.
How Chatbots Work
In general, bots allow users or customers to complete commands or transactions, or to find information, through clicks, conversations or messaging. While we often see bots with an external, customer-facing focus (such as customer support or requesting / providing product information), there are also many internal, employee-oriented scenarios (help desk or human resources) where bots can offer significant added value. It's important to keep in mind that bots do not continuously learn on their own the way machine learning models do; they require human input to stay up to date and increase accuracy. However, bots can call cognitive services (such as Speech, Language or Vision) to increase their quality and relevance.
So essentially a chatbot…
- …is UI-based and communicates with a customer or employee in natural language through messaging or speech
- …has APIs that allow the bot to call services or integrate with apps to deliver outcomes to the user (finding information, scheduling a meeting, …)
- …maps key phrases and words to answers in the backend. Accuracy is increased through manual updates and feedback (“Was this chat session useful?” – “yes” or “no” – or “Did you have to be routed to second-level support?”)
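The three characteristics above can be sketched in a few lines of code. This is a minimal illustration, not a real bot framework: the `FaqBot` class, its answer table and the feedback counters are all invented names standing in for the backend mapping and feedback loop described in the list.

```python
# Minimal sketch of a keyword-to-answer bot with a feedback counter.
# All names here are illustrative, not part of any real bot framework.

class FaqBot:
    def __init__(self, answers):
        # answers: maps a key phrase to a canned reply
        self.answers = answers
        self.feedback = {"useful": 0, "not_useful": 0}

    def reply(self, message):
        # Match the first key phrase found in the user's message.
        text = message.lower()
        for phrase, answer in self.answers.items():
            if phrase in text:
                return answer
        return "Sorry, I did not understand. Routing you to second-level support."

    def record_feedback(self, useful):
        # The "was this chat session useful?" signal from the list above.
        key = "useful" if useful else "not_useful"
        self.feedback[key] += 1


bot = FaqBot({
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
    "password": "You can reset your password at the account settings page.",
})
print(bot.reply("How do I reset my password?"))
bot.record_feedback(useful=True)
```

In a real deployment, the answer table would be maintained by business users, and the feedback counters would feed the manual update cycle mentioned above.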
Make your Bot intelligent
To extend the characteristics listed above, Microsoft offers a comprehensive suite of cognitive services, based on the Azure cloud platform, that can be integrated into the bot architecture. Microsoft itself calls this a “democratization of AI”: with the cloud-based deployment model, this kind of technology becomes available to basically everyone.
One capability I would like to highlight in particular is the Language Understanding service (LUIS), which is machine learning-based and lets developers build natural language understanding into apps and bots. It helps customers improve the way their chatbots understand language in context, so that their bots can communicate in natural language. There is a great light control demo on the Azure website that further illustrates this.
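Conceptually, a language-understanding service turns a free-text utterance into a structured prediction of intents and entities that the bot can act on. The sketch below shows what consuming such a result might look like; the dict is a hand-written stand-in loosely modeled on a LUIS-style prediction response, and the field names and scores should be treated as assumptions, not the official schema.

```python
# Hand-written stand-in for a language-understanding prediction result.
# Field names are an assumption modeled on a LUIS-style response.
prediction = {
    "query": "turn off the living room light",
    "prediction": {
        "topIntent": "TurnOff",
        "intents": {"TurnOff": {"score": 0.97}, "TurnOn": {"score": 0.02}},
        "entities": {"room": ["living room"]},
    },
}

def top_intent(result, threshold=0.5):
    """Return the top intent if its score clears the threshold, else None."""
    pred = result["prediction"]
    intent = pred["topIntent"]
    score = pred["intents"][intent]["score"]
    return intent if score >= threshold else None

intent = top_intent(prediction)
rooms = prediction["prediction"]["entities"].get("room", [])
# A bot would now route the intent (e.g. "TurnOff") plus the extracted
# entities to the matching handler, such as a device-control service.
print(intent, rooms)
```

The threshold check matters in practice: when no intent scores well, the bot should fall back to a clarifying question rather than guess.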
Besides LUIS, there are other categories that improve your bot experience and make it smarter:
- Vision – identify and analyze content within images and videos
A possible example of image recognition would be a support bot to which the customer can upload a photo of a defective item for diagnostic purposes; the bot then recognizes which product and which defect is present, based on predefined patterns.
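The diagnostic flow above can be sketched as follows. Note the hedges: `detect_tags` is a stub standing in for the call to an image-analysis service, and the pattern table with its router products and defects is invented purely for illustration.

```python
# Sketch of the photo-diagnosis flow. The pattern table and product names
# are invented; detect_tags stubs out the call to a vision service.

PATTERNS = {
    ("router", "broken antenna"): ("Router X200", "Antenna damage"),
    ("router", "scorch mark"): ("Router X200", "Power supply fault"),
}

def detect_tags(image_bytes):
    # Stub: a real implementation would send the image to an
    # image-analysis API and return the tags it detects.
    return ["router", "broken antenna"]

def diagnose(image_bytes):
    tags = set(detect_tags(image_bytes))
    # Predefined patterns: all tags in a pattern must be present.
    for pattern, (product, defect) in PATTERNS.items():
        if set(pattern) <= tags:
            return f"Detected {defect} on {product}."
    return "Could not identify the product; forwarding to a support agent."

print(diagnose(b"\x89fake-image-bytes"))
```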
- Speech – integration of speech processing capabilities
This area offers many starting points. For example, speech to text can be used to dictate meeting minutes into the CRM while sitting in the car, or, the other way around, existing text can be read back via speech. It's also possible to use real-time translation within a bot environment, e.g. if different support languages are to be offered without dedicated teams being available for every language.
- Language – understand the meaning of unstructured text
Text analytics recognizes key phrases and detects languages in text and can use them later in the bot-based conversation. Content Moderator, on the other hand, provides machine-assisted content moderation for images, text and videos to detect offensive or unwanted content within a conversation.
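To make the two capabilities concrete, here is a deliberately naive sketch: key-phrase extraction via a stop-word filter and moderation via a blocklist. The real services use trained models; the word lists below are invented for illustration only.

```python
# Naive illustrations of key-phrase extraction and content moderation.
# Real cognitive services use trained models; these word lists are invented.

STOP_WORDS = {"the", "a", "an", "is", "my", "to", "and", "i", "of"}
BLOCKLIST = {"idiot", "stupid"}

def key_phrases(text):
    # Keep content words by stripping punctuation and stop words.
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in words if w and w not in STOP_WORDS]

def is_offensive(text):
    # Flag the message if any content word is on the blocklist.
    return any(w in BLOCKLIST for w in key_phrases(text))

msg = "My invoice is wrong and the refund is missing."
print(key_phrases(msg))
print(is_offensive("This is stupid!"))
```

A bot could use the extracted phrases to route the conversation (e.g. "invoice", "refund" → billing) and the moderation flag to de-escalate or hand off to a human agent.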
- Knowledge – integrate knowledge resources into apps and services
QnA Maker lets you extract information from your existing content into an FAQ environment and also lets you train your bot to improve the answer quality of the service.
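The core idea of FAQ-style answering can be sketched with a simple retrieval step: score each stored question against the user's question and return the best answer. QnA Maker itself uses a trained ranker; the word-overlap score and the FAQ entries below are simplifications invented for this sketch.

```python
# FAQ retrieval sketch: rank stored questions by word overlap with the
# user's question. The FAQ entries and scoring are illustrative only.

FAQ = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "Where can I download my invoice?": "Invoices are available under Account > Billing.",
}

def best_answer(question, min_overlap=2):
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for stored_q, answer in FAQ.items():
        score = len(q_words & set(stored_q.lower().split()))
        if score > best_score:
            best, best_score = answer, score
    # Below the overlap threshold, admit defeat instead of guessing.
    return best if best_score >= min_overlap else None

print(best_answer("how do i reset a password?"))
```

Returning `None` below a confidence threshold mirrors the training loop mentioned above: unanswered questions are exactly the input a business user would review to extend the FAQ.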
- Search – build search enabled apps and services
Bing-based Web, Image, Video, News and Visual Search: a web-scale, ad-free search engine to enable your apps and services
One final thought
The spread of intelligent assistants and natural language processing also brings an ethical dimension to the discussion: as a customer, do I want to talk to a bot? And how do I feel if I don't know it's a bot but find out later? There is an interesting article from IBM around this discussion, worth a read.