A year ago, Facebook announced the opening of Messenger Platform, its platform for building chatbots on Messenger. A year later, the results speak for themselves: 33,000 chatbots have been created on Facebook’s messaging platform. Every sector of activity has quickly grasped the potential of chatbots: you can now book a train ticket, a plane ticket, or a hotel room, or even consult your Direct Energie account via Messenger chatbots. But is this secure enough for an industry as sensitive as insurance? Can we trust chatbots when it comes to sensitive data such as our health data?
Chatbots in insurance: yes, but why?
As we saw in the introduction, a chatbot can carry out tasks that are varied and more or less complex. However, it is mainly used in the context of customer relations. Answering a customer’s request for information or scheduling an appointment with their advisor are examples of very common chatbot tasks. For these tasks, the chatbot is even considered more efficient than a human advisor, mainly because of its 24/7 availability.
Internationally, chatbots are already being used to take out insurance contracts entirely online, without the intervention of an advisor. This is notably the case with Kevin, a solution built through a collaboration between a startup and an Australian insurance company. The concept is to insure transactions between individuals on collaborative platforms (eBay, leboncoin…). The system uses data from both parties’ Facebook accounts and secures the contract through the use of blockchain technology. However, this system is still in the experimental phase and remains dependent on the chatbot correctly understanding the transaction to be insured.
Different levels of access to sensitive data depending on usage
All of these tasks performed by chatbots require access to the insurer’s data in order to interact with the customer. However, that data can be more or less deeply integrated with the core business information system (IS). We can distinguish three main types of chatbot use in the context of customer relations:
1. Dynamic FAQ, aimed at answering simple customer questions 24/7. For example, a client wants to know the procedure for contacting an expert.
2. Passive advisor, aiming to answer simple questions contextualized according to the client’s contract. For example, a client wants to know the details of their dental health coverage.
3. Active advisor, aiming to respond to complex requests requiring strong interactions with the IS. For example, taking out a health insurance contract.
At the first level, the chatbot only accesses non-personal, static data, so the risk is low. From the second level onward, however, the chatbot must access personal data, and at the third level it must even process requests that act on the IS. Securing access to this data then becomes an essential issue.
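The three usage levels above can be thought of as an ordered access hierarchy: a bot granted a given level may only serve requests at or below it. As a minimal sketch (the enum names and the `authorize` helper are invented for illustration, not part of any described system):

```python
from enum import IntEnum

class AccessLevel(IntEnum):
    """Illustrative tiers mirroring the three chatbot usage levels."""
    FAQ = 1      # dynamic FAQ: static, non-personal content only
    PASSIVE = 2  # passive advisor: read access to the customer's own contract
    ACTIVE = 3   # active advisor: requests that act on the IS

def authorize(bot_level: AccessLevel, required: AccessLevel) -> bool:
    """A chatbot may only serve requests at or below its granted level."""
    return bot_level >= required

# A level-1 FAQ bot must be refused any personal-data request:
assert authorize(AccessLevel.FAQ, AccessLevel.FAQ)
assert not authorize(AccessLevel.FAQ, AccessLevel.PASSIVE)
```

In a real deployment the check would of course sit behind the insurer’s identity and access management layer rather than a simple comparison, but the principle of gating data access by usage level is the same.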
How to secure interactions between the chatbot and the insurance IS?
There are three levels of risk to be covered:
– Low risks: these mainly concern the use of a third-party chatbot vendor’s solution, as well as the access rights that solution is granted to the IS.
– Significant risks: these relate in particular to the interception of sensitive data and to unauthorized access to it via the chatbot. To cover these risks, Aude Thirriot and Nicolas Gauchard, cybersecurity experts at Wavestone, advise putting in place measures such as encryption of stored data and tracing of data exchanges, as well as conducting a genuine security awareness campaign within the teams handling the data.
– Finally, the last category: critical risks. These concern actions that could jeopardize the insurance IS and trigger damaging negative publicity. In this case, our two experts recommend end-to-end encryption of data, data processing and storage on European soil, and strong authentication.
Still a young solution but one expanding rapidly, chatbots will undoubtedly gain importance in 2021 across all sectors of activity. A real driver of added value in customer relations, chatbots will be called upon, as the technology evolves, to carry out increasingly complex tasks. Nevertheless, securing these solutions will remain a major issue for the companies that use them, particularly in highly sensitive areas such as insurance.