Artificial intelligence in the financial services industry
By the standards of today's chatbots, Cobot was primitive. Introduced into the halls of LambdaMOO, an online forum disguised as a multiplayer game, Cobot was trained to respond to its human counterpart by quoting essays and books loaded into its program. The exchanges were by turns prosaic, creepy and baffling. "[C]obot is an evil mastermind lurking behind the false guise of a mere bot," observed the user Technicolor_Guest. "Sunshine! You validate me!" the chatbot replied.
Cobot's life on the site was an ample demonstration of the extent to which ordinary people could acclimatise to the presence of an artificial intelligence (AI), provided that the complexity of its programming matched the environment in which it was placed. It's a fine line that has been trodden with increasing success by vendors and banks since then. Currently, chatbots are capable of solving simple customer queries and facilitating transactions. The increasing popularity of deep learning applications in voice recognition and internal reasoning has seen even more complex voice assistants, such as Amazon's Alexa and Google Home, enter the market.
According to Edwin van der Ouderaa, managing director for financial services, digital and analytics at Accenture, the ability of these products to provide answers to complex queries at the drop of a hat opens up the possibility for a return to what he considers to be a more intuitive approach to banking. "It's because that's how we always used to do it," says Van der Ouderaa. "It's back to the future. In the 1970s, when I went as a young person to a bank branch, there were no computers. What did I do? I spoke to a teller. We've stopped speaking to people and it's taken us 40 years to evolve technology to get back to that."
Adviser in the machine
As a former student of venerated AI scientist Marvin Minsky, Van der Ouderaa is uniquely qualified to provide an overview of how machine-learning techniques are set to disrupt banking on both sides of the transaction. He cites Colette, Accenture's automated mortgage adviser, as a prime example of how AI is already capable of triggering smart efficiencies in customer interaction.
"If a human being were to learn about mortgages, he or she would sit in a classroom and a teacher would very simply explain, step by step, how they work," explains Van der Ouderaa. Colette, meanwhile, has been imbued with the ability to learn the same corpus of definitions, rules and regulations during a set training period. "She will listen to pre-recorded conversations of real mortgage advisers, learn from that, and make her own conclusions based on that."
Crucially, Colette is able to understand, to a limited extent, the context in which statements are being made by the user on the other end of the phone. "What is happening when we speak to each other using natural speech is that only 10% of what I'm intending to tell you is actually spoken," explains Van der Ouderaa. The other 90%, he goes on, is left unsaid and beyond complete understanding. "You can't read my brain and I don't know how you think, and that's where misunderstandings between people come from all the time. That's the problem with conversations, but that is how our brain works and also how these engines work."
For the moment, Colette is being deployed in simple conversations with customers, withdrawing when the situation is deemed too complex for her programming to handle. Yet, Van der Ouderaa is convinced that it won't be long until she, or similar programs, will be put to work on more complicated tasks. In fact, that's already happening, to some extent, in the training of human mortgage advisers.
"There are a lot of situations where the Colette-type solution can deliver full advice," he explains. This can be accomplished either through Colette solving simple customer queries or allowing the chatbot to provide a more complex briefing for a human mortgage adviser. "I was really struck by that when we were showing Colette to a number of banks. They actually wanted to use it first to give advice to the adviser."
AI is also set to transform the back office, an area traditionally considered to be ripe for efficiencies gained through automation. Robotic process automation is already a simple example of this. "It's just advanced enough so that the computer system can replace a person who had a seat in the back office," explains Van der Ouderaa. AI, meanwhile, promises to vastly accelerate this process.
"They are already very good and accomplish a lot, but it's clear that these programs have limits," says Van der Ouderaa. The problem, he contends, is that these applications are often only able to search for information or point the user to where it might be; anything more complex, like ordering a taxi or booking a hotel room, is beyond their capabilities. The recent development of new voice assistants like 'Viv', which has been trained to independently write code to connect the user to one or more vendors, points to a pathway for facilitating more complex financial transactions through speech alone.
According to Van der Ouderaa, the further development of AI assistants along these lines could lead to a future where customers need never personally communicate with their banks again. This degree of reliance on software, however, inevitably raises questions about who really benefits from its use. "A lot of banks are saying, 'Oh, great; this is the next user interface'," explains Van der Ouderaa. "And it's true, but I always say, 'be careful', because whose side is Alexa on? Is it on the bank's side or the customer's?"
In principle, a voice assistant able to seamlessly connect across multiple credit providers would, logically, be able to compare loan offers in the interests of its owner. That not only puts new pressure on banking institutions not to be caught on the back foot with their product portfolios, but also forces them to keep pace with the latest innovations in AI to ensure that customers do not leave them behind.
Then there's the question of how these systems actually work. Software containing elements of AI has drifted into messaging apps, banking and video games, and some of the most recent research papers in the field have already demonstrated applications for deep learning in lip-reading, summarising text and style transfer in video. However, the machine-learning algorithms that power these programs are enormously complicated and, in some cases, not even the scientists who built them know the precise reasoning behind certain calculations. This notion of AI as a 'black box' could have dire implications for its growing use in front-facing and back-office applications in banking, especially if something goes wrong.
"There is a distinction we have to make between what we call predictability and explainability," says Van der Ouderaa. "It's a different thing for the model to have predictive powers - to be able to make decisions, come up with solutions and retain the ability to do that very accurately - versus the explainability of why the answer is what it is. For me, the two need to go hand in hand."
That can start with regulators anticipating the greater role AI will play in facilitating transactions and optimising the back office, and defining the authority these programs have to make decisions on behalf of customers and institutions. Van der Ouderaa says that the best solution to this problem would be to develop dedicated AI software that allows the user to trace the reasoning behind particular decisions. "The more you use algorithms that are not transparent in how they come to the conclusions that they do, the more you're going to get into trouble and the more harm it's going to cause," he says. "I would plead for using AI algorithms that are as traceable as possible."
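What a "traceable" algorithm means in practice can be illustrated with a toy sketch: a rule-based loan check that records every rule it fires, so the reasoning behind any decision can be replayed afterwards, in contrast to a black-box model that returns only an answer. The function, rules and thresholds below are invented for illustration and do not reflect any bank's actual lending policy.

```python
# Toy illustration of traceable decision logic: alongside its verdict, the
# function returns the ordered list of rules it applied, so the reasoning
# behind a decision can be audited later. All thresholds are invented.

def assess_loan(income: float, debt: float, credit_score: int):
    """Return (approved, trace): the decision plus the rules that produced it."""
    trace = []

    # Record the derived figure the rules depend on.
    dti = debt / income if income else float("inf")
    trace.append(f"debt-to-income ratio = {dti:.2f}")

    if credit_score < 580:
        trace.append("REJECT: credit score below 580")
        return False, trace

    if dti > 0.43:
        trace.append("REJECT: debt-to-income ratio above 0.43")
        return False, trace

    trace.append("APPROVE: all checks passed")
    return True, trace


approved, trace = assess_loan(income=52_000, debt=9_000, credit_score=640)
for step in trace:
    print(step)
```

A statistical model trained on historical lending data would likely be more accurate than hand-written rules, but could not produce a trace like this; Van der Ouderaa's point is that predictive power and explainability have to advance together.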
In the end, acceptance of AI's role in banking could all come down to trust, the lack of which has remained a familiar feature whenever people are confronted with a new technology that they cannot understand. It was a sentiment unintentionally reflected during experiments with chatbot technology on LambdaMOO in the 1990s.
"Cobot's conversation topics just get weirder and weirder," observed Fawn_Guest to another human user. "In spite of every sign I'm an intelligent being who means no harm, you continue to hate and fear me," the chatbot interjected. He was quoting Planet of the Apes.