After decades of stagnation in research and development of artificial intelligence solutions, machine learning has blossomed again – and its seeds are spreading to financial services. Sofia Lotto Persio reports on how AI is improving the field.
In 1997, IBM’s Deep Blue computer programme beat world chess champion Garry Kasparov in a famous man-versus-machine showdown. Almost 20 years later, in March 2016, Google’s AlphaGo programme beat South Korean champion Lee Sedol in four out of five matches of Go, a strategy board game, in what represents another historic breakthrough for artificial intelligence (AI).
In the two-decade span between these achievements, AI has progressed tremendously. Go is a far more complex game than chess: there are more possible positions on the board than there are atoms in the universe. Computing an appropriate response to each move was beyond what developers could capture in a string of algorithms – until now.
The history of AI development has not always been straightforward. Historically, the hype that followed any sign of progress damaged the field rather than helping it advance. The applications and cost of AI could hardly bear the weight of expectations, leading to the withdrawal of government and commercial funding and marking the so-called “AI winters”.
Those winters ended when the hardware and software required to operate AI technologies finally caught up with each other: computers became more powerful and algorithms simpler. No longer just a feature of science fiction films, AI is now in full spring, becoming an inherent aspect of modern life. Any average smartphone harnesses the power of AI to identify faces in photos, recognise spoken commands, translate text and more.
Beyond the hype
Some banks are starting to use AI too, with applications seen across a variety of sectors. For instance, the rise of the robo-advisor in wealth management and hedge funds has been well documented across financial publications.
What AI does is essentially process a vast quantity of data at a speed no human can achieve. As such, it is very useful in predicting markets, especially as the algorithm is built to improve with the quantity of data it processes. “Machine learning is about adaptable behaviour,” explains Tristan Fletcher, research director at Thought Machine, a company working on developing personal finance applications using machine learning and big data.
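That adaptability can be sketched in a few lines of Python. The toy example below is an illustration only (it is not Thought Machine’s system): an online estimator of an unknown “true” market value whose error shrinks as it processes more noisy observations, which is the sense in which a learning algorithm improves with the quantity of data it sees.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0  # the unknown quantity the model is trying to learn

def online_mean_errors(n_obs):
    """Feed noisy observations one at a time and track how far the
    running estimate is from the true value after each update."""
    mean, errors = 0.0, []
    for i in range(1, n_obs + 1):
        obs = TRUE_VALUE + random.gauss(0, 10)  # noisy market signal
        mean += (obs - mean) / i                # incremental (online) update
        errors.append(abs(mean - TRUE_VALUE))
    return errors

errors = online_mean_errors(1000)
print(f"estimate error after 10 observations:   {errors[9]:.2f}")
print(f"estimate error after 1000 observations: {errors[999]:.2f}")
```

The more data the estimator processes, the closer its behaviour gets to the underlying reality, without anyone reprogramming it along the way.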
Financial services provide a fertile ground for AI applications because, like every computer programme, AI’s strength comes from the quality of the data fed to it, and financial institutions are data mines. “Data has become monetisable,” says Fletcher.
AI applications in trade finance can be found particularly in the field of compliance, to prevent money laundering and fraud. Banks are currently facing the challenge of increased regulation in these areas, and keeping up with various requirements can be tricky for any compliance department. “Banks are failing to identify threats and fraudulent activities by relying solely on curated databases,” says Tapan Agarwal, product council head at iGTB, a company that markets an AI due diligence software called DDIQ.
Even when fraudulent information could be found via a simple Google search, banks are unlikely to rely on generic search engines, whereas bespoke AI solutions can search deep into the web to find material on the parties involved in a transaction or a letter of credit. “When you have a thousand hits, a lot of them give you false positives, whereas DDIQ brings efficiency to the process,” says Agarwal. The product uses AI and natural language processing to search every corner of the web and build a comprehensive profile of an individual, a small or medium-sized business or a large corporate involved in a transaction.
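A generic sketch shows why name screening throws up so many false positives in the first place. The names and threshold below are made up for illustration, and this is not how DDIQ actually works: fuzzy matching a transaction party against a watchlist catches misspellings, but similar names belonging to entirely different people trip the screen too.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist, for illustration only.
WATCHLIST = ["Ivan Petrov", "Acme Trading Ltd", "Global Exports SA"]

def screen(party, threshold=0.85):
    """Return watchlist entries whose name similarity exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, party.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Ivan Petrov"))        # exact match on the listed name
print(screen("Acme Traiding Ltd"))  # misspelling is still caught
print(screen("Ivana Petrova"))      # a different person also trips the screen
```

Every hit in that last category is one a compliance officer has to investigate and discard by hand, which is the inefficiency the AI-driven profiling tools aim to remove.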
According to Agarwal, deploying AI solutions can significantly reduce crime in trade finance: “A lot of money laundering happens in trade because screening is a nightmare,” he says, adding that while banks can only deal with what the document says, rather than being responsible for the actual content of the trade, the data contained in an invoice could still raise red flags if, for instance, the stated cost of the commodity per unit did not match the actual invoice value.
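The red flag Agarwal describes is simple to express in code. The sketch below, built on a hypothetical invoice structure and made-up reference prices, checks whether the stated cost per unit and quantity reconcile with the invoice total, and whether unit prices sit far from a market reference – two classic signs of over- or under-invoicing.

```python
# Hypothetical reference prices per unit, for illustration only.
REFERENCE_PRICES = {"copper wire": 9_400.0}

def invoice_red_flags(invoice, total_tol=0.01, price_tol=0.5):
    """Return a list of red flags for an invoice: line items that do not
    reconcile with the stated total, or unit prices far from market."""
    flags = []
    computed = sum(l["unit_price"] * l["quantity"] for l in invoice["lines"])
    if abs(computed - invoice["total"]) > total_tol * invoice["total"]:
        flags.append(f"line items total {computed:,.2f} "
                     f"but invoice states {invoice['total']:,.2f}")
    for l in invoice["lines"]:
        ref = REFERENCE_PRICES.get(l["commodity"])
        if ref and abs(l["unit_price"] - ref) > price_tol * ref:
            flags.append(f"{l['commodity']} priced at {l['unit_price']:,.2f} "
                         f"vs market reference {ref:,.2f}")
    return flags

suspect = {
    "total": 950_000.0,  # stated invoice value
    "lines": [{"commodity": "copper wire",
               "unit_price": 9_500.0, "quantity": 50}],
}
for flag in invoice_red_flags(suspect):
    print("RED FLAG:", flag)
```

A human screener applying the same arithmetic to thousands of documents a day is the “nightmare” Agarwal refers to; a machine applies it to every line of every invoice without fatigue.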
Getting on board
Banks’ uptake of the technology is not consistent across the industry. Law firm Baker & McKenzie recently released a report on AI applications in financial services after realising that most fintech innovation is driven by artificial intelligence. “It was our feeling that there is not a very good understanding among our clients, and in the marketplace in general, about what artificial intelligence is, how exactly it works and what some of the risks surrounding it are,” says Astrid Raetze, a partner at Baker & McKenzie who advises on financial products, financial markets, securities and futures transactions. According to her, only a few banks have truly embraced the technology, developing their own AI solutions, using them to monetise their data and then making them available to their clients.
Banks have very compelling reasons to use this technology. AI is increasingly featuring in regulation technology (regtech), including compliance frameworks, anti-money laundering, risk assessment and cross-border regulatory compliance. Raetze tells GTR that regulators are using increasingly sophisticated technology to obtain a live feed of real-time transactions from the market participants they regulate, enabling them to pick up on breaches of regulation and money laundering as they occur. “If you are not at least cognisant that this is happening, and you are not talking to the technology companies already in place and considering how to make this work within your regulatory framework, I think you can get left behind,” she warns. In banking terms, that means paying more than the competitors who will have cut costs thanks to AI.
Agarwal believes that banks should delegate AI development to specialised companies rather than developing their own products: “Banks should focus on banking and throw the technology challenge to technology vendors,” he says. But for those who do want to build an in-house AI product for financial services, the first thing to do is identify the problem AI needs to solve. “AI is just a machine that takes you where you want to go, not an objective in and of itself,” said Richard Harris, head of international operations at Feedzai, a machine learning platform for fraud prevention, speaking at PayExpo in London in June. Key elements of success for the design are: the quality of the data, which should be easy to process, fitting certain parameters and interconnected with the rest of the business; algorithm flexibility, as there is no one-size-fits-all; and a clear vision of the end applicability, making sure the end result meets the key performance indicators targeted.
The other matter to think about is: who is responsible when AI fails? “Machines don’t have ethics,” Harris reminded the audience, and this can lead to some unintended consequences. Lawyers speaking on AI liability usually point to a recent example of AI gone rogue. In March, Microsoft developers experimented with a chatbot on Twitter, called Tay, which was designed to interact with users in the language of a teenager. The bot was supposed to learn from its surroundings, becoming smarter with every interaction. Instead, in less than 24 hours, Microsoft had to take it offline after it started to reproduce offensive content, from racist to anti-Semitic slurs, fed to it by Twitter users – content that could have caused serious legal damage to the company.
Similarly, if the wrong information is fed to AI software used for fraud detection and anti-money laundering, the crime may be perpetuated rather than averted. The liability, says Raetze, lies with the owner of the code. “Computer code is usually owned by the institutions that hired the coders, or by the coders themselves – depending on how the contract works. That’s how it stands at the moment in current law. It still begs the question: is that the appropriate place for the liability to end up?” she says, adding that lawyers would also need to properly understand coding to review it and analyse whether it accurately reflects the regulation.
On the regulatory side, more global standardisation will be needed to effectively monitor real-time cross-border transactions. “Regulators are still trying to keep up with where the technology is going. I don’t think any progress is being made towards standardisation at this stage, but there is recognition that there is going to be a requirement for it in the future,” says Raetze.
Financial services are far from being the only area in which AI is optimising processes, with other sectors as diverse as defence, healthcare, and agriculture also experimenting with the technology.
Rachel Braun, chief of staff at Paramount Group, speaks of an “arms race to artificial intelligence”.
Defence applications are driving a lot of innovation in AI, but the nature of the industry raises particularly uncomfortable questions around the use of automation in pulling triggers and making calculations involving life or death. “Ultimately, you need to have the human making the decision,” she said at the FT Rise of the Machine event in London in June.
Life or death decisions are involved in surgery, too – a field in which AI has made significant progress. But professor Guang-Zhong Yang, director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London, notes that surgeons are still nowhere near being replaced by machines: while a machine may be able to sew perfect stitches, the value of a good surgeon is shown in dealing with the unexpected – a quality AI still lacks in comparison to humans.
Where machines powered by AI software can really make a difference is in optimising repetitive tasks, for which there are many applications in the agriculture sector. But an action that requires little brain power from a human demands complicated coding for a machine, as the AI needs to visually recognise what is presented to it – an ability that coders have only recently managed to build into machines. Whether AI can be more cost-competitive than a human in that sector remains to be seen.
Certainly, the loss of jobs to machines is a concern that should not be underestimated. The jobs being disrupted are mostly labour-intensive, entry-level positions, but as these are usually the starting point on the path to managerial roles, both the education and recruitment systems will have to adapt to prevent a loss of skills transfer – an issue already affecting the trade finance community.
On the one hand, teaching should focus more on how to make decisions rather than how to repeat and memorise information. On the other, companies need to think about the knowledge they have in their organisation and how to pass that on to trainees when senior partners retire.