AI/ML has been around for a long time but recently hit the headlines with ChatGPT, which broke every record in the book by attracting users faster than any other piece of tech ever. If Generative AI is the hot topic of the day though, what is happening behind the scenes with more prosaic Fintech use cases? Tristan, CEO of commodity price predictor and now Insurtech ChAI, first joined us in 2016 for an overview of AI/ML and updated us more recently in 2019 on the then hot topics. What has changed – what is going on with Generative AI, what are the behind-the-scenes factors driving change, how are fintechs investing in AI, and indeed how is investment in Fintech being affected by AI?


We first heard from Tristan – a long-term academic and Fintech-er in the whole AI/ML realm – in 2016 for an overview of the topic, an episode which has remained in the top 5 most downloaded of all time. In 2019 he rejoined us to discuss hot topics in AI/ML, in particular prediction, explicability, alt. data sources and self-learning.


Three factors have been driving an astonishing rate of change – computing power (NVIDIA's chips have been increasing in power by a factor of 10 every year in recent years), an abundance of data sources beyond all recognition even a few years ago, and advances in computer science – not least Transformers (the T in GPT and a leap forward from RNNs), so-called self-attention and RLHF (reinforcement learning from human feedback). In the image space there are the likes of GANs (Generative Adversarial Networks), Latent Diffusion Models and the clever use of efficiency gains.
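For those who want to see what "self-attention" actually boils down to, here is a minimal illustrative sketch of single-head scaled dot-product attention in Python/NumPy. All the names (self_attention, Wq and so on) and the toy numbers are ours purely for illustration – real Transformers add multiple heads, masking, positional encodings and a great deal more on top of this core idea.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal single-head scaled dot-product self-attention (illustrative only).

    X          : (seq_len, d_model) matrix of token embeddings
    Wq, Wk, Wv : learned projection matrices of shape (d_model, d_k)

    Each token's output is a weighted mix of every token's value vector,
    with the weights decided by how well its query matches the other keys -
    this is the "self-attention" at the heart of the Transformer.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])                    # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # (seq_len, d_k)

# Toy usage: 4 "tokens" with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)
```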


Taking the current eye-catcher as a metric, GPT1 was released in 2018, GPT2 in 2019 and GPT3 in 2020, with its offshoot ChatGPT in 2022. GPT3 has 175bn parameters compared to GPT2’s 1.5bn and was trained on 570GB of data compared to 40GB. Huuuge changes.


But this is just the sexiest, most public face of AI/ML, which as a whole has – one might say finally – been getting somewhere as computing power continues to increase. As we heard in LFP219, the ex-GCHQ-ers at Ripjar have a database of 18bn news articles which they can process. Once one gets to this kind of scale of computing then – even avoiding the unfortunate anthropomorphism implicit in terms like intelligence or learning – if we simply revert to the oldskool term data processing, we are getting to a stage where the results from data processing are truly phenomenal. Even I have shifted to keeping track of the latest developments, having been pretty sniffy about the uber-hype of a field that generally failed in most of the initial objectives set out by its pioneers in the 60s.


Topics discussed include:

experiences using ChatGPT – a case study on monetary policy in economics
the need to “learn to dance” with Generative AI systems
comparison with the early search engines – cf now there is a verb “to Google”
the challenge of tying Generative AI systems to the real world – hence their being late to the party and suited to only a subcategory of use cases
media hysteria in re AI as a phenomenon
machines making us machines
one of the unmentioned reasons behind its success is that much of human thinking/discourse right now is bland and predictable
Barnum statements
Tristan’s overview of AI changes over the past four years

way more pervasive
field now so broad it’s hard to be an expert in all subsilos
people no longer afraid of it nor hypnotised by it
people less sceptical re fund-raising and rather view AI as something a business should be using by default
cloud means that everyone can become a data scientist at much lower entry costs

Tristan has changed his mind over his earlier concern about insufficiently sophisticated users of cloud AI tools – in particular due to the leap forward in user interfaces
encapsulation
so many people have used them that the interface has been perfected
the importance however of domain understanding if less so AI understanding
explicability remains highly important but quite often you can retrofit explanations to an AI system’s output
furthermore human understanding is generally limited to “key factors” and “ranked by importance” rather than anything more complicated, hence the need to map onto such a framework when explaining anyway
how retrofitting works – see the illustrative sketch at the end of these notes
the explosion of data sources and the implications – eg satellite data
implications of this for the field – excess choice isn’t always a good thing even if barriers to entry have been lowered
spurious correlations in neural nets cf hallucinations in generative AI

a notable example being that the number of Nicolas Cage films released each year correlates with the number of drownings in US swimming pools

autopilot systems cannot be generative systems due to reliability challenges – it doesn’t matter much if you get a wrong chat response, it does if you crash into a tree
the artificial media hype cycle as well as emergence thereof in social media
future developments
hottest tip/best bet – quantum computing + machine learning
what would be a huge surprise – sentience (a category error as per 2023 New Year Special)
ChAI’s developments and moves into Insurtech, and productisation for producers hedging real-world exposures

And much much more
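As mentioned above, explanations can often be retrofitted onto a black-box model’s output. One common way of doing this is permutation importance: shuffle one input at a time, measure how much worse the predictions get, and rank the inputs by the damage done – exactly the “key factors, ranked by importance” framing humans find easiest to digest. The sketch below is a minimal illustration only; all names and the toy data are ours, and this is not a description of ChAI’s or anyone else’s production method.

```python
import numpy as np

def retrofit_explanation(predict_fn, X, y, feature_names, n_repeats=10, seed=0):
    """Illustrative post-hoc ("retrofitted") explanation via permutation importance.

    predict_fn is treated as a black box: for each feature we shuffle its column,
    measure how much the model's squared error worsens, and rank features by
    that degradation - i.e. "key factors, ranked by importance".
    """
    rng = np.random.default_rng(seed)
    base_error = np.mean((predict_fn(X) - y) ** 2)
    importances = []
    for j, name in enumerate(feature_names):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])        # destroy feature j's information
            errors.append(np.mean((predict_fn(X_perm) - y) ** 2))
        importances.append((name, np.mean(errors) - base_error))
    return sorted(importances, key=lambda kv: kv[1], reverse=True)

# Toy usage: a made-up "black box" whose true drivers are features 0 and 2
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

def black_box(data):
    return 3 * data[:, 0] + 0.5 * data[:, 2]

for name, score in retrofit_explanation(black_box, X, y, ["f0", "f1", "f2", "f3"]):
    print(f"{name}: importance {score:.3f}")
```

The ranking that comes out maps straight onto the limited-but-useful framework of human explanation discussed above, without ever opening the model up.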


Share and enjoy!