This is a super-rare episode on a fundamentally different way of assessing consumer loan risk, this far into the Fintech Age, focusing on an area which is bizarrely, massively under-covered: utilising Open Banking data for the credit assessment of personal loans. Furthermore, this is a rare example of a Fintech – the parent company is Fintern.ai – which both provides SaaS services to banks under its Render brand and lends directly itself through its Abound brand – quite a compelling display of faith in the approach, which over two years has led to an astonishing 70% reduction in loan defaults :-O



Gerald was previously Global Head of Digital Lending at McKinsey, so he should know a thing or two about this topic. Furthermore, despite Abound being formed only in 2020, last year it raised an eye-opening £1/2bn in funding, so there must be quite a few people out there impressed by Abound.


Topics discussed in this rich episode include:

being brought up as an itinerant and the love of the new more generally
choosing the best climate
the lifestyle of a Partner at McKinsey when one has a global role
realisations of gaps in credit assessment
Gerald’s career journey
the challenge of moving from a solid role to founding a start-up
the outdated aspects of consumer credit models
revisiting the 1920s debate on the nature of risk – namely, whether it is something one can calculate using data or something that is unknowable (covered in “Radical Uncertainty: Decision-making for an unknowable future” by Lord King and John Kay)
squaring the circle between these left-brain and right-brain perspectives
the challenge of operationally utilising stress test data
comparing models and data
frequency of updating models/data as a factor influencing how much the philosophical risk question needs squaring
adding in operational management – the speed of making decisions and changing course – as where the rubber hits the road
the four principal reasons behind banks being slow to evolve their credit assessment approach
how many metrics can one derive from a stream of banking data especially given the noise – 1, 10, 100, 1000?
kinds of metrics derived – traditional and non-traditional
real assessments of individual applicants – compared to, say, ONS average data – as the key reason, along with the iteratively improved model, for the massive improvement in default performance compared to market averages
the machine underwriter and how it approaches the numerous metrics
human underwriter and computer underwriter interaction and mutual support/growth
comparing the model to standard rating agency metrics
transparency of algorithms
dealing with phenomenal discontinuities in assessing risk
reinforcement learning from human feedback – close liaison between human underwriters and the computer bods
a shoutout for “Sunburst and Luminary: An Apollo Memoir – Don Eyles” as a fascinating description of the human-machine interaction in the case of the Apollo program
dealing with discontinuities far better than a traditional lender, due to the real-time vs historic data approach
binary discontinuity vs “seismograph before a volcanic eruption” phenomena in a risk context
shoutouts for Abound/Render and vision going forwards
the regulator as a force that will end up pushing banks towards using this type of data
shoutouts for new staff – 55 people at present

And much more.


Share and enjoy!