Neil Sahota is an AI Advisor to the UN, co-founder of the UN’s AI for Good initiative, IBM Master Inventor, and author of Own the AI Revolution. In today’s episode, Neil shares some of the valuable lessons he learned during his first experience working in the AI world, which involved training the Watson computer system. We then dive into a number of different topics, ranging from Neil’s thoughts on synthetic data and the language-learning capacity of AI versus a human child, to an overview of the AI for Good initiative and what Neil believes a “cyborg future” could entail!

Key Points From This Episode:

A few of the thousands of data points that humans use to make rapid judgments.
Neil’s introduction to the world of AI.
How data collection changed AI, using the Watson computer system as an example.
Lessons that Neil learned through training Watson.
The relative importance of confidence levels with regard to training AI in different fields.
Why reaching a 99.9% confidence level is not realistic.
Examples of cases where synthetic data is and isn’t helpful.
A major difference between the language-learning trajectory of AI versus a human child.
Areas that Neil believes AI is best suited for.
The focus of the United Nations’ AI for Good initiative.
The UN’s approach to bringing AI technologies to remote parts of the world.
Benefits of being exposed to technology at a young age.
The cyborg future: what Neil believes this is going to look like.
Why Neil is excited about AI augmentation for human creativity.

Tweetables:

“We, as human beings, have to make really rapid judgement calls, especially in sports, but there’s still thousands of data points in play and the best of us can only see seven to 12 in real time.” — @neil_sahota [0:01:21]

“Synthetic data can be a good bridge if we’re in a very closed ecosystem.” — @neil_sahota [0:11:47]

“For an AI system, if it gets exposed to about 100 billion words it becomes proficient and fluent in a language. If you think about a human child, it only needs about 30 billion words. So, it’s not the volume that matters, there’s certain words or phrases that trigger the cognitive learning for language. The problem is that we just don’t understand what that is.” — @neil_sahota [0:14:22]

“Things that are more hard science, or things that have the least amount of variability, are the best things for AI systems.” — @neil_sahota [0:16:26]

“Local problems have global solutions.” — @neil_sahota [0:20:06]

Links Mentioned in Today’s Episode:

Neil Sahota

Neil Sahota on LinkedIn

Own the A.I. Revolution

AI for Good
