2023 ML Pulse Report
Joining us today are our panelists, Duncan Curtis, SVP of AI products and technology at Sama, and Jason Corso, a professor of robotics, electrical engineering, and computer science at the University of Michigan. Jason is also the chief science officer at Voxel51, an AI software company specializing in developer tools for machine learning. We use today’s conversation to discuss the findings of the latest Machine Learning (ML) Pulse Report, published each year by our friends at Sama. This year’s report focused on the role of generative AI, surveying thousands of practitioners in the space. Its findings include feedback on how respondents are measuring their models’ effectiveness, how confident they feel that their models will survive production, and whether they believe generative AI is worth the hype. Tuning in, you’ll hear our panelists’ thoughts on key questions in the report and its findings, along with their suggested solutions for some of the biggest challenges faced by professionals in the AI space today. We also get into a host of fascinating topics, like the opportunities presented by synthetic data, the latent space in language processing approaches, the iterative nature of model development, and much more. Be sure to tune in for all the latest insights from the ML Pulse Report!

Key Points From This Episode:

Introducing today’s panelists, Duncan Curtis and Jason Corso.
An overview of what the Machine Learning (ML) Pulse Report focuses on.
Breaking down what the term generative means in AI.
Our thoughts on key findings from the ML Pulse Report.
What respondents, and our panelists, think of the hype around generative AI.
Unpacking one of the biggest advances in generative AI: accessibility.
Insights on cloud versus local in an AI context.
Generative AI use cases in the field of computer vision.
The powerful opportunities presented by synthetic data.
Why the role of human feedback in synthetic data is so important.
Finding a middle ground between human language and machine understanding.
Unpacking the notion of latent space in language processing approaches.
How confident respondents feel that their models will survive production.
The challenges of predicting how well a model will perform.
An overview of the biggest challenges reported by respondents.
Suggested solutions from panelists on key challenges from the report.
How respondents are measuring the effectiveness of their models.
What Duncan and Jason focus on to measure success.
Career advice from our panelists on making meaningful contributions to this space.

Quotes:

“It's really hard to know how well your model is going to do.” — Jason Corso [0:27:10]

“With debugging and detecting errors in your data, I would definitely say look at some of the tooling that can enable you to move more quickly and understand your data better.” — Duncan Curtis [0:33:55]

“Work with experts – there's no replacement for good experience when it comes to actually boxing in a problem, especially in AI.” — Jason Corso [0:35:37]

“It's not just about how your model performs. It's how your model performs when it's interacting with the end user.” — Duncan Curtis [0:41:11]

“Remember, what we do in this field, and in all fields really, is by humans, for humans, and with humans. And I think if you miss that idea [then] you will not achieve – either your own potential, the group you're working with, or the tool.” — Jason Corso [0:48:20]

Links Mentioned in Today’s Episode:
Duncan Curtis on LinkedIn
Jason Corso

Jason Corso on LinkedIn

Voxel51

2023 ML Pulse Report

ChatGPT

Bard

DALL·E 3

How AI Happens

Sama