OpenAI's new "dangerous" GPT-2 language model
Practical AI: Machine Learning, Data Science
English - February 25, 2019 20:15 - 40 minutes - 55.6 MB
This week we discuss GPT-2, a new transformer-based language model from OpenAI that has everyone talking. It’s capable of generating incredibly realistic text, and the AI community has lots of concerns about potential malicious applications. We help you understand GPT-2 and we discuss ethical concerns, responsible release of AI research, and resources that we have found useful in learning about language models.
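The "transformer" in "transformer-based" refers to the architecture built around self-attention. As a rough illustration (not OpenAI's code), here is a minimal NumPy sketch of scaled dot-product self-attention with the causal mask that decoder-only models like GPT-2 use, so each token can only attend to itself and earlier tokens; all weight matrices and dimensions below are made up for the example:

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention with a causal mask,
    as used in decoder-only transformers like GPT-2 (single head, no bias)."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (seq, seq) similarity scores
    # Causal mask: block attention to future positions
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # Softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                       # toy sizes; GPT-2 uses 768+ dims
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out, weights = causal_self_attention(x, Wq, Wk, Wv)
print(out.shape)      # (4, 8)
print(weights[0])     # first token can only attend to itself: [1. 0. 0. 0.]
```

The real model stacks many of these layers with multiple heads, layer normalization, and feed-forward blocks, but the causal masking above is what makes the model autoregressive and therefore usable for text generation.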
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server - head to linode.com/changelog
Rollbar – We move fast and fix things because of Rollbar. Resolve errors in minutes. Deploy with confidence. Learn more at rollbar.com/changelog.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Featuring:
Chris Benson – Twitter, GitHub, LinkedIn, Website
Daniel Whitenack – Twitter, GitHub, Website
Show Notes:
Relevant learning resources:
Jay Alammar “Illustrated” blog articles:
The Illustrated Transformer
The Illustrated BERT, ELMo, and co.
Machine Learning Explained blog:
An In-Depth Tutorial to AllenNLP (From Basics to ELMo and BERT)
Paper Dissected: “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” Explained
References/notes:
GPT-2 blog post from OpenAI
GPT-2 Paper
GPT-2 GitHub Repo
GPT-2 PyTorch implementation
Episode 22 of Practical AI about BERT
OpenAI’s GPT-2: the model, the hype, and the controversy (Towards Data Science)
The AI Text Generator That’s Too Dangerous to Make Public (Wired)
“Attention Is All You Need” (the original transformer paper)
Preparing for malicious uses of AI (OpenAI blog)
Something missing or broken? PRs welcome!