Discussing Prompt Engineering and recent OpenAI developments with Andrew Mayne, former OpenAI Creative Apps and Science Communicator


 


Timestamps:


00:00:00 - Teaser Reel Intro


00:01:01 - Intro / Andrew's background


00:02:49 - What was it like working at OpenAI when you first joined?


00:12:59 - Was Andrew basically one of the earliest Prompt Engineers?


00:14:04 - How Andrew Hacked his way into a tech job at OpenAI


00:17:08 - Parallels between Hollywood and Tech jobs


00:20:58 - Parallels between the world of Magic and working at OpenAI


00:25:00 - What was OpenAI like in the Early Days?


00:30:24 - Why it was hard promoting GPT-3 early on


00:31:00 - How would you describe the current 'instruction age' of prompt design?


00:35:22 - What was GPT-4 like freshly trained?


00:39:00 - Is there anything different about the raw base model without RLHF?


00:42:00 - Optimizations that go into Language models like GPT-4


00:43:30 - What was it like using DALL-E 3 very early on?


00:44:38 - Do you know who came up with the 'armchair in the shape of an avocado' prompt at OpenAI?


00:45:48 - Did you experience 'DALL-E Dreams' as a part of the DALL-E 2 beta?


00:47:16 - How else has prompt design changed?


00:49:27 - How has prompt design changed because of ChatGPT?


00:52:40 - How to get ChatGPT to mimic and emulate personalities better?


00:54:30 - Mimicking Personalities II (How to do Style with ChatGPT)


00:56:40 - Fine-Tuning ChatGPT to Mimic Elon Musk


00:59:44 - How do you get ChatGPT to come up with novel and brilliant ideas?


01:02:40 - How do you get ChatGPT to get away from conventional answers?


01:05:14 - Will we ever get single-shot, real true novelty from LLMs?


01:10:05 - Prompting for ChatGPT Voice Mode


01:12:20 - Possibilities and Prompting for GPT-4 Vision


01:15:45 - GPT-4 Vision Use Cases/Startup Ideas


01:21:37 - Does multimodality make language models better or are the benefits marginal?


01:24:00 - Intuitively, has multimodality improved the world model of LLMs like GPT-4?


01:25:33 - What would it take for ChatGPT to write half of your next book?


01:29:10 - Qualitatively, what would it take for a book written by AI to convince you? What are the characteristics?


01:31:30 - Could an LLM mimic Andrew Mayne's writing style?


01:37:49 - Jailbreaking ChatGPT


01:41:12 - What's the next era of prompt engineering?


01:45:50 - How have custom instructions changed the game?


01:54:41 - How far do you think we are from asking a model how to make 10 million dollars and getting back a legit answer?


02:01:07 - Part II - Making Money with LLMs


02:11:32 - How do you make a chat bot more reliable and safe?


02:12:12 - How do you get ChatGPT to consistently remember criteria and work within constraints?


02:12:45 - What about DALL-E? How do you get it to better create within constraints?


02:14:14 - What's your prompt practice like?


02:15:10 - Do you intentionally sit down and practice writing prompts?


02:16:45 - How do you build an intuition around prompt design for an LLM?


02:20:00 - How do you like to iterate on prompts? Do you have a process?


02:21:45 - How do you know when you've hit the ceiling with a prompt?


02:24:00 - How do you know when a single-line prompt has room to improve?


02:26:40 - Do you actually need to know OpenAI's training data? What are some ways to mitigate this?


02:30:40 - What are your thoughts on automated prompt writing/optimization?


02:33:20 - How do you get a job as a prompt engineer? What makes a top tier prompt engineer different from an everyday user?


02:37:20 - How do you think about scaling laws as a prompt engineer?


02:39:00 - Effortless Prompt Design


02:40:52 - What are some research areas that would get you a job at OpenAI?


02:43:30 - The Research Possibilities of Optimization & Inference


02:45:59 - If you had to guess future capabilities of GPT-5 what would they be?


02:50:16 - What are some capabilities that got trained out of GPT-4 for ChatGPT?


02:51:10 - Are there any specific capabilities you could imagine for GPT-5? Why are they so hard to predict?


02:56:06 - Why is it hard to predict future LLM capabilities? (Part II)


02:59:47 - What made you want to leave OpenAI and start your own consulting practice?


03:05:29 - Any remaining advice for creatives, entrepreneurs, prompt engineers?


03:09:25 - Closing


 


Subscribe to the Multimodal By Bakz T. Future Podcast!


Spotify - https://open.spotify.com/show/7qrWSE7ZxFXYe8uoH8NIFV


Apple Podcasts - https://podcasts.apple.com/us/podcast/multimodal-by-bakz-t-future/id1564576820


Google Podcasts - https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Jha3p0ZnV0dXJlL2ZlZWQueG1s


Stitcher - https://www.stitcher.com/show/multimodal-by-bakz-t-future


Other Podcast Apps (RSS Link) - https://feed.podbean.com/bakztfuture/feed.xml


 


Connect with me:



YouTube - https://www.youtube.com/bakztfuture


Substack Newsletter - https://bakztfuture.substack.com


Twitter - https://www.twitter.com/bakztfuture


Instagram - https://www.instagram.com/bakztfuture


Github - https://www.github.com/bakztfuture

