Yujian Tang is a Developer Advocate at Zilliz, where he builds proof-of-concept applications for large language models and writes tutorials about them. He also gives talks on vector databases, LLM apps, semantic search, and adjacent topics.


MLOps Podcast #206 with Yujian Tang, Developer Advocate at Zilliz: "RAG Has Been Oversimplified," brought to us by our Premium Brand Partner, Zilliz.

// Abstract
Retrieval Augmented Generation (RAG) is often presented as simple to build, but its practical application reveals complexities beyond that apparent simplicity. This talk delves into the nuanced challenges and considerations developers encounter when working with RAG, offering a candid exploration of the intricacies often overlooked in the broader narrative.

// Bio
Yujian Tang is a Developer Advocate at Zilliz. He has a background as a software engineer working on AutoML at Amazon. Yujian studied Computer Science, Statistics, and Neuroscience, publishing research papers at conferences including IEEE Big Data. He enjoys drinking bubble tea, spending time with family, and being near water.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: zilliz.com

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Yujian on LinkedIn: https://www.linkedin.com/in/yujiantang

Timestamps:
[00:00] Yujian's preferred coffee
[00:17] Takeaways
[02:42] Please like, share, and subscribe to our MLOps channels!
[02:55] The hero of the LLM space
[05:42] Embeddings into vector databases
[09:15] What counts as a large vs. small LLM
[10:10] QA bot behind the scenes
[13:59] Fun fact: getting more context
[17:05] Does RAG eliminate LLM hallucination?
[18:50] Critical parts of the RAG stack
[19:57] Building citations
[20:48] Difference between context and relevance
[26:11] Missing prompt tooling
[27:46] Similarity search
[29:54] RAG Optimization
[33:03] Interacting with LLMs and tradeoffs
[35:22] What RAG is not suited for
[39:33] Fashion app
[42:43] Multimodal RAG vs. LLM RAG
[44:18] Multimodal use cases
[46:50] Video citations
[47:31] Wrap up