Hello guys, in this episode I explain how we can scale the context window of an LLM to more than 1M tokens using Ring Attention. I also discuss whether RAG is dead or not, given these advances in context window size.
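For anyone curious about the core idea before listening: below is a minimal NumPy sketch of the ring pattern (my own illustration under simplifying assumptions, not the paper's JAX implementation). Each simulated device keeps its own query block fixed while the key/value blocks rotate around a ring, and a running softmax accumulates the result so no device ever materializes the full attention matrix.

```python
import numpy as np

def ring_attention(q_blocks, k_blocks, v_blocks):
    """Illustrative ring attention over a list of per-'device' blocks.
    Each device keeps its query block; K/V blocks hop around the ring,
    and an online-softmax accumulation keeps memory per device constant."""
    n = len(q_blocks)                      # number of devices / sequence blocks
    d = q_blocks[0].shape[-1]
    # Per-device accumulators: unnormalized output, softmax denominator, running max
    out = [np.zeros_like(q) for q in q_blocks]
    denom = [np.zeros(q.shape[0]) for q in q_blocks]
    run_max = [np.full(q.shape[0], -np.inf) for q in q_blocks]

    k_ring, v_ring = list(k_blocks), list(v_blocks)
    for _ in range(n):                     # n hops so every K/V block visits every device
        for i in range(n):
            scores = q_blocks[i] @ k_ring[i].T / np.sqrt(d)
            new_max = np.maximum(run_max[i], scores.max(axis=-1))
            scale = np.exp(run_max[i] - new_max)       # rescale previous accumulators
            p = np.exp(scores - new_max[:, None])
            out[i] = out[i] * scale[:, None] + p @ v_ring[i]
            denom[i] = denom[i] * scale + p.sum(axis=-1)
            run_max[i] = new_max
        # rotate K/V blocks one step around the ring
        k_ring = k_ring[1:] + k_ring[:1]
        v_ring = v_ring[1:] + v_ring[:1]
    return [o / s[:, None] for o, s in zip(out, denom)]
```

On real hardware the block rotation is overlapped with the attention computation, which is why the memory cost per device depends on the block size rather than the total sequence length, and that is what lets the context scale past 1M tokens.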




Lost in the Middle paper: https://arxiv.org/pdf/2307.03172.pdf


Gemini 1.5 technical report: https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf


Ring Attention paper: https://arxiv.org/pdf/2310.01889.pdf


Podcast Instagram: https://www.instagram.com/podcast.lifewithai


Podcast LinkedIn: https://www.linkedin.com/company/life-with-ai