#78- RAFT: Why just use RAG if you can also fine-tune?
Life with AI
English - March 21, 2024 08:01 - 9 minutes - 13 MB
Previous Episode: #77- Ring Attention and 1M context window, is RAG dead?
Next Episode: #79- LoRA and QLoRA.
Hello, in this episode I talk about Retrieval Augmented Fine Tuning (RAFT), a paper that proposes a technique combining domain-specific fine-tuning with RAG so that LLMs answer questions over domain-specific documents more accurately.
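To make the idea concrete, here is a minimal Python sketch of how RAFT-style training examples might be assembled: each question is paired with its golden document plus sampled distractor documents, and a fraction of examples drop the golden document so the model also learns to answer from knowledge absorbed during fine-tuning rather than from retrieval alone. The names here (make_example, doc_pool, P_GOLDEN) are illustrative assumptions, not code from the paper.

import random

P_GOLDEN = 0.8  # assumed fraction of examples that keep the golden (oracle) document

def make_example(question, golden_doc, answer, doc_pool, n_distractors=3):
    # Sample distractor documents, i.e. documents that do not answer the question.
    distractors = random.sample(
        [d for d in doc_pool if d != golden_doc], n_distractors)
    # With probability 1 - P_GOLDEN, omit the golden document entirely so the
    # model cannot always rely on retrieval having found the right passage.
    docs = list(distractors)
    if random.random() < P_GOLDEN:
        docs.append(golden_doc)
    random.shuffle(docs)
    context = "\n\n".join(f"[doc {i}] {d}" for i, d in enumerate(docs))
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    # The paper trains on chain-of-thought answers that cite the golden
    # document; this sketch uses just the final answer as the target.
    return {"prompt": prompt, "completion": " " + answer}

Standard supervised fine-tuning on these prompt/completion pairs then follows.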
In the episode I also talk about another paper that is also called RAFT, this time Reward rAnked Fine Tuning, which proposes a technique to align LLMs with human preferences, as RLHF does, but without the convergence problems of reinforcement learning.
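The core loop of that second RAFT is simple enough to sketch: sample several candidate responses per prompt, score them with a reward model, keep the best, and run ordinary supervised fine-tuning on the survivors. In this sketch, generate, reward_model, and supervised_finetune are hypothetical stand-ins for your own sampling, scoring, and training routines.

def raft_iteration(model, prompts, k=8):
    # One iteration of reward-ranked fine-tuning (best-of-k filtering).
    best = []
    for prompt in prompts:
        # Sample k candidate responses from the current model (stand-in call).
        candidates = [generate(model, prompt) for _ in range(k)]
        # Rank candidates by reward and keep only the highest-scoring one.
        top = max(candidates, key=lambda r: reward_model(prompt, r))
        best.append({"prompt": prompt, "completion": top})
    # Plain supervised fine-tuning on the filtered set replaces the PPO
    # policy-gradient update used in standard RLHF.
    return supervised_finetune(model, best)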
Retrieval Augmented Fine Tuning: https://arxiv.org/abs/2403.10131v1
Reward rAnked Fine Tuning: https://arxiv.org/pdf/2304.06767.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
LinkedIn of the podcast: https://www.linkedin.com/company/life-with-ai