#blip #interview #salesforce




Paper Review Video: https://youtu.be/X2k7n4FuI7c


Sponsor: Assembly AI


https://www.assemblyai.com/?utm_sourc...




This is an interview with Junnan Li and Dongxu Li, authors of BLIP and researchers at Salesforce Research.


Cross-modal pre-training has been all the rage lately in deep learning, especially the joint training of vision and language models. However, there are a number of issues, such as low-quality datasets that limit the performance of any model trained on them, and the fact that purely contrastively pre-trained models cannot easily be fine-tuned for many downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and thereby obtains a much more versatile model, which the paper immediately uses to caption, filter, and clean web data, and thus bootstrap its own training dataset to improve performance even further!
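To make the bootstrapping idea concrete, here is a minimal Python sketch of the caption-and-filter loop. The interfaces (captioner.generate, filter_model.match_score, the threshold) are illustrative assumptions for this description, not the authors' actual code; see the official repo linked below for the real implementation.

    def bootstrap_dataset(web_pairs, human_pairs, captioner, filter_model, threshold=0.5):
        # web_pairs:    noisy (image, caption) pairs scraped from the web
        # human_pairs:  trusted human-annotated pairs (e.g. COCO), kept as-is
        # captioner:    model that generates a synthetic caption for an image
        # filter_model: model that scores how well an image and a caption match
        cleaned = list(human_pairs)
        for image, web_caption in web_pairs:
            synthetic_caption = captioner.generate(image)
            # Keep whichever caption(s) the filter judges to actually match the image.
            for caption in (web_caption, synthetic_caption):
                if filter_model.match_score(image, caption) > threshold:
                    cleaned.append((image, caption))
        return cleaned

The cleaned dataset is then used to pre-train the next model, which is what gives the "bootstrapping" effect discussed in the interview.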




OUTLINE:


0:00 - Intro


0:40 - Sponsor: Assembly AI


1:30 - Start of Interview


2:30 - What's the pitch?


4:40 - How did data bootstrapping come into the project?


7:10 - How big of a problem is data quality?


11:10 - Are the captioning & filtering models biased towards COCO data?


14:40 - Could the data bootstrapping be done multiple times?


16:20 - What was the evolution of the BLIP architecture?


21:15 - Are there additional benefits to adding language modelling?


23:50 - Can we imagine a modular future for pre-training?


29:45 - Diving into the experimental results


42:40 - What did and did not work out during the research?


45:00 - How is research life at Salesforce?


46:45 - Where do we go from here?




Paper: https://arxiv.org/abs/2201.12086


Code: https://github.com/salesforce/BLIP


Demo: https://huggingface.co/spaces/Salesfo...




Abstract:


Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL.




Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi




Links:


TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick


YouTube: https://www.youtube.com/c/yannickilcher


Twitter: https://twitter.com/ykilcher


Discord: https://discord.gg/4H8xxDF


BitChute: https://www.bitchute.com/channel/yann...


LinkedIn: https://www.linkedin.com/in/ykilcher


BiliBili: https://space.bilibili.com/2017636191




If you want to support me, the best thing to do is to share out the content :)




If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):


SubscribeStar: https://www.subscribestar.com/yannick...


Patreon: https://www.patreon.com/yannickilcher


Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq


Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
