BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding & Generation
#blip #review #ai
Cross-modal pre-training has been all the rage lately in deep learning, especially training vision and language models together. However, there are a number of issues, such as low-quality datasets that limit the performance of any model trained on them, and the fact that pure contrastive pre-training cannot easily be fine-tuned for most downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, clean and thus bootstrap its own dataset to improve performance even more!
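To make the bootstrapping idea concrete, here is a minimal sketch of the captioning-and-filtering (CapFilt) loop described in the video. The names `captioner` and `itm_filter` are hypothetical stand-ins for BLIP's fine-tuned decoder and image-text-matching head; this is an illustration of the idea, not the paper's actual code.

```python
# Sketch of CapFilt-style dataset bootstrapping (hypothetical helpers, not BLIP's code).

def bootstrap_dataset(web_pairs, human_pairs, captioner, itm_filter, threshold=0.5):
    """Return a cleaned dataset from noisy web pairs plus trusted human-annotated pairs."""
    cleaned = list(human_pairs)  # human-annotated pairs are kept as-is
    for image, web_caption in web_pairs:
        synthetic_caption = captioner(image)  # generate a fresh caption for the web image
        for caption in (web_caption, synthetic_caption):
            # keep a pair only if the matching head judges the caption consistent with the image
            if itm_filter(image, caption) > threshold:
                cleaned.append((image, caption))
    return cleaned
```

The cleaned dataset is then used to pre-train the next model, which is how the paper improves its own training data.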
Sponsor: Zeta Alpha
https://zeta-alpha.com
Use code YANNIC for 20% off!
OUTLINE:
0:00 - Intro
0:50 - Sponsor: Zeta Alpha
3:40 - Paper Overview
6:40 - Vision-Language Pre-Training
11:15 - Contributions of the paper
14:30 - Model architecture: many parts for many tasks
19:50 - How data flows in the model
26:50 - Parameter sharing between the modules
29:45 - Captioning & Filtering bootstrapping
41:10 - Fine-tuning the model for downstream tasks
Paper: https://arxiv.org/abs/2201.12086
Code: https://github.com/salesforce/BLIP
Demo: https://huggingface.co/spaces/Salesfo...
Abstract:
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL.
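If you want to try the released captioning model yourself (beyond the demo linked above), here is a minimal sketch using the Hugging Face transformers integration of BLIP; the checkpoint name and generation settings are assumptions on my part, so check the repo linked above for the authors' own instructions.

```python
# Minimal captioning sketch with a public BLIP checkpoint (assumes transformers, Pillow, requests installed).
from PIL import Image
import requests
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```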
Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191
If you want to support me, the best thing to do is to share out the content :)