One Model For All The Tasks - BLIP (Author Interview)
#blip #interview #salesforce
Paper Review Video: https://youtu.be/X2k7n4FuI7c
Sponsor: Assembly AI
https://www.assemblyai.com/?utm_sourc...
This is an interview with Junnan Li and Dongxu Li, authors of BLIP and members of Salesforce Research.
Cross-modal pre-training has been all the rage lately in deep learning, especially joint training of vision and language models. However, there are a number of issues, such as low-quality datasets that limit the performance of any model trained on them, and the fact that purely contrastively pre-trained models cannot be easily fine-tuned for many downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, and clean its own dataset, thus bootstrapping performance even further!
OUTLINE:
0:00 - Intro
0:40 - Sponsor: Assembly AI
1:30 - Start of Interview
2:30 - What's the pitch?
4:40 - How did data bootstrapping come into the project?
7:10 - How big of a problem is data quality?
11:10 - Are the captioning & filtering models biased towards COCO data?
14:40 - Could the data bootstrapping be done multiple times?
16:20 - What was the evolution of the BLIP architecture?
21:15 - Are there additional benefits to adding language modelling?
23:50 - Can we imagine a modular future for pre-training?
29:45 - Diving into the experimental results
42:40 - What did and did not work out during the research?
45:00 - How is research life at Salesforce?
46:45 - Where do we go from here?
Paper: https://arxiv.org/abs/2201.12086
Code: https://github.com/salesforce/BLIP
Demo: https://huggingface.co/spaces/Salesfo...
Abstract:
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL.
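For a rough picture of the caption bootstrapping (CapFilt) step described in the abstract, here is a minimal Python sketch. The captioner and filter_model objects and their finetune/generate/matches methods are illustrative stand-ins, not the released BLIP API:

def bootstrap_dataset(web_pairs, captioner, filter_model, clean_pairs):
    """Build a cleaner pre-training set from noisy (image, web_caption) pairs.

    web_pairs:   (image, noisy_caption) pairs scraped from the web
    clean_pairs: human-annotated pairs (e.g. COCO) used to fine-tune both modules
    """
    # Both modules start from the pre-trained BLIP model and are
    # fine-tuned on the small, clean dataset.
    captioner.finetune(clean_pairs)      # image -> synthetic caption (generation head)
    filter_model.finetune(clean_pairs)   # (image, caption) -> match / no match (matching head)

    bootstrapped = []
    for image, web_caption in web_pairs:
        synthetic_caption = captioner.generate(image)
        # Keep whichever captions the filter judges to actually match the image.
        for caption in (web_caption, synthetic_caption):
            if filter_model.matches(image, caption):
                bootstrapped.append((image, caption))
    # The cleaned web data plus the human-annotated data form the new pre-training set.
    return bootstrapped + list(clean_pairs)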
Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi