Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained)
#universalcomputation #pretrainedtransformers #finetuning
Large-scale pre-training followed by fine-tuning is a common recipe for success with transformer models in machine learning. However, most such transfer learning is done when a model is pre-trained on the same or a very similar modality as the final task. This paper demonstrates that transformers can be fine-tuned on completely different modalities, for example from language to vision. Moreover, the authors show that this can be done while freezing all self-attention and feedforward layers, tuning less than 0.1% of all parameters. The paper further claims that language modeling is a superior pre-training task for such cross-domain transfer, and it supports these points with various ablation studies.
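To make the setup concrete, here is a minimal sketch of the frozen-pretrained-transformer recipe in PyTorch, assuming the GPT-2 checkpoint from the HuggingFace transformers library; the input projection and output head are illustrative names, not the authors' exact code:

import torch
import torch.nn as nn
from transformers import GPT2Model  # assumes the HuggingFace transformers package

class FrozenPretrainedTransformer(nn.Module):
    # Sketch of the FPT recipe: keep GPT-2's self-attention and feedforward
    # weights frozen; train only the layer norms plus new input/output layers.
    def __init__(self, input_dim, num_classes):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        for name, param in self.gpt2.named_parameters():
            # GPT-2's layer norms are named ln_1, ln_2 (per block) and ln_f (final).
            param.requires_grad = "ln" in name
        hidden = self.gpt2.config.n_embd
        self.input_proj = nn.Linear(input_dim, hidden)      # new, trainable
        self.output_head = nn.Linear(hidden, num_classes)   # new, trainable

    def forward(self, x):  # x: (batch, seq_len, input_dim)
        h = self.gpt2(inputs_embeds=self.input_proj(x)).last_hidden_state
        return self.output_head(h[:, -1])  # classify from the last position

model = FrozenPretrainedTransformer(input_dim=16, num_classes=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")

With GPT-2 small (roughly 124M parameters), the layer norms plus these small input/output layers come out well under 0.1% of the total, which is where the figure above comes from.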
OUTLINE:
0:00 - Intro & Overview
2:00 - Frozen Pretrained Transformers
4:50 - Evaluated Tasks
10:05 - The Importance of Training LayerNorm
17:10 - Modality Transfer
25:10 - Network Architecture Ablation
26:10 - Evaluation of the Attention Mask
27:20 - Are FPTs Overfitting or Underfitting?
28:20 - Model Size Ablation
28:50 - Is Initialization All You Need?
31:40 - Full Model Training Overfits
32:15 - Again the Importance of Training LayerNorm
33:10 - Conclusions & Comments
Paper: https://arxiv.org/abs/2103.05247
Code: https://github.com/kzl/universal-comp...
Abstract:
We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language improves performance and compute efficiency on non-language downstream tasks. In particular, we find that such pretraining enables FPT to generalize in zero-shot to these modalities, matching the performance of a transformer fully trained on these tasks.
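As a concrete example of the "numerical computation" tasks mentioned in the abstract, the paper includes a bit-XOR benchmark; a toy data generator for such a task might look like the following (an illustrative sketch, not the authors' code):

import torch

def make_bit_xor_batch(batch_size=64, n_bits=5):
    # Toy generator (hypothetical helper): two random bit strings are
    # concatenated into one input sequence; the target is their elementwise XOR.
    a = torch.randint(0, 2, (batch_size, n_bits))
    b = torch.randint(0, 2, (batch_size, n_bits))
    x = torch.cat([a, b], dim=1).float().unsqueeze(-1)  # (batch, 2*n_bits, 1)
    y = a ^ b                                            # (batch, n_bits), values in {0, 1}
    return x, y

A frozen pretrained transformer like the sketch above (with input_dim=1) plus a small per-bit classification head would then be trained on such (x, y) pairs.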
Authors: Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-ki...
BiliBili: https://space.bilibili.com/1824646584
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Category: Science & Technology
Sensitivity: Normal - Content that is suitable for ages 16 and over