Yannic Kilcher

#flamingo #mlnews #tech

Your updates directly from the state of the art in Machine Learning!

OUTLINE:
0:00 - Intro
0:30 - DeepMind's Flamingo: Unified Vision-Language Model
8:25 - LiT: Locked Image Tuning
10:20 - Jurassic X & MRKL Systems
15:05 - Helpful Things
22:40 - This AI does not exist

References:
DeepMind's Flamingo: Unified Vision-Language Model
https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model
https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/tackling-multiple-tasks-with-a-single-visual-language-model/flamingo.pdf
https://twitter.com/Inoryy/status/1522621712382234624

LiT: Locked Image Tuning
https://ai.googleblog.com/2022/04/locked-image-tuning-adding-language.html
https://google-research.github.io/vision_transformer/lit/

Jurassic X & MRKL Systems
https://www.ai21.com/blog/jurassic-x-crossing-the-neuro-symbolic-chasm-with-the-mrkl-system#reading
https://arxiv.org/pdf/2205.00445.pdf
https://arxiv.org/pdf/2204.10019.pdf
https://studio.ai21.com/jurassic-x

StyleGAN Human
https://stylegan-human.github.io/
https://github.com/stylegan-human/StyleGAN-Human
https://huggingface.co/spaces/hysts/StyleGAN-Human

Helpful Things
https://github.com/rish-16/grafog
https://huggingface.co/bertin-project/bertin-gpt-j-6B
https://github.com/pytorch/torchdistx
https://pytorch.org/torchdistx/latest/fake_tensor.html
https://github.com/Netflix/vectorflow
https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/
https://twitter.com/DeepMind/status/1517146462571794433
https://github.com/ai-forever/mgpt
https://github.com/cleanlab/cleanlab
https://efficientdlbook.com/
https://minihack-editor.github.io/
https://mugen-org.github.io/
https://www.amazon.science/blog/amazon-releases-51-language-dataset-for-language-understanding
https://github.com/phuselab/openFACS
https://medium.com/pytorch/avalanche-and-end-to-end-library-for-continual-learning-based-on-pytorch-a99cf5661a0d

This AI does not exist
https://thisaidoesnotexist.com/

Links:
Merch: https://ykilcher.com/merch
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m

#mlnews #dalle #gpt3

An inside look at what's happening in the ML world!

Sponsor: Weights & Biases
https://wandb.me/yannic

OUTLINE:
0:00 - Intro
0:20 - Sponsor: Weights & Biases
1:40 - Meta AI releases OPT-175B
4:55 - CoCa: New CLIP-Competitor
8:15 - DALL-E Mega is training
10:05 - TorToiSe TTS is amazing!
11:50 - Investigating Vision Transformers
12:50 - Hugging Face Deep RL class launched
13:40 - Helpful Things
17:00 - John Deere's driverless tractors

References:
Meta AI releases OPT-175B
https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/
https://arxiv.org/abs/2205.01068
https://arxiv.org/pdf/2205.01068.pdf
https://github.com/facebookresearch/metaseq/tree/main/projects/OPT
https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/chronicles/OPT175B_Logbook.pdf
https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
https://twitter.com/yoavgo/status/1522150063815987201

CoCa: New CLIP-Competitor
https://arxiv.org/abs/2205.01917
https://arxiv.org/pdf/2205.01917.pdf

DALL-E Mega is training
https://twitter.com/borisdayma
https://twitter.com/borisdayma/status/1521891895001112577
https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega--VmlldzoxODMxMDI2

TorToiSe TTS is amazing!
https://github.com/neonbjb/tortoise-tts
https://nonint.com/static/tortoise_v2_examples.html
https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR
https://github.com/neonbjb

Investigating Vision Transformers
https://github.com/sayakpaul/probing-vits/
https://twitter.com/RisingSayak/status/1515918406171914240
https://keras.io/examples/vision/probing_vits/
https://github.com/sayakpaul/probing-vits/tree/main/notebooks

Hugging Face Deep RL class launched
https://github.com/huggingface/deep-rl-class

Helpful Things
https://merantix-momentum.com/technology/squirrel/
https://github.com/merantix-momentum/squirrel-core
https://pyscript.net/
https://github.com/google-research/big_vision
https://deepsportradar.github.io/challenge.html
https://github.com/DeepSportRadar/camera-calibration-challenge
https://twitter.com/alekseykorshuk/status/1515989357961920514
https://github.com/AlekseyKorshuk/huggingnft

John Deere's driverless tractors
https://thenextweb.com/news/john-deere-slowly-becoming-one-worlds-most-important-ai-companies
https://tractorhacking.github.io/


#nft #gan #ai

Today we build our own AI that can create as many bored apes as we want! Fungibility for everyone!

Try the model here: https://huggingface.co/spaces/ykilcher/apes
or here: https://ykilcher.com/apes
Files & Models here: https://huggingface.co/ykilcher/apes/tree/main
Code here: https://github.com/yk/apes-public (for the "what's your ape" app, look for the file interface_projector.py)

This video is sponsored by BrightData, use this link for free credits:
https://brightdata.grsm.io/yannickilcher

OUTLINE:
0:00 - Introduction
2:05 - Generative Adversarial Networks
3:40 - Scraping Opensea with BrightData
7:55 - Training the GAN
11:35 - Here are the results!
15:20 - Diving deeper into BrightData

References:
Stylegan 3 imagery: https://nvlabs.github.io/stylegan3/
Bored Ape Yacht Club NFT Collection: https://opensea.io/collection/boredapeyachtclub
Better GANFT model: https://medium.com/@nathancooperjones/these-bored-apes-do-not-exist-6bed2c73f02c
Abstract AI-created apes: https://opensea.io/collection/gan-apes-nft
https://mobile.twitter.com/gannft
Another good model: https://twitter.com/cyrilzakka/status/1463944040878071811
StyleGAN2 versions: https://thispersondoesnotexist.com/
https://thissneakerdoesnotexist.com/
https://thischairdoesnotexist.com/
GANs: https://en.wikipedia.org/wiki/Generative_adversarial_network
https://arxiv.org/pdf/1406.2661.pdf
StyleGAN3: https://nvlabs.github.io/stylegan3/
StyleGAN2 code: https://github.com/NVlabs/stylegan2-ada-pytorch
CLIP: https://openai.com/blog/clip/
DALL-E 2 images: https://twitter.com/search?q=%23dalle&f=image
My music video: https://www.youtube.com/watch?v=2iq7WXSw26s
BrightData Links: https://brightdata.com/products/data-collector
https://brightdata.com/testimonials
https://brightdata.com/use-cases/adtech
https://brightdata.com/use-cases/social-media-for-marketing
https://brightdata.com/use-cases/ecommerce


#saycan #robots #ai

This is an interview with the authors Brian Ichter, Karol Hausman, and Fei Xia.
Original Paper Review Video: https://youtu.be/Ru23eWAQ6_E
Large Language Models are excellent at generating plausible plans in response to real-world problems, but without interacting with the environment, they have no ability to estimate which of these plans are feasible or appropriate. SayCan combines the semantic capabilities of language models with a bank of low-level skills, which are available to the agent as individual policies to execute. SayCan automatically finds the best policy to execute by considering a trade-off between a policy's ability to progress towards the goal, as judged by the language model, and the policy's probability of executing successfully, as judged by the respective value function. The result is a system that can generate and execute long-horizon action sequences in the real world to fulfil complex tasks.
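The scoring combination described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: `llm_logprob` and `value_fn` are hypothetical stand-ins for the language model scorer and the per-skill value functions.

```python
import math

def saycan_select(instruction, state, skills, llm_logprob, value_fn):
    """Pick the skill maximizing p_LLM(skill | instruction) * p_success(skill | state).

    Sketch only: llm_logprob and value_fn are invented interfaces standing in
    for the language model and the learned value functions."""
    best_skill, best_score = None, -1.0
    for skill in skills:
        p_say = math.exp(llm_logprob(instruction, skill))  # semantic relevance ("say")
        p_can = value_fn(skill, state)                      # success probability ("can")
        score = p_say * p_can
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill
```

Multiplying the two terms means a skill is only selected if it is both semantically useful for the instruction and currently feasible in the environment.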

OUTLINE:
0:00 - Introduction & Setup
3:40 - Acquiring atomic low-level skills
7:45 - How does the language model come in?
11:45 - Why are you scoring instead of generating?
15:20 - How do you deal with ambiguity in language?
20:00 - The whole system is modular
22:15 - Going over the full algorithm
23:20 - What if an action fails?
24:30 - Debunking a marketing video :)
27:25 - Experimental Results
32:50 - The insane scale of data collection
40:15 - How do you go about large-scale projects?
43:20 - Where did things go wrong?
45:15 - Where do we go from here?
52:00 - What is the largest unsolved problem in this?
53:35 - Thoughts on the Tesla Bot
55:00 - Final thoughts

Paper: https://arxiv.org/abs/2204.01691
Website: https://say-can.github.io/

Abstract:
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task.

#saycan #robots #ai

Large Language Models are excellent at generating plausible plans in response to real-world problems, but without interacting with the environment, they have no ability to estimate which of these plans are feasible or appropriate.

Sponsor: Zeta Alpha
https://zeta-alpha.com
Use code YANNIC for 20% off!

OUTLINE:
0:00 - Introduction & Overview
3:20 - Sponsor: Zeta Alpha
5:00 - Using language models for action planning
8:00 - Combining LLMs with learned atomic skills
16:50 - The full SayCan system
20:30 - Experimental setup and data collection
21:25 - Some weaknesses & strengths of the system
27:00 - Experimental results

Paper: https://arxiv.org/abs/2204.01691
Website: https://say-can.github.io/

Abstract:
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator.

Authors: Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan

#ai #accel #evolution

Automatic curriculum generation is one of the most promising avenues for Reinforcement Learning today. Multiple approaches have been proposed, each with its own set of advantages and drawbacks. This paper presents ACCEL, which takes the next step in the direction of constructing curricula for generally capable agents. ACCEL combines the adversarial adaptiveness of regret-based sampling methods with the level-editing capabilities usually found in evolutionary methods.
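The core loop, regret-prioritized sampling plus small level edits, can be caricatured as follows. This is a simplified sketch under assumed interfaces (`agent_regret`, `edit`, and `new_level` are hypothetical stand-ins), not the paper's implementation, which also involves replay decisions and approximate regret estimates.

```python
import random

def accel_step(buffer, agent_regret, edit, new_level, k=0.5):
    """One iteration of an ACCEL-style curriculum loop (simplified sketch):
    either mutate a high-regret level from the buffer, or generate a random
    level; keep the result if its regret estimate is competitive."""
    if buffer and random.random() < k:
        level = edit(max(buffer, key=agent_regret))  # edit a frontier level
    else:
        level = new_level()                          # sample a fresh random level
    # Retain levels whose estimated regret is at least the buffer's minimum.
    if not buffer or agent_regret(level) >= min(agent_regret(l) for l in buffer):
        buffer.append(level)
    return level
```

The key property this preserves from regret-based methods is that the curriculum stays near the frontier of the agent's capabilities, while the `edit` operator lets complexity compound gradually as in evolutionary approaches.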

OUTLINE:
0:00 - Intro & Demonstration
3:50 - Paper overview
5:20 - The ACCEL algorithm
15:25 - Looking at the pseudocode
23:10 - Approximating regret
33:45 - Experimental results
40:00 - Discussion & Comments

Website: https://accelagent.github.io
Paper: https://arxiv.org/abs/2203.01302

Abstract:
It remains a significant challenge to train generally capable agents with reinforcement learning (RL). A promising avenue for improving the robustness of RL agents is through the use of curricula. One such class of methods frames environment design as a game between a student and a teacher, using regret-based objectives to produce environment instantiations (or levels) at the frontier of the student agent's capabilities. These methods benefit from their generality, with theoretical guarantees at equilibrium, yet they often struggle to find effective levels in challenging design spaces. By contrast, evolutionary approaches seek to incrementally alter environment complexity, resulting in potentially open-ended learning, but often rely on domain-specific heuristics and vast amounts of computational resources. In this paper we propose to harness the power of evolution in a principled, regret-based curriculum. Our approach, which we call Adversarially Compounding Complexity by Editing Levels (ACCEL), seeks to constantly produce levels at the frontier of an agent's capabilities, resulting in curricula that start simple but become increasingly complex. ACCEL maintains the theoretical benefits of prior regret-based methods, while providing significant empirical gains in a diverse set of environments. An interactive version of the paper is available at this http URL.

Authors: Jack Parker-Holder, Minqi Jiang, Michael Dennis, Mikayel Samvelyan, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel


#ai #accel #evolution

This is an interview with the authors Jack Parker-Holder and Minqi Jiang.
Original Paper Review Video: https://www.youtube.com/watch?v=povBD...

Automatic curriculum generation is one of the most promising avenues for Reinforcement Learning today. Multiple approaches have been proposed, each with its own set of advantages and drawbacks. This paper presents ACCEL, which takes the next step in the direction of constructing curricula for generally capable agents. ACCEL combines the adversarial adaptiveness of regret-based sampling methods with the level-editing capabilities usually found in evolutionary methods.

OUTLINE:
0:00 - Intro
1:00 - Start of interview
4:45 - How did you get into this field?
8:10 - What is minimax regret?
11:45 - What levels does the regret objective select?
14:20 - Positive value loss (correcting my mistakes)
21:05 - Why is the teacher not learned?
24:45 - How much domain-specific knowledge is needed?
29:30 - What problems is this applicable to?
33:15 - Single agent vs population of agents
37:25 - Measuring and balancing level difficulty
40:35 - How does generalization emerge?
42:50 - Diving deeper into the experimental results
47:00 - What are the unsolved challenges in the field?
50:00 - Where do we go from here?

Website: https://accelagent.github.io
Paper: https://arxiv.org/abs/2203.01302
ICLR Workshop: https://sites.google.com/view/aloe2022
Book on topic: https://www.oreilly.com/radar/open-en...

Abstract:
It remains a significant challenge to train generally capable agents with reinforcement learning (RL). A promising avenue for improving the robustness of RL agents is through the use of curricula. One such class of methods frames environment design as a game between a student and a teacher, using regret-based objectives to produce environment instantiations (or levels) at the frontier of the student agent's capabilities. These methods benefit from their generality, with theoretical guarantees at equilibrium, yet they often struggle to find effective levels in challenging design spaces. By contrast, evolutionary approaches seek to incrementally alter environment complexity, resulting in potentially open-ended learning, but often rely on domain-specific heuristics and vast amounts of computational resources. In this paper we propose to harness the power of evolution in a principled, regret-based curriculum. Our approach, which we call Adversarially Compounding Complexity by Editing Levels (ACCEL), seeks to constantly produce levels at the frontier of an agent's capabilities, resulting in curricula that start simple but become increasingly complex. ACCEL maintains the theoretical benefits of prior regret-based methods, while providing significant empirical gains in a diverse set of environments. An interactive version of the paper is available at this http URL.

Authors: Jack Parker-Holder, Minqi Jiang, Michael Dennis, Mikayel Samvelyan, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel

#laion #clip #dalle

LAION-5B is an open, free dataset consisting of over 5 billion image-text pairs. Today's video is an interview with three of its creators. We dive into the mechanics and challenges of operating at such a large scale, how to keep costs low, what new possibilities open datasets like this enable, and how best to handle safety and legal concerns.

OUTLINE:
0:00 - Intro
1:30 - Start of Interview
2:30 - What is LAION?
11:10 - What are the effects of CLIP filtering?
16:40 - How big is this dataset?
19:05 - Does the text always come from the alt-property?
22:45 - What does it take to work at scale?
25:50 - When will we replicate DALL-E?
31:30 - The surprisingly efficient pipeline
35:20 - How do you cover the S3 costs?
40:30 - Addressing safety & legal concerns
55:15 - Where can people get started?

References:
LAION website: https://laion.ai/
LAION Discord: https://discord.com/invite/mVcgxMPD7e
LAION-5B: https://laion.ai/laion-5b-a-new-era-o...
img2dataset tool: https://github.com/rom1504/img2dataset
LAION-400M: https://paperswithcode.com/dataset/la...


#nlp #sparsity #transformers

This video is an interview with Barret Zoph and William Fedus of Google Brain about Sparse Expert Models.
Sparse expert models have been hugely successful at distributing parts of models, mostly Transformers, across large arrays of machines, using a routing function to effectively route signals between them. This means that even though these models have a huge number of parameters, the computational load for a given signal does not increase, because the model is only sparsely activated. Sparse expert models such as Switch Transformers and GLaM can scale up to trillions of parameters and bring a number of desirable properties. We discuss everything from the fundamentals, history, strengths, and weaknesses up to the current state of the art of these models.
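The routing idea can be illustrated with a tiny top-1 router in the spirit of Switch Transformers: each token is sent to a single expert chosen by a softmax over router logits, and that expert's output is scaled by the gate value. This NumPy sketch is illustrative only; it omits the load-balancing loss, capacity limits, and distributed dispatch that real implementations need.

```python
import numpy as np

def top1_route(x, router_w, experts):
    """Top-1 token routing (sketch). x: (tokens, d), router_w: (d, n_experts),
    experts: list of callables mapping (m, d) -> (m, d)."""
    logits = x @ router_w
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)       # softmax over experts
    choice = probs.argmax(axis=-1)                   # one expert per token
    gate = probs[np.arange(len(x)), choice]          # gate value scales the output
    out = np.empty_like(x)
    for e, expert in enumerate(experts):
        mask = choice == e
        if mask.any():
            out[mask] = gate[mask, None] * expert(x[mask])
    return out, choice
```

Because each token activates only one expert, adding more experts grows the parameter count without growing the per-token compute, which is the central appeal of these models.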

OUTLINE:
0:00 - Intro
0:30 - What are sparse expert models?
4:25 - Start of Interview
5:55 - What do you mean by sparse experts?
8:10 - How does routing work in these models?
12:10 - What is the history of sparse experts?
14:45 - What does an individual expert learn?
19:25 - When are these models appropriate?
22:30 - How comparable are sparse to dense models?
26:30 - How does the pathways system connect to this?
28:45 - What improvements did GLaM make?
31:30 - The "designing sparse experts" paper
37:45 - Can experts be frozen during training?
41:20 - Can the routing function be improved?
47:15 - Can experts be distributed beyond data centers?
50:20 - Are there sparse experts for other domains than NLP?
52:15 - Are sparse and dense models in competition?
53:35 - Where do we go from here?
56:30 - How can people get started with this?

Papers:
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity (https://arxiv.org/abs/2101.03961)
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts (https://arxiv.org/abs/2112.06905)
Designing Effective Sparse Expert Models (https://arxiv.org/abs/2202.08906)


#dsi #search #google

Search engines work by building an index and then looking up things in it. Usually, that index is a separate data structure. In keyword search, we build and store inverted indices. In neural search, we build nearest-neighbor indices. This paper does something different: it directly trains a Transformer to return the ID of the most relevant document. No similarity search over embeddings or anything like it is performed, and no external data structure is needed, as the entire index is essentially captured by the model's weights. The paper experiments with various ways of representing documents and training the system, and it works surprisingly well!
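The training setup can be illustrated by how the examples are constructed: the same text-to-text model is trained on indexing pairs (document text → docid) and retrieval pairs (query → docid), so the "index" ends up in the weights. A minimal sketch with invented data shapes; the paper additionally studies several document and docid representations not shown here.

```python
def build_dsi_examples(corpus, queries):
    """Build the two kinds of text-to-text training examples a DSI-style
    model is trained on (sketch): indexing (document text -> docid) and
    retrieval (query -> docid). corpus: {docid: text},
    queries: list of (query, docid) pairs."""
    examples = []
    for docid, text in corpus.items():
        examples.append((text, str(docid)))   # indexing task: memorize the corpus
    for query, docid in queries:
        examples.append((query, str(docid)))  # retrieval task: map queries to docids
    return examples
```

At inference time, retrieval is just a forward pass that decodes a docid string, with no external nearest-neighbor structure involved.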

Sponsor: Diffgram
https://diffgram.com?ref=yannic

OUTLINE:
0:00 - Intro
0:45 - Sponsor: Diffgram
1:35 - Paper overview
3:15 - The search problem, classic and neural
8:15 - Seq2seq for directly predicting document IDs
11:05 - Differentiable search index architecture
18:05 - Indexing
25:15 - Retrieval and document representation
33:25 - Training DSI
39:15 - Experimental results
49:25 - Comments & Conclusions

Paper: https://arxiv.org/abs/2202.06991

Abstract:
In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.

Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler


#neuralsearch #interview #google

This is an interview with the authors Yi Tay and Don Metzler.
Paper Review Video: https://youtu.be/qlB0TPBQ7YY

Search engines work by building an index and then looking up things in it. Usually, that index is a separate data structure. In keyword search, we build and store inverted indices. In neural search, we build nearest-neighbor indices. This paper does something different: it directly trains a Transformer to return the ID of the most relevant document. No similarity search over embeddings or anything like it is performed, and no external data structure is needed, as the entire index is essentially captured by the model's weights. The paper experiments with various ways of representing documents and training the system, and it works surprisingly well!

OUTLINE:
0:00 - Intro
0:50 - Start of Interview
1:30 - How did this idea start?
4:30 - How does memorization play into this?
5:50 - Why did you not compare to cross-encoders?
7:50 - Instead of the ID, could one reproduce the document itself?
10:50 - Passages vs documents
12:00 - Where can this model be applied?
14:25 - Can we make this work on large collections?
19:20 - What's up with the NQ100K dataset?
23:55 - What is going on inside these models?
28:30 - What's the smallest scale to obtain meaningful results?
30:15 - Investigating the document identifiers
34:45 - What's the end goal?
38:40 - What are the hardest problems currently?
40:40 - Final comments & how to get started

Paper: https://arxiv.org/abs/2202.06991

Abstract:
In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.

Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler


#mlnews #palm #dalle2

Google releases PaLM and OpenAI releases DALL-E 2 (and more news).

Sponsor: Weights & Biases
Start here: https://wandb.me/yannic

Thumbnail credit: DALL-E 2 via Sam Altman

OUTLINE:
0:00 - Street interview w/ random stranger
2:25 - Intro
2:50 - PaLM - Google's 540B Pathways Language Model
7:50 - Sponsor: Weights & Biases
9:10 - OpenAI releases DALL-E 2
12:05 - Open Source Datasets and Models
13:20 - Salesforce releases CodeGen

My Live Reaction to DALL-E 2: https://youtu.be/gGPv_SYVDC8
My Video on GLIDE: https://youtu.be/gwI6g1pBD84
My Video on the Pathways System: https://youtu.be/vGFaiLeoLWw

References:
PaLM - Google's 540B Pathways Language Model
https://ai.googleblog.com/2022/04/pat...
https://storage.googleapis.com/pathwa...

OpenAI releases DALL-E 2
https://openai.com/dall-e-2/
https://cdn.openai.com/papers/dall-e-...
https://www.instagram.com/openaidalle/
https://twitter.com/sama/status/15117...
https://twitter.com/sama/media
https://twitter.com/BorisMPower/statu...
https://twitter.com/ariskonstant/stat...

Open Source Datasets and Models
https://twitter.com/multimodalart/sta...
https://laion.ai/laion-5b-a-new-era-o...
https://github.com/mlfoundations/open...

Salesforce releases CodeGen
https://github.com/salesforce/CodeGen

Links:
Merch: store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

#reinforcementlearning #ai #explained

Exploration is one of the oldest challenges for Reinforcement Learning algorithms, with no clear solution to date. Especially in environments with sparse rewards, agents face significant challenges in deciding which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is sometimes used to overcome this challenge, but it often relies on hand-crafted heuristics and can lead to deceptive dead-ends. This paper proposes to use language descriptions of encountered states as a method of assessing novelty. In two procedurally generated environments, the authors demonstrate the usefulness of language, which is inherently concise and abstractive and thus lends itself well to this task.

OUTLINE:
0:00 - Intro
1:10 - Paper Overview: Language for exploration
5:40 - The MiniGrid & MiniHack environments
7:00 - Annotating states with language
9:05 - Baseline algorithm: AMIGo
12:20 - Adding language to AMIGo
22:55 - Baseline algorithm: NovelD and Random Network Distillation
29:45 - Adding language to NovelD
31:50 - Aren't we just using extra data?
34:55 - Investigating the experimental results
40:45 - Final comments

Paper: https://arxiv.org/abs/2202.08938

Abstract:
Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment. However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021). These language-based variants outperform their non-linguistic forms by 45-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites.
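The core intuition of rewarding linguistic novelty can be sketched with a simple count-based bonus over language annotations of states. Note this is a toy stand-in: the paper's actual L-AMIGo and L-NovelD variants extend learned intrinsic-reward methods rather than counting.

```python
from collections import Counter
import math

class LanguageNoveltyBonus:
    """Count-based novelty bonus over language descriptions of states.

    Descriptions abstract over low-level state details, so two visually
    different states with the same description ("you see a locked door")
    count as the same event.
    """
    def __init__(self, scale=1.0):
        self.counts = Counter()
        self.scale = scale

    def reward(self, description: str) -> float:
        # The bonus decays as the same description is encountered more often.
        self.counts[description] += 1
        return self.scale / math.sqrt(self.counts[description])

bonus = LanguageNoveltyBonus()
print(bonus.reward("you see a locked door"))  # 1.0 (first encounter)
print(bonus.reward("you see a locked door"))  # ~0.707 (second encounter)
```

Because many distinct low-level states map to one description, the bonus rewards reaching semantically new situations rather than merely new pixels or grid cells.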

Authors: Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq

#reinforcementlearning #ai #explained

This is an interview with Jesse Mu, first author of the paper.
Original Paper Review: https://youtu.be/NeGJAUSQEJI

Exploration is one of the oldest challenges for Reinforcement Learning algorithms, with no clear solution to date. Especially in environments with sparse rewards, agents face significant challenges in deciding which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is sometimes used to overcome this challenge, but it often relies on hand-crafted heuristics and can lead to deceptive dead-ends. This paper proposes to use language descriptions of encountered states as a method of assessing novelty. In two procedurally generated environments, the authors demonstrate the usefulness of language, which is inherently concise and abstractive and thus lends itself well to this task.

OUTLINE:
0:00 - Intro
0:55 - Paper Overview
4:30 - Aren't you just adding extra data?
9:35 - Why are you splitting up the AMIGo teacher?
13:10 - How do you train the grounding network?
16:05 - What about causally structured environments?
17:30 - Highlights of the experimental results
20:40 - Why is there so much variance?
22:55 - How much does it matter that we are testing in a video game?
27:00 - How does novelty interface with the goal specification?
30:20 - The fundamental problems of exploration
32:15 - Are these algorithms subject to catastrophic forgetting?
34:45 - What current models could bring language to other environments?
40:30 - What does it take in terms of hardware?
43:00 - What problems did you encounter during the project?
46:40 - Where do we go from here?

Paper: https://arxiv.org/abs/2202.08938

Abstract:
Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment. However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021). These language-based variants outperform their non-linguistic forms by 45-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites.

Authors: Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...

#aiart #deeplearning #clip

Since the release of CLIP, the world of AI art has seen an unprecedented level of acceleration in what's possible to do. Whereas image generation had previously been mostly in the domain of scientists, now a community of professional artists, researchers, and amateurs are sending around colab notebooks and sharing their creations via social media. How did this happen? What is going on? And where do we go from here? Jack Morris and I attempt to answer some of these questions, following his blog post "The Weird and Wonderful World of AI Art" (linked below).

OUTLINE:
0:00 - Intro
2:30 - How does one get into AI art?
5:00 - Deep Dream & Style Transfer: the early days of art in deep learning
10:50 - The advent of GANs, ArtBreeder and TikTok
19:50 - Lacking control: Pre-CLIP art
22:40 - CLIP & DALL-E
30:20 - The shift to shared colabs
34:20 - Guided diffusion models
37:20 - Prompt engineering for art models
43:30 - GLIDE
47:00 - Video production & Disco Diffusion
48:40 - Economics, money, and NFTs
54:15 - What does the future hold for AI art?

Blog post: https://jxmo.notion.site/The-Weird-an...
Jack's Blog: https://jxmo.io/

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

#mlnews #gpt3 #pathways

Your updates on the latest and greatest from the depths of Machine Learning!

Sponsor: Weights & Biases
https://wandb.me/yannic

OUTLINE:
0:00 - Intro
0:15 - Weights & Biases Report about Reports
2:45 - GPT-3 learns to edit
6:30 - Make-A-Scene: Text-to-Image with Human Priors
8:00 - Pathways: Google's new High-Performance ML scheduler
10:45 - DouBlind: Open Peer-Review
12:45 - CLIP meets GamePhysics
14:40 - Residual Quantization pushes Image Generation SOTA
16:15 - Helpful Things

References:
Weights & Biases Report about Reports
https://wandb.ai/wandb/wandb_example/...

GPT-3 learns to edit
https://openai.com/blog/gpt-3-edit-in...
https://beta.openai.com/playground?mo...

Make-A-Scene: Text-to-Image with Human Priors
https://arxiv.org/pdf/2203.13131.pdf
https://www.youtube.com/watch?v=QLTyq...

Pathways: Google's new High-Performance ML scheduler
https://arxiv.org/pdf/2203.12533.pdf

DouBlind: Open Peer-Review
https://doublind.com/#web-intro
https://doublind.com/search?query=kil...

CLIP meets GamePhysics
https://arxiv.org/pdf/2203.11096.pdf
https://www.reddit.com/r/GamePhysics/...
https://asgaardlab.github.io/CLIPxGam...

Residual Quantization pushes Image Generation SOTA
https://arxiv.org/pdf/2203.01941.pdf
https://github.com/kakaobrain/rq-vae-...

Helpful Things
https://github.com/TDAmeritrade/stumpy
https://github.com/linkedin/fasttreeshap
https://github.com/vopani/jaxton
https://twitter.com/mark_riedl/status...
https://github.com/eilab-gt/NovGrid
https://developer.nvidia.com/isaac-gym
https://github.com/NVIDIA-Omniverse/I...

Links:
Merch: store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

#nlp #gpt3 #prompt

This is an interview with the authors of this work, Aman Madaan and Niket Tandon.
Large language models such as GPT-3 have enabled many breakthroughs and new applications recently, but they come with an important downside: Training them is very expensive, and even fine-tuning is often difficult. This paper presents an adaptive method to improve the performance of such models after deployment, without ever changing the model itself. This is done by maintaining a memory of interactions and then dynamically adapting new prompts by augmenting them with memory content. This has many applications, from non-intrusive fine-tuning to personalization.

OUTLINE:
0:00 - Intro
0:45 - Paper Overview
2:00 - What was your original motivation?
4:20 - There is an updated version of the paper!
9:00 - Have you studied this on real-world users?
12:10 - How does model size play into providing feedback?
14:10 - Can this be used for personalization?
16:30 - Discussing experimental results
17:45 - Can this be paired with recommender systems?
20:00 - What are obvious next steps to make the system more powerful?
23:15 - Clarifying the baseline methods
26:30 - Exploring cross-lingual customization
31:00 - Where did the idea for the clarification prompt come from?
33:05 - What did not work out during this project?
34:45 - What did you learn about interacting with large models?
37:30 - Final thoughts

Paper: https://arxiv.org/abs/2201.06009
Code & Data: https://github.com/madaan/memprompt

Abstract:
Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret "What word is similar to good?" to mean a homonym, while the user intended a synonym. Our goal is to effectively correct such errors via user interactions with the system but without retraining, which will be prohibitively costly. We pair GPT-3 with a growing memory of recorded cases where the model misunderstood the user's intents, along with user feedback for clarification. Such a memory allows our system to produce enhanced prompts for any new query based on the user feedback for error correction on similar cases in the past. On four tasks (two lexical tasks, two advanced ethical reasoning tasks), we show how a (simulated) user can interactively teach a deployed GPT-3, substantially increasing its accuracy over the queries with different kinds of misunderstandings by the GPT-3. Our approach is a step towards the low-cost utility enhancement for very large pre-trained LMs. All the code and data is available at this https URL.
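The memory-enhanced prompting loop can be sketched in a few lines: store (query, clarification) pairs as feedback arrives, and when a new query resembles a past misunderstanding, prepend the stored clarification to the prompt. The `FeedbackMemory` class and the word-overlap retrieval below are simplified stand-ins for the paper's actual components:

```python
def jaccard(a, b):
    """Word-overlap similarity used as a simple retrieval heuristic."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class FeedbackMemory:
    """Grows a store of (query, user clarification) pairs; new prompts are
    augmented with the clarification attached to the most similar past query.
    The underlying model itself is never retrained."""
    def __init__(self, threshold=0.3):
        self.entries = []  # list of (query, clarification)
        self.threshold = threshold

    def add(self, query, clarification):
        self.entries.append((query, clarification))

    def enhance(self, query):
        best = max(self.entries, key=lambda e: jaccard(e[0], query), default=None)
        if best and jaccard(best[0], query) >= self.threshold:
            return f"{best[1]}. {query}"
        return query

mem = FeedbackMemory()
mem.add("what word is similar to good",
        "similar to means with a similar meaning")
print(mem.enhance("what word is similar to surprised"))
# similar to means with a similar meaning. what word is similar to surprised
```

The enhanced prompt is then sent to the frozen model, so one user's correction transfers to future, similar queries at essentially zero training cost.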

Authors: Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang

Links:
Merch: store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher

#nlp #gpt3 #prompt

Large language models such as GPT-3 have enabled many breakthroughs and new applications recently, but they come with an important downside: Training them is very expensive, and even fine-tuning is often difficult. This paper presents an adaptive method to improve the performance of such models after deployment, without ever changing the model itself. This is done by maintaining a memory of interactions and then dynamically adapting new prompts by augmenting them with memory content. This has many applications, from non-intrusive fine-tuning to personalization.

Sponsor: Introduction to Graph Neural Networks Course
https://www.graphneuralnets.com/p/int...

OUTLINE:
0:00 - Intro
0:40 - Sponsor: Introduction to GNNs Course (link in description)
1:30 - Paper Overview: Improve GPT-3 after deployment via user feedback
5:30 - Proposed memory-based architecture
13:00 - A detailed look at the components
15:00 - Example tasks
24:30 - My concerns with the example setup
26:20 - Baselines used for comparison
29:50 - Experimental Results
34:20 - Conclusion & Comments

Paper: https://arxiv.org/abs/2201.06009
Code & Data: https://github.com/madaan/memprompt

Abstract:
Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret "What word is similar to good?" to mean a homonym, while the user intended a synonym. Our goal is to effectively correct such errors via user interactions with the system but without retraining, which will be prohibitively costly. We pair GPT-3 with a growing memory of recorded cases where the model misunderstood the user's intents, along with user feedback for clarification. Such a memory allows our system to produce enhanced prompts for any new query based on the user feedback for error correction on similar cases in the past. On four tasks (two lexical tasks, two advanced ethical reasoning tasks), we show how a (simulated) user can interactively teach a deployed GPT-3, substantially increasing its accuracy over the queries with different kinds of misunderstandings by the GPT-3. Our approach is a step towards the low-cost utility enhancement for very large pre-trained LMs. All the code and data is available at this https URL.

Authors: Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang

Links:
Merch: store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

#deeplearning #nlp #sampling

This is an interview with first author Clara Meister.
Paper review video here: https://youtu.be/_EDr3ryrT_Y

Modern language models like T5 or GPT-3 achieve remarkably low perplexities on both training and validation data, yet when sampling from their output distributions, the generated text often seems dull and uninteresting. Various workarounds have been proposed, such as top-k sampling and nucleus sampling, but while these manage to somewhat improve the generated samples, they are hacky and unfounded. This paper introduces typical sampling, a new decoding method that is principled, effective, and can be implemented efficiently. Typical sampling turns away from sampling purely based on likelihood and explicitly finds a trade-off between generating high-probability samples and generating high-information samples. The paper connects typical sampling to psycholinguistic theories on human speech generation, and shows experimentally that typical sampling achieves much more diverse and interesting results than any of the current methods.

Sponsor: Introduction to Graph Neural Networks Course
https://www.graphneuralnets.com/p/int...

OUTLINE:
0:00 - Intro
0:35 - Sponsor: Introduction to GNNs Course (link in description)
1:30 - Why does sampling matter?
5:40 - What is a "typical" message?
8:35 - How do humans communicate?
10:25 - Why don't we just sample from the model's distribution?
15:30 - What happens if we condition on the information to transmit?
17:35 - Does typical sampling really represent human outputs?
20:55 - What do the plots mean?
31:00 - Diving into the experimental results
39:15 - Are our training objectives wrong?
41:30 - Comparing typical sampling to top-k and nucleus sampling
44:50 - Explaining arbitrary engineering choices
47:20 - How can people get started with this?

Paper: https://arxiv.org/abs/2202.00666
Code: https://github.com/cimeister/typical-...

Authors: Clara Meister, Tiago Pimentel, Gian Wiher, Ryan Cotterell

#deeplearning #nlp #sampling

Modern language models like T5 or GPT-3 achieve remarkably low perplexities on both training and validation data, yet when sampling from their output distributions, the generated text often seems dull and uninteresting. Various workarounds have been proposed, such as top-k sampling and nucleus sampling, but while these manage to somewhat improve the generated samples, they are hacky and unfounded. This paper introduces typical sampling, a new decoding method that is principled, effective, and can be implemented efficiently. Typical sampling turns away from sampling purely based on likelihood and explicitly finds a trade-off between generating high-probability samples and generating high-information samples. The paper connects typical sampling to psycholinguistic theories on human speech generation, and shows experimentally that typical sampling achieves much more diverse and interesting results than any of the current methods.
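The decoding rule itself is short: instead of keeping the most likely tokens (as top-k or nucleus sampling do), keep the tokens whose surprisal is closest to the distribution's entropy until a target probability mass is covered, then renormalize and sample. The sketch below follows that description from the paper; it is a toy over raw probability lists, not the authors' implementation:

```python
import math
import random

def typical_sample(probs, tau=0.95, rng=random):
    """Locally typical sampling, sketched: keep tokens whose information
    content (-log p) is closest to the distribution's entropy until mass
    tau is covered, then sample from the renormalized subset."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    # Rank tokens by how "typical" they are, i.e. how close their
    # surprisal is to the expected surprisal (the entropy).
    ranked = sorted(
        range(len(probs)),
        key=lambda i: abs(-math.log(probs[i]) - entropy) if probs[i] > 0 else float("inf"),
    )
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= tau:
            break
    # Sample from the kept tokens, renormalized.
    total = sum(probs[i] for i in kept)
    r, acc = rng.random() * total, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

probs = [0.5, 0.3, 0.15, 0.05]
print(typical_sample(probs, tau=0.9))  # one of the tokens kept by the filter
```

Note the contrast with nucleus sampling: here the single most likely token can be excluded if its surprisal is far below the entropy, which is exactly how the method avoids dull, overly likely continuations.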

Sponsor: Fully Connected by Weights & Biases
https://wandb.ai/fully-connected

OUTLINE:
0:00 - Intro
1:50 - Sponsor: Fully Connected by Weights & Biases
4:10 - Paper Overview
7:40 - What's the problem with sampling?
11:45 - Beam Search: The good and the bad
14:10 - Top-k and Nucleus Sampling
16:20 - Why the most likely things might not be the best
21:30 - The expected information content of the next word
25:00 - How to trade off information and likelihood
31:25 - Connections to information theory and psycholinguistics
36:40 - Introducing Typical Sampling
43:00 - Experimental Evaluation
44:40 - My thoughts on this paper

Paper: https://arxiv.org/abs/2202.00666
Code: https://github.com/cimeister/typical-...

Authors: Clara Meister, Tiago Pimentel, Gian Wiher, Ryan Cotterell

#blip #review #ai

Cross-modal pre-training has been all the rage lately in deep learning, especially training vision and language models together. However, there are a number of issues, such as low-quality datasets that limit the performance of any model trained on them, and the fact that purely contrastive pre-training cannot be easily fine-tuned for most downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, clean and thus bootstrap its own dataset to improve performance even more!

Sponsor: Zeta Alpha
https://zeta-alpha.com
Use code YANNIC for 20% off!

OUTLINE:
0:00 - Intro
0:50 - Sponsor: Zeta Alpha
3:40 - Paper Overview
6:40 - Vision-Language Pre-Training
11:15 - Contributions of the paper
14:30 - Model architecture: many parts for many tasks
19:50 - How data flows in the model
26:50 - Parameter sharing between the modules
29:45 - Captioning & Filtering bootstrapping
41:10 - Fine-tuning the model for downstream tasks

Paper: https://arxiv.org/abs/2201.12086
Code: https://github.com/salesforce/BLIP
Demo: https://huggingface.co/spaces/Salesfo...

Abstract:
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL.
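The CapFilt bootstrapping the abstract describes has a simple control flow: a captioner proposes a synthetic caption for each web image, and a filter keeps only image-caption pairs (original or synthetic) it judges matched. The sketch below shows that loop with hypothetical `captioner` and `filter_model` callables standing in for the fine-tuned BLIP modules:

```python
def capfilt(web_data, captioner, filter_model, threshold=0.5):
    """Sketch of BLIP's CapFilt bootstrapping: for each web image, consider
    both the noisy web caption and a synthetic caption, and keep only the
    pairs the filter scores above threshold."""
    cleaned = []
    for image, web_caption in web_data:
        synthetic = captioner(image)
        for caption in (web_caption, synthetic):
            if filter_model(image, caption) >= threshold:
                cleaned.append((image, caption))
    return cleaned

# Toy stand-ins: the "image" is just a tag, and the filter scores a
# caption highly when it mentions that tag.
captioner = lambda img: f"a photo of a {img}"
filter_model = lambda img, cap: 1.0 if img in cap else 0.0

web_data = [("dog", "my cute puppy"), ("cat", "a cat on a sofa")]
print(capfilt(web_data, captioner, filter_model))
# [('dog', 'a photo of a dog'), ('cat', 'a cat on a sofa'), ('cat', 'a photo of a cat')]
```

The cleaned set then serves as pre-training data for the next model, which is the "bootstrap its own dataset" step.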

Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

#blip #interview #salesforce

Paper Review Video: https://youtu.be/X2k7n4FuI7c
Sponsor: Assembly AI
https://www.assemblyai.com/?utm_sourc...

This is an interview with Junnan Li and Dongxu Li, authors of BLIP and members of Salesforce research.
Cross-modal pre-training has been all the rage lately in deep learning, especially training vision and language models together. However, there are a number of issues, such as low-quality datasets that limit the performance of any model trained on them, and the fact that purely contrastive pre-training cannot be easily fine-tuned for most downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, clean and thus bootstrap its own dataset to improve performance even more!

OUTLINE:
0:00 - Intro
0:40 - Sponsor: Assembly AI
1:30 - Start of Interview
2:30 - What's the pitch?
4:40 - How did data bootstrapping come into the project?
7:10 - How big of a problem is data quality?
11:10 - Are the captioning & filtering models biased towards COCO data?
14:40 - Could the data bootstrapping be done multiple times?
16:20 - What was the evolution of the BLIP architecture?
21:15 - Are there additional benefits to adding language modelling?
23:50 - Can we imagine a modular future for pre-training?
29:45 - Diving into the experimental results
42:40 - What did and did not work out during the research?
45:00 - How is research life at Salesforce?
46:45 - Where do we go from here?

Paper: https://arxiv.org/abs/2201.12086
Code: https://github.com/salesforce/BLIP
Demo: https://huggingface.co/spaces/Salesfo...

Abstract:
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL.

Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi

#mlnews #gtc22 #ithaca

GTC Registration Link: https://ykilcher.com/gtc
Your regular updates on what's going on in the ML world!

OUTLINE:
0:00 - Intro
0:20 - Register to Nvidia GTC and win a 3090!
4:15 - DeepMind's Ithaca deciphers Lost Ancient Texts
6:45 - Drug discovery model turns toxic
10:00 - Gary Marcus: Deep Learning is hitting a wall
19:40 - GopherCite: Backing up answers with citations
22:40 - Yoshua Bengio appointed knight of the legion of honour
23:00 - Meta AI tags parody account of Yoshua Bengio
23:40 - Building games using just natural language
24:55 - YOU.com adds writing assistant
25:45 - Horace He: How to brrr
26:35 - Karpathy: Reproducing Yann LeCun's 1989 paper
27:50 - Pig grunt emotion classifier
28:20 - AI annotates protein domain functions
29:40 - Atwood & Carmack: 10k self-driving car bet
30:50 - Helpful Things

References:
Register to GTC and win a 3090!
https://twitter.com/NVIDIAEU/status/1...
https://www.nvidia.com/gtc/keynote/?n...
https://www.nvidia.com/gtc/?ncid=ref-...
https://www.nvidia.com/gtc/keynote/
https://www.nvidia.com/gtc/training/
https://developer.nvidia.com/nvidia-o...

DeepMind deciphers Lost Ancient Texts
https://deepmind.com/blog/article/Pre...
https://www.nature.com/articles/s4158...
https://github.com/deepmind/ithaca
https://ithaca.deepmind.com/?job=eyJy...

Drug discovery model turns toxic
https://www.theverge.com/2022/3/17/22...
https://www.nature.com/articles/s4225...

Gary Marcus: Deep Learning is hitting a wall
https://nautil.us/deep-learning-is-hi...
https://www.youtube.com/watch?v=fVkXE...

GopherCite: Backing up answers with citations
https://deepmind.com/research/publica...

Yoshua Bengio appointed knight of the legion of honour
https://mila.quebec/en/professor-yosh...

Meta AI tags parody account
https://twitter.com/MetaAI/status/150...

Building games using just natural language
https://andrewmayneblog.wordpress.com...

YOU.com adds writing assistant
https://you.com/search?q=how%20to%20w...

Horace He: How to brrr
https://horace.io/brrr_intro.html

Karpathy: Reproducing Yann LeCun's 1989 paper
https://karpathy.github.io/2022/03/14...

Pig grunt emotion classifier
https://science.ku.dk/english/press/n...

AI annotates protein domain functions
https://ai.googleblog.com/2022/03/usi...
https://google-research.github.io/pro...

Atwood & Carmack: 10k self-driving car bet
https://blog.codinghorror.com/the-203...

Helpful Things
https://github.com/recognai/rubrix
https://twitter.com/taiyasaki/status/...
https://github.com/mosaicml/composer?...
https://mujoco.org/
https://mujoco.readthedocs.io/en/late...
https://github.com/deepmind/mctx?utm_...
https://padl.ai/
https://github.com/LaihoE/did-it-spill
https://pytorch.org/blog/pytorch-1.11...

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

#multitasklearning #biology #neuralnetworks

Catastrophic forgetting is a big problem in multi-task and continual learning. Gradients of different objectives tend to conflict, and new tasks tend to override past knowledge. In biological neural networks, each neuron carries a complex network of dendrites that mitigate such forgetting by recognizing the context of an input signal. This paper introduces Active Dendrites, which carries over the principle of context-sensitive gating by dendrites into the deep learning world. Various experiments show the benefit in combating catastrophic forgetting, while preserving sparsity and limited parameter counts.
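The gating mechanism can be sketched concretely: each unit computes its usual feedforward activation, a set of dendritic segments matches the current context vector, the strongest segment response gates the activation through a sigmoid, and a k-winners-take-all step enforces sparsity. This is a minimal pure-Python sketch of that idea, not the authors' implementation (which also learns all weights by backprop):

```python
import math

def active_dendrite_layer(x, context, W, D, k_winners=2):
    """Minimal sketch of an Active Dendrites layer: feedforward outputs are
    gated by the strongest-responding dendritic segment for the current
    context, then sparsified with k-winners-take-all.

    W: feedforward weights, one row per output unit.
    D: dendrite weights, per unit a list of segments over the context vector.
    """
    out = []
    for w_row, segments in zip(W, D):
        feedforward = sum(w * xi for w, xi in zip(w_row, x))
        # Each dendritic segment responds to the context; take the strongest.
        best = max(sum(d * c for d, c in zip(seg, context)) for seg in segments)
        gate = 1.0 / (1.0 + math.exp(-best))  # sigmoid gating
        out.append(feedforward * gate)
    # k-winners-take-all: keep the k largest activations, zero the rest.
    winners = sorted(range(len(out)), key=lambda j: out[j], reverse=True)[:k_winners]
    return [v if j in winners else 0.0 for j, v in enumerate(out)]

out = active_dendrite_layer([1.0, 1.0], [1.0, 0.0],
                            W=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                            D=[[[2.0, 0.0]], [[-2.0, 0.0]], [[0.0, 0.0]]],
                            k_winners=2)
print(out)  # approximately [0.881, 0.0, 1.0]
```

Because different contexts excite different segments, different sparse subnetworks are active per task, which is the mechanism credited with reducing interference between tasks.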

OUTLINE:
0:00 - Introduction
1:20 - Paper Overview
3:15 - Catastrophic forgetting in continual and multi-task learning
9:30 - Dendrites in biological neurons
16:55 - Sparse representations in biology
18:35 - Active dendrites in deep learning
34:15 - Experiments on multi-task learning
39:00 - Experiments in continual learning and adaptive prototyping
49:20 - Analyzing the inner workings of the algorithm
53:30 - Is this the same as just training a larger network?
59:15 - How does this relate to attention mechanisms?
1:02:55 - Final thoughts and comments

Paper: https://arxiv.org/abs/2201.00042
Blog: https://numenta.com/blog/2021/11/08/c...

ERRATA:
- I was made aware of this by https://twitter.com/ChainlessCoder: "That axon you showed of the pyramidal neuron, is actually the apical dendrite of the neuron". Sorry, my bad :)

Authors: Abhiram Iyer, Karan Grewal, Akash Velu, Lucas Oliveira Souza, Jeremy Forest, Subutai Ahmad

#multitasklearning #biology #neuralnetworks

This is an interview with the paper's authors: Abhiram Iyer, Karan Grewal, and Akash Velu!
Paper Review Video: https://youtu.be/O_dJ31T01i8

Check out Zak's course on Graph Neural Networks (discount with this link): https://www.graphneuralnets.com/p/int...

Catastrophic forgetting is a big problem in multi-task and continual learning. Gradients of different objectives tend to conflict, and new tasks tend to override past knowledge. In biological neural networks, each neuron carries a complex network of dendrites that mitigates such forgetting by recognizing the context of an input signal. This paper introduces Active Dendrites, which carries the principle of context-sensitive dendritic gating over into the deep learning world. Various experiments show the benefit in combating catastrophic forgetting while preserving sparsity and limited parameter counts.

OUTLINE:
0:00 - Intro
0:55 - Sponsor: GNN Course
2:30 - How did the idea come to be?
7:05 - What roles do the different parts of the method play?
8:50 - What was missing in the paper review?
10:35 - Are biological concepts viable if we still have backprop?
11:50 - How many dendrites are necessary?
14:10 - Why is there a plateau in the sparsity plot?
20:50 - How does task difficulty play into the algorithm?
24:10 - Why are there different setups in the experiments?
30:00 - Is there a place for unsupervised pre-training?
32:50 - How can we apply the online prototyping to more difficult tasks?
37:00 - What did not work out during the project?
41:30 - How do you debug a project like this?
47:10 - How is this related to other architectures?
51:10 - What other things from neuroscience are to be included?
55:50 - Don't miss the awesome ending :)

Paper: https://arxiv.org/abs/2201.00042
Blog: https://numenta.com/blog/2021/11/08/c...

Link to the GNN course (with discount): https://www.graphneuralnets.com/p/int...

Abstract:
A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state of the art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results on both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
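The "sparse subnetworks" the abstract mentions come from a k-winner-take-all step: after dendritic gating, only the k most active units in a layer stay on and the rest are zeroed. A hypothetical minimal sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def k_winner_take_all(activations, k):
    """Keep only the k largest activations, zero the rest (illustrative)."""
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]  # indices of the k largest values
    out[winners] = activations[winners]
    return out

a = np.array([0.1, 2.0, -0.5, 1.5, 0.3])
sparse = k_winner_take_all(a, k=2)  # only 2.0 and 1.5 survive
```

Combined with the context gating, this means each task activates a small, mostly distinct set of units, so gradient updates for one task disturb few of the weights another task relies on.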

Authors: Abhiram Iyer, Karan Grewal, Akash Velu, Lucas Oliveira Souza, Jeremy Forest, Subutai Ahmad

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share out the content :)

