Multitask Prompted Training Enables Zero-shot Task Generalization (Explained)
Can zero-shot generalization instead be directly induced by explicit multitask learning? Watch the video to find out!
0:00 - Intro
2:14 - Prompted training format
5:52 - Measuring generalization to unseen tasks
8:45 - Held-out tasks
10:45 - The future of NLP
11:48 - Model
12:17 - Experiment results
Connect
LinkedIn
Twitter
email edwindeeplearning@
Paper
Code
Abstract
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. It has been hypothesized that this is a consequence of implicit multitask learning in language model training. Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping general natural language tasks into a human-readable prompted form.
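The key mechanism the video walks through is this mapping of supervised examples into human-readable prompted form before multitask training. As a minimal sketch of what such a mapping looks like (this is not the paper's actual promptsource templates; the field names, template wording, and label verbalizers below are illustrative assumptions):

```python
# Minimal sketch: turning a raw NLI-style example into an (input, target)
# text pair, in the spirit of the prompted training format described above.
# The template wording and field names are assumptions, not the T0 prompts.

def apply_prompt_template(example: dict) -> dict:
    """Map a supervised example into a prompted text-to-text pair."""
    prompt = (
        f"{example['premise']}\n"
        f"Question: {example['hypothesis']} True, False, or Neither?"
    )
    # Verbalize the integer label as natural language (entailment / neutral / contradiction).
    label_names = ["True", "Neither", "False"]
    return {"input": prompt, "target": label_names[example["label"]]}

if __name__ == "__main__":
    raw = {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "A musician is performing.",
        "label": 0,  # entailment
    }
    print(apply_prompt_template(raw))
    # {'input': 'A man is playing a guitar on stage.\nQuestion: A musician is performing. True, False, or Neither?',
    #  'target': 'True'}
```

In the paper's setup, many such templates with diverse wording are written per dataset, and the resulting prompted examples from many datasets are mixed together for multitask training; generalization is then measured on task types that were held out entirely.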