Multitask Prompted Training Enables Zero-shot Task Generalization (Explained)

Can zero-shot generalization instead be directly induced by explicit multitask learning? Watch the video to find out!

Video outline:
0:00 - Intro
2:14 - Prompted training format
5:52 - Measuring generalization to unseen tasks
8:45 - Held-out tasks
10:45 - The future of NLP
11:48 - Model
12:17 - Experiment results

Abstract

Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. It has been hypothesized that this is a consequence of implicit multitask learning in language model training. Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping general natural language tasks into a human-readable prompted form.
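To make the idea of "mapping a task into a human-readable prompted form" concrete, here is a minimal Python sketch of applying one prompt template to one example. The template wording, field names, and label mapping below are illustrative stand-ins, not the paper's actual prompt collection.

```python
# Minimal sketch: turning a structured NLI example into a prompted
# (input text, target text) pair, as in multitask prompted training.
# The template wording and field names are hypothetical.

example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A man is performing music.",
    "label": 0,  # 0 = entailment, 1 = neutral, 2 = contradiction
}

# A prompt template rewrites the example as natural language and maps
# the label to a verbal answer choice.
input_template = (
    'Suppose "{premise}" Can we infer that "{hypothesis}"? '
    "Yes, maybe, or no?"
)
answer_choices = ["Yes", "Maybe", "No"]

def apply_prompt(example):
    """Render one example into a (prompted input, target string) pair."""
    prompted_input = input_template.format(
        premise=example["premise"], hypothesis=example["hypothesis"]
    )
    target = answer_choices[example["label"]]
    return prompted_input, target

prompted_input, target = apply_prompt(example)
print(prompted_input)  # the text fed to the model
print(target)          # the expected generation: "Yes"
```

Training then proceeds as ordinary sequence-to-sequence learning over many such (input, target) pairs drawn from a mixture of prompted datasets, and zero-shot evaluation applies templates from held-out tasks in the same way.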