T0: Multitask Prompted Training Enables Zero-Shot Task Generalization | Paper Explained

In this video I cover the “Multitask Prompted Training Enables Zero-Shot Task Generalization” paper, which introduced the T0 transformer. T0 is essentially Google’s T5 (LM-adapted) with additional training on prompted datasets. The paper came out of the BigScience workshop.

✅ Paper:
✅ BigScience:
✅ Models on HF hub: