Privacy-Preserving Natural Language Processing - Big Science for Large Language Models

For more details about this project, see:
Sign up for Module 1 of this project here:

0:35 Introductions
2:09 Why do we need to protect the privacy of users in machine learning?
5:21 What is so hard about privacy-preserving ML?
7:54 What are some of the nuances of privacy-preserving NLP?
16:44 PII vs. NER?
18:08 What is Big Science trying to do? Who is involved?
21:18 How is Aggregate Intellect (AISC) NLP involved in this initiative?
23:09 What types of information need to be handled in text that's fed into language models if we want to be careful about privacy?
25:48 What is the state of the art and practice in PII handling?
28:23 Once we can detect PII in text, what can we do beyond that? How can we handle more nuanced cases?
32:28 What are you personally excited to see in the near future in privacy-preserving ML?
35:07 Wrap up; join this project!