AI can do your homework. Now what?

We interviewed students and teachers on how schools should handle the rise of the chatbots.

For a year now, students have had access to AI chatbots, otherwise known as large language models, that can write at a high-school level and answer specific and diverse questions across many school subjects. OpenAI's ChatGPT kicked off a race among tech companies to release their own chatbots and integrate them into existing consumer products. The most advanced language models, like GPT-4 and Claude 2, are kept behind paywalls. They offer more nuanced answers and make fewer mistakes, but because their reliability is not guaranteed, many businesses cannot yet deploy these systems. That means a significant portion of chatbot use is in low-stakes applications, like schoolwork. This presents a major challenge to educators, who now need to rethink their curricula to either incorporate chatbot use or attempt to deter it. In this video, we hear from students and teachers about how they're thinking through the problem, and review research in the science of learning to understand how the "fluency" of a chatbot experience could disrupt the learning process we go to school for.

Chapters:
00:00 Intro
02:28 Path 1: Banning AI
06:03 Path 2: Allowing AI
09:52 The problem with the calculator analogy
11:18 The science of learning
15:44 Conclusion

Sources:
"Rethinking GPS navigation: creating cognitive maps through auditory clues"
"Habitual use of GPS negatively impacts spatial memory during self-guided navigation"
"Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom"
AI-text-detection error rates: Turnitin, GPTZero, Compilatio