Compositional Generalization, Referential Grounding & LLMs [Ellie Pavlick and Raphaël Millière]

Ellie Pavlick, Assistant Professor at Brown University, and Raphaël Millière, a lecturer in the Philosophy Department at Columbia University, discuss compositional generalization, referential grounding, and the development of large language models such as GPT-3 and GPT-4. They share their perspectives on artificial intelligence, neural networks, and human cognition, exploring the mechanistic understanding of language models and the debate between symbolic and associative approaches to cognition. The conversation covers progress and challenges in compositional generalization benchmarks, the importance of understanding AI models' mechanistic processes, and the potential of transformers to implement core symbolic operations. They also discuss the limitations of vision-and-language models such as CLIP and compare them to models like DALL-E and GPT-4.