AI systems like GPT-4 have made headlines for how well they learn and use human language, but they do that by ingesting astronomical amounts of data from the internet—more text than a human would encounter in 100,000 years. Human babies, meanwhile, learn words with far less input, just by absorbing what’s in their own environment. What would happen if an AI system had to learn words the way kids do, based only on what a single toddler sees and hears? NYU data science researchers Wai Keen Vong and Brenden Lake recently conducted that exact experiment, using video and audio captured from a camera mounted to a child’s head over a period of months to train a multimodal neural network. The results—published in the journal Science—shed light on long-standing debates about how children acquire language, as well as on what it would mean to make AI learning processes more childlike, and potentially more efficient.
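The core training idea behind such a multimodal network is contrastive: embed each video frame and the utterance the child heard at that moment, then pull matched frame–utterance pairs together and push mismatched ones apart. The sketch below illustrates that general idea in PyTorch; the linear encoders, feature dimensions, and temperature value are illustrative assumptions, not the authors’ actual architecture.

```python
# A minimal sketch of CLIP-style contrastive learning over paired
# video frames and transcribed utterances (an assumption about the
# general approach, not the paper's exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveSketch(nn.Module):
    def __init__(self, frame_dim=512, text_dim=300, embed_dim=128):
        super().__init__()
        # Hypothetical encoders: in practice a CNN would produce frame
        # features and a sequence model would produce utterance features.
        self.frame_proj = nn.Linear(frame_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.temperature = 0.07  # assumed softmax temperature

    def forward(self, frame_feats, text_feats):
        # Normalize embeddings so dot products are cosine similarities.
        f = F.normalize(self.frame_proj(frame_feats), dim=-1)
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        logits = f @ t.T / self.temperature  # all pairwise similarities
        targets = torch.arange(len(f))       # i-th frame matches i-th utterance
        # Symmetric cross-entropy: frames-to-utterances and back.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.T, targets)) / 2

model = ContrastiveSketch()
frames = torch.randn(8, 512)  # stand-in features for 8 video frames
speech = torch.randn(8, 300)  # stand-in features for co-occurring utterances
loss = model(frames, speech)
loss.backward()
```

Trained this way on nothing but one child’s moment-to-moment experience, the model never needs explicit word labels: the co-occurrence of sights and speech supplies the learning signal.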