A New Frontier in AI Cognition
A team of researchers from the Institute of Automation of the Chinese Academy of Sciences (CAS) and the CAS Center for Excellence in Brain Science and Intelligence Technology has shown that multimodal large language models (LLMs) can spontaneously develop human-like object concept representations. Published in Nature Machine Intelligence, the study charts a fresh course for AI cognitive science.
Mapping AI to the Human Brain
By integrating computational modeling, behavioral experiments, and neuroimaging, the researchers constructed a conceptual map of how LLMs represent objects. They extracted 66 dimensions from the models' behavioral data and found strong correlations with neural activity patterns in category-selective regions of the human brain. Multimodal LLMs also aligned more closely with human choice patterns than unimodal models did.
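To make the kind of comparison described above concrete, here is a minimal, purely illustrative sketch (not the authors' code): it generates synthetic data standing in for a 66-dimensional behavioral embedding and for activity patterns from a category-selective brain region, then correlates their object-by-object dissimilarity structures in the style of representational similarity analysis. All names, sizes, and data are hypothetical.

```python
# Illustrative sketch only: relate a hypothetical 66-dimensional behavioral
# embedding of objects to hypothetical brain activity patterns for the same
# objects via a representational similarity analysis (RSA)-style correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_dims, n_voxels = 200, 66, 500   # all sizes are made up

# Hypothetical low-dimensional embedding derived from model behavior.
model_embedding = rng.random((n_objects, n_dims))

# Hypothetical activity patterns from a category-selective brain region,
# constructed here so that they share some structure with the embedding.
projection = rng.normal(size=(n_dims, n_voxels))
brain_patterns = model_embedding @ projection + 0.5 * rng.normal(size=(n_objects, n_voxels))

# Object-by-object dissimilarity matrices (condensed form) for each space.
model_rdm = pdist(model_embedding, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

# Rank correlation between the two dissimilarity structures: higher values
# mean the model's object geometry resembles the neural object geometry.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.2e}")
```

With real data, the synthetic arrays would be replaced by the models' behavioral embedding and by measured neural responses; the correlation step itself is the standard way such alignments are quantified.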
Decision Making: Humans vs. LLMs
When identifying objects such as dogs, cars, or apples, humans tend to combine visual cues with semantic knowledge, whereas LLMs rely more heavily on semantic labels and abstract concepts. Understanding these differences offers a roadmap for designing AI systems with more human-like cognitive structures.
Implications and Next Steps
This research represents a milestone in AI cognition. Demonstrating that LLMs can mirror human object-concept frameworks opens doors to smarter, more intuitive AI tools. As these insights drive future developments, we may see AI applications that better complement human thought in fields ranging from education to creative design.
Reference(s):
"Multimodal LLMs can develop human-like object concepts: study," CGTN, cgtn.com