Meta’s AI Chief Yann LeCun: ChatGPT-like Models Will Never Match Human Reasoning

In a recent interview with the Financial Times, Meta’s chief AI scientist, Yann LeCun, shed light on the limitations of large language models (LLMs) like ChatGPT and Gemini, stating that they will never be able to match human capabilities in reasoning and planning.

LeCun pointed out that current LLMs have a limited grasp of logic: they cannot comprehend the physical world, maintain persistent memory, reason reliably, or plan hierarchically. He added that these models are “intrinsically unsafe” because they depend heavily on the accuracy of their training data to produce correct answers to prompts.

Despite these limitations, LeCun acknowledged that LLMs like ChatGPT and Gemini are useful. However, he noted that their evolution is constrained because they learn only from human-supplied data, and what may appear to be reasoning is essentially the exploitation of knowledge accumulated from extensive training data.

When asked how AI could achieve human-level intelligence, LeCun said that Meta’s Fundamental AI Research (Fair) lab, a team of around 500 people, is developing a new generation of AI systems focused on common sense and an understanding of how the world works. This approach, known as ‘world modelling’, carries risks for Meta, as investors expect swift returns on its AI investments.

LeCun believes that the development of artificial general intelligence (AGI) is not merely a design or technology development challenge but a scientific one.

This news comes on the heels of Mark Zuckerberg’s recent announcement of increased investment in AI to position Meta as “the leading AI company in the world,” an announcement that was followed by a roughly $200 billion drop in the social media giant’s market valuation.