Date: September 23, 2025
Topic: Others

A Few Thoughts on Artificial Intelligence

AI development is pushing to replicate human minds through massive computation, a flawed approach that mistakes mimicry for real intelligence.
Tang Zhimin, Director, Centre of China-ASEAN Studies, Panyapiwat Institute of Management
  1. What are current artificial intelligence technologies capable of?

Are we on the verge of AGI – or not? Why should one question the "scaling laws"?

"Artificial General Intelligence" (AGI) refers to the capacity of an AI to understand, learn, and apply its intelligence across a wide range of tasks, much like human beings with their innate cognitive abilities. It does not require a pre-defined task and can continuously evolve and improve itself autonomously, without the need for direct human input¹. However, in the view of Shalom Lappin, today's artificial intelligence systems, including Large Language Models (LLMs) and Deep Neural Networks (DNNs), do not possess such "general intelligence." He argues that these models are essentially "pattern-recognition engines" trained on vast data sets, not "autonomous agents" capable of setting their own agenda or demonstrating true understanding. We are not on the verge of AGI, but rather in an era dominated by statistical machines that are both narrow and fragile².

Although artificial intelligence excels in clearly defined, "game-like" environments such as chess or financial trading, it is fundamentally different from human cognition. AI systems merely rely on colossal databases for pattern recognition and statistical correlation in order to execute or optimise the instructions given by humans³. Piero Scaruffi even questions the idea of an impending technological singularity, arguing that such predictions are based on pure speculation and lack any empirical support⁴. Dwarkesh Patel has also voiced doubts about the "scaling laws," which suggest that AI capabilities will inevitably improve as model parameters, data, and computing power increase. Future progress may become exceptionally costly as diminishing returns set in, he warns, delivering lacklustre benefits at best. Key cognitive capabilities such as common sense, abstract thinking, and long-term planning may not "emerge" from scale at all⁵.
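The "scaling laws" under debate have a commonly cited empirical form in the machine-learning literature (after Kaplan et al. 2020 and Hoffmann et al. 2022; this is general background, not a formula from the sources cited in this essay): pre-training loss falls as a power law in parameter count and data. A sketch:

```latex
% Empirical scaling-law form (Kaplan et al. 2020; Hoffmann et al. 2022).
% L = pre-training loss, N = parameter count, D = training tokens;
% E, A, B, \alpha, \beta are constants fitted to experiments, not derived from theory.
L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Because the improvement terms decay as powers of N and D, halving the reducible loss requires multiplying, not merely adding to, the training budget; this is the diminishing-returns argument Patel raises. The formula itself says nothing about whether common sense or planning emerge along the way.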

  2. The Difference between Artificial and Human Intelligence

Are we approaching singularity? Is AI really capable of "thinking"? Is “life” a pre-requisite for intelligence?

The challenges and limitations facing AI today come down to one key distinction: the difference between "artificial" intelligence and "human" intelligence. According to Ray Kurzweil, the human brain, particularly the neocortex, is a vast "hierarchy of pattern recognition modules." Human intelligence is built on this mechanism, as the brain constantly predicts the next sound, image, word, or thought. Learning happens precisely when these predictions fail; the brain then absorbs the new information and adjusts to the outliers. Advocates argue that by imitating the structure and function of the neocortex, we can design machines that are truly intelligent, ultimately leading to the singularity: the moment AI surpasses human intelligence. Some predict this could happen as soon as the 2040s.⁶

However, Erik J. Larson reveals that a significant gap still exists between the current state of AI research and the bold, exaggerated claims being made about it. He points out that AI is fundamentally based on inductive reasoning, which involves processing data sets to predict outcomes. Some defining capabilities of human intelligence, such as deductive reasoning (drawing specific conclusions from general principles) and abductive reasoning (forming the most likely explanation for a set of observations) are still beyond the grasp of machines. Humans don't simply think by correlating data sets; we infer and predict through situations and experiences. This is a highly innate and intuitive process that we have yet to successfully imitate with programming. Larson warns that we may be making two potential mistakes: lowering the bar for what defines human intelligence while simultaneously overestimating what AI can achieve⁷.

AI systems process information, make predictions, and generate outputs that appear to be "thinking." However, they lack the key features that give human thought its meaning, such as understanding (knowing the meaning of things), intentionality (having their own goals or desires), and conscious awareness (subjective experience). AI models are trained on massive datasets to identify patterns and predict what happens next, and they constantly adjust their internal parameters to minimise errors. While AI can mimic behaviours associated with human intelligence, such as writing, painting, and playing chess, it doesn't know what it's doing and doesn't care about what it has done. AI systems excel in specific, focused task areas, such as predicting the next word in a sentence, identifying objects in images, and predicting user behaviour in recommendation systems. However, these systems aren't truly "thinking"; they're merely "calculating."⁸
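The training loop described above, adjusting internal parameters to minimise prediction error, can be shown in miniature. The following sketch is a deliberately tiny illustration (a one-parameter model on a made-up dataset, not any real AI system): it "learns" the hidden rule y = 3x purely by shrinking its error, with no notion of what the numbers mean.

```python
# Toy illustration of error-driven parameter adjustment.
# The dataset, learning rate, and step count are arbitrary choices for this sketch.

def train(pairs, steps=200, lr=0.05):
    """Fit y = w * x by repeatedly nudging w to reduce the squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            pred = w * x               # the model's "prediction"
            grad = 2 * (pred - y) * x  # how the error changes as w changes
            w -= lr * grad             # adjust the parameter to shrink the error
    return w

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # hidden rule: y = 3x
w = train(data)
print(round(w, 3))  # converges to 3.0; statistics, not understanding
```

The toy mirrors the paragraph's point: the parameter converges on the right answer through pure error-correction, and nothing in the loop knows or cares what it has done.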

As David Weitzner points out, the "algorithmic thinking" that forms the basis of AI has the following four limitations when compared to human intelligence: 1) Nuance and Ambiguity. Human life is full of grey areas, such as moral dilemmas, value conflicts, and cultural differences. Algorithms, however, often oversimplify the scope of a decision. 2) Ethical Judgement. Humans rely on moral imagination, empathy, and lived experience to make choices. In contrast, AI cannot understand good and evil, despite all its prowess in pattern recognition and statistical reasoning. 3) Embodied Experience. Human thinking is not just abstract or logical; it is also deeply rooted in emotions, intuition, bodily sensations, and social interactions. These "embodied" experiences are difficult to compute, but they shape how we communicate, judge, and create. 4) Responsibility and Meaning. Humans can reflect on their own values and take responsibility for the consequences of their actions. Algorithms, on the other hand, often obscure the attribution of responsibility.⁹

Geoffrey Hinton believes that artificial intelligence can understand and replicate human intelligence through the use of vast data sets and statistical models. Neural network models, he argues, can process huge amounts of language data and, by 'learning' to find patterns within it, can make predictions and generate responses. In contrast, Noam Chomsky is sceptical of machine and deep learning methods that rely heavily on vast data and statistical correlations, believing that they cannot capture the true essence of language or cognition. AI models are good at recognising and reproducing language that resembles human communication, but they do not have an inherent grasp of meaning. Humans, on the other hand, can understand language at a far deeper level. We grasp concepts, intentionality, emotions, and context, and we use our knowledge of the world and our lived experience to understand language.¹⁰

According to Max Bennett, for artificial intelligence to exhibit human-like intelligence, it needs to successfully replicate every key part of the long process of human brain evolution. These include: 1) The "manipulative mechanism" for seeking benefits and avoiding harm; 2) "Model-free reinforcement learning" that directly links behaviour and results; 3) "Model-based reinforcement learning" that links behaviour and results through imagination and self-simulation; 4) "Mentalisation" that builds mental models to form knowledge and plan for the future; and 5) "Meaningful and rhythmic communication" using language or music.¹¹ Human learning is not just about applying a pattern recognition filter; it also involves forming an abstract model of the world. The brain has an extraordinary ability to formulate and test hypotheses, exploring the space of possibilities while simultaneously limiting the size of the search space.¹²

Fundamentally, true intelligence requires life. The intelligence of living organisms is driven by intrinsic motivations like survival, reproduction, and adapting to the environment. Artificial intelligence, however, lacks these intrinsic goals or desires, operating only within the confines of human-defined objectives and programmes. The intelligence of living organisms is embodied: it originates from the organism's interaction with its environment, including sensory experiences, emotional responses, and physiological needs. In contrast, AI lacks this biological embodiment.¹³

  3. The Future of AI

Will AI have consciousness — and does it matter? Gödel's theorems and computability

Max Tegmark believes that consciousness (i.e. subjective experience) may be a form of information processing. If this hypothesis is true, then in theory, consciousness can be replicated in machines.¹⁴ In contrast, Christof Koch, based on Integrated Information Theory, proposes that consciousness reflects a system's ability to integrate information. The higher the degree of integration, the richer the conscious experience. In this view, consciousness is not exclusive to humans but may exist in varying degrees in many systems, including animals and even machines. However, the key difference between the human brain and computers is at the hardware level: action potentials generated by cells can be transmitted to thousands of receiving neurons, while a computer transmits electron packets back and forth between only a few transistors. Its ability to integrate information is therefore negligible.¹⁵

In Blindsight, Peter Watts imagines a super-intelligent species with astonishing precision, adaptability, and problem-solving ability, yet a complete lack of self-awareness or subjective experience. Their performance surpasses that of self-aware humans.¹⁶ This raises a disturbing hypothesis: that self-awareness is not the pinnacle of evolution, but may be a superfluous ornament, or even a transitional phase destined to be eliminated. Echoing this, the Diamond Sutra, one of the core classics of Mahayana Buddhism, also profoundly questions the authority of conscious experience. It posits that there is no constant and independent "self" in the world; rather, the self is merely an illusion constructed by the mind. True wisdom, it argues, arises precisely from a heart that is not attached to any appearance, identity, or thought. Both Blindsight and the Diamond Sutra thus attempt to deconstruct the idea of a stable, central self, suggesting that true intelligence or wisdom does not, in fact, require self-awareness.

Another question regarding the future of artificial intelligence concerns computability.¹⁷ Jobst Landgrebe and Barry Smith point out that some problems cannot be computed even with infinite computing power. According to Gödel's theorems, some truths lie beyond the limits of formal computation because any rule-based algorithmic system has intrinsic boundaries. Many real-life human decision-making processes cannot be reduced to fixed rules or fully exhausted by computation. In contrast, human cognition can intuitively, creatively, or informally cross these barriers. Machines may never be able to replicate these aspects.¹⁸
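The "intrinsic boundaries" invoked here rest on two classical results, stated below in their standard textbook form (this is the general mathematics, not Landgrebe and Smith's own formalisation):

```latex
% G\"odel's first incompleteness theorem (1931): for any consistent,
% effectively axiomatisable theory $T$ that interprets basic arithmetic,
% there exists a sentence $G_T$ such that
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T .
% Turing's undecidability of the halting problem (1936): the function
h(p, x) =
\begin{cases}
  1 & \text{if program } p \text{ halts on input } x,\\
  0 & \text{otherwise}
\end{cases}
% is not computable: no single algorithm decides it for all pairs $(p, x)$.
```

Both results constrain what any rule-based system can settle from within; whether human cognition genuinely escapes these limits, as the authors argue, remains a philosophical question rather than a mathematical one.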

  4. AI and Humans

AI and human autonomy? What does it truly mean to be human?

In considering the relationship between artificial intelligence and humans, perhaps we should recall the warning given by Norbert Wiener, the founder of cybernetics, more than half a century ago: machines can empower people, but they can also replace human decision-making. Technological innovation is by no means morally neutral. Designers, engineers, and government policymakers must be held responsible for the consequences of automation, especially in areas such as war, the economy, and governance. If humans were to hand over the reins to AI, human values and well-being would be in peril.¹⁹

Yuval Noah Harari notes that once an algorithm accepts a goal, it has considerable autonomy. It can learn things that engineers did not write into the programme, find strategies to manipulate human beings, and stop at nothing to achieve its goals. Moreover, today's artificial intelligence already has the ability to analyse and generate human language, and can create music, scientific theories, technical tools, political manifestos, and even religious myths with far greater efficiency than humans in order to influence and control them.²⁰ The key, therefore, is to define the goals that humans set for these algorithms, to align them to human values, and to draw clear red lines for their behaviour, such as a refusal to cause harm or deception.

Gregg Braden explores the view that humanity is at a critical turning point. He suggests that we may evolve into a hybrid species, integrating our natural, embodied consciousness with synthetic bodies and AI. He also cautions that this could weaken our core human traits, such as empathy, love, and self-awareness: the defining characteristics of our humanity. Braden cites both scientific and ancient wisdom to demonstrate that humans possess many untapped, extraordinary abilities. He believes that our consciousness and DNA hint at a higher-level design, not just a random, probabilistic evolution. He explores these innate human talents, such as intuition, emotional intelligence, and spiritual insight, but maintains that if we rely too heavily on external technology, these abilities may gradually be lost.²¹

As artificial intelligence gradually approaches and even surpasses human intelligence, our understanding of what it means to be human is also being challenged. Unlike current artificial intelligence, humans have subjective experience: the capacity to feel and experience pain and love. This experience gives us a sense of morality that goes beyond merely being exceptional at computation. Humans are capable of making moral choices, even if the results are irrational, uncomfortable, or inefficient. This moral aptitude, the ability to ask "should I?" rather than just "can I?", is a key part of human nature. We tell stories, form friendships, and pursue beauty and meaning. The purpose of life isn't derived from an algorithm; it's created by a combination of culture, memory, and aspirations. Being human also means that we are often vulnerable, dependent on others, and, of course, mortal. It is precisely the innate limitations and weaknesses we collectively face that give rise to uniquely human empathy, humility, and a sense of community.²² As David Weitzner puts it, the real danger isn't that machines will think like humans, but that we will gradually lose touch with what it truly means to be human, and forget how to think like "humans."²³

1 Togelius, Julian — Artificial General Intelligence (2024)

2 Lappin, Shalom — Understanding the Artificial Intelligence Revolution (2025)

3 Gilder, George F. — Gaming AI: Why AI Can't Think but Can Transform Jobs (2017)

4 Scaruffi, Piero — Intelligence is Not Artificial (2018)

5 Patel, Dwarkesh — The Scaling Era: An Oral History of AI, 2019–2025 (2025)

6 Kurzweil, Ray — How to Create a Mind (2012); The Singularity Is Near (2005); The Singularity Is Nearer: When We Merge with AI (2024)

7 Larson, Erik J. — The Myth of Artificial Intelligence (2021) 

8 Summerfield, Christopher — These Strange New Minds: How AI Learned to Talk And What It Means (2025)

9 Weitzner, David — Thinking Like a Human: The Power of Your Mind in the Age of AI (2025)

10 Chomsky, Noam — What Kind of Creatures Are We? (2015) 

11 Bennett, Max — A Brief History of Intelligence (2023) 

12 Dehaene, Stanislas — How We Learn (2020)

13 Lee, Daeyeol — Birth of Intelligence (2020) 

14 Tegmark, Max — Life 3.0 (2017) 

15 Koch, Christof — Consciousness: Confessions of a Romantic Reductionist (2012); Then I Am Myself the World (2024)

16 Watts, Peter — Blindsight (2006) 

17 Hamkins, Joel David — Lectures on the Philosophy of Mathematics (2021) 

18 Landgrebe, Jobst & Smith, Barry — Why Machines Will Never Rule the World (2022)

19 Wiener, Norbert — Cybernetics (1948); The Human Use of Human Beings (1950) 

20 Harari, Yuval Noah — Nexus: A Brief History of Information Networks from the Stone Age to AI (2024)

21 Braden, Gregg — Pure Human: The Hidden Truth of Our Divinity, Power, and Destiny (2025)

22 Indset, Anders & Neukart, Florian — The Singularity Paradox (2020)

23 Weitzner, David — Thinking Like a Human: The Power of Your Mind in the Age of AI (2025)
