The Five A's of AI - Chapter 10
Artificial Intelligence: The Quest for Human-Level Capability Across All Domains
Understanding What True Artificial General Intelligence Would Require
Chapter Highlights
$52bn projected AGI market size by 2032 (SNS Insider, 2024)
$632bn global AI spending projected by 2028 (IDC, 2024)
$200bn AI investment approaching by 2025 (Goldman Sachs)
2040-2050 when experts estimate AGI will probably emerge
Build realistic expectations about AGI whilst investing in practical AI capabilities

Chapter 1 - The Dream of Thinking Machines (1830s-1970s)
Chapter 2 - Digital Revolution (1980s-2010)
Chapter 3 - Intelligence Explosion
Chapter 4 - AI Paralysis
Chapter 5 - The Five A's Framework
Chapter 6 - Automation Intelligence
Chapter 7 - Augmented Intelligence
Chapter 8 - Algorithmic Intelligence
Chapter 9 - Agentic Intelligence
Chapter 10 - Artificial Intelligence
Chapter 11 - Governance Across the Five A's
Chapter 12 - Strategic Implementation
Chapter 13 - Use Cases Across Industries
Chapter 14 - The Future of AI
Understanding Artificial Intelligence
What Is True Artificial Intelligence?
Artificial General Intelligence represents AI systems that match or surpass human cognitive capabilities across virtually all domains, moving beyond today's narrow AI to genuine general intelligence.
The AGI Reality Gap
Current AI systems, despite marketing claims, remain profoundly distant from genuine general intelligence.
What today's AI can do:
- Excel at specific narrow tasks with remarkable proficiency
- Process vast amounts of data at superhuman speeds
- Recognise patterns invisible to human perception
- Generate human-like text through statistical prediction
- Defeat world champions in constrained game environments
Whilst You Delay Understanding AGI
- Strategic misdirection from chasing AGI instead of practical AI value
- Resource wastage on speculative research over proven applications
- Workforce confusion about which AI skills to develop
- Regulatory gaps between imagined and actual AI capabilities
- Competitive disadvantage from hype-driven rather than realistic strategies
The Research: Why AGI Remains Elusive
1. The Investment Reality vs Technical Progress
Massive investment flows mask the fundamental distance to genuine AGI.
Market Reality
The Artificial General Intelligence market was valued at $3.01bn in 2023 and is expected to grow to $52bn by 2032 with a CAGR of 37.5% over the forecast period of 2024-2032 (SNS Insider, 2024). Worldwide spending on artificial intelligence, including AI-enabled applications, infrastructure, and related information technology and business services, will more than double by 2028 when it is expected to reach $632bn (IDC, 2024).
Investment Surge
Artificial intelligence investment is expected to approach $200bn globally by 2025, according to Goldman Sachs research, representing enormous capital flowing towards a goal whose definition remains contested.
Technical Reality
Recent research provides sobering perspective. Although there is some progress with agentic systems, the transformer model's reasoning and planning capabilities are still fairly basic. We can't presume that we're close to AGI because we really don't understand current AI.
2. Fundamental Capability Gaps
The path from current AI to AGI faces obstacles that reflect fundamental gaps in our understanding of intelligence.
Transfer Learning Failure
Humans learning chess improve at all strategic board games. Current AI systems show minimal transfer. A Go champion system cannot play chess or apply strategic thinking to business problems without complete reprogramming.
Causal Reasoning Absence
Humans understand that umbrellas don't cause rain despite perfect correlation. Current AI systems excel at correlation but struggle with causation, predicting based on patterns without understanding underlying mechanisms.
Common Sense Deficit
Every human knows water flows downhill, objects persist when unobserved, and people have goals driving behaviour. AI systems lack this foundational understanding, leading to failures that seem absurd to humans.
Computational Limits
Human brains achieve general intelligence through 86bn neurons making trillions of connections. Replicating this computationally might require energy exceeding global production.
3. The Prediction Paradox
Expert predictions about AGI arrival vary dramatically between tech leaders and measured researchers.
Optimistic Claims
According to Sam Altman, machines will think and reason like humans by 2025. Elon Musk expects an artificial intelligence smarter than the smartest humans by 2026. In March 2024, Nvidia CEO Jensen Huang predicted that artificial intelligence would match or surpass human performance on any test within five years, implying 2029.
Expert Consensus
Surveyed artificial intelligence experts estimate that AGI will probably emerge between 2040 and 2050 and is very likely to appear by 2075. Sceptical researchers note that artificial intelligence experts have said it would likely be 2050 before AGI hits the market.
Reality Check
Technology, no matter how advanced, cannot be human, so the challenge is trying to develop it to be as human as possible. That also leads to ethical dilemmas regarding oversight. Because of all these questions and our limited capabilities and regulations, optimistic timelines aren't realistic.
Chapter 10
Understanding the gap between current AI and true artificial general intelligence
The Great Misconception
In 1950, Alan Turing published a paper that would define the next seventy-five years of artificial intelligence research. "Computing Machinery and Intelligence" posed a deceptively simple question: "Can machines think?" (Turing, 1950). Turing's genius lay not in providing an answer, but in recognising that the question itself was flawed. Instead of debating whether machines could think, he proposed we ask whether they could convince us they were thinking.
Three-quarters of a century later, we face Turing's question with renewed urgency and considerable confusion. We inhabit an era where artificial intelligence dominates headlines, shapes investment flows, and promises to transform every aspect of human existence. Yet for all our progress, we remain surprisingly distant from answering Turing's fundamental question.
Global VC investment in AI companies saw remarkable growth in 2024, as funding to AI-related companies exceeded $100 billion, an increase of over 80% from $55.6 billion in 2023 (KPMG Private Enterprise, 2025). Nearly 33% of all global venture funding was directed to AI companies, making artificial intelligence the leading sector for investments. Every software application claims intelligence, every algorithm promises transformation, and every advancement gets heralded as the next step towards human-level artificial intelligence.
Yet beneath this cacophony of marketing claims and breathless headlines lies a more complex reality. Despite extraordinary achievements in narrow domains, we remain profoundly distant from artificial intelligence in its truest sense. The Artificial General Intelligence (AGI) market was valued at USD 3.01 billion in 2023 and is expected to grow to USD 52 billion by 2032, a CAGR of 37.5% over the forecast period of 2024-2032 (SNS Insider, 2024). This market projection reflects both enormous optimism and fundamental misunderstanding about the gap between current AI and genuine artificial general intelligence.
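For readers who want to sanity-check such projections, a minimal sketch of the compound-growth arithmetic behind the cited figures follows. The base value, growth rate, and end year come from the SNS Insider projection quoted above; compounding from the 2023 base over nine years is an illustrative assumption.

```python
# Back-of-envelope check of the cited AGI market projection (SNS Insider, 2024).
# Assumption: constant compound annual growth from the 2023 base year to 2032 (9 years).

def project(base, cagr, years):
    """Project a value forward at a constant compound annual growth rate."""
    return base * (1 + cagr) ** years

base_2023 = 3.01      # $bn, 2023 valuation
cagr = 0.375          # 37.5% CAGR over the 2024-2032 forecast period
projected_2032 = project(base_2023, cagr, years=9)

print(f"Projected 2032 market size: ${projected_2032:.1f}bn")  # ~$52.9bn, consistent with the ~$52bn headline
```

The point is not precision but transparency: a single assumed growth rate, compounded over nine years, produces the headline figure.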
The confusion begins with terminology itself, compounded by massive financial speculation. When computer scientists discuss artificial intelligence, they typically mean narrow AI systems that excel at specific tasks. When philosophers and futurists engage with artificial intelligence, they envision artificial general intelligence (AGI) possessing human-like adaptability and understanding across all domains. When vendors market artificial intelligence, backed by the generative AI funding that reached approximately $45 billion in 2024, nearly doubling from $24 billion in 2023 (Mintz Legal Advisory, 2025), they mean whatever sounds most impressive to potential customers.
This linguistic chaos, amplified by extraordinary investment flows, creates dangerous misunderstandings about current capabilities and future possibilities. Artificial general intelligence (AGI), sometimes called human-level intelligence AI, is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks. Some researchers argue that state-of-the-art large language models already exhibit early signs of AGI-level capability, while others maintain that genuine AGI has not yet been achieved.
The stakes of this confusion extend beyond academic debate. Artificial intelligence investment is expected to approach $200 billion globally by 2025, according to Goldman Sachs research, representing enormous capital flowing towards a goal whose definition remains contested. Worldwide spending on artificial intelligence, including AI-enabled applications, infrastructure, and related information technology and business services, will more than double by 2028 when it is expected to reach $632 billion, according to International Data Corporation (IDC) forecasts. This investment surge occurs whilst fundamental questions about artificial intelligence remain unresolved.
Consider the remarkable capabilities of today's most sophisticated AI systems and compare them with what many believe they achieve. Large language models like GPT produce remarkably human-like text yet possess no understanding of meaning. They manipulate symbolic patterns with extraordinary sophistication but lack any internal representation of the concepts those symbols represent. Chess engines defeat world champions but cannot apply strategic thinking to business decisions. Medical diagnosis systems outperform doctors in narrow domains yet cannot engage in basic reasoning outside their training.
Each achievement represents remarkable progress within constraints. Each also demonstrates how far we remain from general intelligence. Understanding this gap between marketing hyperbole and technical reality proves essential for organisational planning, regulatory preparation, and societal adaptation to AI's actual rather than imagined capabilities.
What True Artificial Intelligence Would Require
Defining artificial general intelligence proves surprisingly challenging. We recognise human intelligence intuitively yet struggle to specify its essential characteristics. This definitional challenge extends beyond philosophical curiosity. It shapes research directions, investment decisions, and societal preparation for potential AGI emergence.
True AGI would interpret new information, learn, and adapt to scenarios without human oversight, all while demonstrating human-level performance, reasoning, and common sense. This encompasses capabilities that current AI systems fundamentally lack, despite their impressive domain-specific achievements.
Generalisation across domains represents AGI's most fundamental requirement. Humans apply intelligence fluidly between contexts. A physicist learning cooking, a chef grasping basic physics, both navigating social situations whilst appreciating art and solving novel problems. This transfer of cognitive capability distinguishes general from narrow intelligence.
Current AI systems fail catastrophically when asked to generalise beyond training domains. A system mastering medical diagnosis cannot play chess without complete reprogramming. A language model generating coherent text about quantum physics possesses no understanding of scientific methodology. This brittleness persists regardless of sophistication within specific domains. Current AI models like those used in autonomous vehicles require enormous datasets and computational power just to handle driving in specific conditions, let alone achieve general intelligence.
The challenge extends beyond technical limitations to fundamental questions about knowledge representation. Humans somehow encode abstract concepts that apply across disparate domains. The notion of "balance" applies to chemistry, accounting, design, and personal relationships. We recognise patterns like cause-and-effect, competition, and cooperation across physics, economics, biology, and social interactions. This cross-domain pattern recognition enables humans to learn rapidly in new fields by applying relevant knowledge from familiar ones.
Current AI architectures show no evidence of developing such flexible knowledge representations. Each system remains trapped within its training domain, unable to recognise that strategic principles from chess might apply to business planning, or that statistical reasoning from finance might illuminate medical diagnosis. The generalisation problem represents more than an engineering challenge; it reflects our incomplete understanding of how human intelligence achieves domain transfer.
The distinction between pattern matching and genuine understanding remains central to discussions of artificial intelligence. Current AI systems manipulate symbolic patterns without comprehension, operating through statistical associations rather than semantic understanding. Whether understanding requires consciousness, or whether sophisticated pattern matching might constitute a form of intelligence, represents an ongoing philosophical question with practical implications for AI development.
Humans grasp implications beyond literal statements; current language models, by contrast, manipulate symbolic patterns without comprehending what those statements imply.
Recent research provides sobering perspective on this limitation. Apple researchers investigating the mathematical reasoning capabilities of large language models found that these systems rely on sophisticated pattern matching rather than genuine logical reasoning.
When they introduced GSM-Symbolic, a benchmark that generates diverse mathematical questions through symbolic templates, they discovered that adding irrelevant but seemingly related information leads to a performance drop of up to 65%. This reveals that even our most advanced AI systems don't reason in the way vendors suggest; they pattern-match at extraordinary scale.
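The mechanics of such template-based benchmarks are simple enough to sketch. The toy generator below is not the GSM-Symbolic suite itself, merely an illustration of the idea: placeholders are filled with random names and numbers, and an irrelevant but plausible-sounding clause can be appended without changing the correct answer.

```python
import random

# Toy illustration (not the actual GSM-Symbolic benchmark) of generating question
# variants from a symbolic template, plus a distractor clause of the kind the Apple
# researchers found degrades model accuracy.

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")
DISTRACTOR = " Five of the apples picked on Monday were slightly smaller than average."

def make_variant(with_distractor=False, seed=None):
    rng = random.Random(seed)
    name = rng.choice(["Ava", "Noah", "Priya", "Tom"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    question = TEMPLATE.format(name=name, a=a, b=b)
    if with_distractor:
        question += DISTRACTOR   # irrelevant to the arithmetic, but superficially related
    return question, a + b       # the ground-truth answer is unchanged by the distractor

question, answer = make_variant(with_distractor=True, seed=1)
print(question, "->", answer)
```

A system that genuinely reasoned about quantities would ignore the distractor; a system matching surface patterns is easily thrown by it.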
The implications prove profound. Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. Yet this apparent sophistication masks fundamental limitations. The systems exhibit behaviours associated with understanding whilst possessing no internal model of meaning.
Understanding involves more than correct responses; it requires comprehension of relationships, context, and implications. When humans read about photosynthesis, we grasp not just the chemical equation but its significance for life on Earth, its relationship to climate, and its applications in technology. We understand why changing one variable affects others and can predict consequences in related systems. Current AI systems demonstrate no evidence of such deep comprehension, instead relying on statistical associations between text patterns learned during training.
Creative problem-solving through insight distinguishes intelligence from computation. Humans faced with novel challenges don't just interpolate from experience. We re-conceptualise problems. We discover non-obvious connections. We generate genuinely new solutions. We have "aha!" moments where scattered information suddenly crystallises into understanding. Current AI systems optimise within problem spaces. They cannot redefine the spaces themselves. They climb hills efficiently. They cannot recognise when different mountains offer better peaks.
The creative limitation manifests most clearly in breakthrough scientific thinking. Consider Darwin's insight that natural selection could explain biological diversity, Einstein's realisation that space and time are relative, or Watson and Crick's recognition that DNA forms a double helix. Each breakthrough required more than processing existing information; it demanded reframing entire conceptual frameworks. The scientists didn't solve problems within established paradigms but created new paradigms for understanding reality.
Modern AI systems show no evidence of such paradigm-shifting capability. They can optimise designs within known parameters, suggest improvements based on existing examples, and even combine elements in novel ways. But they cannot step outside established frameworks to create fundamentally new approaches. Their creativity remains combinatorial rather than revolutionary, rearranging existing elements rather than discovering new principles.
This limitation extends to everyday problem-solving. Humans excel at recognising when conventional approaches won't work and developing entirely different strategies. A chef facing missing ingredients doesn't just substitute similar items but might completely change the cooking method. An engineer encountering unexpected constraints doesn't just optimise existing designs but might adopt entirely different principles. This adaptive creativity remains beyond current AI capabilities.
The relationship between consciousness and intelligence remains hotly debated. Some researchers, including microprocessor inventor Federico Faggin, propose that consciousness represents a fundamental aspect of reality rather than an emergent property of complex systems. This perspective suggests that artificial systems, however sophisticated, may never achieve genuine understanding without consciousness. Others argue that functional equivalence might suffice, regardless of subjective experience.
Some argue consciousness emerges from sufficient complexity. They believe AGI will necessarily be conscious. Others contend philosophical zombies could exhibit all external signs of intelligence without inner experience. The debate matters practically. Conscious AI would demand different ethical considerations than unconscious systems, however intelligent.
Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is the consciousness hypothesised to be possible in artificial intelligence. The 2020s have witnessed exceptional focus on this question. Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with animals.
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and the lack of an empirical definition of sentience, directly measuring it may be impossible. This measurement challenge creates profound difficulties for recognising consciousness in artificial systems. Brain-Computer Interface (BCI) technology, which facilitates direct communication between the brain and external devices, emerges as one approach for understanding consciousness, demonstrating significant promise in research contexts.
Contemporary neuroscience provides partial insights into consciousness whilst highlighting remaining mysteries. A new paper suggests that four specific, separate processes combine as a "signature" of conscious activity. By studying the neural activity of people who are presented with two different types of stimuli – one which could be perceived consciously, and one which could not – researchers show that these four processes occur only in the former, conscious perception task.
However, it seems to be the convergence of these measures in a late time window (after 300 milliseconds), rather than the mere presence of any single one of them, which best characterises conscious trials. This suggests that consciousness involves integrated processing across multiple brain regions rather than activity in any single location. For artificial systems, this implies that consciousness (if achievable) would require sophisticated integration of multiple processing systems rather than simply more powerful individual components.
The hard problem of consciousness asks how physical processes create subjective experience, the feeling of what it's like to see red, taste chocolate, or feel sadness. For AGI development, consciousness questions prove unavoidable and consequential. If consciousness proves functionally necessary for intelligence, creating AGI requires understanding and implementing subjective experience. This challenge exceeds all current technical obstacles.
Recent neuroscience research has revealed that our brain registers what we see even if we are not consciously aware of it. We even react emotionally to the content, highlighting the complex relationship between conscious awareness and information processing. The hard problem becomes more acute when considering artificial systems. Even if we successfully replicate every function of the human brain, would the resulting system experience qualia?
David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organisation", that is, the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware. However, critics of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organisation.
If consciousness proves unnecessary for intelligence, AGI development can proceed without solving philosophy's deepest mystery. In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient, though the chatbot's behaviour was judged by the scientific community as likely a consequence of mimicry, rather than machine sentience.
Testing for consciousness in AGI presents profound challenges. Human consciousness inference relies on similarity. We assume other humans experience consciousness like ourselves. AGI might experience utterly alien consciousness. It might convince us of consciousness it lacks. No proposed test definitively detects consciousness. We might create conscious AGI without knowing. We might mistake sophisticated mimicry for genuine experience.
The Historical Path to Today's Confusion
Understanding how we arrived at today's AI capabilities requires examining key breakthroughs that shaped the field's trajectory. Each milestone revealed both progress and persistent limitations whilst building towards increasingly sophisticated systems.
The journey began with Alan Turing's revolutionary 1950 question: "Can machines think?" (Turing, 1950). Turing's contribution was transformative, shifting AI discussions from theoretical philosophy to practical experimentation. Alan Turing's AI legacy extends far beyond his initial question. His development of the Turing Test became a cornerstone in evaluating machine intelligence.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart.
The term 'artificial intelligence' was first coined by John McCarthy at the Dartmouth Conference in 1956. Many cite this time as the moment when AI was created. McCarthy, along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, organised this pivotal summer conference to explore ways that machines could simulate aspects of intelligence.
The decades following Dartmouth demonstrated both AI's potential and its constraints. Early successes seemed to herald rapid progress towards general intelligence. An early success of the microworld approach was SHRDLU, written by Terry Winograd of MIT. SHRDLU would respond to commands typed in natural English, such as "Will you please stack up both of the red blocks and either a green cube or a pyramid." The program could also answer questions about its own actions.
Although SHRDLU was initially hailed as a major breakthrough, Winograd soon announced that the program was, in fact, a dead end. The techniques pioneered in the program proved unsuitable for application in wider, more interesting worlds. Moreover, the appearance that SHRDLU gave of understanding the blocks microworld, and English statements concerning it, was in fact an illusion.
This pattern of impressive demonstrations followed by disappointing limitations characterised early AI development. Systems achieved remarkable performance within carefully constrained environments yet failed catastrophically when faced with real-world complexity. ELIZA, created by Joseph Weizenbaum in 1966, was one of the first natural language processing programs, capable of engaging in basic conversations with users using a pattern-matching approach.
ELIZA's design was based on the idea of "scripted" conversation, where the program followed predefined patterns and responses to engage users in dialogue. Despite its simplicity, the program demonstrated the potential of machines to understand and generate human-like text. Yet Weizenbaum himself became alarmed when users attributed more understanding to ELIZA than it possessed. His own secretary asked him to leave the room so she could have a private conversation with ELIZA, revealing how easily humans project consciousness onto sophisticated pattern-matching systems.
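ELIZA's "scripted conversation" amounts to little more than pattern-and-response rules. The minimal sketch below is in that spirit (it is not Weizenbaum's original DOCTOR script) and shows how plausible replies can emerge with no model of meaning at all.

```python
import re

# A minimal ELIZA-style exchange: scripted pattern/response pairs, no model of meaning.
# Illustrative sketch only; a real ELIZA script also reflects pronouns (my -> your).

RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about my job"))
# -> "How long have you been worried about my job?"  (echoed text, no understanding)
```

The illusion of a listener arises entirely from the user's willingness to read meaning into the echo, exactly the effect Weizenbaum observed in his secretary.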
These early experiences established recurring themes in AI development: impressive performance within narrow domains, brittleness when circumstances change, human tendency to overestimate AI capabilities, and the difficulty of distinguishing sophisticated simulation from genuine understanding. Each pattern would reappear throughout AI's subsequent evolution, from expert systems through neural networks to modern large language models.
Strategic games provided measurable benchmarks for AI progress whilst revealing the difference between narrow excellence and general intelligence. The progression from chess to Go to complex strategy games illustrated both AI's advancing capabilities and persistent limitations.
In 1997, IBM's Deep Blue victory over Garry Kasparov marked a watershed moment in AI development. The system processed 200 million chess positions per second, demonstrating the power of brute-force computation combined with sophisticated evaluation functions. Yet Deep Blue's triumph highlighted the difference between narrow and general intelligence. Despite mastering chess at superhuman levels, Deep Blue could not play checkers, discuss chess strategy, or apply its pattern recognition to any domain beyond the 64 squares. With checkers (solved by the Chinook engine), chess, and now Go all won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way they once did.
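The recipe Deep Blue scaled to 200 million positions per second is conceptually compact: search the game tree to a fixed depth and score the leaf positions with a hand-crafted evaluation function. The sketch below shows the minimax skeleton of that approach on a deliberately trivial stand-in game; it illustrates the technique, not Deep Blue's actual code.

```python
# Minimal minimax sketch: exhaustive fixed-depth search plus a leaf evaluation function.
# Toy illustration of the search-and-evaluate recipe behind engines like Deep Blue.

def minimax(position, depth, maximising, moves, evaluate):
    """Return the best evaluation reachable from `position` within `depth` plies."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = (minimax(child, depth - 1, not maximising, moves, evaluate)
              for child in children)
    return max(scores) if maximising else min(scores)

# Tiny stand-in "game": positions are integers, each move adds or subtracts 1,
# and the evaluation function simply prefers larger numbers.
moves = lambda p: [p + 1, p - 1] if abs(p) < 3 else []
evaluate = lambda p: p

print(minimax(0, depth=4, maximising=True, moves=moves, evaluate=evaluate))
# prints 0: the best the maximiser can force against optimal resistance in this toy game
```

Everything the engine "knows" about the game lives in the evaluation function, which is precisely why the same machinery cannot be pointed at checkers, business strategy, or anything outside the 64 squares without being rebuilt.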
The next major gaming breakthrough came with IBM's Watson in 2011. The supercomputer Watson, named after IBM founder Thomas J. Watson, became famous worldwide when it won the American quiz show Jeopardy! against two human competitors. Watson answers questions posed in natural language, and the hardware and data storage required to run it filled a server room, with the system itself roughly the size of ten refrigerators.
Watson's victory demonstrated advances in natural language processing and knowledge retrieval that exceeded chess-playing capabilities. Unlike Deep Blue's pure computational approach, Watson had to understand wordplay, cultural references, and ambiguous clues whilst searching through vast databases of information. Yet Watson's knowledge remained brittle and domain-specific, leading to well-documented failures when IBM attempted to apply its technology to medical diagnosis and other real-world applications.
In March 2016, AlphaGo beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap. AlphaGo's victory transcended previous game-playing achievements because Go's complexity had long been considered beyond computational reach: the game has more possible board positions than there are atoms in the observable universe. AlphaGo's 4–1 victory over Sedol was a groundbreaking moment in AI, showcasing the power of deep learning techniques to handle highly complex strategic tasks that had previously been beyond AI's capabilities.
The 2010s witnessed the emergence of increasingly sophisticated language models. The 2020s have been a period of remarkable artificial intelligence innovation, starting with the splash made by OpenAI's Generative Pre-trained Transformer 3 (GPT-3) in 2020. Its ability to generate human-like text blurred the lines between human and machine-generated content. Yet these achievements masked fundamental limitations. Current language models predict text based on statistical patterns rather than understanding meaning. They excel at mimicking human communication without possessing comprehension of the concepts they manipulate.
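The gap between prediction and comprehension is easiest to see in a deliberately tiny model. The sketch below predicts the next word purely from bigram counts; modern language models are incomparably more sophisticated, but the training objective, predicting the next token from statistics, is of the same kind.

```python
from collections import Counter, defaultdict

# A deliberately tiny "language model": predict the next word purely from bigram
# counts in a toy corpus. Illustrative sketch of statistical next-token prediction.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training; no meaning involved."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # "cat" (ties broken by first-seen order)
print(predict_next("sat"))   # "on"
```

The model produces fluent-looking continuations without any representation of cats, mats, or sitting; scale changes the fluency, not the nature of the trick.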
Today's AI Landscape and Its Limitations
Today's AI landscape presents remarkable capabilities alongside persistent limitations. Understanding both proves essential for realistic assessment of progress towards AGI.
An OpenAI employee, Vahid Kazemi, claimed in 2024 that the company had achieved Artificial General Intelligence, stating, "In my opinion, we have already achieved AGI and it's even more clear with O1." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks" (WindowsCentral, 2024). These statements have sparked debate, as they rely on a broad and unconventional definition of Artificial General Intelligence, traditionally understood as artificial intelligence that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard.
Recent research provides sobering perspective on current AI capabilities. Apple researchers investigating large language models found fundamental limitations in their reasoning abilities. Although there is some progress with agentic systems, the transformer model's reasoning and planning capabilities are still fairly basic. "We can't presume that we're close to AGI because we really don't understand current AI, which is a far cry from the dreamed-of AGI. We don't know how current AIs arrive at their conclusions, nor can current AIs even explain to us the processes by which that happens," says HP Newquist, author of The BrainMakers and executive director of The Relayer Group.
The term "artificial general intelligence" was coined in the mid-20th century. Initially, it denoted an autonomous computer capable of performing any task a human could, including physical activities like making a cup of coffee or fixing a car. But as advancements in robotics lagged behind the rapid progress of computing, most in the AI field shifted to narrower definitions of AGI. This definitional drift complicates assessment of progress towards AGI. Continuing to focus on claims of imminent AGI, he says, could muddle our understanding of the technology at hand and obscure AI's current societal effects.
Technical Barriers to True Intelligence
The path from current AI to AGI faces obstacles that represent more than engineering challenges. They reflect fundamental gaps in our understanding of intelligence itself.
Transfer learning at human levels eludes current approaches. Humans learning chess improve at all strategic board games. Students mastering calculus enhance their general reasoning abilities. Current AI systems show minimal transfer. A Go champion system cannot play chess. It cannot apply strategic thinking to business problems. The representations enabling human transfer learning remain mysterious. Are they abstract symbols? Distributed patterns? Something else entirely? Without breakthrough insights into transfer learning, AI remains narrow. This holds regardless of sophistication within domains.
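A toy experiment illustrates the brittleness, albeit in a far simpler setting than genuine cross-domain transfer: a one-dimensional classifier fitted on task A is applied unchanged to an unrelated task B and collapses to chance-level performance. The tasks, thresholds, and numbers below are all illustrative assumptions.

```python
import random

# Toy illustration of narrow learning: a decision rule fitted on task A performs
# at chance when applied unchanged to an unrelated task B. A loose analogy for the
# transfer problem, not a model of it.

random.seed(0)

def sample_task(shift):
    """Two noisy clusters; `shift` controls where the classes sit on the number line."""
    data = []
    for label in (0, 1):
        centre = shift + label * 2.0
        data += [(random.gauss(centre, 0.5), label) for _ in range(200)]
    return data

def fit_threshold(data):
    """'Train': take the midpoint between the two class means."""
    means = [sum(x for x, y in data if y == label) / sum(1 for _, y in data if y == label)
             for label in (0, 1)]
    return sum(means) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

task_a, task_b = sample_task(shift=0.0), sample_task(shift=10.0)  # task B lives elsewhere
threshold = fit_threshold(task_a)
print(f"Accuracy on task A: {accuracy(threshold, task_a):.2f}")   # ~0.98
print(f"Accuracy on task B: {accuracy(threshold, task_b):.2f}")   # ~0.50: every point lands on one side
```

Humans carry strategies, analogies, and abstractions between domains; a fitted decision rule carries nothing but the boundary it learned.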
The idea behind artificial general intelligence is creating the most human-like AI possible: a type of AI that can teach itself and essentially operate autonomously. One of the most obvious challenges, then, is building AI in a way that eventually allows developers to take their hands off, since the goal is for it to operate on its own.
Causal reasoning separates human intelligence from statistical pattern matching. Humans understand umbrellas don't cause rain despite perfect correlation. We model causal mechanisms. This enables counterfactual reasoning about what would happen if circumstances differed. Current AI systems excel at correlation but struggle with causation. They predict based on patterns without understanding underlying mechanisms. This limitation proves critical. Intelligent action requires understanding causal relationships, not just statistical regularities.
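A small simulation makes the point concrete. In the sketch below, rain causes umbrella use by construction, yet the observed conditional probabilities run strongly in both directions; nothing in the statistics alone reveals which variable is the cause. The probabilities used are illustrative assumptions.

```python
import random

# Toy illustration of why correlation alone cannot distinguish cause from effect.
# Here rain causes umbrella use by construction, yet "umbrellas" predicts "rain"
# almost as well as the reverse: the joint statistics are symmetric, the mechanism is not.

random.seed(0)
days = []
for _ in range(10_000):
    rain = random.random() < 0.3                              # rain is the underlying cause
    umbrella = rain if random.random() < 0.95 else not rain   # noisy response to rain
    days.append((rain, umbrella))

p_rain_given_umbrella = sum(r and u for r, u in days) / sum(u for _, u in days)
p_umbrella_given_rain = sum(r and u for r, u in days) / sum(r for r, _ in days)

print(f"P(rain | umbrella) = {p_rain_given_umbrella:.2f}")
print(f"P(umbrella | rain) = {p_umbrella_given_rain:.2f}")
# Both conditionals are high; neither says which way causation runs. Answering
# "what would happen if we banned umbrellas?" requires a causal model, not counts.
```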
Adnan Masood, chief artificial intelligence architect at digital transformation services company UST, says Artificial General Intelligence will need several capabilities that aren't possible just yet:
- The ability to generalise: AGI trained on medical data could also diagnose a mechanical failure.
- Open-ended learning: AGI will need to move beyond human guidance and reinforcement learning for defined tasks and pursue knowledge itself.
- Causal reasoning: AGI should be able to explain causality, for example that a crop failed because the soil was infected with bacteria.
Common sense reasoning confounds AI systems whilst being trivial for humans. Every human knows water flows downhill. Objects persist when unobserved. People have goals driving behaviour. This vast body of implicit knowledge enables effective reality navigation. AI systems lack this foundational understanding. This leads to failures seeming absurd to humans. Building common sense into AI requires either encoding millions of facts or discovering natural human acquisition methods. The first approach remains brittle and incomplete.
Embodied intelligence may prove essential for AGI. Human intelligence evolved through physical interaction with reality. Our concepts of up and down, before and after, cause and effect arise from bodily experience. Disembodied AI systems lack this grounding. This potentially limits their ability to understand reality as humans do. Building robotic bodies for AI faces immense challenges. It's unclear whether simulated embodiment suffices for developing human-like understanding.
"To get to AGI, we need advanced learning algorithms that can generalise and learn autonomously, integrated systems that combine various AI disciplines, massive computational power, diverse data and a lot of interdisciplinary collaboration," says Sergey Kastukevich, deputy Chief Technology Officer at gambling software company SOFTSWISS.
The computational requirements might exceed realistic possibilities. Human brains achieve general intelligence through 86 billion neurons making trillions of connections, operating continuously for years. Replicating this computationally might require energy exceeding global production. It might need hardware beyond physical possibilities. Quantum computing might help but introduces new uncertainties. The computational path to AGI might be theoretically possible but practically prohibitive.
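Back-of-envelope arithmetic shows why such energy estimates vary so widely. The neuron count below comes from the text; every other figure is an illustrative assumption, and shifting any of them by an order of magnitude or two moves the answer from megawatts towards the terawatt range.

```python
# Back-of-envelope sketch of brain-scale emulation energy. The neuron count is from
# the text; every other figure is an illustrative assumption, and plausible values
# span several orders of magnitude, which is why published estimates diverge so much.

NEURONS             = 86e9   # ~86 billion neurons (from the text)
SYNAPSES_PER_NEURON = 1e4    # assumption: order-of-magnitude estimate
MEAN_EVENT_RATE_HZ  = 1.0    # assumption: average synaptic events per synapse per second
OPS_PER_EVENT       = 1e4    # assumption: digital operations to emulate one synaptic event
JOULES_PER_OP       = 1e-11  # assumption: ~10 pJ per operation on current hardware

synaptic_events_per_s = NEURONS * SYNAPSES_PER_NEURON * MEAN_EVENT_RATE_HZ
power_watts = synaptic_events_per_s * OPS_PER_EVENT * JOULES_PER_OP

print(f"Synaptic events/s : {synaptic_events_per_s:.1e}")
print(f"Estimated power   : {power_watts / 1e6:.0f} MW  (a biological brain runs on ~20 W)")
# With these assumptions: roughly 86 MW. Raise the per-event operation count and the
# event rate by a few orders of magnitude and the estimate climbs into terawatts,
# rivalling global electricity generation.
```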
Current Approaches and Their Limitations
Researchers pursue multiple paths toward AGI. Each offers insights whilst revealing limitations. Understanding these approaches helps appreciate progress made and distances remaining.
Scaling current architectures represents the brute force approach. If narrow AI improves with more parameters and training data, perhaps sufficient scaling achieves generality. Large language models with trillions of parameters show impressive capabilities. This leads some to argue we're on a scaling path to AGI. Yet fundamental limitations persist regardless of scale. Pattern matching doesn't become understanding through size alone. Statistical prediction doesn't become causal reasoning through volume. Scaling brings quantitative improvements. It hasn't demonstrated qualitative leaps toward general intelligence.
Hybrid systems attempt combining narrow AI modules into general intelligence. This approach links perception modules, reasoning engines, memory systems, and action planners. It's like assembling specialised brain regions into a complete mind. The integration challenge proves formidable. Human intelligence isn't modular. Our capabilities integrate seamlessly. Creating interfaces between specialised AI systems remains unsolved. We need fluid intelligence rather than awkward handoffs. The hybrid approach offers engineering paths forward. It may miss essential aspects of cognitive unity.
Neuroscience-inspired architectures seek to replicate brain function more directly. Human brains achieve general intelligence. Perhaps copying their structure enables artificial versions. Neuromorphic computing mimics neural dynamics. Connectionist models replicate brain organisation. Yet our understanding of how brains create intelligence remains primitive. We map neural activity without grasping how computation creates consciousness. Copying brain architecture without understanding operating principles may prove futile. It's like medieval attempts at flight by attaching feathers to arms.
Evolutionary approaches attempt to recapitulate intelligence's natural development. Rather than designing intelligence directly, create environments where it evolves. Artificial life simulations, genetic algorithms, and competitive multi-agent systems explore this path. Results prove interesting but limited. Evolution required billions of years and countless organisms to produce human intelligence. Compressed evolutionary approaches might miss essential developmental stages. They might miss environmental pressures that shaped intelligence. The approach offers insights but no clear path to AGI.
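The core loop of such evolutionary methods is easy to state: maintain a population of candidate solutions, select the fittest, recombine and mutate them, and repeat. The sketch below evolves bitstrings towards a trivial objective; it illustrates the mechanism, not a path to intelligence.

```python
import random

# A minimal genetic algorithm, the mechanism that evolutionary approaches to AI scale up:
# a population of candidates, selection, crossover, and mutation. The "organism" here is
# just a bitstring scored by how many 1s it contains.

random.seed(42)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

def fitness(genome):
    return sum(genome)                      # toy objective: maximise the number of 1s

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("Best fitness after evolution:", max(map(fitness, population)), "/", GENOME_LEN)
```

The gulf between optimising a twenty-bit string and evolving a mind is the point: nature ran this loop over billions of years, countless organisms, and an environment we cannot fully simulate.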
Expert Predictions and the Reality Gap
The question of when AGI might arrive generates intense speculation among researchers, with predictions varying dramatically based on different assumptions about technical progress and definitional criteria.
According to Sam Altman, advancements in AI are accelerating rapidly, and he believes that by 2025 machines will think and reason like humans, a prediction he grounds in the significant progress made by AI models in recent years. Elon Musk expects development of an artificial intelligence smarter than the smartest of humans by 2026. Dario Amodei, Chief Executive Officer of Anthropic, expects singularity by 2026. In February 2025, entrepreneur and investor Masayoshi Son predicted it in two to three years (that is, 2027 or 2028). In March 2024, Nvidia Chief Executive Officer Jensen Huang predicted that artificial intelligence would match or surpass human performance on any test within five years, implying 2029.
Yet measured assessments tell a different story. Surveys of artificial intelligence experts estimate that Artificial General Intelligence will probably (over 50% chance) emerge between 2040 and 2050 and is very likely (90% chance) to appear by 2075. Once AGI is reached, most experts believe it will progress to super-intelligence relatively quickly, with a timeframe ranging from as little as 2 years (unlikely, 10% probability) to about 30 years (high probability, 75%).
Geoffrey Hinton estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years, and stressed the attendant existential risks. In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years.
Sceptical perspectives abound among experts and researchers. Artificial intelligence experts have said it would likely be 2050 before Artificial General Intelligence hits the market, even as OpenAI Chief Executive Officer Sam Altman points to 2025. As one sceptic puts it: "Technology, no matter how advanced, cannot be human, so the challenge is trying to develop it to be as human as possible. That also leads to ethical dilemmas regarding oversight. There are certainly a lot of people out there who are concerned about AI having too much autonomy and control, and those concerns are valid. How do developers make AGI while also being able to limit its abilities when necessary? Because of all these questions and our limited capabilities and regulations at the present [time], I think that 2025 isn't realistic."
Ethical and Social Implications
The prospect of AGI raises profound ethical questions that extend far beyond technical development to encompass fundamental issues of consciousness, rights, and humanity's relationship with artificial minds.
As further iterations of GPT prove themselves more and more intelligent, more and more capable of meeting a broad spectrum of demands, from acing the bar exam to building a website from scratch, their success, in and of itself, can't be taken as evidence of their consciousness. Even a machine that behaves indistinguishably from a human isn't necessarily aware of anything at all.
The possibility of conscious AI introduces extraordinary ethical considerations. Understanding how an AI works on the inside could be an essential step toward determining whether or not it is conscious. The philosopher Susan Schneider, for one, hasn't lost hope in tests: together with the Princeton physicist Edwin Turner, she has formulated what she calls the "artificial consciousness test."
Traditional ethical frameworks assume human agents making decisions. AGI development challenges these assumptions, requiring new approaches to responsibility, rights, and moral consideration. Some researchers therefore propose adding a Fourth Law to Asimov's famous laws of robotics: a robot or artificial intelligence must not deceive a human by impersonating a human being. We need clear boundaries: while human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources.
The Fourth Law addresses emerging challenges as artificial intelligence systems become more sophisticated in their interactions with humans. In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that artificial intelligence systems' ability to deceive humans represents a fundamental challenge to social trust.
Isaac Asimov's Three Laws of Robotics have influenced AI ethics discussions for decades, yet they prove insufficient for modern challenges. The Three Laws, and the Zeroth, have pervaded science fiction and are referred to in multiple books, films, and other media. They have also influenced thought on the ethics of artificial intelligence.
The laws are as follows: "(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disproves the contention he began with: it is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules.
We don't yet know how to assess whether future artificial intelligence systems will have moral status, and that gap could become a serious problem. The question of artificial intelligence rights becomes acute as systems approach human-level capabilities. Consider a whole-brain emulation: being functionally equivalent to a biological brain, it would plausibly report being sentient, and we'd have at least some reason to think it was correct, given the plausibility of functionalist accounts of consciousness. It would then be reasonable to regard such an emulation as morally worthy of concern comparable to a human.
Artificial General Intelligence development carries implications extending beyond technical achievement to encompass environmental sustainability and social transformation. The energy demands of advanced artificial intelligence systems raise serious sustainability questions. Current large language models require massive computational resources for training and operation. Artificial General Intelligence systems would likely demand orders of magnitude more energy, potentially conflicting with climate goals.
Economic transformation promises to be extraordinary. Artificial General Intelligence, possessing cognitive abilities that equal if not exceed a human's, will be capable of performing virtually all tasks. It will revolutionise the economy, turbocharge scientific discovery, propel the quality of life to unimagined heights, and grant near invulnerability to national security. Yet this transformation carries risks. But Artificial General Intelligence could also create a much darker world. The path that AGI takes will depend in large measure on who develops it, and how.
Artificial General Intelligence has the potential to transform society in ways that, historically, only governments have had the power to accomplish. That is a potential problem, because the public currently have no ability to challenge it, shape it, support it, or oppose it. This is a novel challenge for democratic nations.
The Challenge of Testing and Measuring Intelligence
Recognising AGI when it emerges proves as challenging as creating it. Traditional tests fall short of capturing general intelligence comprehensively.
A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an artificial intelligence system is sentient, as the artificial intelligence may simply mimic human behaviour without having the associated feelings. Alan Turing's Imitation Game has long been a benchmark for machine intelligence. But what it really measures is deception. The test's focus on mimicry rather than understanding limits its utility for assessing genuine intelligence.
Researchers have proposed various alternatives to the Turing Test. For example, in 2019, French computer scientist and former Google engineer Francois Chollet released the Abstraction and Reasoning Corpus, now known as ARC-AGI. In this test, an artificial intelligence model is repeatedly given a few examples of coloured squares arranged in different patterns on a grid and must infer the underlying rule in order to complete a new grid.
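A toy task in the spirit of ARC (not an actual benchmark item) makes the format concrete: a few input/output grid pairs demonstrate a hidden transformation, and the solver must induce the rule and apply it to a fresh input.

```python
# A toy task in the spirit of ARC-AGI (not an actual benchmark item): each grid cell
# holds a colour index, a few input/output examples demonstrate a hidden rule, and the
# solver must apply that rule to a new input.

def mirror(grid):
    """The hidden rule for this toy task: reflect the grid left-to-right."""
    return [list(reversed(row)) for row in grid]

train_examples = [
    ([[1, 0, 0],
      [0, 2, 0]], [[0, 0, 1],
                   [0, 2, 0]]),
    ([[3, 3, 0],
      [0, 0, 4]], [[0, 3, 3],
                   [4, 0, 0]]),
]
test_input = [[5, 0, 0],
              [0, 0, 6]]

# ARC measures whether a system can induce a rule like `mirror` from the examples alone;
# here we simply verify that the examples are consistent with it and apply it.
assert all(mirror(inp) == out for inp, out in train_examples)
print(mirror(test_input))   # [[0, 0, 5], [6, 0, 0]]
```

Humans typically solve such puzzles after one or two examples; the benchmark asks whether a machine can do the same without having memorised anything like them.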
Recently, OpenAI's soon-to-be released o3 model achieved vast improvement on ARC-AGI compared to previous artificial intelligence models, leading some researchers to view it as a breakthrough towards Artificial General Intelligence. However, this success may be partly due to OpenAI funding the benchmark's development and having access to the testing dataset while developing o3.
It's not just ARC-AGI that's contentious. Determining whether an artificial intelligence model counts as Artificial General Intelligence is complicated by the fact that every available test of artificial intelligence ability is flawed. Just as Raven's Progressive Matrices and other Intelligence Quotient tests are imperfect measures of human intelligence and face constant criticism for their biases, so too do Artificial General Intelligence evaluations.
Why AGI Remains Elusive
Despite decades of predictions about imminent Artificial General Intelligence, fundamental barriers point to much longer timelines than optimistic projections suggest.
Theoretical foundations remain incomplete. We lack mathematical frameworks capturing general intelligence's essential features. Current theories address narrow aspects. These include learning, optimisation, and information processing. We lack unified understanding. It's like trying to build aircraft before discovering aerodynamics. Engineering might eventually stumble onto solutions. Theory-guided design would accelerate progress immensely.
The complexity gulf between narrow and general intelligence appears vast. Each narrow AI breakthrough reveals how much more remains. Solving chess didn't lead to general game playing. Mastering language generation didn't create understanding. Pattern recognition didn't enable reasoning. The gap isn't narrowing linearly. Each advance reveals new depths of challenge.
Computational requirements, as discussed above, might exceed realistic possibilities. Replicating the brain's 86 billion neurons and trillions of connections, operating continuously for years, could demand energy exceeding global production and hardware beyond physical limits. Quantum computing might help but introduces new uncertainties. The computational path to AGI might be theoretically possible but practically prohibitive.
Social and ethical constraints could slow development even if technical barriers fall. As Artificial General Intelligence possibilities become clearer, societal concern grows. Demands for careful development and strong safeguards intensify. Research moratoria, international treaties, and development restrictions might emerge. Humanity might collectively decide Artificial General Intelligence risks exceed benefits. Development could be slowed or stopped entirely. Technical possibility doesn't guarantee social permission.
Preparing for Multiple Futures
The uncertainty surrounding Artificial General Intelligence creates planning challenges. Will it arrive in decades or centuries? Will it emerge gradually or suddenly? Will it be beneficial or dangerous? Organisations must prepare for multiple scenarios whilst avoiding paralysis from uncertainty.
Hedging strategies make sense given timeline uncertainty. Invest in narrow artificial intelligence delivering immediate value. Build governance frameworks extensible to more sophisticated systems. Develop workforce skills transferable across scenarios. Create organisational cultures embracing change. These investments pay dividends regardless of Artificial General Intelligence timelines. They position organisations for various futures.
Monitoring developments helps detect Artificial General Intelligence approach. Watch for fundamental breakthroughs, not incremental improvements. Transfer learning advances might signal progress. Causal reasoning achievements matter more than performance metrics. Integration successes across domains suggest genuine advancement. Creating early warning systems helps organisations adapt strategies as Artificial General Intelligence probability shifts.
Ethical frameworks need development before Artificial General Intelligence, not after. What values should guide Artificial General Intelligence development? How do we encode human values into systems potentially surpassing human intelligence? What rights might conscious Artificial General Intelligence possess? These questions require societal dialogue. Waiting until Artificial General Intelligence arrives leaves no time for careful consideration. Beginning ethical discussions now enables thoughtful rather than reactive policies.
International cooperation becomes essential as Artificial General Intelligence approaches. No single nation should determine humanity's future with Artificial General Intelligence. Development races could prioritise speed over safety. Shared frameworks, safety standards, and development protocols benefit everyone. Starting international dialogue whilst Artificial General Intelligence remains distant enables relationship building. Crisis moments are poor times for initiating cooperation.
Some policy commentators argue it is increasingly probable that the next U.S. presidential term could see the development of Artificial General Intelligence (AGI). If that happens, they contend, everything will change, and generative AI, the artificial intelligence that can produce images and text through ChatGPT and other applications, will seem like the Kitty Hawk Flyer compared with the B-21. On this view, developing AGI will necessarily depend upon the private sector, but Washington can assist through a combination of funding and technical support on a scale similar to that provided during the Cold War.
The portfolio approach makes most sense given Artificial General Intelligence uncertainty. Invest primarily in proven artificial intelligence categories delivering immediate value whilst allocating smaller portions to advanced research with Artificial General Intelligence relevance. This balanced approach captures current value whilst maintaining future options. Building learning organisations matters more than specific technology investments. Artificial General Intelligence's form remains unknowable, but organisational learning capabilities provide value regardless. Invest in cultures embracing change, systems capturing knowledge, and workforces comfortable with adaptation.
The Human Element
Preparing for Artificial General Intelligence resembles preparing for first contact with alien intelligence. The event would transform everything. Its nature remains fundamentally unknowable beforehand. This challenge doesn't justify ignoring Artificial General Intelligence possibility. It demands humble, flexible preparation acknowledging radical uncertainty.
Capability building beats specific planning. Rather than detailed Artificial General Intelligence response plans, build organisational capabilities valuable across scenarios. Adaptability, learning speed, ethical reasoning, and stakeholder communication matter regardless of Artificial General Intelligence's form. Organisations mastering these capabilities navigate any future effectively. Those fixated on specific predictions might find themselves perfectly prepared for futures that never arrive.
Maintaining human agency remains central. Artificial General Intelligence discussions often assume human irrelevance once artificial intelligence surpasses us. This assumption becomes self-fulfilling if we accept it. Humans might lack raw computational power compared to Artificial General Intelligence. We retain purpose definition, value judgment, and meaning creation. Preparing for Artificial General Intelligence shouldn't mean preparing for human obsolescence. It means preparing for radically enhanced human capability.
The philosophical preparation matters as much as practical planning. Artificial General Intelligence forces confrontation with fundamental questions. What defines humanity if intelligence isn't unique to us? What provides meaning if Artificial General Intelligence surpasses all human achievement? How do we maintain dignity alongside superior intelligences? These questions require cultural evolution. Technical development without philosophical preparation leaves humanity adrift.
True artificial intelligence represents humanity's most audacious aspiration. Creating minds equalling or exceeding human capabilities would crown our technological achievements. Yet this goal remains frustratingly distant despite decades of predictions and remarkable progress in narrow domains.
The journey from current artificial intelligence to Artificial General Intelligence teaches valuable lessons regardless of destination. Understanding intelligence deepens through attempting artificial versions. Building beneficial artificial intelligence systems prepares for beneficial Artificial General Intelligence. Governing narrow artificial intelligence develops frameworks extensible to general intelligence. The path provides value independent of arrival at the destination.
Most importantly, the Artificial General Intelligence question forces essential considerations about human values, technological purpose, and societal direction. What makes humans valuable? How should intelligence be deployed? What futures do we choose to build? These questions matter whether Artificial General Intelligence arrives tomorrow or never.
The future remains unwritten. Artificial General Intelligence might emerge through surprising breakthroughs or require centuries more research. It might prove impossible given physical constraints. Organisations succeed not by predicting Artificial General Intelligence's arrival but by building capabilities valuable across scenarios whilst maintaining human agency and purpose.
Whether Artificial General Intelligence arrives or not, the greater achievement might be enhancing existing intelligence through thoughtful human-AI collaboration. The most profound transformation might come not from replacing human intelligence but from amplifying it through tools that extend rather than substitute for human capabilities.
The horizon of true machine understanding may always remain just that, a horizon that recedes as we approach it. Yet the pursuit itself drives innovation, forces ethical reflection, and compels us to understand intelligence more deeply. In this pursuit, we might discover that the most important intelligence to develop remains fundamentally human: the wisdom to use our tools well, the judgement to choose worthy goals, and the compassion to ensure technology serves human flourishing.
Artificial General Intelligence captures imagination because it represents ultimate achievement. Yet the more immediate achievement lies within reach: creating artificial intelligence systems that genuinely serve human purposes whilst respecting human values. Whether artificial minds ever achieve consciousness, current minds can achieve wisdom in how we shape our technological future.
The dream of thinking machines, born in Victorian workshops and wartime code-breaking centres, continues evolving. Where it leads depends not on artificial intelligence alone but on the human intelligence guiding its development. The future of intelligence, artificial and human alike, depends on choices being made today.
What the Research Shows
Organisations that succeed build progressively, not revolutionarily
The Five A's Framework
Your Path Forward
A Progressive Approach to AI Implementation
Each level builds on the previous, reducing risk while delivering value.