The Five A's of AI - Chapter 14
The Future of AI: Beyond the Hype, What Comes Next
Navigating Uncertainty with Historical Patterns and Observable Realities
By Owen Tribe, author of "The Five A's of AI" and strategic technology adviser with 20+ years delivering technology solutions across a range of industries
Chapter Highlights
- $150.8bn global AI investment in 2024 (Stanford HAI, 2025)
- 2026–2028 predicted bubble correction timeline (historical bubble cycles)
- 2040–2050 AGI emergence estimated by surveyed AI experts (AI expert surveys, 2024)
- $632bn projected AI spending by 2028 (IDC, 2024)
- Build capabilities that deliver value regardless of AGI arrival

Chapter 1 - The Dream of Thinking Machines (1830s–1970s)
Chapter 2 - Digital Revolution (1980s–2010)
Chapter 3 - Intelligence Explosion
Chapter 4 - AI Paralysis
Chapter 5 - The Five A's Framework
Chapter 6 - Automation Intelligence
Chapter 7 - Augmented Intelligence
Chapter 8 - Algorithmic Intelligence
Chapter 9 - Agentic Intelligence
Chapter 10 - Artificial Intelligence
Chapter 11 - Governance Across the Five A's
Chapter 12 - Strategic Implementation
Understanding The Future of AI
What Does AI's Future Mean for Organisations?
The future of AI sits at the intersection of technological possibility, investment reality, and organisational preparedness. Understanding historical patterns and current trends enables strategic decision-making despite inherent uncertainty about when breakthroughs will arrive and how transformative their capabilities will prove.
The Forward-Looking Implementation Pattern
Organisations preparing effectively for AI's future typically achieve:
- Strategic resilience through capability building that delivers value today whilst positioning for tomorrow's breakthroughs
- Investment discipline through bubble awareness that captures opportunity whilst managing cyclical risk
- Scenario planning through AGI preparation that maintains options across multiple futures
- Infrastructure development through systematic progression that supports current applications and future possibilities
- Ethical frameworks through proactive governance that shapes AI development rather than reacting to consequences
Whilst You Delay
- Technology leaders position for bubble correction whilst competitors over-extend during peak hype
- Research organisations develop AGI frameworks that shape breakthrough governance rather than scrambling after emergence
- Forward-thinking firms build AI literacy that enables rapid adaptation when capabilities shift
- Strategic investors accumulate capabilities at reasonable valuations rather than inflated bubble prices
- Ethical pioneers establish governance models that influence regulatory frameworks and societal norms
The Research: Why Future Preparation Matters
1. Investment Cycles and Bubble Dynamics
AI investment follows recognisable patterns from previous technology revolutions, enabling informed strategic positioning despite market volatility.
Market Reality
Global private AI investment reached $150.8 billion in 2024, marking a 44.6% increase year-on-year (Stanford HAI, 2025). This concentration exceeds even the dot-com peak, with Nvidia's market capitalisation reaching $3 trillion in June 2024 driven primarily by AI chip demand. Historical technology bubble patterns suggest probable correction between 2026 and 2028, though projected AI spending could reach $632 billion by 2028 (IDC, 2024).
Key Distinction
Investment bubbles destroy value for over-extended participants whilst creating opportunities for disciplined builders. The dot-com crash eliminated thousands of companies yet the underlying internet technologies became foundational to modern business. Strategic organisations position for correction through disciplined investment, capability building, and valuation awareness, capturing opportunity whilst managing cyclical risk.
2. The AGI Timeline and Preparation Strategy
Artificial general intelligence predictions vary dramatically, requiring scenario-based planning rather than single-timeline commitment.
Expert Consensus
Leading technology executives predict AGI emergence between 2025 and 2029, with Sam Altman suggesting 2025, Elon Musk predicting 2026, and Jensen Huang estimating 2029. Surveyed AI experts offer more measured assessments, however, estimating AGI emergence between 2040 and 2050, with a 90% probability of appearance by 2075 (AI expert surveys, 2024). Large language models currently operate through next-token prediction based on statistical patterns, with research showing performance drops of up to 65% when irrelevant information is introduced.
Success Factors
Organisations succeeding across scenarios invest primarily in proven AI categories delivering immediate value whilst allocating smaller portions to advanced research. This portfolio approach captures current returns whilst maintaining strategic options. AGI preparation focuses on ethical frameworks and governance models rather than betting on specific arrival timelines. Maintaining human agency remains central, as preparation shouldn't mean accepting obsolescence but rather positioning for radically enhanced human capability.
3. Infrastructure Ownership and Strategic Positioning
Who owns and controls AI infrastructure profoundly influences development trajectory, competitive dynamics, and societal benefit distribution.
Current Status
Private sector companies, particularly major technology firms, dominate AI infrastructure ownership. The computational resources, training data, and deployment platforms enabling modern AI primarily belong to corporations rather than public institutions. China launched a $138 billion government AI fund in 2025, whilst the European Union mobilises €200 billion combining various funding sources (Government announcements, 2025). Data centre power requirements doubled between 2022 and 2023 (MIT News, 2025).
Strategic Implications
Infrastructure concentration creates remarkable innovation velocity whilst raising questions about access, control, and societal benefit. Organisations must navigate this landscape through partnership strategies, selective infrastructure development, and strategic positioning. The goal becomes capturing AI capabilities through appropriate ownership models whilst participating in governance discussions shaping how infrastructure access and control evolve. Building proven capabilities today positions organisations for multiple infrastructure scenarios tomorrow.
Chapter 14
Beyond the Hype: What Comes Next
The Impossibility of Prediction
No one can predict the future. This fundamental truth bears repeating at the outset of any discussion about where artificial intelligence might lead us. History is littered with confident predictions that proved spectacularly wrong. In 1943, IBM's Thomas Watson allegedly declared that the world market for computers might be around five machines. In 1977, Digital Equipment Corporation's Ken Olsen famously stated there was no reason anyone would want a computer in their home. Bill Gates supposedly said in 1981 that 640KB of memory ought to be enough for anybody.
These weren't fools making these pronouncements. They were intelligent, informed leaders in their fields. Their errors remind us that technological change follows patterns that often surprise even experts. Breakthrough innovations emerge from unexpected directions. User behaviour evolves in ways no one anticipates. External forces reshape entire industries overnight.
What We Can Recognise: Patterns from the Past
Yet whilst we cannot predict the future, we can recognise patterns from technological history that provide useful guidance. The progression from research labs to commercial deployment follows predictable stages. Investment cycles exhibit recurring characteristics. Infrastructure requirements scale in foreseeable ways. Social and political responses to transformative technologies echo previous experiences.
These patterns don't guarantee specific outcomes, but they illuminate the forces shaping technological development. They help distinguish between genuine trends and temporary noise. Most importantly, they reveal the problems that need solving and the constraints that must be addressed, pointing toward technologies and approaches that seem most likely to succeed.
What We Can Observe: Current Realities
Standing at the midpoint of 2025, we find ourselves in a peculiar position. The AI revolution feels both inevitable and overblown, transformative and underwhelming, revolutionary and evolutionary. This contradiction isn't accidental. It reflects the nature of technological change itself, which rarely unfolds as smoothly as venture capital presentations suggest.
The current moment bears striking resemblance to 1999, when every company needed a ".com" strategy and valuations soared based on traffic rather than revenue. Yet it also echoes 1876, when Alexander Graham Bell's telephone seemed like an interesting curiosity rather than the foundation of global communication. Understanding which analogy proves more accurate requires examining not just the technology, but the observable patterns emerging from economic, environmental, and social forces, alongside the critical question of who owns and controls the infrastructure upon which AI depends.
Problems That Need Solving: The Infrastructure Question
The most pressing challenge facing AI development isn't algorithmic sophistication but infrastructure ownership and control. Historical patterns suggest that transformative technologies require infrastructure that can support mass adoption whilst addressing sovereignty, security, and sustainability concerns. The current AI infrastructure landscape reveals significant problems that technology and policy must solve:
First, the concentration of AI capabilities in private hands creates strategic vulnerabilities for nations and societies. Second, the environmental impact of current AI infrastructure proves unsustainable at scale. Third, the uneven global distribution of AI resources threatens to create permanent technological inequalities. These problems point toward specific technological and governance solutions that seem most likely to emerge.
The Infrastructure Ownership Divide
Critical AI infrastructure today is overwhelmingly owned by private institutions, creating strategic vulnerabilities that governments are only beginning to address. In 2024, over 50% of all global VC funding went to AI startups, totalling $131.5 billion and marking a 52% year-on-year increase. This private investment surge has created a landscape where critical AI capabilities remain in corporate hands, often concentrated in a handful of companies.
The US and China have by far the most public GPU clusters in the world. China leads the US in the number of GPU-enabled regions overall; however, the most advanced GPUs are highly concentrated in the United States. This concentration creates what researchers call "compute deserts": areas where no GPUs are available for hire at all, highlighting the uneven global distribution of AI infrastructure.
The private sector's control over AI infrastructure extends beyond compute resources to encompass data centres, cloud services, and the algorithms themselves. Major technology companies own the vast majority of large-scale training infrastructure, whilst governments struggle to build equivalent capabilities. This imbalance has given rise to calls for "public AI": the notion that governments should build publicly owned and controlled AI infrastructure to serve societal goals rather than profit motives.
This private dominance creates significant sovereignty concerns that are reshaping national AI strategies. Today, it is a challenge to find a country that hasn't been locked in some kind of bruising political, legal or regulatory battle with those same technology firms. The dependence on foreign-owned AI infrastructure forces difficult choices between technological capability and national autonomy.
Europe's dependence on US cloud infrastructure, for instance, is seen by many on the continent as a strategic vulnerability. Despite initiatives like the Gaia-X programme, this dependency appears likely to persist, forcing European nations to consider alternative approaches to technological sovereignty.
It is worth noting that the United States and China dominate AI not because of looser rules but because of their aggressive state-backed investments in infrastructure, access to vast computing power, and more seamless public-private partnerships. This observation highlights how successful AI development requires coordination between public policy and private investment rather than relying on either sector alone.
Governments worldwide are recognising the strategic importance of public AI infrastructure, though implementation remains limited. China's DeepSeek, for instance, reportedly trained its R1 model for just $6 million, though that figure is almost certainly a substantial underestimate of the true cost. Even so, such sums remain a small fraction of what Mistral, Europe's biggest homegrown AI company, raised in a single funding round.
The scale of required investment creates significant challenges for public sector initiatives. These figures are dwarfed by the €30-35 billion that one study estimated it would cost to build a "CERN for AI" in its first three years alone, illustrating the vast gap between current public investment and the resources needed for competitive AI infrastructure.
China represents the most aggressive public sector approach to AI infrastructure development. China is launching a new 1 trillion-yuan (~$138 billion) government-backed fund to support emerging technologies including AI and semiconductors. This massive investment demonstrates how state resources can be mobilised for strategic technology development.
Different regions are pursuing distinct approaches to AI infrastructure ownership that reflect their political systems and strategic priorities.
Over the past five years, the US government has committed just over $12 billion to AI-related obligations, most of which have gone into R&D. This relatively modest public investment relies on private sector innovation whilst using export controls and regulatory measures to maintain strategic advantages.
China's overall government-led funding likely exceeds investment by US federal and state governments; however, total private-sector investment in AI companies in the US vastly outmatches private-sector investment in China. China's approach prioritises strategic autonomy and national control over AI capabilities.
Europe urgently needs to secure the computing power, energy supply, and industrial data ecosystems required to support its economic and strategic AI ambitions for the decade ahead. The European Union is mobilising €200 billion for AI investments by combining public and private funds, though this remains modest compared to private sector investment in other regions.
Recognisable Patterns: The Investment Cycle
The mathematics of the current AI investment cycle follow patterns recognisable from previous technology bubbles. Global private AI investment reached $150.8 billion in 2024, marking a 44.6% increase on the previous year's $104.3 billion. This concentration exceeds even the dot-com peak. Nvidia's market capitalisation briefly exceeded $3 trillion in June 2024, making it the world's most valuable company based largely on demand for AI chips.
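The arithmetic behind these headline figures is easy to verify. The short sketch below uses only the Stanford HAI and IDC numbers cited in this chapter, checking the stated year-on-year growth and the compound annual growth rate the 2028 spending projection would require:

```python
# Sanity-check the investment figures cited in this chapter.
# All amounts are in billions of dollars (Stanford HAI, 2025; IDC, 2024).

investment_2023 = 104.3
investment_2024 = 150.8
projected_2028 = 632.0

# Year-on-year growth for 2024
yoy = investment_2024 / investment_2023 - 1
print(f"2023 -> 2024 growth: {yoy:.1%}")   # ≈ 44.6%, matching the cited figure

# Compound annual growth rate implied by the 2028 projection
years = 2028 - 2024
cagr = (projected_2028 / investment_2024) ** (1 / years) - 1
print(f"Implied CAGR, 2024 -> 2028: {cagr:.1%}")   # ≈ 43% per year
```

In other words, the IDC projection assumes the 2024 growth rate is sustained almost unchanged for four more years, which is precisely the kind of straight-line extrapolation that historical bubble cycles caution against.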
Historical patterns suggest this represents a classic speculative bubble with predictable characteristics. The railway mania of the 1840s, the electricity boom of the 1890s, the automobile bubble of the 1920s, and the dot-com frenzy of the 1990s all followed similar trajectories: initial breakthrough, speculative investment, overvaluation, correction, then sustainable development based on genuine utility.
These numbers reflect genuine technological capability combined with extraordinary speculation. Unlike the dot-com bubble, which was built on the promise of future profitability, today's AI investments are generating immediate returns for many implementations. Companies integrating generative AI report substantial returns for every pound invested, whilst retailers using machine learning see annual profit growth of approximately 8%. The technology works, but perhaps not at the scale or speed that current valuations assume.
The bubble characteristics are unmistakable. Every software application now claims AI capabilities, regardless of actual intelligence. Startups with minimal revenue achieve unicorn valuations based on AI buzzwords. Traditional companies rebrand existing products as "AI-powered" to attract investment. Venture capital flows to any company mentioning large language models or machine learning. This pattern suggests inevitable correction, though the timing remains uncertain.
History provides guidance on what happens when technology bubbles burst. The dot-com crash of 2000-2002 destroyed trillions in market value, eliminated thousands of companies, and triggered a global recession. Yet the underlying technologies survived and became the foundation of modern business. Amazon's stock fell dramatically from its peak but the company emerged stronger. Google, founded during the bubble's aftermath, became one of history's most valuable companies.
The AI bubble will likely follow a similar pattern. Overvalued companies with unsustainable business models will disappear. Startups burning cash on speculative research will fail. But the fundamental capabilities will remain valuable. The infrastructure being built today will support future applications. The talent being trained in AI techniques will continue innovating after the hype subsides.
When the correction comes, likely between 2026 and 2028 based on historical bubble cycles, the consequences will be severe but temporary. Venture funding will contract sharply, eliminating marginal players. Public company valuations will reset to levels reflecting actual rather than projected capabilities. Unemployment will rise in technology sectors as companies reduce headcount. Consumer and business spending on AI will moderate as organisations focus on proven applications rather than experimental projects.
Yet this correction will also create opportunities. Talented engineers will leave failing startups to join established companies with sustainable business models. Computing resources will become cheaper as demand moderates. The noise surrounding AI will diminish, allowing clearer assessment of what actually works. The most promising applications will attract investment at reasonable valuations.
The pattern resembles every major technological transition: railways in the 1840s, electricity in the 1890s, automobiles in the 1920s, computers in the 1980s, the internet in the 1990s. Each generated speculative bubbles followed by corrections that cleared away unsustainable businesses whilst preserving fundamental innovations. The AI revolution will unfold similarly.
BlackRock, Global Infrastructure Partners (GIP, a part of BlackRock), Microsoft, and MGX have announced that NVIDIA and xAI will join the Global AI Infrastructure Investment Partnership. These massive public-private partnerships suggest that even during potential bubble conditions, long-term infrastructure investment continues, creating foundations that will survive any market correction.
US export controls have also almost certainly prompted the Chinese government to accelerate funding for its AI hardware and semiconductor industries and high-performance computing infrastructure. These geopolitical pressures are driving infrastructure development that transcends market cycles, as nations view AI capabilities as strategic necessities rather than speculative investments.
Environmental Unsustainability Points Toward Space Solutions
The environmental impact of AI represents an observable problem that existing terrestrial infrastructure cannot solve sustainably. Data centres in North America alone saw power requirements nearly double between 2022 and 2023, driven largely by AI training and inference. The International Energy Agency projects significant growth in global electricity demand over the next three years, with data centre expansion as a primary driver.
This trajectory proves mathematically unsustainable on Earth, where energy generation still relies heavily on fossil fuels and cooling requirements consume massive resources. Yet the problem points toward a specific technological solution that addresses multiple constraints simultaneously: space-based computing infrastructure.
The space-based solution isn't speculative; it's already under development. Observable facts point toward this becoming a practical reality within the next decade. Solar radiation in space provides unlimited, consistent power without atmospheric interference. The vacuum of space transforms the cooling problem: heat dissipates through radiation rather than requiring energy-intensive air conditioning. Orbital positions enable global connectivity without the latency of terrestrial networks.
The emergence of satellite constellations like Starlink and OneWeb creates the infrastructure necessary for space-based computing. Starlink alone operates thousands of satellites with plans for tens of thousands more, creating a mesh network that could support distributed processing. Inter-satellite communication using laser links enables data transfer at light speed without atmospheric interference or ground-based routing delays. This infrastructure could support AI processing at orbital edge nodes, bringing computation closer to users whilst eliminating Earth-based environmental impact.
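The latency claim can be bounded from first principles. As a rough sketch, assuming a ~550 km orbital altitude (typical of current Starlink shells; an assumption here, not a figure from the text), light-speed propagation alone gives:

```python
# Back-of-envelope latency to a low-Earth-orbit processing node.
# Assumes straight-line propagation at the speed of light from a user
# directly beneath the satellite; real links add switching and queuing
# overhead, so these are lower bounds.

C = 299_792_458          # speed of light in vacuum, m/s
ALTITUDE_M = 550_000     # assumed LEO altitude, metres

one_way_ms = ALTITUDE_M / C * 1000
round_trip_ms = 2 * one_way_ms
print(f"One-way: {one_way_ms:.2f} ms, round trip: {round_trip_ms:.2f} ms")
```

The round trip comes out under 4 ms, which is why orbital edge nodes can plausibly compete with terrestrial networks that route traffic through distant ground-based data centres.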
Space-based AI also enables new security architectures that transcend traditional client-server models. Blockchain technology, often dismissed as energy-intensive and slow, becomes practical in space where energy is unlimited and processing can be distributed across satellite constellations. Unlike traditional encryption that requires central servers for key management, blockchain creates trustless systems where data integrity depends on mathematical proof rather than institutional authority.
The distributed nature of satellite constellations makes blockchain particularly attractive. Instead of centralising data in ground-based facilities vulnerable to attack or control, information gets distributed across hundreds of satellites. Reconstructing complete datasets requires accessing multiple nodes in the constellation. This task becomes computationally difficult for adversaries but straightforward for authorised users with the proper sequence keys. This approach could revolutionise data security for sensitive applications from financial transactions to government communications.
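The distribution idea sketched above can be illustrated with a minimal all-or-nothing XOR split, in which every share is needed to reconstruct the data. This is a deliberately simplified stand-in for the threshold schemes a real constellation would use; all names and parameters here are illustrative:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list[bytes]:
    """Split data into n shares. The first n-1 shares are random; the
    last XORs them with the data, so any n-1 shares reveal nothing."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original data."""
    return reduce(xor_bytes, shares)

payload = b"orbital telemetry"
shares = split(payload, 5)          # e.g. one share per satellite
assert reconstruct(shares) == payload
```

A production system would use a k-of-n threshold scheme instead, so that losing a satellite does not lose the data, but the underlying principle is the same: no single node holds anything meaningful on its own.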
The concept of space-based data centres has moved beyond theoretical discussion to active development and deployment. Companies like Starcloud, Axiom Space, and Lonestar Data Holdings are pioneering this frontier with concrete missions and substantial investment.
Technical analysis suggests that orbital data centres offer compelling economic advantages that terrestrial facilities cannot match. Space-based solar arrays achieve dramatically higher capacity factors than terrestrial solar farms, with significantly higher peak power generation because atmospheric attenuation is eliminated. This translates to energy costs substantially lower than typical wholesale electricity rates in developed nations.
The cooling advantages prove equally dramatic. Deep space's effective temperature enables passive radiative cooling that eliminates the energy-intensive chillers required by terrestrial data centres. Calculations show that modest radiator plates can dissipate substantial wattage whilst operating at comfortable temperatures, providing cooling capacity that scales efficiently without water usage or complex mechanical systems.
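The passive-cooling claim can be checked with the Stefan-Boltzmann law. The radiator size, temperature, and emissivity below are illustrative assumptions, not figures from any specific design:

```python
# Illustrative check of the passive radiative cooling claim.
# Radiator area, temperature, and emissivity are assumed values.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2: float, temp_k: float,
                   emissivity: float = 0.9, sink_k: float = 3.0) -> float:
    """Net power radiated to deep space (~3 K background), in watts."""
    return emissivity * SIGMA * area_m2 * (temp_k**4 - sink_k**4)

# A single 10 m^2 plate held at 60 C (333 K)
power_w = radiated_power(10, 333)
print(f"{power_w / 1000:.1f} kW dissipated")   # ≈ 6.3 kW
```

Even under these modest assumptions, a plate the size of a small room sheds several kilowatts with no moving parts, no water, and no chillers, which is the basis of the scaling claim above.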
For substantial computing clusters operated over extended periods, projections show total costs dramatically lower in space versus Earth, primarily due to massive energy savings. Modular designs enable linear scaling to gigawatt levels using container-based compute modules launched on next-generation heavy-lift vehicles.
Some companies have already achieved practical milestones. Axiom Space announced the launch of its first two Orbital Data Centre nodes by the end of 2025, marking the transition from research to commercial deployment. Even more remarkably, Lonestar Data Holdings tested storage and processing operations from the Moon's surface in February 2024 and deployed physical hardware there in February 2025, achieving the first lunar data centre deployment.
These developments demonstrate that space-based computing has moved from science fiction to engineering reality. The infrastructure being deployed today creates practical pathways for AI processing beyond Earth's environmental constraints.
Consider the implications for consumer electronics. A device needn't carry massive processing power if it can access orbital AI through simple communication links. The Star Trek communicator becomes reality: a lightweight device providing access to virtually unlimited intelligence through space-based processing. Natural language interfaces work particularly well for this architecture because they eliminate the need for complex local software. Users speak to their devices; satellites process requests and return responses in natural language.
This shift to space-based processing could democratise access to advanced AI whilst solving environmental challenges. Rural areas without terrestrial internet infrastructure could access the same capabilities as urban centres. Developing countries could leapfrog traditional computing infrastructure, much as they did with mobile phones. The environmental impact transfers from Earth to space, where unlimited solar energy and natural cooling make it sustainable.
Infrastructure Ownership in Space
Space-based AI infrastructure introduces new questions about ownership and control that could fundamentally alter the current private-public dynamic. Countries without jurisdiction over AI infrastructure have fewer legislative choices, leaving them subject to a world shaped by others. Space deployment could either exacerbate this problem through private space ventures or solve it through international cooperation frameworks.
The development of space-based AI infrastructure requires massive capital investments that may favour current technology leaders whilst potentially creating new opportunities for nations to leapfrog terrestrial limitations. International space law and governance frameworks will need to evolve rapidly to address these developments and ensure that space-based AI serves broader human interests rather than concentrating power in the hands of a few spacefaring nations or corporations.
Indicators of Change
Several indicators suggest the current private dominance of AI infrastructure may be evolving towards greater public sector involvement and international coordination.
Investment Scale and Direction
The emergence of massive public-private partnerships suggests recognition that AI infrastructure requires coordination beyond pure market mechanisms. These initiatives demonstrate how infrastructure investment can bridge public strategic needs with private sector efficiency.
National Security Concerns
Export controls and technology restrictions are reshaping global AI infrastructure ownership patterns. These restrictions force nations to develop domestic alternatives, potentially reducing private sector dominance in favour of state-controlled infrastructure.
Regulatory Pressure
Growing calls for AI governance and oversight are pushing governments towards greater infrastructure control. This has implications both for which countries shape AI development and for the norms that define good, safe, and beneficial AI.
Technological Sovereignty
The push for digital sovereignty is driving nations to develop independent AI capabilities, even at significant cost. This trend suggests a gradual shift from private-dominated to mixed public-private control over critical AI infrastructure.
The Path Forward
The future of AI infrastructure ownership will likely be characterised by hybrid models that combine private sector innovation with public sector oversight and strategic direction. The pure private sector model that has dominated AI development to date appears insufficient for addressing national security, sovereignty, and societal benefit concerns.
However, the massive capital requirements for competitive AI infrastructure mean that purely public sector approaches face significant challenges outside of the largest economies. The most successful models will likely combine public strategic direction with private sector efficiency and innovation, whilst ensuring that critical capabilities remain under appropriate national or international control.
The space dimension adds another layer of complexity, potentially creating new opportunities for international cooperation whilst raising fresh questions about governance and control in domains beyond traditional national jurisdiction. How these dynamics evolve will significantly influence not just who benefits from AI development, but what values and priorities shape the technology's deployment and impact on society.