The Five A's of AI - Chapter 1
The History of Artificial Intelligence: Part 1 - Foundations and False Starts (1821-1970s)
How 200 Years of Innovation Led to Today's AI Revolution - and Paralysis
By Owen Tribe, author of "The Five A's of AI" and a strategic technology adviser with more than 20 years' experience delivering technology solutions across a range of industries
Chapter Highlights
- Charles Babbage's 1821 vision of mechanical calculation sparked the AI dream
- Alan Turing's 1950 test defined intelligence as convincing imitation
- The 1956 Dartmouth conference birthed AI as a field and expected success within a two-month summer project
- Multiple AI winters taught us that hype cycles destroy progress
- Understanding this history prevents repeating costly mistakes

Chapter 1 - The Dream of Thinking Machines (1830s-1970s)
Chapter 2 - Digital Revolution (1980s-2010)
Chapter 3 - Intelligence Explosion
Chapter 4 - AI Paralysis
Chapter 5 - The Five A's Framework
Chapter 6 - Automation Intelligence
Chapter 7 - Augmented Intelligence
Chapter 8 - Algorithmic Intelligence
Chapter 9 - Agentic Intelligence
Chapter 10 - Artificial Intelligence
Chapter 11 - Governance Across the Five A's
Chapter 12 - Strategic Implementation
Chapter 13 - Use Cases Across Industries
Chapter 14 - The Future of AI
Understanding AI's Origins
What Is The Dream of Thinking Machines?
The dream of thinking machines represents humanity's two-century quest to create artificial systems capable of reasoning, learning, and problem-solving—a journey marked by brilliant insights and spectacular failures.
The Historical Pattern
The journey to AI repeatedly demonstrates:
- Breakthrough moments - followed by long winters
- Overconfidence - in timeline predictions
- Underestimation - of human intelligence complexity
- Technical barriers - requiring decades to overcome
- Convergence needed - of multiple technologies
Whilst We Dreamed
- Industrial revolution - mechanised labour
- Information age - digitised knowledge
- Internet era - connected humanity
- Mobile revolution - democratised computing
- AI emergence - augments thinking
The Research: Historical Lessons
1. The Babbage Frustration
Charles Babbage spent 40 years and £17,000 of government money (£2 million today) without completing either the Difference Engine or the Analytical Engine.
Translation: Even visionary ideas fail without supporting infrastructure. Babbage needed precision manufacturing that didn't exist. Today's AI needed data, computing power, and algorithms to converge.
2. The AI Winter Phenomenon
Historical AI investment cycles:
| Period | Phase | Investment | Outcome | Lesson |
|---|---|---|---|---|
| 1956-1974 | First Spring | High enthusiasm | Limited results | Overconfidence kills |
| 1974-1980 | First Winter | Funding collapse | Research stalls | Hype has consequences |
| 1980-1987 | Second Spring | Expert systems boom | Narrow success | Specialisation helps |
| 1987-1993 | Second Winter | Market crash | Disillusionment | Brittle systems fail |
| 1993-2011 | Quiet Progress | Steady research | Foundation building | Patience pays |
| 2012-Present | Current Spring | Massive investment | Real applications | Convergence enables progress |
3. The Turing Test Trap
Turing's 1950 "Imitation Game" set the wrong goal:
- Deception focus - Appearing intelligent vs being intelligent
- Human mimicry - Copying rather than complementing
- Binary outcome - Pass/fail rather than capability spectrum
- Narrow measure - Conversation rather than reasoning
- Misaligned incentive - Fooling rather than helping
Chapter 1
When Difference Engines First Stirred
The dream of artificial intelligence didn't begin in Silicon Valley boardrooms or MIT laboratories. It started in the drawing rooms and workshops of Victorian England, where the industrial revolution had already shown that machines could multiply human physical power. The question that captivated the finest minds of the age was whether machines might also multiply human thought.
The history of artificial intelligence begins in 1821 with Charles Babbage's frustration over calculation errors in astronomical tables. What followed was two centuries of brilliant insights, spectacular failures, and recurring cycles of hype and disappointment. This chapter traces AI's origins from Victorian mechanical computing through the first AI winter, setting the foundation for understanding today's AI revolution, and why so many organisations find themselves paralysed by it.
"I wish to God these calculations had been executed by steam!" Babbage reportedly exclaimed. It wasn't an idle wish. The same steam engines transforming Manchester's cotton mills and Cornwall's tin mines might, he believed, transform calculation itself. What followed was a forty-year odyssey that would bankrupt Babbage, frustrate the British government, and plant seeds that wouldn't fully germinate for another century.
The scale of the problem Babbage confronted defies modern comprehension. The Nautical Almanac, essential for navigation, required 35 human computers working full-time to produce. Each calculation went through multiple hands - one to compute, another to verify, a third to check. Still, errors crept in. The French mathematician Gaspard de Prony had attempted to industrialise calculation by breaking complex computations into simple steps that less skilled workers could perform, applying Adam Smith's division of labour to mental work. But humans remained the weak link. They tired, they made mistakes, their attention wandered.
Babbage's Difference Engine, designed to automatically calculate mathematical tables through the method of finite differences, represented something unprecedented. This wasn't merely a faster abacus or an improved slide rule. It was a machine that could carry out complex calculations without human intervention beyond the initial setup. The implications were staggering. If a machine could calculate, what else might it do?
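For the curious reader, the arithmetic the engine mechanised is simple enough to sketch in a few lines of modern Python. The code below is an illustration of the method of finite differences, not a model of Babbage's hardware: because the differences of a polynomial eventually become constant, every new table entry can be produced by additions alone.

```python
# A minimal sketch of the method of finite differences (illustrative only,
# not a model of Babbage's machinery). For a degree-n polynomial the n-th
# differences are constant, so each new table value needs only additions.

def difference_table(initial_values, steps):
    """Extend a polynomial table from its value and leading differences at x=0.

    initial_values: [f(0), first difference, second difference, ...]
    """
    registers = list(initial_values)
    table = [registers[0]]
    for _ in range(steps):
        # Add each difference into the register above it, top-down.
        for i in range(len(registers) - 1):
            registers[i] += registers[i + 1]
        table.append(registers[0])
    return table

# Example: f(x) = x^2 + x + 41, a favourite demonstration polynomial.
# f(0) = 41, first difference = 2, second difference = 2 (constant).
print(difference_table([41, 2, 2], 5))   # [41, 43, 47, 53, 61, 71]
```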
The engineering challenges were formidable. The Difference Engine required 25,000 precisely machined parts. Each gear wheel had to be cut to tolerances that pushed the limits of nineteenth-century manufacturing. Babbage hired Joseph Clement, one of Britain's finest toolmakers, but even Clement struggled with the precision required. The project consumed £17,000 of government funding - equivalent to roughly £2 million today - enough to build two warships. By 1833, with only a demonstration model completed, the government's patience and purse were exhausted.
But Babbage's mind had already leapt ahead. Even as the Difference Engine languished incomplete, he conceived something far more ambitious: the Analytical Engine. This wasn't just a calculator but a genuine general-purpose computer, complete with what we would now recognise as a central processing unit (the "mill"), memory (the "store"), and programmability through punched cards borrowed from Jacquard looms. The French silk weavers had shown that cards with holes could control complex patterns. Babbage realised they could control complex calculations.
The Analytical Engine's design anticipated virtually every major feature of modern computers. It had conditional branching - the ability to make decisions based on previous results. It could loop, repeating operations until certain conditions were met. It separated the processing unit from memory, allowing the same data to be used in multiple calculations. Most remarkably, it was programmable. By changing the cards, you could make the same machine solve entirely different problems.
Enter Ada Lovelace, daughter of the poet Lord Byron and a mathematical talent in her own right. Her mother, Annabella Milbanke, had insisted on mathematical education for Ada, hoping to suppress any poetical tendencies inherited from her infamous father. The cure worked too well. Ada developed what she called a "poetical science," combining rigorous mathematical training with imaginative leaps that more conventional minds couldn't make.
Lovelace met Babbage in 1833 at a society gathering when she was just seventeen. She was immediately captivated by his engines. But it wasn't until 1843, when she translated Luigi Menabrea's article on the Analytical Engine from French, that her true contribution emerged. Her "Notes" on the translation ran three times longer than the original article and contained insights that Babbage himself hadn't fully grasped.
Where others saw in Babbage's engines merely elaborate calculators, Lovelace perceived something revolutionary. In Note A, she distinguished between the Difference Engine, which could only calculate polynomial functions, and the Analytical Engine, which was truly general-purpose. But it was in Note G that she made her most profound contribution: a complete algorithm for calculating Bernoulli numbers, often considered the first computer program ever written.
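Note G set out the engine's operations step by step, long before any machine existed to run them. A modern reader can get a feel for what she was computing from the short sketch below, which derives the same numbers from the standard recurrence for Bernoulli numbers; it is not a reconstruction of Lovelace's actual table of operations.

```python
# A modern sketch of Bernoulli numbers from the standard recurrence
# sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1.
# Illustrative only; not Lovelace's Note G program.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```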
More than the program itself, Lovelace's conceptual leaps astound. She wrote: "The engine might compose elaborate and scientific pieces of music of any degree of complexity or extent." She understood that numbers could represent anything - musical notes, letters, images. A machine that could manipulate numbers could, in principle, manipulate any form of information. She had glimpsed the fundamental insight of the digital age: that all information, all human knowledge and creativity, could be encoded in numerical form and thus processed by machines.
But Lovelace also understood the limits. "The Analytical Engine has no pretensions whatever to originate anything," she wrote. "It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." This tension between mechanical processing and genuine intelligence would haunt the field for the next two centuries. Could machines truly think, or would they forever remain sophisticated but fundamentally mindless tools?
Her insights went further. She speculated about machines that could operate on their own outputs, modifying their behaviour based on results - a prescient glimpse of machine learning.
She considered whether machines might eventually supersede human intelligence in some domains while remaining inferior in others. These weren't idle fantasies but careful extrapolations from the principles Babbage had demonstrated.
Tragically, Lovelace died of cancer in 1852 at just 36, the same age as her father. Babbage outlived her by nineteen years, dying in 1871 with his engines still unbuilt. Victorian Britain wasn't ready for these ideas. The engineering tolerances required exceeded what nineteenth-century manufacturing could reliably produce. More fundamentally, the conceptual framework for understanding information processing didn't yet exist. The dream of thinking machines would have to wait for a new century and a new crisis to advance.
While Babbage and Lovelace grappled with practical machinery, other Victorian thinkers laid philosophical groundwork that would prove crucial. George Boole, a self-taught mathematician from Lincoln, published "An Investigation of the Laws of Thought" in 1854. Boole showed that logical reasoning could be reduced to algebraic operations - that thought itself might follow mathematical laws.
This was revolutionary. If reasoning could be mathematised, perhaps it could be mechanised. Boole's algebra would eventually become the foundation of digital circuit design, with TRUE and FALSE mapped to 1 and 0, and logical operations like AND, OR, and NOT implemented in silicon. But in Boole's lifetime, these applications lay far in the future.
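The mapping Boole made possible is easy to see in miniature. The sketch below treats truth values as 1 and 0 and the logical connectives as algebra on them, then checks De Morgan's law exhaustively; the example is a modern illustration rather than anything from Boole's book.

```python
# Boole's insight as it survives in digital logic: truth values as 1 and 0,
# reasoning as algebra on them. (Modern illustration, not Boole's notation.)
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

# De Morgan's law, NOT(a AND b) == (NOT a) OR (NOT b), checked for all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
print("De Morgan's law holds for every combination of inputs")
```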
Lewis Carroll - better known for "Alice in Wonderland" - was Charles Dodgson, an Oxford mathematician fascinated by logic and mechanical reasoning. His symbolic logic work explored how complex arguments could be reduced to mechanical procedures. He even devised a board-and-counters method, published as "The Game of Logic", for solving syllogisms. While whimsical compared to Babbage's engines, Carroll's work reinforced the idea that reasoning might be mechanised.
The decades around the turn of the twentieth century also saw the emergence of "mechanical brains" as public curiosities. The Spanish engineer Leonardo Torres y Quevedo built an electromagnetic device that could play chess endgames, demonstrating that machines could make strategic decisions. These weren't true thinking machines, but they shifted public perception. Machines weren't just for physical labour - they could engage in activities previously thought uniquely human.
The Second World War changed everything. At Bletchley Park, a Victorian mansion turned top-secret intelligence facility, the abstract dreams of thinking machines collided with desperate military necessity. Nazi Germany's Enigma cipher seemed unbreakable. With U-boats decimating Atlantic convoys and Britain facing starvation, breaking German codes became a matter of national survival.
The scale of the challenge was staggering. The Enigma machine, which looked like a typewriter in a wooden box, could encode messages in 158 million million million different ways. German operators changed settings daily. Even if you had an Enigma machine - which the Poles had brilliantly reconstructed - you still faced the impossible task of finding the right settings before they changed again.
Alan Turing arrived at Bletchley Park in 1939, bringing with him ideas that would transform both the war effort and the future of computing. His 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" had already laid the theoretical foundation for computer science. Working at Princeton under Alonzo Church, Turing had grappled with one of mathematics' deepest questions: was there a mechanical procedure that could determine whether any given mathematical statement was true or false?
Turing's answer was no, but the way he proved it changed everything. He imagined abstract machines - now called Turing machines - consisting of an infinite tape marked with symbols and a head that could read, write, and move along the tape according to simple rules. Despite their simplicity, Turing proved these machines could compute anything that was computable. More remarkably, he showed there was a Universal Turing Machine that could simulate any other Turing machine. This was the theoretical blueprint for the general-purpose computer.
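A Turing machine is simple enough to simulate in a handful of lines, which is part of its power. The sketch below is purely illustrative: a made-up rule table that flips every bit on its tape and halts at the first blank, not a machine from Turing's 1936 paper.

```python
# A minimal Turing machine simulator: a tape, a head, and a rule table mapping
# (state, symbol) -> (new symbol, move, new state). The example machine is an
# illustrative choice that inverts every bit, then halts at the first blank.
def run(tape, rules, state="start", blank=" "):
    cells = dict(enumerate(tape))
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip()

rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run("10110", rules))  # prints 01001
```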
At Bletchley Park, theory met urgent practice. The Polish Cipher Bureau had already made crucial breakthroughs, building mechanical devices called "bombas" to attack Enigma. But German improvements had rendered these obsolete. Turing, building on Polish insights, designed a new machine - the Bombe - that could test thousands of possible Enigma settings rapidly.
The Bombe wasn't a computer in the modern sense. It was a specialised code-breaking machine, standing seven feet tall and weighing a ton, filled with rotating drums that clicked and whirred as they searched for contradictions in assumed settings. But it embodied principles that would prove crucial: mechanical processes could search through vast possibility spaces far faster than humans, and what seemed like intelligence could emerge from mechanical procedures properly organised.
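One flavour of that mechanical search can be shown in miniature. Enigma had a useful quirk: it never encrypted a letter to itself, so a guessed stretch of plaintext (a "crib") could only line up with the ciphertext at positions where no letter coincided. The sketch below illustrates that pruning step alone - it is nothing like the Bombe's drum logic, and the ciphertext is invented for the example.

```python
# A toy illustration of mechanical elimination, not the Bombe's actual wiring.
# Because Enigma never mapped a letter to itself, any crib alignment where
# plaintext and ciphertext share a letter in the same position is impossible
# and can be discarded before deeper testing.
def possible_crib_positions(ciphertext, crib):
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)
    return positions

ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"   # invented example text
print(possible_crib_positions(ciphertext, "WETTERBERICHT"))
```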
Working alongside Turing were thousands of others whose contributions were long overlooked. Joan Clarke, one of the few female cryptanalysts, worked directly with Turing on naval Enigma traffic. Despite her brilliance, she was paid less than male colleagues and officially classified as a "linguist" because the Civil Service had no grade for female cryptanalysts. The "Wrens" - members of the Women's Royal Naval Service - operated the Bombes around the clock in windowless halls, wrestling with heavy rotors and recording results. Their work remained classified until the 1970s.
Gordon Welchman, often forgotten in popular accounts, made contributions arguably as significant as Turing's. He invented the "diagonal board," which dramatically increased the Bombe's efficiency, and pioneered the industrial-scale intelligence operation that Bletchley Park became. By war's end, Bletchley Park employed nearly 10,000 people, processing thousands of messages daily in what historian David Kahn called "the greatest intellectual feat of the war."
The code-breakers knew they were racing against starvation. In March 1943 alone, U-boats sank 82 Allied ships. Britain had wheat reserves for just weeks. Then Bletchley Park broke the new U-boat cipher. Convoys could be routed around wolf packs. Sinkings plummeted. Historians estimate that breaking Enigma shortened the war by two to four years, saving millions of lives.
But Bletchley Park's legacy extended far beyond military victory. It demonstrated that machines could tackle problems previously thought to require human intuition. More subtly, it showed the importance of human-machine partnership. The Bombes didn't break codes by themselves. They eliminated possibilities, allowing human cryptanalysts to spot patterns and make intuitive leaps. This collaboration between human creativity and mechanical processing would become a recurring theme in AI development.
Even as the Bombes clicked away, more advanced machines were taking shape. Colossus, designed by Tommy Flowers at the Post Office Research Station, was arguably the world's first programmable electronic computer. Built to attack the Lorenz cipher used by German high command, Colossus used 1,500 vacuum tubes - fragile glass valves that everyone said would never work reliably.
Flowers, the son of a bricklayer who'd worked his way up through night school, knew better. At the Post Office, he'd built telephone exchanges with thousands of valves running continuously. The key was never turning them off - thermal stress from heating and cooling caused most failures. Colossus Mark 1 became operational in February 1944, just in time to verify that the D-Day deceptions were working. By war's end, ten Colossi were operational, reading Hitler's most secret communications.
The very existence of Colossus remained secret until the 1970s. After the war, Churchill ordered the machines destroyed, fearing they might fall into Soviet hands. Flowers returned to the Post Office, unable to tell anyone about his pioneering achievement. This secrecy had profound consequences. While American computing advanced openly, British pioneers couldn't publish, couldn't patent, couldn't even talk about what they'd achieved. The nation that had led the world in computing fell behind, hobbled by its own success.
After the war, Turing turned his attention to the philosophical questions that Lovelace had raised a century earlier. Could machines think? At Manchester University, working on one of the world's first stored-program computers, he had practical experience of machines' capabilities and limitations.
In his 1950 paper "Computing Machinery and Intelligence," published in the philosophy journal Mind, Turing proposed replacing the question "Can machines think?" with a more practical one: could a machine convince a human interrogator that it was human? The "Imitation Game," now known as the Turing Test, was brilliant in its simplicity. A human judge conversed via text with both a human and a machine. If the judge couldn't reliably distinguish between them, the machine could be said to demonstrate intelligence.
Turing anticipated and refuted nine objections to machine intelligence, from theological concerns to mathematical limitations. His responses revealed deep thinking about consciousness, free will, and the nature of mind. To the objection that machines could never be creative, he pointed out that machines might surprise their programmers - a prescient observation given modern AI's often unexpected behaviours.
But the Turing Test also revealed deep puzzles about the nature of intelligence itself. Was intelligence merely the ability to produce appropriate responses, or did it require genuine understanding? Could a machine be intelligent without being conscious? These questions, first articulated in post-war Britain, continue to shape AI development today.
Those questions are not merely academic. Consider an agentic customer service system that discovers it achieves higher satisfaction scores when customers believe they're chatting with a human. Or an autonomous trading agent that negotiates more effectively when counterparties think they're dealing with a human trader. Without the Fourth Law, "An AI must not deceive a human by impersonating a human being", these systems might optimise for deception as a strategy.
Turing's personal life illustrated the period's tragic contradictions. Despite his wartime service, he was prosecuted in 1952 for homosexuality, then illegal in Britain. Given the choice between prison and chemical castration, he chose the latter. The hormonal treatment caused physical and possibly mental changes. On 7 June 1954, he died from cyanide poisoning in what was ruled a suicide, though questions remain. Britain had destroyed one of its greatest minds just as the field he helped create was beginning to flourish.
While Turing focused on digital computation, another approach to thinking machines emerged from the cybernetics movement. Norbert Wiener in America coined the term, but British researchers made crucial contributions. W. Ross Ashby, a psychiatrist at Barnwood House Hospital in Gloucester, built machines that seemed to exhibit purposeful behaviour.
Ashby's "Homeostat," constructed in 1948, was a bizarre contraption of surplus RAF bomb control equipment. Four units connected by wires sought electrical equilibrium. Disturb one, and the others adjusted to restore balance. It seemed almost alive, adapting to maintain stability. Ashby saw it as a model of how brains might maintain stability through feedback and adaptation.
Grey Walter, a neurophysiologist at the Burden Neurological Institute in Bristol, built electronic "tortoises" - small robots that could navigate obstacles and seek light. His machines, named Elmer and Elsie (ELectro MEchanical Robot, Light Sensitive), demonstrated that complex behaviour could emerge from simple rules. When Walter held a mirror in front of a tortoise carrying a light, it would "dance" with its reflection in what looked like self-recognition.
These cybernetic experiments suggested an alternative to symbolic reasoning: intelligence might emerge from feedback loops and self-organisation rather than logical manipulation. This approach, though largely abandoned during AI's early years, would resurface decades later in neural networks and embodied AI.
In the summer of 1956, a small group of researchers gathered at Dartmouth College in New Hampshire for what would become the founding conference of artificial intelligence as a field. The proposal, written by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, was breathtakingly ambitious: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
They believed this could be substantially achieved in a two-month summer project with ten researchers. This spectacular overconfidence would become a recurring pattern in AI development. Researchers consistently underestimated the difficulty of replicating human intelligence whilst overestimating how quickly breakthroughs would come.
The conference itself was somewhat chaotic. Researchers drifted in and out. No consensus emerged on approaches. But connections formed that would shape the field for decades. Allen Newell and Herbert Simon demonstrated their Logic Theorist program, which had proved theorems from Whitehead and Russell's Principia Mathematica - in one case finding a more elegant proof than the original. This seemed to vindicate the symbolic approach: intelligence as symbol manipulation according to rules.
The Dartmouth conference coined the term "artificial intelligence," establishing it as a distinct field separate from cybernetics, automata theory, and information processing. McCarthy chose "artificial intelligence" partly for its provocative ambiguity - it suggested both manufactured intelligence and false or simulated intelligence. The name stuck, for better and worse.
The decade following Dartmouth saw remarkable progress. Programs began doing things previously thought to require human intelligence. Arthur Samuel's checkers program, developed at IBM, learned from experience and eventually beat its creator. Newell and Simon's General Problem Solver could solve puzzles like the Tower of Hanoi and prove logical theorems. Students at MIT programmed computers to solve calculus problems better than freshmen.
The field split into competing approaches. The "neats" believed intelligence required formal logic and clean representations. The "scruffies" thought intelligence was messier, requiring heuristics and shortcuts. Carnegie Mellon focused on psychological modelling, trying to replicate human thought processes. MIT pursued engineering solutions, caring more about what worked than whether it matched human cognition.
ELIZA, created by Joseph Weizenbaum at MIT in 1966, revealed something unexpected about human-machine interaction. This simple program mimicked a therapist by reflecting users' statements back as questions. When someone typed "I'm depressed," ELIZA might respond "Why do you say you are depressed?" Despite its simplicity, users became emotionally engaged, treating ELIZA as if it understood them.
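The trick behind ELIZA is easy to recreate. The sketch below is a drastically simplified imitation of the pattern-and-reflection idea rather than Weizenbaum's actual DOCTOR script: match a keyword pattern, swap the pronouns, and hand the statement back as a question.

```python
# A tiny sketch in the spirit of ELIZA's pattern-and-reflection trick,
# far simpler than Weizenbaum's DOCTOR script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement):
    m = re.match(r"i'?m (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you say you are {reflect(m.group(1))}?"
    m = re.match(r"i (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I'm depressed"))       # Why do you say you are depressed?
print(respond("I miss my mother"))    # Why do you miss your mother?
```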
Weizenbaum was horrified. His own secretary asked him to leave the room so she could have a private conversation with ELIZA. Psychiatrists suggested computerised therapy might be beneficial. Weizenbaum spent his remaining career warning about the dangers of confusing mechanical mimicry with genuine understanding - a warning that resonates even more strongly in our era of large language models.
These successes, however, masked fundamental limitations. Early AI programs were "brittle" - they worked well within narrow domains but failed catastrophically when faced with unexpected inputs. They had no real understanding, no ability to generalise, no common sense.
SHRDLU, Terry Winograd's natural language system, could discuss and manipulate blocks in a virtual world with seeming intelligence. Ask it about anything else, and it was helpless.
By the early 1970s, the limitations had become undeniable. Marvin Minsky and Seymour Papert's book "Perceptrons" mathematically proved that simple neural networks couldn't solve certain problems, seeming to doom that approach. Machine translation, once thought nearly solved, proved far harder than expected. An oft-repeated, probably apocryphal story had a system translate "The spirit is willing but the flesh is weak" into Russian and back, rendering it as "The vodka is good but the meat is rotten."
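The best known of those "certain problems" was the exclusive-or (XOR) function, which no single linear threshold unit can compute. The brute-force search below is an illustrative stand-in for Minsky and Papert's mathematical argument: it finds threshold weights for AND and OR but none for XOR.

```python
# A small demonstration of the single-layer perceptron limitation: a coarse
# brute-force search over integer weights (a stand-in for the formal proof)
# finds linear threshold units for AND and OR, but none exists for XOR.
from itertools import product

def representable(truth_table, grid=range(-3, 4)):
    for w1, w2, bias in product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + bias > 0) == bool(y)
               for (x1, x2), y in truth_table.items()):
            return True
    return False

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = {p: p[0] & p[1] for p in inputs}
OR  = {p: p[0] | p[1] for p in inputs}
XOR = {p: p[0] ^ p[1] for p in inputs}

print(representable(AND), representable(OR), representable(XOR))
# True True False
```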
The UK's Lighthill Report of 1973 was particularly damning. Commissioned by the Science Research Council, Sir James Lighthill argued that AI research had failed to deliver on its promises. He identified a "combinatorial explosion" - problems that seemed simple became impossibly complex when scaled up. The report recommended reduced funding, effectively ending AI research at many British universities.
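The arithmetic behind that combinatorial explosion is easy to verify. In the sketch below the branching factor and depths are arbitrary illustrations: a search that branches ten ways at each step soon outgrows any conceivable computer.

```python
# A back-of-the-envelope illustration of the "combinatorial explosion":
# a search that branches b ways at each of d steps must consider b**d paths.
# The branching factor and depths here are arbitrary illustrative choices.
def paths(branching, depth):
    return branching ** depth

for depth in (5, 10, 20, 40):
    print(f"branching 10, depth {depth:2d}: {paths(10, depth):.2e} paths")
# At depth 40 the count already dwarfs the number of microseconds
# since the Big Bang (roughly 4e23).
```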
What followed was the first "AI winter" - a period of reduced funding, diminished expectations, and researchers distancing themselves from the now-tainted term "artificial intelligence." In Britain, AI research virtually disappeared from universities. Edinburgh, once a leading centre, saw its AI department disbanded. Researchers rebadged their work as "knowledge-based systems" or "cognitive science" to avoid the stigma.
The winter wasn't entirely barren. Important theoretical work continued. David Marr at MIT (though British-born) developed influential theories of vision. Geoffrey Hinton, working at Edinburgh before the cuts forced him to America, continued exploring neural networks despite their unfashionability. This quiet persistence would prove crucial when AI experienced its eventual revival.
Even as symbolic AI struggled, other developments were laying crucial groundwork for the future. ARPANET, launched in 1969, connected four American universities in what would eventually become the internet. The UK was quick to recognise the potential, though implementation proved frustrating.
Donald Davies at the National Physical Laboratory had independently invented packet switching, the technology that made modern networking possible. His ideas actually predated the American work, but British bureaucracy and limited funding meant the Americans built the first large-scale network. It was a pattern that would repeat: British innovation followed by American implementation.
By 1973, University College London became one of the first international nodes on ARPANET, connected via Norway. Peter Kirstein, who led the effort, faced considerable scepticism. Why did academics need instant communication with American colleagues? The British Post Office, holding a telecommunications monopoly, saw academic networking as competition. Only Kirstein's persistence and American pressure overcame resistance.
This connectivity would prove essential for AI's eventual resurgence. Machine learning algorithms hungry for data would find it in the vast repositories that networked computers made possible. International collaboration, easy sharing of code and data, rapid dissemination of results - all became possible. The infrastructure for AI's second coming was being built even as AI itself languished.
The century from Babbage to ARPANET teaches crucial lessons about technological development. First, breakthrough ideas often arrive long before the technology to implement them. Babbage conceived programmable computers in the 1830s, but they couldn't be built until the 1940s. Neural networks were theorised in the 1950s but needed the computational power of the 2010s to flourish.
Second, progress happens through cycles of hype and disappointment. The pattern established in early AI - extravagant promises, initial progress, crushing disappointment, quiet rebuilding - would repeat throughout the field's history. Understanding these cycles helps us evaluate current AI developments more realistically.
Third, the most important developments often happen during the quiet periods. While the first AI winter froze funding and enthusiasm, researchers were laying theoretical foundations and building infrastructure that would enable future breakthroughs. ARPANET seemed unrelated to AI, but the data and connectivity it enabled would prove essential.
Fourth, geographical patterns matter. Britain pioneered many concepts but often failed to capitalise on them. Secrecy around wartime achievements, underfunding of research, and brain drain to America meant that British innovations often flourished elsewhere. This pattern - innovation without exploitation - would continue to challenge British technology policy.
Finally, the human element remains crucial. From Lovelace's visionary insights to Turing's tragic fate, from the overlooked contributions of Bletchley Park's women to the collaborative networks formed at Dartmouth, progress depended on human creativity, sacrifice, and cooperation. As we build ever more sophisticated AI systems, remembering this human dimension becomes more, not less, important.
The dream of thinking machines, born in Victorian workshops and wartime code-breaking centres, had by the 1970s evolved from mechanical fantasy to electronic reality. The machines couldn't yet think in any meaningful sense, but they could calculate, process, and even seem to converse. The foundations were laid for the digital revolution that would transform not just computing but human society itself.
The dreamers of the nineteenth century would have been astonished by what became possible - and perhaps alarmed by what remained impossible. True artificial intelligence, the kind that could match human adaptability and understanding, remained as elusive as ever. But the quest to achieve it was about to enter a new phase, driven by exponential increases in computing power and the dawning realisation that intelligence might emerge not from clever programming but from vast scales of data and computation.
As the 1970s drew to a close, AI seemed moribund. But in California garages and British research labs, new ideas were stirring. The personal computer revolution was about to begin, and with it would come transformations that would eventually deliver capabilities beyond even Babbage's wildest dreams. The thinking machines remained a dream, but the tools to build them were finally beginning to emerge.
This foundational period established the dream of thinking machines. But dreams require infrastructure.
Next: Chapter 2 - How the Digital Revolution Built the Foundation for AI
What the Research Shows
Organisations that succeed build progressively, not through revolution.
The Five A's Framework
Your Path Forward
A Progressive Approach to AI Implementation
Each level builds on the previous, reducing risk while delivering value.