The quest for dominance in the world of artificial intelligence (AI) is rapidly gathering momentum. Project Stargate, formally announced by President Donald Trump at a White House press conference on 21 January 2025, has been backed by an initial $100 billion in funding, with a total budget of $500 billion promised to establish data centres across America as part of a new AI infrastructure for OpenAI. The initial equity funders in the new company are SoftBank, OpenAI, Oracle, and MGX, while ARM, Microsoft, NVIDIA, OpenAI, and Oracle are the key initial technology partners. In the UK, the government has committed to a more modest investment of £14 billion over the next decade to “ramp up AI adoption across the UK”, announced earlier on 13 January as part of Matt Clifford’s 50-point AI Opportunities Action Plan. The UK hopes to build on the success of AI firms such as Google DeepMind, ARM, and Wayve in achieving its goals.
These Anglo-American developments were followed closely by the revelation on 27 January that DeepSeek had released a low-cost AI-powered chatbot a week earlier, which had rapidly become the most downloaded free app on Apple’s App Store. The Chinese AI startup was founded in 2023 by Liang Wenfeng, CEO of the High-Flyer hedge fund. Stocks listed on the NASDAQ exchange in New York lost $1 trillion overnight on news of DeepSeek’s R1 AI model. America’s leadership as home to the world’s top ten AI companies was seriously threatened in what was likened to a Sputnik moment, after the Soviet Union’s launch of the satellite Sputnik I in 1957, which wrongfooted the Americans in the space race. Donald Trump claimed that DeepSeek had provided a “wake-up call” for the US tech giants. Before long, concerns over Chinese government interference and the risks of data breaches and security lapses were being voiced by DeepSeek’s competitors and detractors.
AI has been moving beyond the realms of science fiction ever since the 1940s. Early visionaries included Marvin Minsky, Allen Newell, Herbert Simon, and Alan Turing, who conceived the Universal Turing Machine and later proposed the “imitation game” (now known as the Turing test), in which a machine attempts to hold a text-based conversation via teleprinter convincing enough to deceive a human interrogator into thinking it is another human. The Dartmouth Summer Research Project, a six-week workshop funded by the Rockefeller Foundation and attended by ten scientists at Dartmouth College in July and August 1956, can be considered the birthplace of AI; research in the field was heavily funded by the US Advanced Research Projects Agency (ARPA, later DARPA) between 1956 and 1974. The event’s main organiser, John McCarthy, had been the first to use the term “artificial intelligence” a year earlier, in his proposal for the workshop. Early AI systems, such as the Logic Theorist and the General Problem Solver, were able to solve mathematical problems, play games, and translate languages. AI-powered computers went on to confirm their prowess in various games, defeating human champions at backgammon, checkers, chess, Go, Jeopardy!, poker, and other strategic games, but overall AI failed to live up to initial expectations.
The history of AI has been marked by periods of investment, innovation, and misplaced optimism, the so-called AI summers, followed by periods of declining interest, funding, and development, the AI winters, between 1974 and 1980 and again between 1987 and 1994. The current AI boom has been powered by machine learning techniques, which allow computer systems to learn from existing data without being explicitly programmed. A prime example is the deep learning system developed by the “godfather of AI” Geoffrey Hinton and his team at the University of Toronto, which won the annual ImageNet Large Scale Visual Recognition Challenge in 2012. Google acquired Hinton’s startup in 2013 and the British AI startup DeepMind the following year. Progress has recently been turbocharged by generative AI systems, heralded by OpenAI’s ChatGPT, launched in late 2022 and upgraded to the GPT-4 (Generative Pre-trained Transformer 4) model in 2023, which can create audio, code, images, text, simulations, video, and other AI-generated content.
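The phrase “learn from existing data without being explicitly programmed” can be made concrete with a minimal sketch (hypothetical toy data, plain Python): the rule linking inputs to outputs is never written into the program; instead, two parameters are repeatedly nudged by gradient descent until they reproduce the examples.

```python
# Minimal illustration of machine learning: fit y = w*x + b to examples
# by gradient descent, without ever hard-coding the underlying rule.

data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0   # the model starts with no knowledge
lr = 0.05         # learning rate: size of each corrective nudge

for _ in range(2000):
    # average gradient of the squared error over the examples
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 1.0 (the learned rule)
```

The same loop, scaled up to billions of parameters and trained on text rather than number pairs, is the core mechanism behind the deep learning systems described above.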
AI is based on computer algorithms that enable machines to carry out tasks that require human intelligence, such as pattern recognition, decision making, and problem solving, freed from human supervision, and at a pace and efficiency that humans cannot replicate. AI technologies confer the ability to see (computer vision), hear (speech recognition), and understand (natural language processing), thereby augmenting or even replacing human intelligence. Narrow AI focuses on specific tasks, while general AI (artificial general intelligence) can replicate all human cognitive actions and thereby address any given problem. The growth of AI has been driven by data platforms that capture and store large volumes of data; cloud computing that provides access to online computing resources on demand; and reusable algorithms for basic cognitive functions.
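As a small illustration of the “pattern recognition” task mentioned above, the sketch below (hypothetical toy data) uses a nearest-neighbour classifier, one of the simplest such algorithms: a new point is labelled by analogy with the closest previously seen example.

```python
import math

# Toy pattern recognition: label a new point by its nearest known example.
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(point):
    """Return the label of the stored example closest to `point`."""
    return min(examples, key=lambda e: math.dist(point, e[0]))[1]

print(classify((1.1, 0.9)))  # → cat
print(classify((5.1, 4.9)))  # → dog
```

Real computer vision and speech recognition systems use far richer representations and models, but the principle is the same: classify new inputs by reference to patterns in previously seen data.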
AI is a disruptive technology that can transform and improve society and is applicable across a wide range of sectors. Job displacement and job losses are the inevitable price to be paid for progress as processes become automated, streamlined, and data-driven. The World Economic Forum has indeed referred to AI as driving the Fourth Industrial Revolution, following on from the digital revolution. AI can readily replace humans in a number of tasks, and in the predicted scenario of superintelligence may even outperform them. Keeping humans out of the loop reduces costs and helps maximise profits for businesses, for which the integration of AI is a priority. In particular, monotonous and repetitive tasks are better undertaken by AI systems, which do not fatigue, fall sick, or burn out, and are less likely to make mistakes.
AI promises enhanced productivity, competitive advantage for businesses, and economic growth, as machines continue to take over tasks formerly performed exclusively by humans. AI-powered chatbots, virtual assistants (Alexa, Siri), and robots are providing new means for machines to interact with, and support, their human users. AI’s many potential applications extend to agriculture, e-commerce, business intelligence, cybersecurity, education, energy, entertainment, gaming, healthcare, financial services, fraud detection, manufacturing, online marketing, pharmaceuticals, robotics, search engines, self-driving (autonomous) vehicles, and social media, to mention but a few. Streamlined administration, improved diagnostic accuracy, personalised treatment plans, robotic surgery, and virtual health assistants are among the potential benefits in healthcare, which could help transform the ailing NHS in England.
There can be no doubt that AI has the capacity to deliver significant benefits to society. But it has its risks and, as history has shown, may be overhyped, leaving those who jump on the AI bandwagon with inflated expectations. Massive investments in generative AI systems may not necessarily deliver proportionate returns. Biases in training data may be reflected in biased algorithms, thereby compounding social injustice. AI-generated content can help spread misinformation. The dominance of AI is likely to tighten the stranglehold of Big Tech on contemporary society, since the staggering amounts of capital required to develop AI systems at scale place control firmly in the hands of a few large corporations. To best benefit society, AI has to be based upon an appropriate human-machine collaboration, in which humans contribute creativity and critical thinking while ensuring ethical practice. It is worth repeating the statement attributed to the economist Leo Cherne: “The computer is incredibly fast, accurate, and stupid. Man is incredibly slow, inaccurate, and brilliant. The marriage of the two is a force beyond calculation.”
Ashis Banerjee