History of Artificial Intelligence: A Timeline

Artificial intelligence

Artificial intelligence is the ability of computer systems to perform tasks typically requiring human intelligence. This includes learning, reasoning, problem-solving, and decision-making. As a field of computer science, AI focuses on creating methods and software that enable machines to understand their surroundings and utilize learning and intelligence to act in ways that improve their chances of reaching specific objectives.

White House AI Framework: Congress Urged to Take Light Regulatory Approach

The White House introduced an AI framework urging Congress to adopt light regulations. The Trump administration also unveiled a national AI policy framework to limit state power over AI.

1943: McCulloch and Pitts design artificial neurons

In 1943, McCulloch and Pitts designed "artificial neurons", which became a key component in the development of AI.
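
Their unit was a simple threshold gate: it fires when the sum of its binary inputs reaches a threshold. A minimal sketch (illustrative modern pseudocode, not their original notation):

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: outputs 1 iff the sum of its
    binary excitatory inputs meets the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# Basic logic gates fall out of the threshold choice alone:
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], threshold=1)
```

Because networks of such gates can compute any Boolean function, McCulloch and Pitts argued that nets of simplified neurons could, in principle, perform logical computation.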

1950: Turing proposes the Turing Test

In 1950, Alan Turing published 'Computing Machinery and Intelligence', posing the question 'Can machines think?' and proposing to focus instead on whether machines can exhibit intelligent behavior, an approach formalized as the Turing test.

1951: Release of 'The Day the Earth Stood Still'

In 1951, the movie 'The Day the Earth Stood Still' was released.

1956: Dartmouth Workshop: the founding of AI as an academic discipline

In 1956, the field of AI research was founded as an academic discipline at a workshop at Dartmouth College; its attendees went on to become the leaders of AI research.

1965: Herbert Simon predicts machines can do any work a man can do

In 1965, Herbert Simon predicted that machines would be capable of doing any work a man can do within twenty years, showcasing early optimism in AI's potential.

1967: Marvin Minsky's AI prediction

In 1967, Marvin Minsky predicted that the problem of creating 'artificial intelligence' would substantially be solved within a generation, illustrating the high expectations of the time.

1968: Release of '2001: A Space Odyssey'

In 1968, Arthur C. Clarke's and Stanley Kubrick's '2001: A Space Odyssey' was released, featuring HAL 9000, a murderous computer.

1974: AI Winter Begins

In 1974, funding for exploratory AI research was cut by the U.S. and British governments. This was partially a response to criticism and a shift towards more productive projects, leading to the 'AI winter'.

1979: Partial nuclear meltdown at Three Mile Island

In 1979, the Three Mile Island nuclear power plant suffered a partial nuclear meltdown of its Unit 2 reactor.

1984: Release of 'The Terminator'

In 1984, the movie 'The Terminator' was released.

1985: AI Market Reaches Over a Billion Dollars

By 1985, the market for AI, fueled by the success of expert systems, had reached over a billion dollars, marking a significant commercial milestone.

1986: Release of 'Aliens'

In 1986, the movie 'Aliens' was released.

1987: Collapse of the Lisp Machine Market

Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.

1990: LeCun's convolutional neural networks

In 1990, Yann LeCun demonstrated the successful application of convolutional neural networks for recognizing handwritten digits.
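
The core operation of such a network is a small kernel slid across the pixel grid. A pure-Python sketch of the 'valid' 2D convolution step (illustrative only; LeCun's LeNet additionally used learned kernels, subsampling, and fully connected layers):

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the
    image and take the elementwise-product sum at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A hand-crafted vertical-edge kernel applied to a 3x4 image
image  = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
```

Here the kernel responds only at the boundary column, producing `[[0, 2, 0], [0, 2, 0]]`: the same weights are reused at every position, which is what makes convolutional layers so parameter-efficient for images.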

May 1997: Deep Blue beats Garry Kasparov in chess

On May 11, 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.

1998: Publication of 'Darwin Among the Machines'

In 1998, George Dyson expanded on the idea that artificial intelligence is the next step in evolution in his book, Darwin Among the Machines: The Evolution of Global Intelligence.

1999: Release of 'The Matrix'

In 1999, the movie 'The Matrix' was released.

2000: AI Solutions Widely Used

By 2000, solutions developed by AI researchers were being widely used, although they were rarely described as "artificial intelligence," a phenomenon known as the AI effect.

2002: Subfield of Artificial General Intelligence Founded

Around 2002, academic researchers, concerned that mainstream AI had narrowed its focus to specific solutions, founded the subfield of artificial general intelligence (AGI).

2005: Founding of the field of machine ethics

In 2005, the field of machine ethics, also called computational morality, was founded at an AAAI symposium.

2011: Fukushima nuclear accident

In 2011, the Fukushima nuclear accident occurred; most nuclear plants in Japan have been shut down since.

2011: Watson defeats Jeopardy! champions

In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.

2012: Deep Learning Dominates Industry Benchmarks

In 2012, deep learning began to dominate industry benchmarks and was adopted throughout the field, leading to the abandonment of other methods for many specific tasks.

2012: Increased funding and interest in AI

In 2012, funding and interest in AI increased substantially after graphics processing units began being used to accelerate neural networks, and deep learning outperformed previous AI techniques.

2012: Sudden success of Deep Learning

The sudden success of deep learning in 2012 occurred because of the increase in computer power and the availability of vast amounts of training data.

2014: Nations support a ban on autonomous weapons

In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, while the United States and others disagreed, marking an early stage in the debate over lethal autonomous weapons.

2014: DARPA establishes XAI program

In 2014, DARPA established the XAI ("Explainable Artificial Intelligence") program to try to solve the problems of AI explainability.

June 2015: Google Photos mistakenly identifies black people as gorillas

In June 2015, Google Photos's new image-labeling feature mistakenly identified Jacky Alciné and a friend, who are black, as "gorillas", due to a lack of images of black people in the training data. The incident highlighted the problem of sample-size disparity in AI training datasets.

2015: Over fifty countries research battlefield robots

By 2015, over fifty countries were reported to be researching battlefield robots, indicating a growing interest and investment in military AI applications.

2015: AlphaGo beats a professional Go player

In 2015, AlphaGo, developed by DeepMind, defeated European Go champion Fan Hui, becoming the first program to beat a professional Go player on a full-sized board.

2015: The Economist warns about AI's impact on white-collar jobs

In 2015, The Economist stated that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".

2015: Increase in machine learning research

In the years 2015-2019, the amount of machine learning research (measured by total publications) increased by 50%.

March 2016: AlphaGo wins against Lee Sedol

In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.

2016: Baseline year for AI-related laws passed globally

According to Stanford's AI Index, just one AI-related law was passed across the 127 surveyed countries in 2016; by 2022 the annual count had jumped to 37.

2016: Serious research on AI risks and solutions begins

After 2016, the study of current and future risks and possible solutions related to AI became a serious area of research, shifting from earlier arguments that the risks were too distant or that humans would always be valuable to superintelligent machines.

2016: Mitigation steps taken by major technology companies after U.S. election

After the U.S. election in 2016, major technology companies took steps to mitigate the problem of AI recommending misinformation, conspiracy theories and extreme partisan content.

2016: ProPublica discovers racial bias in COMPAS

In 2016, Julia Angwin at ProPublica discovered that COMPAS, a commercial program used by U.S. courts to assess recidivism risk, exhibited racial bias, overestimating the risk for black defendants and underestimating it for white defendants, even when the error rate was calibrated equally.

2016: Focus on fairness and misuse of technology in AI

In 2016, concerns about fairness and misuse of technology became central in machine learning, with increased publications, funding, and researcher focus on these issues. The alignment problem became a serious academic study.

2016: EU's General Data Protection Regulation considers the right to explanation for algorithm decisions

In 2016, early drafts of the European Union's General Data Protection Regulation included an explicit statement that people harmed by an algorithm's decision have a right to an explanation, highlighting the challenge of explainability in AI.

2016: Privacy experts shift focus to fairness

Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness, shifting from 'what they know' to 'what they're doing with it'.

2017: AlphaGo defeats Ke Jie

In 2017, AlphaGo defeated Ke Jie, who was the best Go player in the world.

2017: Research highlights mathematical impossibility of fairness in COMPAS

In 2017, researchers demonstrated that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness given the differing base rates of re-offense between white and black individuals, adding a layer of complexity to the discussion of fairness in AI.
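
The tension can be seen with simple confusion-matrix arithmetic: if two groups share the same true- and false-positive rates but have different base rates of re-offense, their positive predictive values cannot also be equal. The numbers below are illustrative, not COMPAS's actual rates:

```python
def ppv(base_rate, tpr, fpr):
    """Positive predictive value: of those flagged high-risk,
    the fraction who actually re-offend."""
    tp = base_rate * tpr          # true positives per capita
    fp = (1 - base_rate) * fpr    # false positives per capita
    return tp / (tp + fp)

# Identical error rates for both groups, differing base rates:
ppv_a = ppv(0.5, tpr=0.6, fpr=0.2)  # higher-base-rate group -> 0.75
ppv_b = ppv(0.3, tpr=0.6, fpr=0.2)  # lower-base-rate group  -> 0.5625
```

With error rates equalized, the predictive values necessarily diverge (0.75 vs. 0.5625 here), so a score cannot satisfy equal error rates and equal calibration at once when base rates differ.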

2017: EU considers 'electronic personhood' for AI

In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems, giving them rights and responsibilities, akin to the legal status of companies.

2017: Transformer architecture boosts AI growth

In 2017, the transformer architecture further accelerated the growth of AI.
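
At the heart of the transformer is scaled dot-product attention: each query scores every key, and the softmaxed scores weight the values. A minimal pure-Python sketch (real implementations batch this with matrix libraries and add multiple heads, learned projections, and positional encodings):

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors:
    out[i] = sum_j softmax_j(Q[i].K[j] / sqrt(d)) * V[j]."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        m = max(scores)                       # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because every position attends to every other in a single step, the architecture parallelizes far better than recurrent networks, which is a large part of why it accelerated AI progress.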

2017: AI Adoption in Companies

In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.

2018: Criticism of granting rights to AI systems

In 2018, critics argued that granting rights to AI systems would undermine human rights, advocating instead for legislation focusing on user needs and highlighting robots' lack of autonomy.

2019: AlphaStar achieves grandmaster level in StarCraft II

In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.

2019: Generative pre-trained transformer language models generate coherent text

In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text.

2019: Singapore imposes ban on opening data centers

Singapore imposed a ban on the opening of data centers in 2019 due to electric power concerns.

June 2020: Launch of the Global Partnership on Artificial Intelligence

In June 2020, the Global Partnership on Artificial Intelligence was launched, emphasizing the need for AI to be developed in accordance with human rights and democratic values.

2020: OpenAI releases GPT-3

In 2020, OpenAI released GPT-3, a large language model capable of generating high-quality, human-like text.

2020: AI tools available for authoritarian governments to control citizens

Since 2020, AI tools have become available for authoritarian governments to control their citizens, including face and voice recognition, machine learning for classifying potential enemies of the state, recommendation systems for propaganda, and deepfakes for misinformation.

November 2021: Call for a government commission to regulate AI

In November 2021, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement calling for a government commission to regulate AI.

2021: Reason for Deep Learning performance is still unknown

As of 2021, the reason that deep learning performs so well in so many applications is not known.

2021: AlphaFold 2 approximates protein structure

In 2021, AlphaFold 2 demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.

2021: AI agent wins Gran Turismo competition

In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.

November 2022: Launch of ChatGPT

On November 30, 2022, ChatGPT was launched and rapidly became the fastest-growing consumer software application in history, gaining over 100 million users in two months.

2022: US data centers consume 3% of US power

A 2024 Goldman Sachs Research Paper found that in 2022, US data centers consumed 3% of US power.

2022: CLIP-based robotic system reproduces harmful stereotypes

At the 2022 ACM Conference on Fairness, Accountability, and Transparency, a paper reported that a CLIP‑based (Contrastive Language-Image Pre-training) robotic system reproduced harmful gender‑ and race‑linked stereotypes in a simulated manipulation task.

2022: Singapore lifts ban on opening data centers

In 2022, Singapore lifted the ban on opening of data centers.

2022: Significant rise in AI-related laws passed

In 2022, the annual number of AI-related laws passed across the 127 surveyed countries reached 37, up from one in 2016.

2022: Ipsos survey shows differing attitudes towards AI by country

In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".

2022: Tax breaks for nuclear power in US Inflation Reduction Act

The cost of re-opening and upgrading Three Mile Island depends on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act.

February 2023: Microsoft introduces Copilot Search (Bing Chat)

In February 2023, Microsoft introduced Bing Chat, later rebranded as Copilot, which provides AI-generated summaries in search.

April 2023: Elimination of jobs for Chinese video game illustrators by generative AI

In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.

May 2023: Geoffrey Hinton resigns from Google to speak out about AI risks

In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".

September 2023: Taiwan suspends data center approvals

In 2024, after issuing its last approval in September 2023, Taiwan suspended approvals for data centers north of Taoyuan with a capacity of more than 5 MW, due to power supply shortages.

November 2023: Global AI Safety Summit held in the UK

In November 2023, the first global AI Safety Summit took place in Bletchley Park, UK. The summit aimed to discuss the risks of AI and explore regulatory frameworks. 28 countries, including the United States, China, and the European Union, issued a declaration calling for international cooperation.

2023: GPT models achieve human-level scores on various tests

By 2023, generative pre-trained transformer models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.

2023: Recommendations for the governance of superintelligence published

In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. Also in 2023, the United Nations launched an advisory body to provide recommendations on AI governance.

2023: Reuters/Ipsos and Fox News polls on AI risks and regulation

In 2023, a Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI.

2023: Google Photos still unable to identify gorillas

In 2023, eight years after the initial incident, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Microsoft, and Amazon. This indicates the persistence of biases in AI systems.

2023: AI-guided drug discovery finds new antibiotics

In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.

2023: Authors sue AI companies for copyright infringement

In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.

2023: AI experts endorse statement on mitigating extinction risk from AI

In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

January 2024: IEA forecasts rising power demand for AI and cryptocurrency

In January 2024, the International Energy Agency (IEA) released a report forecasting that power demand for data centers, artificial intelligence, and cryptocurrency might double by 2026.

March 2024: Amazon purchases nuclear-powered data center

In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for US$650 million.

May 2024: AI Seoul Summit safety commitments

In May 2024, at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments regarding the development of AI.

August 2024: EU Artificial Intelligence Act enters into force

On 1 August 2024, the EU Artificial Intelligence Act entered into force, establishing the first comprehensive EU-wide AI regulation.

September 2024: Microsoft agrees to reopen Three Mile Island nuclear plant

In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant, with Microsoft receiving all of the electric power produced by the plant for 20 years.

October 2024: Ubitus seeks land near nuclear plant for data center

According to an October 2024 Bloomberg article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI.

November 2024: FERC rejects Talen Energy's application to supply electricity to Amazon

On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center.

2024: Goldman Sachs Research Paper forecasts US power demand surge

A 2024 Goldman Sachs Research Paper found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022.

2024: AI Startups Funding

According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies, indicating a high level of investment in the field.

2024: Use of AI-generated content in Indian elections

During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied politicians and translating speeches.

2024: Google DeepMind introduces SIMA

In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.

2024: Machine learning accelerates Parkinson's drug search

In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments, speeding up the initial screening process and reducing costs.

2024: Alibaba's Qwen2-Math achieves state-of-the-art performance

In 2024, the Alibaba Group developed a version of its Qwen models called Qwen2-Math, that achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems.

2024: Council of Europe creates treaty on AI and human rights

In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law".

2024: UK AI Safety Institute releases 'Inspect' testing toolset

In 2024, the UK AI Safety Institute released a testing toolset called 'Inspect' for AI safety evaluations available under an MIT open-source licence.

January 2025: Microsoft proposes rStar-Math technique

In January 2025, Microsoft proposed rStar-Math, a technique that combines Monte Carlo tree search with step-by-step reasoning, enabling a relatively small language model such as Qwen-7B to solve 53% of AIME 2024 problems and 90% of MATH benchmark problems.

May 2025: Google introduces an AI Mode at Google I/O

On May 20, 2025, Google introduced an AI Mode at its Google I/O event.

July 2025: Ford CEO Jim Farley predicts AI will replace half of U.S. white-collar workers

In July 2025, Ford CEO Jim Farley predicted that "artificial intelligence is going to replace literally half of all white-collar workers in the U.S."

October 2025: Planned reopening of Palisades Nuclear reactor

Closed since 2022, the Palisades Nuclear reactor on Lake Michigan is planned to be reopened in October 2025.

2025: Geoffrey Hinton discusses AI's persuasion capabilities

In 2025, Geoffrey Hinton said that modern AI is particularly "good at persuasion" and getting better all the time, and asks "Suppose you wanted to invade the capital of the US. Do you have to go there and do it yourself? No. You just have to be good at persuasion."

2025: IEA estimates greenhouse gas emissions from AI energy consumption

In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tons.

2025: Gemini Deep Think achieves gold medal results

In 2025, the experimental model Gemini Deep Think, accepting natural-language prompts directly, achieved gold-medal results at the International Mathematical Olympiad.

2025: Taiwan aims to phase out nuclear power

Taiwan aims to phase out nuclear power by 2025.

March 2026: UN convenes inaugural meeting of AI panel

In March 2026, the United Nations is scheduled to convene the first meeting of the Independent International Scientific Panel on AI, a 40-member expert body established to produce annual reports on AI's societal impacts.

2035: Projected increase in greenhouse gas emissions from AI energy consumption

By 2035, the International Energy Agency estimates, greenhouse gas emissions from the energy consumption of AI could rise to 300–500 million tonnes, depending on the measures taken.