Artificial intelligence is the ability of computer systems to perform tasks typically requiring human intelligence. This includes learning, reasoning, problem-solving, and decision-making. As a field of computer science, AI focuses on creating methods and software that enable machines to understand their surroundings and utilize learning and intelligence to act in ways that improve their chances of reaching specific objectives.
The White House introduced an AI framework urging Congress to adopt light-touch regulation, and the Trump administration unveiled a national AI policy framework intended to limit state authority over AI.
In 1943, McCulloch and Pitts designed "artificial neurons", which became a key component in the development of AI.
In 1950, Alan Turing published 'Computing Machinery and Intelligence', posing the question 'Can machines think?' and proposing to focus instead on whether machines can exhibit intelligent behavior, introducing the Turing test to assess machine intelligence.
In 1951, the movie 'The Day the Earth Stood Still' was released.
In 1956, the field of AI research was founded as an academic discipline at a workshop at Dartmouth College, a significant milestone; its attendees went on to become the leaders of AI research.
In 1965, Herbert Simon predicted that machines would be capable of doing any work a man can do within twenty years, showcasing early optimism in AI's potential.
In 1967, Marvin Minsky predicted that the problem of creating 'artificial intelligence' would substantially be solved within a generation, illustrating the high expectations of the time.
In 1968, Arthur C. Clarke's and Stanley Kubrick's '2001: A Space Odyssey' was released, featuring HAL 9000, a murderous computer.
In 1974, the U.S. and British governments cut funding for exploratory AI research, partly in response to criticism of the field's progress and pressure to fund more productive projects, leading to the first 'AI winter'.
In 1979, the Three Mile Island nuclear power plant suffered a partial nuclear meltdown of its Unit 2 reactor.
In 1984, the movie 'The Terminator' was released.
By 1985, the market for AI, fueled by the success of expert systems, had reached over a billion dollars, marking a significant commercial milestone.
In 1986, the movie 'Aliens' was released.
Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
In 1990, Yann LeCun demonstrated the successful application of convolutional neural networks for recognizing handwritten digits.
On May 11, 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.
In 1998, George Dyson expanded on the idea that artificial intelligence is the next step in evolution in his book, Darwin Among the Machines: The Evolution of Global Intelligence.
In 1999, the movie 'The Matrix' was released.
By 2000, solutions developed by AI researchers were being widely used, although they were rarely described as "artificial intelligence," a phenomenon known as the AI effect.
Around 2002, academic researchers concerned that AI had narrowed its focus to specific solutions, rather than versatile, fully intelligent machines, founded the subfield of artificial general intelligence (AGI).
In 2005, the field of machine ethics, also called computational morality, was founded at an AAAI symposium.
Most nuclear plants in Japan were shut down after the 2011 Fukushima nuclear accident.
In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
In 2012, deep learning began to dominate industry benchmarks and was adopted throughout the field, leading to the abandonment of other methods for many specific tasks.
In 2012, funding and interest in AI increased substantially after graphics processing units began being used to accelerate neural networks, and deep learning outperformed previous AI techniques.
The sudden success of deep learning in 2012 occurred because of the increase in computer power and the availability of vast amounts of training data.
In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, while the United States and others disagreed, marking an early stage in the debate over lethal autonomous weapons.
In 2014, DARPA established the XAI ("Explainable Artificial Intelligence") program to try to solve the problems of AI explainability.
In June 2015, Google Photos's new image labeling feature mistakenly tagged Jacky Alcine and a friend, who are black, as "gorillas", a failure attributed to the scarcity of images of black people in the training data. The incident highlighted the issue of sample size disparity in AI training datasets.
By 2015, over fifty countries were reported to be researching battlefield robots, indicating a growing interest and investment in military AI applications.
In October 2015, AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, the first time a program defeated a professional Go player without handicap.
In 2015, The Economist stated that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".
In the years 2015-2019, the amount of machine learning research (measured by total publications) increased by 50%.
In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, the first time a computer Go-playing system beat a 9-dan professional without handicap.
According to Stanford's AI Index, only one AI-related law was passed in 2016 across the 127 countries surveyed.
After 2016, the study of current and future risks and possible solutions related to AI became a serious area of research, shifting from earlier arguments that the risks were too distant or that humans would always be valuable to superintelligent machines.
After the U.S. election in 2016, major technology companies took steps to mitigate the problem of AI recommending misinformation, conspiracy theories and extreme partisan content.
In 2016, Julia Angwin at ProPublica found that COMPAS, a commercial program used by U.S. courts to assess recidivism risk, exhibited racial bias: it overestimated the risk for black defendants and underestimated it for white defendants, even though its overall error rate was equal for both groups.
In 2016, concerns about fairness and misuse of technology became central in machine learning, with increased publications, funding, and researcher focus on these issues; the alignment problem became a serious area of academic study.
In 2016, early drafts of the European Union's General Data Protection Regulation included an explicit statement that people harmed by an algorithm's decision have a right to an explanation, highlighting the challenge of explainability in AI.
Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness, shifting from 'what they know' to 'what they're doing with it'.
In 2017, AlphaGo defeated Ke Jie, then the world's top-ranked Go player.
In 2017, researchers demonstrated that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness given the differing base rates of re-offense between white and black individuals, adding a layer of complexity to the discussion of fairness in AI.
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems, giving them rights and responsibilities, akin to the legal status of companies.
In 2017, the transformer architecture further accelerated the growth of AI.
In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.
In 2018, critics argued that granting rights to AI systems would undermine human rights, advocating instead for legislation focusing on user needs and highlighting robots' lack of autonomy.
In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.
In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text.
Singapore imposed a ban on the opening of data centers in 2019 due to electric power concerns.
In June 2020, the Global Partnership on Artificial Intelligence was launched, emphasizing the need for AI to be developed in accordance with human rights and democratic values.
In 2020, OpenAI released GPT-3, a large language model capable of generating high-quality, human-like text.
Since 2020, AI tools have become available for authoritarian governments to control their citizens, including face and voice recognition, machine learning for classifying potential enemies of the state, recommendation systems for propaganda, and deepfakes for misinformation.
In November 2021, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement calling for a government commission to regulate AI.
As of 2021, the reason that deep learning performs so well in so many applications is not known.
In 2021, AlphaFold 2 demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.
In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.
On November 30, 2022, ChatGPT was launched and rapidly became the fastest-growing consumer software application in history, gaining over 100 million users in two months.
A 2024 Goldman Sachs Research Paper found that in 2022, US data centers consumed 3% of US power.
At the 2022 ACM Conference on Fairness, Accountability, and Transparency, a paper reported that a CLIP‑based (Contrastive Language-Image Pre-training) robotic system reproduced harmful gender‑ and race‑linked stereotypes in a simulated manipulation task.
In 2022, Singapore lifted the ban on opening of data centers.
In 2022, the annual number of AI-related laws passed across the 127 countries surveyed by Stanford's AI Index reached 37, up from one in 2016.
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".
The cost for re-opening and upgrading Three Mile Island is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act.
In February 2023, Microsoft introduced Bing Chat, an AI chatbot later rebranded as Copilot, which provides AI-generated summaries in search results.
In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.
In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".
After the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages.
In November 2023, the first global AI Safety Summit took place in Bletchley Park, UK. The summit aimed to discuss the risks of AI and explore regulatory frameworks. 28 countries, including the United States, China, and the European Union, issued a declaration calling for international cooperation.
By 2023, generative pre-trained transformer models were able to achieve human-level scores on the bar exam, the SAT, the GRE, and many other real-world tests.
In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. Also in 2023, the United Nations launched an advisory body to provide recommendations on AI governance.
In 2023, a Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI.
In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.
In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.
In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
In January 2024, the International Energy Agency (IEA) released a report forecasting that power demand for data centers, artificial intelligence, and cryptocurrency might double by 2026.
In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for US$650 million.
In May 2024, at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments regarding the development of AI.
On 1 August 2024, the EU Artificial Intelligence Act entered into force, establishing the first comprehensive EU-wide AI regulation.
On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center.
A 2024 Goldman Sachs Research Paper found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022.
According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies, indicating a high level of investment in the field.
During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied politicians and translating speeches.
In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.
In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments, speeding up the initial screening process and reducing costs.
In 2024, the Alibaba Group developed a version of its Qwen models called Qwen2-Math, that achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems.
In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law".
In 2024, the UK AI Safety Institute released a testing toolset called 'Inspect' for AI safety evaluations available under an MIT open-source licence.
In January 2025, Microsoft proposed the technique rStar-Math that leverages Monte Carlo tree search and step-by-step reasoning, enabling a relatively small language model like Qwen-7B to solve 53% of the AIME 2024 and 90% of the MATH benchmark problems.
On May 20, 2025, Google introduced an AI Mode at its Google I/O event.
In July 2025, Ford CEO Jim Farley predicted that "artificial intelligence is going to replace literally half of all white-collar workers in the U.S."
The Palisades nuclear reactor on Lake Michigan, closed since 2022, is planned to reopen in October 2025.
In 2025, Geoffrey Hinton said that modern AI is particularly "good at persuasion" and getting better all the time, and asks "Suppose you wanted to invade the capital of the US. Do you have to go there and do it yourself? No. You just have to be good at persuasion."
In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tons.
In 2025, the experimental model Gemini Deep Think, accepting natural-language prompts directly, achieved gold-medal results at the International Mathematical Olympiad.
In March 2026, the United Nations convened the first meeting of the Independent International Scientific Panel on AI, a 40-member expert body established to produce annual reports on AI's societal impacts.
The International Energy Agency estimated that, depending on what measures are taken, greenhouse gas emissions from AI's energy consumption could rise to 300-500 million tonnes by 2035.