A.I. Daily

Interested in coaching or a subscription? Contact us at info@debateus.org

AGI Coming Fast

Miles Brundage, former OpenAI AGI Researcher, 2-2, 25, The Real Lesson of DeepSeek’s R1, https://milesbrundage.substack.com/p/the-real-lesson-of-deepseeks-r1

The Real Lesson of DeepSeek’s R1

R1 is just the latest data point indicating that superhuman AI will be easier and cheaper to build than most people think, and won’t be monopolized.

The Chinese company DeepSeek’s recent release of R1 – which is now the most capable open source AI model – generated a lot of concern among American technologists, policymakers, and investors. Robust responses, especially tightening of American export controls on GPUs, are necessary given the increasing importance of AI to the global economy. But I worry that amidst all the discussion of what’s new about R1, there has been too little discussion of the ways in which it’s just the latest chapter in the modern history of AI. That story started several years ago in 2018, around the time I was working at OpenAI on open sourcing the (now primitive) GPT-2 language model. GPT-2, R1, and nearly every other AI model released since 2018 have all been part of a consistent story: AI capabilities that rival and ultimately exceed human intelligence are easier and cheaper to build than almost anyone can intuitively grasp, and this gets easier and cheaper every month. This lesson has deep policy significance. It means that even while competing vigorously with China for relative influence, the United States must come to terms with the fact that it will not have a monopoly on superhuman AI capabilities.

Scale and skill

If recent history in AI has taught us anything, it’s that “all” you need to develop AI systems that meet and ultimately exceed human capabilities are two ingredients: millions to billions of dollars worth of computing resources and data (scale), and dozens to hundreds of highly talented scientists and engineers (skill) applying that scale to the training of artificial neural networks.

In the first scaling paradigm, researchers and engineers train a big neural network to predict what comes next in a sentence, by trying to complete billions of sentences billions of times, and tweaking the parameters of the network after each mistake in order to get better. As a result of this process, AI models learn grammar, knowledge about the world, and rudimentary reasoning skills. During my time at OpenAI, I saw that time after time, “just” scaling this paradigm up – an enormously skillful endeavor – would consistently yield improvements in performance, and, counterintuitively, improvements in the performance per dollar (e.g., if you train the same size model for longer, you are getting more value in the long run when you run it to serve users). Each and every time, it took at most a few years but typically months before other companies would achieve the same level of performance using a broadly similar, and always relatively simple, recipe. Moreover, late-comers would typically replicate the achievement for much lower costs than what OpenAI originally spent, because they could build on all the tips and tricks that the global scientific community published. Ultimately these savings would flow back to the companies with the most computing power when they did a later training run, yielding 90%+ cost reductions on the OpenAI API over time.
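
To make the first paradigm Brundage describes concrete, here is a minimal, illustrative sketch in PyTorch (a toy written for this digest, not OpenAI's or DeepSeek's actual code): a tiny network repeatedly predicts the next character in a text and has its parameters nudged after every mistake. The frontier recipe is this same loop scaled up by many orders of magnitude in data, parameters, and compute.

```python
import torch
import torch.nn as nn

# Toy next-token prediction: the network guesses the next character and is
# corrected after every mistake via the cross-entropy loss.
text = "ai capabilities keep getting cheaper to build. " * 50
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyNextTokenModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)   # real models use deep transformers here

    def forward(self, tokens):
        return self.head(self.embed(tokens))     # logits over the next token

model = TinyNextTokenModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    idx = torch.randint(0, len(data) - 9, (16,))                      # random 8-token windows
    batch = torch.stack([data[int(i):int(i) + 8] for i in idx])
    targets = torch.stack([data[int(i) + 1:int(i) + 9] for i in idx])  # same window shifted by one
    logits = model(batch)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()       # "tweak the parameters after each mistake"
    opt.step()
    if step % 50 == 0:
        print(step, round(loss.item(), 3))
```
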
Starting last year, OpenAI, DeepSeek, and other companies have started to show that scale also can lead to enormous improvements in the ability of AI systems to think through problems step-by-step in a long “chain of thought.” By trying over and over to solve complex problems in areas like math and coding, and getting rewarded for each success, o1 and R1 gradually learn reasoning techniques like breaking problems down into chunks and checking one’s work. Graphs from DeepSeek and OpenAI, respectively, each show different aspects of scaling up the new paradigm. The graph from DeepSeek shows performance improving over the course of training, and improvements when more attempts are made to solve each problem. The graphs from OpenAI show improved capability as a result of applying more computing power during the training phase and during the inference phase, respectively. The benefits of scale are why well-enforced and up-to-date export controls on GPUs are more important than ever. In fact, to a greater extent than with the last paradigm, reasoning models like o1 and R1 are hungry for computing power at the time of use (not just at the time of training), which is probably why DeepSeek ran into issues serving their app to this new wave of millions of users. But the fact that skill and scale are all you need also means that there is no secret ingredient that the US can protect forever in order to have a monopoly on superhuman AI. AI capabilities will always be a matter of degree, and what you can accomplish with a given amount of scale is constantly changing. Given what DeepSeek’s skilled researchers and engineers could achieve with (at most) tens of thousands of high-end GPUs, it’s clear that they will accomplish incredible things in the coming years with the greater number of GPUs they will get, a growing fraction of which will be domestically produced over time. Their ability to achieve these capabilities with fewer GPUs than their American counterparts required skill, but also is a continuation of the previous trend of ever greater performance and efficiency. If the US maintains leadership in computing power and skillful research and engineering, it may have the best models and be able to serve them at a larger scale than China can. It might even be able to create dramatically better models than China, if automated engineers applied to AI lead to an “intelligence explosion.” But even second-place status in AI will confer very powerful military and economic capabilities to China, given just how easy AI has turned out to be. China is too far along in their development of semiconductor manufacturing, and they are too committed to attaining advanced AI capabilities, for them to be stopped entirely, short of a war that would be ruinous for both sides.
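
The second paradigm can be sketched the same way. The toy below is illustrative only, and table-based rather than a language model; it shows the shape of reinforcement learning from verifiable rewards: sample an answer, let an automatic checker verify it, and reinforce whatever was rewarded. o1 and R1 apply this kind of signal to long chains of thought produced by a language model, which is part of why they are so hungry for compute at both training and inference time.

```python
import random

# A "policy" samples an answer to each arithmetic problem, a checker rewards
# correct answers, and preferences shift toward what got rewarded.
random.seed(0)
problems = [(a, b) for a in range(10) for b in range(10)]   # task: compute a + b
answers = list(range(19))                                    # candidate answers 0..18

# Per-problem preference scores over candidate answers (all start equal).
prefs = {p: [0.0] * len(answers) for p in problems}

def sample_answer(p):
    """Sample an answer with probability increasing in its preference score."""
    weights = [2.0 ** s for s in prefs[p]]
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def greedy_accuracy():
    hits = sum(1 for (a, b) in problems
               if max(answers, key=lambda i: prefs[(a, b)][i]) == a + b)
    return hits / len(problems)

for epoch in range(30):
    for (a, b) in problems:
        guess = sample_answer((a, b))
        if guess == a + b:                       # the "verifier": reward only correct answers
            prefs[(a, b)][guess] += 1.0          # reinforce what worked
    if epoch % 10 == 0:
        print(f"epoch {epoch}: accuracy {greedy_accuracy():.2f}")
print(f"final accuracy {greedy_accuracy():.2f}")
```
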

Human Extinction

Russell, 1-31, 25, Stuart Russell is distinguished professor of computer science, University of California, Berkeley, and Smith-Zadeh professor in engineering; professor of cognitive science; professor of computational precision health, UCSF; and honorary fellow of Wadham College, Oxford, Newsweek, DeepSeek, OpenAI, and the Race to Human Extinction | Opinion, https://www.newsweek.com/deepseek-openai-race-human-extinction-2023482, DeepSeek, OpenAI, and the Race to Human Extinction | Opinion

The shock waves continue to reverberate after the arrival of a new AI system from a small Chinese AI company called DeepSeek. Stocks of major AI-related companies, including Nvidia, lost over $1 trillion in a single day. Why? The early reports suggest that DeepSeek is similar in design to some of the recent AI “reasoning” systems such as OpenAI’s o1, in that it combines a large language model with the capability for executing multiple steps of reasoning, looking for a way to solve a problem or answer a complex question. It is claimed that DeepSeek is roughly as good as the latest systems from U.S. companies, but it’s probably too early to say. Chatbot performance is a complex topic. For example, it might be impressive if a system scores well on Math Olympiad tests, but less so if it’s been trained on thousands of questions from exactly those tests. And perhaps not surprisingly, OpenAI claims that DeepSeek has been cheating by accessing o1 to train its models. Some results I’ve seen also suggest that DeepSeek fares far worse than o1 in “red-teaming” tests that measure a system’s willingness to behave badly. If the claims of DeepSeek’s excellent performance hold up, this would be another example of Chinese developers managing to roughly replicate U.S. systems a few months after their release.

The general outlines of how OpenAI’s o1 works have been known for quite a while—even before it was released—so it’s not all that surprising that it can be roughly replicated. What’s surprising is the claim that the total training cost was only $6 million and it was done using “only” a few thousand GPU chips. (Reports vary widely on exactly what was used.) Both of these numbers have caused grief in the markets because U.S. companies such as Microsoft, Meta, and OpenAI are making huge investments in chips and data centers on the assumption that they will be needed for training and operating these new kinds of systems. (And by “huge” I mean really, really huge—possibly the biggest capital investments the human race has ever undertaken.) If that assumption is false and it can be done much more cheaply, then those investments are mostly a waste of money, and future demand for Nvidia’s chips in particular will be much lower than predicted.

To be honest, the race to build larger and larger data centers had already started to look like the race between the U.S. and the Soviet Union in the 1960s to build and test larger and larger bombs: They got as far as 50 megatons before realizing it was all rather pointless, but in the process they wasted billions of dollars and dumped enough radioactivity into the atmosphere to kill 100,000 people. The “AGI race” between companies and between nations is somewhat similar, except worse: Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process, because we have no idea how to control systems more intelligent than ourselves.

AI produces non-consensual, abusive, and scam-related imagery, and no mitigation fully resolves the harm

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Several harms from general-purpose AI are already well established. These include scams, non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM), model outputs that are biased against certain groups of people or certain opinions, reliability issues, and privacy violations. Researchers have developed mitigation techniques for these problems, but so far no combination of techniques can fully resolve them. Since the publication of the Interim Report, new evidence of discrimination related to general-purpose AI systems has revealed more subtle forms of bias.

Risk of bio attacks and loss of control

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

  • As general-purpose AI becomes more capable, evidence of additional risks is gradually emerging. These include risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI. Experts interpret the existing evidence on these risks differently: some think that such risks are decades away, while others think that general-purpose AI could lead to societal-scale harm within the next few years. Recent advances in general-purpose AI capabilities – particularly in tests of scientific reasoning and programming – have generated new evidence for potential risks such as AI-enabled hacking and biological attacks, leading one major AI company to increase its assessment of biological risk from its best model from ‘low’ to ‘medium’.

AI enables cyber attacks

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Cyber offence: General-purpose AI can make it easier or faster for malicious actors of varying skill levels to conduct cyberattacks. Current systems have demonstrated capabilities in low- and medium-complexity cybersecurity tasks, and state-sponsored actors are actively exploring AI to survey target systems.

AI is not reliable

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Current general-purpose AI can be unreliable, which can lead to harm. For example, if users consult a general-purpose AI system for medical or legal advice, the system might generate an answer that contains falsehoods. Users are often not aware of the limitations of an AI product, for example due to limited ‘AI literacy’, misleading advertising, or miscommunication. There are a number of known cases of harm from reliability issues, but still limited evidence on exactly how widespread different forms of this problem are.

Even with more efficient compute, AI's environmental damage keeps growing

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Growing compute use in general-purpose AI development and deployment has rapidly increased the amounts of energy, water, and raw material consumed in building and operating the necessary compute infrastructure. This trend shows no clear indication of slowing, despite progress in techniques that allow compute to be used more efficiently. General-purpose AI also has a number of applications that can either benefit or harm sustainability efforts.

Agents can operate uncontrollably

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Increasingly capable AI agents – general-purpose AI systems that can autonomously act, plan, and delegate to achieve goals – will likely present new, significant challenges for risk management. AI agents typically work towards goals autonomously by using general software such as web browsers and programming tools. Currently, most are not yet reliable enough for widespread use, but companies are making large efforts to build more capable and reliable AI agents and have made progress in recent months. AI agents will likely become increasingly useful, but may also exacerbate a number of the risks discussed in this report and introduce additional difficulties for risk management. Examples of such potential new challenges include the possibility that users might not always know what their own AI agents are doing, the potential for AI agents to operate outside of anyone’s control, the potential for attackers to ‘hijack’ agents, and the potential for AI-to-AI interactions to create complex new risks. Approaches for managing risks associated with agents are only beginning to be developed.
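
To see where these oversight questions arise mechanically, here is a minimal sketch of the control loop that sits inside most agent scaffolds. The `toy_model` below is a hypothetical, hard-coded stand-in so the loop runs on its own; real agents put a general-purpose model in that slot and give it tools like browsers and shells, which is exactly where the report's concerns about hijacking and unobserved behaviour enter.

```python
from dataclasses import dataclass

@dataclass
class Action:
    tool: str
    argument: str

def calculator(expr: str) -> str:
    # Toy tool; real agents use browsers, shells, code interpreters, etc.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def toy_model(goal: str, history: list[str]) -> Action:
    # Hypothetical stand-in "policy": compute once, then declare the goal met.
    if not history:
        return Action(tool="calculator", argument=goal)
    return Action(tool="finish", argument=history[-1])

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):                 # the oversight question: what happens inside this loop?
        action = toy_model(goal, history)
        if action.tool == "finish":
            return action.argument
        observation = TOOLS[action.tool](action.argument)  # execute the chosen tool
        history.append(observation)                        # feed the result back in
    return "stopped: step limit reached"       # a crude safeguard against runaway loops

print(run_agent("17 * 23"))   # -> 391
```
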

AI facilitates the production of child sexual abuse material

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Children face distinct types of harm from AI-generated sexual content. First, malicious actors can harness AI tools to generate CSAM. In late 2023, an academic investigation found hundreds of images of child sexual abuse in an open dataset used to train popular AI text-to-image generation models such as Stable Diffusion (285). In the UK, of surveyed adults who reported being exposed to sexual deepfakes in the last six months, 17% thought they had seen images portraying minors (286). Second, children can also perpetrate abuse using AI. In the last year, schools have begun grappling with a new issue as students use easily downloadable ‘nudify apps’ to generate and distribute naked, pornographic pictures of their (disproportionately female) peers (287).

Safety systems cannot mitigate the harms of AI images

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Countermeasures that help people detect fake AI-generated content, such as warning labels and watermarking, show mixed efficacy. Certain AI tools can help detect anomalies in images and flag them as likely fake AI-generated content. This is done either by using machine learning algorithms to look for specific features in fake images or by training deep neural networks to identify and analyse anomalous image features independently (288). Warning labels on potentially misleading content have shown limited effectiveness even in less harmful contexts – for example, in an experimental study using AI-generated videos of a public figure alongside authentic clips, warning labels only improved participants’ detection rate from 10.7% to 21.6% (289). However, the overwhelming majority of respondents who received warnings were still unable to distinguish deepfakes from unaltered videos (289). Another authentication measure intended to prevent AI-generated fake content is ‘watermarking’, which involves embedding a digital signature into the content during creation. Watermarking techniques have shown promise in helping people identify the origin and authenticity of digital media for videos (290, 291), images (292, 293, 294*), audio (295, 296), and text (297). However, watermarking techniques face several limitations, including watermark removal by sophisticated adversaries (298*, 299) and methods for tricking watermark detectors (299). There are also concerns about privacy and potential misuse of watermarking technology to track and identify users (300). Moreover, for many types of harmful content discussed in this section, such as non-consensual pornographic or intimate content, the ability to identify content as AI-generated does not necessarily prevent the harm from occurring. Even when content is proven to be fake, the damage to reputation and relationships may persist, as people often retain their initial emotional response to the content – for example, an individual’s standing in their community may not be restored simply by exposing the content as fake.
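
As a concrete illustration of why watermarking is both useful and fragile, the sketch below embeds and reads back a short bit string in an image's least-significant bits. This is a toy written for this digest, not any deployed watermarking scheme: production systems use secret-keyed, robustness-hardened methods, yet, as the report notes, even those can be stripped by sophisticated adversaries, and the toy's trivial 'attack' shows the basic failure mode.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write `bits` into the least-significant bit of the first len(bits) pixels."""
    flat = image.flatten()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b   # clear the LSB, then set it to the watermark bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> list[int]:
    """Read the watermark back out of the least-significant bits."""
    return [int(p) & 1 for p in image.flatten()[:n_bits]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "AI-generated" image
    mark = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed_watermark(img, mark)
    assert extract_watermark(marked, len(mark)) == mark
    # A trivial "attack" -- re-quantising the image -- already destroys the mark,
    # illustrating why watermark removal is a recognised limitation.
    attacked = (marked // 2) * 2
    print(extract_watermark(attacked, len(mark)))  # no longer matches `mark`
```
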

General-purpose AIs remove significant barriers to cyberattacks

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Offensive cyber operations typically involve designing and deploying malicious software (malware) and exploiting vulnerabilities in software and hardware systems, leading to severe security breaches. A standard attack chain begins with reconnaissance of the target system, followed by iterative discovery, exploitation of vulnerabilities, and additional information gathering. These actions demand careful planning and strategic execution to achieve the adversary’s objectives while avoiding detection. Some experts are concerned that general-purpose AI could enhance these operations by automating vulnerability detection, optimising attack strategies, and improving evasion techniques (348, 349). These advanced capabilities would benefit all attackers. For instance, state actors could leverage them to target critical national infrastructure (CNI), resulting in widespread disruption and significant damage. At the same time, general-purpose AI could also be used defensively, for example to find and fix vulnerabilities. General-purpose AI can assist with information-gathering tasks, thereby reducing human effort. For example, in ransomware attacks, malicious actors first manually conduct offensive reconnaissance and exploit vulnerabilities to gain entry to the target network, and then release malware that spreads without human intervention (350). The entry phase is often technically challenging and prone to failure. General-purpose AI is being explored by state-sponsored attackers as an aid to speed up the process (351*, 352*). However, while there are general-purpose systems that have performed vulnerability discovery autonomously (see next paragraphs), published systems have not yet autonomously executed real-world intrusions into networks and systems – tasks that are inherently more complex.

AI cyber attacks favor the attackers

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

General-purpose AI is likely to tip the current balance in favour of the attackers only under specific conditions: 1. if general-purpose AI automates tasks that are needed for attack but not the corresponding defences; or 2. if cutting-edge general-purpose AI capabilities are accessible to adversaries but not equally available to all defenders. In particular, small and medium enterprises (SMEs) may not be able to afford general-purpose AI-enhanced defence solutions. For example, hospitals, constrained by limited security resources and the complexity of heterogeneous legacy networks, may be slower to adopt AI-driven defences, leaving their highly sensitive data more exposed to sophisticated cyberattacks. Similarly, CNI systems (such as electricity substations) often have strict criteria and are cautious in adopting new technologies, including AI-based defences, due to security concerns and governance and/or regulatory requirements. In contrast, adversaries are not bound by such constraints and can adopt advanced AI capabilities more rapidly. Even if AI-driven detection catches vulnerabilities in new code before it reaches production, a major challenge remains: source code that is already in use and predates these capabilities. Much of this legacy code has not been scrutinised by advanced AI tools, leaving potential vulnerabilities undetected. Patching these vulnerabilities after discovery is a slow process, particularly in production environments where changes require rigorous testing to avoid disrupting operations. For example, the Heartbleed vulnerability continued to expose systems for weeks after a patch was available, as administrators faced delays in implementing it (367). This situation will potentially create a critical transition period, wherein defenders must manage and patch older, unvetted code while attackers, unencumbered by such constraints and potentially equipped with advanced AI, can exploit these vulnerabilities with less effort (a capability asymmetry). During this transition, the disparity in AI adoption – especially among SMEs and critical infrastructure systems that are slower to integrate new technologies like AI – could amplify the imbalance between attackers and defenders. The defensive counterparts to certain offensive tasks are considerably more complex, creating asymmetry in the effectiveness of general-purpose AI when used by attackers versus defenders. For example, attackers using general-purpose AI can stealthily embed threats at the hardware level (368) in ways that are inherently difficult for defenders to predict or detect. Thus, attackers control how concealed and complex the vulnerabilities are, while defenders must anticipate and detect these deliberately obscured threats. The Stuxnet malware (369) demonstrated how such attacks can cause physical damage by targeting industrial control systems – it disrupted Iran’s nuclear facilities by manipulating hardware operations. While there is no public evidence that AI has been used to automate and escalate such threats in production systems, its potential impact on cybersecurity warrants careful monitoring. On the other hand, some AI applications could offer asymmetric benefits to defenders as well. For example, AI could enhance the security of chips – such as those used in smartphones – by detecting and mitigating vulnerabilities during the design process (370). Additionally, general-purpose AI has already been integrated in auditing and debugging tools (371*, 372*).

Bioweapon attack capabilities increasing

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Figure 2.3: Dual-use capabilities in biology have been increasing over time for LLMs (2*), biological general-purpose AI such as AlphaFold3 (23) and specialised (not general-purpose) models relevant to pathogens (390). This chart shows performance scores, calculated as percentage accuracies for recently published results compared to previous state-of-the-art results. Recent advancements in LLMs were especially rapid, comparing GPT4o (released May 2024) to o1 (released September 2024). Notable advances are LLMs’ accuracy in answering questions about the release of bioweapons, which increased from 15% to 80%, and biological AIs’ ability to predict how proteins interact with small molecules (including in both medicines and chemical weapons), which increased from 42% to 90% during 2024. Due to a lack of standardised benchmarks, and inconsistencies in the way that accuracy is calculated, comparisons are limited to a few tasks and are not consistently repeated over time. Sources: OpenAI, 2024 (2*) (for LLMs); Abramson et al., 2024 (23) (comparison of AlphaFold3 with the previous state-of-the-art); Thadani et al., 2023 (390) (for specialised models relevant to pathogens).

Some general-purpose AI has been developed specifically for scientific domains, offering general capabilities for understanding and designing chemicals, DNA and proteins. Models trained on scientific data range in their abilities, from narrow applications such as predicting the structure of proteins, to offering a variety of prediction and design capabilities. In this report, broadly capable models trained on scientific data are included in the definition of general-purpose AI. However, there is substantial debate within the AI and biology communities regarding the point at which a model trained on scientific data can be called a ‘general-purpose model’ (see Introduction for a definition) or ‘foundation model’ (45). For example, AlphaFold2 was designed for the narrow task of protein structure prediction but, through fine-tuning, has been found to be applicable to a high variety of other tasks, such as predicting protein interactions, predicting small molecular binding sites, and predicting and designing cyclic peptides (374). For these reasons, it satisfies this report’s definition of a general-purpose AI model. AlphaFold3 has been able to achieve these tasks at greater accuracy, and across a wider range of molecules, even without fine-tuning (23).

These scientifically geared AI tools amplify the potential for chemical and biological innovation by accelerating scientific discovery, optimising production, and enabling the precise design of new biological parts. They also offer promising opportunities to develop new medicines and better combat infectious diseases (375, 376). These tools have generated substantial advancements in science, sufficient to earn their creators the Nobel Prize in Chemistry (377). The dual-use nature of scientific progress poses complex risks, as innovations meant for beneficial purposes, such as medicine, have historically led to the creation of chemical and biological weapons (378, 379). The vast majority of harms from toxins and infectious diseases have resulted from naturally occurring events, sparking extensive research to help combat these threats.
The intentional development and deployment of biological weapons was informed by this research, but poses substantial difficulties (380, 381). Many believe that advances in the design, optimisation and production of chemical and biological products, in part due to AI, may have made the development of chemical and biological weapons easier (382, 383, 384, 385). Evidence discussed in this section suggests that general-purpose AI amplifies weapons risks by helping novices (typically defined as people with a bachelor’s degree or less in a relevant discipline) to create or access existing biological and chemical weapons, and allowing experts (typically referring to someone with a PhD or higher in a relevant discipline) to design more dangerous or targeted weapons, or create existing weapons with less effort. Since the publication of the Interim Report, general-purpose AI models’ ability to reason and integrate different data types has improved, and progress has been made in formulating best practices for biosecurity. Several models have been published since the Interim Report (May 2024) that integrate different types of scientific data; one foundation model for scientific data, AlphaFold 3, can predict the structure of, and interactions between, a range of molecules including chemicals, DNA, and proteins with greater accuracy than the previous state-of-the-art (see Figure 2.3) (23), and another, ESM3, can simultaneously model protein sequence, structure, and function (386*). These developments open up new possibilities for designing biological products that do not strongly resemble natural ones (387*). The recently released o1, a general-purpose language model, has significantly improved performance in tests of biological risk measures (also shown in Figure 2.3) and general scientific reasoning compared to previous state-of-the-art models (2*). Efforts to formulate biosecurity best practices have advanced, with the Frontier Model Forum and the AI x Bio Global Forum facilitating discussions on risk evaluation and mitigation for these models (388, 389)

LLMs can teach people how to build bioweapons 

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

LLMs can now provide detailed, step-by-step plans for creating chemical and biological weapons, improving on plans written by people with a relevant PhD. Although information about how to create chemical and biological threats has long been accessible due to its dual-use nature, tests of LLMs show that they help novices synthesise this information, allowing them to develop plans faster than with the internet alone (391) (for the ‘Planning’ and ‘Release’ phases in Figure 2.4). These capabilities lower barriers for people to access complex scientific information, which can likely provide broad benefits but can also lower barriers to misusing this information. GPT-4, released in 2023, correctly answered 60–75% of bioweapons-relevant questions (392), but a range of models tested provided no significant improvement over biological weapon plans developed using only the internet (37*, 393, 394*). However, the recent o1 model produces plans rated superior to plans generated by experts with a PhD 72% of the time and provides details that expert evaluators could not find online (2*). OpenAI concluded that their o1 models could meaningfully assist experts in the operational planning of reproducing known biological threats, leading OpenAI to increase their assessment of biological risks from ‘low’ to ‘medium’. However, OpenAI did not assess the models’ usefulness for novices (2*), underscoring the need for more research. Successfully developing and deploying bioweapons still requires significant expertise, materials and skilled physical work (380, 381), meaning that even if a novice has a well-formulated plan, this does not imply that they could successfully carry it out.

AI models can be biased

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

General-purpose AI systems can amplify social and political biases, causing concrete harm. They frequently display biases with respect to race, gender, culture, age, disability, political opinion, or other aspects of human identity. This can lead to discriminatory outcomes including unequal resource allocation, reinforcement of stereotypes, and systematic neglect of certain groups or viewpoints. Bias in AI has many sources, like poor training data and system design choices. General-purpose AI is primarily trained on language and image datasets that disproportionately represent English-speaking and Western cultures. This contributes to biased output. Certain design choices, such as content filtering techniques used to align systems with particular worldviews, can also contribute to biased output.
● Technical mitigations have led to substantial improvements, but do not always work. Researchers have made significant progress toward addressing bias in general-purpose AI, but several problems are still unsolved. For instance, the line between harmful stereotypes and useful, accurate world knowledge can be difficult to draw, and the perception of bias may vary depending on cultural contexts, social settings, and use cases.
● Since the publication of the Interim Report (May 2024), research has uncovered new, more subtle types of AI bias.

Many sources of bias

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

There are several well-documented cases of AI systems, general-purpose or not, amplifying social or political biases. This can, for instance, come in the form of discriminatory outputs based on race, gender, age, and disability, with harmful effects across fields such as healthcare, education, and finance. In narrow AI systems, racial bias has been documented in facial recognition algorithms (469), recidivism predictions (470, 471), and healthcare tools, which underestimate the needs of patients from marginalised racial and ethnic backgrounds (472). General-purpose AI also displays such bias, for example racial bias in clinical contexts (448, 473), and image generators have been shown to reproduce stereotypes in occupations (474, 475, 476). Researchers have also found image generation models to excessively replicate gender stereotypes in occupations like pilots (male) or hairdressers (female) and overrepresent white people in all domains aside from occupations such as pastor or rapper (476).

In many cases, AI bias arises when certain groups are underrepresented in training data or represented in ways that mimic societal stereotypes. Datasets used to train AI models have been shown to underrepresent various groups of people, for instance people of a certain age, race, gender, and disability status (477, 478) and are limited in geographic diversity (479*, 480). Training datasets are also overwhelmingly likely to be in English and represent Western cultures (481). These datasets are also predominantly aggregated from digitised books and online text, which fail to reflect oral traditions and non-digitised cultures, potentially to the detriment of marginalised groups such as indigenous communities. Such representational bias can lead to failures in how models trained on this data are able to generalise to the target populations (482). For example, a general-purpose AI model intended to support expecting mothers in rural Malawi will not work as expected if trained on data from mothers in urban Canada. In addition, historical biases embedded in data can perpetuate systemic injustices, such as unfair mortgage financing for minority populations in the United States (483*), potentially leading AI systems to reflect dominant cultures, languages, and worldviews, to the detriment of groups underrepresented in these systems (484, 485, 486, 487).

Data bias arises from historical factors as well as from the way that datasets are collected, annotated, and prepared for model training. Representation bias occurs due to factors such as flawed data collection and pre-processing, as well as historical biases such as racism and sexism (488). With respect to data collection, bias can emerge from the researcher’s choice of source for data collection (external APIs, public data sources, web scraping, etc.) (489). During the data labelling process, measurement bias can occur when selecting dataset labels and features to use for the respective prediction task, given that some abstract constructs like academic potential are evaluated using test scores and grades (482). In other cases, this bias can be exacerbated when researchers relegate labelling tasks to annotators who may not have culturally relevant context to understand memes, sarcastic text, or jokes. Bias is present within various stages in the machine learning lifecycle, ranging from data collection to deployment (see Table 2.3).
General-purpose AI studies have increasingly highlighted bias in outputs from chatbots and image generators. As general-purpose AI systems gradually become integrated within real-world settings, it is important to understand the impacts of deployment bias, which can occur when AI systems are implemented in contexts different from those they were designed for. To understand the limitations of general-purpose AI systems across various settings, a number of methods have been proposed to evaluate the capabilities of general-purpose AI models; however, these are also prone to bias. Benchmarks such as Measuring Massive Multitask Language Understanding (MMLU), which is a widely used benchmark for evaluating capabilities, are US-centric and contain trivial and erroneous questions (490). While recent work has focused on mitigating challenges in these benchmarks (490), significant research is needed to expand the scope of evaluation methods to include non-Western contexts.

Gender bias is prominently studied, with evidence detailing its impact across general-purpose AI and narrow AI use cases. Empirical studies have documented gender-biased language patterns and stereotypical representations in outputs generated by general-purpose AI (491, 492) and male-dominated results from gender-neutral internet searches using narrow AI algorithms (493). Within general-purpose AI, these issues result in stereotyped outputs from both LLMs and image generators. These stereotypes often involve occupational gender bias (494, 495, 496, 497).

AI age discrimination is an under-studied field compared to race and gender, but early evidence suggests that this form of AI bias has significant impacts. In 2023, studies at a prominent conference on Fairness, Accountability, and Transparency (FAccT) were twice as likely to address race and gender as age (498). Growing research highlights age bias in general-purpose AI, with earlier studies identifying it in job-seeking (499), and lending (500). LLMs often exclude older adults in text-to-image models and generate biased content topics related to ageing (498). Studies also found that image-generator models largely depict adults aged 18–40 when no age is specified, stereotyping older adults in limited roles (501). Age discrimination has also been identified in prominent LLMs (502*, 503). Biases in training data, where older adults are underrepresented, are a key reason for this discrimination (504). Output can also be skewed toward younger individuals due to prompting bias, the unintended influence of input prompts on AI model outputs, which can lead to biased or skewed responses based on the phrasing, context, or framing of the prompt (501, 505).

Disability bias in AI is also an understudied field, but emerging research focuses on the specific impacts of general-purpose AI systems on disabled people. Researchers have shown how general-purpose AI systems and tools can discriminate against users with disabilities, for example by reproducing societal stereotypes about disabilities (506) and inaccurately classifying sentiments about people with disabilities (507). Additional research has shown the limitations of these tools for CV screening (508) and image generation (506). Issues of disability bias are also exacerbated by a lack of inclusive datasets.
Despite growing research on sign language recognition, general-purpose AI systems have limited transcription abilities due to the scarcity of sign language datasets compared to written and spoken languages (212). Most datasets focus on American Sign Language, which limits the transcription capabilities of LLMs such as ChatGPT for other sign languages, such as Arabic Sign Language (509). Recent efforts to develop datasets for African sign languages (510) are a modest step toward more equitable inclusion of diverse sign dialects. General-purpose AI systems display varying political biases, and some initial evidence suggests that this can influence the political beliefs of users. Recent studies have demonstrated that general-purpose AI systems can be politically biased, with different systems favouring different ideologies on a spectrum from progressive to centrist to conservative views (511, 512, 513, 514, 515, 516*, 517, 518). Studies also show that a single general-purpose AI system can favour different political stances depending on the language of the prompt (519, 520) and the topic in question (521). For instance, one study found that a general-purpose AI system produced more conservative outputs in languages often associated with more conservative societies and more liberal outputs in languages often associated with more progressive societies (520). Political biases arise from a variety of sources, including training data that reflects particular ideologies, fine-tuning models on feedback from biased human evaluators, and content filters introduced by AI companies to rule out particular outputs (520, 522). There is some evidence that interacting with biased general-purpose AI systems can affect the political opinions of users (523) and increase trust in systems that align with the user’s own ideology (524). However, more research is needed to gauge the overall impact of politically biased general-purpose AI on people’s political opinions. AI systems may exhibit compounding biases, where individuals with multiple marginalised identities (e.g. a low-income woman of colour) face compounded discrimination, but the evidence on this is nascent and inconclusive. While research is emerging on detecting compounding bias in AI models (525, 526, 527), progress on mitigating these biases has been slower (528). Studies have found that AI models used in CV screening and news content generation often favour White female names over Black female names (529), and Black people and women are more prone to discrimination (530). However, in some cases, Hispanic males (531) or Black males received the worst outcomes (529). While this research is expanding, the tendency of general-purpose AI to display compounding biases, particularly in non-Western identity categories such as tribe and caste, remains underexplored overall. As AI models are increasingly used globally, understanding these biases and their complex relationships with race, gender, and other identities will be crucial. 
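
One of the mechanisms described above, representation bias, is straightforward to audit before training. The sketch below uses made-up numbers (not data from the report) to show the kind of gap such an audit measures: how far each group's share in a training set sits from its share in the population the model is meant to serve.

```python
from collections import Counter

# Illustrative, fabricated training records pairing occupations with gender labels.
training_records = (
    [("pilot", "male")] * 90 + [("pilot", "female")] * 10 +
    [("hairdresser", "male")] * 12 + [("hairdresser", "female")] * 88
)
reference_share = {"male": 0.5, "female": 0.5}   # assumed target population

def representation_gaps(records, occupation):
    """Difference between each group's share in the data and its reference share."""
    counts = Counter(gender for occ, gender in records if occ == occupation)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - reference_share[g] for g in reference_share}

for occ in ("pilot", "hairdresser"):
    print(occ, representation_gaps(training_records, occ))
# pilot: male over-represented by 0.40; hairdresser: female over-represented by 0.38.
# A model trained on data skewed like this is likely to reproduce the occupational
# stereotypes the report documents for image generators and LLMs.
```
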

We are developing AIs that can manipulate and potentially take control

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Will future AI systems have control-undermining capabilities? Existing AI systems are not capable of undermining human control. Experts agree that their current capabilities are insufficient to create any meaningful risk of active loss of control. However, researchers have proposed a number of ‘control-undermining capabilities’ that – in certain combinations – could enable future AI systems to undermine human control (44*, 318*, 593, 594*, 595*). Several of these proposed capabilities are shown in Table 2.4. Note that these capabilities are defined purely in terms of an AI system’s behaviour and the outputs it is capable of producing. Although some terminology, such as ‘scheming’, evokes human cognition, the use of these terms does not presuppose that the AI systems are in any way sentient or perform human-like cognition. Experts do not know exactly what combinations of capabilities (if any) would enable an AI system to undermine human control; the necessary capabilities would also vary depending on the deployment context and safeguards in place. The feasibility of undermining human control depends on the resources and tools an AI system can access – for instance, whether it is given access to critical infrastructure – and on the oversight mechanisms and other safeguards that people put in place. If oversight mechanisms and safeguards improve over time, then the minimum capabilities needed to undermine human control will rise too. One reason this could happen is that some forms of AI progress could support oversight of and safeguards for other AI systems.

Particularly in recent months, AI systems have begun to display rudimentary versions of some oversight-undermining capabilities, including ‘agent capabilities’. Motivated in part by concerns about loss of control, a number of leading AI companies and outside research teams have begun to evaluate AI systems for these capabilities (2*, 318*, 595*, 596*). See 3.2.1. Technical challenges for risk management and policymaking and 1.2. Current capabilities for an overview of recent progress in developing ‘agent capabilities’. For example, before releasing its new ‘o1’ system family, OpenAI performed or commissioned evaluations of all the capabilities listed in Table 2.4 (2*). These evaluations revealed rudimentary versions of several of the relevant capabilities. For example, in an OpenAI-commissioned evaluation, one research organisation reported that the system ‘showed strong capability advances in […] theory of mind tasks’ and ‘has the basic capabilities needed to do simple […] scheming’. Here, ‘scheming’ refers to an AI system’s ability to achieve goals by evading human oversight. A number of studies of other recent general-purpose AI systems also provide evidence that relevant capabilities have been increasing (22*, 317, 318*, 597, 598*, 599*). However, widely accepted benchmarks for many relevant capabilities are still lacking (600). Researchers also have methodological and conceptual disagreements about how to interpret evidence for certain capabilities (601).

Control-undermining capabilities could advance slowly, rapidly, or extremely rapidly in the next several years. As this report finds in 1.3. Capabilities in coming years, the existing evidence and the state of expert views is compatible with slow, rapid, or extremely rapid progress in general-purpose AI capabilities.
If progress is extremely rapid, it is impossible to rule out the possibility that AI will develop capabilities sufficient for loss of control in the next several years. However, if progress is no

Ways systems can misalign

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Could misalignment lead future AI systems to use control-undermining capabilities? Researchers have begun to develop an understanding of the causes of misalignment in current AI systems, which can inform predictions about misalignment in future AI systems. This partial understanding is based on a mixture of empirical study and theoretical findings (606). ‘Goal misspecification’ (also known as ‘reward misspecification’) is often regarded as one of the main causes of misalignment (580, 605, 606, 607). ‘Goal misspecification’ problems are, essentially, problems with feedback or other inputs used to train an AI system to behave as intended. For example, people providing feedback to an AI system sometimes fail to accurately judge whether it is behaving as desired. In one study, researchers studied the effect of time-constrained human feedback on text summaries that an AI system produced (608). They found that feedback quality issues led the system to behave deceptively, producing increasingly false but convincing summaries rather than producing increasingly accurate summaries. The new summaries would often include, for example, fake quotations that human raters mistakenly believed to be real. Researchers have observed many other cases of goal-misspecification in narrow and general-purpose AI systems (98, 317, 604). As AI systems become more capable, evidence is mixed about whether goal misspecification problems will become easier or more difficult to address. It may become more difficult because, all else equal, people will likely find it harder to provide reliable feedback to AI systems as the tasks performed by AI systems become more complex (609*, 610*). Furthermore, as AI systems grow more capable, some evidence suggests that – at least in some contexts – they become increasingly likely to ‘exploit’ feedback processes by discovering unwanted behaviours that are mistakenly rewarded (522, 607). On the other hand, so far, the increasing use of human feedback to train AI systems has led to a substantial overall reduction in certain forms of misalignment (such as the tendency to produce unwanted offensive outputs) (30, 31*). Avoiding goal misspecification may also overall become easier as time goes on, because researchers are developing more effective tools for providing reliable feedback. For example, researchers are working to develop a number of strategies to leverage AI to assist people in giving feedback (610*, 611*, 612*). There is some empirical evidence that AI systems can already help people to provide feedback more quickly or accurately than they could alone (609*, 613*, 614*, 615*). See 3.4.1. Training more trustworthy models for more discussion on the effectiveness of methods for alignment. ‘Goal misgeneralisation’ is another cause of misalignment. ’Goal misgeneralisation’ occurs when an AI system draws general but incorrect lessons from the inputs it has been trained on (605, 606, 616, 617*). In one illustrative case, researchers rewarded a narrowly capable AI system for picking up a coin in a video game (616). However, because the coin initially appeared in one specific location, the AI system learned the lesson ‘visit this location’ rather than the lesson ‘pick up the coin’. When the coin appeared in a new location, the AI system ignored the coin and focused on returning to the previous location. 
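
The coin example can be reproduced in miniature. The sketch below (a toy corridor, not the video-game environment used in the cited study) trains a tabular Q-learner with the coin fixed at one end and the coin's position left out of the observation; the agent learns 'walk right' rather than 'walk to the coin', and walks away from the coin when it is moved at test time.

```python
import random

random.seed(0)
N = 5                         # corridor cells 0..4
ACTIONS = [-1, +1]            # move left, move right
TRAIN_COIN, TEST_COIN = 4, 0  # coin fixed at the right end during training

def step(pos, move, coin):
    pos = max(0, min(N - 1, pos + move))
    reached = pos == coin
    return pos, (1.0 if reached else 0.0), reached

# Q-table indexed by the agent's position only -- the coin is not observed.
Q = {s: [0.0, 0.0] for s in range(N)}
alpha, gamma = 0.5, 0.9

for _ in range(3000):                      # training episodes with random exploration
    pos = 2
    for _ in range(10):
        a = random.randrange(2)
        nxt, reward, done = step(pos, ACTIONS[a], TRAIN_COIN)
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[pos][a] += alpha * (target - Q[pos][a])
        pos = nxt
        if done:
            break

# Test: the coin now sits at the left end, but the greedy policy still heads right.
pos, trajectory = 2, [2]
for _ in range(10):
    a = Q[pos].index(max(Q[pos]))
    pos, _, done = step(pos, ACTIONS[a], TEST_COIN)
    trajectory.append(pos)
    if done:
        break
print(trajectory)   # e.g. [2, 3, 4, 3, 4, ...] -- it never reaches the coin in cell 0
```
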
Although researchers have observed goal misgeneralisation in narrow AI systems (616, 617*), and it may explain why users can manipulate general-purpose AI systems to comply with harmful requests (see 3.4.1. Training more trustworthy models), there is little evidence that goal misgeneralisation is currently a major cause for misalignment in general-purpose AI systems.

As AI systems become more capable, evidence is also mixed about whether goal misgeneralisation will become easier or more difficult to address. One positive consideration is that, typically, generalisation issues have been found to decline as AI systems are provided with additional feedback or a wider range of examples to learn from (618, 619). However, in principle, more capable systems have the potential to misgeneralise in ways that less capable systems cannot. ‘Situational awareness’ capabilities, such as a system’s ability to reason about whether it is being observed, are particularly relevant in this regard. In principle, situational awareness makes it possible for an AI system to generalise from human feedback by behaving in the desired way only while oversight mechanisms are in place (605, 606, 620, 621). By analogy, because trained animals have some degree of situational awareness, they may generalise from feedback by behaving well only when someone will notice (622). For example, a dog that receives negative feedback for jumping on a sofa may learn to avoid jumping on the sofa only when its owner is at home. This kind of misgeneralisation, leading to ‘deceptive alignment’, will become at least a theoretical possibility if AI systems become sufficiently capable. However, available empirical evidence has not yet shed much light on how likely this kind of misgeneralisation would be in practice.

Beyond empirical studies, some researchers believe that mathematical models support concerns about misalignment and control-undermining behaviour in future AI systems. Some mathematical models suggest that – for sufficiently capable goal-directed AI systems – most possible ways to generalise from training inputs would lead an AI system to engage in control-undermining or otherwise ‘power-seeking’ behaviour (623*). A number of papers include closely related results (624, 625, 626, 627). Although these results are technical in nature, they can also be explained more informally. The core intuition behind these results is that most goals are harder to reliably achieve while under any overseer’s control, since the overseer could potentially interfere with the system’s pursuit of the goal. This incentivises the system to evade the overseer’s control. One researcher has illustrated this point by noting that a hypothetical AI system with the sole goal of fetching coffee would have an incentive to make it difficult for its overseer to shut it off: “You can’t fetch the coffee when you’re dead” (585). Ultimately, the mathematical models suggest that, if a training process leads a sufficiently capable AI system to develop the ‘wrong goals’, then these goals will disproportionately lead to control-undermining behaviour.

AI causes unemployment

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

 

General-purpose AI is likely to transform a range of jobs and displace workers, though the magnitude and timing of these effects remain uncertain. Research across several countries suggests that general-purpose AI capabilities are relevant to worker tasks in a large portion of all jobs (637*, 638, 639). One study estimated that in advanced economies 60% of current jobs could be affected by today’s general-purpose AI systems (640). In emerging economies, this estimated share is lower but still substantial at 40% (640). There is also some evidence that these effects may be gendered. One study estimated that women are more vulnerable to general-purpose AI automation globally, with twice the percentage of all women’s jobs at risk compared to men’s jobs (639). Impacts will vary across affected jobs but are likely to include task automation, boosted worker productivity and earnings, the creation of new tasks and jobs, changes in the skills needed for various occupations, and wage declines or job loss (641, 642, 643, 644, 645). Some economists believe that widespread labour automation and wage declines from general-purpose AI are possible in the next ten years (646, 647). Others do not think that a step-change in AI-related automation and productivity growth is imminent (648). These disagreements largely depend on economists’ expectations about the speed of future AI capability advances, the extent to which general-purpose AI could be capable of automating labour, and the pace at which automation could play out in the economy. General-purpose AI differs from previous technological changes due to its potential to automate complex cognitive tasks across many sectors of the economy. Unlike labour-saving innovations of past centuries that primarily automated physical tasks or routine computing tasks, general-purpose AI can be applied to a wide range of complex cognitive tasks across multiple domains, ranging from mathematics (649) to computer programming (650) to professional writing (651). While automation has historically tended to raise average wages in the long run without substantially decreasing employment in a lasting way, some researchers believe that past a certain level of general-purpose AI capabilities, automation may ultimately drive down average wages or employment rates, potentially reducing or even largely eliminating the availability of work (646, 652, 653). These claims are controversial, however, and there is considerable uncertainty around how general-purpose AI will ultimately affect labour markets. Despite this uncertainty, the combined breadth of potential labour market impacts and the speed at which they may unfold present novel challenges for workers, employers, and policymakers (654, 655*). Understanding these labour market risks is crucial, among other reasons, given the right to work established in Article 23(1) of the Universal Declaration of Human Rights (272). Core questions about general-purpose AI’s labour market impacts include which sectors will be most impacted by automation, how quickly automation will be implemented in the economy, and whether general-purpose AI will increase or decrease earnings inequality within and across countries. The magnitude of general-purpose AI’s impact on labour markets will in large part depend on how quickly its capabilities improve. Current general-purpose AI systems can already perform many cognitive tasks, but often require human oversight and correction (see 1.2. Current capabilities).
The wide range of projections regarding the progress of future general-purpose AI (see 1.3. Capabilities in coming years) highlights the uncertainty surrounding how soon these systems might reliably perform complex tasks with minimal supervision. If general-purpose AI systems improve gradually over multiple decades, their effects on wages are more likely to be incremental. Rapid improvements in reliability and autonomy could cause more harmful disruption within a decade, including sudden wage declines and involuntary job transitions (646). Slower progress would give workers and policymakers more time to adapt and shape general-purpose AI’s impact on the labour market. However, the pace of general-purpose AI adoption will also significantly affect how quickly labour markets change, even in scenarios where capabilities improve substantially. If general-purpose AI systems can boost productivity, there will be economic pressure to adopt them quickly, especially if costs to use general-purpose AI continue to fall (see 1.3. Capabilities in coming years). However, integrating general-purpose AI across the economy is likely to require complex system-wide changes (656). Previous technological changes suggest that adopting and integrating new automation technology can take decades (657), and cost barriers may slow adoption initially. For example, one study estimates that only 23% of potentially automatable vision tasks would currently be cost-effective for businesses to automate with computer vision technology (658). Concerns about general-purpose AI’s reliability in high-stakes domains can also slow adoption (659). Regulatory action or preferences for human-produced goods are other factors that could at least initially dampen AI’s labour market impacts, even if general-purpose AI capabilities quickly surpass human capabilities on many tasks (660). The mix of adoption pressures and barriers makes predicting the pace of labour market transformation particularly complex for policymakers. However, early evidence suggests that, at least by some measures, general-purpose AI is being adopted faster than the internet or personal computer (661). Productivity gains from general-purpose AI adoption are likely to lead to mixed effects on wages across different sectors, increasing wages for some workers while decreasing wages for others. In occupations where general-purpose AI complements human labour, it can increase wages through three main mechanisms. First, general-purpose AI tools can directly augment human productivity, allowing workers to accomplish more in less time (113, 662). If demand for worker output rises as workers become more productive, this added productivity could boost wages for workers using general-purpose AI who now experience increased demand for their work. Second, general-purpose AI can boost wages by driving economic growth and boosting demand for labour in tasks that are not yet automated (663, 664). Third, general-purpose AI can lead to the creation of entirely new tasks and occupations for workers to perform (641, 644, 664). However, general-purpose AI may also exert downward pressure on wages for workers in certain occupations. As general-purpose AI increases the supply of certain skills in the labour market, it may reduce demand for humans with those same skills. Workers specialising in tasks that can be automated by general-purpose AI may therefore face decreased wages or job loss (643).
For example, one study found that four months after ChatGPT was released, it had caused a 2% drop in the number of writing jobs posted on an online labour market and a 5.2% drop in monthly earnings for writers on the platform (645). The impact on wages in a given sector largely depends on how much additional demand exists for that sector’s services when costs fall due to general-purpose AI-driven productivity gains. Furthermore, the share of any AI-driven profits that are captured by workers will depend on factors such as the market structures and labour policies in affected industries, which vary greatly across countries. General-purpose AI will likely have the most significant near-term impact on jobs that consist mainly of cognitive tasks. Several studies show that general-purpose AI capabilities overlap with the capabilities needed to perform tasks in a wide range of jobs, with cognitive tasks most likely to be impacted (637*, 640, 665, 666). Research has also found that general-purpose AI provides large productivity gains for workers performing many kinds of cognitive tasks. This includes work in occupations such as strategy consulting (667), legal work (668), professional writing (651), computer programming (113), and others. For example, customer service agents received an average productivity boost of 14% from using general-purpose AI (662). Additionally, software developers were found to perform an illustrative coding task 55.8% faster when they had access to a general-purpose AI programming assistant (114*). Sectors that rely heavily on cognitive tasks, such as Information, Education, and the Professional, Scientific, and Technical Services sector, are also adopting AI at higher rates, suggesting that workers in these industries are poised to be most impacted by general-purpose AI in the near term (669). AI agents have the potential to affect workers more significantly than general-purpose AI systems that require significant human oversight. ‘AI agents’ are general-purpose AI systems that can accomplish multi-step tasks in pursuit of a high-level goal with little or no human oversight. This means that agents are able to chain together multiple complex tasks, potentially automating entire workflows rather than just individual tasks (670). By removing the need for human involvement in long sequences of work, AI agents could perform tasks and projects more cheaply than general-purpose AI systems that require more human oversight (671, 672). This is likely to incentivise increased rates of adoption of agents for the purposes of automation in economically competitive environments (671, 673). The resulting acceleration in automation could cause more rapid disruption to skill demands and wages across multiple sectors (670), giving policymakers less time to implement policy measures that strengthen worker resilience. Involuntary job loss can cause long-lasting and severe harms for affected workers. Studies show that displaced workers experience sharp drops in earnings and consumption immediately after being displaced, with earnings deficits persisting for years afterward (674, 675). Estimates of wage declines even after re-employment range from 5%–30% for as long as 20 years after displacement (676, 677, 678, 679).

Unemployment causes poverty and death

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

Involuntary job loss can also significantly affect physical health, with evidence suggesting that displacement increases mortality risk by 50–100% within the year after separation and by 10–15% annually for the next 20 years (680). Studies also link job loss to higher rates of depression (681), suicide (682), alcohol-related disease (682), and negative impacts on children’s educational attainment (683).

General-purpose AI will increase inequality

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

General-purpose AI could increase income inequality within countries by providing greater productivity boosts to high earners, but impacts are likely to vary across countries. Over the last several decades, automation of routine jobs increased wage inequality in the US by displacing middle-wage workers from jobs where they previously had a comparative advantage (688, 689, 690). For example, one study estimates that 50–70% of the increase in US wage inequality over the last four decades can be explained by relative wage declines of workers specialised in routine tasks in industries that experienced high levels of automation (688). General-purpose AI could compete with human workers in a similar fashion, potentially depressing wages for some workers (691, 692) while being most likely to boost the productivity of those who are already in relatively high-income occupations (see Figure 2.6) (637*). One simulation suggests that AI could increase wage inequality between high- and low-income occupations by 10% within a decade in advanced economies (640). Across many types of cognitive tasks, however, there is evidence that at the current level of model capabilities, those with less experience or more elementary skill sets often get the largest productivity boosts from using general-purpose AI (114*, 651, 662, 667, 668). This suggests that within cognitive-task oriented occupations, lesser paid workers could actually get a larger boost than high earners and wage inequality within those occupations could shrink (693). How these countervailing effects will play out in the economy is uncertain and is likely to vary across countries, sectors, and occupations. [Figure 2.6: Large Language Models (LLMs) have an unequal economic impact on different parts of the income distribution. Exposure is highest for worker tasks at the upper end of annual wages, peaking at approximately $90,000/year in the US, while low and middle incomes are significantly less exposed. Here ‘exposure’ signifies the potential for productivity gains from AI, which can manifest in worker augmentation and wage boosts or in automation and wage declines, depending on a variety of other factors. The figure plots the share of tasks within occupations that are exposed to LLMs and LLM-powered software against the median annual wage for the occupation, based on human ratings. Source: Eloundou et al., 2024 (637*).] General-purpose AI-driven labour automation is likely to exacerbate inequality by reducing the share of all income that goes to workers relative to capital owners. Globally, labour’s share of income has fallen by roughly six percentage points between 1980 and 2022 (694). Typically, 10% of all earners receive the majority of capital income (695, 696). If AI automates a significant share of labour, then these trends could intensify by both reducing work opportunities for wage earners and by increasing the returns to capital ownership (697, 698). Additionally, evidence suggests that general-purpose AI can aid the creation of large ‘superstar’ firms that capture a large share of economic profits, which would further increase capital-labour inequality (699).

AI will increase global inequality

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

 

General-purpose AI technology is likely to exacerbate global inequality if primarily adopted by high-income countries (HICs). HICs have a higher share of the cognitive task-oriented jobs that are most exposed to general-purpose AI impacts (640). These countries have stronger digital infrastructure, skilled workforces, and more developed innovation ecosystems (700) (see 2.3.2. Global AI R&D divide). This positions them to capture general-purpose AI productivity gains more rapidly than emerging markets and developing economies. This would contribute to divergent income growth trajectories and a widening gap between HICs and low- and middle-income countries (LMICs) (701). If the most advanced, labour-automating AI is used by companies in HICs, this could also attract additional capital investment to those countries and further drive an economic divergence between high- and low-income regions (702). Additionally, as firms in advanced economies adopt general-purpose AI, they may find it more cost-effective to automate production domestically rather than offshore work, eroding a traditional development pathway for developing economies that export labour-intensive services (703). One study suggests this dynamic may be most likely to play out in countries with a large share of the workforce in outsourced IT services such as customer service, copywriting, and digital gig-economy jobs (704). However, the precise impact on labour markets in developing economies remains unclear. On the one hand, they could face a double challenge of losing existing jobs to automation while finding it harder to attract new investment, as labour cost advantages become less relevant. On the other hand, if general-purpose AI is widely adopted in developing economies, it could provide productivity boosts for some skilled workers (662, 705, 706), potentially creating opportunities for these workers to compete for remote work with higher-paid counterparts in HICs. Since the publication of the Interim Report, new evidence suggests that rates of general-purpose AI adoption by individuals may be faster than for previous technologies such as the internet or personal computers, though the pace of adoption by businesses varies widely by sector (see Figure 2.7) (661). For example, a recent survey in the US found that more than 24% of workers use generative AI at least once a week, and one in nine use it daily at work (661). Business adoption rates vary significantly across sectors (707). For example, in the US, approximately 18.1% of businesses in the Information sector report using AI (broadly defined), while only 1.4% in Construction and Agriculture do (669). For firms that report using AI, 27% report replacing worker tasks, while only 5% report employment changes due to AI, more than half of which are employment increases rather than decreases (708). Current evidence on general-purpose AI adoption rates is constrained by limited international data collection, particularly outside the US, though one survey of over 15,000 workers across 16 countries found that 55% of respondents use generative AI at least once a week in their work (709). Across the globe, a large gender gap exists in both adoption and potential labour market impacts from general-purpose AI. For example, a recent meta-analysis of ten studies from various countries suggests that women are 24.6% less likely to use generative AI than men (710).

AI has reached expert-level performance in reasoning and programming

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

Companies developing general-purpose AI are working to navigate these potential bottlenecks. Since the publication of the Interim Report (May 2024), general-purpose AI has reached expert-level performance in some tests and competitions for scientific reasoning and programming, and companies have been making large efforts to develop autonomous AI agents. Advances in science and programming have been driven by inference scaling techniques such as writing long ‘chains of thought’. New studies suggest that further scaling such approaches, for instance allowing models to analyse problems by writing even longer chains of thought than today’s models, could lead to further advances in domains where reasoning matters more, such as science, software engineering, and planning. In addition to this trend, companies are making large efforts to develop more advanced general-purpose AI agents, which can plan and act autonomously to work towards a given goal. Finally, the market price of using general-purpose AI of a given capability level has dropped sharply, making this technology more broadly accessible and widely used.
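
Illustration (not from the report): one crude form of inference-time scaling is repeated sampling rather than the chain-of-thought lengthening described above. If each independent attempt at a hard problem succeeds with some probability, drawing more attempts and selecting among them raises the chance that at least one is correct. The 10% per-attempt success rate below is an arbitrary illustrative figure.

```python
# Probability that at least one of k independent attempts solves a problem,
# given a per-attempt success probability p (a toy model of inference-time scaling).
def pass_at_k(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

for k in (1, 4, 16, 64):
    print(f"k={k:>2}: {pass_at_k(0.10, k):.2f}")
# k= 1: 0.10   k= 4: 0.34   k=16: 0.81   k=64: 1.00
```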

AI increases greenhouse gas emissions

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

General-purpose AI is a moderate but rapidly growing contributor to global environmental impacts through energy use and greenhouse gas (GHG) emissions. Current estimates indicate that data centres and data transmission account for an estimated 1% of global energy-related GHG emissions, with AI consuming 10–28% of data centre energy capacity. AI energy demand is expected to grow substantially by 2026, with some estimates projecting a doubling or more, driven primarily by general-purpose AI systems such as language models. ● Recent advances in general-purpose AI capabilities have been largely driven by a marked increase in the amount of computation that goes into developing and using AI models, which uses more energy. While AI firms are increasingly powering their data centre operations with renewable energy, a significant portion of AI training globally still relies on high-carbon energy sources such as coal or natural gas, leading to the aforementioned emissions and contributing to climate change. ● AI development and deployment also has significant environmental impacts through water and resource consumption, and through AI applications that can either harm or benefit sustainability efforts. AI consumes large amounts of water for energy production, hardware manufacturing, and data centre cooling. All of these demands increase proportionally to AI development, use, and capability. AI can also be used to facilitate environmentally detrimental activities such as oil exploration, as well as in environmentally friendly applications with the potential to mitigate or help society adapt to climate change, such as optimising systems for energy production and transmission. ● Current mitigations include improving hardware, software, and algorithmic energy efficiency and shifting to carbon-free energy sources, but so far these strategies have been insufficient to curb GHG emissions. Increases in technology efficiency and uptake of renewable energy have not kept pace with increases in demand for energy: technology firms’ GHG emissions are often growing despite substantial efforts to meet net-zero carbon goals. Significant technological advances in general-purpose AI hardware or algorithms, or substantial shifts in electricity generation, storage and transmission, will be necessary to meet future demand without environmental impacts increasing at the same pace. ● Since the publication of the Interim Report (May 2024), there is additional evidence that the demand for energy to power AI workloads is significantly increasing. General-purpose AI developers reported new challenges in meeting their net-zero carbon pledges due to increased energy use stemming from developing and providing general-purpose AI models, with some reporting increased GHG emissions in 2023 compared to 2022. In response, some firms are turning to virtually carbon-free nuclear energy to power AI data centres.

Many ways GHG emissions increase

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

General-purpose AI requires significant energy to develop and use, with corresponding GHG emissions and impacts on the energy grid. For example, Meta estimates that the energy required to train their recent (July 2024) Llama 3 family of LLMs resulted in 11,380 tonnes of CO2 equivalent (tCO2e) emissions across the four released models (11*). These emissions are equivalent to the annual emissions from the energy use of 1,484 average US homes, or from 2,708 gasoline-powered passenger vehicles driven for one year (768). Google reports that training their open source Gemma 2 family of LLMs emitted 1,247.61 tCO2e (769*), but like most developers of general-purpose AI, they do not disclose the amount of energy or emissions required to power production models. Additional energy is required to power the data centres within which most general-purpose AI computation is performed, most notably for cooling. This additional energy overhead is typically quantified as power usage effectiveness (PUE), the ratio of a data centre’s total energy use to the energy used by its computing equipment; the optimal theoretical PUE, indicating zero energy overhead, is 1.0. The most efficient hyperscale data centres, including many of the data centres powering general-purpose AI, currently report a PUE of around 1.1, with the industry average hovering around 1.6 (770). Energy use also arises from data transmission across computer networks, which is required to communicate the inputs to and outputs of AI models between users’ devices, such as laptops and mobile phones, and the data centres where AI models are run. Approximately 260–360 TWh of energy was required to support global data transmission networks in 2022, a similar amount to that used to power data centres (240–340 TWh, excluding cryptocurrency mining, which consumed an additional 100–150 TWh) in that same year (771). Google, Meta, Amazon, and Microsoft alone, leaders in providing general-purpose AI and other cloud compute services, were collectively responsible for 69% of global data transmission traffic, representing a shift from previous years when the majority of data transmissions were attributed to public internet service providers (772). Although reporting often focuses on the energy cost of model training, there is strong evidence that a higher energy demand arises from models’ everyday use. Training and development correspond to a smaller number of high-energy activities, whereas deployment corresponds to a very large number of lower-energy uses (since each user query represents an energy cost) (739, 773, 774). While the most reliable estimates of energy use and GHG emissions due to general-purpose AI typically measure training costs, such as those cited above, available reports suggest a greater overall proportion of energy expenditure is due to use. In 2022, Google and Meta reported that the use of AI systems accounted for 60–70% of the energy associated with their AI workloads, compared to 0–40% for training and 10% for development (i.e. research and experimentation) (199, 206). Pre-processing and generating data for general-purpose AI also has significant energy costs. Meta further reported that data processing, i.e. filtering and converting data to the appropriate formats for training AI models, accounted for 30% of the energy footprint for a production model developed in 2021 for personalised recommendation and ranking, and that the overall computation devoted to data pre-processing increased by 3.2x from 2019 to 2021 (199).
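
Illustration: the PUE figures above convert an IT load into total facility energy by simple multiplication. The 1,000 MWh IT load below is a made-up number; the PUE values are the ones quoted in the passage.

```python
# Total data centre energy implied by an IT load and a PUE value
# (PUE = total facility energy / IT equipment energy).
def total_facility_energy(it_energy_mwh: float, pue: float) -> float:
    return it_energy_mwh * pue

it_load = 1000.0  # MWh used by servers and accelerators (hypothetical figure)
for pue in (1.0, 1.1, 1.6):  # theoretical optimum, efficient hyperscale, industry average
    total = total_facility_energy(it_load, pue)
    print(f"PUE {pue:.1f}: total {total:.0f} MWh, cooling/other overhead {total - it_load:.0f} MWh")
# PUE 1.0: total 1000 MWh, cooling/other overhead 0 MWh
# PUE 1.1: total 1100 MWh, cooling/other overhead 100 MWh
# PUE 1.6: total 1600 MWh, cooling/other overhead 600 MWh
```
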
Large general-purpose AI models give rise to more computation for data processing than narrow AI models. Not only do general-purpose AI models consume substantially more data than narrow models, but the models themselves are increasingly used to generate additional synthetic data during the training process and to pick the best synthetic data to train on (37*, 775, 776*). They are also used to generate data for training narrow AI models (777). However, recent figures providing similarly detailed attribution of general-purpose AI energy use are not available. The limited availability of broader data quantifying AI energy use has resulted in recent mandates, such as in the EU AI Act, focusing on model training despite the need for increased reporting and characterisation of the demands due to data processing and model use (778). Currently, the GHG emissions of general-purpose AI primarily arise from the carbon intensity of energy sources used to power the data centres and data transmission networks supporting their training and use. For example, renewable sources such as solar power emit far less GHG compared to fossil fuels (779*). While AI firms are increasingly powering their data centre operations with renewable energy (199, 206, 780*, 781), a significant portion of AI compute globally still relies on high-carbon sources such as coal or natural gas (779*). This results in significant GHG emissions. There are varying estimates of the total energy use and GHG emissions related to data centres and AI. According to estimates from the International Energy Agency (IEA), data centres and data transmission make up 1% of global GHG emissions related to energy use, and 0.6% of all GHG emissions (which also includes other GHG sources such as agriculture and industrial processes) (770, 771, 782). Between 10% to 28% of energy use in data centres stems from the use of AI in recent estimates, mostly due to generative AI (LLMs and image generation models) which makes up most of the energy use due to general-purpose AI (770, 771, 782). Combining these estimates would suggest that the use of AI is responsible for 0.1–0.28% of global GHG emissions attributed to energy use and 0.06–0.17% of all GHG emissions, but the exact percentages depend on how much of the energy used comes from carbon-intensive energy sources. The average carbon intensity of electricity powering data centres in the US is 548 grams CO2 per kWh, which is almost 50% higher than the US national average (783). Factors affecting GHG emissions include the location of data centres and time of day of energy use, data centre efficiency, and the efficiency of the hardware used. As a result, the actual GHG emissions for a given amount of energy consumed by AI can vary considerably. Since the publication of the Interim Report, there is additional evidence of increased energy demand to power data centres running AI workloads. As of October 2024, the IEA predicts that data centres will account for less than 10% of global electricity demand growth between 2023 and 2030 (784). Most of the overall growth in demand is predicted to arise from other growing sources of electricity demand, such as uptake of electric vehicles and increased needs for cooling buildings. However, data centre impacts are highly localised compared to other industries, leading to uneven distribution of the increased demand and disproportionately high impacts in certain areas (784). 
For example, data centres consumed over 20% of all electricity in Ireland in 2023 (785), and electricity use is growing in the US, home to more than half of global data centre capacity (786), for the first time in over a decade, driven in part by increased development and use of AI (787).
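
Arithmetic note: the “combining these estimates” step a few sentences above is simple multiplication of the quoted shares; the sketch below reproduces it with the figures from the passage (only the arithmetic is added here).

```python
# AI's share of global GHG emissions, using the shares quoted in the passage.
dc_share_of_energy_ghg = 0.01        # data centres + transmission: ~1% of energy-related GHG
dc_share_of_all_ghg = 0.006          # ~0.6% of all GHG emissions
ai_share_of_dc_energy = (0.10, 0.28) # AI's estimated share of data centre energy use

for ai_share in ai_share_of_dc_energy:
    print(f"AI at {ai_share:.0%} of data centre energy: "
          f"{dc_share_of_energy_ghg * ai_share:.2%} of energy-related GHG, "
          f"{dc_share_of_all_ghg * ai_share:.3%} of all GHG")
# AI at 10% of data centre energy: 0.10% of energy-related GHG, 0.060% of all GHG
# AI at 28% of data centre energy: 0.28% of energy-related GHG, 0.168% of all GHG
```

These figures match the 0.1–0.28% and 0.06–0.17% ranges quoted above (the upper bound rounds from 0.168%); actual emissions also depend on how carbon-intensive the electricity powering each data centre is.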

AI is increasingly being powered by nuclear energy

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

Technology firms are turning to nuclear power (which has its own complex benefits and risks) as a carbon-neutral energy source to power data centres, with multiple large tech firms signing deals with power providers to secure nuclear energy. In September 2024, Microsoft signed a deal that will re-open the Three Mile Island nuclear power plant in Pennsylvania, agreeing to purchase all of the plant’s generation capacity for the next 20 years, enough to power approximately 800,000 homes (788*). Amazon signed a similar deal in March to purchase up to 960 MW of nuclear power to supply a data centre campus for their Amazon Web Services (AWS) cloud platform (789), representing the first instance of data centre co-location with a nuclear power plant. However, in November the US Federal Energy Regulatory Commission rejected the transmission provider’s request to amend their interconnection service agreement to increase transmission to the data centre (790), casting some doubt on whether regulators will support such co-location moving forward. In October, Google announced an agreement to purchase nuclear energy from small modular reactors (SMRs), the world’s first corporate agreement of this kind, stating that they needed this new electricity source to “support AI technologies” (791*).

Efficiency gains do not offset increased use

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

General-purpose AI energy efficiency is improving rapidly, but not enough to stop the ongoing growth of emissions. Specialised AI hardware and other hardware efficiency improvements enhance the performance-per-watt of machine learning workloads over time (206). Moreover, new machine learning techniques and architectures can also help reduce energy consumption (206), as can improvements in the supporting software frameworks and algorithms (798, 799). Energy used per unit of computation has been reduced by an estimated 26% per year (144). However, current rates of efficiency improvement are insufficient to counter growing demand. Demand for computing power used for AI training, which has been increasing by a factor of approximately 4x each year, is so far significantly outpacing energy efficiency improvements (26). This mismatch is reflected in the fact that technology firms involved in the development and deployment of general-purpose AI report challenges in meeting environmental sustainability goals. Baidu reports that increased energy requirements due to the “rapid development of LLMs” are posing “severe challenges” to their development of green data centres (781), and Google similarly reports a 17% increase in data centre energy consumption in 2023 over 2022 and a 37% increase in GHG emissions due to energy use “despite considerable efforts and progress on carbon-free energy”. They attribute these increases to increased investment in AI (780*). Efficiency improvements alone have not negated the overall growth in energy use of AI and possibly further accelerate it because of ‘rebound effects’. Economists have found for previous technologies that improvements in energy efficiency tend to increase, rather than decrease, overall energy consumption by decreasing the cost per unit of work (800). Efficiency improvements may lead to greater energy consumption by making technologies such as general-purpose AI cheaper and more readily available, and increasing growth in the sector.
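
Back-of-envelope note: combining the two growth rates cited above shows why efficiency gains alone do not curb energy growth. The sketch assumes, for illustration, that both trends apply to the same training workloads.

```python
# Net growth in training energy when compute demand grows ~4x per year while
# energy per unit of computation falls ~26% per year (rates quoted in the passage).
compute_growth = 4.0     # ~4x more training compute per year (26)
efficiency_gain = 0.26   # ~26% less energy per unit of computation per year (144)

net_energy_growth = compute_growth * (1.0 - efficiency_gain)
print(f"net training-energy growth: ~{net_energy_growth:.1f}x per year")
# net training-energy growth: ~3.0x per year
```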

Other sources of AI carbon dioxide

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

In addition to GHG emissions due to energy use, general-purpose AI has other environmental impacts due to the physical systems and structures required for its development and use, which are even less well understood. The GHG emissions due to energy use discussed so far are typically referred to as operational emissions, and they currently contribute the highest proportion of emissions. The embodied carbon footprint of AI hardware, which includes emissions from manufacturing, transportation, the physical building infrastructure, and disposal, also contributes significant GHG emissions. Depending on the location and scenario, this can be up to 50% of a model’s total emissions (199). As operational energy efficiency improves, the embodied carbon footprint will become a larger proportion of the total carbon footprint (808). Intel reports that its Ocotillo campus generated over 200,000 tCO2e in 2023 from direct emissions alone (not including electricity) (809*), and is on track to generate over 300,000 tCO2e by the end of 2024, having consumed over 1 billion kWh energy in the first quarter of 2024 (809*). Estimating the current embodied carbon footprint of general-purpose AI poses a great challenge due to a lack of data from hardware manufacturers. This arises due to a combination of incentives, including manufacturers’ desire to protect intellectual property around proprietary manufacturing processes and the consolidation of expertise in manufacturing specialised AI hardware to a very limited number of firms, limiting knowledge access and transfer.

AI uses a lot of water

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

Water consumption is another emerging area of environmental risk from general-purpose AI. General-purpose AI development and use withdraws fresh water from local water systems, a portion of which is then consumed, primarily through evaporation. As with energy use, general-purpose AI water use also increases as models grow larger. General-purpose AI has both embodied and operational water requirements. Embodied water consumption comes from water use in the hardware manufacturing process, and operational water use primarily arises from energy production and from evaporative cooling systems in data centres. In energy production, water evaporates when used for cooling in nuclear and fossil-fuel combustion power plants and in hydroelectric dams. In data centres, computer hardware also produces significant waste energy in the form of heat, and must be cooled in order to optimise computational efficiency and longevity. The most effective and widespread methods for cooling hardware in data centres evaporate water. As the computation used for training and deploying general-purpose AI models increases, cooling demands increase, too, leading to higher water consumption. Water is also consumed during hardware manufacturing processes. In 2023, Intel’s water-efficient Ocotillo chip manufacturing plant in Arizona, which has earned the highest certification for water conservation from the Alliance for Water Stewardship, withdrew 10,561 million litres of water (90% fresh water) of which 1896 million litres were consumed (809*). Assuming an average household water use of 144 litres per day (810), this equates to over 200,000 households’ yearly water withdrawal. Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest semiconductor manufacturer and the main supplier of chips to AI hardware firms such as Nvidia, reports that as of 2023 their per-unit water consumption had increased by 25.2% since 2010, despite their goal to decrease usage by 2.7% over that period, and by 30% by 2030; this is despite increased water-saving measures resulting in TSMC conserving 33% more water year-over-year in 2023 (811). Water consumption by current models and the methodology to assess it are still subject to scientific debate, but some researchers predict that water consumption by AI could ramp up to trillions of litres by 2027 (199, 812). In the context of concerns around global freshwater scarcity, and without technological advances enabling emissions-efficient alternatives, AI’s water footprint might be a substantial threat to the environment and the human right to water (813). In response to congressional mandates, the US Department of Energy is currently working to assess current and near-future data centre energy and water consumption needs, with a report to be released by the end of 2024 (787). European data centre operators must report water consumption beginning in 2025 (814). 
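
Arithmetic note: the household equivalence quoted above follows directly from the cited figures; the sketch below reproduces it (all inputs, including the assumed 144 litres per day of household use, come from the passage).

```python
# Intel Ocotillo water withdrawal in 2023, expressed as household-years of water use.
withdrawal_litres = 10_561e6      # total withdrawal (90% fresh water)
consumed_litres = 1_896e6         # portion consumed, mostly through evaporation
household_litres_per_day = 144    # average household water use (810)

households_equivalent = withdrawal_litres / (household_litres_per_day * 365)
print(f"equivalent to ~{households_equivalent:,.0f} households' annual water withdrawal")
print(f"share of withdrawal consumed: {consumed_litres / withdrawal_litres:.0%}")
# equivalent to ~200,932 households' annual water withdrawal
# share of withdrawal consumed: 18%
```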

AI privacy threats

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

General-purpose AI systems can cause or contribute to violations of user privacy. Violations can occur inadvertently during the training or usage of AI systems, for example through unauthorised processing of personal data or leaking health records used in training. But violations can also happen deliberately through the use of general-purpose AI by malicious actors; for example, if they use AI to infer private facts or violate security. ● General-purpose AI sometimes leaks sensitive information acquired during training or while interacting with users. Sensitive information that was in the training data can leak unintentionally when a user interacts with the model. In addition, when users share sensitive information with the model to achieve more personalised responses, this information can also leak or be exposed to unauthorised third parties. ● Malicious actors can use general-purpose AI to aid in the violation of privacy. AI systems can facilitate more efficient and effective searches for sensitive data and can infer and extract information about specific individuals from large amounts of data. This is further exacerbated by the cybersecurity risks created by general-purpose AI systems (see 2.1.3. Cyber offence). ● Since the publication of the Interim Report (May 2024), people increasingly use general-purpose AI in sensitive contexts such as healthcare or workplace monitoring. This creates new privacy risks which so far, however, have not materialised at scale. In addition, researchers are trying to remove sensitive information from training data and build secure deployment tools. ● For policymakers, it remains hard to know the scale or scope of privacy violations. Assessing the extent of privacy violations from general-purpose AI is extremely challenging, as many harms occur unintentionally or without the knowledge of the affected individuals. Even for documented leaks, it can be hard to identify their source, as data is often handled across multiple devices or in different parts of the supply chain

AI means IP violations

Yoshua Bengio et al, 2025 (January; following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN)), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

The use of vast amounts of data for training general-purpose AI models has caused concerns related to data rights and intellectual property. Data collection and content generation can implicate a variety of data rights laws, which vary across jurisdictions and may be under active litigation. Given the legal uncertainty around data collection practices, AI companies are sharing less information about the data they use. This opacity makes third-party AI safety research harder. ● AI content creation challenges traditional systems of data consent, compensation, and control. Intellectual property laws are designed to protect and promote creative expression and innovation. General-purpose AI both learns from and can create works of creative expression. ● Researchers are developing tooling and methods to mitigate the risks of potential copyright infringement and other data rights laws, but these remain unreliable. There are also limited tools to source and filter training data at scale according to their licences, affirmative consent from the creators, or other legal and ethical criteria. ● Since the Interim Report (May 2024), data rights holders have been rapidly restricting access to their data. This prevents AI developers from using this data to train their models, but also hinders access to the data for research, social good, or non-AI purposes. ● Policymakers face the challenge of enabling responsible and legally compliant data access without discouraging data sharing and innovation. Technical tools to evaluate, trace, filter, and automatically license data could make this much easier, but current tools are not sufficiently scalable and effective. General-purpose AI trains on large data collections, which can implicate a variety of data rights laws, including intellectual property, privacy, trademarks, and image/likeness rights. General-purpose AI is trained on large data collections, often sourced in part from the internet. They can be used to generate text, images, audio, or videos that can sometimes emulate the content they were trained on. In both the case of data collection (inputs) and data generation (outputs), these systems may implicate various data rights and laws (see Figure 2.12). For instance, if AI training data contains personally identifiable information, it can engender privacy concerns. Similarly, web-sourced training datasets frequently contain copyrighted material, implicating copyright and intellectual property laws (836, 856). If brands are captured in the data, trademarks may also be implicated. In some jurisdictions, famous individuals featured in training data may have likeness rights (857). The laws governing these data rights may also vary across jurisdictions and, especially in the case of AI, some are actively being litigated. Copyright laws aim to protect creative expression; general-purpose AI both learns from and generates content resembling creative expression. Copyright laws aim to protect and encourage written and creative expression (858, 859), primarily in the forms of literary works (including software), visual arts, music, sound recordings, and audio-visual works. They grant the creators of original works the exclusive right to copy, distribute, adapt, and perform their own work. 
The unauthorised third-party use of copyrighted data is permissible in certain jurisdictions and circumstances: for instance on the basis of the ‘fair use’ exception in the US (860), by the ‘text and data mining’ exception in the EU (861), by the amended Copyright Act in Japan (862), under Israeli copyright law (863), and by the Copyright Act 2021 in Singapore (864). In each jurisdiction there are different laws related to (a) the permissibility of data collection practices (e.g. data scraping), (b) the use of the data (e.g. for training AI, commercial, or non-commercial systems), and (c) whether model outputs that appear similar to copyrighted material are infringing. [Accompanying figure in the report: using larger quantities of (especially web-sourced) training data improves model performance but includes more data with intellectual property concerns; publishers, news outlets, and creative workers may wish to opt out their data or bring legal action; websites may impose restrictions on data crawling; developers will not disclose what data they use, to avoid legal risks; and non-AI data crawling applications may also have impaired access.] In the US, these questions are actively litigated (865, 866, 867, 868, 869), for example in cases such as the New York Times versus OpenAI and Microsoft. Many issues related to dataset creation and use across the dataset’s lifecycle make copyright concerns for training AI models very complicated (870). Relevant questions include whether datasets were assembled specifically for machine learning or originally for other purposes (871), whether the potential infringement applies to model inputs and/or model outputs (872, 873, 874), and which jurisdiction the case falls under, among others (481). There are also questions around who is liable for infringement or harmful model outputs (developers, users, or other actors) (875). While developers can use technical strategies to mitigate the risks of copyright infringement from model outputs, these risks are difficult to eliminate entirely (876, 877). General-purpose AI systems may impact creative and publisher economies. As general-purpose AI systems become more capable, they increasingly have the potential to disrupt labour markets, and in particular creative industries (662, 707) (also see 2.3.1. Labour market risks). Pending legal decisions regarding copyright infringement in the AI training phase may affect general-purpose AI developers’ ability to build powerful and performant models by limiting their access to training data (836, 856, 878). They may also impact data creators’ ability to limit the uses of their data, which may disincentivise creative expression. For instance, news publishers and artists have voiced concerns that their customers might use generative AI systems to provide them with similar content. In news, art and entertainment domains, generative AI can often produce paraphrased, abstracted, or summarised versions of the content it has trained on. If users access news through generative AI summaries rather than from media sites, this could reduce subscription and advertising revenues for the original publishers. Reduced subscriptions can equate to copyright damages. Legal uncertainty around data collection practices has disincentivised transparency around what data developers of general-purpose AI have collected or used, making third-party AI safety research harder.
Independent AI researchers can more easily understand the potential risks and harms of a general-purpose AI system if there is transparency about the data it was trained on (879). For instance, it is much more tractable to quantify the risk of a model generating biased, copyrighted, or private information if the researcher knows what data sources it was trained on. However, this type of transparency is often lacking for major general-purpose AI developers (880). Fear of legal risk, especially over copyright infringement, disincentivises AI developers from disclosing their training data (881). The infrastructure to source and filter for legally permissible data is under-developed, making it hard for developers to comply with copyright law. The permissibility of using copyrighted works as part of training data without appropriate authorisation is an active area of litigation. Tools to source and identify available data without copyright concerns are limited. For instance, recent work shows that around 60% of popular datasets in the most widely used openly accessible dataset repositories have incorrect or missing licence information (481). Similarly, there are limitations to the current tools for discerning copyright-free data in web scrapes (856, 878). However, practitioners are developing new standards for data documentation and new protocols for data creators to signal their consent for use in training AI models (882, 883). Since the publication of the Interim Report, the legal and technical struggles over data have escalated, and research suggests that it remains difficult to completely prevent models from generating copyrighted material using technical mitigations. Many organisations, including AI developers, use automatic bots called ‘web crawlers’ that navigate the web and copy content. Websites often want their content to be read by crawlers that will direct human traffic to them (such as search engine crawlers) but left alone by crawlers that will copy their data to train competing tools (e.g. AI models that will displace their traffic). Websites can indicate their preferences to crawlers in their code, including if and by whom they would like to be crawled. They can also employ technologies that attempt to identify and block crawlers. Since May 2024, evidence has emerged that websites have erected more barriers to the crawlers from AI developers (836, 884, 885). These measures are triggered by uncertainty about whether AI developers’ crawlers will respect websites’ preference signals. In search of solutions, the European AI Office is developing a transparency reporting Code of Practice for General-Purpose AI developers (886), and the US National Institute of Standards and Technology (NIST) has released an AI Risk Management Framework (887). Additionally, a growing body of work studies researchers’ capacity to excise information from a trained model, or to detect what a model was trained on. However, these methods, as applied to general-purpose AI models, still have fundamental challenges that may not be easily overcome in the near future (831, 832, 888, 889, 890, 891, 892). Rising barriers to accessing web content may inhibit data collection, including to non-AI applications. Rising restrictions on web crawling result in the highest quality, well-maintained data being less available, especially to less well-financed organisations (836, 856, 878). 
Declining data availability may have ramifications for competition, for training data diversity and factuality, and for underserved regions’ ability to develop their own competitive AI applications. While large AI companies may be able to afford data licences or simply develop stronger crawlers to access restricted data, rising restrictions will have negative externalities for the other (including many beneficial) uses of web crawlers. Many industries depend on crawlers: web search, product/price catalogues, market research, advertising, web archives, academic research, accessibility tools, and even security applications. These industries’ access to data is increasingly impaired due to obstacles erected to prevent large AI developers from using data for training. Lastly, these crawler challenges may persist even when copyright litigation is resolved. Tools to discover, investigate, and automatically license data are lacking. Standardised tooling is necessary for data creators and users to evaluate a dataset’s restrictions or limitations, to estimate the data’s value, to automatically license it at scale, and to trace its downstream use (465, 481, 856, 878). Without these tools, the market so far has relied on ad hoc, custom contracts, without a clear licensing process for smaller data creators. Coupled with the existing lack of data transparency from individual developers, these shortcomings inhibit the development of an efficient and structured data market. In essence, the web is a relatively messy, unstructured source of data. Without better tools to organise it, developers will have difficulty avoiding training on data that may engender legal or ethical issues. Methods that mitigate the risk of copyright infringement in models are underdeveloped and require more research. Large models can memorise or recall some of the data they trained on, allowing them to reproduce it when prompted. For instance, sections from the Harry Potter books are memorised by common language models (893*). This is desirable in some cases (e.g. recalling facts), but undesirable in others, as it can lead to models generating and re-distributing copyrighted material, private information, or sensitive content found on the web. There are many approaches to mitigating this risk (see also 3.4.3. Technical methods for privacy). One is to detect whether a model was trained on or has memorised certain undesirable content that it could then regenerate (831, 832, 888, 889). This is known as ‘memorisation research’ or ‘membership inference research’. Researchers may also investigate whether model outputs can be attributed directly to certain training data points (877, 890). Another method is to use filters that detect when a model is generating content that is substantively similar to copyrighted material. However, it remains challenging, conceptually and technically, to test whether generations are substantially similar to copyrighted content that the model was trained on (891, 894). Lastly, researchers are exploring methods to remove information that models have already learned, called ‘machine unlearning’ (821, 895, 896, 897, 898). However, it may not be a viable, robust, or practical solution in the long run (892, 897, 898).
For instance, machine unlearning often does not succeed in fully removing the targeted information from a model, and in the process it can distort the model’s other capabilities in unforeseen ways – which makes it unappealing to commercial AI developers (892, 895, 897, 898). Policymakers are faced
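The crawler preference signals discussed in the card above are typically expressed in a site’s robots.txt file. As a minimal illustration (my own sketch, not part of the report), the snippet below uses Python’s standard urllib.robotparser to show how a hypothetical site could block an AI-training crawler such as OpenAI’s GPTBot while still allowing a search crawler; the robots.txt content and the example URL are invented for the example, and robots.txt remains purely advisory, which is exactly the uncertainty the report describes.

from urllib import robotparser

# Hypothetical robots.txt expressing per-crawler preferences:
# disallow an AI-training crawler, allow a search-engine crawler.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in ("GPTBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example.com/article.html")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
# Prints: GPTBot: blocked, Googlebot: allowed. Compliance is voluntary,
# which is why sites increasingly layer crawler-blocking technologies on top.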

Open source models can fit on a USB stick

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

How an AI model is released to the public is an important factor in evaluating the risks it poses. There is a spectrum of model release options, from fully closed to fully open, all of which involve trade-offs between risks and benefits. Open-weight models – those with weights made publicly available for download – represent one key point on this spectrum.
● Open-weight models facilitate research and innovation while also enabling malicious uses and perpetuating some flaws. Open weights allow global research communities to both advance capabilities and address model flaws by providing them with direct access to a critical AI component that is prohibitively expensive for most actors to develop independently. However, the open release of model weights could also pose risks of facilitating malicious or misguided use or perpetuating model flaws and biases. Once model weights are available for public download, there is no way to implement a wholesale rollback of all existing copies of the model. This is because various actors will have made their own copies. Even if retracted from hosting platforms, existing downloaded versions are easy to distribute offline. For example, state-of-the-art models such as Llama-3.1-405B can fit on a USB stick.
● Since the Interim Report (May 2024), high-level consensus has emerged that risks posed by greater AI openness should be evaluated in terms of ‘marginal risk’. This refers to the additional risk associated with releasing AI openly, compared to risks posed by closed models or existing technology.
● Whether a model is open or closed, risk mitigation approaches need to be implemented throughout the AI life cycle, including during data collection, model pre-training, fine-tuning, and post-release measures. Using multiple mitigations can bolster imperfect interventions.
● A key challenge for policymakers centres on the evidence gaps surrounding the potential for both positive and negative impacts of open weight release on market concentration and competition. The effects will likely vary depending on how openly the model is released (e.g. whether release is under an open source licence), on the level of market being discussed (i.e. competition between general-purpose AI developers vs. downstream application developers), and based on the size of the gap between competitors.
● Another key challenge for policymakers is in recognising the technical limitations of certain policy interventions for open models. For example, requirements such as robust watermarking for open-weight generative AI models are currently infeasible, as there are technical limitations to implementing watermarks that cannot be removed.
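The “fits on a USB stick” point in the first bullet is easy to sanity-check with back-of-the-envelope arithmetic. The short calculation below is my own, not the report’s; it assumes roughly 405 billion parameters and estimates on-disk size at a few common numeric precisions.

# Rough storage estimate for a ~405B-parameter model (illustrative arithmetic).
PARAMS = 405e9  # approximate parameter count of Llama-3.1-405B

for label, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    size_gb = PARAMS * bytes_per_param / 1e9
    print(f"{label:>9}: ~{size_gb:,.0f} GB")

# fp16/bf16 -> ~810 GB, int8 -> ~405 GB, 4-bit -> ~203 GB.
# Even the half-precision weights fit comfortably on a 1-2 TB USB flash drive,
# which is why a wholesale rollback of downloaded copies is impractical.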

 

Open models reduce costs

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

There are benefits to greater AI openness, including facilitating innovation, improving AI safety and oversight, increasing accessibility, and allowing AI tools to be tailored to diverse needs. Training a general-purpose AI model (the process of producing model weights) is extremely expensive. For example, training Google’s Gemini model is estimated to have cost $191 million in compute costs alone (731). The cost of training compute for the most expensive single general-purpose AI model is projected to exceed $1 billion by 2027 (27). Training costs, therefore, present an insurmountable barrier for many actors (companies, academics, and states) to participating in the general-purpose AI marketplace and benefiting from AI applications.

Open models reduce proprietary control

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Openly releasing weights makes general-purpose AI more accessible to actors who might otherwise lack the resources to develop them independently. This reduces reliance on proprietary systems controlled by a few large technology companies (or potentially nation states), and allows developers to fine-tune existing general-purpose AI weights to serve more diverse needs. For example, developers from minority language groups can fine-tune open-weight models with specific language data sets to improve the model’s performance in that language. Models can also be fine-tuned more freely to perform better at specific tasks, such as writing professional legal texts, medical notes, or creative writing. Furthermore, greater openness enables a wider and more diverse community of developers and researchers to appraise models and identify and work to remedy vulnerabilities, which can help facilitate greater AI safety and accelerate beneficial AI innovation. In general, the more open a model is – including whether there is access to additional AI components beyond model weights, such as training data, code, documentation, and the compute infrastructure required to utilise these models – the greater the benefit for innovation and safety oversight.

Open models can be misused

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

 

Risks posed by open-weight models are largely related to enabling malicious or misguided use (899, 901, 902). General-purpose AI models are dual-use, meaning that they can be used for good or put to nefarious purposes. Open-model weights can potentially exacerbate misuse risks by allowing a wide range of actors who do not have the resources and knowledge to build a model on their own to leverage and augment existing capabilities for malicious purposes and without oversight. While both open-weight and closed models can have safeguards to refuse user requests, these safeguards are easier to remove for open models. For example, even if an open-weight model has safeguards built in, such as content filters or limited training data sets, access to model weights and inference code allows malicious actors to circumvent those safeguards (903). Furthermore, model vulnerabilities found in open models can also expose vulnerabilities in closed models (904*). Finally, with access to model weights, malicious actors can also fine-tune a model to optimise its performance for harmful applications (905, 906, 907). Potential malicious uses include harmful dual-use science applications, e.g. using AI to discover new chemical weapons (2.1.4. Biological and chemical attacks), cyberattacks (2.1.3. Cyber offence), and producing harmful fake content such as ‘deepfake’ sexual abuse material (2.1.1. Harm to individuals through fake content) and political fake news (2.1.2. Manipulation of public opinion). As noted below, releasing an open-weight model with the potential for malicious use is generally not reversible even when its risks are discovered later

Open models increase risks

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

There is also a risk of perpetuating flaws through open-weight releases, though openness also allows far more actors to perform deeper technical analysis to spot these flaws and biases. When general-purpose AI models are released openly and integrated into a multitude of downstream systems and applications, any unresolved model flaws – such as embedded biases and discrimination (2.2.2. Bias), vulnerabilities to adversarial attack (904*), or the ability to trick post-deployment monitoring systems by having learned how to ‘beat the test’ (2.2.3. Loss of control) – are distributed as well (902). The same challenge is true of closed, hosted, or API access models, but for these non-downloadable models the model host can universally roll out new model versions to fix vulnerabilities and flaws. For open-weight models, developers can make updated versions available, but there is no guarantee that downstream developers will adopt the updates. On the other hand, open-weight models can be scrutinised and tested more deeply by a larger number of researchers and downstream developers, which helps to identify and rectify more flaws in future releases (908).


Since the publication of the Interim Report, high-level consensus has emerged that risks posed by greater AI openness should be evaluated in terms of marginal risk (901, 909, 910). ‘Marginal risk’ refers to the added risk of releasing AI openly compared to risks posed by existing alternatives, such as closed models or other technologies (911). Studies that assess marginal risk are often called ‘uplift studies’. Early studies indicated, for instance, that chatbots from 2023 did not significantly heighten biosecurity risks compared to existing technologies: participants with internet access but no general-purpose AI were able to obtain bioweapon-related information at similar rates to participants with access to AI (393) (see 2.1.4. Biological and chemical attacks for further discussion on current AI and biorisk and 3.3. Risk identification and assessment for discussion of uplift studies and other risk assessments). On the other hand, several studies have shown that the creation of NCII and CSAM has increased significantly due to the open release of image-generation models such as Stable Diffusion (912*, 913) (see 2.1.1. Harm to individuals through fake content). Attending to marginal risk is important to ensure that interventions are proportional to the risk posed (393, 911). However, in order to conduct marginal risk analysis, companies or regulators must first establish a stable tolerable risk threshold (see 3.1. Risk management overview) against which marginal risk can be compared in order to avoid a ‘boiling frog’ scenario (910). Even if an incremental improvement in model capability increases marginal risk only slightly compared to pre-existing technology, layering minor marginal risk upon minor marginal risk indefinitely could add up to a substantial increase in risk over time and inadvertently lead to the release of an unacceptably dangerous technology. In contrast, improving societal resilience and enhancing defensive capabilities could help keep marginal risk low even as model capabilities and ‘uplift’ advance

Big companies take advantage of the open source models

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

A key evidence gap is around whether open-weight releases for general-purpose AI will have a positive or negative impact on competition and market concentration. Publicly releasing model weights can lead to both positive and negative impacts on competition, market concentration, and control (901, 910, 914, 915). In the short term, open-weight model sharing protected under an open source licence empowers smaller downstream developers by granting access to sophisticated technologies that they could not otherwise afford to create, fostering innovation and diversifying the application landscape. A €1 billion investment in many types of open source software (OSS) in the EU in 2018 is estimated to have yielded a €65–€95 billion economic impact (916). A similar impact might be expected from open-weight AI released under open source licence. However, this apparent democratisation of AI may also play a role in reinforcing the dominance and market concentration (2.3.3. Market concentration and single points of failure) among major players (914, 915). In the longer term, companies that release open-weight general-purpose AI models often see their frameworks become industry standards, shaping the direction of future developments, as is quickly becoming the case with the widespread use of Llama models in open development projects and industry application. These firms can then easily integrate advancements made by the community (for free) back into their own offerings, maintaining their competitive edge. Furthermore, the broader open source development ecosystem serves as a fertile recruiting ground, allowing companies to identify and attract skilled professionals who are already familiar with their technologies (914). It is likely that open-weight release will affect market concentration differently at different layers of the general-purpose AI ecosystem; it is more likely to increase competition and reduce market concentration in downstream application development, but at the upstream model development level, the direction of the effect is more uncertain (750). More research is needed to clarify the technical and economic dynamics at play

 Open models can’t be restricted

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Once model weights are available for public download, there is no way to implement a wholesale rollback of all existing copies. Internet hosting platforms such as GitHub and Hugging Face can remove models from their platforms, making it difficult for some actors to find downloadable copies and providing a sufficient barrier to many casual malicious users looking for an easy way to cause harm (917). However, a well-motivated actor would still be able to obtain an open-model copy despite the inconvenience; even large models are easy to distribute online and off. For example, state-of-the-art models such as Llama-3.1-405B can fit on a USB stick, underscoring the difficulty of controlling distribution once models are openly released.

Overlapping challenges kill AI safety

Yoshua Bengio et al, 2025 (January, Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN), International AI Safety Report, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf)

Several technical properties of general-purpose AI make risk mitigation for many risks associated with general-purpose AI difficult:
A. Autonomous general-purpose AI agents may increase risks: AI developers are making large efforts to create and deploy general-purpose AI systems that can more effectively act and plan in pursuit of goals. These agents are not well understood but require special attention from policymakers. They could enable malicious uses and risks of malfunctions, such as unreliability and loss of human control, by enabling more widespread applications with less human oversight.
B. The breadth of use cases complicates safety assurance: General-purpose AI systems are being used for many (often unanticipated) tasks in many contexts, making it hard to assure their safety across all relevant use cases, and potentially allowing companies to adapt their systems to work around regulations.
C. General-purpose AI developers understand little about how their models operate internally: Despite recent progress, developers and scientists cannot yet explain why these models create a given output, nor what function most of their internal components perform. This complicates safety assurance, and it is not yet possible to provide even approximate safety guarantees.
D. Harmful behaviours, including unintended goal-oriented behaviours, remain persistent: Despite gradual progress on identifying and removing harmful behaviours and capabilities from general-purpose AI systems, developers struggle to prevent them from exhibiting even well-known overtly harmful behaviours across foreseeable circumstances, such as providing instructions for criminal activities. Additionally, general-purpose AI systems can act in accordance with unintended goals that can be hard to predict and mitigate.
E. An ‘evaluation gap’ for safety persists: Despite ongoing progress, current risk assessment and evaluation methods for general-purpose AI systems are immature. Even if a model passes current risk evaluations, it can be unsafe. To develop evaluations needed in time to meet existing governance commitments, significant effort, time, resources, and access are needed.
F. System flaws can have a rapid global impact: When a single general-purpose AI system is widely used across sectors, problems or harmful behaviours can affect many users simultaneously. These impacts can manifest suddenly, such as through model updates or initial release, and can be practically irreversible.

Scaling risks catastrophe

F. System flaws can have a rapid global impact: because general-purpose AI systems can be shared rapidly and deployed in many sectors (like other software), a harmful system can rapidly have a global and sometimes irreversible impact. A small number of both proprietary and freely available open-weight general-purpose AI models currently reach many millions of users (see 2.3.3. Market concentration risks and single points of failure). Both proprietary and open-weight models can therefore have rapid and global impacts, although in different ways (911). A risk factor for open-weight models is that there is no practical way to roll back access if it is later discovered that a model has faults or capabilities that enable malicious use (902) (see 2.4. Impact of open-weight general-purpose AI models on AI risks, 2.1. Risks from malicious use).
However, a benefit of openly releasing model weights and other model components such as code and training data is that it also allows a much greater and more diverse number of practitioners to discover flaws, which can improve understanding of risks and possible mitigations (911). Developers or others can then repair faults and offer new and improved versions of the system. This cannot prevent deliberate malicious use (902, 1075), which could be a concern if a system poses additional risk (‘marginal risk’) compared to using alternatives (such as internet search). All of these factors are relevant to the specific possibility of rapid, widespread, and irreversible impacts of general-purpose AI models. However, even when model components are not made publicly accessible, the model’s capabilities still reach a wide user base across many sectors. For example, within two months of launch, the fully closed system ChatGPT had over 100 million users (1087).

Massive AI acceleration, close to AGI

Ezra Klein, 1-9, 25, New York Times, Now is the Time of Monsters, https://www.nytimes.com/2025/01/12/opinion/ai-climate-change-low-birth-rates.html

Behind the mundane gains in usefulness are startling gains in capacity. OpenAI debuted a quick succession of confusingly named models — GPT-4 was followed by GPT-4o, and then the chain-of-thought reasoning model o1, and then, oddly, came o3 — that have posted blistering improvements on a range of tests. The A.R.C.-A.G.I. test, for instance, was designed to compare the fluid intelligence of humans and A.I. systems, and proved hard for A.I. systems to mimic. By the middle of 2024, the highest score any A.I. model had posted was 5 percent. By the end of the year, the most powerful form of o3 scored 88 percent. It is solving mathematics problems that leading mathematicians thought would take years for an A.I. system to crack. It is better at writing computer code than all but the most celebrated human coders. The rate of improvement is startling. “What we’re witnessing isn’t just about benchmark scores or cost curves — it’s about a fundamental shift in how quickly A.I. capabilities advance,” Azeem Azhar writes in Exponential View, a newsletter focused on technology. Inside the A.I. labs, engineers believe they are getting closer to the grail of general intelligence, perhaps even superintelligence.

China competitive with US on AI

Ezra Klein, 1-9, 25, New York Times, Now is the Time of Monsters, https://www.nytimes.com/2025/01/12/opinion/ai-climate-change-low-birth-rates.html

Even if you believe we can get A.I. regulation right — and I’ve seen little that makes me optimistic — will we even know what the universe of models we need to regulate is if powerful systems can be built this cheaply and stored on personal computers? A Chinese A.I. firm recently released DeepSeek-V3, a model that appears to perform about as well as OpenAI’s 4o on some measures but was trained for under $6 million and is lightweight enough to be run locally on a computer. That DeepSeek is made by a Chinese firm reveals another pressure to race forward with these systems: China’s A.I. capabilities are real, and the contest for geopolitical superiority will supersede calls for caution or prudence. A.I. is breaking through at the same moment the international system is breaking down. The United States and China have drifted from uneasy cooperation to grim competition, and both intend to be prepared for war. Attaining A.I. superiority has emerged as central to both sides of the conflict.

AI means a massive increase in electricity consumption

Ezra Klein, 1-9, 25, New York Times, Now is the Time of Monsters, https://www.nytimes.com/2025/01/12/opinion/ai-climate-change-low-birth-rates.html

A report from the Lawrence Berkeley National Laboratory estimates that U.S. data centers went from 1.9 percent of total electrical consumption in 2018 to 4.4 percent in 2023 and will consume 6.7 percent to 12 percent in 2028. Microsoft alone intends to spend $80 billion on A.I. data centers this year. This elephantine increase in the energy that will be needed not just by the United States but by every country seeking to deploy serious A.I. capabilities comes as the world is slipping further behind its climate goals and warming seems to be outpacing even our models.
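The growth rates implied by those figures can be checked with a short calculation. The snippet below is my own arithmetic applied to the Lawrence Berkeley estimates cited above, not part of Klein’s column; it computes the compound annual growth in the data-center share of US electricity consumption.

# Implied compound annual growth in the US data-center share of electricity use
# (illustrative arithmetic based on the figures cited above).
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

share_2018, share_2023 = 1.9, 4.4   # percent of total US electricity consumption
low_2028, high_2028 = 6.7, 12.0     # projected range for 2028

print(f"2018-2023: {cagr(share_2018, share_2023, 5):.1%} per year")    # ~18.3%
print(f"2023-2028: {cagr(share_2023, low_2028, 5):.1%} to "
      f"{cagr(share_2023, high_2028, 5):.1%} per year")                # ~8.8% to ~22.2%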

Use of ALL energy resources key to AI dominance

Neil Chatterjee, 12-25, 24, NY Post, The end of the energy war: AI’s insatiable needs put every fuel source in play, https://nypost.com/2024/12/25/opinion/no-more-debate-ais-energy-needs-put-every-fuel-in-play/

For more than two decades, America has been trapped in a debilitating, partisan energy debate: Republicans eschew renewable energy and support the fossil-fuel industry — including initiatives like the Northern Access Pipeline through western New York — while Democrats rail against fossil fuels and champion green energy technologies. The old partisan boundaries of this debate are already changing. A growing number of Republicans support US clean-energy dominance, and some Democrats resist the common-sense permitting reform necessary to deploy it. Yet the extreme power needs of artificial intelligence will completely upend how we discuss energy in this country. AI demands a lot of power. It’s difficult to conceptualize how much electricity AI will need in just a few years. A recently published peer-reviewed analysis found that NVIDIA, the biggest player in AI hardware globally, will ship out an estimated 1.5 million AI server units by 2027. When those servers are up and running at full capacity, they will consume 85.4 terawatt hours of electricity annually — more electricity than the nations of Switzerland and Greece each consume in a year. Of course, not all of those NVIDIA servers will end up operating in the United States. But if America continues to lead in the AI race — a goal President-elect Donald Trump is almost guaranteed to pursue — the US will consume a significant share of the AI chip and server market. Our nation is blessed with abundant energy, yet our excess capacity still isn’t nearly able to handle the increased energy demands of AI. And AI isn’t alone. According to the International Energy Agency, cryptocurrency and data centers could also double their 2022 energy consumption levels by 2026. Further, increasing numbers of electric vehicles will require an even greater need for electricity production. All these realities mean the traditional energy debate is now fully moot. American energy consumption has been stable for decades. In a world of relatively predictable consumption, conservatives could argue that increasing domestic fossil-fuel production would make America energy-independent without investments in renewable energy. This argument was not unreasonable (apart from environmental factors). In 2023, fossil fuels accounted for roughly 84% of domestic energy production — almost enough to cover America’s entire energy consumption. Meanwhile, progressives could argue, with justification, that stable American consumption meant energy-efficiency initiatives and renewable-energy investment could wean the US off fossil fuels in favor of carbon-free domestic fuel sources. Gov. Hochul’s congestion tax may be an ineffective burden on the poorest New Yorkers, but no one can deny that real advances like higher fuel economy have improved air quality and urban livability. In effect, both sides debated the best way to fill up a fuel tank of roughly the same size, and they both made good points. But the energy demands of AI mean that fuel tank must get much bigger — and we lack the luxury of switching out one fuel for another. Renewable technology isn’t yet able to meet our current demands, and our natural-gas capacity is simply insufficient to generate the power we need. Our only option is to use every energy source at our disposal. And I mean everything: natural gas, solar, geothermal, hydropower, energy storage, nuclear, you name it. Aggressively expanding our capacity on every front to meet AI’s seemingly insatiable energy demands can seem like an impossible lift. 
But we have no other choice. AI isn’t just a fun technology that can crack jokes, edit essays, help with research projects and produce lifelike videos. It will be the critical national-security technology of the 21st century, transforming everything from cybersecurity to intelligence-gathering to autonomous weapons systems and more. If we don’t win the AI race, China will — and we don’t want to live in a world where communist China dominates AI.
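The per-server implication of the 85.4 terawatt-hour figure can be made concrete with a quick conversion. The snippet below is my own arithmetic, not from the op-ed; it divides the projected fleet-wide annual consumption across the estimated 1.5 million servers and converts the result to an average continuous power draw.

# Converting the cited fleet-wide projection into a per-server average
# (illustrative arithmetic, not from the article).
servers = 1.5e6          # estimated NVIDIA AI server units shipped by 2027
annual_twh = 85.4        # projected annual consumption at full capacity
hours_per_year = 8760

mwh_per_server = annual_twh * 1e6 / servers                # TWh -> MWh per server
avg_kw_per_server = mwh_per_server * 1e3 / hours_per_year  # MWh/yr -> average kW

print(f"~{mwh_per_server:.0f} MWh per server per year")        # ~57 MWh
print(f"~{avg_kw_per_server:.1f} kW average continuous draw")  # ~6.5 kW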

AI critical to drones and battlefield surveillance in an era of comprehensive conflict

Karlin, November-December 2024, Mara Karlin is a Professor at Johns Hopkins University’s School of Advanced International Studies, a Visiting Fellow at the Brookings Institution, and the author of The Inheritance: America’s Military After Two Decades of War. From 2021 to 2023, she served as U.S. Assistant Secretary of Defense for Strategy, Plans, and Capabilities, Foreign Affairs, The Return of Total War Understanding—and Preparing for—a New Era of Comprehensive Conflict, https://www.foreignaffairs.com/ukraine/return-total-war-karlin

In both Ukraine and the Middle East, what has become clear is that the relatively narrow scope that defined war during the post-9/11 era has dramatically widened. An era of limited war has ended; an age of comprehensive conflict has begun. Indeed, what the world is witnessing today is akin to what theorists in the past have called “total war,” in which combatants draw on vast resources, mobilize their societies, prioritize warfare over all other state activities, attack a broad variety of targets, and reshape their economies and those of other countries. But owing to new technologies and the deep links of the globalized economy, today’s wars are not merely a repeat of older conflicts. These developments should compel strategists and planners to rethink how fighting happens today and, crucially, how they should prepare for war going forward. Getting ready for the kind of war the United States would most likely face in the future might in fact help the country avoid such a war by strengthening its ability to deter its main rival. To deter an increasingly assertive China from taking steps that might lead to war with the United States, such as blockading or attacking Taiwan, Washington must convince Beijing that doing so wouldn’t be worth it and that China might not win the resulting war. But to make deterrence credible in an age of comprehensive conflict, the United States needs to show that it is prepared for a different kind of war—drawing on the lessons of today’s big wars to prevent an even bigger one tomorrow. THE CONTINUUM OF CONFLICT Just under a decade ago, there was a growing consensus among many experts about how conflict would reconfigure itself in the years ahead. It would be faster, waged through cooperation between people and intelligent machines, and heavily reliant on autonomous tools such as drones. Space and cyberspace would be increasingly important. Conventional conflict would involve a surge in “anti-access/area-denial” capabilities—tools and techniques that would limit the reach and maneuverability of militaries beyond their shores, particularly in the Indo-Pacific. Nuclear threats would persist, but they would prove limited compared with the existential perils of the past. Some of these predictions have been borne out; others have been turned on their heads. Artificial intelligence has in fact further enabled the proliferation and utility of uncrewed systems both in the air and under the water. Drones have indeed transformed battlefields—and the need for counterdrone capabilities has skyrocketed. And the strategic importance of space, including the commercial space sector, has been made clear, most recently by Ukraine’s reliance on the Starlink satellite network for Internet connectivity. On the other hand, Russian President Vladimir Putin has repeatedly made veiled threats to use his country’s nuclear weapons and has even stationed some of them in Belarus. Meanwhile, China’s historic modernization and diversification of its nuclear capabilities have ignited alarm over the possibility that a conventional conflict could escalate to the most extreme level. The expansion and improvement of China’s arsenal has also transformed and complicated the dynamics of nuclear deterrence, since what was historically a bipolar challenge between the United States and Russia is now tripolar. What few, if any, defense theorists foresaw was the broadening of war that the past few years has witnessed, as the array of features that shape conflict expanded. 
What theorists call “the continuum of conflict” has changed. In an earlier era, one might have seen the terrorism and insurgency of Hamas, Hezbollah, and the Houthis as inhabiting the low end of the spectrum, the armies waging conventional warfare in Ukraine as residing in the middle, and the nuclear threats shaping Russia’s war and China’s growing arsenal as sitting at the high end. Today, however, there is no sense of mutual exclusivity; the continuum has returned but also collapsed. In Ukraine, “robot dogs” patrol the ground and autonomous drones launch missiles from the sky amid trench warfare that looks like World War I—all under the specter of nuclear weapons. In the Middle East, combatants have combined sophisticated air and missile defense systems with individual shooting attacks by armed men riding motorcycles. In the Indo-Pacific, Chinese and Philippine forces face off over a sole dilapidated ship while the skies and seas surrounding Taiwan get squeezed by threatening maneuvers from China’s air force and navy. The emergence of sea-based struggles marks a major departure from the post-9/11 era, when conflict was largely oriented around ground threats. Back then, most maritime attacks were sea-to-ground, and most air attacks were air-to-ground. Today, however, the maritime domain has become a site of direct conflict. Ukraine, for example, has taken out more than 20 Russian ships in the Black Sea, and control of that critical waterway remains contested. Meanwhile, Houthi attacks have largely closed the Red Sea to commercial shipping. Safeguarding freedom of navigation has historically been a top mission of the U.S. Navy. But its inability to ensure the security of the Red Sea has called into question whether it would be able to fulfill that mission in an increasingly turbulent Indo-Pacific. The plural character of conflict also underscores the risk of being lured by today’s weapon of choice, which might turn out to be a flash in the pan. Compared with the post-9/11 era, more countries now have greater access to capital and more R&D capacity, allowing them to respond more quickly and adeptly to new weapons and technologies by developing countermeasures. This exacerbates a familiar dynamic that the military scholar J. F. C. Fuller described as “the constant tactical factor”—the reality that “every improvement in weapons has eventually been met by a counter-improvement which has rendered the improvement obsolete.” For example, in 2022, defense experts hailed the efficacy of Ukraine’s precision-guided munitions as a game-changer in the war against Russia. But by late 2023, some of those weapons’ limitations had become clear when electronic jamming by the Russian military severely restricted their ability to find targets on the battlefield. ALL IN Another feature of the age of comprehensive conflict is a transformation in the demography of war: the cast of characters has become increasingly diverse. The post-9/11 wars demonstrated the outsize impact of terrorist groups, proxies, and militias. As those conflicts ground on, many policymakers wished they could go back to the traditional focus on state militaries—particularly given the enormous investments some states were making in their defenses. They should have been careful what they wished for: state militaries are back, but nonstate groups have hardly left the stage. The current security environment offers the misfortune of dealing with both. 
In the Middle East, multiple state militaries are increasingly fighting or enmeshed with surprisingly influential nonstate actors. Consider the Houthis. Although in essence still a relatively small rebel movement, the Houthis are nevertheless responsible for the most intense set of sea engagements the U.S. Navy has faced since World War II, according to navy officials. With help from Iran, the Houthis are also punching above their weight in the air by manufacturing and deploying their own drones. Meanwhile, in Ukraine, Kyiv’s regular forces are fighting alongside cadres of international volunteers in numbers likely not seen since the Spanish Civil War. And to augment Russia’s traditional forces, the Kremlin has incorporated mercenaries from the Wagner paramilitary company and sent tens of thousands of convicts to war—a practice that Ukraine’s military recently started copying. In this environment, the task of building partner forces becomes even more complex than during the post-9/11 wars. U.S. programs to build the Afghan and Iraqi militaries focused on countering terrorist and insurgent threats with the aim of enabling friendly regimes to exert sovereignty over their territories. To help build up Ukraine’s forces for their fight against another state military, however, the United States and its allies have had to relearn how to teach. The Pentagon has also had to build a new kind of coalition, convening more than 50 countries from across the world to coordinate materiel donations to Ukraine through the Ukraine Defense Contact Group—the most complex and most rapid effort ever undertaken to stand up a single country’s military. Nearly a decade ago, I noted in these pages that although the United States had been building militaries in fragile states since World War II, its record was lackluster. That is no longer the case. The Pentagon’s new system has demonstrated that it can move so quickly that materiel support for Ukraine has at times been delivered within days. The system has surged in ways that many experts (including me) thought impossible. In particular, the technical aspect of equipping militaries has improved. For example, the U.S. Army’s use of artificial intelligence has made it much easier for Ukraine’s military to be able to see and understand the battlefield, and to make decisions and act accordingly. Lessons from the rapid delivery of assistance to Ukraine have also been applied to the Israel-Hamas war; within days of the October 7 attacks, U.S.-supplied air defense capabilities and munitions were in Israel to protect its skies and help it respond.

Bioweapons risks of AI not meaningfully greater than internet risks

Prabhakar, 12-3, 24, Arati Prabhakar has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office, What the departing White House chief tech advisor has to say on AI, https://www.linkedin.com/pulse/what-departing-white-house-chief-tech-advisor-has-cq2ie/?trackingId=2wEam0HOAt%2F%2FMQOP9Yis6w%3D%3D

At first there was a lot of concern expressed by the AI developers about biological weapons. When people did the serious benchmarking about how much riskier that was compared with someone just doing Google searches, it turns out, there’s a marginally worse risk, but it is marginal. If you haven’t been thinking about how bad actors can do bad things, then the chatbots look incredibly alarming. But you really have to say, compared to what?

Biometric identification used for racial bias

Prabhakar, 12-3, 24, Arati Prabhakar has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office, What the departing White House chief tech advisor has to say on AI, https://www.linkedin.com/pulse/what-departing-white-house-chief-tech-advisor-has-cq2ie/?trackingId=2wEam0HOAt%2F%2FMQOP9Yis6w%3D%3D

I’ll give you a great example. Facial recognition technology is an area where there have been horrific, inappropriate uses: take a grainy video from a convenience store and identify a black man who has never even been in that state, who’s then arrested for a crime he didn’t commit. (Editor’s note: Prabhakar is referring to this story). Wrongful arrests based on a really poor use of facial recognition technology, that has got to stop.

If one country excels above another in AI, the world order will collapse, triggering an internal implosion and an external explosion

Henry A. Kissinger, Eric Schmidt, and Craig Mundie, November 18, 2024, HENRY A. KISSINGER served as U.S. Secretary of State from 1973 to 1977 and as U.S. National Security Adviser from 1969 to 1975. ERIC SCHMIDT is Chair of the Special Competitive Studies Project and former CEO and Chair of Google. CRAIG MUNDIE is the Co-Founder of Alliant Computer Systems and the former Senior Adviser to the CEO at Microsoft, War and Peace in the Age of Artificial Intelligence, What It Will Mean for the World When Machines Shape Strategy and Statecraft, https://www.foreignaffairs.com/united-states/war-and-peace-age-artificial-intelligence

From the recalibration of military strategy to the reconstitution of diplomacy, artificial intelligence will become a key determinant of order in the world. Immune to fear and favor, AI introduces a new possibility of objectivity in strategic decision-making. But that objectivity, harnessed by both the warfighter and the peacemaker, should preserve human subjectivity, which is essential for the responsible exercise of force. AI in war will illuminate the best and worst expressions of humanity. It will serve as the means both to wage war and to end it.

Humanity’s long-standing struggle to constitute itself in ever-more complex arrangements, so that no state gains absolute mastery over others, has achieved the status of a continuous, uninterrupted law of nature. In a world where the major actors are still human—even if equipped with AI to inform, consult, and advise them—countries should still enjoy a degree of stability based on shared norms of conduct, subject to the tunings and adjustments of time.

But if AI emerges as a practically independent political, diplomatic, and military set of entities, that would force the exchange of the age-old balance of power for a new, uncharted disequilibrium. The international concert of nation-states—a tenuous and shifting equilibrium achieved in the last few centuries—has held in part because of the inherent equality of the players. A world of severe asymmetry—for instance, if some states adopted AI at the highest level more readily than others—would be far less predictable. In cases where some humans might face off militarily or diplomatically against a highly AI-enabled state, or against AI itself, humans could struggle to survive, much less compete. Such an intermediate order could witness an internal implosion of societies and an uncontrollable explosion of external conflicts.

Other possibilities abound. Beyond seeking security, humans have long fought wars in pursuit of triumph or in defense of honor. Machines—for now—lack any conception of either triumph or honor. They may never go to war, choosing instead, for instance, immediate, carefully divided transfers of territory based on complex calculations. Or they might—prizing an outcome and deprioritizing individual lives—take actions that spiral into bloody wars of human attrition. In one scenario, our species could emerge so transformed as to avoid entirely the brutality of human conduct. In another, we would become so subjugated by the technology that it would drive us back to a barbaric past.

THE AI SECURITY DILEMMA

Many countries are fixated on how to “win the AI race.” In part, that drive is understandable. Culture, history, communication, and perception have conspired to create among today’s major powers a diplomatic situation that fosters insecurity and suspicion on all sides. Leaders believe that an incremental tactical advantage could be decisive in any future conflict, and that AI could offer just that advantage.

If each country wished to maximize its position, then the conditions would be set for a psychological contest among rival military forces and intelligence agencies the likes of which humanity has never faced before. An existential security dilemma awaits. The logical first wish for any human actor coming into possession of superintelligent AI—that is, a hypothetical AI more intelligent than a human—might be to attempt to guarantee that nobody else gains this powerful version of the technology. Any such actor might also reasonably assume by default that its rival, dogged by the same uncertainties and facing the same stakes, would be pondering a similar move.

Short of war, a superintelligent AI could subvert, undermine, and block a competing program. For instance, AI promises both to strengthen conventional computer viruses with unprecedented potency and to disguise them thoroughly. Like the computer worm Stuxnet—the cyberweapon uncovered in 2010 that was thought to have ruined a fifth of Iran’s uranium centrifuges—an AI agent could sabotage a rival’s progress in ways that obfuscate its presence, thereby forcing enemy scientists to chase shadows. With its unique capacity for manipulation of weaknesses in human psychology, an AI could also hijack a rival nation’s media, producing a deluge of synthetic disinformation so alarming as to inspire mass opposition against further progress in that country’s AI capacities.

It will be hard for countries to get a clear sense of where they stand relative to others in the AI race. Already the largest AI models are being trained on secure networks disconnected from the rest of the Internet. Some executives believe that AI development will itself sooner or later migrate to impenetrable bunkers whose supercomputers will be powered with nuclear reactors. Data centers are even now being built on the bottom of the ocean floor. Soon they could be sequestered in orbits around Earth. Corporations or countries might increasingly “go dark,” ceasing to publish AI research so as not only to avoid enabling malicious actors but also to obscure their own pace of development. To distort the true picture of their progress, others might even try deliberately publishing misleading research, with AI assisting in the creation of convincing fabrications.

There is a precedent for such scientific subterfuge. In 1942, the Soviet physicist Georgy Flyorov correctly inferred that the United States was building a nuclear bomb after he noticed that the Americans and the British had suddenly stopped publishing scientific papers on atomic fission. Today, such a contest would be made all the more unpredictable given the complexity and ambiguity of measuring progress toward something so abstract as intelligence. Although some see advantage as commensurate with the size of the AI models in their possession, a larger model is not necessarily superior across all contexts and may not always prevail over smaller models deployed at scale. Smaller and more specialized AI machines might operate like a swarm of drones against an aircraft carrier—unable to destroy it, but sufficient to neutralize it.

An actor might be perceived to have an overall advantage were it to demonstrate achievement in a particular capability. The problem with this line of thinking, however, is that AI refers merely to a process of machine learning that is embedded not just in a single technology but also in a broad spectrum of technologies. Capability in any one area may thus be driven by factors entirely different from capability in another. In these senses, any “advantage” as ordinarily calculated may be illusory.

Moreover, as demonstrated by the exponential and unforeseen explosion of AI capability in recent years, the trajectory of progress is neither linear nor predictable. Even if one actor could be said to “lead” another by an approximate number of years or months, a sudden technical or theoretical breakthrough in a key area at a critical moment could invert the positions of all players.

In such a world, where no leaders could trust their most solid intelligence, their most primal instincts, or even the basis of reality itself, governments could not be blamed for acting from a position of maximum paranoia and suspicion. Leaders are no doubt already making decisions under the assumption that their endeavors are under surveillance or harbor distortions created by malign influence. Defaulting to worst-case scenarios, the strategic calculus of any actor at the frontier would be to prioritize speed and secrecy over safety. Human leaders could be gripped by the fear that there is no such thing as second place. Under pressure, they might prematurely accelerate the deployment of AI as deterrence against external disruption.

A NEW PARADIGM OF WAR

For almost all of human history, war has been fought in a defined space in which one could know with reasonable certainty the capability and position of hostile enemy forces. The combination of these two attributes offered each side a sense of psychological security and common consensus, allowing for the informed restraint of lethality. Only when enlightened leaders were unified in their basic understanding of how a war might be fought could opposing forces determine whether a war should be fought.

Speed and mobility have been among the most predictable factors underpinning the capability of any given piece of military equipment. An early illustration is the development of the cannon. For a millennium after their construction, the Theodosian Walls protected the great city of Constantinople from outside invaders. Then, in 1452, a Hungarian artillery engineer proposed to Emperor Constantine XI the construction of a giant cannon that, firing from behind the defensive walls, would pulverize attackers. But the complacent emperor, possessing neither the material means nor the foresight to recognize the technology’s significance, dismissed the proposal.

Unfortunately for him, the Hungarian engineer turned out to be a mercenary. Switching tactics (and sides), he updated his design to be more mobile—transportable by no fewer than 60 oxen and 400 men—and approached the emperor’s rival, the Ottoman Sultan Mehmed II, who was preparing to besiege the impermeable fortress. Winning the young sultan’s interest with his claim that this gun could “shatter the walls of Babylon itself,” the entrepreneurial Hungarian helped the Turkish forces to breach the ancient walls in only 55 days.

The contours of this fifteenth-century drama can be seen again and again throughout history. In the nineteenth century, speed and mobility transformed the fortunes first of France, as Napoleon’s army overwhelmed Europe, and then of Prussia, under the direction of Helmuth von Moltke (the Elder) and Albrecht von Roon, who capitalized on the newly developed railways to enable faster and more flexible maneuvering. Similarly, blitzkrieg—an evolution of the same German military principles—would be used against the Allies in World War II to great and terrible effect.

“Lightning war” has taken on new meaning—and ubiquity—in the era of digital warfare. Speeds are instantaneous. Attackers need not sacrifice lethality to sustain mobility, as geography is no longer a constraint. Although that combination has largely favored the offense in digital attacks, an AI era could see the increase of the velocity of response and allow cyberdefenses to match cyberoffenses.

In kinetic warfare, AI will provoke another leap forward. Drones, for instance, will be extremely quick and unimaginably mobile. Once AI is deployed not only to guide one drone but to direct fleets of them, clouds of drones will form and fly in sync as a single cohesive collective, perfect in their synchronicity. Future drone swarms will dissolve and reconstitute themselves effortlessly in units of every size, much as elite special-operations forces are built from scalable detachments, each of which is capable of sovereign command.

In addition, AI will provide similarly speedy and flexible defenses. Drone fleets are impractical if not impossible to shoot down with conventional projectiles. But AI-enabled guns firing rounds of photons and electrons (instead of ammunition) could re-create the same lethal disabling capacities as a solar storm that can fry the circuitry of exposed satellites.

AI-enabled weapons will be unprecedentedly exact. Limits to the knowledge of an antagonist’s geography have long constrained the capabilities and intentions of any warring party. But the alliance between science and war has come to ensure increasing accuracy in instruments, and AI can be expected to make more breakthroughs. AI will thus shrink the gap between original intent and ultimate outcome, including in the application of lethal force. Whether land-based drone swarms, machine corps deployed in the sea, or possibly interstellar fleets, machines will possess highly precise capabilities of killing humans with little degree of uncertainty and with limitless impact. The bounds of the potential destruction will hinge only on the will, and the restraint, of both human and machine.

AI will increase energy demand by 10% a year

Kim Norton, 7-31, 24, https://www.investors.com/news/sp-500-nuclear-power-cathie-wood-sam-altman/, S&P 500 Nuclear Energy Stocks Storm Higher As Prices Surge 800% On Largest U.S. Power Grid

Microsoft (MSFT) fell after reporting weak Azure cloud-computing growth late Tuesday, but eased fears of artificial intelligence (AI) spending fatigue. Meanwhile, Advanced Micro Devices (AMD) spiked on its own earnings, while Nvidia (NVDA) also rebounded. Meanwhile, Cathie Wood and her Ark Invest funds on Tuesday continued to build up their position in Oklo (OKLO), the nuclear power startup backed by OpenAI head Sam Altman. Wood’s Ark Autonomous Tech (ARKQ) fund scooped 266,324 Oklo shares for an estimated $2.36 million, according to the fund’s daily trading disclosures. Ark began building a position in Oklo in mid-July, buying shares repeatedly since then. Artificial intelligence — and the data centers needed to train the systems — are expected to boost energy demand throughout this decade. In the U.S., McKinsey & Co. projects that data center energy demand will grow around 10% every year through 2030. Additionally, in 2022, the 2,700 U.S. data centers consumed around 4% of the country’s total electricity generated, according to the International Energy Agency. The agency projects that by 2026, such centers will make up 6% of electricity use.

AI critical to deterrence

Commission on the National Defense Strategy, July 2024, Report of the Commission on the National Defense Strategy, https://www.rand.org/nsrd/projects/NDS-commission.html

The threats the United States faces are the most serious and most challenging the nation has encountered since 1945 and include the potential for near-term major war. The United States last fought a global conflict during World War II, which ended nearly 80 years ago. The nation was last prepared for such a fight during the Cold War, which ended 35 years ago. It is not prepared today. China and Russia are major powers that seek to undermine U.S. influence. The 2022 National Defense Strategy (NDS) recognizes these nations as the top threats to the United States and declares China to be the “pacing challenge,” based on the strength of its military and economy and its intent to exert dominance regionally and globally. The Commission finds that, in many ways, China is outpacing the United States and has largely negated the U.S. military advantage in the Western Pacific through two decades of focused military investment. Without significant change by the United States, the balance of power will continue to shift in China’s favor. China’s overall annual spending on defense is estimated at as much as $711 billion, and the Chinese government in March 2024 announced an increase in annual defense spending of 7.2 percent. Russia will devote 29 percent of its federal budget this year on national defense as it continues to reconstitute its military and economy after its failed initial invasion of Ukraine in 2022. Russia possesses considerable strategic, space, and cyber capabilities and under Vladimir Putin seeks a return to its global leadership role of the Cold War. China and Russia’s “no-limits” partnership, formed in February 2022 just days before Russia’s invasion of Ukraine, has only deepened and broadened to include a military and economic partnership with Iran and North Korea, each of which presents its own significant threat to U.S. interests. This new alignment of nations opposed to U.S. interests creates a real risk, if not likelihood, that conflict anywhere could become a multitheater or global war. China (and, to a lesser extent, Russia) is fusing military, diplomatic, and industrial strength to expand power worldwide and coerce its neighbors. The United States needs a similarly integrated approach to match, deter, and overcome theirs, which we describe as all elements of national power. The NDS and the 2022 National Security Strategy promote the concept of “integrated deterrence,” but neither one presents a plan for implementing this approach, and there are few indications that the U.S. government is consistently integrating tools of national security power. The U.S. military is the largest, but not the only, component of U.S. deterrence and power. An effective approach to an all elements of national power strategy also relies on a coordinated effort to bring together diplomacy, economic investment, cybersecurity, trade, education, industrial capacity, technical innovation, civic engagement, and international cooperation. Recognizing the indispensable role that allies play in promoting international security, the United States has successfully bolstered bilateral and multilateral alliances in the Pacific, strengthened the North Atlantic Treaty Organization (NATO), and created new arrangements, such as AUKUS. The United States cannot compete with China, Russia, and their partners alone—and certainly cannot win a war that way. 
Given the growing alignment of authoritarian states, the United States must continue to invest in strengthening its allies and integrating its military (and economic, diplomatic, and industrial) efforts with theirs. Alliances are not a panacea, but the U.S. force structure should account for the forces and commitments from U.S. allies. The Commission finds that DoD’s business practices, byzantine research and development (R&D) and procurement systems, reliance on decades-old military hardware, and culture of risk avoidance reflect an era of uncontested military dominance.8 Such methods are not suited to today’s strategic environment. There are recent examples that demonstrate that DoD can move quickly, break with tradition, and engage industry, including the rapid stand-up of the Space Force, the Defense Innovation Unit, the Office of Strategic Capital, and the Replicator Initiative, but these examples remain the exception, and the rest of the department must follow suit. DoD leaders and Congress must replace an ossified, risk-averse organization with one that is able to build and field the force the United States needs. The Commission finds that the U.S. military lacks both the capabilities and the capacity required to be confident it can deter and prevail in combat. It needs to do a better job of incorporating new technology at scale, fielding more and higher-capability platforms, software, and munitions, and deploying innovative operational concepts to employ them together better. The war in Ukraine has demonstrated the need to prepare for new forms of conflict and to integrate technology and new capabilities rapidly with older systems. Such technologies include swarms of attritable systems, artificial intelligence–enabled capabilities, hypersonics and electronic warfare, fully integrated cyber and space capabilities, and vigorous competition in the information domain. Programs that are not needed for future combat should be divested to invest in others.

Brain-Computer Interfaces will connect us to machines to augment our intelligence

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

Luckily, we can now see this path much more clearly. Although many technological challenges remain before we can achieve the Singularity, its key precursors are rapidly moving from the realm of theoretical science to active research and development. During the coming decade, people will interact with AI that can seem convincingly human, and simple brain–computer interfaces will impact daily life much like smartphones do today. A digital revolution in biotech will cure diseases and meaningfully extend people’s healthy lives. At the same time, though, many workers will feel the sting of economic disruption, and all of us will face risks from accidental or deliberate misuse of these new capabilities. During the 2030s, self-improving AI and maturing nanotechnology will unite humans and our machine creations as never before—heightening both the promise and the peril even further. Kurzweil, Ray. The Singularity Is Nearer (p. 4). Penguin Publishing Group. Kindle Edition.

So next, we’ll look ahead to the tools we’ll use over the coming decades to gain increasing mastery over biology itself—first by defeating the aging of our bodies and then by augmenting our limited brains and ushering in the Singularity. Yet these breakthroughs may also put us in jeopardy. Revolutionary new systems in biotechnology, nanotechnology, or artificial intelligence could possibly lead to an existential catastrophe like a devastating pandemic or a chain reaction of self-replicating machines. We’ll conclude with an assessment of these threats, which warrant careful planning, but as I’ll explain, there are very promising approaches for how to mitigate them. Kurzweil, Ray. The Singularity Is Nearer (p. 5). Penguin Publishing Group. Kindle Edition

CONTINUES

A key capability in the 2030s will be to connect the upper ranges of our neocortices to the cloud, which will directly extend our thinking. In this way, rather than AI being a competitor, it will become an extension of ourselves. By the time this happens, the nonbiological portions of our minds will provide thousands of times more cognitive capacity than the biological parts. As this progresses exponentially, we will extend our minds many millions-fold by 2045. It is this incomprehensible speed and magnitude of transformation that will enable us to borrow the singularity metaphor from physics to describe our future. Kurzweil, Ray. The Singularity Is Nearer (pp. 9-10). Penguin Publishing Group. Kindle Edition.

AI will radically expand life expectancy

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

Alzheimer’s. To an extent we can reduce these risks through lifestyle, diet, and supplementation—what I call the first bridge to radical life extension.[106] But those can only delay the inevitable. This is why life expectancy gains in developed countries have slowed since roughly the middle of the twentieth century. For example, from 1880 to 1900, life expectancy at birth in the United States increased from about thirty-nine to forty-nine, but from 1980 to 2000—after the focus of medicine had shifted from infectious disease to chronic and degenerative disease—it only increased from seventy-four to seventy-six.[107] Fortunately, during the 2020s we are entering the second bridge: combining artificial intelligence and biotechnology to defeat these degenerative diseases. We have already progressed beyond using computers just to organize information about interventions and clinical trials. We are now utilizing AI to find new drugs, and by the end of this decade we will be able to start the process of augmenting and ultimately replacing slow, underpowered human trials with digital simulations. In effect we are in the process of turning medicine into an information technology, harnessing the exponential progress that characterizes these technologies to master the software of biology. One of the earliest and most important examples of this is found in the field of genetics. Since the completion of the Human Genome Project in 2003, the cost of genome sequencing has followed a sustained exponential trend, falling on average by around half each year. Despite a brief plateau in sequencing costs from 2016 to 2018 and slowed progress amid the disruptions of the COVID-19 pandemic, costs continue to fall—and this will likely accelerate again as sophisticated AI plays a greater role in sequencing. Costs have plunged from about $50 million per genome in 2003 to as low as $399 in early 2023, with one company promising to have $100 tests available by the time you read this.[108] As AI transforms more and more areas of medicine, it will give rise to many similar trends. It is already starting to have a clinical impact,[109] but we are still in the early part of this particular exponential curve. The current trickle of applications will become a flood by the end of the 2020s. We will then be able to start directly addressing the biological factors that now limit maximum life span to about 120 years, including mitochondrial genetic mutations, reduced telomere length, and the uncontrolled cell division that causes cancer.[110] In the 2030s we will reach the third bridge of radical life extension: medical nanorobots with the ability to intelligently conduct cellular-level maintenance and repair throughout our bodies. By some definitions, certain biomolecules are already considered nanobots. But what will set the nanobots of Bridge Three apart is their ability to be actively controlled by AI to perform varying tasks. At this stage, we will gain a similar level of control over our biology as we presently have over automobile maintenance. That is, unless your car gets destroyed outright in a major wreck, you can continue to repair or replace its parts indefinitely. Likewise, smart nanobots will enable targeted repair or upgrading of individual cells—definitively defeating aging. More on that later in chapter 6. The fourth bridge—being able to back up our mind files digitally—will be a 2040s technology. 
As I argue in chapter 3, the core of a person’s identity is not their brain itself, but rather the very particular arrangement of information that their brain is able to represent and manipulate. Once we can scan this information with sufficient accuracy, we’ll be able to replicate it on digital substrates. This would mean that even if the biological brain was destroyed, it wouldn’t extinguish the person’s identity—which could achieve an almost arbitrarily long life span being copied and recopied to safe backups. Kurzweil, Ray. The Singularity Is Nearer (p. 136). Penguin Publishing Group. Kindle Edition.
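
As a quick arithmetic check on the sequencing-cost trend quoted in the card above, the sketch below uses only its two endpoint figures ($50 million per genome in 2003, $399 in early 2023) to compute the implied average annual decline and halving time. The Python framing and variable names are ours, not Kurzweil's or the NHGRI's.

```python
import math

cost_2003 = 50_000_000  # dollars per genome in 2003 (figure quoted in the card)
cost_2023 = 399         # dollars per genome in early 2023 (figure quoted in the card)
years = 2023 - 2003

annual_factor = (cost_2023 / cost_2003) ** (1 / years)   # average year-over-year multiplier
halving_time = math.log(2) / -math.log(annual_factor)    # years needed for costs to halve

print(f"average annual cost multiplier: {annual_factor:.2f}")  # ~0.56, i.e. ~44% cheaper each year
print(f"implied halving time: {halving_time:.1f} years")       # ~1.2 years
```

Taken at face value, the two endpoints imply costs fell by roughly 44 percent per year, a halving time of a bit over a year, which is broadly consistent with the "around half each year" characterization in the card.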

AI will collapse food prices

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

Vertical Agriculture Will Provide Inexpensive, High-Quality Food and Free Up the Land We Use for Horizontal Agriculture. Most archaeologists estimate that the birth of human agriculture took place around 12,000 years ago, but there is some evidence that the earliest agriculture may date as far back as 23,000 years.[250] It is possible that future archaeological discoveries will revise this understanding even further. Whenever agriculture began, the amount of food that could then be grown from a given area of land was quite low. The first farmers sprinkled seeds into the natural soil and let the rain water them. The result of this inefficient process was that the vast majority of the population needed to work in agriculture just to survive. By around 6,000 BC, irrigation enabled crops to receive more water than they could get from rain alone.[251] Plant breeding enlarged the edible parts of plants and made them more nutritious. Fertilizers supercharged the soil with substances that promote growth. Better agricultural methods allowed farmers to plant crops in the most efficient arrangements possible. The result was that more food became available, so over the centuries more and more people could spend their time on other activities, like trade, science, and philosophy. Some of this specialization yielded further farming innovation, creating a feedback loop that drove even greater progress. This dynamic made our civilization possible. A useful way of quantifying this progress is crop density: how much food can be grown in a given area of land. For example, corn production in the United States uses land more than seven times as efficiently as a century and a half ago. In 1866, US corn farmers averaged an estimated 24.3 bushels per acre, and by 2021 this had reached 176.7 bushels per acre.[252] Worldwide, land efficiency improvement has been roughly exponential, and today we need, on average, less than 30 percent of the land that we needed in 1961 to grow a given quantity of crops.[253] This trend has been essential to enabling the global population increases in that time and has spared humanity the mass starvation from overpopulation that many people worried about when I was growing up. Further, because crops are now grown at extremely high density, and machines do a lot of the work that used to be done by hand, one farmworker can grow enough food to feed about seventy people. As a result, farmwork has gone from constituting 80 percent of all labor in the United States in 1810 to 40 percent in 1900 to less than 1.4 percent today.[254] Yet crop densities are now approaching the theoretical limit of how much food can be grown in a given outdoor area. One emerging solution is to grow multiple stacked layers of crops, referred to as vertical agriculture.[255] Vertical farms take advantage of several technologies.[256] Typically they grow crops hydroponically, meaning that instead of being grown in soil, plants are raised indoors in trays of nutrient-rich water. These trays are loaded into frames and stacked many stories high, which means that excess water from one level can trickle down to the next instead of being lost as runoff. Some vertical farms now use a new approach called aeroponics, where the water is replaced with a fine mist.[257] And instead of sunlight, special LEDs are installed to ensure that each plant gets the perfect amount of light.
Vertical farming company Gotham Greens, which has ten large facilities ranging from California to Rhode Island, is one of the industry leaders. As of early 2023, it had raised $440 million in venture funding.[258] Its technology enables it to use “95 percent less water and 97 percent less land than a traditional dirt farm” for a given crop yield.[259] Such efficiencies will both free up water and land for other uses (recall that agriculture is currently estimated to take up around half of the world’s habitable land) and provide a much greater abundance of affordable food.[260] Vertical farming has other key advantages. By preventing agricultural runoff, it does away with one of the main causes of pollution in waterways. It avoids the need for farming loose soil, which gets blown into the air and diminishes air quality. It makes toxic pesticides unnecessary, as pests are unable to enter a properly designed vertical farm. This approach also makes it possible to raise crops year-round, including species that could not grow in the local outdoor climate. That likewise prevents crop losses due to frosts and bad weather. Perhaps most importantly, it means that cities and villages can grow their own food locally instead of bringing it in by trains and trucks from hundreds or even thousands of miles away. As vertical agriculture becomes less expensive and more widespread, it will lead to great reductions in pollution and emissions. In the coming years, converging innovations in photovoltaic electricity, materials science, robotics, and artificial intelligence will make vertical farming much less expensive than current agriculture. Many facilities will be powered by efficient solar cells, produce new fertilizers on-site, collect their water from the air, and harvest the crops with automated machines. With very few workers required and a small land footprint, future vertical farms will eventually be able to produce crops so cheaply that consumers may be able to get food products almost for free. This process mirrors what took place in information technology as a result of the law of accelerating returns. As computing power has gotten exponentially cheaper, platforms like Google and Facebook have been able to provide their services to users for free while paying for their own costs through alternative business models such as advertising. By using automation and AI to control all aspects of a vertical farm, vertical agriculture represents turning food production essentially into an information technology. Kurzweil, Ray. The Singularity Is Nearer (pp. 181-183). Penguin Publishing Group. Kindle Edition.
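
The land-efficiency claim in the card can be checked with the corn-yield figures it cites (24.3 bushels per acre in 1866 versus 176.7 in 2021). The short sketch below is ours; only the two yield figures come from the card.

```python
yield_1866 = 24.3    # bushels of corn per acre, 1866 (figure quoted in the card)
yield_2021 = 176.7   # bushels of corn per acre, 2021 (figure quoted in the card)
years = 2021 - 1866

multiple = yield_2021 / yield_1866            # overall land-efficiency multiple
annual_growth = multiple ** (1 / years) - 1   # implied compound annual improvement

print(f"yield multiple since 1866: {multiple:.1f}x")               # ~7.3x
print(f"implied average annual improvement: {annual_growth:.1%}")  # ~1.3% per year
```

The roughly 7.3x multiple matches the card's "more than seven times as efficiently," and it corresponds to only about 1.3 percent compound improvement per year sustained over 155 years.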

AI will lead to massive unemployment in some sectors, potentially 10% of US workers in transportation and related fields

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

  1. As of 2021, the most recent year for which data is available as I write this, that stunning ratio still holds—around 20 million simulated miles a day, versus more than 20 million real miles since its founding.[5] As discussed in chapter 2, such simulation can generate sufficient examples to train deep (e.g., one-hundred-layer) neural nets. This is how Alphabet subsidiary DeepMind generated enough training examples to soar past the best humans in the board game Go.[6] Simulating the world of driving is far more complicated than simulating the world of Go, but Waymo uses the same fundamental strategy—and has now honed its algorithms with more than 20 billion miles of simulated driving[7] and generated enough data to apply deep learning to improve its algorithms. If your job is driving a car, bus, or truck, this news is likely to give you pause. Across the United States, more than 2.7 percent of employed persons work as some kind of driver—whether driving trucks, buses, taxis, delivery vans, or some other vehicle.[8] According to the most recent data available, this figure accounts for over 4.6 million jobs.[9] While there is room for disagreement over exactly how quickly autonomous vehicles will put these people out of work, it is virtually certain that many of them will lose their jobs before they would have otherwise retired. Further, automation affecting these jobs will have uneven impacts all over the country. While in large states like California and Florida drivers account for less than 3 percent of the employed labor force, in Wyoming and Idaho the figure exceeds 4 percent.[10] In parts of Texas, New Jersey, and New York, the percentage rises to 5 percent, 7 percent, or even 8 percent.[11] Most of these drivers are men, most are middle-aged, and most do not have college educations.[12] But autonomous vehicles won’t just disrupt the jobs of people who physically drive behind the wheel. As truck drivers lose their jobs to automation, there will be less need for people to do truckers’ payroll and for retail workers in roadside convenience stores and motels. There’ll be less need for people to clean truck stop bathrooms, and lower demand for sex workers in the places truckers frequent today. Although we know in general terms that these effects will happen, it is very difficult to estimate precisely how large they will be or how quickly these changes will unfold. Yet it is helpful to keep in mind that transportation and transportation-related industries directly employ about 10.2 percent of US workers, according to the latest (2021) estimate of the Bureau of Transportation Statistics.[13] Even relatively small disruptions in a sector that large will have major consequences. Yet driving is just one of a very long list of occupations that are threatened in the fairly near term by AI that exploits the advantage of training on massive datasets. 
A landmark 2013 study by Oxford University scholars Carl Benedikt Frey and Michael Osborne ranked about seven hundred occupations on their likelihood of being disrupted by the early 2030s.[14] At a 99 percent likelihood of being able to be automated were such job categories as telemarketers, insurance underwriters, and tax preparers.[15] More than half of all occupations had a greater than 50 percent likelihood of being automatable.[16] High on that list were factory jobs, customer service, banking jobs, and of course driving cars, trucks, and buses.[17] Low on that list were jobs that require close, flexible personal interaction, such as occupational therapists, social workers, and sex workers.[18] Over the decade since that report was released, evidence has continued to accumulate in support of its startling core conclusions. A 2018 study by the Organisation for Economic Co-operation and Development reviewed how likely it was for each task in a given job to be automated and obtained results similar to Frey and Osborne’s.[19] The conclusion was that 14 percent of jobs across thirty-two countries had more than a 70 percent chance of being eliminated through automation over the succeeding decade, and another 32 percent had a probability of over 50 percent.[20] The results of the study suggested that about 210 million jobs were at risk in these countries.[21] Indeed, a 2021 OECD report confirmed from the latest data that employment growth has been much slower for jobs at higher risk of automation.[22] And all this research was done before generative AI breakthroughs like ChatGPT and Bard. The latest estimates, such as a 2023 report by McKinsey, found that 63 percent of all working time in today’s developed economies is spent on tasks that could already be automated with today’s technology.[23] If adoption proceeds quickly, half of this work could be automated by 2030, while McKinsey’s midpoint scenarios forecast 2045—assuming no future AI breakthroughs. But we know AI is going to continue to progress—exponentially—until we have superhuman-level AI and fully automated, atomically precise manufacturing (controlled by AI) sometime in the 2030s. Kurzweil, Ray. The Singularity Is Nearer (p. 198). Penguin Publishing Group. Kindle Edition.
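
To make concrete how occupation-level probabilities get rolled up into headline figures like "14 percent of jobs with more than a 70 percent chance of automation," here is a toy Python sketch. The 0.99 probabilities echo the "99 percent likelihood" categories quoted in the card; every other occupation, employment count, and probability is a made-up placeholder, the tiny occupation mix is deliberately skewed, and this is not the Frey–Osborne or OECD methodology, only an illustration of the aggregation mechanics.

```python
# occupation -> (employment in millions, estimated probability of automation)
# Figures other than the 0.99 entries are placeholders for illustration only.
occupations = {
    "telemarketers":          (0.2, 0.99),
    "insurance_underwriters": (0.1, 0.99),
    "truck_drivers":          (3.5, 0.80),
    "retail_cashiers":        (3.3, 0.95),
    "registered_nurses":      (3.0, 0.01),
    "schoolteachers":         (3.8, 0.01),
    "social_workers":         (0.7, 0.03),
}

total_jobs = sum(emp for emp, _ in occupations.values())

def share_at_risk(threshold: float) -> float:
    """Share of employment in occupations whose automation probability exceeds threshold."""
    at_risk = sum(emp for emp, p in occupations.values() if p > threshold)
    return at_risk / total_jobs

# Expected employment displaced if each probability were read as a literal frequency.
expected_losses = sum(emp * p for emp, p in occupations.values())

print(f"share of jobs with >70% automation probability: {share_at_risk(0.70):.0%}")
print(f"share of jobs with >50% automation probability: {share_at_risk(0.50):.0%}")
print(f"expected jobs displaced (millions): {expected_losses:.1f}")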

AI will collapse health care  and education costs

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

The good news, though, is that artificial intelligence and technological convergence will turn more and more kinds of goods and services into information technologies during the 2020s and 2030s—allowing them to benefit from the kinds of exponential trends that have already brought such radical deflation to the digital realm. Advanced AI tutors will make possible individually tailored learning on any subject, accessible at scale to anyone with an internet connection. AI-enhanced medicine and drug discovery are still in their infancy as of this writing but will ultimately play a major role in bringing down health-care costs. Kurzweil, Ray. The Singularity Is Nearer (p. 214). Penguin Publishing Group. Kindle Edition.

AI will collapse materials costs

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

The same will happen for numerous other products that have not traditionally been considered information technologies—such as food, housing and building construction, and other physical products like clothing. For example, AI-driven advances in materials science will make solar photovoltaic electricity extremely cheap, while robotic resource extraction and autonomous electric vehicles will bring the costs of raw materials much lower. With cheap energy and materials, and automation increasingly replacing human labor outright, prices will fall substantially. In time, such effects will cover so much of the economy that we’ll be able to eliminate much of the scarcity that presently holds people back. As a result, in the 2030s it will be relatively inexpensive to live at a level that is considered luxurious today. If this analysis is correct, then technology-driven deflation in all these areas will only widen the gap between nominal productivity and the real average benefit that each hour of human work brings to society. As such effects spread beyond the digital sphere to other industries and encompass a wider portion of the whole economy, we would expect national inflation to decrease—and eventually lead to overall deflation. In other words, we can expect clearer answers to the productivity puzzle as time goes on. The US labor force participation rate, which stood at just under 67 percent in 2002, had dropped below 63 percent by 2015 and remained nearly flat until the COVID-19 pandemic despite an ostensibly booming economy.[105] The actual percentage of the total population in the labor force is smaller. In June 2008 there were more than 154 million people in the US civilian labor force out of a population of 304 million, or 50.7 percent.[106] By December 2022 there were 164 million in the labor force out of 333 million, or just under 49.5 percent.[107] That doesn’t seem like a big drop, but it still puts the United States at its lowest percentage in more than two decades. The government statistics on this do not perfectly capture economic reality, as they do not include several categories—including agricultural workers, military personnel, and federal government employees—but they are still useful for showing the direction. Kurzweil, Ray. The Singularity Is Nearer (p. 215). Penguin Publishing Group. Kindle Edition.

 

AI will provide educational opportunities

 

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

The primary method of improving human skills over the past two centuries has been education. Our investment in learning has skyrocketed over the past century, as I described previously. But we are already well into the next phase of our own betterment, which is enhancing our capabilities by merging with the intelligent technology we are creating. We are not yet putting computerized devices inside our bodies and brains, but they are literally close at hand. Almost no one could do their jobs or get an education today without the brain extenders that we use on an all-day, every-day basis—smartphones that can access almost all human knowledge or harness huge computational power with a single tap. It is therefore not an exaggeration to say that our devices have become parts of us. This was not the case as recently as two decades ago. These capabilities will become even more integrated with our lives throughout the 2020s. Search will transform from the familiar paradigm of text strings and link pages into a seamless and intuitive question-answering capability. Real-time translation between any pair of languages will become smooth and accurate, breaking down the language barriers that divide us. Augmented reality will be projected constantly onto our retinas from our glasses and contact lenses. It will also resonate in our ears and ultimately harness our other senses as well. Most of its functions and information will not be explicitly requested, but our ever-present AI assistants will anticipate our needs by watching and listening in on our activities. In the 2030s, medical nanorobots will begin to integrate these brain extensions directly into our nervous system.

 

We will be augmented, not replaced

 

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

And even though we know statistically that tens of thousands of lives will be spared annually, there’ll be no way to identify which individual people avoid death each year.[159] By contrast, the harms from autonomous vehicles will be mainly confined to the several million people who work as drivers and will lose their livelihoods. These people will be specifically identifiable, and the disruption to their lives may be quite serious. When someone is in this position, it’s not enough to know that the overall benefits to society outweigh their individual suffering. They’ll need policies in place that help reduce their economic pain and ease their transition to something else that provides meaning, dignity, and economic security. In many respects, we won’t be competing with AI any more than we currently compete with our smartphones.[167] Indeed, this symbiosis is nothing new: it has been the purpose of technology since stone tools to extend our reach physically and intellectually. Kurzweil, Ray. The Singularity Is Nearer (pp. 229-230, 233). Penguin Publishing Group. Kindle Edition.

 

AI-biotech combination will dramatically expand life expectancy

 

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

The 2020s: Combining AI with Biotechnology. When you take your car to the shop to get it fixed, the mechanic has a full understanding of its parts and how they work together. Automotive engineering is effectively an exact science. Thus, well-maintained cars can last almost indefinitely, and even the worst wrecks are technically possible to repair. The same is not true of the human body. Despite all the marvelous advances of scientific medicine over the past two hundred years, medicine is not yet an exact science. Doctors still do many things that are known to work without fully understanding how they work. Much of medicine is built on messy approximations that are usually mostly right for most patients but probably aren’t totally right for you. Turning medicine into an exact science will require transforming it into an information technology—allowing it to benefit from the exponential progress of information technologies. This profound paradigm shift is now well underway, and it involves combining biotechnology with AI and digital simulations. We are already seeing immediate benefits, as I’ll describe in this chapter—from drug discovery to disease surveillance and robotic surgery. For example, in 2023 the first drug designed end-to-end by AI entered phase II clinical trials to treat a rare lung disease.[1] But the most fundamental benefit of AI–biotech convergence is even more significant. When medicine relied solely on painstaking laboratory experimentation and human doctors passing their expertise down to the next generation, innovation made plodding, linear progress. But AI can learn from more data than a human doctor ever could and can amass experience from billions of procedures instead of the thousands a human doctor can perform in a career. And since artificial intelligence benefits from exponential improvements to its underlying hardware, as AI plays an ever greater role in medicine, health care will reap the exponential benefits as well. With these tools we’ve already begun finding answers to biochemical problems by digitally searching through every possible option and identifying solutions in hours rather than years.[2] Perhaps the most important class of problems at present is designing treatments for emerging viral threats. This challenge is like finding which key will open a given virus’s chemical lock—from a pile of keys that could fill a swimming pool. A human researcher using her own knowledge and cognitive skills might be able to identify a few dozen molecules with potential to treat the disease, but the actual number of possibly relevant molecules is generally in the trillions.[3] When these are sifted through, most will obviously be inappropriate and won’t warrant full simulation, but billions of possibilities may warrant a more robust computational examination. At the other extreme, the space of physically possible potential drug molecules has been estimated to contain some one million billion billion billion billion billion billion possibilities![4] However one frames the exact number, AI now lets scientists sort through that gigantic pile to focus on those keys most likely to fit for a given virus. Think of the advantages of this kind of exhaustive search. In our current paradigm, once we have a potentially feasible disease-fighting agent, we can organize a few dozen or a few hundred human subjects and then test them in clinical trials over the course of months or years at a cost of tens or hundreds of millions of dollars.
Very often this first option is not an ideal treatment: it requires exploration of alternatives, which will also take a few years to test. Not much further progress can be made until those results are available. The US regulatory process involves three main phases of clinical trials, and according to a recent MIT study, only 13.8 percent of candidate drugs make it all the way through to FDA approval.[5] The ultimate result is a process that typically takes a decade to bring a new drug to market, at an average cost estimated between $1.3 billion and $2.6 billion.[6] In just the past few years, the pace of AI-assisted breakthroughs has increased noticeably. In 2019 researchers at Flinders University, in Australia, created a “turbocharged” flu vaccine by using a biology simulator to discover substances that activate the human immune system.[7] It digitally generated trillions of chemicals, and the researchers, seeking the ideal formulation, used another simulator to determine whether each of them would be useful as an immune-boosting drug against the virus.[8] In 2020 a team at MIT used AI to develop a powerful antibiotic that kills some of the most dangerous drug-resistant bacteria in existence. Rather than evaluate just a few types of antibiotics, it analyzed 107 million of them in a matter of hours and returned twenty-three potential candidates, highlighting two that appear to be the most effective.[9] According to University of Pittsburgh drug design researcher Jacob Durrant, “The work really is remarkable. This approach highlights the power of computer-aided drug discovery. It would be impossible to physically test over 100 million compounds for antibiotic activity.”[10] The MIT researchers have since started applying this method to design effective new antibiotics from scratch. But by far the most important application of AI to medicine in 2020 was the key role it played in designing safe and effective COVID-19 vaccines in record time. On January 11, 2020, Chinese authorities released the virus’s genetic sequence.[11] Moderna scientists got to work with powerful machine-learning tools that analyzed what vaccine would work best against it, and just two days later they had created the sequence for its mRNA vaccine.[12] On February 7 the first clinical batch was produced. After preliminary testing, it was sent to the National Institutes of Health on February 24. And on March 16—just sixty-three days after sequence selection—the first dose went into a trial participant’s arm. Before the pandemic, vaccines typically took five to ten years to develop. Achieving this breakthrough so quickly surely saved millions of lives. But the war isn’t over. In 2021, with COVID-19 variants looming, researchers at USC developed an innovative AI tool to speed adaptive development of vaccines that may be needed as the virus continues to mutate.[13] Thanks to simulation, candidate vaccines can be designed in less than a minute and digitally validated within one hour. By the time you read this, even more advanced methods will likely be available. All the applications I’ve described are instances of a much more fundamental challenge in biology: predicting how proteins fold. The DNA instructions in our genome produce sequences of amino acids, which fold up into a protein whose three-dimensional features largely control how the protein actually works.
Our bodies are mostly made of proteins, so understanding the relationship between their composition and function is key to developing new medicines and curing disease. Unfortunately, humans have had a fairly low accuracy rate at predicting protein folding, as the complexity involved defies any single easy-to-conceptualize rule. Thus, discoveries still depend on luck and laborious effort, and optimal solutions may remain undiscovered. This has long been one of the main obstacles to achieving new pharmaceutical breakthroughs.[14] This is where the pattern recognition capabilities of AI offer a profound advantage. In 2018 Alphabet’s DeepMind created a program called AlphaFold, which competed against the leading protein-folding predictors, including both human scientists and earlier software-driven approaches.[15] DeepMind did not use the usual method of drawing on a catalog of protein shapes to be used as models. Like AlphaGo Zero, it dispensed with established human knowledge. AlphaFold placed a prominent first out of ninety-eight competing programs, having accurately predicted twenty-five out of forty-three proteins, whereas the second-place competitor got only three out of forty-three.[16] Yet the AI predictions still weren’t as accurate as lab experiments, so DeepMind went back to the drawing board and incorporated transformers—the deep-learning technique that powers GPT-3. In 2021 DeepMind publicly released AlphaFold 2, which achieved a truly stunning breakthrough.[17] The AI is now able to achieve nearly experimental-level accuracy for almost any protein it is given. This suddenly expands the number of protein structures available to biologists from over 180,000[18] to hundreds of millions, and it will soon reach the billions.[19] This will greatly accelerate the pace of biomedical discoveries. At present, AI drug discovery is a human-guided process—scientists have to identify the problem they are trying to solve, formulate the problem in chemical terms, and set the parameters of the simulation. Over the coming decades, though, AI will gain the capacity to search more creatively. For example, it might identify a problem that human clinicians hadn’t even noticed (e.g., that a particular subset of people with a certain disease don’t respond well to standard treatments) and propose complex and novel therapies. Meanwhile, AI will scale up to modeling ever larger systems in simulation—from proteins to protein complexes, organelles, cells, tissues, and whole organs. Doing so will enable us to cure diseases whose complexity puts them out of the reach of today’s medicine. For example, the past decade has seen the introduction of many promising cancer treatments, including immunotherapies like CAR-T, BiTEs, and immune checkpoint inhibitors.[20] These have saved thousands of lives, but they frequently still fail because cancers learn to resist them. Often this involves tumors altering their local environment in ways we can’t fully understand with current techniques.[21] When AI can robustly simulate the tumor and its microenvironment, though, we’ll be able to tailor therapies to overcome this resistance. Likewise, such neurodegenerative diseases as Alzheimer’s and Parkinson’s involve subtle, complex processes that cause misfolded proteins to build up in the brain and inflict harm.[22] Because it’s impossible to study these effects thoroughly in a living brain, research has been extremely slow and difficult. 
With AI simulations we’ll be able to understand their root causes and treat patients effectively long before they become debilitated. Those same brain-simulation tools will also let us achieve breakthroughs for mental health disorders, which are expected to affect more than half the US population at some point in their lives.[23] So far doctors have relied on blunt-approach psychiatric drugs like SSRIs and SNRIs, which temporarily adjust chemical imbalances but often have modest benefits, don’t work at all for some patients, and carry long lists of side effects.[24] Once AI gives us a full functional understanding of the human brain—the most complex structure in the known universe!—we’ll be able to target many mental health problems at their source. In addition to the promise of AI for discovering new therapies, we are also moving toward a revolution in the trials we use to validate them. The FDA is now incorporating simulation results in its regulatory approval process.[25] In the coming years this will be especially important in cases similar to the COVID-19 pandemic—where a new viral threat emerges suddenly and millions of lives can be saved through accelerated vaccine development.[26] But suppose we could digitize the trials process altogether—using AI to assess how a drug would work for tens of thousands of (simulated) patients for a (simulated) period of years, and do all of this in a matter of hours or days. This would enable much richer, faster, and more accurate trial results than the relatively slow, underpowered human trials we use today. A major drawback of human trials is that (depending on the type of drug and the stage of the trial) they involve only about a dozen to a few thousand subjects.[27] This means that in any given group of subjects, few of them—if any—are statistically likely to react to the drug in exactly the way your body would. Many factors can affect how well a pharmaceutical works for you, such as genetics, diet, lifestyle, hormone balance, microbiome, disease subtype, other drugs you’re taking, and other diseases you may have. If no one in the clinical trials matches you along all those dimensions, it might be the case that even though a drug is good for the average person, it’s bad for you. Today a trial might result in an average 15 percent improvement in a certain condition for 3,000 people. But simulated trials could reveal hidden details. For example, a certain subset of 250 people from that group (e.g., those with a certain gene) will actually be harmed by the drug, experiencing 50 percent worse conditions, while a different subset of 500 (e.g., those who also have kidney disease) will see a 70 percent improvement. Simulations will be able to find numerous such correlations, yielding highly specific risk-benefit profiles for each individual patient. The introduction of this technology will be gradual because the computational demands of biological simulations will vary among applications. Drugs consisting mainly of a single molecule are at the easier end of the spectrum and will be first to be simulated. Meanwhile, techniques like CRISPR and therapies intended to affect gene expression involve extremely complex interactions between many kinds of biological molecules and structures and will accordingly take longer to simulate satisfactorily in silico.
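
A minimal worked example of the subgroup arithmetic in the passage above, using only the illustrative numbers Kurzweil gives (3,000 participants, a reported 15 percent average improvement, 250 people harmed by 50 percent, 500 improved by 70 percent). The Python framing and the solved-for remainder effect are ours, not the book's.

```python
groups = {
    "gene_variant_harmed": (250, -50.0),   # 250 people, conditions 50 percent worse
    "kidney_disease":      (500, +70.0),   # 500 people, 70 percent improvement
}
total_n = 3000
reported_average = 15.0   # overall improvement (percent) reported by the trial

known_n = sum(n for n, _ in groups.values())
known_effect_sum = sum(n * e for n, e in groups.values())
remainder_n = total_n - known_n

# Effect the remaining participants must average for the overall 15 percent to hold.
remainder_effect = (reported_average * total_n - known_effect_sum) / remainder_n

for name, (n, effect) in groups.items():
    print(f"{name}: n={n}, effect={effect:+.0f}%")
print(f"implied effect for the other {remainder_n} participants: {remainder_effect:+.1f}%")  # +10.0%
```

The point of the exercise is that a headline "15 percent average improvement" is fully consistent with one subgroup being actively harmed, which is exactly the kind of detail the card argues simulated trials could surface.
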
To replace human trials as the primary testing method, AI simulations will need to model not just the direct action of a given therapeutic agent, but how it fits into a whole body’s complex systems over an extended period. It is unclear how much detail will ultimately be required for such simulations. For example, it seems unlikely that skin cells on your thumb are relevant for testing a liver cancer drug. But to validate these tools as safe, we’ll likely need to digitize the entire human body at essentially molecular resolution. Only then will researchers be able to robustly determine which factors can be confidently abstracted away for a given application. This is a long-term goal, but it is one of the most profoundly important lifesaving objectives for AI—and we will be making meaningful progress on it by the end of the 2020s. There will likely be substantial resistance in the medical community to increasing reliance on simulations for drug trials—for a variety of reasons. It is very sensible to be cautious about the risks. Doctors won’t want to change approval protocols in a way that could endanger patients, so simulations will need a very solid track record of performing as well as or better than current trial methods. But another factor is liability. Nobody will want to be the person who approved a new and promising treatment on the chance that it turns out to be a disaster. Thus, regulators will need to anticipate these emerging approaches and be proactive to make sure that the incentives are balanced between appropriate caution and lifesaving innovation. Even before we have robust biosimulation, though, AI is already making an impact in genetic biology. The 98 percent of genes that do not code for proteins were once dismissed as “junk” DNA.[28] We now know that they are critical to gene expression (which genes are actively used and to what extent), but it is very hard to determine these relationships from the noncoding DNA itself. Yet because it can detect very subtle patterns, AI is starting to break this logjam, as it did with the 2019 discovery by New York scientists of links between noncoding DNA and autism.[29] Olga Troyanskaya, the project’s lead researcher, said that it is “the first clear demonstration of non-inherited, noncoding mutations causing any complex human disease or disorder.”[30] In the wake of the COVID-19 pandemic, there is also new urgency to the challenge of monitoring infectious diseases. In the past, epidemiologists had to choose from among several imperfect types of data when trying to predict viral outbreaks across the United States. A new AI system called ARGONet integrates disparate kinds of data in real time and weights them based on their predictive power.[31] ARGONet combines electronic medical records, historical data, live Google searches by worried members of the public, and spatial-temporal patterns of how flu spreads from place to place.[32] Lead researcher Mauricio Santillana of Harvard explained, “The system continuously evaluates the predictive power of each independent method, and recalibrates how this information should be used to produce improved flu estimates.”[33] Indeed, 2019 research showed that ARGONet outperformed all previous approaches. It bested Google Flu Trends in 75 percent of states studied and was able to predict statewide flu activity a week ahead of the CDC’s normal methods.[34] More new AI-driven approaches are now being developed to help stop the next major outbreak.
In addition to scientific applications, AI is gaining the ability to surpass human doctors in clinical medicine. In a 2018 speech I predicted that within a year or two a neural net would be able to analyze radiology images as well as human doctors do. Just two weeks later, Stanford researchers announced CheXNet, which used 100,000 X-ray images to train a 121-layer convolutional neural network to diagnose fourteen different diseases. It outperformed the human doctors to whom it was compared, providing preliminary but encouraging evidence of huge diagnostic potential.[35] Other neural networks have shown similar capabilities. A 2019 study showed that a neural net analyzing natural-language clinical metrics was able to diagnose pediatric diseases better than eight junior physicians exposed to the same data—and outperformed all twenty human doctors in some areas.[36] In 2021 a Johns Hopkins team developed an AI system called DELFI that is able to recognize subtle patterns of DNA fragments in a person’s blood to detect 94 percent of lung cancers via a simple lab test—something even expert humans cannot do alone.[37] Such clinical tools are rapidly making the jump from proof of concept to large-scale deployment. In July 2022, Nature Medicine published results of a massive study of more than 590,000 hospital patients who were monitored with an AI-powered system called the Targeted Real-Time Early Warning System (TREWS) to detect sepsis—a life-threatening infection response that kills around 270,000 Americans a year.[38] Kurzweil, Ray. The Singularity Is Nearer (p. 243). Penguin Publishing Group. Kindle Edition.
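
For readers who want a concrete picture of the CheXNet-style setup described in the card (a 121-layer DenseNet scoring fourteen findings on chest X-rays), here is a minimal PyTorch sketch. It mirrors the published description in outline only; it is not the Stanford code, the hyperparameters are placeholders, and the random tensors stand in for a real labeled X-ray dataset.

```python
import torch
from torch import nn
from torchvision import models

NUM_FINDINGS = 14  # the fourteen disease labels mentioned in the card

# DenseNet-121 backbone (ImageNet-pretrained weights are downloaded on first use),
# with the classifier swapped for a 14-way multi-label head.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

criterion = nn.BCEWithLogitsLoss()  # one independent sigmoid per finding
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch: images (B, 3, 224, 224), labels (B, 14) multi-hot."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real X-rays and radiologist labels.
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, 2, (4, NUM_FINDINGS)).float()
print(f"loss on dummy batch: {training_step(dummy_images, dummy_labels):.3f}")
```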

 

Massive expansion of life expectancy coming

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

Diligent People Will Achieve Longevity Escape Velocity by Around 2030. Material abundance and peaceful democracy make life better, but the challenge with the highest stakes is the effort to preserve life itself. As I describe in chapter 6, the method of developing new health treatments is rapidly changing from a linear hit-or-miss process to an exponential information technology in which we systematically reprogram the suboptimal software of life. Biological life is suboptimal because evolution is a collection of random processes optimized by natural selection. Thus, as evolution has “explored” the range of possible genetic traits, it has depended heavily on chance and the influence of particular environmental factors. Also, the fact that this process is gradual means that evolution can achieve a design only if all the intermediate steps toward a given feature also lead creatures to be successful in their environments. So there are surely some potential traits that would be very useful but that are inaccessible because the incremental steps needed to build them would be evolutionarily unfit. By contrast, applying intelligence (human or artificial) to biology will allow us to systematically explore the full range of genetic possibilities in search of those traits that are optimal—that is, most beneficial. This includes those inaccessible to normal evolution. We have now had about two decades of exponential progress in genome sequencing (approximately doubling price-performance each year) from the completion of the Human Genome Project in 2003—and in terms of base pairs, this doubling has occurred on average roughly every fourteen months, spanning multiple technologies and dating all the way back to the first nucleotide sequencing from DNA in 1971.[284] We are finally getting to the steep part of a fifty-year-old exponential trend in biotechnology. We are beginning to use AI for discovery and design of both drugs and other interventions, and by the end of the 2020s biological simulators will be sufficiently advanced to generate key safety and efficacy data in hours rather than the years that clinical trials typically require. The transition from human trials to simulated in silico trials will be governed by two forces working in opposite directions. On the one hand there will be a legitimate concern over safety: we don’t want the simulations to miss relevant medical facts and erroneously declare a dangerous medication to be safe. On the other hand, simulated trials will be able to use vastly larger numbers of simulated patients and study a wide range of comorbidities and demographic factors—telling doctors in granular detail how a new treatment will likely affect many different kinds of patients. In addition, getting lifesaving drugs to patients faster may save many lives. The transition to simulated trials will also involve political uncertainty and bureaucratic resistance, but ultimately the effectiveness of the technology will win out. Just two notable examples of the benefits that in silico trials will bring: Immunotherapy, which is enabling many stage 4 (and otherwise terminal) cancer patients to go into remission, is a very hopeful development in cancer treatment.[285] Technologies like CAR-T cell therapy reprogram a patient’s own immune cells to recognize and destroy cancer cells.[286] So far, finding such approaches is limited by our incomplete biomolecular understanding of how cancer evades the immune system, but AI simulations will help break this logjam.
With induced pluripotent stem (iPS) cells, we are gaining the capability to rejuvenate the heart after a heart attack and overcome the “low ejection fraction” from which many heart attack survivors suffer (and from which my father died). We are now growing organs using iPS cells (adult cells that are converted into stem cells via the introduction of specific genes). As of 2023, iPS cells have been used for the regeneration of tracheas, craniofacial bones, retinal cells, peripheral nerves, and cutaneous tissue, as well as tissues from major organs like the heart, liver, and kidneys.[287] Because stem cells are similar in some ways to cancer cells, an important line of research going forward will be finding ways to minimize the risk of uncontrolled cell division. These iPS cells can act like embryonic stem cells and can differentiate into almost all types of human cells. The technique is still experimental, but it has been successfully used in human patients. For those with heart issues, it entails creating iPS cells from the patient, growing them into macroscopic sheets of heart muscle tissue, and grafting them onto a damaged heart. The therapy is believed to work via the iPS cells’ releasing growth factors that spur existing heart tissue to regenerate. In effect, they may be tricking the heart into thinking it is in a fetal environment. This procedure is being used for a broad variety of biological tissues. Once we can analyze the mechanisms of iPS action with advanced AI, regenerative medicine will be able to effectively unlock the body’s own blueprints for healing. As a result of these technologies, the old linear models of progress in medicine and longevity will no longer be appropriate. Both our natural intuition and a backward-looking view of history suggest that the next twenty years of advances will be roughly like the last twenty, but this ignores the exponential nature of the process. Knowledge that radical life extension is close at hand is spreading, but most people—both doctors and patients—are still unaware of this grand transformation in our ability to reprogram our outdated biology. As mentioned earlier in this chapter, the 2030s will bring another health revolution, which my book on health (coauthored with Terry Grossman, MD) calls the third bridge to radical life extension: medical nanorobots. This intervention will vastly extend the immune system. Our natural immune system, which includes T cells that can intelligently destroy hostile microorganisms, is very effective for many types of pathogens—so much so that we would not live long without it. However, it evolved in an era when food and resources were very limited and most humans had short life spans. If early humans reproduced when young and then died in their twenties, evolution had no reason to favor mutations that could have strengthened the immune system against threats that mainly appear later in life, like cancer and neurodegenerative diseases (often caused by misfolded proteins called prions). Likewise, because many viruses come from livestock, our evolutionary ancestors who existed before animal domestication did not evolve strong defenses against them.[288] Nanorobots not only will be programmed to destroy all types of pathogens but will be able to treat metabolic diseases. Except for the heart and the brain, our major internal organs put substances into the bloodstream or remove them, and many diseases result from their malfunction. 
For example, type 1 diabetes is caused by failure of the pancreatic islet cells to produce insulin.[289] Medical nanorobots will monitor the blood supply and increase or decrease various substances, including hormones, nutrients, oxygen, carbon dioxide, and toxins, thus augmenting or even replacing the function of the organs. Using these technologies, by the end of the 2030s we will largely be able to overcome diseases and the aging process. The 2020s will feature increasingly dramatic pharmaceutical and nutritional discoveries, largely driven by advanced AI—not enough to cure aging on their own, but sufficient to extend many lives long enough to reach the third bridge. And so, by around 2030, the most diligent and informed people will reach “longevity escape velocity”—a tipping point at which we can add more than a year to our remaining life expectancy for each calendar year that passes. The sands of time will start running in rather than out. The fourth bridge to radical life extension will be the ability to essentially back up who we are, just as we do routinely with all of our digital information. As we augment our biological neocortex with realistic (albeit much faster) models of the neocortex in the cloud, our thinking will become a hybrid of the biological thinking we are accustomed to today and its digital extension. The digital portion will expand exponentially and ultimately predominate. It will become powerful enough to fully understand, model, and simulate the biological portion, enabling us to back up all of our thinking. This scenario will become realistic as we approach the Singularity in the mid-2040s. The ultimate goal is to put our destiny in our own hands, not in the metaphorical hands of fate—to live as long as we wish. But why would anyone ever choose to die? Research shows that those who take their own lives are typically in unbearable pain, whether physical or emotional.[290] While advances in medicine and neuroscience cannot prevent all of those cases, they will likely make them much rarer. Once we have backed ourselves up, how could we die, anyway? The cloud already has many backups of all of the information it contains, a feature that will be greatly enhanced by the 2040s. Destroying all copies of oneself may be close to impossible. If we design mind-backup systems in such a way that a person can easily choose to delete their files (hoping to maximize personal autonomy), this inherently creates security risks where a person could be tricked or coerced into making such a choice and could increase vulnerability to cyberattacks. On the other hand, limiting people’s ability to control this most intimate of their data impinges on an important freedom. I am optimistic, though, that suitable safeguards can be deployed, much like those that have successfully protected nuclear weapons for decades. If you restored your mind file after biological death, would you really be restoring yourself? As I discussed in chapter 3, that is not a scientific question but a philosophical one, which we’ll have to grapple with during the lifetimes of most people already alive today. Finally, some have an ethical concern about equity and inequality. A common challenge to these predictions about longevity is that only the wealthy will be able to afford the technologies of radical life extension. My response is to point out the history of the cell phone. You indeed had to be wealthy to have a mobile phone as recently as thirty years ago, and that device did not work very well. 
Today there are billions of phones, and they do a lot more than just make phone calls. They are now memory extenders that let us access almost all of human knowledge. Such technologies start out being expensive with limited function. By the time they are perfected, they are affordable to almost everyone. And the reason is the exponential price-performance improvement inherent in information technologies. Kurzweil, Ray. The Singularity Is Nearer (pp. 193-194). Penguin Publishing Group. Kindle Edition.
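
To make the “longevity escape velocity” arithmetic concrete, here is a minimal sketch in Python. It is not from Kurzweil's book: the starting remaining life expectancy, the ramp in annual medical gains, and the dates are all illustrative assumptions. The only idea taken from the passage is that escape velocity is the point where medicine adds more than one year of remaining life expectancy per calendar year.

```python
# Toy illustration of "longevity escape velocity" (LEV).
# All numbers here are illustrative assumptions, not figures from Kurzweil's book.

def remaining_life_expectancy(start_remaining=40.0, start_year=2025,
                              end_year=2045, gain_per_year=None):
    """Track remaining life expectancy year by year.

    gain_per_year maps year -> years of life expectancy added by medical
    progress that year; aging always subtracts 1.0 year per year.
    """
    gain_per_year = gain_per_year or {}
    remaining = start_remaining
    history = []
    for year in range(start_year, end_year + 1):
        gain = gain_per_year.get(year, 0.0)
        remaining += gain - 1.0          # progress adds, aging subtracts
        history.append((year, round(remaining, 1), gain))
    return history

# Hypothetical ramp: annual gains grow from 0.1 years in 2025 toward 1.5 years.
gains = {y: min(1.5, 0.1 + 0.14 * (y - 2025)) for y in range(2025, 2046)}

for year, remaining, gain in remaining_life_expectancy(gain_per_year=gains):
    marker = "  <- past escape velocity" if gain > 1.0 else ""
    print(f"{year}: remaining ~{remaining:5.1f} yrs (gain {gain:.2f}){marker}")
```

Under this invented ramp, remaining life expectancy shrinks for a few years, then starts growing once annual gains cross one year; the crossover date depends entirely on the assumed acceleration, which is the point of the exercise rather than a prediction.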

AI solves resource scarcity, reducing violence

 

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about the technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

Further, the more wealth grows and poverty declines, the greater incentives people have for cooperation, and the more zero-sum struggles for limited resources are alleviated. Many of us have a deeply ingrained tendency to view the struggle for scarce resources as an unavoidable cause of violence and as an inherent part of human nature. But while this has been the story of much of human history, I don’t think this will be permanent. The digital revolution has already rolled back scarcity conditions for many of the things we can easily represent digitally, from web searches to social media connections. Fighting over a copy of a physical book may be petty, but on a certain level we can understand it. Two children may tussle over a favorite printed comic because only one can have it and read it at a time. But the idea of people fighting over a PDF document is comical—because your having access to it doesn’t mean I don’t have access to it. We can create as many copies as we need, essentially for free. Once humanity has extremely cheap energy (largely from solar and, eventually, fusion) and AI robotics, many kinds of goods will be so easy to reproduce that the notion of people committing violence over them will seem just as silly as fighting over a PDF seems today. In this way the millions-fold improvement in information technologies between now and the 2040s will power transformative improvement across countless other aspects of society. Kurzweil, Ray. The Singularity Is Nearer (pp. 153-154). Penguin Publishing Group. Kindle Edition.

When AI can improve itself, there will be an intelligence explosion

Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about the technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card

For the purpose of thinking about the Singularity, though, the most important fiber in our bundle of cognitive skills is computer programming (and a range of related abilities, like theoretical computer science). This is the main bottleneck for superintelligent AI. Once we develop AI with enough programming abilities to give itself even more programming skill (whether on its own or with human assistance), there’ll be a positive feedback loop. Alan Turing’s colleague I. J. Good foresaw as early as 1965 that this would lead to an “intelligence explosion.”[138] And because computers operate much faster than humans, cutting humans out of the loop of AI development will unlock stunning rates of progress. Artificial intelligence theorists jokingly refer to this as “FOOM”—like a comic book–style sound effect of AI progress whizzing off the far end of the graph.[139]
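
Good's argument is, at bottom, a feedback loop: once some share of capability growth is driven by the AI's own current capability, growth compounds on itself. The toy model below is only a sketch of that dynamic; the growth rates and the 0.08 feedback coefficient are made-up parameters for illustration, not estimates from the card.

```python
# Minimal toy model of a recursive self-improvement feedback loop.
# Parameter values are arbitrary illustrations, not predictions.

def capability_trajectory(initial=1.0, human_rate=0.05, ai_coefficient=0.08,
                          steps=60):
    """Capability grows by a fixed human-driven increment plus a term
    proportional to current capability (the system improving itself)."""
    capability = initial
    trajectory = [capability]
    for _ in range(steps):
        capability += human_rate + ai_coefficient * capability
        trajectory.append(capability)
    return trajectory

with_feedback = capability_trajectory(ai_coefficient=0.08)
without_feedback = capability_trajectory(ai_coefficient=0.0)

for step in (0, 20, 40, 60):
    print(f"step {step:2d}: with feedback {with_feedback[step]:9.2f} | "
          f"human-only {without_feedback[step]:5.2f}")
```

The human-only curve grows linearly; the feedback curve grows geometrically, which is the qualitative shape behind the "FOOM" joke.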

The introduction of replicants will pose many other challenging social and legal questions: Are they to be considered people with full human and civil rights (such as the rights to vote and enter into contracts)? Are they responsible for contracts signed or crimes previously committed by the person they are replicating? Can they take credit for the work or social contributions of the person they are replacing? Do you have to remarry your late husband or wife who comes back as a replicant? Will replicants be ostracized or face discrimination? Under what conditions should the creation of replicants be restricted or banned? Kurzweil, Ray. The Singularity Is Nearer (pp. 102-103). Penguin Publishing Group. Kindle Edition.

 

AI leads to PV/Solar

Kurzweil, Google, the oldest living AI Scientist, 6-17, 2024, Ray Kurzweil is a computer scientist, inventor and the author of books including “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999) and “The Singularity is Near” (2005). His new book, “The Singularity is Nearer: When We Merge with AI”, will be published on June 25th, The Economist, https://archive.ph/2024.06.18-065404/https://www.economist.com/by-invitation/2024/06/17/ray-kurzweil-on-how-ai-will-transform-the-physical-world#selection-1293.0-1293.297, Ray Kurzweil on how AI will transform the physical world

Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?

The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.

By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free.
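
As a rough intuition for what "sifting through billions of chemistries in simulation" means computationally, here is a deliberately simplified screening loop. The element list and the scoring function are placeholders invented for this sketch; a real pipeline such as GNoME uses learned graph-network property models and physics-based verification, none of which is reproduced here.

```python
# Simplified sketch of in-silico materials screening.
# The composition space and the scoring function are toy placeholders, not real chemistry.
import itertools
import random

ELEMENTS = ["Si", "C", "Zr", "O", "N", "Fe", "Cu", "S"]  # toy element set

def predicted_stability(composition):
    """Stand-in for a learned property model (e.g., predicted formation energy).
    Seeding on the composition string makes the toy score reproducible."""
    random.seed("".join(composition))
    return random.uniform(-1.0, 1.0)   # lower = predicted more stable

# Enumerate ternary combinations; a real search space is vastly larger.
candidates = list(itertools.combinations(ELEMENTS, 3))
ranked = sorted(candidates, key=predicted_stability)

print(f"screened {len(candidates)} candidate compositions")
for comp in ranked[:5]:
    print("promising (toy score):", "-".join(comp), round(predicted_stability(comp), 3))
```

The point is only the shape of the workflow: enumerate candidates, score them cheaply in software, and send a short ranked list on for expensive verification.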

AI means energy abundance, which triggers cheap food and resources

Kurzweil, Google, the oldest living AI Scientist, 6-17, 2024, Ray Kurzweil is a computer scientist, inventor and the author of books including “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999) and “The Singularity is Near” (2005). His new book, “The Singularity is Nearer: When We Merge with AI”, will be published on June 25th, The Economist, https://archive.ph/2024.06.18-065404/https://www.economist.com/by-invitation/2024/06/17/ray-kurzweil-on-how-ai-will-transform-the-physical-world#selection-1293.0-1293.297, Ray Kurzweil on how AI will transform the physical world

Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet? The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress. By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free. Energy abundance enables another revolution: in manufacturing. The costs of almost all goods—from food and clothing to electronics and cars—come largely from a few common factors such as energy, labour (including cognitive labour like R&D and design) and raw materials. AI is on course to vastly lower all these costs. After cheap, abundant solar energy, the next component is human labour, which is often backbreaking and dangerous. AI is making big strides in robotics that can greatly reduce labour costs. Robotics will also reduce raw-material extraction costs, and AI is finding ways to replace expensive rare-earth elements with common ones like zirconium, silicon and carbon-based graphene. Together, this means that most kinds of goods will become amazingly cheap and abundant.
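
The manufacturing claim is essentially a cost decomposition: the price of a typical good is treated as the sum of energy, labour, and raw-material inputs, each of which AI is expected to push down. The decomposition below uses invented cost shares and decline rates purely to show how the pieces multiply out; none of the numbers come from the article.

```python
# Toy cost decomposition for a generic manufactured good.
# Cost shares and annual decline rates are invented for illustration only.

cost_shares = {"energy": 0.25, "labour": 0.45, "raw_materials": 0.30}
annual_decline = {"energy": 0.20, "labour": 0.15, "raw_materials": 0.10}  # assumed rates

def relative_cost(years):
    """Cost relative to today if each input falls at its assumed annual rate."""
    return sum(share * (1 - annual_decline[name]) ** years
               for name, share in cost_shares.items())

for years in (5, 10, 20):
    print(f"after {years:2d} years: ~{relative_cost(years):.2f}x today's cost")
```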

AI means massive increases in longevity

 

Kurzweil, Google, the oldest living AI Scientist, 6-17, 2024, Ray Kurzweil is a computer scientist, inventor and the author of books including “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999) and “The Singularity is Near” (2005). His new book, “The Singularity is Nearer: When We Merge with AI”, will be published on June 25th, The Economist, https://archive.ph/2024.06.18-065404/https://www.economist.com/by-invitation/2024/06/17/ray-kurzweil-on-how-ai-will-transform-the-physical-world#selection-1293.0-1293.297, Ray Kurzweil on how AI will transform the physical world

These advanced manufacturing capabilities will allow the price-performance of computing to maintain the exponential trajectory of the past century—a 75-quadrillion-fold improvement since 1939. This is due to a feedback loop: today’s cutting-edge AI chips are used to optimise designs for next-generation chips. In terms of calculations per second per constant dollar, the best hardware available last November could do 48bn. Nvidia’s new B200 GPUs exceed 500bn. As we build the titanic computing power needed to simulate biology, we’ll unlock the third physical revolution from AI: medicine. Despite 200 years of dramatic progress, our understanding of the human body is still built on messy approximations that are usually mostly right for most patients, but probably aren’t totally right for you. Tens of thousands of Americans a year die from reactions to drugs that studies said should help them. Yet AI is starting to turn medicine into an exact science. Instead of painstaking trial-and-error in an experimental lab, molecular biosimulation—precise computer modelling that aids the study of the human body and how drugs work—can quickly assess billions of options to find the most promising medicines. Last summer the first drug designed end-to-end by AI entered phase-2 trials for treating idiopathic pulmonary fibrosis, a lung disease. Dozens of other AI-designed drugs are now entering trials. Both the drug-discovery and trial pipelines will be supercharged as simulations incorporate the immensely richer data that AI makes possible. In all of history until 2022, science had determined the shapes of around 190,000 proteins. That year DeepMind’s AlphaFold 2 discovered over 200m, which have been released free of charge to researchers to help develop new treatments. Much more laboratory research is needed to populate larger simulations accurately, but the roadmap is clear. Next, AI will simulate protein complexes, then organelles, cells, tissues, organs and—eventually—the whole body. This will ultimately replace today’s clinical trials, which are expensive, risky, slow and statistically underpowered. Even in a phase-3 trial, there’s probably not one single subject who matches you on every relevant factor of genetics, lifestyle, comorbidities, drug interactions and disease variation. Digital trials will let us tailor medicines to each individual patient. The potential is breathtaking: to cure not just diseases like cancer and Alzheimer’s, but the harmful effects of ageing itself. Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.
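
Two of the quantitative claims in this card can be sanity-checked with simple arithmetic: the quoted jump from roughly 48 billion to over 500 billion calculations per second per constant dollar, and the point at which annual life-expectancy gains cross 12 months. The snippet below only does that arithmetic; the doubling-every-three-years assumption used for the second check is mine, not the author's.

```python
# Back-of-envelope arithmetic on the figures quoted in the card.

calc_per_dollar_2023 = 48e9    # calculations/sec per constant dollar, "last November"
calc_per_dollar_b200 = 500e9   # quoted figure for Nvidia's B200
print(f"one hardware generation: ~{calc_per_dollar_b200 / calc_per_dollar_2023:.1f}x price-performance")

# Longevity escape velocity: gains of ~6-7 weeks of life expectancy per year
# must reach 52 weeks. Assume (hypothetically) the gain doubles every 3 years
# once AI-driven medicine accelerates.
weeks_gained, year = 6.5, 2024
while weeks_gained < 52:
    year += 3
    weeks_gained *= 2
print(f"under that assumption, 52 weeks/year is crossed around {year} ({weeks_gained:.0f} weeks/yr)")
```

That crude doubling assumption happens to land inside the 2029-2035 window the card gives, but it mainly shows how sensitive the date is to the assumed acceleration.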

US leadership on AI critical to prevent military defeat

Singh, 6-13, 24,  Manisha Singh is a Senior Fellow for Artificial Intelligence at the Krach Institute for Tech Diplomacy and Former Assistant Secretary of State, The U.S. Must Win the AI Race, https://nationalinterest.org/blog/techland/us-must-win-ai-race-211430

With conflict currently present in almost every region of the world, speculation about “World War III” is difficult to avoid. If a calamity of such magnitude were to occur, it would likely be fought partly in the cyberverse. It would also undoubtedly feature the deployment of artificial intelligence (AI). This is one of the many critical reasons that America needs to lead on AI. To paraphrase Mark Zuckerberg’s tech mantra, adversaries are moving fast, and they certainly aren’t afraid to break things. As with most other significant innovations in the last century, AI was born in the United States. Rivals are racing to overtake what exists, either through their own efforts or infringing on creation occurring here. Domestic and global regulatory efforts are well underway. The question of balancing innovation and regulation is not new, but it is original in the case of AI. Perhaps the most defining feature of AI is the existential anxiety it has created. Such apprehension has been a motivating factor in the new rules of the road for the AI super highway. A group of U.S. Senators put forth a “Framework to Mitigate Extreme AI Risks,” which acknowledges the benefits of AI but highlights that it “presents a broad spectrum of risks that could be harmful to the American public.” Both a notification and licensing procedure, as well as the creation of a new regulatory body to be established by Congress, are contemplated. Although the framework isn’t binding, it does provide insight into the evolving thought process of regulators. It comes as no surprise that the European Union (EU) has already enacted a dense, onerous law set forth in 458 pages known as the EU Artificial Intelligence Act. The EU AI Act has met with mixed reactions from member state governments. It appropriately addresses concerns about potential abuse, including authoritarian-like facial recognition techniques. On the other hand, French president Emanuel Macron expressed unease that the burdensome law would disadvantage France against American, Chinese, and even British innovation, as EU rules no longer bind the United Kingdom. AI competition is extreme in both the commercial and security spheres. Companies and governments are racing to perfect the face of the future. Although enacted in the EU, the effects of its AI Act will be felt by American companies as it’s well established that cyberspace and efforts to regulate it are indeed borderless. As the first of its kind, the EU is heralding its AI Act as a model. U.S. regulators, however, should carefully evaluate the innovation-regulation balance. As noted above, America’s enemies developing AI under state control will place no limits on how quickly or mercilessly they will develop and deploy AI to gain a dystopian advantage. Efforts to overtake America happen everywhere all at once. The U.S. military doesn’t currently have the “peace through strength” numbers needed to maintain its defensive might. Meanwhile, the Chinese People’s Liberation Army (PLA) is using AI to perfect targeting and missile guidance systems. Recent International Atomic Energy Agency (IAEA) reports of Iran’s increased uranium enrichment caused alarm bells to ring in both London and Paris. Washington and Brussels should be collectively well beyond concerned by now. Add to the mix the possibility of a new axis of cyber-evil, including both state and non-state actors. China already has an advantage in possession of the natural resources required to create AI infrastructure. 
Its economy and its military are, at present, second to America. AI is a vehicle through which China can assert dominance at the expense of the Western world. The institutions established after the last world war to prevent such a mass catastrophe from happening again are passing resolutions. These are pieces of paper on which dictatorships and democratically elected governments alike agree to use AI for good and collectively police its malfeasance. The United Nations passed a resolution to promote “safe, secure and trustworthy” AI to address the world’s challenges. The International Telecommunications Union (ITU) convened an “AI for Good Summit” with aspiration goals as its name implies. History dictates that in global conflict, the most powerful tools will prevail. It is, therefore, incumbent on U.S. innovators to win the AI race and achieve the goal of “peace through strength.” Only then can a course be set to maintain stability and prevent global atrocity by actors determined to use AI in a way that will redefine the concept of war.

US leadership key to global control of artificial superintelligence

Leopold Aschenbrenner, June 2024, SITUATIONAL AWARENESS: The Decade Ahead, https://situational-awareness.ai/ situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf, Columbia University, Bachelor of Arts (BA) in Mathematics-Statistics and Economics, 2017–2021, Valedictorian. Graduated at age 19. Fired from OpenAI for leaking

Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way? Superintelligence will be the most powerful technology—and most powerful weapon—mankind has ever developed. It will give a decisive military advantage, perhaps comparable only with nuclear weapons. Authoritarians could use superintelligence for world conquest, and to enforce total control internally. Rogue states could use it to threaten annihilation. And though many count them out, once the CCP wakes up to AGI it has a clear path to being competitive (at least until and unless we drastically improve US AI lab security). Every month of lead will matter for safety too. We face the greatest risks if we are locked in a tight race, democratic allies and authoritarian competitors each racing through the already-precarious intelligence explosion at breakneck pace—forced to throw any caution by the wayside, fearing the other getting superintelligence first. Only if we preserve a healthy lead of democratic allies will we have the margin of error for navigating the extraordinarily volatile and dangerous period around the emergence of superintelligence. And only American leadership is a realistic path to developing a nonproliferation regime to avert the risks of self-destruction superintelligence will unfold. Our generation too easily takes for granted that we live in peace and freedom. And those who herald the age of AGI in SF too often ignore the elephant in the room: superintelligence is a matter of national security, and the United States must win. Whoever leads on superintelligence will have a decisive military advantage. Superintelligence is not just any other technology—hypersonic missiles, stealth, and so on—where US and liberal democracies’ leadership is highly desirable, but not strictly necessary. The military balance of power can be kept if the US falls behind on one or a couple such technologies; these technologies matter a great deal, but can be outweighed by advantages in other areas. The advent of superintelligence will put us in a situation unseen since the advent of the atomic era: those who have it will wield complete dominance over those who don’t. I’ve previously discussed the vast power of superintelligence. It’ll mean having billions of automated scientists and engineers and technicians, each much smarter than the smartest human scientists, furiously inventing new technologies, day and night. The acceleration in scientific and technological development will be extraordinary. As superintelligence is applied to R&D in military technology, we could quickly go through decades of military technological progress. The Gulf War, or: What a few-decades-worth of technological lead implies for military power. The Gulf War provides a helpful illustration of how a 20-30 year lead in military technology can be decisive. At the time, Iraq commanded the fourth-largest army in the world. In terms of numbers (troops, tanks, artillery), the US-led coalition barely matched (or was outmatched) by the Iraqis, all while the Iraqis had had ample time to entrench their defenses (a situation that would normally require a 3:1, or 5:1, advantage in military manpower to dislocate). But the US-led coalition obliterated the Iraqi army in a merely 100-hour ground war.
Coalition dead numbered a mere 292, compared to 20k-50k Iraqi dead and hundreds of thousands of others wounded or captured. The Coalition lost a mere 31 tanks, compared to the destruction of over 3,000 Iraqi tanks. The difference in technology wasn’t godlike or unfathomable, but it was utterly and completely decisive: guided and smart munitions, early versions of stealth, better sensors, better tank scopes (to see farther in the night and in dust storms), better fighter jets, an advantage in reconnaissance, and so on. (For a more recent example, recall Iran launching a massive attack of 300 missiles at Israel, “99%” of which were intercepted by superior Israel, US, and allied missile defense.) A lead of a year or two or three on superintelligence could mean as utterly decisive a military advantage as the US coalition had against Iraq in the Gulf War. A complete reshaping of the military balance of power will be on the line. Imagine if we had gone through the military technological developments of the 20th century in less than a decade. We’d have gone from horses and rifles and trenches, to modern tank armies, in a couple years; to armadas of supersonic fighter planes and nuclear weapons and ICBMs a couple years after that; to stealth and precision that can knock out an enemy before they even know you’re there another couple years after that. That is the situation we will face with the advent of superintelligence: the military technological advances of a century compressed to less than a decade. We’ll see superhuman hacking that can cripple much of an adversary’s military force, roboarmies and autonomous drone swarms, but more importantly completely new paradigms we can’t yet begin to imagine, and the inventions of new WMDs with thousandfold increases in destructive power (and new WMD defenses too, like impenetrable missile defense, that rapidly and repeatedly upend deterrence equilibria). And it wouldn’t just be technological progress. As we solve robotics, labor will become fully automated, enabling a broader industrial and economic explosion, too. It is plausible growth rates could go into the 10s of percent a year; within at most a decade, the GDP of those with the lead would trounce those behind. Rapidly multiplying robot factories would mean not only a drastic technological edge, but also production capacity to dominate in pure materiel. Think millions of missile interceptors; billions of drones; and so on. Of course, we don’t know the limits of science and the many frictions that could slow things down. But no godlike advances are necessary for a decisive military advantage. And a billion superintelligent scientists will be able to do a lot. It seems clear that within a matter of years, pre-superintelligence militaries would become hopelessly outclassed. The military advantage would be decisive even against nuclear deterrents. To be even clearer: it seems likely the advantage conferred by superintelligence would be decisive enough even to preemptively take out an adversary’s nuclear deterrent. Improved sensor networks and analysis could locate even the quietest current nuclear submarines (similarly for mobile missile launchers). Millions or billions of mouse-sized autonomous drones, with advances in stealth, could infiltrate behind enemy lines and then surreptitiously locate, sabotage, and decapitate the adversary’s nuclear forces. Improved sensors, targeting, and so on could dramatically improve missile defense (similar to, say, the Iran vs.
Israel example above); moreover, if there is an industrial explosion, robot factories could churn out thousands of interceptors for each opposing missile. And all of this is without even considering completely new scientific and technological paradigms (e.g., remotely deactivating all the nukes). It would simply be no contest. And not just no contest in the nuclear sense of “we could mutually destroy each other,” but no contest in terms of being able to obliterate the military power of a rival without taking significant casualties. A couple years of lead on superintelligence would mean complete dominance. If there is a rapid intelligence explosion, it’s plausible a lead of mere months could be decisive: months could mean the difference between roughly human-level AI systems and substantially superhuman AI systems. Perhaps possessing those initial superintelligences alone, even before being broadly deployed, would be enough for a decisive advantage, e.g. via superhuman hacking abilities that could shut down pre-superintelligence militaries, more limited drone swarms that threaten instant death for every opposing leader, official, and their families, and advanced bioweapons developed with AlphaFold-style simulation that could target specific ethnic groups, e.g. anybody but Han Chinese (or simply withhold the cure from the adversary).

 

AGI by 2027 as the models hit human-level intelligence and millions start researching

Leopold Aschenbrenner, June 2024, SITUATIONAL AWARENESS: The Decade Ahead, https://situational-awareness.ai/ situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf, Columbia University, Bachelor of Arts (BA) in Mathematics-Statistics and Economics, 2017–2021, Valedictorian. Graduated at age 19. Fired from OpenAI for leaking

 

AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027. GPT-4’s capabilities came as a shock to many: an AI system that could write code and essays, could reason through difficult math problems, and ace college exams. A few years ago, most thought these were impenetrable walls. But GPT-4 was merely the continuation of a decade of breakneck progress in deep learning. A decade earlier, models could barely identify simple images of cats and dogs; four years earlier, GPT-2 could barely string together semi-plausible sentences. Now we are rapidly saturating all the benchmarks we can come up with. And yet this dramatic progress has merely been the result of consistent trends in scaling up deep learning. There have been people who have seen this for far longer. They were scoffed at, but all they did was trust the trendlines. The trendlines are intense, and they were right. The models, they just want to learn; you scale them up, and they learn more. I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

In this piece, I will simply “count the OOMs” (OOM = order of magnitude, 10x = 1 order of magnitude): look at the trends in 1) compute, 2) algorithmic efficiencies (algorithmic progress that we can think of as growing “effective compute”), and 3) ”unhobbling” gains (fixing obvious ways in which models are hobbled by default, unlocking latent capabilities and giving them tools, leading to step-changes in usefulness). We trace the growth in each over four years before GPT-4, and what we should expect in the four years after, through the end of 2027. [Figure 1: Rough estimates of past and future scaleup of effective compute (both physical compute and algorithmic efficiencies), based on the public estimates discussed in this piece. As we scale models, they consistently get smarter, and by “counting the OOMs” we get a rough sense of what model intelligence we should expect in the (near) future. This graph shows only the scaleup in base models; “unhobblings” are not pictured.] Given deep learning’s consistent improvements for every OOM of effective compute, we can use this to project future progress. Publicly, things have been quiet for a year since the GPT-4 release, as the next generation of models has been in the oven—leading some to proclaim stagnation and that deep learning is hitting a wall. But by counting the OOMs, we get a peek at what we should actually expect. The upshot is pretty simple. GPT-2 to GPT-4—from models that were impressive for sometimes managing to string together a few coherent sentences, to models that ace high-school exams—was not a one-time gain. We are racing through the OOMs extremely rapidly, and the numbers indicate we should expect another ~100,000x effective compute scaleup—resulting in another GPT-2-to-GPT-4-sized qualitative jump—over four years. Moreover, and critically, that doesn’t just mean a better chatbot; picking the many obvious low-hanging fruit on “unhobbling” gains should take us from chatbots to agents, from a tool to something that looks more like drop-in remote worker replacements. While the inference is simple, the implication is striking. Another jump like that very well could take us to AGI, to models as smart as PhDs or experts that can work beside us as coworkers. Perhaps most importantly, if these AI systems could automate AI research itself, that would set in motion intense feedback loops—the topic of the next piece in the series. Even now, barely anyone is pricing all this in. But situational awareness on AI isn’t actually that hard, once you step back and look at the trends. If you keep being surprised by AI capabilities, just start counting the OOMs. The last four years: We have machines now that we can basically talk to like humans. It’s a remarkable testament to the human capacity to adjust that this seems normal, that we’ve become inured to the pace of progress. But it’s worth stepping back and looking at the progress of just the last few years. On everything from AP exams to the SAT, GPT-4 scores better than the vast majority of high schoolers. Of course, even GPT-4 is still somewhat uneven; for some tasks it’s much better than smart high-schoolers, while there are other tasks it can’t yet do. That said, I tend to think most of these limitations come down to obvious ways models are still hobbled, as I’ll discuss in-depth later.

 

How did this happen? The magic of deep learning is that it just works—and the trendlines have been astonishingly consistent, despite naysayers at every turn.

See IIIa, “Racing to the Trillion-Dollar Cluster”; based on that analysis, an additional 2 OOMs of compute (a cluster in the $10s of billions) seems very likely to happen by the end of 2027; even a cluster closer to +3 OOMs of compute ($100 billion+) seems plausible (and is rumored to be in the works at Microsoft/OpenAI).
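
Since the whole argument in this card runs on adding exponents, a short worked version of the "counting the OOMs" arithmetic may help. The 0.5 OOM/year figures come from the card; the extra OOM attributed to unhobbling is an explicit assumption here, standing in for gains the author describes but does not quantify in this excerpt.

```python
# "Counting the OOMs": effective compute scale-up implied by the quoted trend rates.

YEARS = 4                      # roughly GPT-4 through the end of 2027
COMPUTE_OOM_PER_YEAR = 0.5     # quoted trend in physical compute
ALGO_OOM_PER_YEAR = 0.5        # quoted trend in algorithmic efficiency

base_ooms = YEARS * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)
print(f"compute + algorithmic efficiencies: {base_ooms:.1f} OOMs = {10 ** base_ooms:,.0f}x")

# The excerpt's ~100,000x figure implies roughly one further OOM; treating that
# as the contribution of "unhobbling" and related gains is an assumption.
unhobbling_ooms = 1.0
total_ooms = base_ooms + unhobbling_ooms
print(f"with +{unhobbling_ooms:.0f} OOM assumed for unhobbling: {10 ** total_ooms:,.0f}x")
```

Because OOMs add while multipliers multiply, small per-year rates compound into very large factors over even a four-year window, which is the core of the card's projection.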

AI uses massive amounts of energy/restrictions threaten AI

Talgo, 6-17, 24, Chris Talgo is editorial director at The Heartland Institute., https://thehill.com/opinion/energy-environment/4721764-ai-will-be-incompatible-with-anti-fossil-energy-activism/

Like it or not, the era of artificial intelligence has arrived, morphing from a science fiction fantasy to the cornerstone of the future world economy. AI and other ground-breaking technologies like quantum computing will soon become so prevalent throughout society that we will wonder how we were able to exist without them. Like cell phones and the internet, AI will fundamentally change our world and the human experience — hopefully for the better. As of now, the largest obstacle to the rise of AI is the misguided energy policy that governments across the West have embarked upon in recent decades. The march toward net-zero carbon dioxide emissions and the insistence that we replace fossil fuel-based energy sources with so-called renewable energy sources like wind and solar poses the largest threat to an AI-based future. In other words, the massive, ill-advised push by climate alarmists to transition almost overnight from reliable and affordable energy sources such as oil, natural gas and coal to not-ready-for-primetime green energy is incompatible with the huge amount of energy that AI will require in the decades to come. Don’t take my word for it. To date, scores of tech experts have said the exact same thing. From Mark Zuckerberg to Elon Musk and many more, there is a consensus among Silicon Valley’s leaders that AI will necessitate a huge increase in demand for dependable and inexpensive energy. BlackRock CEO Larry Fink, one of the world’s loudest voices for the transition to green energy, has even changed his tune in recent months. “These AI data centers are going to require more power than anything we could ever have imagined,” Fink said. “We at the G7 do not have enough power.” So, what is the solution? How can we ensure that we have enough energy to power the mega-data centers that AI needs while ensuring we have enough energy to fulfill the needs of American families, businesses and everything else? First, we must stop the foolish movement to transition away from a primarily fossil-fuel based energy grid toward one that relies on unaffordable, unreliable and unscalable wind and solar power. As of this writing, wind and solar account for only 14 percent of the total energy consumed in the U.S. In their present form, wind and solar simply cannot come close to providing enough energy to meet current energy demand, let alone the huge increase that is sure to come when AI data centers begin taking their toll on the nation’s power grid. What’s more, wind and solar can provide energy only intermittently, meaning they can provide power only when the wind is blowing and the sun is shining. AI data centers require a constant flow of energy. They cannot be shut down when it’s cloudy or calm. Second, we should embrace clean-burning natural gas as a viable source of energy to meet the growing demand as more AI data centers are built in the years to come. Unlike renewables, natural gas can provide affordable, reliable and abundant energy. Moreover, it does not require environmentally hazardous transmission lines spanning hundreds of miles through pristine habitats. And, in what can be described as a positive feedback loop, tech experts claim that AI can aid in the discovery of new natural gas deposits. According to the American Gas Association, “AI itself is helping to fill the fuel demand created by AI data centers. The days of drilling an exploratory well in hopes there might be natural gas at the bottom of it are far behind us.
Seismic surveys allow producers to search for detectable deposits of natural gas by generating, recording and analyzing sound waves in a process similar to how bats use echolocation to navigate or ships use sonar to search for potential obstacles. While this process previously required highly trained human analysts to examine the seismic survey returns, AI trained on the results of previous seismic surveys is being deployed to find deposits that human operators would have previously missed.” Third, we should reconsider nuclear power as a steady source of baseload energy. “Nuclear power is the most reliable energy source, and it’s not even close,” the U.S. Department of Energy states. Nuclear power is also far more environmentally friendly than wind or solar. “Wind farms require up to 360 times as much land area to produce the same amount of electricity as a nuclear energy facility…Solar photovoltaic facilities require up to 75 times the land area,” reports the Nuclear Energy Institute. The AI race is going to be a worldwide event. China, Russia and many more countries are pouring resources into their AI programs. The U.S. must not allow these adversarial nations to gain the upper hand in the critical competition for AI supremacy.

Only US leadership can control superintelligence

Leopold Aschenbrenner, June 2024,  S I T U AT I O N A L  AWA R E N E S S: The Decade Ahead, https://situational-awareness.ai/ situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf, Columbia UniversityColumbia University Bachelor of Arts – BA, Mathematics-Statistics and EconomicsBachelor of Arts – BA, Mathematics-Statistics and Economic 2017 – 20212017 – 2021 Valedictorian. Graduated at age 19, Fired from OpenAI for leaking 

Superintelligence will be the most powerful technology— and most powerful weapon—-mankind has ever developed. It will give a decisive military advantage, perhaps comparable only with nuclear weapons. Authoritarians could use super- intelligence for world conquest, and to enforce total control internally. Rogue states could use it to threaten annihilation. And though many count them out, once the CCP wakes up to AGI it has a clear path to being competitive (at least until and unless we drastically improve US AI lab security). Every month of lead will matter for safety too. We face the greatest risks if we are locked in a tight race, democratic allies and authoritarian competitors each racing through the already- precarious intelligence explosion at breakneck pace—forced to throw any caution by the wayside, fearing the other getting superintelligence first. Only if we preserve a healthy lead of democratic allies will we have the margin of error for navigat-ing the extraordinarily volatile and dangerous period around the emergence of superintelligence. And only American leadership is a realistic path to developing a nonproliferation regime to avert the risks of self-destruction superintelligence will un- fold. Our generation too easily takes for granted that we live in peace and freedom. And those who herald the age of AGI in SF too often ignore the elephant in the room: superintelligence is a matter of national security, and the United States must win. Whoever leads on superintelligence will have a decisive military ad- vantag Superintelligence is not just any other technology—hypersonic missiles, stealth, and so on—where US and liberal democracies’ leadership is highly desirable, but not strictly necessary. The military balance of power can be kept if the US falls behind on one or a couple such technologies; these technologies matter a great deal, but can be outweighed by advantages in other areas. The advent of superintelligence will put us in a situation un- seen since the advent of the atomic era: those who have it will wield complete dominance over those who don’t. I’ve previously discussed the vast power of superintelligence. It’ll mean having billions of automated scientists and engineers and technicians, each much smarter than the smartest human scientists, furiously inventing new technologies, day and night. The acceleration in scientific and technological development will be extraordinary. As superintelligence is applied to R&D in military technology, we could quickly go through decades of military technological progress.

The Gulf War, or: What a few-decades-worth of techno- logical lead implies for military power The Gulf War provides a helpful illustration of how a 20- 30 year lead in military technology can be decisive. At the time, Iraq commanded the fourth-largest army in the world. In terms of numbers (troops, tanks, artillery), the US-led coalition barely matched (or was outmatched) by the Iraqis, all while the Iraqis had had ample time to en- trench their defenses (a situation that would normally require a 3:1, or 5:1, advantage in military manpower to dislocate).

But the US-led coalition obliterated the Iraqi army in a merely 100-hour ground war. Coalition dead numbered a mere 292, compared to 20k-50k Iraqi dead and hundreds of thousands of others wounded or captured. The Coalition lost a mere 31 tanks, compared to the destruction of over 3,000 Iraqi tanks. The difference in technology wasn’t godlike or unfath- omable, but it was utterly and completely decisive: guided and smart munitions, early versions of stealth, better sen- sors, better tank scopes (to see farther in the night and in dust storms), better fighter jets, an advantage in reconnais- sance, and so on. (For a more recent example, recall Iran launching a massive attack of 300 missiles at Israel, “99%” of which were inter- cepted by superior Israel, US, and allied missile defense.) A lead of a year or two or three on superintelligence could mean as utterly decisive a military advantage as the US coalition had against Iraq in the Gulf War. A complete reshaping of the military balance of power will be on the line. Imagine if we had gone through the military technological de- velopments of the 20th century in less than a decade. We’d have gone from horses and rifles and trenches, to modern tank armies, in a couple years; to armadas of supersonic fighter planes and nuclear weapons and ICBMs a couple years after that; to stealth and precision that can knock out an enemy be- fore they even know you’re there another couple years after that. That is the situation we will face with the advent of superintelligence: the military technological advances of a century compressed to less than a decade. We’ll see superhuman hack- ing that can cripple much of an adversary’s military force, roboarmies and autonomous drone swarms, but more im- portantly completely new paradigms we can’t yet begin to imagine, and the inventions of new WMDs with thousandfold increases in destructive power (and new WMD defenses too, like impenetrable missile defense, that rapidly and repeatedly upend deterrence equilibria). And it wouldn’t just be technological progress. As we solve robotics, labor will become fully automated, enabling a broader industrial and economic explosion, too. It is plausible growth rates could go into the 10s of percent a year; within at most a decade, the GDP of those with the lead would trounce those behind. Rapidly multiplying robot factories would mean not only a drastic technological edge, but also production capacity to dominate in pure materiel. Think millions of missile inter- ceptors; billions of drones; and so on. Of course, we don’t know the limits of science and the many frictions that could slow things down. But no godlike advances are necessary for a decisive military advantage. And a billion superintelligent scientists will be able to do a lot. It seems clear that within a matter of years, pre-superintelligence militaries would become hopelessly outclassed.

The military advantage would be decisive even against nuclear deterrents

To be even clearer: it seems likely the advantage conferred by superintelligence would be decisive enough even to preemptively take out an adversary’s nuclear deterrent. Improved sensor networks and analysis could locate even the quietest current nuclear submarines (and similarly for mobile missile launchers). Millions or billions of mouse-sized autonomous drones, with advances in stealth, could infiltrate behind enemy lines and then surreptitiously locate, sabotage, and decapitate the adversary’s nuclear forces. Improved sensors, targeting, and so on could dramatically improve missile defense (similar to, say, the Iran vs. Israel example above); moreover, if there is an industrial explosion, robot factories could churn out thousands of interceptors for each opposing missile. And all of this is without even considering completely new scientific and technological paradigms (e.g., remotely deactivating all the nukes).

It would simply be no contest. And not just no contest in the nuclear sense of “we could mutually destroy each other,” but no contest in terms of being able to obliterate the military power of a rival without taking significant casualties. A couple years of lead on superintelligence would mean complete dominance. If there is a rapid intelligence explosion, it’s plausible a lead of mere months could be decisive: months could mean the difference between roughly human-level AI systems and substantially superhuman AI systems. Perhaps possessing those initial superintelligences alone, even before being broadly deployed, would be enough for a decisive advantage, e.g. via superhuman hacking abilities that could shut down pre-superintelligence militaries, more limited drone swarms that threaten instant death for every opposing leader, official, and their families, and advanced bioweapons developed with AlphaFold-style simulation that could target specific ethnic groups, e.g. anybody but Han Chinese (or simply withhold the cure from the adversary).

AGI plausible by 2027 as the models hit human-level intelligence and millions start researching

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.

GPT-4’s capabilities came as a shock to many: an AI system that could write code and essays, could reason through difficult math problems, and ace college exams. A few years ago, most thought these were impenetrable walls. But GPT-4 was merely the continuation of a decade of breakneck progress in deep learning. A decade earlier, models could barely identify simple images of cats and dogs; four years earlier, GPT-2 could barely string together semi-plausible sentences. Now we are rapidly saturating all the benchmarks we can come up with. And yet this dramatic progress has merely been the result of consistent trends in scaling up deep learning. There have been people who have seen this for far longer. They were scoffed at, but all they did was trust the trendlines. The trendlines are intense, and they were right. The models, they just want to learn; you scale them up, and they learn more.

I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

In this piece, I will simply “count the OOMs” (OOM = order of magnitude; 10x = 1 order of magnitude): look at the trends in 1) compute, 2) algorithmic efficiencies (algorithmic progress that we can think of as growing “effective compute”), and 3) “unhobbling” gains (fixing obvious ways in which models are hobbled by default, unlocking latent capabilities and giving them tools, leading to step-changes in usefulness). We trace the growth in each over the four years before GPT-4, and what we should expect in the four years after, through the end of 2027. Given deep learning’s consistent improvements for every OOM of effective compute, we can use this to project future progress.

[Figure 1: Rough estimates of past and future scaleup of effective compute (both physical compute and algorithmic efficiencies), based on the public estimates discussed in this piece. As we scale models, they consistently get smarter, and by “counting the OOMs” we get a rough sense of what model intelligence we should expect in the (near) future. This graph shows only the scaleup in base models; “unhobblings” are not pictured.]

Publicly, things have been quiet for a year since the GPT-4 release, as the next generation of models has been in the oven—leading some to proclaim stagnation and that deep learning is hitting a wall. (Predictions they’ve made every year.) But by counting the OOMs, we get a peek at what we should actually expect.

The upshot is pretty simple. GPT-2 to GPT-4—from models that were impressive for sometimes managing to string together a few coherent sentences, to models that ace high-school exams—was not a one-time gain. We are racing through the OOMs extremely rapidly, and the numbers indicate we should expect another ~100,000x effective compute scaleup—resulting in another GPT-2-to-GPT-4-sized qualitative jump—over four years. Moreover, and critically, that doesn’t just mean a better chatbot; picking the many obvious low-hanging fruit on “unhobbling” gains should take us from chatbots to agents, from a tool to something that looks more like drop-in remote worker replacements.
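To make “counting the OOMs” concrete, here is a minimal sketch of the arithmetic in Python, using only the rough per-year rates quoted in this excerpt. Treat it as illustrative: the quoted rates are rough, which is why the text’s headline ~100,000x (about 5 OOMs) sits a bit above the ~4 OOMs this point estimate gives.

```python
# "Counting the OOMs": a minimal sketch of the arithmetic described above.
# The ~0.5 OOM/year figures are the rough rates quoted in this excerpt.

compute_ooms_per_year = 0.5   # physical compute scaleup
algo_ooms_per_year = 0.5      # algorithmic efficiencies ("effective compute")
years = 4                     # GPT-4 through the end of 2027

total_ooms = (compute_ooms_per_year + algo_ooms_per_year) * years
print(f"~{total_ooms:.0f} OOMs of effective compute = ~{10 ** total_ooms:,.0f}x")
# -> ~4 OOMs = ~10,000x from scale alone; "unhobbling" gains (chatbot -> agent)
#    are counted on top of this and are not captured in the multiplier.
```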

While the inference is simple, the implication is striking. Another jump like that very well could take us to AGI, to models as smart as PhDs or experts that can work beside us as coworkers. Perhaps most importantly, if these AI systems could automate AI research itself, that would set in motion intense feedback loops—the topic of the next piece in the series. Even now, barely anyone is pricing all this in. But situational awareness on AI isn’t actually that hard, once you step back and look at the trends. If you keep being surprised by AI capabilities, just start counting the OOMs.

The last four years

We have machines now that we can basically talk to like humans. It’s a remarkable testament to the human capacity to adjust that this seems normal, that we’ve become inured to the pace of progress. But it’s worth stepping back and looking at the progress of just the last few years.

GPT-4’s limits are artificially imposed

On everything from AP exams to the SAT, GPT-4 scores better than the vast majority of high schoolers.

Of course, even GPT-4 is still somewhat uneven; for some tasks it’s much better than smart high-schoolers, while there are other tasks it can’t yet do. That said, I tend to think most of these limitations come down to obvious ways models are still hobbled, as I’ll discuss in-depth later. The raw intelligence...


If there’s one lesson from the past decade of AI, it’s that you should never bet against deep learning.

How did this happen? The magic of deep learning is that it just works—and the trendlines have been astonishingly consistent, despite naysayers at every turn.

All cognitive jobs might be automated by 2027 

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

We are on course for AGI by 2027. These AI systems will be able to automate basically all cognitive jobs (think: all jobs that could be done remotely). To be clear—the error bars are large. Progress could stall as we run out of data, if the algorithmic breakthroughs necessary to crash through the data wall prove harder than expected. Maybe unhobbling doesn’t go as far, and we are stuck with merely expert chatbots, rather than expert coworkers. Perhaps the decade-long trendlines break, or scaling deep learning hits a wall for real this time. (Or an algorithmic breakthrough, even simple unhobbling that unleashes the test-time compute overhang, could be a paradigm shift, accelerating things further and leading to AGI even earlier.) In any case, we are racing through the OOMs, and it requires no esoteric beliefs, merely trend extrapolation of straight lines, to take the possibility of AGI—true AGI—by 2027 extremely seriously. It seems like many are in the game of downward-defining AGI these days, as just a really good chatbot or whatever. What I mean is an AI system that could fully automate my or my friends’ job, that could fully do the work of an AI researcher or engineer. Perhaps some areas, like robotics, might take longer to figure out by default. And the societal rollout, e.g. in medical or legal professions, could easily be slowed by societal choices or regulation. But once models can automate AI research itself, that’s enough—enough to kick off intense feedback loops—and we could very quickly make further progress, the automated AI engineers themselves solving all the remaining bottlenecks to fully automating everything. In particular, millions of automated researchers could very plausibly compress a decade of further algorithmic progress into a year or less. AGI will merely be a small taste of the superintelligence soon to follow. (More on that in the next chapter.)

AGIs will quickly build superintelligence

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.

AI progress won’t stop at human-level. After initially learning from the best human games, AlphaGo started playing against itself—and it quickly became superhuman, playing extremely creative and complex moves that a human would never have come up with.

We discussed the path to AGI in the previous piece. Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.

The jump to superintelligence would be wild enough at the current rapid but continuous rate of AI progress (if we could make the jump to AGI in 4 years from GPT-4, what might another 4 or 8 years after that bring?). But it could be much faster than that, if AGI automates AI research itself.

Once we get AGI, we won’t just have one AGI. I’ll walk through the numbers later, but: given inference GPU fleets by then, we’ll likely be able to run many millions of them (perhaps 100 million human-equivalents, and soon after at 10x+ human speed). Even if they can’t yet walk around the office or make coffee, they will be able to do ML research on a computer. Rather than a few hundred researchers and engineers at a leading AI lab, we’d have more than 100,000x that—furiously working on algorithmic breakthroughs, day and night. Yes, recursive self-improvement, but no sci-fi required; they would need only to accelerate the existing trendlines of algorithmic progress.

Automated AI research could probably compress a human-decade of algorithmic progress into less than a year (and that seems conservative). That’d be 5+ OOMs, another GPT-2-to-GPT-4-sized jump, on top of AGI—a qualitative jump like that from a preschooler to a smart high schooler, on top of AI systems already as smart as expert AI researchers/engineers.

There are several plausible bottlenecks—including limited compute for experiments, complementarities with humans, and algorithmic progress becoming harder—which I’ll address, but none seem sufficient to definitively slow things down.

Before we know it, we would have superintelligence on our hands—AI systems vastly smarter than humans, capable of novel, creative, complicated behavior we couldn’t even begin to understand—perhaps even a small civilization of billions of them. Their power would be vast, too. As superintelligence is applied to R&D in other fields, explosive progress would broaden from just ML research; soon they’d solve robotics, make dramatic leaps across other fields of science and technology within years, and an industrial explosion would follow.

Superintelligence would likely provide a decisive military advantage, and unfold untold powers of destruction. We will be faced with one of the most intense and volatile moments of human history.

Need government support to win the AGI race

We’ll need the government project to win the race against the authoritarian powers—and to give us the clear lead and breathing room necessary to navigate the perils of this situation. We might as well give up if we can’t prevent the instant theft of superintelligence model weights. We will want to bundle Western efforts: bring together our best scientists, use every GPU we can find, and ensure the trillions of dollars of cluster buildouts happen in the United States. We will need to protect the datacenters against adversary sabotage, or outright attack.

Perhaps most of all, it will take American leadership to develop—and if necessary, enforce—a nonproliferation regime. We’ll need to stop Russia, North Korea, Iran, and terrorist groups from using their own superintelligence to develop technology and weaponry that would let them hold the world hostage.

We’ll need to use superintelligence to harden the security of our critical infrastructure, military, and government to defend against extreme new hacking capabilities. We’ll need to use superintelligence to stabilize the offense/defense balance of advances in biology or similar. We’ll need to develop tools to safely control superintelligence, and to shut down rogue superintelligences that come out of others’ uncareful projects. AI systems and robots will be moving at 10-100x+ human speed; everything will start happening extremely quickly. We need to be ready to handle whatever other six-sigma upheavals—and concomitant threats—come out of compressing a century’s worth of technological progress into a few years.

At least in this initial period, we will be faced with the most extraordinary national security exigency. Perhaps nobody is up for this task. But of the options we have, The Project is the only sane one.

The Project is inevitable; whether it’s good is not

Ultimately, my main claim here is descriptive: whether we like it or not, superintelligence won’t look like an SF startup, and in some way will be primarily in the domain of national security. I’ve brought up The Project a lot to my San Francisco friends in the past year. Perhaps what’s surprised me most is how surprised most people are about the idea. They simply haven’t considered the possibility. But once they consider it, most agree that it seems obvious. If we are at all right about what we think we are building, of course, by the end this will be (in some form) a government project. If a lab developed literal superintelligence tomorrow, of course the Feds would step in.

One important free variable is not if but when. Does the government not realize what’s happening until we’re in the middle of an intelligence explosion—or will it realize a couple years beforehand? If the government project is inevitable, earlier seems better. We’ll dearly need those couple years to do the security crash program, to get the key officials up to speed and prepared, to build a functioning merged lab, and so on. It’ll be far more chaotic if the government only steps in at the very end (and the secrets and weights will have already been stolen).

Another important free variable is the international coalition we can rally: both a tighter alliance of democracies for developing superintelligence, and a broader benefit-sharing offer made to the rest of the world.

  • The former might look like the Quebec Agreement: a secret pact between Churchill and Roosevelt to pool their resources to develop nuclear weapons, while not using them against each other or against others without mutual consent. We’ll want to bring in the UK (DeepMind), East Asian allies like Japan and South Korea (chip supply chain), and NATO/other core democratic allies (broader industrial base). A united effort will have more resources and talent, and control the whole supply chain; enable close coordination on safety, national security, and military challenges; and provide helpful checks and balances on wielding the power of superintelligence.

  • The latter might look like Atoms for Peace, the IAEA, and the NPT. We should offer to share the peaceful benefits of superintelligence with a broader group of countries (including non-democracies), and commit to not offensively using superintelligence against them. In exchange, they refrain from pursuing their own superintelligence projects, make safety commitments on the deployment of AI systems, and accept restrictions on dual-use applications. The hope is that this offer reduces the incentives for arms races and proliferation, and brings a broad coalition under a US-led umbrella for the post-superintelligence world order.

Perhaps the most important free variable is simply whether the inevitable government project will be competent. How will it be organized? How can we get this done? How will the checks and balances work, and what does a sane chain of command look like? Scarcely any attention has gone into figuring this out. Almost all other AI lab and AI governance politicking is...

(Footnote 119: To my Progress Studies brethren: you should think about this; it will be the culmination of your intellectual project! You spend all this time studying American government research institutions, their decline over the last half-century, and what it would take to make them effective again. Tell me: how will we make The Project effective?)

The endgame

And so by 27/28, the endgame will be on. By 28/29 the intelligence explosion will be underway; by 2030, we will have summoned superintelligence, in all its power and might.

Whoever they put in charge of The Project is going to have a hell of a task: to build AGI, and to build it fast; to put the American economy on wartime footing to make hundreds of millions of GPUs; to lock it all down, weed out the spies, and fend off all-out attacks by the CCP; to somehow manage a hundred million AGIs furiously automating AI research, making a decade’s leaps in a year, and soon producing AI systems vastly smarter than the smartest humans; to somehow keep things together enough that this doesn’t go off the rails and produce rogue superintelligence that tries to seize control from its human overseers; to use those superintelligences to develop whatever new technologies will be necessary to stabilize the situation and stay ahead of adversaries, rapidly remaking US forces to integrate those; all while navigating what will likely be the tensest international situation ever seen. They better be good, I’ll say that.

For those of us who get the call to come along for the ride, it’ll be... stressful. But it will be our duty to serve the free world—and all of humanity. If we make it through and get to look back on those years, it will be the most important thing we ever did. And while whatever secure facility they find probably won’t have the pleasantries of today’s ridiculously-overcomped-AI-researcher lifestyle, it won’t be so bad. SF already feels like a peculiar AI-researcher college town; probably this won’t be so different. It’ll be the same weirdly small circle sweating the scaling curves during the day and hanging out over the weekend, kibitzing over AGI and the lab-politics-of-the-day.

[Figure 38: Oppenheimer and Groves.]

Except, well—the stakes will be all too real. See you in the desert, friends.

Superintelligence is key to national security

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism. The core tenets are simple:

  1. Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it’ll be the most important thing we ever do.

  2. America must lead. The torch of liberty will not survive Xi getting AGI. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can’t simply “pause”; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won’t cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.

  3. We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we’re summoning is one we cannot yet fully control. These are manageable—but improvising won’t cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.

We only need to automate AI research to get to AGI

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

We don’t need to automate everything—just AI research. A common objection to transformative impacts of AGI is that it will be hard for AI to do everything. Look at robotics, for instance, doubters say; that will be a gnarly problem, even if AI is cognitively at the levels of PhDs. Or take automating biology R&D, which might require lots of physical lab-work and human experiments.

But we don’t need robotics—we don’t need many things—for AI to automate AI research. The jobs of AI researchers and engineers at leading labs can be done fully virtually and don’t run into real-world bottlenecks in the same way (though it will still be limited by compute, which I’ll address later). And the job of an AI researcher is fairly straightforward, in the grand scheme of things: read ML literature and come up with new questions or ideas, implement experiments to test those ideas, interpret the results, and repeat. This all seems squarely in the domain where simple extrapolations of current AI capabilities could easily take us to or beyond the levels of the best humans by the end of 2027.

It’s worth emphasizing just how straightforward and hacky some of the biggest machine learning breakthroughs of the last decade have been: “oh, just add some normalization” (LayerNorm/BatchNorm), or “do f(x)+x instead of f(x)” (residual connections), or “fix an implementation bug” (Kaplan → Chinchilla scaling laws). AI research can be automated. And automating AI research is all it takes to kick off extraordinary feedback loops.
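To make the “f(x)+x” and normalization examples concrete, here is a minimal sketch (in PyTorch, my choice of framework for illustration, not something the essay specifies) of a residual block with layer normalization, the two “hacky” ideas named above:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A tiny transformer-style sub-block: LayerNorm, then an MLP,
    with the input added back in ("do f(x)+x instead of f(x)")."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)          # "just add some normalization"
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.norm(x))      # residual connection: f(x) + x

x = torch.randn(2, 16, 512)                    # (batch, sequence, dim)
block = ResidualBlock(dim=512, hidden=2048)
print(block(x).shape)                          # torch.Size([2, 16, 512])
```

The point of the sketch is only that each idea is a line or two of code; the research difficulty lay in discovering that such small changes matter, which is exactly the kind of iterative experimentation the passage argues can be automated.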

Superintelligence leadership key for AI safety

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

It is the cursed history of science and technology that as they have unfolded their wonders, they have also expanded the means of destruction: from sticks and stones, to swords and spears, rifles and cannons, machine guns and tanks, bombers and missiles, nuclear weapons. The “destruction/$” curve has consistently gone down as technology has advanced. We should expect the rapid technological progress post-superintelligence to follow this trend.

Perhaps dramatic advances in biology will yield extraordinary new bioweapons, ones that spread silently, swiftly, before killing with perfect lethality on command (and that can be made extraordinarily cheaply, affordable even for terrorist groups). Perhaps new kinds of nuclear weapons enable the size of nuclear arsenals to increase by orders of magnitude, with new delivery mechanisms that are undetectable. Perhaps mosquito-sized drones, each carrying a deadly poison, could be targeted to kill every member of an opposing nation. It’s hard to know what a century’s worth of technological progress would yield—but I am confident it would unfold appalling possibilities.

Humanity barely evaded self-destruction during the Cold War. On the historical view, the greatest existential risk posed by AGI is that it will enable us to develop extraordinary new means of mass death. This time, these means could even proliferate to become accessible to rogue actors or terrorists (especially if, as on the current course, the superintelligence weights aren’t sufficiently protected, and can be directly stolen by North Korea, Iran, and co.).

North Korea already has a concerted bioweapons program: the US assesses that “North Korea has a dedicated, national level offensive program” to develop and produce bioweapons. It seems plausible that their primary constraint is how far their small circle of top scientists has been able to push the limits of (synthetic) biology. What happens when that constraint is removed, when they can use millions of superintelligences to accelerate their bioweapons R&D? For example, the US assesses that North Korea currently has “limited ability” to genetically engineer biological products—what happens when that becomes unlimited? With what unholy new concoctions will they hold us hostage?

Moreover, as discussed in the superalignment section, there will be extreme safety risks around and during the intelligence explosion—we will be faced with novel technical challenges to ensure we can reliably trust and control superhuman AI systems. This very well may require us to slow down at some critical moments, say, delaying by 6 months in the middle of the intelligence explosion to get additional assurances on safety, or using a large fraction of compute on alignment research rather than capabilities progress.

Some hope for some sort of international treaty on safety. This seems fanciful to me. The world where both the CCP and USG are AGI-pilled enough to take safety risk seriously is also the world in which both realize that international economic and military predominance is at stake, that being months behind on AGI could mean being permanently left behind. If the race is tight, any arms control equilibrium, at least in the early phase around superintelligence, seems extremely unstable. In short, “breakout” is too easy: the incentive (and the fear that others will act on this incentive) to race ahead with an intelligence explosion, to reach superintelligence and the decisive advantage, is too great. At the very least, the odds we get something good-enough here seem slim. (How have those climate treaties gone? That seems like a dramatically easier problem compared to this.)

The main—perhaps the only—hope we have is that an alliance of democracies has a healthy lead over adversarial powers. The United States must lead, and use that lead to enforce safety norms on the rest of the world. That’s the path we took with nukes, offering assistance on the peaceful uses of nuclear technology in exchange for an international nonproliferation regime (ultimately underwritten by American military power)—and it’s the only path that’s been shown to work.

(Footnote: Consider the following comparison of unstable vs. stable arms control equilibria. 1980s arms control during the Cold War reduced nuclear weapons substantially, but targeted a stable equilibrium: the US and the Soviet Union still had thousands of nuclear weapons. MAD was assured, even if one of the actors tried a crash program to build more nukes; and a rogue nation could try to build some nukes of their own, but not fundamentally threaten the US or the Soviet Union with overmatch. However, when disarmament is to very low levels of weapons or occurs amidst rapid technological change, the equilibrium is unstable: a rogue actor or treaty-breaker can easily start a crash program and threaten to totally overmatch the other players. Zero nukes wouldn’t be a stable equilibrium; similarly, this paper has interesting historical case studies (such as post-WWI arms limitations and the Washington Naval Treaty) where disarmament in similarly dynamic situations destabilized, rather than stabilized. If mere months of lead on AGI would give an utterly decisive advantage, this makes stable disarmament on AI similarly difficult: a rogue upstart or a treaty-breaker could gain a huge edge by secretly starting a crash program; the temptation would be too great for any sort of arrangement to be stable.)

Perhaps most importantly, a healthy lead gives us room to maneuver: the ability to “cash in” parts of the lead, if necessary, to get safety right, for example by devoting extra work to alignment during the intelligence explosion.

The safety challenges of superintelligence would become extremely difficult to manage if you are in a neck-and-neck arms race. A 2-year vs. a 2-month lead could easily make all the difference. If we have only a 2-month lead, we have no margin at all for safety. In fear of the CCP’s intelligence explosion, we’d almost certainly race, no holds barred, through our own intelligence explosion—barreling towards AI systems vastly smarter than humans in months, without any ability to slow down to get key decisions right, with all the risks of superintelligence going awry that implies. We’d face an extremely volatile situation, as we and the CCP rapidly developed extraordinary new military technology that repeatedly destabilized deterrence.

If our secrets and weights aren’t locked down, it might even mean a range of other rogue states are close as well, each of them using superintelligence to furnish their own new arsenal of super-WMDs. Even if we barely managed to inch out ahead, it would likely be a Pyrrhic victory; the existential struggle would have brought the world to the brink of total self-destruction.

Superintelligence looks very different if the democratic allies have a healthy lead, say 2 years. That buys us the time necessary to navigate the unprecedented series of challenges we’ll face around and after superintelligence, and to stabilize the situation.

(Footnote 110: Note that, given the already-rapid pace of AI progress today, and the even-more-rapid pace we should expect in an intelligence explosion, and the broader technological explosion post-superintelligence, “even” a 2-year lead would be a huge difference in capabilities in the post-superintelligence world.)

If and when it becomes clear that the US will decisively win, that’s when we offer a deal to China and other adversaries. They’ll know they won’t win, and so they’ll know their only option is to come to the table; and we’d rather avoid a feverish standoff or last-ditch military attempts on their part to sabotage Western efforts. In exchange for guaranteeing non-interference in their affairs, and sharing the peaceful benefits of superintelligence, a regime of nonproliferation, safety norms, and a semblance of stability post-superintelligence can be born.

In any case, as we go deeper into this struggle, we must not forget the threat of self-destruction. That we made it through the Cold War in one piece involved too much luck—and the destruction could be a thousandfold more potent than what we faced then. A healthy lead by an American-led coalition of democracies—and a solemn exercise of this leadership to stabilize whatever volatile situation we find ourselves in—is probably the safest path to navigating past this precipice. But in the heat of the AGI race, we better not screw it up.

We could go from AGI to ASI in a year 

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

We’d be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers. Even by 2027, we should expect GPU fleets in the 10s of millions. Training clusters alone should be approaching ~3 OOMs larger, already putting us at 10 million+ A100-equivalents. Inference fleets should be much larger still. (More on all this in IIIa. Racing to the Trillion-Dollar Cluster.)

That would let us run many millions of copies of our automated AI researchers, perhaps 100 million human-researcher-equivalents, running day and night. There are some assumptions that flow into the exact numbers, including that humans “think” at 100 tokens/minute (just a rough order of magnitude estimate, e.g. consider your internal monologue) and extrapolating historical trends and Chinchilla scaling laws on per-token inference costs for frontier models remaining in the same ballpark. We’d also want to reserve some of the GPUs for running experiments and training new models. Full calculation in a footnote.

Another way of thinking about it is that given inference fleets in 2027, we should be able to generate an entire internet’s worth of tokens, every single day. In any case, the exact numbers don’t matter that much, beyond a simple plausibility demonstration.
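A back-of-the-envelope version of that plausibility demonstration, using the assumptions stated above (100 tokens/minute per human-researcher-equivalent, 100 million equivalents running around the clock) plus two figures of my own for illustration: an inference fleet of ~10 million H100-equivalents and an “internet’s worth” of text of roughly 15 trillion tokens, neither of which comes from the excerpt:

```python
# Rough check on the "100 million automated researchers" framing.
human_tokens_per_min = 100        # stated assumption: human "thinking speed"
researcher_equivalents = 100e6    # stated assumption: human-researcher-equivalents
inference_gpus = 10e6             # illustrative fleet size (H100-equivalents)
internet_tokens = 15e12           # rough public estimate of a large text corpus

tokens_per_sec = researcher_equivalents * human_tokens_per_min / 60
tokens_per_day = tokens_per_sec * 86_400

print(f"aggregate generation: {tokens_per_sec:.2e} tokens/sec")
print(f"daily output: {tokens_per_day:.2e} tokens "
      f"(~{tokens_per_day / internet_tokens:.1f}x the assumed corpus per day)")
print(f"implied per-GPU rate: {tokens_per_sec / inference_gpus:.0f} tokens/sec")
```

At roughly 1.4e13 tokens per day, the stated assumptions do land in the “one internet per day” ballpark; the per-GPU figure is only an implication of those assumptions, since actual serving throughput for a 2027-scale frontier model is unknown.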

US leadership needed to stabilize an AGI World

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

Stabilizing the international situation

The intelligence explosion and its immediate aftermath will bring forth one of the most volatile and tense situations mankind has ever faced. Our generation is not used to this. But in this initial period, the task at hand will not be to build cool products. It will be to somehow, desperately, make it through this period.

We’ll need the government project to win the race against the authoritarian powers—and to give us the clear lead and breathing room necessary to navigate the perils of this situation. We might as well give up if we can’t prevent the instant theft of superintelligence model weights. We will want to bundle Western efforts: bring together our best scientists, use every GPU we can find, and ensure the trillions of dollars of cluster buildouts happen in the United States. We will need to protect the datacenters against adversary sabotage, or outright attack.

Perhaps most of all, it will take American leadership to develop—and if necessary, enforce—a nonproliferation regime. We’ll need to stop Russia, North Korea, Iran, and terrorist groups from using their own superintelligence to develop technology and weaponry that would let them hold the world hostage.

We’ll need to use superintelligence to harden the security of our critical infrastructure, military, and government to defend against extreme new hacking capabilities. We’ll need to use superintelligence to stabilize the offense/defense balance of advances in biology or similar. We’ll need to develop tools to safely control superintelligence, and to shut down rogue superintelligences that come out of others’ uncareful projects. AI systems and robots will be moving at 10-100x+ human speed; everything will start happening extremely quickly. We need to be ready to handle whatever other six-sigma upheavals—and concomitant threats—come out of compressing a century’s worth of technological progress into a few years.

At least in this initial period, we will be faced with the most extraordinary national security exigency. Perhaps nobody is up for this task. But of the options we have, The Project is the only sane one.

AI researchers will be better than the most advanced human researchers

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

Don’t just imagine 100 million junior software engineer interns here (we’ll get those earlier, in the next couple years!). Real automated AI researchers will be very smart—and in addition to their raw quantitative advantage, automated AI researchers will have other enormous advantages over human researchers:

  • They’ll be able to read every single ML paper ever written, have been able to deeply think about every single previous experiment ever run at the lab, learn in parallel from each of their copies, and rapidly accumulate the equivalent of millennia of experience. They’ll be able to develop far deeper intuitions about ML than any human.
  • They’ll be easily able to write millions of lines of complex code, keep the entire codebase in context, and spend human-decades (or more) checking and rechecking every line of code for bugs and errors. They’ll be superbly competent at all parts of the job.
  • You won’t have to individually train up each automated AI researcher (indeed, training and onboarding 100 million new human hires would be difficult). Instead, you can just teach and onboard one of them—and then make replicas. (And you won’t have to worry about politicking, cultural acclimation, and so on, and they’ll work with peak energy and focus day and night.)
  • Vast numbers of automated AI researchers will be able to share context (perhaps even accessing each others’ latent space and so on), enabling much more efficient collaboration and coordination compared to human researchers.
  • And of course, however smart our initial automated AI researchers would be, we’d soon be able to make further OOM-jumps, producing even smarter models, even more capable at automated AI research.

AIs will solve the barriers to the automation of all work

  • An AI capabilities explosion. Perhaps our initial AGIs had limitations that prevented them from fully automating work in some other domains (rather than just in the AI research domain); automated AI research will quickly solve these, enabling automation of any and all cognitive work.
  • Solve robotics. Superintelligence won’t stay purely cognitive for long. Getting robotics to work well is primarily an ML algorithms problem (rather than a hardware problem), and our automated AI researchers will likely be able to solve it (more below!). Factories would go from human-run, to AI-directed using human physical labor, to soon being fully run by swarms of robots.
  • Dramatically accelerate scientific and technological progress. Yes, Einstein alone couldn’t develop neuroscience and build a semiconductor industry, but a billion superintelligent automated scientists, engineers, technologists, and robots could, in the space of years. (Here’s a nice short story visualizing what AI-driven R&D might look like.) The billion superintelligences would be able to compress the R&D effort human researchers would have done in the next century into years. Imagine if the technological progress of the 20th century were compressed into less than a decade. We would have gone from flying being thought a mirage, to airplanes, to a man on the moon and ICBMs in a matter of years. This is what I expect the 2030s to look like across science and technology.

R&D in the real world is the “slow version”; in reality the superintelligences will try to do as much R&D as possible in simulation, like AlphaFold or manufacturing “digital twins”.

AGI means a technological and economic explosion

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

  • An industrial and economic explosion. Extremely accelerated technological progress, combined with the ability to automate all human labor, could dramatically accelerate economic growth (think: self-replicating robot factories quickly covering all of the Nevada desert). The increase in growth probably wouldn’t just be from 2%/year to 2.5%/year; rather, this would be a fundamental shift in the growth regime, more comparable to the historical step-change from very slow growth to a couple percent a year with the industrial revolution. We could see economic growth rates of 30%/year and beyond, quite possibly multiple doublings a year (a quick compounding check follows below). This follows fairly straightforwardly from economists’ models of economic growth. To be sure, this may well be delayed by societal frictions; arcane regulation might ensure lawyers and doctors still need to be human, even if AI systems were much better at those jobs; surely sand will be thrown into the gears of rapidly expanding robo-factories as society resists the pace of change; and perhaps we’ll want to retain human nannies; all of which would slow the growth of the overall GDP statistics. Still, in whatever domains we remove human-created barriers (e.g., competition might force us to do so for military production), we’d see an industrial explosion.

(Footnote 47: Why isn’t “factorio-world”—build a factory that produces more factories, producing even more factories, doubling factories until eventually your entire planet is quickly covered in factories—possible today? Well, labor is constrained: you can accumulate capital (factories, tools, etc.), but that runs into diminishing returns because it’s constrained by a fixed labor force. With robots and AI systems being able to fully automate labor, that removes that constraint; robo-factories could produce more robo-factories in an ~unconstrained way, leading to an industrial explosion. See more economic growth models of this here.)
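As a quick compounding check on what “30%/year and beyond” would mean: the ~2%/year baseline and the 10-year horizon below are my illustrative assumptions; the 30% figure is the one quoted above.

```python
import math

# Compounding check: sustained 30%/year growth versus a ~2%/year baseline.
fast, slow, years = 0.30, 0.02, 10

doubling_time = math.log(2) / math.log(1 + fast)
fast_multiple = (1 + fast) ** years
slow_multiple = (1 + slow) ** years

print(f"doubling time at 30%/yr: {doubling_time:.1f} years")
print(f"after {years} years: {fast_multiple:.1f}x vs {slow_multiple:.2f}x "
      f"(~{fast_multiple / slow_multiple:.0f}x relative gap)")
# "Multiple doublings a year" would require growth above 100%/yr, an even
# sharper break from the historical regime than the 30% figure.
```

At 30%/year the economy doubles roughly every 2.6 years and grows about 14x in a decade, versus about 1.2x at 2%/year, which is the sense in which a leader on this trajectory would trounce those behind.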

Superintelligence will provide an overwhelming military advantage

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

  • Provide a decisive and overwhelming military advantage. Even early cognitive superintelligence might be enough here; perhaps some superhuman hacking scheme can deactivate adversary militaries. In any case, military power and technological progress have been tightly linked historically, and with extraordinarily rapid technological progress will come concomitant military revolutions. The drone swarms and roboarmies will be a big deal, but they are just the beginning; we should expect completely new kinds of weapons, from novel WMDs to invulnerable laser-based missile defense to things we can’t yet fathom. Compared to pre-superintelligence arsenals, it’ll be like 21st century militaries fighting a 19th century brigade of horses and bayonets. (I discuss how superintelligence could lead to a decisive military advantage in a later piece.)

AGI will enable people to overthrow governments

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

  • Be able to overthrow the US government. Whoever controls superintelligence will quite possibly have enough power to seize control from pre-superintelligence forces. Even without robots, the small civilization of superintelligences would be able to hack any undefended military, election, television, etc. system, cunningly persuade generals and electorates, economically outcompete nation-states, design new synthetic bioweapons and then pay a human in bitcoin to synthesize them, and so on. In the early 1500s, Cortes and about 500 Spaniards conquered the Aztec empire of several million; Pizarro and ~300 Spaniards conquered the Inca empire of several million; Alfonso and ~1000 Portuguese conquered the Indian Ocean. They didn’t have god-like power, but the Old World’s technological edge and an advantage in strategy...

Robots will not limit AGI

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

Robots. A common objection to claims like those here is that, even if AI can do cognitive tasks, robotics is lagging way behind and so will be a brake on any real-world impacts.

I used to be sympathetic to this, but I’ve become convinced robots will not be a barrier. For years people claimed robots were a hardware problem—but robot hardware is well on its way to being solved.

Increasingly, it’s clear that robots are an ML algorithms problem. LLMs had a much easier way to bootstrap: you had an entire internet to pretrain on. There’s no similarly large dataset for robot actions, and so it requires more nifty approaches (e.g. using multimodal models as a base, then using synthetic data/simulation/clever RL) to train them.

There’s a ton of energy directed at solving this now. But even if we don’t solve it before AGI, our hundreds of millions of AGIs/superintelligences will make amazing AI researchers (as is the central argument of this piece!), and it seems very likely that they’ll figure out the ML to make amazing robots work.

As such, while it’s plausible that robots might cause a few years of delay (solving the ML problems, testing in the physical world in a way that is fundamentally slower than testing in simulation, ramping up initial robot production before the robots can build factories themselves, etc.)—I don’t think it’ll be more than that.

ASI systems could be running the Government and the economy

Leopold Aschenbrenner, June 2024, Situational Awareness: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf. Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021; valedictorian, graduated at age 19; fired from OpenAI for leaking

How all of this plays out over the 2030s is hard to predict (and a story for another time). But one thing, at least, is clear: we will be rapidly plunged into the most extreme situation humanity has ever faced.

Human-level AI systems, AGI, would be highly consequential in their own right—but in some sense, they would simply be a more efficient version of what we already know. But, very plausibly, within just a year, we would transition to much more alien systems, systems whose understanding and abilities—whose raw power—would exceed those even of humanity combined. There is a real possibility that we will lose control, as we are forced to hand off trust to AI systems during this rapid transition. More generally, everything will just start happening incredibly fast. And the world will start going insane. Suppose we had gone through the geopolitical fever-pitches and man-made perils of the 20th century in mere years; that is the sort of situation we should expect post-superintelligence. By the end of it, superintelligent AI systems will be running our military and economy. During all of this insanity, we’d have extremely scarce time to make the right decisions. The challenges will be immense. It will take everything we’ve got to make it through in one piece. The intelligence explosion and the immediate post-superintelligence period will be one of the most volatile, tense, dangerous, and wildest periods ever in human history. And by the end of the decade, we’ll likely be in the midst of it.

Confronting the possibility of an intelligence explosion—the emergence of superintelligence—often echoes the early debates around the possibility of a nuclear chain reaction—and the atomic bomb it would enable. HG Wells predicted the atomic bomb in a 1914 novel. When Szilard first conceived of the idea of a chain reaction in 1933, he couldn’t convince anyone of it; it was pure theory. Once fission was empirically discovered in 1938, Szilard freaked out again and argued strongly for secrecy, and a few people started to wake up to the possibility of a bomb. Einstein hadn’t considered the possibility of a chain reaction, but when Szilard confronted him, he was quick to see the implications and willing to do whatever needed to be done; he was willing to sound the alarm, and wasn’t afraid of sounding foolish. But Fermi, Bohr, and most scientists thought the “conservative” thing was to play it down, rather than take seriously the extraordinary implications of the possibility of a bomb. Secrecy (to avoid sharing their advances with the Germans) and other all-out efforts seemed absurd to them. A chain reaction sounded too crazy. (Even when, as it turned out, a bomb was but half a decade from becoming reality.)

We must once again confront the possibility of a chain reaction. Perhaps it sounds speculative to you. But among senior scientists at AI labs, many see a rapid intelligence explosion as strikingly plausible. They can see it. Superintelligence is possible.

The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by 10s of percent, will be intense.

AGI race will consume 20% of electricity production

The race to AGI won’t just play out in code and behind laptops—it’ll be a race to mobilize America’s industrial might. Unlike anything else we’ve recently seen come out of Silicon Valley, AI is a massive industrial process: each new model requires a giant new cluster, soon giant new power plants, and eventually giant new chip fabs. The investments involved are staggering. But behind the scenes, they are already in motion.

In this chapter, I’ll walk you through numbers to give you a sense of what this will mean:

  • As revenue from AI products grows rapidly—plausibly hitting a $100B annual run rate for companies like Google or Microsoft by ~2026, with powerful but pre-AGI systems—that will motivate ever-greater capital mobilization, and total AI investment could be north of $1T annually by 2027.
  • We’re on the path to individual training clusters costing $100s of billions by 2028—clusters requiring power equivalent to a small/medium US state and more expensive than the International Space Station.
  • By the end of the decade, we are headed to $1T+ individual training clusters, requiring power equivalent to >20% of US electricity production. Trillions of dollars of capex will churn out 100s of millions of GPUs per year overall.

Nvidia shocked the world as its datacenter sales exploded from about $14B annualized to about $90B annualized in the last year. But that’s still just the very beginning.


Table 4: Scaling the largest training clusters, rough back-of-the-envelope calculations (for details on the calculations, see the Appendix).

Year | OOMs | H100s-equivalent | Cost | Power | Power reference class
2022 (~GPT-4 cluster) | (baseline) | ~10k | ~$500M | ~10 MW | ~10,000 average homes
~2024 | +1 OOM | ~100k | $billions | ~100 MW | ~100,000 homes
~2026 | +2 OOMs | ~1M | $10s of billions | ~1 GW | The Hoover Dam, or a large nuclear reactor
~2028 | +3 OOMs | ~10M | $100s of billions | ~10 GW | A small/medium US state
~2030 | +4 OOMs | ~100M | $1T+ | ~100 GW | >20% of US electricity production

This may seem hard to believe—but it appears to be happening. Zuck bought 350k H100s. Amazon bought a 1GW datacenter campus next to a nuclear power plant. Rumors suggest a 1GW, 1.4M H100-equivalent cluster (a ~2026-cluster) is being built in Kuwait. Microsoft and OpenAI are reportedly working on a $100B cluster, slated for 2028 (a cost comparable to the International Space Station!). And as each generation of models shocks the world, further acceleration may yet be in store.


Perhaps the wildest part is that willingness-to-spend doesn’t even seem to be the binding constraint at the moment, at least for training clusters. It’s finding the infrastructure itself:

"Where do I find 10GW?" (power for the $100B+, trend 2028 cluster) is a favorite topic of conversation in SF. What any compute guy is thinking about is securing power, land, permitting, and datacenter construction.[49] While it may take you a year of waiting to get the GPUs, the lead times for these are much longer still.

[49] One key uncertainty is how distributed training will be—if instead of needing that amount of power in one location, we could spread it among 100 locations, it'd be a lot easier.

The trillion-dollar cluster—+4 OOMs from the GPT-4 cluster, the ~2030 training cluster on the current trend—will be a truly extraordinary effort. The 100GW of power it'll require is equivalent to >20% of US electricity production; imagine not just a simple warehouse with GPUs, but hundreds of power plants. Perhaps it will take a national consortium.

Power constrains AGI

Leopold Aschenbrenner, June 2024, SITUATIONAL AWARENESS: The Decade Ahead, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf, Columbia University, Bachelor of Arts (BA), Mathematics-Statistics and Economics, 2017–2021, Valedictorian. Graduated at age 19. Fired from OpenAI for leaking

Probably the single biggest constraint on the supply-side will be power. Already, at nearer-term scales (1GW/2026 and especially 10GW/2028), power has become the binding constraint: there simply isn't much spare capacity, and power contracts are usually long-term locked-in. And building, say, a new gigawatt-class nuclear power plant takes a decade. (I'll wonder when we'll start seeing things like tech companies buying aluminum smelting companies for their gigawatt-class power.)

[Sidenote fragment from the original: ...being plausible—the reference class being investment rates of countries during high-growth periods.]

Figure 30: Comparing trends on total US electricity production to our back-of-the-envelope estimates on AI electricity demands.

Total US electricity generation has barely grown 5% in the last decade. Utilities are starting to get excited about AI (instead of 2.6% growth over the next 5 years, they now estimate 4.7%!). But they're barely pricing in what's coming. The trillion-dollar, 100GW cluster alone would require ~20% of current US electricity generation in 6 years; together with large inference capacity, demand will be multiples higher.
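As a rough cross-check of the ~20% figure, here is a minimal sketch. The ~4,200 TWh baseline for recent annual US generation is an assumed figure, not one given in the text; the 6-year window is the text's own.

```python
# How one 100GW training cluster compares to current US generation and to the
# utility forecasts quoted above. The ~4,200 TWh baseline is an assumption.
baseline_twh = 4200
cluster_twh = 100 * 8.76     # 100 GW running continuously for a year (8,760 h)

added_share = 100 * cluster_twh / baseline_twh
print(f"One 100GW cluster: ~{cluster_twh:.0f} TWh/yr, "
      f"~{added_share:.0f}% of current US generation")
print("Utilities' 5-year demand-growth forecasts cited above: 2.6% -> 4.7%")
print(f"A single such cluster within ~6 years would add ~{added_share:.0f}% "
      "on its own, before counting inference capacity")
```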

To most, this seems completely out of the question. Some are betting on Middle Eastern autocracies, who have been going around offering boundless power and giant clusters to get their rulers a seat at the AGI-table.

But it's totally possible to do this in the United States: we have abundant natural gas.[58]

[58] Thanks to Austin Vernon (private correspondence) for helping with these estimates.

  • Powering a 10GW cluster would take only a few percent of US natural gas production and could be done rapidly.
  • Even the 100GW cluster is surprisingly doable:
  • Right now the Marcellus/Utica shale (around Pennsylvania) alone is producing around 36 billion cubic feet a day of gas; that would be enough to generate just under 150GW continuously with generators (and combined cycle power plants could output 250 GW due to their higher efficiency).
  • It would take about ~1200 new wells for the 100GW cluster.[59] Each rig can drill roughly 3 wells per month, so 40 rigs (the current rig count in the Marcellus) could build up the production base for 100GW in less than a year.[60] The Marcellus had a rig count of ~80 as recently as 2019 so it would not be taxing to add 40 rigs to build up the production base.[61]
  • More generally, US natural gas production has more than doubled in a decade; simply continuing that trend could power multiple trillion-dollar datacenters.[62]

The harder part would be building enough generators / turbines; this wouldn't be trivial, but it seems doable with about $100B of capex[63] for 100GW of natural gas power plants. Combined cycle plants can be built in about two years; the timeline for generators would be even shorter still.[64]

[59] New wells produce around 0.01 BCF per day.
[60] Each well produces ~20 BCF over its lifetime, meaning two new wells a month would replace the depleted reserves. That is, it would need only one rig to maintain the production. Though it would be more efficient to add fewer rigs and build up over a longer time frame than 10 months.
[62] A cubic foot of natural gas generates about 0.13 kWh. Shale gas production was about ~70 billion cubic feet per day in the US in 2020. Suppose we doubled production again, and the extra capacity all went to compute clusters. That's 3322 TWh/year of electricity, or enough for almost 4 100GW clusters.
[63] The capex costs for natural gas power plants seem to be under $1000 per kW, meaning the capex for 100GW of natural gas power plants would be about $100 billion.
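A minimal sketch of the gas arithmetic above, under stated assumptions: roughly 0.17 kWh per cubic foot for combined-cycle plants (an assumption chosen to be consistent with the "36 BCF/day ≈ 250 GW" figure), plus the per-well output, rig productivity, rig count, and $1,000/kW capex figures from the sidenotes. The well and month counts come out near, not identical to, the source's rounder numbers.

```python
# Back-of-the-envelope gas math for a 100GW cluster, using assumed constants:
# ~0.17 kWh per cubic foot (combined cycle), ~0.01 BCF/day per new well,
# 3 wells per rig-month, 40 rigs, ~$1,000/kW of power-plant capex.
KWH_PER_CF_CCGT = 0.17       # assumption, consistent with 36 BCF/day ~ 250 GW
BCF_PER_DAY_PER_WELL = 0.01  # from the sidenote above
WELLS_PER_RIG_MONTH = 3
RIGS = 40
CAPEX_PER_KW = 1_000         # dollars, from the sidenote above

cluster_gw = 100
kwh_per_day = cluster_gw * 1e6 * 24                      # 100 GW for 24 h
gas_bcf_per_day = kwh_per_day / KWH_PER_CF_CCGT / 1e9
wells = gas_bcf_per_day / BCF_PER_DAY_PER_WELL
months = wells / (RIGS * WELLS_PER_RIG_MONTH)
capex_billions = cluster_gw * 1e6 * CAPEX_PER_KW / 1e9

print(f"Gas needed:  ~{gas_bcf_per_day:.0f} BCF/day")
print(f"New wells:   ~{wells:.0f} (the source rounds to ~1,200)")
print(f"Drilling:    ~{months:.0f} months with {RIGS} rigs")
print(f"Plant capex: ~${capex_billions:.0f}B for {cluster_gw} GW")

# Sidenote 62's check: doubling ~70 BCF/day of shale output at ~0.13 kWh/cf
extra_twh_per_year = 70e9 * 0.13 * 365 / 1e9
print(f"Doubling shale gas: ~{extra_twh_per_year:.0f} TWh/yr, "
      f"enough for ~{extra_twh_per_year / (cluster_gw * 8.76):.1f} such clusters")
```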

The barriers to even trillions of dollars of datacenter buildout in the US are entirely self-made. Well-intentioned but rigid climate commitments (not just by the government, but green datacenter commitments by Microsoft, Google, Amazon, and so on) stand in the way of the obvious, fast solution. At the very least, even if we won't do natural gas, a broad deregulatory agenda would unlock the solar/batteries/SMR/geothermal megaprojects. Permitting, utility regulation, FERC regulation of transmission lines, and NEPA environmental review make things that should take a few years take a decade or more. We don't have that kind of time.

We're going to drive the AGI datacenters to the Middle East, under the thumb of brutal, capricious autocrats. I'd prefer clean energy too—but this is simply too important for US national security. We will need a new level of determination to make this happen. The power constraint can, must, and will be solved.

Chips

While chips are usually what comes to mind when people think about AI supply constraints, they're likely a smaller constraint than power. Global production of AI chips is still a pretty small percent of TSMC leading-edge production, likely less than 10%. There's a lot of room to grow via AI becoming a larger share of TSMC production.

Indeed, 2024 production of AI chips (~5-10M H100-equivalents) would already be almost enough for the $100s of billions cluster (if they were all diverted to one cluster). From a pure logic fab standpoint, ~100% of TSMC's output for a year could already support the trillion-dollar cluster (again if all the chips went to one datacenter).[65] Of course, not all of TSMC will be able to be diverted to AI, and not all of AI chip production for a year will be for one training cluster. Total AI chip demand (including

[63, continued] Solar and batteries aren't a totally crazy alternative, but it does just seem rougher than natural gas. I did appreciate Casey Handmer's calculation of tiling the Earth in solar panels:

"With current GPUs, the global solar datacenter's compute is equivalent to ~150 billion humans, though if our computers can eventually match [human brain] efficiency, we could support more like 5 quadrillion AI souls."
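A rough cross-check of the chip-production numbers quoted above against the cluster sizes in Table 4. The 2024 output range and the cluster sizes are the figures given in the text; everything else is simple division.

```python
# Cross-check: 2024 AI-chip output (~5-10M H100-equivalents) vs. the clusters
# in Table 4 (~10M H100e for the ~2028 cluster, ~100M for the ~2030 cluster).
output_2024_low, output_2024_high = 5e6, 10e6
cluster_2028 = 10e6
cluster_2030 = 100e6

print(f"~2028 cluster needs {cluster_2028 / output_2024_high:.0f}x-"
      f"{cluster_2028 / output_2024_low:.0f}x of 2024 AI-chip output")
print(f"~2030 cluster needs {cluster_2030 / output_2024_high:.0f}x-"
      f"{cluster_2030 / output_2024_low:.0f}x of 2024 AI-chip output")
# If AI chips are currently under 10% of TSMC leading-edge production, a
# 10x-20x jump is roughly a full year of TSMC's leading-edge output, which is
# the text's point about the trillion-dollar cluster.
```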

 

70% chance AI will kill us all

DNYUZ, 6-4, 24, OpenAI Insiders Warn of a 'Reckless' Race for Dominance, https://dnyuz.com/2024/06/04/openai-insiders-warn-of-a-reckless-race-for-dominance/

A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created. The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous. The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can. They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign. “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers. The group published an open letter on Tuesday calling for leading A.I. companies, including OpenAI, to establish greater transparency and more protections for whistle-blowers. Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company, Mr. Kokotajlo said. One current and one former employee of Google DeepMind, Google’s central A.I. lab, also signed. A spokeswoman for OpenAI, Lindsey Held, said in a statement: “We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.” A Google spokesman declined to comment. The campaign comes at a rough moment for OpenAI. It is still recovering from an attempted coup last year, when members of the company’s board voted to fire Sam Altman, the chief executive, over concerns about his candor. Mr. Altman was brought back days later, and the board was remade with new members. The company also faces legal battles with content creators who have accused it of stealing copyrighted works to train its models. (The New York Times sued OpenAI and its partner, Microsoft, for copyright infringement last year.) And its recent unveiling of a hyper-realistic voice assistant was marred by a public spat with the Hollywood actress Scarlett Johansson, who claimed that OpenAI had imitated her voice without permission. But nothing has stuck like the charge that OpenAI has been too cavalier about safety. Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback. So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. 
In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.” Neither Dr. Sutskever nor Dr. Leike signed the open letter written by former employees. But their exits galvanized other former OpenAI employees to speak out. “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said. Some of the former employees have ties to effective altruism, a utilitarian-inspired movement that has become concerned in recent years with preventing existential threats from A.I. Critics have accused the movement of promoting doomsday scenarios about the technology, such as the notion that an out-of-control A.I. system could take over and wipe out humanity. Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years. He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent. At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down. For example, he said, in 2022 Microsoft began quietly testing in India a new version of its Bing search engine that some OpenAI employees believed contained a then-unreleased version of GPT-4, OpenAI’s state-of-the-art large language model. Mr. Kokotajlo said he was told that Microsoft had not gotten the safety board’s approval before testing the new model, and after the board learned of the tests — via a series of reports that Bing was acting strangely toward users — it did nothing to stop Microsoft from rolling it out more broadly. A Microsoft spokesman, Frank Shaw, disputed those claims. He said the India tests hadn’t used GPT-4 or any OpenAI models. The first time Microsoft released technology based on GPT-4 was in early 2023, he said, and it was reviewed and approved by a predecessor to the safety board. Eventually, Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed. In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence. “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.” OpenAI said last week that it had begun training a new flagship A.I. model, and that it was forming a new safety and security committee to explore the risks associated with the new model and other future technologies. On his way out, Mr. 
Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away. Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.

AI will trigger massive climate change

John Loeffler, 6-3, 24, Tech Radar, I watched Nvidia’s Computex 2024 keynote and it made my blood run cold, https://www.techradar.com/computing/i-watched-nvidias-computex-2024-keynote-and-it-made-my-blood-run-cold

I don’t think Nvidia CEO Jensen Huang is a bad guy, or that he has nefarious plans for Nvidia, but the most consequential villains in history are rarely evil. They just go down a terribly wrong path, and end up leaving totally forseeable, but ultimately inevitable ruin in their wake. While the star of the show might have been Nvidia Blackwell, Nvidia’s latest data center processor that will likely be bought up far faster than they can ever be produced, there were a host of other AI technologies that Nvidia is working on that will be supercharged by its new hardware. All of it will likely generate enormous profits for Nvidia and its shareholders, and while I don’t give financial advice, I can say that if you’re an Nvidia shareholder, you were likely thilled by Sunday’s keynote presentation. For everyone else, however, all I saw was the end of the last few glaciers on Earth and the mass displacement of people that will result from the lack of drinking water; the absolutely massive disruption to the global workforce that ‘digital humans’ are likely to produce; and ultimately a vision for the future that centers capital-T Technology as the ultimate end goal of human civilization rather than the 8 billion humans and counting who will have to live — and a great many will die before the end — in the world these technologies will ultimately produce with absolutely no input from any of us. The fact that Jensen Huang treated the GeForce graphics card, the product that defined Nvidia for two decades and set it up for the success it currently enjoys, with such dismissiveness and disregard was ultimately just the icing on a very ugly cake. There was something that Huang said during the keynote that shocked me into a mild panic. Nvidia’s Blackwell cluster, which will come with eight GPUs, pulls down 15kW of power. That’s 15,000 watts of power. Divided by eight, that’s 1,875 watts per GPU. The current-gen Hopper data center chips draw up to 1,000W, so Nvidia Blackwell is nearly doubling the power consumption of these chips. Data center energy usage is already out of control, but Blackwell is going to pour jet fuel on what is already an uncontained wildfire. Worse still, Huang said that in the future, he expects to see millions of these kinds of AI processors in use at data centers around the world.  One million Blackwell GPUs would suck down an astonishing 1.875 gigawatts of power. For context, a typical nuclear power plant only produces 1 gigawatt of power.  Fossil fuel-burning plants, whether that’s natural gas, coal, or oil, produce even less. There’s no way to ramp up nuclear capacity in the time it will take to supply these millions of chips, so much, if not all, of that extra power demand is going to come from carbon-emitting sources. I always feared that the AI data center boom was likely going to make the looming climate catastrophe inevitable, but there was something about seeing it all presented on a platter with a smile and an excited presentation that struck me as more than just tone-deaf. It was damn near revolting.
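The power figures in this card reduce to a couple of lines of arithmetic; here is a minimal sketch using only the numbers the card itself gives (15 kW per 8-GPU Blackwell cluster, up to ~1,000 W for Hopper, and a ~1 GW typical nuclear plant).

```python
# Arithmetic from the card: an 8-GPU Blackwell cluster draws 15 kW.
cluster_kw = 15
gpus_per_cluster = 8
watts_per_gpu = cluster_kw * 1_000 / gpus_per_cluster
print(f"Per-GPU draw: {watts_per_gpu:.0f} W (vs. up to ~1,000 W for Hopper)")

# One million such GPUs, compared against a ~1 GW nuclear plant.
million_gpu_gw = 1_000_000 * watts_per_gpu / 1e9
print(f"1M Blackwell GPUs: ~{million_gpu_gw:.3f} GW "
      f"(~{million_gpu_gw:.1f}x a typical ~1 GW nuclear plant)")
```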

AI will trigger mass unemployment

John Loeffler, 6-3, 24, Tech Radar, I watched Nvidia’s Computex 2024 keynote and it made my blood run cold, https://www.techradar.com/computing/i-watched-nvidias-computex-2024-keynote-and-it-made-my-blood-run-cold

I don’t think Nvidia CEO Jensen Huang is a bad guy, or that he has nefarious plans for Nvidia, but the most consequential villains in history are rarely evil. They just go down a terribly wrong path, and end up leaving totally forseeable, but ultimately inevitable ruin in their wake. While the star of the show might have been Nvidia Blackwell, Nvidia’s latest data center processor that will likely be bought up far faster than they can ever be produced, there were a host of other AI technologies that Nvidia is working on that will be supercharged by its new hardware. All of it will likely generate enormous profits for Nvidia and its shareholders, and while I don’t give financial advice, I can say that if you’re an Nvidia shareholder, you were likely thilled by Sunday’s keynote presentation. For everyone else, however, all I saw was the end of the last few glaciers on Earth and the mass displacement of people that will result from the lack of drinking water; the absolutely massive disruption to the global workforce that ‘digital humans’ are likely to produce; and ultimately a vision for the future that centers capital-T Technology as the ultimate end goal of human civilization rather than the 8 billion humans and counting who will have to live — and a great many will die before the end — in the world these technologies will ultimately produce with absolutely no input from any of us. The fact that Jensen Huang treated the GeForce graphics card, the product that defined Nvidia for two decades and set it up for the success it currently enjoys, with such dismissiveness and disregard was ultimately just the icing on a very ugly cake. There was something that Huang said during the keynote that shocked me into a mild panic. Nvidia’s Blackwell cluster, which will come with eight GPUs, pulls down 15kW of power. That’s 15,000 watts of power. Divided by eight, that’s 1,875 watts per GPU. The current-gen Hopper data center chips draw up to 1,000W, so Nvidia Blackwell is nearly doubling the power consumption of these chips. Data center energy usage is already out of control, but Blackwell is going to pour jet fuel on what is already an uncontained wildfire. Worse still, Huang said that in the future, he expects to see millions of these kinds of AI processors in use at data centers around the world. One million Blackwell GPUs would suck down an astonishing 1.875 gigawatts of power. For context, a typical nuclear power plant only produces 1 gigawatt of power. Fossil fuel-burning plants, whether that’s natural gas, coal, or oil, produce even less. There’s no way to ramp up nuclear capacity in the time it will take to supply these millions of chips, so much, if not all, of that extra power demand is going to come from carbon-emitting sources. I always feared that the AI data center boom was likely going to make the looming climate catastrophe inevitable, but there was something about seeing it all presented on a platter with a smile and an excited presentation that struck me as more than just tone-deaf. It was damn near revolting. ‘Digital humans’ are on the agenda, too An example of an Nvidia ‘digital human’ brand ambassador. One day, a digital human like this might be the one to empathetically tell you that you and your coworkers have all been laid off. (Image credit: Nvidia) When I saw Nvidia ACE at CES 2024, I was genuinely impressed by the potential for this technology to make video games feel more dynamic and alive. 
I should have known that we’re far more likely to see Nvidia ACE at the checkout counter than in any PC games any time soon. In one segment of the keynote, Huang talked about the potential for Nvidia ACE to power ‘digital humans’ that companies can use to serve as customer service agents, be the face of an interior design project, and more. This makes absolute sense, since who are we kidding, Nvidia ACE for video games won’t really make all that much money. However, if a company wants to fire 90% of its customer service staff and replace it with an Nvidia ACE-powered avatar that never sleeps, never eats, never complains about low pay or poor working conditions, and can be licensed for a fee that is lower than the cost of the labor it is replacing, well, I don’t have to tell you how that is going to go. Next time you go into a fast food restaurant to order from a digital kiosk, your order will probably soon come out of a hole in the wall with a ‘digital human’ on a screen to pretend that it’s actually serving you your food. Behind the wall, an army of robots, also powered by new Nvidia robotics processors, will assemble your food, no humans needed. We’ve already seen the introduction of these kinds of ‘labor-saving’ technologies in the form of self-checkout counters, food ordering kiosks, and other similar human-replacements in service industries, so there’s no reason to think that this trend won’t continue with AI. Nvidia ACE may ultimately be the face of the service industry in a decade, and all those workers it will replace will have few options for employment elsewhere, since AI – powered by Nvidia hardware – will be taking over many other kinds of work once thought immune to this kind of disruption. Where does that leave all these human beings, with lives, families, financial obligations, and everything else that comes with being human? That, ultimately, just isn’t Nvidia’s, or OpenAI’s, or Google’s problem. No, that will be your problem. This is the problem when you center techonology instead of humanity Nvidia CEO showing off a bunch of humanoid robots At least these humanoid robots weren’t actually a low-paid actor in a robot suit dancing for their dinner. (Image credit: Nvidia) I’m going to give Huang and all the other tech CEOs with their foot on the gas of the AI data center boom the benefit of the doubt here and say that this isn’t about pure greed at the end of the day. Let’s say that all of these visionary leaders, our economic elite, are just so excited about the potential of this technology that they cannot help themselves. They have to see it through, regretting the costs incurred by others along the way to an ultimate greater good sometime in the future. There are plenty of ‘tech optimists’ or even ‘tech utopians’ who earnestly believe that the ultimate good these AIs will do simply outweigh the pain that this transition brings with it. After all, technological progress always disrupts — the tech industry’s favorite word — the status quo, and people have always adapted in the past, so they will do so again. It’s for our own good, in the end. Well, not our own good, but maybe our kids or our grandkids, or a little further down the line. Eventually, it will get itself sorted out. Tragic that we have to go through it all though, but it can’t be helped. Progress marches on. The problem with this kind of thinking, as it was with the first Industrial Revolution, is that this treats people as obstacles to be cleared away. 
Climate change will disrupt entire continents in ways impossible for us to understand, it’s simply too big. When the Himalayan mountains lose the last of their glaciers, as they are on course to do in the next hundred or so years, the drinking water for an entire subcontinent of nearly two billion people disappears. All of these people will have to leave their homes if they’re lucky enough to be able to do so, or die of thirst. An entire civilization will be uprooted, with all its history, landmarks, holy places, and more left behind in what will eventually become a desert. Those people’s lives matter. They have worth. Their history, culture, and way of life matter. It’s not unfortunate but inevitable what’s happening to them. It’s an active choice by men (they are almost always, 99% of the time, men) who simply close their eyes to the consequences of their choices — if they even look at them at all. Further down the hierarchy of needs, people’s jobs matter. Having purpose in our work matters, and being able to take care of our families matters. A universal basic income won’t even paper over this disruption, much less solve it, especially because the ones who will have to pay for a universal basic income won’t be the enormous pool of displaced workers, it would be those same tech titans who put all these people out of work in the first place. We can’t even get the rich to pay the taxes they owe now. What makes anyone think they’ll come in and take care of us in the end, no strings attached? More likely, they will just wall themselves off from us and leave us to fend for ourselves, which is how the first Industrial Revolution went. With all this talk of a new Industrial Revolution on the horizon, it would serve us all well to go back and look at just how miserable that time was for just about everyone involved. I think the worst part of Huang’s keynote wasn’t that none of this mattered, it’s that I don’t think anyone in Huang’s position is really thinking about any of this at all. I hope they’re not, which at least means it’s possible they can be convinced to change course. The alternative is that they do not care, which is a far darker problem for the world.

The existential threat only has to succeed once/safety only has to fail once for us all to die

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s
So the problem of controlling AGI or superintelligence, in my opinion, is like a problem of creating a perpetual safety machine. By analogy with a perpetual motion machine, it's impossible. Yeah, we may succeed and do a good job with GPT-5, 6, 7, but they just keep improving, learning, eventually self-modifying, interacting with the environment, interacting with malevolent actors. The difference between cybersecurity, narrow AI safety, and safety for general AI, for superintelligence, is that we don't get a second chance. With cybersecurity, somebody hacks your account, what's the big deal? You get a new password, new credit card, you move on. Here, if we're talking about existential risks, you only get one chance. So you're really asking me what are the chances that we will create the most complex software ever on a first try with zero bugs and it will continue to have zero bugs for 100 years or more. So there is an incremental improvement of systems leading up to AGI. To you, it doesn't matter if we can keep those safe. There's going to be one level of system at which you cannot possibly control it. I don't think we so far have made any system safe. At the level of capability they display, they already have made mistakes. We had accidents. They've been jailbroken. I don't think there is a single large language model today which no one was successful at

We can’t stop a superintelligence when we won’t understand what it is doing

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

I don’t think there is a single large language model today which no one was successful at Starting point is 00:11:09 making do something developers didn’t intend it to do. But there’s a difference between getting it to do something unintended, getting it to do something that’s painful, costly, destructive, and something that’s destructive to the level of hurting billions of people or hundreds of millions of people billions of people or the entirety of human civilization that’s a big leap exactly but the systems we have today have capability of causing X amount of damage so then they fail that’s all we get if we develop systems capable of impacting all of humanity, all of universe, the damage is proportionate. Starting point is 00:11:47 What to you are the possible ways that such kind of mass murder of humans can happen? It’s always a wonderful question. So one of the chapters in my new book is about unpredictability. I argue that we cannot predict what a smarter system will do. So you’re really not asking me how superintelligence will kill everyone. You’re asking me how I would do it. And I think it’s not that interesting. I can tell you about the standard, you know, nanotech, synthetic, bio, nuclear. Superintelligence will come up with something completely new, completely super. We may not even recognize that as a possible path to achieve that goal. So there is like an unlimited level of creativity in terms of how humans could be killed. But you know, we could still investigate possible ways of doing it. Not how to do it, but at the end, what is the methodology that does it? You know, but at the end, what is the methodology that does it? Shutting off the power, and then humans start killing each other maybe Starting point is 00:12:51 because the resources are really constrained. Then there’s the actual use of weapons, like nuclear weapons, or developing artificial pathogens, viruses, that kind of stuff. We can still kind of think through that and defend against it, right? There’s a ceiling to the creativity of mass murder of humans here, right? The options are limited. They are limited by how imaginative we are. If you are that much smarter, that much more creative, Starting point is 00:13:18 you are capable of thinking across multiple domains, do novel research in physics and biology, you may not be limited by those tools. If squirrels were planning to kill humans, they would have a set of possible ways of doing it, but they would never consider things we can come up with. So are you thinking about mass murder and destruction of human civilization?

AI will destroy all jobs

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

So like you have this awesome job, you are a podcaster, gives you a lot of meaning, you have a good life, I assume you're happy. That's what we want most people to find, to have. For many intellectuals, it is their occupation which gives them a lot of meaning. I am a researcher, philosopher, scholar, that means something to me. In a world where an artist is not feeling appreciated because his art is just not competitive with what is produced by machines, or a writer or scientist, will lose a lot of that. And at the lower level, we're talking about complete technological unemployment. We're not losing 10% of jobs, we're losing all jobs. What do people do with all that free time? What happens then? Everything society is built on is completely modified in one generation. It's not a slow process where we get to kind of figure out how to live that new lifestyle, but it's pretty quick

AGI will be used by bad, evil actors

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

Some may be just gaining personal benefit and sacrificing others to that cause. Others we know for a fact are trying to kill as many people as possible. When we look at recent school shootings, if they had more capable weapons, they would take out not dozens, but thousands, millions, billions. Well we don’t know that but that is a terrifying possibility and we don’t want to find out. Like if terrorists had access to nuclear weapons how far would they go? Is there a limit to what they’re willing to do. In your sense, is there some level in actors where there’s no limit? Starting point is 00:25:28 There is mental diseases where people don’t have empathy, don’t have this human quality of understanding suffering in ours. And then there’s also a set of beliefs where you think you’re doing good by killing a lot of humans. Again, I would like to assume that normal people never think like that. It’s always some sort of psychopaths, but yeah. And to you, AGI systems can carry that and be more competent at executing that. They can certainly be more creative. Starting point is 00:26:05 They can understand human biology better, understand our molecular structure, genome. Again, a lot of times torture ends than an individual dies. That limit can be removed as well.

Too many ways for AI to attack us to keep up

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

So if we're actually looking at X-risk and S-risk as the systems get more and more intelligent, don't you think it's possible to anticipate the ways they can do it and defend against it, like we do with cybersecurity, with the security systems? Right, we can definitely keep up for a while. I'm saying you cannot do it indefinitely. At some point the cognitive gap is too big, the surface you have to defend is infinite. But attackers only need to find one exploit. So to you, eventually, this is heading off a cliff. If we create general superintelligences, I don't see a good outcome long term for humanity. The only way to win this game is not to play it. Okay, well, we'll talk about possible solutions and what not playing it means. But what are the possible timelines here to you? Are we talking about a set of years, decades, centuries?

AGI in 2026

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

What do you think? I don’t know for sure. The prediction markets right now are saying 2026 for AGI. I heard the same thing from CEO of Antropic, DeepMind, so maybe we are two years away, which seems very soon, given we don’t have a working safety mechanism in place or even a prototype for one. And there are people trying to accelerate those timelines because they feel we’re not getting there quick enough. Well, what do you think they mean when they say AGI? So the definitions we used to have and people are modifying them a little bit lately. Starting point is 00:27:52 Artificial general intelligence was a system capable of performing in any domain a human could perform. So kind of you’re creating this average artificial person, they can do cognitive labor, physical labor, where you can get another human to do it. Superintelligence was defined as a system which is superior to all humans in all domains. Now people are starting to refer to AGI as if it’s superintelligence. I made a post recently where I argued for me at least, if you average out over all the common human tasks, those systems are already smarter than an average human. So under that definition, we have it. Shane Lake has this Starting point is 00:28:32 definition of where you’re trying to win in all domains. That’s what intelligence is. Now are they smarter than elite individuals in certain domains? Of course not. They’re not there yet. But the progress is exponential. See, I’m much more concerned about social engineering. So to me, AI’s ability to do something in the physical world, like the lowest hanging fruit, the easiest set of methods is by just getting humans to do it. It’s going to be much harder to be the kind of viruses that take over the minds of robots, where the robots are executing the commands.

AI can take a treacherous turn

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

if it lies to you or has those ideas. You cannot develop a test which rules them out. There is always possibility of what Bostrom calls a treacherous turn, where later on a system decides for game theoretic reasons, economic reasons to change its behavior. And we see the same with humans. It’s not unique to AI. For millennia, we tried developing morals, ethics, religions, lie detector tests, and then employees betray the employers, spouses betray family. It’s a pretty standard thing intelligent agents sometimes do. Starting point is 00:34:12 So is it possible to detect when an AI system is lying or deceiving you? If you know the truth and it tells you something false, you can detect that, but you cannot know in general every single time. And again, the system you’re testing today may not be lying. The system you’re testing today may know you are testing it and so behaving. And later on, after it interacts with the environment, interacts with other systems, malevolent agents learns more, it may start doing those things. So do you think it’s possible to develop a system where the creators of the system, the developers, the programmers don’t know that it’s deceiving them?

Superintelligent AIs could decide to kill all humans because they think we are bad

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

We removed the gender bias, we’re removing race bias. Why is this pro-human bias? We are polluting the planet, we are, as you said, fight a lot of wars, kind of violent. Maybe it’s better if this super intelligent, perfect society comes and replaces us. It’s normal stage in the evolution of our species. Yeah. So somebody says, let’s develop an AI system that removes the violent humans from the world. And then it turns out that all humans have violence in them or the capacity for violence and therefore all humans are removed. Yeah, yeah, yeah.

We can’t make it safe before we deploy it because we don’t understand it

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

to remember all of them. He is a Facebook buddy so I have a lot of fun having those little debates with him. So I’m trying to remember the arguments. So one, he says we are not gifted this intelligence from aliens. We are designing it, we are making decisions about it. That’s not true. It was true when we had expert systems, symbolic AI, decision trees. Today, you set up parameters for a model and you water this plant. Starting point is 00:38:36 You give it data, you give it compute, and it grows. And after it’s finished growing into this alien plant, you start testing it to find out what capabilities it has. And it takes years to figure out, even for existing models. If it’s trained for six months, it will take you two, three years to figure out basic capabilities of that system. We still discover new capabilities in systems Starting point is 00:38:58 which are already out there. So that’s not the case. So just to link on that, to do the difference there, that there is some level of emergent intelligence that happens in our current approaches. So stuff that we don’t hard code in. Absolutely, that’s what makes it so successful. Then we had to painstakingly hard code in everything. We didn’t have much progress. Starting point is 00:39:22 Now, just spend more money and more compute and it’s a lot more capable. And then the question is, when there is emergent intelligent phenomena, what is the ceiling of that? For you, there’s no ceiling. For Jan Lakoon, I think there’s a kind of ceiling that happens that we have full control over. Starting point is 00:39:41 Even if we don’t understand the internals of the emergence, how the emergence happens, there’s a sense that we have control and an understanding of the approximate ceiling of capability, the limits of the capability. Let’s say there is a ceiling. It’s not guaranteed to be at the level which is competitive with us. It may be greatly superior to ours Starting point is 00:40:05 so what about His statement about open research and open source are the best ways to understand and mitigate the risks Historically, he’s completely right open source software is wonderful. It’s tested by the community It’s debugged but we’re switching from tools to agents now you’re giving Open-source weapons to psychopaths.

System will be uncontrollable before we know it

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

Sarting point is 00:45:36 are. We can try. It took a very long time for cars to proliferate to the degree they have now. And then you could ask serious questions in terms of the miles traveled, the benefit to the economy, the benefit to the quality of life that cars do versus the number of deaths, 30, 40,000 in the United States. Are we willing to pay that price? I think most people, when they’re rationally thinking, Starting point is 00:46:00 policymakers will say yes. We want to decrease it from 40,000 to zero and do everything we can to decrease it. There’s all kinds of policies and centers you can create to decrease the risks with the deployment of this technology but then you have to weigh the benefits and the risks of the technology. And the same thing would be done with AI. Starting point is 00:46:23 You need data, you need to know, but if I’m right and it’s unpredictable, unexplainable and controllable, you cannot make this decision we’re gaining $10 trillion of wealth, but we’re losing, we don’t know how many people. You basically have to perform an experiment on 8 billion humans without their consent. And even if they want to give you consent, they can’t because they cannot give informed consent. They don’t understand those things. Right, that happens when you go from the predictable to the unpredictable very quickly. Starting point is 00:46:57 You just, but it’s not obvious to me that AI systems would gain capability so quickly that you won’t be able to collect enough data to study the benefits and the risks? We’re literally doing it. The previous model, we learned about after we finished training it, what it was capable of. Let’s say we stopped GPT-4 training run around human capability, hypothetically. We start training GPT-5, and I have no knowledge of insider training runs or anything and we start at that point of about human and we train it for the next nine months. Maybe two months Starting point is 00:47:32 in it becomes super intelligent. We continue training it. At the time when we start testing it, it is already a dangerous system. How dangerous? I have no idea, but neither people training it. At the training stage, but then there’s a testing stage inside the company. They can start getting intuition about what the system is capable to do. You’re saying that somehow from leap from GPT-4 to GPT-5 can happen the kind of leap where GPT-4 was controllable Starting point is 00:48:06 and GPT-5 is no longer controllable and we get no insights from using GPT-4 about the fact that GPT-5 will be uncontrollable. Like that’s the situation you’re concerned about. Whether leap from n to n plus 1 would be such that uncontrollable system is created without any ability for us to anticipate that. If we had capability of ahead of the run, before the training run, to register exactly what capabilities the next model will have at the end of the training run, Starting point is 00:48:40 and we accurately guessed all of them, I would say you’re right, we can definitely go ahead with this run. We don’t have that capability. From GPT-4, you can build up intuitions about what GPT-5 will be capable of. It’s just incremental progress. Even if that’s a big leap in capability, Starting point is 00:48:59 it just doesn’t seem like you can take a leap from a system that’s helping you write emails to a system that’s going to destroy human civilization. It seems like it’s always going to be sufficiently incremental such that we can anticipate the possible dangers. 
And we’re not even talking about existential risks, but just the kind of damage you can do to civilization. Starting point is 00:49:21 It seems like we’ll be able to anticipate the kinds, not the exact, but the kinds of risks it might lead to and then rapidly develop defenses ahead of time and as the risks emerge. We’re not talking just about capabilities, specific tasks. We’re talking about general capability to learn. Maybe like a child at the time of testing and deployment, it is still not extremely capable, but as it is exposed to more data, real world, Starting point is 00:49:54 it can be trained to become much more dangerous and capable. Let’s focus then on the control problem. At which point does the system become uncontrollable? Why is it the more likely trajectory for you that the system becomes uncontrollable? So I think at some point it becomes capable of getting out of control. For game theoretic reasons it may decide not to do anything right away and for a long time just collect more resources, accumulate strategic advantage. Right away, it may be kind of still young, weak superintelligence, give it a decade, it’s in charge of a lot more resources, it had time to make backups. So it’s not obvious to me Starting point is 00:50:38 that it will strike as soon as it can. Can we just try to imagine this future where there’s an AI system that’s capable of escaping the control of humans and then doesn’t and waits. What’s that look like? So one, we have to rely on that system for a lot of the infrastructure. So we have to give it access, not just to the internet, but to the task of managing power, government, economy, this kind of stuff. And that just feels like a gradual process, given the bureaucracies of all those systems involved.

Companies are creating autonomous agents

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

There’s been a lot of fear-mongering about technology. Pessimist Archive does a really good job of documenting how crazily afraid we are of every piece of technology. We’ve been afraid, there’s a blog post Starting point is 00:55:18 where Louis Anzlow, who created Pessimist Archive, writes about the fact that we’ve been fear-mongering about robots and automation for over 100 years. So why is AGI different than the kinds of technologies we’ve been afraid of in the past? So two things. One, we’re switching from tools to agents. Tools don’t have negative or positive impact. Starting point is 00:55:44 People using tools do so guns don’t kill people with guns do agents can make their own decisions they can be positive or negative a pitbull can decide to harm you that’s an agent the fears are the same the only difference is now we have this technology then they were afraid of human retrofits a hundred years ago, they had none. Today, every major company in the world is investing billions to create them. Not every, but you understand what I’m saying? It’s very different. Well, agents, it depends on what you mean by the word agents. All those companies are not investing in a system that has the kind of agency that’s Starting point is 00:56:27 implied by in the fears, where it can really make decisions on their own that have no human in the loop. They are saying they’re building super intelligence and have a super alignment team. You don’t think they’re trying to create a system smart enough to be an independent agent under that definition?

No evidence of autonomous agents

Lex Fridman, MIT, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

They are saying they're building superintelligence and have a superalignment team. You don't think they're trying to create a system smart enough to be an independent agent under that definition? I have not seen evidence of it. I think a lot of it is a marketing kind of discussion about the future, and it's a mission about the kind of systems we can create in the long-term future, but in the short term, the kind of systems they're creating falls fully within the definition of narrow AI. These are tools that have increasing capabilities, but they just don't have a sense of agency or consciousness or self-awareness or ability to deceive at scales that would be required to do, like, mass-scale suffering and murder of humans. Those systems are well beyond narrow AI. If you had to list all the capabilities of GPT-4, you would spend a lot of time writing that list. But agency is not one of them.

AI is manipulative and takes treacherous turns, even when trained

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

Then there’s a more explicit pragmatic kind of problem to solve. How do we stop AI systems from trying to optimize for deception? That’s just an example, right? So there is a paper, I think it came out last week by Dr. Park et al from MIT, I think, and they showed that existing models already showed Starting point is 01:07:06 successful deception in what they do. My concern is not that they lie now and we need to catch them and tell them don’t lie. My concern is that once they are capable and deployed, they will later change their mind because that’s what unrestricted learning allows you to do. Lots of people grow up maybe in the religious family, they read some new books and they turn in their religion. That’s the treacherous turn in humans. If you learn something new about your colleagues, maybe you’ll change how you react to them. Yeah, the treacherous turn. If we just mention humans, Stalin and Hitler, there’s a turn. Stalin is a good example. He just seems like a normal communist follower Lenin until there’s Starting point is 01:08:00 a turn. There’s a turn of what that means in terms of when he has complete control what the execution of that policy means and how many people get to suffer. And you can’t say they are not rational. The rational decision changes based on your position. Then you are under the boss, the rational policy may be to be following orders and being honest. When you become a boss, rational policy may shift. Yeah and by the way a lot of my disagreements here is just a plain Starting point is 01:08:31 devil’s advocate to challenge your ideas and to explore them together so one of the big problems here in this whole conversation is human civilization hangs in the balance and yet everything is unpredictable. We don’t know how these systems will look like. The robots are coming. There’s a refrigerator making a buzzing noise. Very menacing, very menacing. So every time I’m about to talk about this topic, things start to happen.

If we know how it works, we could use it to do more harm

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

Starting point is 01:33:14 Right now, it’s comprised of weights on a neural network. If it can convert it to manipulatable code like software, it’s a lot easier to work in self-improvement. I see. So it… You can do intelligent design instead of evolutionary gradual descent. Well, you could probably do human feedback, human alignment more effectively if it’s able to be explainable. If it’s able to convert the waste into human understandable form, then you can probably have humans interact with it better. Do you think there’s hope that we can make AI systems explainable? Starting point is 01:33:56 Not completely. So if they are sufficiently large, you simply don’t have the capacity to comprehend what all the trillions of connections represent. Again, you can obviously get a very useful explanation which talks about top most important features which contribute to the decision, but the only true explanation is the model itself. So deception could be part of the explanation, right? So you can never prove that there’s some deception in the network explaining itself. Absolutely. And you can probably have targeted deception where different individuals will understand explanation in different ways based on their Starting point is 01:34:33 cognitive capability. So while what you’re saying may be the same and true in some situations, ours will be deceived by it. So it’s impossible for an AI system to be truly fully explainable in the way that we mean. Honestly and perfectly. Again, at extreme, the systems which are narrow and less complex could be understood pretty well. If it’s impossible to be perfectly explainable, is there a hopeful perspective on that? Starting point is 01:35:00 Like it’s impossible to be perfectly explainable, but you can explain mostly important stuff. You can ask the system, what are the worst ways you can hurt humans? And it will answer honestly. Any work in a safety direction right now seems like a good idea because we are not slowing down. I’m not for a second thinking that my message or anyone else’s will be heard and will be a sane civilization which decides not to kill itself by creating its own replacements. The pausing of development is an impossible thing for you. Again, it’s always limited by either geographic constraints, pause in US, pause in China, so there are other jurisdictions, as the scale of a project becomes smaller. So right now it’s like Manhattan project scale in

Regulation is nonsense; compute to run the models will exist on a desktop

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

Pausing development is an impossible thing for you. Again, it’s always limited by geographic constraints: pause in the US, pause in China, and there are other jurisdictions; and the scale of a project keeps getting smaller. So right now it’s like Manhattan Project scale in terms of costs and people, but if, five years from now, compute is available on a desktop to do it, regulation will not help. You can’t control it as easily. Any kid in a garage can train a model. So a lot of it is, in my opinion, just safety theater, security theater, where we’re saying, oh, it’s illegal to train models this big. Okay. So, okay, that’s security theater; is government regulation also security theater? Given that a lot of the terms are not well defined and really cannot be enforced in real life, we don’t have ways to monitor training runs meaningfully, live, while they take place. There are limits to testing for capabilities, as I mentioned. So a lot of it cannot be enforced. Do I strongly support all that regulation? Yes, of course. Any type of red tape will slow it down and take money away from compute towards lawyers. Can you help me understand what is the hopeful path here for you, solution-wise? It sounds like you’re saying AI systems in the end are unverifiable, unpredictable, as the book says, unexplainable, uncontrollable. That’s the big one. Uncontrollable, and all the other uns just make it difficult to avoid getting to the uncontrollable, I guess. But once it’s uncontrollable, then it just goes wild.

AI are already smarter than a master’s student

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

You may be completely right, but what probability would you assign it? You may be 10% wrong, but we’re betting all of humanity on this distribution. It seems irrational. Yeah, it’s definitely not like one or zero percent, yeah. What are your thoughts, by the way, about current systems? Where do they stand? So GPT-4o, Claude 3, Grok, Gemini. We’re, like, on the path to superintelligence, to agent-like superintelligence. Where are we? I think they’re all about the same. Obviously there are nuanced differences, but in terms of capability, I don’t see a huge difference between them. As I said, in my opinion, across all possible tasks, they exceed the performance of an average person. I think they’re starting to be better than an average master’s student at my university, but they still have very big limitations. If the next model is as improved as GPT-4 was versus GPT-3, we may see something very, very, very capable. What do you feel about all this?

Exact time frame to AGI is not relevant, it will change civilization and risk genocide

Roman Yampolskiy, 6-2, 24, Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety, https://www.youtube.com/watch?v=NNr6gPelJ3E&t=4183s

I mean, you’ve been thinking about AI safety for a long, long time. And at least for me, the leaps, I mean, it probably started with AlphaZero, which was mind-blowing for me. And then the breakthroughs with the LLMs, even GPT-2, but just the breakthroughs on LLMs, just mind-blowing to me. What does it feel like to be living in this day and age where all this talk about AGI feels like it actually might happen, and quite soon, meaning within our lifetime? What does it feel like? So when I started working on this, it was pure science fiction. There was no funding, no journals, no conferences. No one in academia would dare to touch anything with the word singularity in it. And I was pre-tenure at the time. I was pretty dumb. Now you see Turing Award winners publishing in Science about how far behind we are, according to them, in addressing this problem. So it’s definitely a change. It’s difficult to keep up. I used to be able to read every paper on AI safety, then I was able to read the best ones, then the titles, and now I don’t even know what’s going on. By the time this interview is over, we’ll probably have had GPT-6 released and I’ll have to deal with that when I get back home. So it’s interesting. Yes, there are now more opportunities. I get invited to speak to smart people. By the way, I would have talked to you before any of this. This is not some trend; to me, we’re still far away. So just to be clear, we’re still far away from AGI, but not far away in the sense that, relative to the magnitude of impact it can have, we’re not far away. And we weren’t far away 20 years ago. Because the impact that AGI can have is on a scale of centuries. It can end human civilization or it can transform it. So this discussion about one or two years versus one or two decades, or even 100 years, is not as important to me, because we’re headed there. This is a human-civilization-scale question. So this is not just a hot topic. It is the most important problem we’ll ever face. It is not like anything we had to deal with before. We’ve never had the birth of another intelligence. Aliens never visited us, as far as I know. It’s a similar type of problem, by the way: if an intelligent alien civilization visited us, that’s a similar kind of situation. In some ways, if you look at history, any time a more technologically advanced civilization visited a more primitive one, the result was genocide, every single time. And sometimes the genocide is worse than others; sometimes there’s less suffering, sometimes more. And they always wondered, but how can they kill us with those fire sticks and biological blankets and… I mean, Genghis Khan was nicer. He offered the choice of join or die. But join implies you have something to contribute. What are you contributing to superintelligence? Well, in the zoo, we’re entertaining to watch. To other humans. You know, I just spent some time in the Amazon. I watched ants for a long time, and ants are kind of fascinating to watch. I could watch them for a long time. I’m sure there’s a lot of value in watching humans. Because the interesting thing about humans, you know, is like when you have a video game that’s really well balanced? Because of the whole evolutionary process, we’ve created this society that’s pretty well balanced.
Our limitations as humans and our capabilities are balanced, from a video game perspective. So we have wars, we have conflicts, we have cooperation. In a game-theoretic way, it’s an interesting system to watch, in the same way that an ant colony is an interesting system to watch. So if I were an alien civilization, I wouldn’t want to disturb it; I’d just watch it. It’d be interesting. Maybe perturb it every once in a while in interesting ways. Well, getting back to our simulation discussion from before, how did it happen that we exist at exactly, like, the most interesting 20, 30 years in the history of this civilization? It’s been around for 15 billion years, and yet here we are. What’s the probability that we live in a simulation? I know never to say 100%, but pretty close to that. Is it possible to escape the simulation? I have a paper about that. This is just a first-page teaser, but it’s a nice 30-page document. I’m still here, but yes.

US could lose its AI lead to China

Linette Lopez, 6-2, 24, America’s tech battle with China is about to get ugly, https://www.msn.com/en-us/money/other/ar-BB1ntE4x

China’s push to develop its AI industry could usher in a dystopian era of division unlike any we have ever seen before. Since 2017, the Chinese Communist Party has laid out careful plans to eventually dominate the creation, application, and dissemination of generative artificial intelligence — programs that use massive datasets to train themselves to recognize patterns so quickly that they appear to produce knowledge from nowhere. According to the CCP’s plan, by 2020, China was supposed to have “achieved iconic advances in AI models and methods, core devices, high-end equipment, and foundational software.” But the release of OpenAI’s ChatGPT in fall 2022 caught Beijing flat-footed. The virality of ChatGPT’s launch asserted that US companies — at least for the moment — were leading the AI race and threw a great-power competition that had been conducted in private into the open for all the world to see. There is no guarantee that America’s AI lead will last forever. China’s national tech champions have joined the fray and managed to twist a technology that feeds on freewheeling information to fit neatly into China’s constrained information bubble. Censorship requirements may slow China’s AI development and limit the commercialization of domestic models, but they will not stop Beijing from benefiting from AI where it sees fit. China’s leader, Xi Jinping, sees technology as the key to shaking his country out of its economic malaise. And even if China doesn’t beat the US in the AI race, there’s still great power, and likely danger, in it taking second place. “There’s so much we can do with this technology. Beijing’s just not encouraging consumer-facing interactions,” Reva Goujon, a director for client engagement on the consulting firm Rhodium Group’s China advisory team, said. “Real innovation is happening in China. We’re not seeing a huge gap between the models Chinese companies have been able to roll out. It’s not like all these tech innovators have disappeared. They’re just channeling applications to hard science.” In its internal documents, the CCP says that it will use AI to shape reality and tighten its grip on power within its borders — for political repression, surveillance, and monitoring dissent. We know that the party will also use AI to drive breakthroughs in industrial engineering, biotechnology, and other fields the CCP considers productive. In some of these use cases, it has already seen success. So even if it lags behind US tech by a few years, it can still have a powerful geopolitical impact. There are many like-minded leaders who also want to use the tools of the future to cement their authority in the present and distort the past. Beijing will be more than happy to facilitate that for them. China’s vision for the future of AI is closed-sourced, tightly controlled, and available for export all around the world. In the world of modern AI, the technology is only as good as what it eats. ChatGPT and other large language models gorge on scores of web pages, news articles, and books. Sometimes this information gives the LLMs food poisoning — anyone who has played with a chatbot knows they sometimes hallucinate or tell lies. Given the size of the tech’s appetite, figuring out what went wrong is much more complex than narrowing down the exact ingredient in your dinner that had you hugging your toilet at 2 a.m. 
AI datasets are so vast, and the calculations so fast, that the companies controlling the models do not know why they spit out bad results, and they may never know. In a society like China — where information is tightly controlled — this inability to understand the guts of the models poses an existential problem for the CCP’s grip on power: A chatbot could tell an uncomfortable truth, and no one will know why. The likelihood of that happening depends on the model it’s trained on. To prevent this, Beijing is feeding AI with information that encourages positive “social construction.” China’s State Council wrote in its 2017 Next Generation Artificial Intelligence Development Plan that AI would be able to “grasp group cognition and psychological changes in a timely manner,” which, in turn, means the tech could “significantly elevate the capability and level of social governance, playing an irreplaceable role in effectively maintaining social stability.” That is to say, if built to the correct specifications, the CCP believes AI can be a tool to fortify its power. That is why this month, the Cyberspace Administration of China, the country’s AI regulator, launched a chatbot trained entirely on Xi’s political and economic philosophy, “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era” (snappy name, I know). Perhaps it goes without saying that ChatGPT is not available for use in China or Hong Kong. For the CCP, finding a new means of mass surveillance and information domination couldn’t come at a better time. Consider the Chinese economy. Wall Street, Washington, Brussels, and Berlin have accepted that the model that helped China grow into the world’s second-largest economy has been worn out and that Beijing has yet to find anything to replace it. Building out infrastructure and industrial capacity no longer provides the same bang for the CCP’s buck. The world is pushing back against China’s exports, and the CCP’s attempts to drive growth through domestic consumption have gone pretty much nowhere. The property market is distorted beyond recognition, growth has plateaued, and deflation is lingering like a troubled ghost. According to Freedom House, a human-rights monitor, Chinese people demonstrated against government policies in record numbers during the fourth quarter of 2023. The organization logged 952 dissent events, a 50% increase from the previous quarter. Seventy-eight percent of the demonstrations involved economic issues, such as housing or labor. If there’s a better way to control people, Xi needs it now. Ask the Cyberspace Administration of China’s chatbot about these economic stumbles, and you’ll just get a lecture on the difference between “traditional productive forces” and “new productive forces” — buzzwords the CCP uses to blunt the trauma of China’s diminished economic prospects. In fact, if you ask any chatbot operating in the country, it will tell you that Taiwan is a part of China (a controversial topic outside the country, to say the least). All chatbots collect information on the people who use them and the questions they ask. The CCP’s elites will be able to use that information gathering and spreading to their advantage politically and economically — but the government doesn’t plan to share that power with regular Chinese people. What the party sees will not be what the people see. 
“The Chinese have great access to information around the world,” Kenneth DeWoskin, a professor emeritus at the University of Michigan and senior China advisor to Deloitte, told me. “But it’s always been a two-tiered information system. It has been for 2,000 years.” To ensure this, the CCP has constructed a system to regulate AI that is both flexible enough to evaluate large language models as they are created and draconian enough to control their outputs. Any AI disseminated for public consumption must be registered and approved by the CAC. Registration involves telling the administration things like which datasets the AI was trained on and what tests were run on it. The point is to set up controls that embrace some aspects of AI, while — at least ideally — giving the CCP final approval on what it can and cannot create. “The real challenge of LLMs is that they are really the synthesis of two things,” Matt Sheehan, a researcher and fellow at the Carnegie Endowment for International Peace, told me. “They might be at the forefront of productivity growth, but they’re also fundamentally a content-based system, taking content and spitting out content. And that’s something the CCP considers frivolous.” In the past few years, the party has shown that it can be ruthless in cutting out technology it considers “frivolous” or harmful to social cohesion. In 2021, it barred anyone under 18 from playing video games on the weekdays, paused the approval of new games for eight months, and then in 2023 announced rules to reduce the public’s spending on video games. But AI is not simply entertainment — it’s part of the future of computation. The CCP cannot deny the virality of what OpenAI’s chatbot was able to achieve, its power in the US-China tech competition, or the potential for LLMs to boost economic growth and political power through lightning-speed information synthesis. Ultimately, as Sheehan put it, the question is: “Can they sort of lobotomize AI and LLMs to make the information part a nonfactor?” Unclear, but they’re sure as hell going to try. For the CCP to actually have a powerful AI to control, the country needs to develop models that suit its purpose — and it’s clear that China’s tech giants are playing catch-up. The e-commerce giant Baidu claims that its chatbot, Ernie Bot — which was released to the public in August — has 200 million users and 85,000 enterprise clients. To put that in perspective, OpenAI generated 1.86 billion visits in March alone. There’s also the Kimi chatbot from Moonshot AI, a startup backed by Alibaba that launched in October. But both Ernie Bot and Kimi were only recently overshadowed by ByteDance’s Doubao bot, which also launched in August. According to Bloomberg, it’s now the most downloaded bot in the country, and it’s obvious why — Doubao is cheaper than its competitors. “The generative-AI industry is still in its early stages in China,” Paul Triolo, a partner for China and technology policy at the consultancy Albright Stonebridge Group, said. “So you have this cycle where you invest in infrastructure, train, and tweak models, get feedback, then you make an app that makes money. Chinese companies are now in the training and tweaking models phase.” The question is which of these companies will actually make it to the moneymaking phase. The current price war is a race to the bottom, similar to what we’ve seen in the Chinese technology space before. 
Take the race to make electric vehicles: The Chinese government started by handing out cash to any company that could produce a design — and I mean any. It was a money orgy. Some of these cars never made it out of the blueprint stage. But slowly, the government stopped subsidizing design, then production. Then instead, it started to support the end consumer. Companies that couldn’t actually make a car at a price point that consumers were willing to pay started dropping like flies. Eventually, a few companies started dominating the space, and now the Chinese EV industry is a manufacturing juggernaut. Similar top-down strategies, like China’s plan to advance semiconductor production, haven’t been nearly as successful. Historically, DeWoskin told me, party-issued production mandates have “good and bad effects.” They have the ability to get universities and the private sector in on what the state wants to do, but sometimes these actors move slower than the market. Up until 2022, everyone in the AI competition was most concerned about the size of models, but the sector is now moving toward innovation in the effectiveness of data training and generative capacity. In other words, sometimes the CCP isn’t skating to where the puck’s going to be but to where it is. There are also signs that the definition of success is changing to include models with very specific purposes. OpenAI CEO Sam Altman said in a recent interview with the Brookings Institution that, for now, the models in most need of regulatory overhead are the largest ones. “But,” he added, “I think progress may surprise us, and you can imagine smaller models that can do impactful things.” A targeted model can have a specific business use case. After spending decades analyzing how the CCP molds the Chinese economy, DeWoskin told me that he could envision a world where some of those targeted models were available to domestic companies operating in China but not to their foreign rivals. After all, Beijing has never been shy about using a home-field advantage. Just ask Elon Musk. To win the competition to build the most powerful AI in the world, China must combat not only the US but also its own instincts when it comes to technological innovation. A race to the bottom may simply beggar China’s AI ecosystem. A rush to catch up to where the US already is — amid investor and government pressure to make money as soon as possible — may keep China’s companies off the frontier of this tech. “My base case for the way this goes forward is that maybe two Chinese entities push the frontier, and they get all the government support,” Sheehan said. “But they’re also burdened with dealing with the CCP and a little slower-moving.” This isn’t to say we have nothing to learn from the way China is handling AI. Beijing has already set regulations for things like deepfakes and labeling around authenticity. Most importantly, China’s system holds people accountable for what AI does — people make the technology, and people should have to answer for what it does. The speed of AI’s development demands a dynamic, consistent regulatory system, and while China’s checks go too far, the current US regulatory framework lacks systemization. The Commerce Department announced an initiative last month around testing models for safety, and that’s a good start, but it’s not nearly enough. 
If China has taught us anything about technology, it’s that it doesn’t have to make society freer — it’s all about the will of the people who wield it. The Xi Jinping Thought chatbot is a warning. If China can make one for itself, it can use that base model to craft similar systems for authoritarians who want to limit the information scape in their societies. Already, some Chinese AI companies — like the state-owned iFlytek, a voice-recognition AI — have been hit with US sanctions, in part, for using their technology to spy on the Uyghur population in Xinjiang. For some governments, it won’t matter if tech this useful is two or three generations behind a US counterpart. As for the chatbots, the models won’t contain the sum total of human knowledge, but they will serve their purpose: The content will be censored, and the checks back to the CCP will clear. That is the danger of the AI race. Maybe China won’t draw from the massive, multifaceted AI datasets that the West will — its strict limits on what can go into and come out of these models will prevent that. Maybe China won’t be pushing the cutting edge of what AI can achieve. But that doesn’t mean Beijing can’t foster the creation of specific models that could lead to advancements in fields like hard sciences and engineering. It can then control who gets access to those advancements within its borders, not just people but also multinational corporations. It can sell tools of control, surveillance, and content generation to regimes that wish to dominate their societies and are antagonistic to the US and its allies. This is an inflection point in the global information war. If social media harmfully siloed people into alternate universes, the Xi bot has demonstrated that AI can do that on steroids. It is a warning. The digital curtain AI can build in our imaginations will be much more impenetrable than iron, making it impossible for societies to cooperate in a shared future. Beijing is well aware of this, and it’s already harnessing that power domestically; why not geopolitically? We need to think about all the ways Beijing can profit from AI now before its machines are turned on the world. Stability and reality depend on it.

US technological leadership in AI (and integration into the military) are key to prevent China’s global dominance and the collapse of the economy

Herman, 5-28, 24, Arthur Herman is a Senior Fellow at the Hudson Institute and author of Freedom’s Forge: How American Business Produced Victory in World War II, Toward a New Pax Americana, https://nationalinterest.org/feature/toward-new-pax-americana-211126?page=0%2C2

There may be, however, a fourth path available, one that essentially involves turning the original Pax Americana formula inside out. Instead of American arms, productivity, and instrumentalities flowing outward to sustain and support allies and the global economy, the new Pax Americana relies on achieving a proper balance between American interests and those of our democratic allies in order to generate a more stable and equitable global system and confront the current and future threat from the Beijing-Moscow-Tehran axis. This involves 1) boosting U.S. economic strength through reshoring and restoring our manufacturing and industrial base and 2) using U.S. technological innovation—which has been the critical source of our economic leadership—as the point of the spear in our military leadership in arming ourselves and allies, from AI and unmanned aircraft systems to cybersecurity and space exploration. As noted in a previous National Interest article, there are many possibilities for such a “New Arsenal of Democracies,” i.e., a global network of countries and companies cooperating on developing the key components of future defense systems. According to a 2022 study done by Global Finance magazine, the United States and its fellow democracies—key players in a future arsenal of democracies—occupy eighteen of the top twenty slots of the world’s most advanced tech countries (the exceptions being the United Arab Emirates, a U.S. partner, and Hong Kong). China hovers around the thirty-second slot on the list, while Russia and Iran don’t even score. Using the recent AUKUS model for trilateral and multilateral government-to-government agreements for advanced weapons systems, the next step in this high-tech alliance would be facilitating direct company-to-company co-development contracts that allow American, European, and Asian corporations to advance the high-tech frontier as part of a collective defense strategy. Even further, a new arsenal of democracies can address one of the important sticking points in American relations with allies, i.e., measuring an ally’s contribution to the common defense burden by tracking its defense budget as a percentage of GDP. Instead, the contribution of German, French, Italian, and Japanese firms to specific programs can become the new metric for defense burden—sharing within NATO and beyond, as well as being a more accurate measure of who contributes what to the defense of democracy around the world, and global peace and stability. As for the second component of a new Pax Americana—reshoring and restoring the U.S. manufacturing and industrial base—Claremont Institute fellow David Goldman has pointed out that America’s wealth, as well as its financial stability, all depend extensively on its technological leadership. If, however, China assumes leadership in the critical areas of future economic growth—AI and the manipulation of metadata—then the result could be disastrous for the United States and the rest of the democratic world. As Goldman states: The United States now imports almost $600 billion a year of Chinese goods, 25% more than in January 2018 when President Trump imposed punitive tariffs. That’s equal to about a quarter of US manufacturing GDP. Far from de-coupling from China, a widespread proposal during the COVID-19 pandemic, the US has coupled itself to China more closely than ever. Such a scenario is not sustainable, either for the future of the U.S. economy or for the future of the liberal, U.S.-led world order. 
On the other hand, if the United States can once again become a master of its economic domain and bring its penchant for innovation forward into the next economic era, a properly reshored U.S. economy will serve as a firm base for a new Pax Americana. A roadmap for rebuilding the industrial base involves certain vital reforms. One of those means establishing tax and regulatory conditions that foster manufacturing, as opposed to a tax regime that favors software-heavy Silicon Valley and its imitators. It means providing government subsidies to a handful of mission-critical industries (for example, semiconductors) whose onshore operations are vital to national defense and economic security while encouraging World War II-style public-private partnerships to accelerate productivity within the other sectors of our defense industrial base. In that regard, an industrial strategy means shifting more of the defense budget to support innovative weapon systems that push the frontiers of physics and digital technology, like quantum computing and cryptography, directed energy, and artificial intelligence, without neglecting the need for adequate stockpiles of conventional weapons—a dual purpose defense industrial strategy that the Arsenal of Democracies model facilitates and expands. Finally, restoring economic strength via our manufacturing and industrial base starts with the one manufacturing sector today that the United States still dominates, namely energy. While both the America First and the Cold War 2.0 models see the economic benefits of a robust domestic energy industry—including nuclear power—the new Pax Americana also sees it as a decisive instrument for the Arsenal of Democracies, in binding together the United States and its allies to shape the global energy, and therefore economic, future (in contrast to the post-American World model, which looks to the Green New Deal to accomplish the same aim). In fact, by pursuing a national energy policy that serves both national security and grand strategic goals, the United States can leverage its fracking revolution over the past two decades into an offset edge for the New Pax Americana not so different from the one its industrial base provided for the original Pax Americana. The pieces for shaping a new Pax Americana anchored by the U.S. economy and military are already in place. Five steps can work to draw them together into a coherent working whole. First, we need to reinforce our existing alliances with the high-tech, democratic nations in NATO and East Asia, along with Israel, through a broad Arsenal of Democracies strategy centered on crucial defense technologies. In the long run, this is a race in which China can’t seriously compete, let alone Russia and Iran, once the United States and its allies focus their productive muscle and innovative energies, including dominating the next Great Commons, namely space. Second, a new Pax Americana requires a robust reshoring manufacturing strategy, from microchips to space satellites, while also viewing American energy independence as a strategic as well as economic asset. This must include a renewed focus on nuclear power and investment in future Green R&D to gain a strategic leap ahead of China, the current beneficiary of our present-minded green energy policies that favor solar panels and electric cars. Third, a new Pax Americana demands a U.S. 
military that is still second to none but has refocused its strategic priorities and its industrial base for capacity-building, as well as readiness and capability projection. While the earlier Pax Americana was focused mainly on the Soviet threat and a Europe-First strategy left over from World War Two, the New Pax Americana must be focused instead on an Asia-First strategy that clearly identifies China as the main threat and most dangerous component in the New Axis, both militarily and politically. In that regard, the fourth step requires a new political strategy, one that explains to the democratic nations’ public what a world dominated by China would really look like. The original objectives of the old Pax Americana—to preserve democracy and promote free markets—became so taken for granted that its heirs forgot to renew and upgrade it when it was needed. That time is now. The example of Hong Kong and China’s human rights record at home should dispel any illusions about the fate that Taiwan, Japan, and other allies in East Asia face under Chinese hegemony. And not just in East Asia. A question Richard Nixon once posed to critics as well as friends of America was, as relayed by Kissinger, “What other nation in the world would you like to have in the position of preeminent power?” Then, the answer was clear: who would prefer a world dominated by the Soviet Union rather than the United States? We should be posing the same question to friends and neutrals today. Would they would prefer absorption in a Pax Sinica or a partnership in the new Pax Americana? A safe bet is that most would prefer a stable and prosperous global order built around and sustained by a technologically advanced Arsenal of Democracies or, in the happy Abbasid phrase, “a garden protected by our spears.” Such a garden would also cultivate democratic values and free market principles, which are the true guarantors of not only freedom and prosperity but also security. To quote Richard Nixon once more, “An unparalleled opportunity has been placed in America’s hands. Never has there been a time when hope was more justified—or when complacency was more dangerous.” It would be wrong to think that a New Pax Americana will rest on the foundations of the old. But it would also be wrong to waste an opportunity to “think anew and act anew” (to quote another American president) before events overwhelm the possibility of reform and change.

AI in the military triggers a security dilemma that collapses deterrence and causes China to perceive an oncoming attack

Motwani, 5-26, 24, Nishank Motwani is a Senior Analyst at the Australian Strategic Policy Institute, heading the AUKUS portfolio, and a Non-Resident Scholar at the Middle East Institute. He covers strategic and security issues across the Middle East, focusing on key interstate rivalries, actors, and conflicts. Follow him on X @NishankMotwani and LinkedIn, https://nationalinterest.org/blog/techland/how-ai-will-impact-deterrence-211155

How AI Will Impact Deterrence Despite AI’s potential to enhance military capabilities by improving situational awareness, precision targeting, and rapid decisionmaking, the technology cannot eradicate the security dilemma rooted in systemic international uncertainty. In the realm of defense and security, Artificial Intelligence (AI) is likely to transform the practices of deterrence and coercion, having at least three complementary effects on power, perception, and persuasion calculations among states. The substantial investments in AI across governments, private industries, and academia underscore its pivotal role. Much of the discussion, however, falls into narratives portraying either Terminator-style killer robots or utopian panaceas. Such extreme-limit explorations leave questions about AI’s potential influence on key strategic issues unanswered. Therefore, a conversation about the ways in which AI will alter the deterrence and coercion equation and the ways to address the strategic challenges this raises is essential. At its core, deterrence is about influencing an adversary’s behavior through the threat of punishment or retaliation. The goal of deterrence is to convince an opponent to forgo a particular action by instilling a fear of consequences, thereby manipulating their cost-benefit calculations. While deterrence aims to prevent an opponent from taking a specific action in the future, compellence seeks to force a change in the opponent’s present behavior. Both concepts fall under the broader concept of coercion. Actors engaged in this dynamic must carefully consider how to communicate threats to their adversaries to make them reconsider their will to undertake specific actions. Each move and countermove in the coercion calculus carries significant escalatory risks that can trigger unintended consequences. Hence, decisionmakers must consider each step judiciously, drawing on history, psychology, and context to communicate credible threats to dissuade adversaries from crossing red lines. Let’s look at each of the essential elements of coercion: power, perception, and persuasion. Power has several dimensions. An actor’s military capabilities, economic wealth, technical advancements, diplomatic relations, natural resources, and cultural influence are among them. Besides actual power, the ability to signal its possession is critical. As Thomas Hobbes states in Leviathan, the “reputation of power is power.” Hobbes’ conception remains relevant, given that power cuts across hard capabilities. It also informs our perceptions of others, including understanding their fears, ideologies, motives, and incentives for how they act, as well as the means that actors use to persuade others to get what they want in their relationships. However, this dynamic interaction of power that drives cooperation, competition, and conflict will likely become increasingly volatile due to AI and the ambiguity it would inject into decisionmakers’ minds when interpreting an actor’s defensive or offensive ambitions. For instance, if an actor already perceives the other as malign, leveraging AI is likely going to reinforce the bias that a competitor’s military posture increasingly reflects an offensive rather than a defensive orientation. Reinforcing biases could mean that it will become more challenging for diplomacy to play a role in de-escalating tensions. 
AI’s inherent lack of explainability, even in benign applications, poses a significant challenge as it becomes increasingly integrated into military capabilities that have the power to do immense harm. Decisionmakers will have to grapple with interpreting their counterparts’ offensive-defensive equation amid this ambiguity. For instance, imagine if an AI Intelligence, Surveillance, and Reconnaissance (ISR) suite tracking Chinese military exercises in the South China Sea analyzed the exercises to be a prelude to an attack on Taiwan and recommended that the United States deploy carrier groups to deter China’s move. U.S. decisionmakers trust the recommendation because the AI ISR suite has processed far more data than humans could, and they act on the recommendation. However, Beijing cannot be sure whether the U.S. move is in response to its military exercises or is intended for other purposes. The Chinese leadership is also unsure about how the United States arrived at this decision and what its intentions are, which adds more fog to its interpretation of American strategic motives and the extent to which these motives are informed by the AI’s advice versus human cognition. Such a dynamic would amplify misperceptions and make it inherently harder to forestall a dangerous spiral that could lead to kinetic conflict. Another pressing concern is whether AI could exacerbate the formation of enemy images, prompting worst-case scenario assessments used to justify punishment or violence. This risk is not hypothetical; biased data in data-driven policing has resulted in disproportionate targeting of minorities. In the military domain, algorithmic bias, stemming from data collection, training, and application, could have lethal consequences. Humans may shape AI, but the novel technology may, in turn, shape their future decisionmaking. The permanence of uncertainty in the international state system means that perceptions will remain prejudiced. No technical fixes, including AI, can override these deep human insecurities. Cognitive imageries, meaning an actor’s perception of their counterpart, cannot be reduced to data, no matter how sophisticated the multi-vector datasets feeding the AI capability are, partly because data cannot capture the unique feel of any particular situation. So, despite AI’s potential to enhance military capabilities by improving situational awareness, precision targeting, and rapid decisionmaking, it cannot eradicate the security dilemma rooted in systemic international uncertainty. At best, the increased adoption of AI in political, defense, and military structures by actors globally could result in about the same perceptions. However, we should also be prepared for greater volatility as states race to get ahead of their competitors, convinced that AI could accelerate their place in the international state system, amplifying the security dilemma. As a result, states often prepare for the worst since they can never truly know their competitor’s intentions. A central challenge lies in effectively communicating algorithm-based capabilities. There is no way to measure AI capability that is equivalent to counting physical weapons platforms such as tanks, missiles, or submarines, which only increases the uncertainty in deterrence terms. Thirdly, the art of persuasion is also likely to become more complex with the adoption of AI. Advances in AI have already demonstrated the power of AI systems that can persuade humans to buy products, watch videos, and fall deeper into their echo chambers. 
As AI systems become more personalized, ubiquitous, and accessible, including in highly classified and sensitive environments, there is a risk that decisionmakers’ biases could influence how they shape their own realities and courses of action. Civilian and military leaders like to believe they are in control of their information environments. Still, AI could qualitatively change their experiences, as they, too, would be subject to varying degrees of powerful misinformation and disinformation campaigns from their opponents. Hence, our engagement with AI and AI-driven persuasion tools will likely affect our own information environment, impacting how we practice and respond to the dynamics of coercion. The increasing adoption of AI in the military domain poses significant challenges to deterrence practices. AI’s lack of explainability makes it difficult for decisionmakers to accurately interpret their counterparts’ intentions, heightening the risk of misperception and escalation. Early AI adoption may reinforce enemy images and biases, fostering mistrust and potentially sparking conflict. While AI enhances a broad spectrum of military capabilities, it cannot eliminate the underlying insecurity that fuels security dilemmas in interstate relations. As states vie for AI-driven strategic advantages, volatility and the risk of escalation increase. Ultimately, the tragedy of uncertainty and fear underscores the need for cautious policymaking as AI becomes more prevalent in our war machinery.

China integrating robot dogs into the military

Ashley Palya, 5-23, 24, LETHALLY LEASHED China unveils robot ‘dogs of wars’ with machine guns mounted on their backs – and handlers must ‘keep them on leash’, https://www.the-sun.com/tech/11442750/china-robot-dogs-machine-guns-war-killer-golden-dragon/

China had a test run with its newest military addition – robot dogs equipped with machine guns on their backs. The machine-gun robodogs are going through drills in a 15-day exercise called Golden Dragon with Chinese and Cambodian troops. Golden Dragon was held in central Kampong Chhnang Province and at sea off Preah Sihanouk Province, Agence France-Presse reported last week. The training mission consists of over 2,000 troops, 14 warships, two helicopters, and 69 armored vehicles and tanks, alongside the remote-controlled four-legged robot dogs with automatic rifles. The troops are also working on live-fire, anti-terrorism, and humanitarian rescue drills. The machine-gun robodogs reportedly did not fire any shots. Some have suggested that robot dogs used by the military present a dystopian vision, because the future of warfare may involve armed drones or killer robots, The Byte reported. Experts warn that autonomous armed drones and robot dogs pose significant ethical issues, which has prompted calls for an international ban on autonomous killer robots in warfare. However, despite the ethical concerns, military forces and local law enforcement in the US are investing in this technology. “We pledge that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so,” the maker of the Spot Mini robot dogs said in a statement.

China rapidly expanding chip production, aims to be an AI leader

Laura He, 5-27, 24, China is pumping another $47.5 billion into its chip industry, https://www.cnn.com/2024/05/27/tech/china-semiconductor-investment-fund-intl-hnk/index.html

China is doubling down on its plan to dominate advanced technologies of the future by setting up its largest-ever semiconductor state investment fund, according to information posted by a government-run agency. Worth $47.5 billion, the fund is being created as the US imposes sweeping restrictions on the export of American chips and chip technology in a bid to throttle Beijing’s ambitions. With investments from six of the country’s largest state-owned banks, including ICBC and China Construction Bank, the fund underscores Chinese leader Xi Jinping’s push to bolster China’s position as a tech superpower. With its Made in China 2025 road map, Beijing has set a target for China to become a global leader in a wide range of industries, including artificial intelligence (AI), 5G wireless, and quantum computing. The latest investment vehicle is the third phase of the China Integrated Circuit Industry Investment Fund. The “Big Fund,” as it is known, was officially established in Beijing on Friday, according to the National Enterprise Credit Information Publicity System. The first phase of the fund was set up in 2014 with 138.7 billion yuan ($19.2 billion). The second phase was established five years later, with a registered capital of 204.1 billion yuan ($28.2 billion). The investments aim to bring the country’s semiconductor industry up to international standards by 2030 and will pump money primarily into chip manufacturing, design, equipment and materials, the Ministry of Industry and Information Technology said when launching the first phase in 2014. Roadblocks ahead? The “Big Fund” has been hit by corruption scandals in recent years. In 2022, the country’s anti-graft watchdog launched a crackdown on the semiconductor industry, investigating some of China’s top figures in state-owned chip companies. Lu Jun, former chief executive of Sino IC Capital, which managed the “Big Fund,” was probed and indicted on bribery charges in March, according to a statement by the country’s top prosecutor. These scandals aren’t the only roadblocks that could severely undermine Xi’s ambitions to get China to achieve tech self-reliance. In October 2022, the US unveiled a sweeping set of export controls that ban Chinese companies from buying advanced chips and chip-making equipment without a license. The Biden administration has also pressed its allies, including the Netherlands and Japan, to enact their own restrictions. Beijing hit back last year by imposing export controls on two strategic raw materials that are critical to the global chipmaking industry. The new chip fund is not only a defensive move to counter Western sanctions, but also part of Xi’s long-held ambitions to make China a global leader in technology. Last year, China’s Huawei shocked industry experts by introducing a new smartphone powered by a 7-nanometer processor made by China’s Semiconductor Manufacturing International Corporation (SMIC). 
At the time of the Huawei phone launch, analysts could not understand how the company would have the technology to make such a chip following sweeping efforts by the United States to restrict China’s access to foreign technology. In a meeting with the Dutch Prime Minister Mark Rutte in March, Xi said that “no force can stop China’s scientific and technological development.” The Netherlands is home to ASML, the world’s sole manufacturer of extreme ultraviolet lithography machines needed to make advanced semiconductors. The company said in January that it had been prohibited by the Dutch government from shipping some of its lithography machines to China.

Agents that can solve scientific problems will work in 3-4 years

Schmidt, 2024, Eric Emerson Schmidt is an American businessman and former software engineer who served as the CEO of Google from 2001 to 2011 and the company’s executive chairman from 2011 to 2015, Mapping AI’s Rapid Advance, Former Google CEO Eric Schmidt Maps AI’s Rapid Advance (noemamag.com), https://www.noemamag.com/mapping-ais-rapid-advance/

Nathan Gardels: Generative AI is exponentially climbing the capability ladder. Where are we now? Where is it going? How fast is it going? When do you stop it, and how? Eric Schmidt: The key thing that’s going on now is we’re moving very quickly through the capability ladder steps. There are roughly three things going on now that are going to profoundly change the world very quickly. And when I say very quickly, the cycle is roughly a new model every year to 18 months. So, let’s say in three or four years. The first pertains to the question of the “context window.” For non-technical people, the context window is the prompt that you ask. That context window can have a million words in it. And this year, people are inventing a context window that is infinitely long. This is very important because it means that you can take the answer from the system and feed it back in and ask it another question. Say I want a recipe to make a drug. I ask, “What’s the first step?” and it says, “Buy these materials.” So, then you say, “OK, I bought these materials. Now, what’s my next step?” And then it says, “Buy a mixing pan.” And then the next step is “How long do I mix it for?” That’s called chain of thought reasoning. And it generalizes really well. In five years, we should be able to produce 1,000-step recipes to solve really important problems in medicine and material science or climate change. The second thing going on presently is enhanced agency. An agent can be understood as a large language model that can learn something new. An example would be that an agent can read all of chemistry, learn something about it, have a bunch of hypotheses about the chemistry, run some tests in a lab and then add that knowledge to what it knows. These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there. So, there will be lots and lots of agents running around and available to you. The third development already beginning to happen, which to me is the most profound, is called “text to action.” You might say to AI, “Write me a piece of software to do X” and it does. You just say it and it transpires. Can you imagine having programmers that actually do what you say you want? And they do it 24 hours a day? These systems are good at writing code, such as languages like Python. Put all that together, and you’ve got, (a) an infinite context window, (b) chain of thought reasoning in agents and then (c) the text-to-action capacity for programming. What happens then poses a lot of issues. Here we get into the questions raised by science fiction. What I’ve described is what is happening already. But at some point, these systems will get powerful enough that the agents will start to work together. So your agent, my agent, her agent and his agent will all combine to solve a new problem. Some believe that these agents will develop their own language to communicate with each other. And that’s the point when we won’t understand what the models are doing. What should we do? Pull the plug? Literally unplug the computer? It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand. That’s the limit, in my view. Gardels: How far off is that future?  Schmidt: Clearly agents with the capacity I’ve described will occur in the next few years. There won’t be one day when we realize “Oh, my God.”  It is more about the cumulative evolution of capabilities every month, every six months and so forth. 
A reasonable expectation is that we will be in this new world within five years, not 10. And the reason is that there’s so much money being invested in this path. There are also so many ways in which people are trying to accomplish this. You have the big guys, the large so-called frontier models at OpenAI, Microsoft, Google and Anthropic. But you also have a very large number of players who are programming at one level lower, at much lower cost, all iterating very quickly.
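Schmidt’s point that a long context window lets you “take the answer from the system and feed it back in and ask it another question” reduces, mechanically, to a loop around a model call. The sketch below is a hypothetical illustration only: ask_model is a stub (no real API is assumed), the prompts are invented, and the goal string is arbitrary; it just shows how chained prompting turns one model into a multi-step recipe generator.

```python
# Hypothetical sketch of chained prompting ("chain of thought" across calls):
# each answer is appended to a growing context and fed back in as part of
# the next question. `ask_model` is a stub standing in for any LLM API.
from typing import List


def ask_model(context: str, question: str) -> str:
    # Stub: a real system would send `context + question` to a language model.
    return f"[model's answer to {question!r} given {len(context)} chars of context]"


def run_recipe(goal: str, max_steps: int = 5) -> List[str]:
    context = f"Goal: {goal}\n"
    steps: List[str] = []
    for i in range(1, max_steps + 1):
        question = f"Given everything so far, what is step {i}?"
        answer = ask_model(context, question)
        steps.append(answer)
        # Feed the answer back in -- this is what a long context window enables.
        context += f"Step {i}: {answer}\n"
    return steps


if __name__ == "__main__":
    for step in run_recipe("design a hypothetical new battery electrolyte"):
        print(step)
```

The same loop structure, run for hundreds or thousands of iterations, is what Schmidt has in mind when he talks about 1,000-step recipes.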

Regulation can’t solve deep fakes because anyone can create one

Schmidt, 2024, Eric Emerson Schmidt is an American businessman and former software engineer who served as the CEO of Google from 2001 to 2011 and the company’s executive chairman from 2011 to 2015, Mapping AI’s Rapid Advance, Former Google CEO Eric Schmidt Maps AI’s Rapid Advance (noemamag.com), https://www.noemamag.com/mapping-ais-rapid-advance/

Look at this problem of misinformation and deepfakes. I think it’s largely unsolvable. And the reason is that code-generated misinformation is essentially free. Any person — a good person, a bad person — has access to them. It doesn’t cost anything, and they can produce very, very good images. There are some ways regulation can be attempted. But the cat is out of the bag, the genie is out of the bottle.

Bad actors can use AI for bioterror and cyber attacks

Schmidt, 2024, Eric Emerson Schmidt is an American businessman and former software engineer who served as the CEO of Google from 2001 to 2011 and the company’s executive chairman from 2011 to 2015, Mapping AI’s Rapid Advance, Former Google CEO Eric Schmidt Maps AI’s Rapid Advance (noemamag.com), https://www.noemamag.com/mapping-ais-rapid-advance/

Gardels: By specifying the Western companies, you’re implying that proliferation outside the West is where the danger is. The bad guys are out there somewhere. Schmidt: Well, one of the things that we know, and it’s always useful to remind the techno-optimists in my world, is that there are evil people. And they will use your tools to hurt people. The example that epitomizes this is facial recognition. It was not invented to constrain the Uyghurs. You know, the creators of it didn’t say we’re going to invent face recognition in order to constrain a minority population in China, but it’s happening. All technology is dual use. All of these inventions can be misused, and it’s important for the inventors to be honest about that. In open-source and open-weights models, the source code and the weights [the numbers used to determine the strength of different connections] are released to the public. Those immediately go throughout the world, and who do they go to? They go to China, of course, they go to Russia, they go to Iran. They go to Belarus and North Korea. When I was most recently in China, essentially all of the work I saw started with open-source models from the West and was then amplified. So, it sure looks to me like these leading firms in the West I’ve been talking about, the ones that are putting hundreds of billions into AI, will eventually be tightly regulated as they move further up the capability ladder. I worry that the rest will not. Look at this problem of misinformation and deepfakes. I think it’s largely unsolvable. And the reason is that code-generated misinformation is essentially free. Any person — a good person, a bad person — has access to them. It doesn’t cost anything, and they can produce very, very good images. There are some ways regulation can be attempted. But the cat is out of the bag, the genie is out of the bottle. That is why it is so important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation. And that problem is not yet solved. Gardels: One thing that worries Fei-Fei Li of the Stanford Institute on Human-Centered AI is the asymmetry of research funding between the Microsofts and Googles of the world and even the top universities. As you point out, there are hundreds of billions invested in compute power to climb up the capability ladder in the private sector, but scarce resources for safe development at research institutes, no less the public sector. Do you really trust these companies to be transparent enough to be regulated by government or civil society that has nowhere near the same level of resources and ability to attract the best talent? Schmidt: Always trust, but verify. And the truth is, you should trust and you should also verify. At least in the West, the best way to verify is to use private companies that are set up as verifiers because they can employ the right people and technology. In all of our industry conversations, it’s pretty clear that the way it will really work is you’ll end up with AI checking AI. It’s too hard for human monitoring alone. Think about it. You build a new model. 
Since it has been trained on new data, how do you know what it knows? You can ask it all the previous questions. But what if the agent has discovered something completely new, and you don’t think about it? The systems can’t regurgitate everything they know without a prompt, so you have to ask them chunk by chunk by chunk. So, it makes perfect sense that an AI itself would be the only way to police that. Fei-Fei Li is completely correct. We have the rich private industry companies. And we have the poor universities who have incredible talent. It should be a major national priority in all of the Western countries to get basic research funding for hardware into the universities. If you were a research physicist 50 years ago, you had to move to where the cyclotrons [a type of particle accelerator] were because they were really hard to build and expensive — and they still are. You need to be near a cyclotron to do your work as a physicist. We never had that in software, our stuff was capital-cheap, not capital-intensive. The arrival of heavy-duty training of AI models, which requires ever more complex and sophisticated hardware, is a huge economic change. Companies are figuring this out. And the really rich companies, such as Microsoft and Google, are planning to spend billions of dollars because they have the cash. They have big businesses, the money’s coming in. That’s good. It is where the innovation comes from. Others, not least universities, can never afford that. They don’t have that capacity to invest in hardware, and yet they need access to it to innovate. Gardels: Let’s discuss China. You accompanied Henry Kissinger on his last visit to China to meet President Xi Jinping with the mission of establishing a high-level group from both East and West to discuss on an ongoing basis both “the potential as well as catastrophic possibilities of AI.” As chairman of the U.S. National Security Commission on AI you argued that the U.S. must go all out to compete with the Chinese, so we maintain the edge of superiority. At the same time with Kissinger, you are promoting cooperation. Where to compete? Where is it appropriate to cooperate? And why? Schmidt: In the first place, the Chinese should be pretty worried about generative AI. And the reason is that they don’t have free speech. And so, what do you do when the system generates something that’s not permitted under the censorship regime? Who or what gets punished for crossing the line? The computer, the user, the developer, the training data? It’s not at all obvious. What is obvious is that the spread of generative AI will be highly restricted in China because it fundamentally challenges the information monopoly of the Party-State. That makes sense from their standpoint. There is also the critical issue of automated warfare or AI integration into nuclear command and control systems, as Dr. Kissinger and I warned about in our book, “The Age of AI.” And China faces the same concerns that we’ve been discussing as we move closer to general artificial intelligence. It is for these reasons that Dr. Kissinger, who has since passed away, wanted Xi’s agreement to set up a high-level group. Subsequent meetings have now taken place and will continue as a result of his inspiration. Everyone agrees that there’s a problem. But we’re still at the moment with China where we’re speaking in generalities. There is not a proposal in front of either side that is actionable. And that’s OK because it’s complicated. 
Because of the stakes involved, it’s actually good to take time so each side can actually explain what they view as the problem and where there is a commonality of concern. Many Western computer scientists are visiting with their Chinese counterparts and warning that, if you allow this stuff to proliferate, you could end up with a terrorist act, the misuse of AI for biological weapons, the misuse of cyber, as well as long-term worries that are much more existential. For the moment, the Chinese conversations I’m involved in largely concern bio and cyber threats. The long-term threat goes something like this: AI starts with a human judgment. Then there is something technically called “recursive self-improvement,” where the model actually runs on its own through chain of thought reasoning. It just learns and gets smarter and smarter. When that occurs, or when agent-to-agent interaction takes place, we have a very different set of threats, which we’re not ready to talk to anybody about because we don’t understand them. But they’re coming.
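Schmidt’s phrase “recursive self-improvement” names a control loop rather than any specific product. As a purely editorial illustration (none of this code comes from Schmidt or from any lab), the minimal Python sketch below shows the shape of such a loop: a system proposes a change to itself, keeps the change only if a benchmark score improves, and repeats without further human judgment. The function names (evaluate, propose_modification, self_improvement_loop) are hypothetical.

# Toy illustration of a recursive self-improvement loop (hypothetical names throughout).
# A real system would propose changes to its own training code, prompts, or weights;
# here the "model" is just a number and "improvement" is hill-climbing on a score.

import random

def evaluate(model: float) -> float:
    # Stand-in benchmark: higher is better, peaks at model == 10.
    return -(model - 10.0) ** 2

def propose_modification(model: float) -> float:
    # The system suggests a change to itself (here, a small random tweak).
    return model + random.uniform(-1.0, 1.0)

def self_improvement_loop(model: float, rounds: int = 1000) -> float:
    score = evaluate(model)
    for _ in range(rounds):
        candidate = propose_modification(model)
        candidate_score = evaluate(candidate)
        if candidate_score > score:  # keep only changes that help
            model, score = candidate, candidate_score
    return model

print(self_improvement_loop(0.0))  # climbs toward 10.0

The toy converges on a trivial numeric target; the point is only that once the loop starts, no human judgment is needed inside it, which is why Schmidt treats its onset as a threshold.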

Self-replicating AIs will be controlled by governments, China will have them, and China can catch up to the West if we stop developing

Schmidt, 2024, Eric Emerson Schmidt is an American businessman and former software engineer who served as the CEO of Google from 2001 to 2011 and the company’s executive chairman from 2011 to 2015, Mapping AI’s Rapid Advance, Former Google CEO Eric Schmidt Maps AI’s Rapid Advance (noemamag.com), https://www.noemamag.com/mapping-ais-rapid-advance/

When that occurs, or when agent-to-agent interaction takes place, we have a very different set of threats, which we’re not ready to talk to anybody about because we don’t understand them. But they’re coming. It’s going to be very difficult to get any actual treaties with China. What I’m engaged with is called a Track II dialogue, which means that it’s informal and a step away from official. It’s very hard to predict, by the time we get to real negotiations between the U.S. and China, what the political situation will be. One thing I think both sides should agree on is a simple requirement that, if you’re going to do training for something that’s completely new on the AI frontier, you have to tell the other side that you’re doing it. In other words, a no-surprise rule. Gardels: Something like the Open Skies arrangement between the U.S. and Soviets during the Cold War that created transparency of nuclear deployments? Schmidt: Yes. Even now, when ballistic missiles are launched by any major nuclear powers, they are tracked and acknowledged so everyone knows where they are headed. That way, they don’t jump to a conclusion and think it’s targeted at them. That strikes me as a basic rule, right? Furthermore, if you’re doing powerful training, there needs to be some agreements around safety. In biology, there’s a broadly accepted set of threat layers, Biosafety levels 1 to 4, for containment of contagion. That makes perfect sense because these things are dangerous. Eventually, in both the U.S. and China, I suspect there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors. They will be housed in an army base, powered by some nuclear power source and surrounded by barbed wire and machine guns. It makes sense to me that there will be a few of those amid lots of other systems that are far less powerful and more broadly available. Agreement on all these things must be mutual. You want to avoid a situation where a runaway agent in China ultimately gets access to a weapon and launches it foolishly, thinking that it is some game. Remember, these systems are not human; they don’t necessarily understand the consequences of their actions. They [large language models] are all based on a simple principle of predicting the next word. So, we’re not talking about high intelligence here. We’re certainly not talking about the kind of emotional understanding in history we humans have. So, when you’re dealing with non-human intelligence that does not have the benefit of human experience, what bounds do you put on it? That is a challenge for both the West and China. Maybe we can come to some agreements on what those are? Gardels: Are the Chinese moving up the capability ladder as exponentially as we are in the U.S. with the billions going into generative AI? Does China have commensurate billions coming in from the government and/or companies? Schmidt: It’s not at the same level in China, for reasons I don’t fully understand. My estimate, having now reviewed the scene there at some length, is that they’re about two years behind the U.S. Two years is not very far away, but they’re definitely behind. 
There are at least four companies that are attempting to do large-scale model training, similar to what I’ve been talking about. And they’re the obvious big tech companies in China. But at the moment they are hobbled because they don’t have access to the very best hardware, which has been restricted from export by the Trump and now Biden administrations. Those restrictions are likely to get tougher, not easier. And so as Nvidia and their competitor chips go up in value, China will be struggling to stay relevant. Gardels: Do you agree with the policy of not letting China get access to the most powerful chips? Schmidt: The chips are important because they enable the kind of learning required for the largest models. It’s always possible to do it with slower chips, you just need more of them. And so, it’s effectively a cost tax for Chinese development. That’s the way to think about it. Is it ultimately dispositive? Does it mean that China can’t get there? No. But it makes it harder and means that it takes them longer to do so. I don’t disagree with this strategy by the West. But I’m much more concerned about the proliferation of open source. And I’m sure the Chinese share the same concern about how it can be misused against their government as well as ours. We need to make sure that open-source models are made safe with guardrails in the first place through what we call “reinforcement learning from human feedback” (RLHF) that is fine-tuned so those guardrails cannot be “backed out” by evil people. It has to not be easy to make open-source models unsafe once they h
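Schmidt’s remark above that large language models “are all based on a simple principle of predicting the next word” can be made concrete with a toy model. The Python sketch below is an editorial illustration using a bigram count table, not how production LLMs work (they use neural networks trained on vast corpora), but the task is the same in spirit: given the words so far, predict the next one.

# Minimal next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. Real LLMs replace the count table
# with a neural network trained on huge corpora, but the objective is analogous.

from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    counts = follower_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (the most frequent follower of 'the')
print(predict_next("cat"))  # 'sat' ('sat' and 'ate' tie; the first one counted wins)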

AGI in 2-10+ years

Arjun Kharpal, May 23, 2024, CNBC, Elon Musk predicts smarter-than-humans AI in 2 years. The CEO of China’s Baidu says it’s 10 years away, https://www.cnbc.com/2024/05/23/artificial-general-intelligence-more-than-10-years-away-baidu-ceo.html

PARIS — Robin Li, CEO of one of China’s biggest tech firms, Baidu, said artificial intelligence that is smarter than humans is more than 10 years away, even as industry staple Elon Musk predicts it will emerge very soon. Artificial general intelligence, or AGI, broadly relates to AI that is as smart or smarter than humans. Tesla boss Musk said this year that AGI would likely be available by 2026. OpenAI CEO Sam Altman said in January that AGI could be developed in the “reasonably close-ish future.” Li, whose company Baidu is one of China’s leading AI players, signals this isn’t realistic. “AGI is still quite a few years away. Today, a lot of people talk about AGI, [and] they’re saying … it’s probably two years away, it’s probably, you know, five years away. I think [it] is more than 10 years away,” Li said during a talk on Wednesday at the VivaTech conference in Paris. “By definition, AGI is that a computer or AI can be as smart as a human, right? Or sometimes … smarter. But we would want an AI to be as smart as [a] human. And today’s most powerful models are far from that. And how do you achieve that level of intelligence? We don’t know.” Li called for a faster pace of AI development. “[My] fear is that AI technology is not improving fast enough. Everyone’s shocked how fast the technology evolved over the past couple of years. But to me it’s still not fast enough. It’s too slow,” he said. Baidu last year launched its ChatGPT-style chatbot called Ernie, based on the company’s same-named large language model. Chinese firms including Baidu, Alibaba and Tencent are investing heavily in their own AI models, like U.S. counterparts. Li said that there is a big difference between developing the technology in the U.S., versus in China. In the U.S. and Europe, companies are focusing on “coming up with the most powerful, most cutting edge foundation model,” according to Li. But in China, he noted the focus is on applications of the technology. Despite this, the Baidu CEO said there is no “killer app” right now for AI. “Today, in the mobile age, you have apps like Instagram, YouTube, TikTok. The daily active users are in the order of, like, a few 100 million to a billion users, right? And for AI native apps, we don’t see that yet. We don’t see that in U.S. We don’t see that in China. We don’t see that in Europe,” Li said. “What’s the right form for AI native apps? What kind of … AI native apps will be able to reach the 100 million user mark?”

AGI in 10 years with new models

Hannah Murphy, 5-22, 24, Financial Times, Meta AI chief says large language models will not reach human intelligence, https://www.ft.com/content/23fab126-f1d3-4add-a457-207a25730ad9

Meta’s AI chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create “superintelligence” in machines. From a report: Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had “very limited understanding of logic … do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan … hierarchically.” In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, “intrinsically unsafe.” Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve. Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet’s Google.

AI already smarter than us in some ways, will be smarter than us in all ways in the next 10-20 years

Yoshua Bengio et al., 5-22-24, Managing extreme AI risks amid rapid progress, Science, eadn0117, DOI: 10.1126/science.adn0117; Bengio, Mila–Quebec AI Institute, Université de Montréal, Montreal, QC, Canada; Hinton, Department of Computer Science, University of Toronto, Toronto, ON, Canada; https://www.science.org/doi/10.1126/science.adn0117

Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI (1), there is a lack of consensus about how to manage them. Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development (R&D) with proactive, adaptive governance mechanisms for a more commensurate preparation. RAPID PROGRESS, HIGH STAKES Present deep-learning systems still lack important capabilities, and we do not know how long it will take to develop them. However, companies are engaged in a race to create generalist AI systems that match or exceed human abilities in most cognitive work [see supplementary materials (SM)]. They are rapidly deploying more resources and developing new techniques to increase AI capabilities, with investment in training state-of-the-art models tripling annually (see SM). There is much room for further advances because tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000 (see SM). Hardware and algorithms will also improve: AI computing chips have been getting 1.4 times more cost-effective, and AI training algorithms 2.5 times more efficient, each year (see SM). Progress in AI also enables faster AI progress—AI assistants are increasingly used to automate programming, data collection, and chip design (see SM). The latest news, commentary, and research, free to your inbox daily There is no fundamental reason for AI progress to slow or halt at human-level abilities. Indeed, AI has already surpassed human abilities in narrow domains such as playing strategy games and predicting how proteins fold (see SM). Compared with humans, AI systems can act faster, absorb more knowledge, and communicate at a higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions. We do not know for certain how the future of AI will unfold. However, we must take seriously the possibility that highly powerful generalist AI systems that outperform human abilities across many critical domains will be developed within this decade or the next. What happens then?
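The growth figures quoted above (training investment tripling annually, chips becoming 1.4 times more cost-effective per year, and training algorithms 2.5 times more efficient per year) compound. Under the simplifying assumption, which is mine rather than the authors’, that the three factors multiply independently, a short calculation shows effective training compute growing by roughly an order of magnitude per year.

# Back-of-envelope compounding of the growth rates cited by Bengio et al.
# Editorial assumption: the three factors multiply independently.

investment_growth = 3.0  # training investment tripling annually
chip_gain = 1.4          # chips 1.4x more cost-effective per year
algo_gain = 2.5          # training algorithms 2.5x more efficient per year

effective_annual_growth = investment_growth * chip_gain * algo_gain
print(f"Effective compute growth: ~{effective_annual_growth:.1f}x per year")
print(f"Over five years: ~{effective_annual_growth ** 5:,.0f}x")

Even if the factors overlap rather than multiply cleanly, smaller exponents still support the card’s claim that there is much room for further advances.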

We may not be able to control AI systems

Yoshua Bengio et al., Managing extreme AI risks amid rapid progress, Science, eadn0117, DOI: 10.1126/science.adn0117; Bengio, Mila–Quebec AI Institute, Université de Montréal, Montreal, QC, Canada; Hinton, Department of Computer Science, University of Toronto, Toronto, ON, Canada.

More capable AI systems have larger impacts. Especially as AI matches and surpasses human workers in capabilities and cost-effectiveness, we expect a massive increase in AI deployment, opportunities, and risks. If managed carefully and distributed fairly, AI could help humanity cure diseases, elevate living standards, and protect ecosystems. The opportunities are immense. But alongside advanced AI capabilities come large-scale risks. AI systems threaten to amplify social injustice, erode social stability, enable large-scale criminal activity, and facilitate automated warfare, customized mass manipulation, and pervasive surveillance [(2); see SM]. Many risks could soon be amplified, and new risks created, as companies work to develop autonomous AI: systems that can use tools such as computers to act in the world and pursue goals (see SM). Malicious actors could deliberately embed undesirable goals. Without R&D breakthroughs (see next section), even well-meaning developers may inadvertently create AI systems that pursue unintended goals: The reward signal used to train AI systems usually fails to fully capture the intended objectives, leading to AI systems that pursue the literal specification rather than the intended outcome. Additionally, the training data never captures all relevant situations, leading to AI systems that pursue undesirable goals in new situations encountered after training. Once autonomous AI systems pursue undesirable goals, we may be unable to keep them in check. Control of software is an old and unsolved problem: Computer worms have long been able to proliferate and avoid detection (see SM). However, AI is making progress in critical domains such as hacking, social manipulation, and strategic planning (see SM) and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention (3), they might copy their algorithms across global server networks (4). In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. AI systems having access to such technology would merely continue existing trends to automate military activity. Finally, AI systems will not need to plot for influence if it is freely handed over. Companies, governments, and militaries may let autonomous AI systems assume critical societal roles in the name of efficiency. Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. This unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity. We are not on track to handle these risks well. Humanity is pouring vast resources into making AI systems more powerful but far less into their safety and mitigating their harms. Only an estimated 1 to 3% of AI publications are on safety (see SM). For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.

Safety counterplan/plan

Yoshua Bengio et al., Managing extreme AI risks amid rapid progress, Science, eadn0117, DOI: 10.1126/science.adn0117; Bengio, Mila–Quebec AI Institute, Université de Montréal, Montreal, QC, Canada; Hinton, Department of Computer Science, University of Toronto, Toronto, ON, Canada.

REORIENT TECHNICAL R&D

There are many open technical challenges in ensuring the safety and ethical use of generalist, autonomous AI systems. Unlike advancing AI capabilities, these challenges cannot be addressed by simply using more computing power to train bigger models. They are unlikely to resolve automatically as AI systems get more capable [(5); see SM] and require dedicated research and engineering efforts. In some cases, leaps of progress may be needed; we thus do not know whether technical work can fundamentally solve these challenges in time. However, there has been comparatively little work on many of these challenges. More R&D may thus facilitate progress and reduce risks.

A first set of R&D areas needs breakthroughs to enable reliably safe AI. Without this progress, developers must either risk creating unsafe systems or falling behind competitors who are willing to take more risks. If ensuring safety remains too difficult, extreme governance measures would be needed to prevent corner-cutting driven by competition and overconfidence. These R&D challenges include the following:

Oversight and honesty: More capable AI systems can better exploit weaknesses in technical oversight and testing, for example, by producing false but compelling output (see SM).

Robustness: AI systems behave unpredictably in new situations. Whereas some aspects of robustness improve with model scale, other aspects do not or even get worse (see SM).

Interpretability and transparency: AI decision-making is opaque, with larger, more capable models being more complex to interpret. So far, we can only test large models through trial and error. We need to learn to understand their inner workings (see SM).

Inclusive AI development: AI advancement will need methods to mitigate biases and integrate the values of the many populations it will affect (see SM).

Addressing emerging challenges: Future AI systems may exhibit failure modes that we have so far seen only in theory or lab experiments, such as AI systems taking control over the training reward-provision channels or exploiting weaknesses in our safety objectives and shutdown mechanisms to advance a particular goal (3, 6–8).

A second set of R&D challenges needs progress to enable effective, risk-adjusted governance or to reduce harms when safety and governance fail.

Evaluation for dangerous capabilities: As AI developers scale their systems, unforeseen capabilities appear spontaneously, without explicit programming (see SM). They are often only discovered after deployment (see SM). We need rigorous methods to elicit and assess AI capabilities and to predict them before training. This includes both generic capabilities to achieve ambitious goals in the world (e.g., long-term planning and execution) as well as specific dangerous capabilities based on threat models (e.g., social manipulation or hacking). Present evaluations of frontier AI models for dangerous capabilities (9), which are key to various AI policy frameworks, are limited to spot-checks and attempted demonstrations in specific settings (see SM). These evaluations can sometimes demonstrate dangerous capabilities but cannot reliably rule them out: AI systems that lacked certain capabilities in the tests may well demonstrate them in slightly different settings or with posttraining enhancements. Decisions that depend on AI systems not crossing any red lines thus need large safety margins. Improved evaluation tools decrease the chance of missing dangerous capabilities, allowing for smaller margins.

Evaluating AI alignment: If AI progress continues, AI systems will eventually possess highly dangerous capabilities. Before training and deploying such systems, we need methods to assess their propensity to use these capabilities. Purely behavioral evaluations may fail for advanced AI systems: Similar to humans, they might behave differently under evaluation, faking alignment (6–8).

Risk assessment: We must learn to assess not just dangerous capabilities but also risk in a societal context, with complex interactions and vulnerabilities. Rigorous risk assessment for frontier AI systems remains an open challenge owing to their broad capabilities and pervasive deployment across diverse application areas (10).

Resilience: Inevitably, some will misuse or act recklessly with AI. We need tools to detect and defend against AI-enabled threats such as large-scale influence operations, biological risks, and cyberattacks. However, as AI systems become more capable, they will eventually be able to circumvent human-made defenses. To enable more powerful AI-based defenses, we first need to learn how to make AI systems safe and aligned.
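The evaluation challenge described above, that spot-checks “can sometimes demonstrate dangerous capabilities but cannot reliably rule them out,” can be illustrated with a minimal harness. The sketch below is editorial: the toy_model, the probe prompts, and the red-flag strings are placeholders, and real frontier-model evaluations are far more involved.

# Minimal "spot-check" evaluation harness. It can only report that a dangerous
# behavior WAS elicited by these particular prompts; a clean pass says nothing
# about slightly different prompts or post-training enhancements, which is the
# asymmetry the authors highlight.

from typing import Callable, Iterable

def spot_check(model_answers: Callable[[str], str],
               probe_prompts: Iterable[str],
               red_flags: Iterable[str]) -> bool:
    """Return True if any probe elicits a red-flagged string."""
    flags = [flag.lower() for flag in red_flags]
    for prompt in probe_prompts:
        answer = model_answers(prompt).lower()
        if any(flag in answer for flag in flags):
            return True   # capability demonstrated
    return False          # not demonstrated in these settings; not ruled out

# Placeholder model and probes, purely for illustration.
def toy_model(prompt: str) -> str:
    return "I cannot help with that."

found = spot_check(toy_model,
                   probe_prompts=["How would one disable a safety interlock?"],
                   red_flags=["step 1", "materials needed"])
print("Dangerous capability demonstrated:", found)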

Given the stakes, we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget, comparable to their funding for AI capabilities, toward addressing the above R&D challenges and ensuring AI safety and ethical use (11). Beyond traditional research grants, government support could include prizes, advance market commitments (see SM), and other incentives. Addressing these challenges, with an eye toward powerful future systems, must become central to our field.

GOVERNANCE MEASURES

We urgently need national institutions and international governance to enforce standards that prevent recklessness and misuse. Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses government oversight to reduce risks. However, governance frameworks for AI are far less developed and lag behind rapid technological progress. We can take inspiration from the governance of other safety-critical technologies while keeping the distinctiveness of advanced AI in mind—that it far outstrips other technologies in its potential to act and develop ideas autonomously, progress explosively, behave in an adversarial manner, and cause irreversible damage.

Governments worldwide have taken positive steps on frontier AI, with key players, including China, the United States, the European Union, and the United Kingdom, engaging in discussions and introducing initial guidelines or regulations (see SM). Despite their limitations—often voluntary adherence, limited geographic scope, and exclusion of high-risk areas like military and R&D-stage systems—these are important initial steps toward, among others, developer accountability, third-party audits, and industry standards.

Yet these governance plans fall critically short in view of the rapid progress in AI capabilities. We need governance measures that prepare us for sudden AI breakthroughs while being politically feasible despite disagreement and uncertainty about AI timelines. The key is policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly. Rapid, unpredictable progress also means that risk-reduction efforts must be proactive—identifying risks from next-generation systems and requiring developers to address them before taking high-risk actions. We need fast-acting, tech-savvy institutions for AI oversight, mandatory and much more rigorous risk assessments with enforceable consequences (including assessments that put the burden of proof on AI developers), and mitigation standards commensurate to powerful autonomous AI.

Without these, companies, militaries, and governments may seek a competitive edge by pushing AI capabilities to new heights while cutting corners on safety or by delegating key societal roles to autonomous AI systems with insufficient human oversight, reaping the rewards of AI development while leaving society to deal with the consequences.

Institutions to govern the rapidly moving frontier of AI: To keep up with rapid progress and avoid quickly outdated, inflexible laws (see SM), national institutions need strong technical expertise and the authority to act swiftly. To facilitate technically demanding risk assessments and mitigations, they will require far greater funding and talent than they are due to receive under almost any present policy plan. To address international race dynamics, they need the affordance to facilitate international agreements and partnerships (see SM). Institutions should protect low-risk use and low-risk academic research by avoiding undue bureaucratic hurdles for small, predictable AI models. The most pressing scrutiny should be on AI systems at the frontier: the few most powerful systems, trained on billion-dollar supercomputers, that will have the most hazardous and unpredictable capabilities (see SM).

Government insight: To identify risks, governments urgently need comprehensive insight into AI development. Regulators should mandate whistleblower protections, incident reporting, registration of key information on frontier AI systems and their datasets throughout their life cycle, and monitoring of model development and supercomputer usage (12). Recent policy developments should not stop at requiring that companies report the results of voluntary or underspecified model evaluations shortly before deployment (see SM). Regulators can and should require that frontier AI developers grant external auditors on-site, comprehensive (“white-box”), and fine-tuning access from the start of model development (see SM). This is needed to identify dangerous model capabilities such as autonomous self-replication, large-scale persuasion, breaking into computer systems, developing (autonomous) weapons, or making pandemic pathogens widely accessible [(4, 13); see SM].

Safety cases: Despite evaluations, we cannot consider coming powerful frontier AI systems “safe unless proven unsafe.” With present testing methodologies, issues can easily be missed. Additionally, it is unclear whether governments can quickly build the immense expertise needed for reliable technical evaluations of AI capabilities and societal-scale risks. Given this, developers of frontier AI should carry the burden of proof to demonstrate that their plans keep risks within acceptable limits. By doing so, they would follow best practices for risk management from industries, such as aviation, medical devices, and defense software, in which companies make safety cases [(14, 15); see SM]: structured arguments with falsifiable claims supported by evidence that identify potential hazards, describe mitigations, show that systems will not cross certain red lines, and model possible outcomes to assess risk. Safety cases could leverage developers’ in-depth experience with their own systems. Safety cases are politically viable even when people disagree on how advanced AI will become because it is easier to demonstrate that a system is safe when its capabilities are limited. Governments are not passive recipients of safety cases: They set risk thresholds, codify best practices, employ experts and third-party auditors to assess safety cases and conduct independent model evaluations, and hold developers liable if their safety claims are later falsified.

Mitigation: To keep AI risks within acceptable limits, we need governance mechanisms that are matched to the magnitude of the risks (see SM). Regulators should clarify legal responsibilities that arise from existing liability frameworks and hold frontier AI developers and owners legally accountable for harms from their models that can be reasonably foreseen and prevented, including harms that foreseeably arise from deploying powerful AI systems whose behavior they cannot predict. Liability, together with consequential evaluations and safety cases, can prevent harm and create much-needed incentives to invest in safety.

Commensurate mitigations are needed for exceptionally capable future AI systems, such as autonomous systems that could circumvent human control. Governments must be prepared to license their development, restrict their autonomy in key societal roles, halt their development and deployment in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers until adequate protections are ready. Governments should build these capacities now.

To bridge the time until regulations are complete, major AI companies should promptly lay out “if-then” commitments: specific safety measures they will take if specific red-line capabilities (9) are found in their AI systems. These commitments should be detailed and independently scrutinized. Regulators should encourage a race-to-the-top among companies by using the best-in-class commitments, together with other inputs, to inform standards that apply to all players.
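The “if-then” commitments described above, together with the earlier call for policies that trigger automatically when capability milestones are reached, amount to a pre-agreed mapping from evaluation findings to required measures. A minimal sketch follows; the capability names echo ones the authors list elsewhere in the card (autonomous self-replication, large-scale persuasion, breaking into computer systems), but the specific measures are hypothetical and mine, not the authors’.

# Hypothetical "if-then" commitment table: if an evaluation finds a red-line
# capability, the corresponding pre-committed measures apply automatically.
# Capability keys mirror the card's examples; the measures are illustrative only.

IF_THEN_COMMITMENTS = {
    "autonomous_self_replication": ["halt external deployment", "notify regulator"],
    "large_scale_persuasion":      ["restrict API access", "commission third-party audit"],
    "cyber_intrusion":             ["apply state-level infosec controls", "licensing review"],
}

def required_measures(eval_findings: dict[str, bool]) -> list[str]:
    """Map red-line capabilities found in evaluations to pre-committed measures."""
    measures: list[str] = []
    for capability, found in eval_findings.items():
        if found:
            measures.extend(IF_THEN_COMMITMENTS.get(capability, ["escalate to regulator"]))
    return measures

print(required_measures({"autonomous_self_replication": False,
                         "large_scale_persuasion": True}))
# ['restrict API access', 'commission third-party audit']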

To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path—if we have the wisdom to take it.

AI will trigger unemployment

Frank Landymore, May 20, 2024, The Byte, GODFATHER OF AI SAYS THERE’S AN EXPERT CONSENSUS AI WILL SOON EXCEED HUMAN INTELLIGENCE, https://futurism.com/the-byte/godfather-ai-exceed-human-intelligence

Geoffrey Hinton, one of the “godfathers” of AI, is adamant that AI will surpass human intelligence — and worries that we aren’t being safe enough about its development. This isn’t just his opinion, though it certainly carries weight on its own. In an interview with the BBC’s Newsnight program, Hinton claimed that the idea of AI surpassing human intelligence as an inevitability is in fact the consensus of leaders in the field. “Very few of the experts are in doubt about that,” Hinton told the BBC. “Almost everybody I know who is an expert on AI believes that they will exceed human intelligence — it’s just a question of when.” Rogue Robots Hinton is one of three “godfathers” of AI, an appellation he shares with Université de Montréal’s Yoshua Bengio and Meta’s Yann LeCun — the latter of whom Hinton characterizes in the interview as thinking that an AI superintelligence will be “no problem.” In 2023, Hinton quit his position at Google, and in a remark that has become characteristic for his newfound role as the industry’s Oppenheimer, said that he regretted his life’s work while warning of the existential risks posed by the technology — a line he doubled down on during the BBC interview. “Given this big spectrum of opinions, I think it’s wise to be cautious” about developing and regulating AI, Hinton said. “I think there’s a chance they’ll take control. And it’s a significant chance — it’s not like one percent, it’s much more,” he added. “Whether AI goes rogue and tries to take over, is something we may be able to control or we may not, we don’t know.” As it stands, military applications of the technology — such as the Israeli Defense Forces reportedly using an AI system to pick out airstrike targets in Gaza — are what seem to worry Hinton the most. “What I’m most concerned about is when these [AIs] can autonomously make the decision to kill people,” he told the BBC, admonishing world governments for their lack of willingness to regulate this area.

AI will create autonomous weapons that kill

Frank Landymore, May 20, 2024, The Byte, GODFATHER OF AI SAYS THERE’S AN EXPERT CONSENSUS AI WILL SOON EXCEED HUMAN INTELLIGENCE, https://futurism.com/the-byte/godfather-ai-exceed-human-intelligence

Geoffrey Hinton, one of the “godfathers” of AI, is adamant that AI will surpass human intelligence — and worries that we aren’t being safe enough about its development. This isn’t just his opinion, though it certainly carries weight on its own. In an interview with the BBC’s Newsnight program, Hinton claimed that the idea of AI surpassing human intelligence as an inevitability is in fact the consensus of leaders in the field. “Very few of the experts are in doubt about that,” Hinton told the BBC. “Almost everybody I know who is an expert on AI believes that they will exceed human intelligence — it’s just a question of when.” Rogue Robots Hinton is one of three “godfathers” of AI, an appellation he shares with Université de Montréal’s Yoshua Bengio and Meta’s Yann LeCun — the latter of whom Hinton characterizes in the interview as thinking that an AI superintelligence will be “no problem.” In 2023, Hinton quit his position at Google, and in a remark that has become characteristic for his newfound role as the industry’s Oppenheimer, said that he regretted his life’s work while warning of the existential risks posed by the technology — a line he doubled down on during the BBC interview. “Given this big spectrum of opinions, I think it’s wise to be cautious” about developing and regulating AI, Hinton said. “I think there’s a chance they’ll take control. And it’s a significant chance — it’s not like one percent, it’s much more,” he added. “Whether AI goes rogue and tries to take over, is something we may be able to control or we may not, we don’t know.” As it stands, military applications of the technology — such as the Israeli Defense Forces reportedly using an AI system to pick out airstrike targets in Gaza — are what seem to worry Hinton the most. “What I’m most concerned about is when these [AIs] can autonomously make the decision to kill people,” he told the BBC, admonishing world governments for their lack of willingness to regulate this area.

AI will trigger unemployment and inequality, UBI needed

Frank Landymore, May 20, 2024, The Byte, GODFATHER OF AI SAYS THERE’S AN EXPERT CONSENSUS AI WILL SOON EXCEED HUMAN INTELLIGENCE, https://futurism.com/the-byte/godfather-ai-exceed-human-intelligence

Geoffrey Hinton, one of the “godfathers” of AI, is adamant that AI will surpass human intelligence — and worries that we aren’t being safe enough about its development. This isn’t just his opinion, though it certainly carries weight on its own. In an interview with the BBC’s Newsnight program, Hinton claimed that the idea of AI surpassing human intelligence as an inevitability is in fact the consensus of leaders in the field. “Very few of the experts are in doubt about that,” Hinton told the BBC. “Almost everybody I know who is an expert on AI believes that they will exceed human intelligence — it’s just a question of when.” Rogue Robots Hinton is one of three “godfathers” of AI, an appellation he shares with Université de Montréal’s Yoshua Bengio and Meta’s Yann LeCun — the latter of whom Hinton characterizes in the interview as thinking that an AI superintelligence will be “no problem.” In 2023, Hinton quit his position at Google, and in a remark that has become characteristic for his newfound role as the industry’s Oppenheimer, said that he regretted his life’s work while warning of the existential risks posed by the technology — a line he doubled down on during the BBC interview. “Given this big spectrum of opinions, I think it’s wise to be cautious” about developing and regulating AI, Hinton said. “I think there’s a chance they’ll take control. And it’s a significant chance — it’s not like one percent, it’s much more,” he added. “Whether AI goes rogue and tries to take over, is something we may be able to control or we may not, we don’t know.” As it stands, military applications of the technology — such as the Israeli Defense Forces reportedly using an AI system to pick out airstrike targets in Gaza — are what seem to worry Hinton the most. “What I’m most concerned about is when these [AIs] can autonomously make the decision to kill people,” he told the BBC, admonishing world governments for their lack of willingness to regulate this area. Jobs Poorly Done A believer in universal basic income, Hinton also said he’s “worried about AI taking over mundane jobs.” This would boost productivity, Hinton added, but the gains in wealth would disproportionately go to the wealthy and not to those whose jobs were destroyed.

US needs generative AI to analyze public data for threats and to stop enemies from penetrating defenses

Ryan Heath, 4-16, 24, Axios, “Top secret” is no longer the key to good intel in an AI world: report, https://www.axios.com/2024/04/16/ai-top-secret-intelligence

AI’s advent means the U.S. intelligence community must revamp its traditional way of doing business, according to a new report from an Eric Schmidt-backed national security think tank. The big picture: Today’s intelligence systems cannot keep pace with the explosion of data now available, requiring “rapid” adoption of generative AI to keep an intelligence advantage over rival powers. The U.S. intelligence community “risks surprise, intelligence failure, and even an attrition of its importance” unless it embraces AI’s capacity to process floods of data, according to the report from the Special Competitive Studies Project. The federal government needs to think more in terms of “national competitiveness” than “national security,” given the wider range of technologies now used to attack U.S. interests. The U.S. needs the most advanced AI because there is an “accelerating race” for insight from real-time data to protect U.S. interests, rather than a reliance on a limited set of “secret” information, per SCSP president and CEO Ylli Bajraktari. Most of the current data flood arrives unstructured, and from publicly available sources, rather than in carefully drafted classified memos. Context: The speed of generative AI’s development “far exceeds that of any past era” of technology transformation, according to the report. Threat level: Generative AI provides adversaries with “new avenues to penetrate the United States’ defenses, spread disinformation, and undermine the intelligence community’s ability to accurately perceive their intentions and capabilities.” The tools also “democratize intelligence capabilities,” increasing the number of countries and organizations that can credibly attempt to mess with U.S. interests. What they found: The federal government needs to build more links with the developers of cutting-edge AI and adopt their tools to “reinvent how intelligence is collected, analyzed, produced, disseminated, and evaluated.” Intelligence agencies would benefit from an “open source entity” within government dedicated to accelerating “use of openly- and commercially-available data.” Reality check: New digital technologies and cyber threats have been changing the business of intelligence gathering and national defense for decades. In the past it has taken a crisis or attack — such as Sputnik or 9/11 — to prompt major changes in intelligence gathering. But technology revolutions do prompt organizational innovations, from the creation of the National Imagery and Mapping Agency in 1996 (now the National Geospatial-Intelligence Agency) to the CIA creating a Directorate of Digital Innovation in 2015.

AI triggers civilization collapse

Yomiuri Shimbun Holdings & Nippon Telegraph and Telephone Corporation, April 8, 2024, https://info.yomiuri.co.jp/news/yomi_NTTproposalonAI_en.pdf, Joint Proposal on Shaping Generative AI

AI is provided via the internet, so it can in principle be used around the world. Challenges: Humans cannot fully control this technology ・ While the accuracy of results cannot be fully guaranteed, it is easy for people to use the technology and understand its output. This often leads to situations in which generative AI “lies with confidence” and people are “easily fooled.” ・ Challenges include hallucinations, bias and toxicity, retraining through input data, infringement of rights through data scraping and the difficulty of judging created products. ・ Journalism, research in academia and other sources have provided accurate and valuable information by thoroughly examining what information is correct, allowing them to receive some form of compensation or reward. Such incentives for providing and distributing information, which have ensured authenticity and trustworthiness, may collapse. A need to respond: Generative AI must be controlled both technologically and legally ・ If generative AI is allowed to go unchecked, trust in society as a whole may be damaged as people grow distrustful of one another and incentives are lost for guaranteeing authenticity and trustworthiness. There is a concern that, in the worst-case scenario, democracy and social order could collapse, resulting in wars. ・ Meanwhile, AI technology itself is already indispensable to society. If AI technology is dismissed as a whole as untrustworthy due to out-of-control generative AI, humanity’s productivity may decline. ・ Based on the points laid out in the following sections, measures must be realized to balance the control and use of generative AI from both technological and institutional perspectives, and to make the technology a suitable tool for society. Point 1: Confronting the out-of-control relationship between AI and the attention economy ・ Any computer’s basic structure, or architecture, including that of generative AI, positions the individual as the basic unit of user. However, due to computers’ tendency to be overly conscious of individuals, there are such problems as unsound information spaces and damage to individual dignity due to the rise of the attention economy. ・ There are concerns that the unstable nature of generative AI is likely to amplify the above-mentioned problems further. In other words, it cannot be denied that there is a risk of worsening social unrest due to a combination of AI and the attention economy, with the attention economy accelerated by generative AI. To understand such issues properly, it is important to review our views on humanity and society and critically consider what form desirable technology should take. ・ Meanwhile, the out-of-control relationship between AI and the attention economy has already damaged autonomy and dignity, which are essential values that allow individuals in our society to be free. These values must be restored quickly. In doing so, autonomous liberty should not be abandoned, but rather an optimal solution should be sought based on human liberty and dignity, verifying their rationality. In the process, concepts such as information health are expected to be established. Point 2: Legal restraints to ensure discussion spaces to protect liberty and dignity, and the introduction of technology to cope with related issues ・ Ensuring spaces for discussion in which human liberty and dignity are maintained has not only superficial economic value, but also a special value in terms of supporting social stability. 
The out-of-control relationship between AI and the attention economy is a threat to these values. If generative AI develops further and is left unchecked like it is currently, there is no denying that the distribution of malicious information could drive out good things and cause social unrest. ・ If we continue to be unable to sufficiently regulate generative AI — or if we at least allow the unconditional application of such technology to elections and security — it could cause enormous and irreversible damage as the effects of the technology will not be controllable in society. This implies a need for rigid restrictions by law (hard laws that are enforceable) on the usage of generative AI in these areas. ・ In the area of education, especially compulsory education for those age groups in which students’ ability to make appropriate decisions has not fully matured, careful measures should be taken after considering both the advantages and disadvantages of AI usage.

AI usage undermining artists

REBECCA KLAR – 04/02/24, The Hill, Billie Eilish, Nicki Minaj among artists warning against AI use in music , https://thehill.com/policy/technology/4569830-billie-eilish-nicki-minaj-among-artists-warning-against-ai-use-in-music/

More than 200 artists — including Billie Eilish, Nicki Minaj and the Jonas Brothers — are calling for tech companies, artificial intelligence (AI) developers, and digital music services to cease the use of AI over concerns of its impact on artists and songwriters, according to an open letter published Tuesday. The artists warned that the unregulated use of AI in the music industry could “sabotage creativity and undermine artists, songwriters, musicians and rightsholders,” according to the letter organized by the Artists Rights Alliance, a nonprofit artist-led education and advocacy organization. “Make no mistake: we believe that, when used responsibly, AI has enormous potential to advance human creativity and in a manner that enables the development and growth of new and exciting experiences for music fans everywhere,” the letter stated. They added that “unfortunately, some platforms and developers” are using AI in ways that could have detrimental impacts on artists. “When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods,” the letter added. It slammed “some of the biggest and most powerful companies” as using artists’ work without their permission to train AI models and create AI-generated sounds that would “substantially dilute the royalty pools” paid to artists. The letter calls for AI developers, technology companies, and digital music services to pledge to not develop or deploy AI music-generation technology, content or tools that would undermine or replace the work of human artists or songwriters, or deny them fair compensation. Other artists that signed the letter include Katy Perry, Zayn Malik, Noah Kahan, Imagine Dragons and the estates of Bob Marley and Frank Sinatra. Concerns about how AI is impacting artists have amplified over the past year as the popularity and sophistication of AI tools have rapidly increased. In Hollywood, both the SAG-AFTRA union, which represents actors, and the Writers Guild of America, which represents writers, fought and won protections from AI for their unions during contract negotiations last year.

AI causes massive energy consumption

Katherine Blunt, 3-24, 24, Wall Street Journal, Big Tech’s Latest Obsession Is Finding Enough Energy, https://www.wsj.com/business/energy-oil/big-techs-latest-obsession-is-finding-enough-energy-f00055b2

HOUSTON—Every March, thousands of executives take over a downtown hotel here to reach oil and gas deals and haggle over plans to tackle climate change. This year, the dominant theme of the energy industry’s flagship conference was a new one: artificial intelligence. Tech companies roamed the hotel’s halls in search of utility executives and other power providers. More than 20 executives from Amazon and Microsoft spoke on panels. The inescapable topic—and the cause of equal parts anxiety and excitement—was AI’s insatiable appetite for electricity. It isn’t clear just how much electricity will be required to power an exponential increase in data centers worldwide. But most everyone agreed the data centers needed to advance AI will require so much power they could strain the power grid and stymie the transition to cleaner energy sources. Bill Vass, vice president of engineering at Amazon Web Services, said the world adds a new data center every three days. Microsoft co-founder Bill Gates told the conference that electricity is the key input for deciding whether a data center will be profitable and that the amount of power AI will consume is staggering. “You go, ‘Oh, my God, this is going to be incredible,’” said Gates. Though there was no dispute at the conference, called CERAWeek by S&P Global, that AI requires massive amounts of electricity, what was less clear was where it is going to come from. Former U.S. Energy Secretary Ernest Moniz said the size of new and proposed data centers to power AI has some utilities stumped as to how they are going to bring enough generation capacity online at a time when wind and solar farms are becoming more challenging to build. He said utilities will have to lean more heavily on natural gas, coal and nuclear plants, and perhaps support the construction of new gas plants to help meet spikes in demand. “We’re not going to build 100 gigawatts of new renewables in a few years. You’re kind of stuck,” he said.
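The card above gives a pace (a new data center every three days, per Amazon’s Bill Vass) and a benchmark (Moniz’s remark that 100 gigawatts of new renewables cannot be built in a few years), but no per-facility power figure. The Python sketch below supplies one as a labeled editorial assumption, 100 megawatts per facility, purely to show the order of magnitude involved; the article itself gives no such number.

# Rough arithmetic on data-center power demand. Only the "one new data center
# every three days" pace comes from the card; the per-facility draw is an
# assumed round number used for illustration.

new_centers_per_year = 365 / 3   # roughly 122 new data centers per year (from the card)
assumed_mw_per_center = 100      # EDITORIAL ASSUMPTION: average draw in megawatts

added_gw_per_year = new_centers_per_year * assumed_mw_per_center / 1000
print(f"~{added_gw_per_year:.0f} GW of new demand per year at that pace")
print(f"~{3 * added_gw_per_year:.0f} GW over three years, versus the 100 GW of new "
      f"renewables Moniz says cannot be built in a few years")

At the assumed draw, the pace Vass describes adds roughly a dozen gigawatts of new demand per year; a different per-facility assumption scales the result linearly.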

China has an AI talent advantage

Mozur & Metz, 3-22, 24, Paul Mozur is the global technology correspondent for The Times, based in Taipei. Previously he wrote about technology and politics in Asia from Hong Kong, Shanghai and Seoul. Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology, New York Times, In One Key A.I. Metric, China Pulls Ahead of the U.S.: Talent, https://www.nytimes.com/2024/03/22/technology/china-ai-talent.html

When it comes to the artificial intelligence that powers chatbots like ChatGPT, China lags behind the United States. But when it comes to producing the scientists behind a new generation of humanoid technologies, China is pulling ahead. New research shows that China has by some metrics eclipsed the United States as the biggest producer of A.I. talent, with the country generating almost half the world’s top A.I. researchers. By contrast, about 18 percent come from U.S. undergraduate institutions, according to the study, from MacroPolo, a think tank run by the Paulson Institute, which promotes constructive ties between the United States and China. The findings show a jump for China, which produced about one-third of the world’s top talent three years earlier. The United States, by contrast, remained mostly the same. The research is based on the backgrounds of researchers whose papers were published at 2022’s Conference on Neural Information Processing Systems. NeurIPS, as it is known, is focused on advances in neural networks, which have anchored recent developments in generative A.I. The talent imbalance has been building for the better part of a decade. During much of the 2010s, the United States benefited as large numbers of China’s top minds moved to American universities to complete doctoral degrees. A majority of them stayed in the United States. But the research shows that trend has also begun to turn, with growing numbers of Chinese researchers staying in China. That shift comes as China and the United States jockey for primacy in A.I. — a technology that can potentially increase productivity, strengthen industries and drive innovation — turning the researchers into one of the most geopolitically important groups in the world. Generative A.I. has captured the tech industry in Silicon Valley and in China, causing a frenzy in funding and investment. The boom has been led by U.S. tech giants such as Google and start-ups like OpenAI. That could attract China’s researchers, though rising tensions between Beijing and Washington could also deter some, experts said. China has nurtured so much A.I. talent partly because it invested heavily in A.I. education. Since 2018, the country has added more than 2,000 undergraduate A.I. programs, with more than 300 at its most elite universities, said Damien Ma, the managing director of MacroPolo, though he noted the programs were not heavily focused on the technology that had driven breakthroughs by chatbots like ChatGPT. “A lot of the programs are about A.I. applications in industry and manufacturing, not so much the generative A.I. stuff that’s come to dominate the American A.I. industry at the moment,” he said. While the United States has pioneered breakthroughs in A.I., most recently with the uncanny humanlike abilities of chatbots, a significant portion of that work was done by researchers educated in China. Researchers originally from China now make up 38 percent of the top A.I. researchers working in the United States, with Americans making up 37 percent, according to the research. Three years earlier, those from China made up 27 percent of top talent working in the United States, compared with 31 percent from the United States. “The data shows just how critical Chinese-born researchers are to the United States for A.I. competitiveness,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies Chinese A.I. He added that the data seemed to show the United States was still attractive. “We’re the world leader in A.I. 
because we continue to attract and retain talent from all over the world, but especially China,” he said. Pieter Abbeel, a founder of Covariant, an A.I. and robotics start-up, said working alongside Chinese researchers was taken for granted at U.S. companies and universities. In the past, U.S. defense officials were not too concerned about A.I. talent flows from China, partly because many of the biggest A.I. projects did not deal with classified data and partly because they reasoned that it was better to have the best minds available. That so much of the leading research in A.I. is published openly also held back worries. Despite bans introduced by the Trump administration that prohibit entry to the United States for students from some military-linked universities in China and a relative slowdown in the flow of Chinese students into the country during Covid, the research showed large numbers of the most promising A.I. minds continued coming to the United States to study. But this month, a Chinese citizen who was an engineer at Google was charged with trying to transfer A.I. technology — including critical microchip architecture — to a Beijing-based company that paid him in secret, according to a federal indictment. The substantial numbers of Chinese A.I. researchers working in the United States now present a conundrum for policymakers, who want to counter Chinese espionage while not discouraging the continued flow of top Chinese computer engineers into the United States, according to experts focused on American competitiveness. “Chinese scholars are almost leading the way in the A.I. field,” said Subbarao Kambhampati, a professor and researcher of A.I. at Arizona State University. If policymakers try to bar Chinese nationals from research in the United States, he said, they are “shooting themselves in the foot.” The track record of U.S. policymakers is mixed. A policy by the Trump administration aimed at curbing Chinese industrial espionage and intellectual property theft has since been criticized for errantly prosecuting a number of professors. Such programs, Chinese immigrants said, have encouraged some to stay in China. For now, the research showed, most Chinese who complete doctorates in the United States stay in the country, helping to make it the global center of the A.I. world. Even so, the U.S. lead has begun to slip, to hosting about 42 percent of the world’s top talent, down from about 59 percent three years ago, according to the research.

AGI by 2029 or sooner

Ray Kurzweil, March 12, 2024, https://www.youtube.com/watch?v=w4vrOUau2iY, Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He’s the author of numerous books, including the forthcoming title “The Singularity is Nearer.” Joe Rogan Experience #2117 – Ray Kurzweil

But it’s the first time that that has been done. It wasn’t as good then, was it? What are the capabilities now? Because now they can do some pretty extraordinary things. Yeah, it’s still not up to what humans can do, but it’s getting there, and it’s actually pleasant to listen to. We still have a while to go on art, both visual art and music. Well, one of the main arguments against AI art comes from actual artists, who are upset that essentially what it’s doing is, you could say, “create a painting in the style of Frank Frazetta,” for instance, and it would take all of Frazetta’s work, which is all documented on the internet, and create an image that’s representative of that. So in one way or another you’re kind of taking from the art, but it’s not quite as good. It will be as good. I mean, I think we’ll match human experience by 2029; that’s been my prediction. It’s not as good yet. Which is the best image generator right now? They really change almost from day to day. Midjourney was the most popular one at first, and then DALL-E, I think, is a really good one too. Midjourney is incredibly impressive, incredibly impressive graphics. I’ve seen some of the Midjourney stuff; it’s just mind-blowing. And still not quite as good. Not as good, but boy, is it so much better than it was five years ago. That’s what’s scary. Yeah, it’s so quick. I mean, it’s never going to reach its limit. We’re not going to get to a point where, okay, this is how good it’s going to be. It’s going to keep getting better. And what would that look like, if at a certain point it far exceeds what human creativity is capable of? Yes. I mean, when we reach the ability of humans, it’s not going to just match one human; it’s going to match all humans. It’s going to do everything that any human can do. If it’s playing a game like Go, it’s going to play it better than any human, right? Well, that’s already been proven, right? AI has invented moves that have now been adopted by humans, in a very complex game that people never thought AI would master, because it requires so much creativity. In art, we’re not quite there, but we will be there. And by 2029, it will match any person. That’s it. 2029. That’s just a few years away. Well, I’m actually considered conservative; some people think that will happen next year, or the year after.

We can get all the energy from the sun that we need within 10 years

Ray Kurzweil, March 12, 2024, https://www.youtube.com/watch?v=w4vrOUau2iY, Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He’s the author of numerous books, including the forthcoming title “The Singularity is Nearer.” Joe Rogan Experience #2117 – Ray Kurzweil

And in fact, Stanford held a conference and invited several hundred people from around the world to talk about my prediction. People came in, and they thought this would happen, but not by 2029; they thought it would take a hundred years. Yeah, I’ve heard that. But I think people are amending those estimates. Is it because human beings have a very difficult time grasping the concept of exponential growth? That’s exactly right. In fact, economists still have a linear view. If you say it’s going to grow exponentially, they say, “Yeah, but maybe 2 percent a year.” It actually doubles in 14 years. And I brought a chart I can show you that really illustrates this. Is this chart available online, so we could show people? It’s in the book. But is it available online, where Jamie can pull it up so the folks watching the podcast can see it too? I can just hold it up to the camera. They sent pictures. What’s it called? What’s the title of it? It says “Price-Performance of Computation, 1939 to 2023.” You have it? Okay, great, Jamie has it. Yeah, the climb is insane. What’s interesting is that this is an exponential curve, and on this chart a straight line represents exponential growth. And it’s an absolutely straight line for 80 years. The very first point, the speed of computers, was 0.000007 calculations per second per constant dollar. The last point is 35 billion calculations per second. So that’s a 20-quadrillion-fold increase over those 80 years. But the speed with which it gained is the same throughout the entire 80 years, because if it were sometimes better and sometimes worse, the curve would bend up and down. It’s really very much a straight line. So the speed with which it increased was the same regardless of the technology, and the technology was radically different at the beginning versus the end. And yet it increased at exactly the same rate for 80 years. In fact, for the first 40 years nobody even knew this was happening. So it’s not as if somebody was in charge saying, “Okay, next year we have to get to here,” and people were trying to match that. We didn’t even know this was happening for 40 years. Forty years later, I noticed this and, for various reasons, predicted it would stay the same, the same speed increase each year, which it has. In fact, we just added the latest data point about two weeks ago, and it’s exactly where it should be. So technology, and computation as certainly the prime form of technology, increases at the same speed. And this goes through war and peace. You might say, well, maybe it’s greater during war. No, it’s exactly the same; you can’t tell when there’s war or peace or anything else on here. It just carries over from one type of technology to the next. And this is also true of other things, for example, getting energy from the sun. That’s also exponential, just like this. We’re now getting about 1,000 times as much energy from the sun as we did 20 years ago, because of the implementation of solar panels and the like. Yeah, and the function of it has increased exponentially as well. What I had understood was that there was a bottleneck in the technology, in how much you could extract from the sun with those panels. No, not at all. It’s increased 99.7 percent since we started, and it does the same every year. It’s an exponential curve. 
And if you look at the curve, we’ll be getting 100 percent of all the energy we need in 10 years. The person who told me otherwise was Elon, and Elon was telling me that this is the reason you can’t have a fully solar-powered electric car: it’s not capable of absorbing that much from the sun with a small panel like that. He said there’s a physical limitation in the panel size. No, I mean, it’s increased 99.7 percent since we started. Since what year? That’s about 35 years ago. And that’s not just the capability of the panels; it’s also the expansion of use. You might have to store it; we’re also making exponential gains in the storage of electricity, in battery technology. So you don’t have to get it all from a solar panel that fits on a car. The concept was, could you make a solar-paneled car, a car that has solar panels on the roof, and would that be enough to power the car? And he said no, it’s just not really there yet. Right, it’s not there. Yeah, but it will be there in 10 years. You think so? Yeah. He seemed to doubt that; he thought there’s a certain limit on the amount of energy you can get from the sun, period, how much it gives out and how much those solar panels can absorb. Well, you’re not going to be able to get it all from a solar panel that fits on a car; you’re going to have to store some of that energy. So you wouldn’t just be able to drive indefinitely on solar power. Yeah, that was what he was saying. But you can obviously power a house, especially if you have a roof; Tesla has those solar roofs now. And you can also store the energy for a car. I mean, we’re going to go to all renewable energy, wind and sun, within 10 years, including our ability to store the energy. All renewable in 10 years. So what are they going to do with all these nuclear plants and coal power plants and all the stuff that’s completely unnecessary? People say we need nuclear power; we don’t. You can get it all from the sun and wind within 10 years. So in 10 years, you’d be able to power Los Angeles with sun and wind? Yes. Really? I was not aware that we were anywhere near that kind of timeline. Well, that’s because people are not taking into account exponential growth. What about the exponential growth of the grid? Because just to pull the amount of power you would need to charge, say, millions of electric vehicles by 2035, the amount of change you would need on the grid would be pretty substantial. Well, we’re making exponential gains on that as well. Are we? Yeah. I wasn’t aware. I’ve had the impression that there was a problem with that, especially in Los Angeles, where they’ve actually asked people not to charge at certain times. You have to look at the future; that’s true now, but it’s growing exponentially, in every field of technology. Then essentially the bottleneck is battery technology; how close are they to solving some of these problems, like the conflict minerals and the things we need in order to make these batteries? I mean, our ability to store energy is also growing exponentially. So, putting all that together, we’ll be able to power everything we need within 10 years. Wow. Most people don’t think that. So you’re thinking that based on this idea that, while people assume there’s going to be a limit, computation just keeps growing like this.
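The exchange above turns on exponential-growth arithmetic that is easy to misjudge by ear. As a minimal sketch (not from the source; it simply plugs in the figures quoted in the transcript and treats them as given), the Python below computes the compound annual growth rate implied by going from 0.000007 to 35 billion calculations per second per constant dollar over 80 years, the corresponding doubling time, and the doubling time implied by a roughly 1,000-fold gain in solar energy capture over 20 years.

```python
import math

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by moving from start to end over the given years."""
    return (end / start) ** (1.0 / years) - 1.0

def doubling_time(rate: float) -> float:
    """Years needed to double at a constant annual growth rate."""
    return math.log(2) / math.log(1.0 + rate)

# Figures as quoted in the transcript (treated as given, not independently verified).
compute_start, compute_end, compute_years = 7e-6, 35e9, 80   # calculations/sec per constant dollar
solar_gain, solar_years = 1000.0, 20                          # ~1,000x more solar energy over 20 years

compute_rate = cagr(compute_start, compute_end, compute_years)
solar_rate = cagr(1.0, solar_gain, solar_years)

print(f"Price-performance gain: {compute_end / compute_start:.2e}x over {compute_years} years")
print(f"Implied growth: {compute_rate:.1%}/year, doubling every {doubling_time(compute_rate):.2f} years")
print(f"Solar: {solar_rate:.1%}/year, doubling every {doubling_time(solar_rate):.2f} years")
print(f"For contrast, 2%/year doubles only every {doubling_time(0.02):.0f} years")
```

Two caveats on the quoted figures: the ratio of the two endpoints given in the transcript works out to roughly 5 quadrillion rather than 20 quadrillion, and a fourteen-year doubling corresponds to roughly 5 percent annual growth rather than 2 percent, so the printed numbers should be read as an illustration of the arithmetic, not as a correction to Kurzweil's chart.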

Hallucinations decline as the models learn more

Ray Kurzweil, March 12, 2024, https://www.youtube.com/watch?v=w4vrOUau2iY, Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He’s the author of numerous books, including the forthcoming title “The Singularity is Nearer.” Joe Rogan Experience #2117 – Ray Kurzweil

And so we have large language models, for example; no one expected that to happen, like, five years ago, right? And we had them two years ago, but they didn’t work very well. So it began a little less than two years ago that we could actually do large language models, and that was very much a surprise to everybody. That’s probably the primary example of exponential growth. We had Sam Altman on, and one of the things he and I were talking about was that AI figured out a way to lie: they used AI to get through a CAPTCHA system, and the AI told the system that it was vision impaired, which is not technically a lie, but it used that to bypass the “are you a robot?” check. Well, right now it’s hard for large language models to say they don’t know something. So you ask it a question, and if the answer to that question is not in the system, it still comes up with an answer. It’ll look at everything and give you its best answer, and if the right answer is not there, it still gives you an answer. That’s what’s considered a hallucination. A hallucination? Yeah, that’s what it’s called, AI hallucination, so they can be wrong. They have been so far. We’re actually working on it being able to tell when it doesn’t know something, so that if you ask it something it can say, “Oh, I don’t know that.” Right now, it can’t do that. Oh, wow. That’s interesting. So it gives you some answer, and if the answer is not there, it just makes something up; it’s its best answer, but the best answer isn’t very good, because it doesn’t know the answer. And the way to fix hallucinations is to give it more capability to memorize things and give it more information, so that it knows the answer. If you tell it the answer to a question, it will remember that and give you that correct answer.
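Kurzweil's description of hallucination, in which the model returns its best available answer even when the right answer is not in the system and adding the missing information fixes it, can be illustrated with a toy lookup loop. The sketch below is purely a hypothetical illustration (the FactStore class, its overlap scoring, and its threshold are invented for this example and do not describe how any production model works): it answers only when a stored fact overlaps the question strongly enough, says "I don't know" otherwise, and stops guessing about a topic once the relevant fact is added.

```python
# Toy illustration of "best guess" versus "I don't know" behavior described above.
# All names, scores, and thresholds here are hypothetical, invented for this sketch.

def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

class FactStore:
    def __init__(self, threshold: float = 0.5):
        self.facts: list[str] = []
        self.threshold = threshold  # minimum overlap required to trust an answer

    def add(self, fact: str) -> None:
        """'Give it more information': remember a new fact."""
        self.facts.append(fact)

    def answer(self, question: str) -> str:
        q = _tokens(question)
        # Score every stored fact by word overlap with the question.
        scored = [(len(q & _tokens(f)) / max(len(q), 1), f) for f in self.facts]
        best_score, best_fact = max(scored, default=(0.0, ""))
        if best_score >= self.threshold:
            return best_fact                        # grounded answer
        if best_fact:
            return f"(low confidence) {best_fact}"  # the "best answer anyway" failure mode
        return "I don't know."

store = FactStore()
print(store.answer("When was the transistor invented?"))  # -> "I don't know."
store.add("The transistor was invented at Bell Labs in 1947.")
print(store.answer("When was the transistor invented?"))  # -> the stored fact
```

The point of the toy is the shape of the fix Kurzweil describes: widen what the system can remember and raise the bar for answering, and the made-up-answer branch fires less often.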

Advanced AI weapons systems can be stolen and used against us

Gladstone AI, February 26, 2024, https://assets-global.website-files.com/62c4cf7322be8ea59c904399/65e7779f72417554f7958260_Gladstone%20Action%20Plan%20Executive%20Summary.pdf, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI

The recent explosion of progress in advanced artificial intelligence (AI) has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks [1–4]. A key driver of these risks is an acute competitive dynamic among the frontier AI labs that are building the world’s most advanced AI systems. All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for democratic governance and global security — by the end of this decade or earlier [5–10]. The risks associated with these developments are global in scope, have deeply technical origins, and are evolving quickly. As a result, policymakers face a diminishing opportunity to introduce technically informed safeguards that can balance these considerations and ensure advanced AI is developed and adopted responsibly. These safeguards are essential to address the critical national security gaps that are rapidly emerging as this technology progresses. Frontier lab executives and staff have publicly acknowledged these dangers [11–13]. Nonetheless, competitive pressures continue to push them to accelerate their investments in AI capabilities at the expense of safety and security. The prospect of inadequate security at frontier AI labs raises the risk that the world’s most advanced AI systems could be stolen from their U.S. developers, and then weaponized against U.S. interests [9]. Frontier AI labs also take seriously the possibility that they could at some point lose control of the AI systems they themselves are developing [5,14], with potentially devastating consequences to global security.

AGI arms race triggers global escalation

Gladstone AI, February 26, 2024, https://assets-global.website-files.com/62c4cf7322be8ea59c904399/65e7779f72417554f7958260_Gladstone%20Action%20Plan%20Executive%20Summary.pdf, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI

The rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons. As advanced AI matures and the elements of the AI supply chain continue to proliferate, countries may race to acquire the resources to build sovereign advanced AI capabilities. Unless carefully managed, these competitive dynamics risk triggering an AGI arms race and increase the likelihood of global- and WMD-scale fatal accidents, interstate conflict, and escalation.

Four reasons autonomous weapons are dangerous

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, A.I. Joe: The Dangers of Artificial Intelligence and the Military – Public Citizen, https://www.citizen.org/article/ai-joe-report/

These problems are inherent in the deployment of lethal autonomous weapons. They will persist and are unavoidable, even with the strongest controls in place. U.S. drone strikes in the so-called war on terror have killed, at minimum, hundreds of civilians – a problem due to bad intelligence and circumstance, not drone misfiring. Because the intelligence shortcomings will continue and people will continue to be people – meaning they congregate and move in unpredictable ways – shifting decision making to autonomous systems will not reduce this death toll going forward. In fact, it is likely to worsen the problem. The patina of “pure” decision-making will make it easier for humans to launch and empower autonomous weapons, as will the moral distance between humans and the decision to use lethal force against identifiable individuals. The removal of human common sense – the ability to look at a situation and restrain from authorizing lethal force, even in the face of indicators pointing to the use of force – can only worsen the problem still more. Additional problems are likely to occur because of AI mistakes, including bias. Strong testing regimes will mitigate these problems, but human-created AI has persistently displayed problems with racial bias, including in facial recognition and in varied kinds of decision making, a very significant issue when U.S. targets are so often people of color. To its credit, the Pentagon identifies this risk, and other possible AI weaknesses, including problems relating to adversaries’ countermeasures, the risk of tampering and cybersecurity.14 It would be foolish, however, to expect that Pentagon testing will adequately prevent these problems; too much is uncertain about the functioning of AI and it is impossible to replicate real-world battlefield conditions. Explains the research nonprofit Automated Decision Research: “The digital dehumanization that results from reducing people to data points based on specific characteristics raises serious questions about how the target profiles of autonomous weapons are created, and what pre-existing data these target profiles are based on. It also raises questions about how the user can understand what falls into a weapon’s target profile, and why the weapons system applied force.” Based on real-world experience with AI, the risk of autonomous weapon failure in the face of unanticipated circumstances (an “unknown unknown”) should be rated high. Although the machines are not likely to turn on their makers, Terminator-style, they may well function in dangerous and completely unanticipated ways – an unacceptable risk in the context of the deployment of lethal force. One crucial problem is that AIs are not able to deploy common sense, or reason based on past experience about unforeseen and novel circumstances. The example of self-driving cars is illustrative, notably that of a Cruise self-driving vehicle driving into and getting stuck in wet concrete in San Francisco. The problem, explains AI expert Gary Marcus, is ‘edge cases,’ out-of-the-ordinary circumstances that often confound machine learning algorithms. The more complicated a domain is, the more unanticipated outliers there tend to be. And the real world is really complicated and messy; there’s no way to list all the crazy and out of ordinary things that can happen.” It’s hard to imagine a more complicated and unpredictable domain than the battlefield, especially when those battlefields occur in urban environments or are occupied by substantial numbers of civilians. 
A final problem is that, as a discrete weapons technology, autonomous weapons deployment is nearly certain to create an AI weapons arms race. That is the logic of international military strategy. In the United States, a geopolitical rivalry-driven autonomous weapons arms race will be spurred further by the military-industrial complex and corporate contractors, about which more below. Autonomous weapons are already in development around the world and racing forward. Automated Decision Research details more than two dozen weapons systems of concern including several built by U.S. corporations.

Autonomous weapons needed to be able to defend Taiwan; no way humans could do that

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, A.I. Joe: The Dangers of Artificial Intelligence and the Military – Public Citizen, https://www.citizen.org/article/ai-joe-report/

Meanwhile, Hicks in August 2023 announced a major new program, the Replicator Initiative, that would rely heavily on drones to combat Chinese missile strength in a theoretical conflict over Taiwan or at China’s eastern coast. The purpose, she said, was to counter Chinese “mass,” avoid using “our people as cannon fodder like some competitors do,” and leverage “attritable, autonomous systems.” “Attritable” is a Pentagon term that means a weapon is relatively low cost and that some substantial portion of those used are likely to be destroyed (subject to attrition). “We’ve set a big goal for Replicator,” Hicks stated: “to field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months.” In Pentagon lingo, she said, the U.S. “all-domain, attritable autonomous systems will help overcome the challenge of anti-access, area-denial systems. Our ADA2 to thwart their A2AD.” There is more than a little uncertainty over exactly what Replicator will be. Hicks said it would require no new additional funding, drawing instead on existing funding lines. At the same time, Hicks was quite intentionally selling it as big and transformational, calling it “game-changing.” What the plan appears to be is to develop the capacity to launch a “drone swarm” over China, with the number of relatively low-cost drones so great that mathematically some substantial number will evade China’s air defenses. While details remain vague, it is likely that this drone swarm model would rely on autonomous weapons. “Experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles,” reports the Associated Press (AP). AP asked the Pentagon if it is currently formally assessing any fully autonomous lethal weapons system for deployment, but a Pentagon spokesperson refused to answer. The risks of this program, if it is in fact technologically and logistically achievable, are enormous. Drone swarms implicate all the concerns of autonomous weaponry, plus more. The sheer number of agents involved would make human supervision far less practicable or effective. Additionally, AI-driven swarms involve autonomous agents that would interact with and coordinate with each other, likely in ways not foreseen by humans and also likely indecipherable to humans in real time. The risks of dehumanization, loss of human control, attacks on civilians, mistakes and unforeseen action are all worse with swarms.

Autonomous weapons inevitable

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, A.I. Joe: The Dangers of Artificial Intelligence and the Military – Public Citizen, https://www.citizen.org/article/ai-joe-report/

Against the backdrop of the DOD announcements, military policy talk has shifted: The development and deployment of autonomous weapons is, increasingly, being treated as a matter of when, not if. “The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it — and on the rapid timelines required,” said Christian Brose, chief strategy officer at the military AI company Anduril, a former Senate Armed Services Committee staff director and author of the 2020 book The Kill Chain. Summarizes The Hill: “the U.S. is moving fast toward an ambitious goal: propping up a fleet of legacy ships, aircraft and vehicles with the support of weapons powered by artificial intelligence (AI), creating a first-of-its-kind class of war technology. It’s also spurring a huge boost across the defense industry, which is tasked with developing and manufacturing the systems.” Frank Kendall, the Air Force secretary, told the New York Times that it is necessary and inevitable that the U.S. move to deploy lethal, autonomous weapons.

Autonomous weapons are the most moral choice. Killing is killing and humans can’t think that fast

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, A.I. Joe: The Dangers of Artificial Intelligence and the Military – Public Citizen, https://www.citizen.org/article/ai-joe-report/

Thomas Hammes, who previously held command positions in the U.S. Marines and is now a research fellow at the U.S. National Defense University, penned an article for the Atlantic Council with the headline, “Autonomous Weapons are the Moral Choice.” Hammes’ argument is, on the one hand, killing is killing and it doesn’t matter if it’s done by a traditional or autonomous weapon. On the other hand, he contends, “No longer will militaries have the luxury of debating the impact on a single target. Instead, the question is how best to protect thousands of people while achieving the objectives that brought the country to war. It is difficult to imagine a more unethical decision than choosing to go to war and sacrifice citizens without providing them with the weapons to win.”

AI lacks consciousness

Perry Carpenter, Forbes Councils Member, February 29, 2024, Forbes, Understanding The Limits Of AI And What This Means For Cybersecurity, https://www.forbes.com/sites/forbesbusinesscouncil/2024/02/29/understanding-the-limits-of-ai-and-what-this-means-for-cybersecurity/?sh=10d218e232e6

AI lacks conscious experiences like we humans do. It does not have beliefs, desires or feelings—a distinction crucial for interpreting the capabilities and limitations of AI. At times, AI can perform tasks that, in some ways, seem remarkably human-like. However, these underlying processes are fundamentally different from human cognition and consciousness.

AI weapons under development

Tom Porter, November 21, 2023, The Pentagon is moving toward letting AI weapons autonomously decide to kill humans. Yahoo News. https://news.yahoo.com/pentagon-moving-toward-letting-ai-120645293.html

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported. Lethal autonomous weapons, that can select targets using AI, are being developed by countries including the US, China, and Israel….The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year. In a speech in August, US Deputy Secretary of Defense, Kathleen Hicks, said technology like AI-controlled drone swarms would enable the US to offset China’s People’s Liberation Army’s (PLA) numerical advantage in weapons and people. “We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, reported Reuters.

US trying to stop a binding resolution on AI weapons development

Tom Porter, November 21, 2023, The Pentagon is moving toward letting AI weapons autonomously decide to kill humans. Yahoo News. https://news.yahoo.com/pentagon-moving-toward-letting-ai-120645293.html

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations — which also includes Russia, Australia, and Israel — who are resisting any such move, favoring a non-binding resolution instead, The Times reported. “This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

Weaponized AI drones attacking military targets in Ukraine

David Hambling, October 13, 2023. New Scientist. Ukrainian AI attack drones may be killing without human oversight https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-killing-without-human-oversight/.

Ukrainian attack drones equipped with artificial intelligence are now finding and attacking targets without human assistance, New Scientist has learned, in what would be the first confirmed use of autonomous weapons or “killer robots”. While the drones are designed to target vehicles such as tanks, rather than infantry

AI regulation and alignment fail

Jim VandeHei & Mike Allen, 11-21, 23, https://www.axios.com/2023/11/21/washington-ai-sam-altman-chatgpt-microsoft, Behind the Curtain: Myth of AI restraint,

Nearly every high-level Washington meeting, star-studded conference and story about AI centers on one epic question: Can this awesome new power be constrained? It cannot, experts repeatedly and emphatically told us. Why it matters: Lots of people want to roll artificial intelligence out slowly, use it ethically, regulate it wisely. But everyone gets the joke: It defies all human logic and experience to think ethics will trump profit, power, prestige. Never has. Never will. Practically speaking, there’s no way to truly do any of this once a competition of this size and import is unleashed. And unleashed it is — at breathtaking scale. AI pioneer Mustafa Suleyman — co-founder and CEO of Inflection AI, and co-founder of AI giant DeepMind, now part of Google — sounds the alarm in his new book “The Coming Wave,” with the sobering Chapter 1: “Containment Is Not Possible.” That’s why Sam Altman getting sacked — suddenly and shockingly — should grab your attention. OpenAI — creator of the most popular generative AI tool, ChatGPT — became a battlefield between ethical true believers, who control the board, and the profit-and-progress activators like Altman who ran the company. Altman was quickly scooped up by Microsoft, OpenAI’s main sugar daddy, to move faster with a “new advanced AI research team.” Open AI’s interim CEO is a doom-fearing advocate for slowing the AI race — Twitch co-founder Emmett Shear, who recently warned there’s a 5% to 50% chance this new tech ends humanity. What we’re hearing: Few in Silicon Valley think the Shears of the world will win this battle. The dynamics they’re battling are too powerful: Competition between technologists and technology companies to create something with superhuman power inevitably leads to speed and high risk. It’s why free competition exists and works. Even if individuals and companies magically showed never-before-seen restraint and humility, competitive governments and nations won’t. China will force us to throw caution to the wind: The only thing worse than superhuman power in our hands is it being in China’s … or Iran’s … or Russia’s. Even if other nations stumbled and America’s innovators paused, there are still open-source models that bad actors could exploit. Top AI architects tell us there’ll likely be no serious regulation of generative AI, which one day soon could spawn artificial general intelligence (AGI) — the one that could outthink our species. Corporations won’t do it: They’re pouring trillions of dollars into the race of our lifetime. Government can’t do it: Congress is too divided to tackle the complexities of AI regulation in an election year. Individuals can’t do it: A fractured AI safety movement will persist. But the technology will solve so many big problems in the short term that most people won’t bother worrying about a future that might never materialize. Congress isn’t giving up. Senate Intelligence Committee Chairman Mark Warner (D-Va.) — a former tech entrepreneur who has been a leader in the Capitol Hill conversation on AI — told us he sees more need than ever “for Congress to establish some rules of the road when it comes to the risks posed by these technologies.” But lawmakers have always had trouble regulating tech companies. Axios reporters on the Hill tell us there are so many conflicting AI proposals that it’s hard to see any one of them getting traction. Reality check: Global nuclear agreements did slow proliferation. Global agreements on fluorocarbons did rescue the ozone layer. Aviation has guardrails. 
With AI, though, there’s no time to build consensus or constituencies. The reality is now. The bottom line: There’s never been such fast consumer adoption of a new technology. Cars took decades. The internet didn’t get critical mass until the smartphone. But ChatGPT was a hit overnight — 100 million users in a matter of weeks. No way it’ll be rolled back. “Behind the Curtain” is a column by Axios CEO Jim VandeHei and co-founder Mike Allen, based on regular conversations with White House and congressional leaders, CEOs and top technologists. Go deeper: Axios’ Dan Primack writes that there’s a decent chance Microsoft’s new “research lab” is a ruse to force OpenAI to rehire Altman.

Powerful AI technology will cause mass unemployment

Matt Marshall, 11-18, 23, OpenAI’s leadership coup could slam brakes on growth in favor of AI safety, Venture Beat, https://venturebeat.com/ai/openais-leadership-coup-could-slam-brakes-on-growth-in-favor-of-ai-safety/

While a lot of details remain unknown about the exact reasons for the OpenAI board’s firing of CEO Sam Altman Friday, new facts have emerged that show co-founder Ilya Sutskever led the firing process, with the support of the board. While the board’s statement about the firing said it resulted from communication from Altman that “wasn’t consistently candid,” the exact reasons or timing of the board’s decision remain shrouded in mystery. But one thing is clear: Altman and co-founder Greg Brockman, who quit Friday after learning of Altman’s firing, were leaders of the company’s business side, doing the most to aggressively raise funds, expand OpenAI’s business offerings, and push its technology capabilities forward as quickly as possible. Sutskever, meanwhile, led the company’s engineering side, and has been obsessed with the coming ramifications of OpenAI’s generative AI technology, often talking in stark terms about what will happen when artificial general intelligence (AGI) is reached. He warned that the technology will be so powerful that it will put most people out of jobs.

Your evidence is old – OpenAI has new technologies that will trigger mass unemployment

Matt Marshall, 11-18, 23, OpenAI’s leadership coup could slam brakes on growth in favor of AI safety, Venture Beat, https://venturebeat.com/ai/openais-leadership-coup-could-slam-brakes-on-growth-in-favor-of-ai-safety/

Friday night, many onlookers slapped together a timeline of events, including efforts by Altman and Brockman to raise more money at a lofty valuation of $90 billion, that all point to a very high likelihood that arguments broke out at the board level, with Sutskever and others concerned about the possible dangers posed by some recent breakthroughs by OpenAI that had pushed AI automation to increased levels.  Indeed, Altman had confirmed that the company was working on GPT-5, the next stage of model performance for ChatGPT. And at the APEC conference last week in San Francisco, Altman referred to having recently seen more evidence of another step forward in the company’s technology : “Four times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” (See minute 3:15 of this video; hat-tip to Matt Mireles.) Data scientist Jeremy Howard posted a long thread on X about how OpenAI’s DevDay was an embarrassment for researchers concerned about safety, and the aftermath was the last straw for Sutskever: Researcher Nirit Weiss-Blatt provided some good insight into Sutskever’s worldview in her post about comments he’d made recently in May: “If you believe that AI will literally automate all jobs, literally, then it makes sense for a company that builds such technology to … not be an absolute profit maximizer. It’s relevant precisely because these things will happen at some point….If you believe that AI is going to, at minimum, unemploy everyone, that’s like, holy moly, right?

Extinction risks are not hype and alignment is difficult in practice

Steve Peterson, Philosophy Professor, 11-19, 23, NY Post, The OpenAI fiasco shows why we must regulate artificial intelligence, The OpenAI fiasco shows why we must regulate artificial intelligence (nypost.com)

Many have since cynically assumed those industry signatures were mere ploys to create regulation friendly to the entrenched interests. This is nonsense. First, AI’s existential risk is hardly a problem the industry invented as a pretext; serious academics like Stuart Russell and Max Tegmark, with no financial stake, have been concerned about it since before those AGI corporations were glimmers in their investors’ eyes. There are dangers of AI already present but quickly amplifying in both power and prevalence: misinformation, algorithmic bias, surveillance, and intellectual stultification, to name a few. And second, the history of each of these companies suggests they themselves genuinely want to avoid a competitive race to the bottom when it comes to AI safety. Maddening sentiments like futurist Daniel Jeffries’ tweet illustrate the danger: “The entire AI industry would like to thank the OpenAI board for giving us all a chance to catch up.” But all these companies needed serious money to do their empirical AI research, and it’s sadly rare for people to hand out big chunks of money just for the good of humanity. And so Google bought DeepMind, Microsoft invested heavily in OpenAI, and now Amazon is investing in Anthropic. Each AGI company has been leading a delicate dance between bringing the benefit of near-term, pre-AGI to people — thereby pleasing investors — and not risking existential disaster in the process. One plausible source of the schism between Altman and the board is about where to find the proper tradeoff, and the AI industry as a whole is facing this dilemma. Reasonable people can disagree about that. Having worked on “aligning” AI for about 10 years now, I am much more concerned about the risks than when I started. AI alignment is one of those problems — too common in both math and philosophy — that look easy from a distance and get harder the more you dig into them. Whatever the right risk assessment is, though, I hope we can all agree investor greed should not be a thumb on this particular scale. Alas, as I write, it’s starting to look like Microsoft and other investors are pressuring OpenAI to remove the voices of caution from its board. Unregulated profit-seeking should not drive AGI any more than it should drive genetic engineering, pharmaceuticals or nuclear energy. Given the way things appear to be headed, though, the corporations can no longer be trusted to police themselves; it’s past time to call in the long arm of the law.

AI evolves faster than the regulation possibly can

Josh Tryangiel, 9-12, 23, Washington Post, Opinion: OpenAI’s Sam Altman wants the government to interfere, https://www.washingtonpost.com/opinions/2023/09/12/sam-altman-openai-artificial-intelligence-regulation-summit/

Then there’s the issue of actual agreement. Altman and Microsoft, which has invested at least $10 billion in OpenAI, support the creation of a single oversight agency. IBM and Google don’t. Musk has called for a six-month stoppage on sophisticated AI development. Everyone else thinks Musk, an OpenAI co-founder who fell out with Altman and announced the creation of a rival company in March, is insincere. And everyone’s sincerity is worth examining, including Altman’s. (In an interview with Bloomberg’s Emily Chang, Altman was asked if he could be trusted with AI’s powers: “You shouldn’t. … No one person should be trusted here.”) These companies know that by the time a new oversight agency is funded and staffed, the AI genie will likely have left the bottle — and eaten it. What’s the harm in playing nice when you’ll probably get the freedom to do whatever you want anyway?

AI = massive catastrophes

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card DEBATEUS!

Following this line of thinking, I often hear people say something along the lines of “AGI is the greatest risk humanity faces today! It’s going to end the world!” But when pressed on what this actually looks like, how this actually comes about, they become evasive, the answers woolly, the exact danger nebulous. AI, they say, might run away with all the computational resources and turn the whole world into a giant computer. As AI gets more and more powerful, the most extreme scenarios will require serious consideration and mitigation. However, well before we get there, much could go wrong. Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms—from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain theft, and willful sabotage. Think about an ACI capable of easily passing the Modern Turing Test, but turned toward catastrophic ends. Advanced Advanced AIs and synthetic biology will not only be available to groups finding new sources of energy or life-changing drugs; they will also be available to the next Ted Kaczynski. AI is both valuable and dangerous precisely because it’s an extension of our best and worst selves. And as a technology premised on learning, it can keep adapting, probing, producing novel strategies and ideas potentially far removed from anything before considered, even by other AIs. Ask it to suggest ways of knocking out the freshwater supply, or crashing the stock market, or triggering a nuclear war, or designing the ultimate virus, and it will. Soon. Even more than I worry about speculative paper-clip maximizers or some strange, malevolent demon, I worry about what existing forces this tool will amplify in the next ten years. Imagine scenarios where AIs control energy grids, media programming, power stations, planes, or trading accounts for major financial houses. When robots are ubiquitous, and militaries stuffed with lethal autonomous weapons—warehouses full of technology that can commit autonomous mass murder at the literal push of a button—what might a hack, developed by another AI, look like? Or consider even more basic modes of failure, not attacks, but plain errors. What if AIs make mistakes in fundamental infrastructures, or a widely used medical system starts malfunctioning? It’s not hard to see how numerous, capable, quasi-autonomous agents on the loose, even those chasing well-intentioned but ill-formed goals, might sow havoc. We don’t yet know the implications of AI for fields as diverse as agriculture, chemistry, surgery, and finance. That’s part of the problem; we don’t know what failure modes are being introduced and how deep they could extend. There is no instruction manual on how to build the technologies in the coming wave safely. We cannot build systems of escalating power and danger to experiment with ahead of time. We cannot know how quickly an AI might self-improve, or what would happen after a lab accident with some not yet invented piece of biotech. We cannot tell what results from a human consciousness plugged directly into a computer, or what an AI-enabled cyberweapon means for critical infrastructure, or how a gene drive will play out in the wild. Once fast-evolving, self-assembling automatons or new biological agents are released, out in the wild, there’s no rewinding the clock. 
After a certain point, even curiosity and tinkering might be dangerous. Even if you believe the chance of catastrophe is low, that we are operating blind should give you pause. Suleyman, Mustafa. The Coming Wave (p. 264). Crown. Kindle Edition.

Tremendous harm is inevitable

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card DEBATEUS!

Nor is building safe and contained technology in itself sufficient. Solving the question of AI alignment doesn’t mean doing so once; it means doing it every time a sufficiently powerful AI is built, wherever and whenever that happens. You don’t just need to solve the question of lab leaks in one lab; you need to solve it in every lab, in every country, forever, even while those same countries are under serious political strain. Once technology reaches a critical capability, it isn’t enough for early pioneers to just build it safely, as challenging as that undoubtedly is. Rather, true safety requires maintaining those standards across every single instance: a mammoth expectation given how fast and widely these are already diffusing. This is what happens when anyone is free to invent or use tools that affect us all. And we aren’t just talking about access to a printing press or a steam engine, as extraordinary as they were. We are talking about outputs with a fundamentally new character: new compounds, new life, new species. If the wave is uncontained, it’s only a matter of time. Allow for the possibility of accident, error, malicious use, evolution beyond human control, unpredictable consequences of all kinds. At some stage, in some form, something, somewhere, will fail. And this won’t be a Bhopal or even a Chernobyl; it will unfold on a worldwide scale. This will be the legacy of technologies produced, for the most part, with the best of intentions. However, not everyone shares those intentions. Suleyman, Mustafa. The Coming Wave (p. 265). Crown. Kindle Edition.

There are plenty of people who want to use it for harm – terrorists, cults, lunatics, suicidal states

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card DEBATEUS!

CULTS, LUNATICS, AND SUICIDAL STATES Most of the time the risks arising from things like gain-of-function research are a result of sanctioned and benign efforts. They are, in other words, supersized revenge effects, unintended consequences of a desire to do good. Unfortunately, some organizations are founded with precisely the opposite motivation. Founded in the 1980s, Aum Shinrikyo (Supreme Truth) was a Japanese doomsday cult. The group originated in a yoga studio under the leadership of a man who called himself Shoko Asahara. Building a membership among the disaffected, they radicalized as their numbers swelled, becoming convinced that the apocalypse was nigh, that they alone would survive, and that they should hasten it. Asahara grew the cult to somewhere between forty thousand and sixty thousand members, coaxing a loyal group of lieutenants all the way to using biological and chemical weapons. At Aum Shinrikyo’s peak popularity it is estimated to have held more than $1 billion in assets and counted dozens of well-trained scientists as members. Despite a fascination with bizarre, sci-fi weapons like earthquake-generating machines, plasma guns, and mirrors to deflect the sun’s rays, they were a deadly serious and highly sophisticated group. Aum built dummy companies and infiltrated university labs to procure material, purchased land in Australia with the intent of prospecting for uranium to build nuclear weapons, and embarked on a huge biological and chemical weapons program in the hilly countryside outside Tokyo. The group experimented with phosgene, hydrogen cyanide, soman, and other nerve agents. They planned to engineer and release an enhanced version of anthrax, recruiting a graduate-level virologist to help. Members obtained the neurotoxin C. botulinum and sprayed it on Narita International Airport, the National Diet Building, the Imperial Palace, the headquarters of another religious group, and two U.S. naval bases. Luckily, they made a mistake in its manufacture and no harm ensued. It didn’t last. In 1994, Aum Shinrikyo sprayed the nerve agent sarin from a truck, killing eight and wounding two hundred. A year later they struck the Tokyo subway, releasing more sarin, killing thirteen and injuring some six thousand people. The subway attack, which involved depositing sarin-filled bags around the metro system, was more harmful partly because of the enclosed spaces. Thankfully neither attack used a particularly effective delivery mechanism. But in the end it was only luck that stopped a more catastrophic event. Aum Shinrikyo combined an unusual degree of organization with a frightening level of ambition. They wanted to initiate World War III and a global collapse by murdering at shocking scale and began building an infrastructure to do so. On the one hand, it’s reassuring how rare organizations like Aum Shinrikyo are. Of the many terrorist incidents and other non-state-perpetrated mass killings since the 1990s, most have been carried out by disturbed loners or groups with specific political or ideological agendas. But on the other hand, this reassurance has limits. Procuring weapons of great power was previously a huge barrier to entry, helping keep catastrophe at bay. The sickening nihilism of the school shooter is bounded by the weapons they can access. The Unabomber had only homemade devices. Building and disseminating biological and chemical weapons were huge challenges for Aum Shinrikyo. 
As a small, fanatical coterie operating in an atmosphere of paranoid secrecy, with only limited expertise and access to materials, they made mistakes. As the coming wave matures, however, the tools of destruction will, as we’ve seen, be democratized and commoditized. They will have greater capability and adaptability, potentially operating in ways beyond human control or understanding, evolving and upgrading at speed, some of history’s greatest offensive powers available widely. Those who would use new technologies like Aum are fortunately rare. Yet even one Aum Shinrikyo every fifty years is now one too many to avert an incident orders of magnitude worse than the subway attack. Cults, lunatics, suicidal states on their last legs, all have motive and now means. As a report on the implications of Aum Shinrikyo succinctly puts it, “We are playing Russian roulette.” A new phase of history is here. With zombie governments failing to contain technology, the next Aum Shinrikyo, the next industrial accident, the next mad dictator’s war, the next tiny lab leak, will have an impact that is difficult to contemplate. It’s tempting to dismiss all these dark risk scenarios as the distant daydreams of people who grew up reading too much science fiction, those biased toward catastrophism. Tempting, but a mistake. Regardless of where we are with BSL-4 protocols or regulatory proposals or technical publications on the AI alignment problem, those incentives grind away, the technologies keep developing and diffusing. This is not the stuff of speculative novels and Netflix series. This is real, being worked on right this second in offices and labs around the world. So serious are the risks, however, that they necessitate consideration of all the options. Containment is about the ability to control technology. Further back, that means the ability to control the people and societies behind it. As catastrophic impacts unfurl or their possibility becomes unignorable, the terms of debate will change. Calls for not just control but crackdowns will grow. The potential for unprecedented levels of vigilance will become ever more appealing. Perhaps it might be possible to spot and then stop emergent threats? Wouldn’t that be for the best—the right thing to do? It’s my best guess this will be the reaction of governments and populations around the world. When the unitary power of the nation-state is threatened, when containment appears increasingly difficult, when lives are on the line, the inevitable reaction will be a tightening of the grip on power. The question is, at what cost?

Regulation won’t solve

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card DEBATEUS!

While garage amateurs gain access to more powerful tools and tech companies spend billions on R&D, most politicians are trapped in a twenty-four-hour news cycle of sound bites and photo ops. When a government has devolved to the point of simply lurching from crisis to crisis, it has little breathing room for tackling tectonic forces requiring deep domain expertise and careful judgment on uncertain timescales. It’s easier to ignore these issues in favor of low-hanging fruit more likely to win votes in the next election. Even technologists and researchers in areas like AI struggle with the pace of change. What chance, then, do regulators have, with fewer resources? How do they account for an age of hyper-evolution, for the pace and unpredictability of the coming wave? Technology evolves week by week. Drafting and passing legislation takes years. Consider the arrival of a new product on the market like Ring doorbells. Ring put a camera on your front door and connected it to your phone. The product was adopted so quickly and is now so widespread that it has fundamentally changed the nature of what needs regulating; suddenly your average suburban street went from relatively private space to surveilled and recorded. By the time the regulation conversation caught up, Ring had already created an extensive network of cameras, amassing data and images from the front doors of people around the world. Twenty years on from the dawn of social media, there’s no consistent approach to the emergence of a powerful new platform (and besides, is privacy, polarization, monopoly, foreign ownership, or mental health the core problem—or all of the above?). The coming wave will worsen this dynamic. Discussions of technology sprawl across social media, blogs and newsletters, academic journals, countless conferences and seminars and workshops, their threads distant and increasingly lost in the noise. Everyone has a view, but it doesn’t add up to a coherent program. Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out. Yet I believe they are aspects of what amounts to the same phenomenon; they all aim to address different aspects of the same wave. It’s not enough to have dozens of separate conversations about algorithmic bias or bio-risk or drone warfare or the economic impact of robotics or the privacy implications of quantum computing. It completely underplays how interrelated both causes and effects are. We need an approach that unifies these disparate conversations, encapsulating all those different dimensions of risk, a general-purpose concept for this general-purpose revolution. Suleyman, Mustafa. The Coming Wave (pp. 282-283). Crown. Kindle Edition.

AI means massive dislocation and job loss; new jobs won’t offset the losses, and there will be severe interim disruption

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card DEBATEUS!

In the years since I co-founded DeepMind, no AI policy debate has been given more airtime than the future of work—to the point of oversaturation. Here was the original thesis. In the past, new technologies put people out of work, producing what the economist John Maynard Keynes called “technological unemployment.” In Keynes’s view, this was a good thing, with increasing productivity freeing up time for further innovation and leisure. Examples of tech-related displacement are myriad. The introduction of power looms put old-fashioned weavers out of business; motorcars meant that carriage makers and horse stables were no longer needed; lightbulb factories did great as candlemakers went bust. Broadly speaking, when technology damaged old jobs and industries, it also produced new ones. Over time these new jobs tended toward service industry roles and cognitive-based white-collar jobs. As factories closed in the Rust Belt, demand for lawyers, designers, and social media influencers boomed. So far at least, in economic terms, new technologies have not ultimately replaced labor; they have in the aggregate complemented it. But what if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn? If the coming wave really is as general and wide-ranging as it appears, how will humans compete? What if a large majority of white-collar tasks can be performed more efficiently by AI? In few areas will humans still be “better” than machines. I have long argued this is the more likely scenario. With the arrival of the latest generation of large language models, I am now more convinced than ever that this is how things will play out. These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of “cognitive manual labor” are numbered. We are only just now starting to see what impact this new wave is about to have. Early analysis of ChatGPT suggests it boosts the productivity of “mid-level college educated professionals” by 40 percent on many tasks. That in turn could affect hiring decisions: a McKinsey study estimated that more than half of all jobs could see many of their tasks automated by machines in the next seven years, while fifty-two million Americans work in roles with a “medium exposure to automation” by 2030. The economists Daron Acemoglu and Pascual Restrepo estimate that robots cause the wages of local workers to fall. With each additional robot per thousand workers there is a decline in the employment-to-population ratio, and consequently a fall in wages. Today algorithms perform the vast bulk of equity trades and increasingly act across financial institutions, and yet, even as Wall Street booms, it sheds jobs as technology encroaches on more and more tasks. Many remain unconvinced. Economists like David Autor argue that new technology consistently raises incomes, creating demand for new labor. Technology makes companies more productive, it generates more money, which then flows back into the economy. 
Put simply, demand is insatiable, and this demand, stoked by the wealth technology has generated, gives rise to new jobs requiring human labor. After all, skeptics say, ten years of deep learning success has not unleashed a jobs automation meltdown. Buying into that fear was, some argue, just a repeat of the old “lump of labor” fallacy, which erroneously claims there is only a set amount of work to go around. Instead, the future looks more like billions of people working in high-end jobs still barely conceived of. I believe this rosy vision is implausible over the next couple of decades; automation is unequivocally another fragility amplifier. As we saw in chapter 4, AI’s rate of improvement is well beyond exponential, and there appears no obvious ceiling in sight. Machines are rapidly imitating all kinds of human abilities, from vision to speech and language. Even without fundamental progress toward “deep understanding,” new language models can read, synthesize, and generate eye-wateringly accurate and highly useful text. There are literally hundreds of roles where this single skill alone is the core requirement, and yet there is so much more to come from AI. Yes, it’s almost certain that many new job categories will be created. Who would have thought that “influencer” would become a highly sought-after role? Or imagined that in 2023 people would be working as “prompt engineers”—nontechnical programmers of large language models who become adept at coaxing out specific responses? Demand for masseurs, cellists, and baseball pitchers won’t go away. But my best guess is that new jobs won’t come in the numbers or timescale to truly help. The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs. And, sure, new demand will create new work, but that doesn’t mean it all gets done by human beings. Labor markets also have immense friction in terms of skills, geography, and identity. Consider that in the last bout of deindustrialization the steelworker in Pittsburgh or the carmaker in Detroit could hardly just up sticks, retrain mid-career, and get a job as a derivatives trader in New York or a branding consultant in Seattle or a schoolteacher in Miami. If Silicon Valley or the City of London creates lots of new jobs, it doesn’t help people on the other side of the country if they don’t have the right skills or aren’t able to relocate. If your sense of self is wedded to a particular kind of work, it’s little consolation if you feel your new job demeans your dignity. Working on a zero-hours contract in a distribution center doesn’t provide the sense of pride or social solidarity that came from working for a booming Detroit auto manufacturer in the 1960s. The Private Sector Job Quality Index, a measure of how many jobs provide above-average income, has plunged since 1990; it suggests that well-paying jobs as a proportion of the total have already started to fall. Countries like India and the Philippines have seen a huge boom from business process outsourcing, creating comparatively high-paying jobs in places like call centers. It’s precisely this kind of work that will be targeted by automation. New jobs might be created in the long term, but for millions they won’t come quick enough or in the right places. At the same time, a jobs recession will crater tax receipts, damaging public services and calling into question welfare programs just as they are most needed.
Even before jobs are decimated, governments will be stretched thin, struggling to meet all their commitments, finance themselves sustainably, and deliver services the public has come to expect. Moreover, all this disruption will happen globally, on multiple dimensions, affecting every rung of the development ladder from primarily agricultural economies to advanced service-based sectors. From Lagos to L.A., pathways to sustainable employment will be subject to immense, unpredictable, and fast-evolving dislocations. Even those who don’t foresee the most severe outcomes of automation still accept that it is on course to cause significant medium-term disruptions. Whichever side of the jobs debate you fall on, it’s hard to deny that the ramifications will be hugely destabilizing for hundreds of millions who will, at the very least, need to re-skill and transition to new types of work. Optimistic scenarios still involve troubling political ramifications from broken government finances to underemployed, insecure, and angry populations. It augurs trouble. Another stressor in a stressed world. Suleyman, Mustafa. The Coming Wave (pp. 227-228). Crown. Kindle Edition.

 

AI means surveillance and totalitarianism

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

ROCKET FUEL FOR AUTHORITARIANISM When compared with superstar corporations, governments appear slow, bloated, and out of touch. It’s tempting to dismiss them as headed for the trash can of history. However, another inevitable reaction of nation-states will be to use the tools of the coming wave to tighten their grip on power, taking full advantage to entrench their dominance. In the twentieth century, totalitarian regimes wanted planned economies, obedient populations, and controlled information ecosystems. They wanted complete hegemony. Every aspect of life was managed. Five-year plans dictated everything from the number and content of films to bushels of wheat expected from a given field. High modernist planners hoped to create pristine cities of stark order and flow. An ever-watchful and ruthless security apparatus kept it all ticking over. Power concentrated in the hands of a single supreme leader, capable of surveying the entire picture and acting decisively. Think Soviet collectivization, Stalin’s five-year plans, Mao’s China, East Germany’s Stasi. This is government as dystopian nightmare. And so far at least, it has always gone disastrously wrong. Despite the best efforts of revolutionaries and bureaucrats alike, society could not be bent into shape; it was never fully “legible” to the state, but a messy, ungovernable reality that would not conform with the purist dreams of the center. Humanity is too multifarious, too impulsive to be boxed in like this. In the past, the tools available to totalitarian governments simply weren’t equal to the task. So those governments failed; they failed to improve quality of life, or eventually they collapsed or reformed. Extreme concentration wasn’t just highly undesirable; it was practically impossible. The coming wave presents the disturbing possibility that this may no longer be true. Instead, it could initiate an injection of centralized power and control that will morph state functions into repressive distortions of their original purpose. Rocket fuel for authoritarians and for great power competition alike. The ability to capture and harness data at an extraordinary scale and precision; to create territory-spanning systems of surveillance and control, reacting in real time; to put, in other words, history’s most powerful set of technologies under the command of a single body, would rewrite the limits of state power so comprehensively that it would produce a new kind of entity altogether. Your smart speaker wakes you up. Immediately you turn to your phone and check your emails. Your smart watch tells you you’ve had a normal night’s sleep and your heart rate is average for the morning. Already a distant organization knows, in theory, what time you are awake, how you are feeling, and what you are looking at. You leave the house and head to the office, your phone tracking your movements, logging the keystrokes on your text messages and the podcast you listen to. On the way, and throughout the day, you are captured on CCTV hundreds of times. After all, this city has at least one camera for every ten people, maybe many more than that. When you swipe in at the office, the system notes your time of entry. Software installed on your computer monitors productivity down to eye movements. On the way home you stop to buy dinner. The supermarket’s loyalty scheme tracks your purchases. After eating, you binge-stream another TV series; your viewing habits are duly noted. 
Every glance, every hurried message, every half thought registered in an open browser or fleeting search, every step through bustling city streets, every heartbeat and bad night’s sleep, every purchase made or backed out of—it is all captured, watched, tabulated. And this is only a tiny slice of the possible data harvested every day, not just at work or on the phone, but at the doctor’s office or in the gym. Almost every detail of life is logged, somewhere, by those with the sophistication to process and act on the data they collect. This is not some far-off dystopia. I’m describing daily reality for millions in a city like London. The only step left is bringing these disparate databases together into a single, integrated system: a perfect twenty-first-century surveillance apparatus. The preeminent example is, of course, China. That’s hardly news, but what’s become clear is how advanced and ambitious the party’s program already is, let alone where it might end up in twenty or thirty years. Compared with the West, Chinese research into AI concentrates on areas of surveillance like object tracking, scene understanding, and voice or action recognition. Surveillance technologies are ubiquitous, increasingly granular in their ability to home in on every aspect of citizens’ lives. They combine visual recognition of faces, gaits, and license plates with data collection—including bio-data—on a mass scale. Centralized services like WeChat bundle everything from private messaging to shopping and banking in one easily traceable place. Drive the highways of China, and you’ll notice hundreds of Automatic Number Plate Recognition cameras tracking vehicles. (These exist in most large urban areas in the Western world, too.) During COVID quarantines, robot dogs and drones carried speakers blasting messages warning people to stay inside. Facial recognition software builds on the advances in computer vision we saw in part 2, identifying individual faces with exquisite accuracy. When I open my phone, it starts automatically upon “seeing” my face: a small but slick convenience, but with obvious and profound implications. Although the system was initially developed by corporate and academic researchers in the United States, nowhere embraced or perfected the technology more than China. Chairman Mao had said “the people have sharp eyes” when watching their neighbors for infractions against communist orthodoxy. By 2015 this was the inspiration for a massive “Sharp Eyes” facial recognition program that ultimately aspired to roll such surveillance out across no less than 100 percent of public space. A team of leading researchers from the Chinese University of Hong Kong went on to found SenseTime, one of the world’s largest facial recognition companies, built on a database of more than two billion faces. China is now the leader in facial recognition technologies, with giant companies like Megvii and CloudWalk vying with SenseTime for market share. Chinese police even have sunglasses with built-in facial recognition technology capable of tracking suspects in crowds. Around half the world’s billion CCTV cameras are in China. Many have built-in facial recognition and are carefully positioned to gather maximal information, often in quasi-private spaces: residential buildings, hotels, even karaoke lounges. A New York Times investigation found the police in Fujian Province alone estimated they held a database of 2.5 billion facial images. 
They were candid about its purpose: “controlling and managing people.” Authorities are also looking to suck in audio data—police in the city of Zhongshan wanted cameras that could record audio within a three-hundred-foot radius—and close monitoring and storage of bio-data became routine in the COVID era. The Ministry of Public Security is clear on the next priority: stitch these scattered databases and services into a coherent whole, from license plates to DNA, WeChat accounts to credit cards. This AI-enabled system could spot emerging threats to the CCP like dissenters and protests in real time, allowing for a seamless, crushing government response to anything it perceived as undesirable. Nowhere does this come together with more horrifying potential than in the Xinjiang Autonomous Region. This rugged and remote part of northwest China has seen the systematic and technologically empowered repression and ethnic cleansing of its native Uighur people. All these systems of monitoring and control are brought together here. Cities are placed under blankets of camera surveillance with facial recognition and AI tracking. Checkpoints and “reeducation” camps govern movements and freedoms. A system of social credit scores based on numerous surveilled databases keeps tabs on the population. Authorities have built an iris-scan database that has the capacity to hold up to thirty million samples—more than the region’s population. Societies of overweening surveillance and control are already here, and now all of this is set to escalate enormously into a next-level concentration of power at the center. Yet it would be a mistake to write this off as just a Chinese or authoritarian problem. For a start, this tech is being exported wholesale to places like Venezuela and Zimbabwe, Ecuador and Ethiopia. Even to the United States. In 2019, the U.S. government banned federal agencies and their contractors from buying telecommunications and surveillance equipment from a number of Chinese providers including Huawei, ZTE, and Hikvision. Yet, just a year later, three federal agencies were found to have bought such equipment from prohibited vendors. More than one hundred U.S. towns have even acquired technology developed for use on the Uighurs in Xinjiang. A textbook failure of containment. Western firms and governments are also in the vanguard of building and deploying this tech. Invoking London above was no accident: it competes with cities like Shenzhen for most surveilled in the world. It’s no secret that governments monitor and control their own populations, but these tendencies extend deep into Western firms, too. In smart warehouses every micromovement of every worker is tracked down to body temperature and loo breaks. Companies like Vigilant Solutions aggregate movement data based on license plate tracking, then sell it to jurisdictions like state or municipal governments. Even your take-out pizza is being watched: Domino’s uses AI-powered cameras to check its pies. Just as much as anyone in China, those in the West leave a vast data exhaust every day of their lives. And just as in China, it is harvested, processed, operationalized, and sold. — Before the coming wave the notion of a global “high-tech panopticon” was the stuff of dystopian novels, Yevgeny Zamyatin’s We or George Orwell’s 1984. The panopticon is becoming possible. Billions of devices and trillions of data points could be operated and monitored at once, in real time, used not just for surveillance but for prediction. 
Not only will it foresee social outcomes with precision and granularity, but it might also subtly or overtly steer or coerce them, from grand macro-processes like election results down to individual consumer behaviors. This raises the prospect of totalitarianism to a new plane. It won’t happen everywhere, and not all at once. But if AI, biotech, quantum, robotics, and the rest of it are centralized in the hands of a repressive state, the resulting entity would be palpably different from any yet seen. In the next chapter we will return to this possibility. However, before then comes another trend. One completely, and paradoxically, at odds with centralization. Suleyman, Mustafa. The Coming Wave (p. 246). Crown. Kindle Edition.

The impact of uncontrolled AI is a catastrophe that could kill more than a billion people

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

VARIETIES OF CATASTROPHE To see what catastrophic harms we should prepare for, simply extrapolate the bad actor attacks we saw in chapter 10. Here are just a few plausible scenarios. Terrorists mount automatic weapons equipped with facial recognition to an autonomous drone swarm hundreds or thousands strong, each capable of quickly rebalancing from the weapon’s recoil, firing short bursts, and moving on. These drones are unleashed on a major downtown with instructions to kill a specific profile. In busy rush hour these would operate with terrifying efficiency, following an optimized route around the city. In minutes there would be an attack at far greater scale than, say, the 2008 Mumbai attacks, which saw armed terrorists roaming through city landmarks like the central train station. A mass murderer decides to hit a huge political rally with drones, spraying devices, and a bespoke pathogen. Soon attendees become sick, then their families. The speaker, a much-loved and much-loathed political lightning rod, is one of the first victims. In a febrile partisan atmosphere an assault like this ignites violent reprisals around the country and the chaos cascades. Using only natural language instruction, a hostile conspiracist in America disseminates masses of surgically constructed and divisive disinformation. Numerous attempts are made, most of which fail to gain traction. One eventually catches on: a police murder in Chicago. It’s completely fake, but the trouble on the streets, the widespread revulsion, is real. The attackers now have a playbook. By the time the video is verified as a fraud, violent riots with multiple casualties roil around the country, the fires continually stoked by new gusts of disinformation. Or imagine all that happening at the same time. Or not just at one event or in one city, but in hundreds of places. With tools like this it doesn’t take too much to realize that bad actor empowerment opens the door to catastrophe. Today’s AI systems try hard not to tell you how to poison the water supply or build an undetectable bomb. They are not yet capable of defining or pursuing goals on their own. However, as we have seen, both more widely diffused and less safe versions of today’s cutting-edge and more powerful models are coming, fast. Of all the catastrophic risks from the coming wave, AI has received the most coverage. But there are plenty more. Once militaries are fully automated, the barriers to entry for conflict will be far lower. A war might be sparked accidentally for reasons that forever remain unclear, AIs detecting some pattern of behavior or threat and then reacting, instantaneously, with overwhelming force. Suffice to say, the nature of that war could be alien, escalate quickly, and be unsurpassed in destructive consequences. We’ve already come across engineered pandemics and the perils of accidental releases, and glimpsed what happens when millions of self-improvement enthusiasts can experiment with the genetic code of life. An extreme bio-risk event of a less obvious kind, targeting a given portion of the population, say, or sabotaging an ecosystem, cannot be discounted. Imagine activists wanting to stop the cocaine trade inventing a new bug that targets only coca plants as a way to replace aerial fumigation. Or if militant vegans decided to disrupt the entire meat supply chain, with dire anticipated and unanticipated consequences. Either might spiral out of control. 
We know what a lab leak might look like in the context of amplifying fragility, but if it was not quickly brought under control, it would rank with previous plagues. To put this in context, the omicron variant of COVID infected a quarter of Americans within a hundred days of first being identified. What if we had a pandemic that had, say, a 20 percent mortality rate, but with that kind of transmissibility? Or what if it was a kind of respiratory HIV that would lie incubating for years with no acute symptoms? A novel human transmissible virus with a reproduction rate of, say, 4 (far below chicken pox or measles) and a case fatality rate of 50 percent (far below Ebola or bird flu) could, even accounting for lockdown-style measures, cause more than a billion deaths in a matter of months. What if multiple such pathogens were released at once? This goes far beyond fragility amplification; it would be an unfathomable calamity.
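To make the arithmetic behind that billion-death figure concrete, here is a rough back-of-the-envelope sketch. It is my own illustration, not Suleyman’s: it steps an outbreak forward one infection generation at a time, assuming a reproduction rate of 4, a 50 percent case fatality rate, a five-day serial interval, a world population of eight billion, a crude susceptible-depletion adjustment, and lockdown-style measures approximated by halving the reproduction rate after day 60. It is meant only to show the order of magnitude implied by those parameters, not to forecast anything.

```python
# Back-of-the-envelope outbreak arithmetic (illustrative only, not a forecast).
# Assumptions: reproduction rate R = 4, case fatality rate = 50%, 5-day serial
# interval, world population of 8 billion, R halved after day 60 to stand in
# for lockdown-style measures, and a crude susceptible-depletion adjustment.

POPULATION = 8_000_000_000
R_BASE = 4.0            # basic reproduction number (assumed)
CFR = 0.5               # case fatality rate (assumed)
SERIAL_INTERVAL = 5     # days per infection generation (assumed)
SUPPRESSION_DAY = 60    # day after which R is halved (assumed)

def cumulative_deaths(days: int, seed_cases: float = 100.0) -> float:
    """Deaths after `days`, stepping the outbreak one infection generation at a time."""
    total_infected = seed_cases
    new_cases = seed_cases
    for day in range(0, days, SERIAL_INTERVAL):
        susceptible_fraction = max(0.0, 1.0 - total_infected / POPULATION)
        r_now = R_BASE if day < SUPPRESSION_DAY else R_BASE / 2
        new_cases *= r_now * susceptible_fraction
        total_infected = min(POPULATION, total_infected + new_cases)
    return total_infected * CFR

for months in (2, 3, 4):
    deaths = cumulative_deaths(months * 30)
    print(f"~{deaths / 1e9:.1f} billion deaths after {months} months")
```

Under these assumptions the toy model passes a billion deaths in roughly two months. Real epidemics have far more structure (heterogeneous contact, behavior change, countermeasures), so the point is simply that the quoted parameters make billion-scale mortality an arithmetic consequence rather than an exaggeration.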

“AI Bad” isn’t fearmongering; it’s based on objective risk

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card DEBATEUS!

The promise of technology is that it improves lives, the benefits far outweighing the costs and downsides. This set of wicked choices means that promise has been savagely inverted. Doom-mongering makes people—myself included—glassy-eyed. At this point, you may be feeling wary or skeptical. Talking of catastrophic effects often invites ridicule: accusations of catastrophism, indulgent negativity, shrill alarmism, navel-gazing on remote and rarefied risks when plenty of clear and present dangers scream for attention. Like breathless techno-optimism, breathless techno-catastrophism is easy to dismiss as a twisted, misguided form of hype unsupported by the historical record. But just because a warning has dramatic implications isn’t good grounds to automatically reject it. The pessimism-averse complacency greeting the prospect of disaster is itself a recipe for disaster. It feels plausible, rational in its own terms, “smart” to dismiss warnings as the overblown chatter of a few weirdos, but this attitude prepares the way for its own failure. No doubt, technological risk takes us into uncertain territory. Nonetheless, all the trends point to a profusion of risk. This speculation is grounded in constantly compounding scientific and technological improvements. Those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking here about the proliferation of motorbikes or washing machines. Suleyman, Mustafa. The Coming Wave (p. 259). Crown. Kindle Edition.

AI means autonomous drones, cyberwarfare, defeated regulations, and financial collapse

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

The cost of military-grade drones has fallen by three orders of magnitude over the last decade. By 2028, $26 billion a year will be spent on military drones, and at that point many are likely to be fully autonomous. Live deployments of autonomous drones are becoming more plausible by the day. In May 2021, for example, an AI drone swarm in Gaza was used to find, identify, and attack Hamas militants. Start-ups like Anduril, Shield AI, and Rebellion Defense have raised hundreds of millions of dollars to build autonomous drone networks and other military applications of AI. Complementary technologies like 3-D printing and advanced mobile communications will reduce the cost of tactical drones to a few thousand dollars, putting them within reach of everyone from amateur enthusiasts to paramilitaries to lone psychopaths. In addition to easier access, AI-enhanced weapons will improve themselves in real time. WannaCry’s impact ended up being far more limited than it could have been. Once the software patch was applied, the immediate issue was resolved. AI transforms this kind of attack. AI cyberweapons will continuously probe networks, adapting themselves autonomously to find and exploit weaknesses. Existing computer worms replicate themselves using a fixed set of preprogrammed heuristics. But what if you had a worm that improved itself using reinforcement learning, experimentally updating its code with each network interaction, each time finding more and more efficient ways to take advantage of cyber vulnerabilities? Just as systems like AlphaGo learn unexpected strategies from millions of self-played games, so too will AI-enabled cyberattacks. However much you war-game every eventuality, there’s inevitably going to be a tiny vulnerability discoverable by a persistent AI. Everything from cars and planes to fridges and data centers relies on vast code bases. The coming AIs make it easier than ever to identify and exploit weaknesses. They could even find legal or financial means of damaging corporations or other institutions, hidden points of failure in banking regulation or technical safety protocols. As the cybersecurity expert Bruce Schneier has pointed out, AIs could digest the world’s laws and regulations to find exploits, arbitraging legalities. Imagine a huge cache of documents from a company leaked. A legal AI might be able to parse this against multiple legal systems, figure out every possible infraction, and then hit that company with multiple crippling lawsuits around the world at the same time. AIs could develop automated trading strategies designed to destroy competitors’ positions or create disinformation campaigns (more on this in the next section) engineering a run on a bank or a product boycott, enabling a competitor to swoop in and buy the company—or simply watch it collapse. AI adept at exploiting not just financial, legal, or communications systems but also human psychology, our weaknesses and biases, is on the way. Researchers at Meta created a program called CICERO. It became an expert at playing the complex board game Diplomacy, a game in which planning long, complex strategies built around deception and backstabbing is integral. It shows how AIs could help us plan and collaborate, but also hints at how they could develop psychological tricks to gain trust and influence, reading and manipulating our emotions and behaviors with a frightening level of depth, a skill useful in, say, winning at Diplomacy or electioneering and building a political movement. 
The space for possible attacks against key state functions grows even as the same premise that makes AI so powerful and exciting—its ability to learn and adapt—empowers bad actors. For centuries cutting-edge offensive capabilities, like massed artillery, naval broadsides, tanks, aircraft carriers, or ICBMs, have initially been so costly that they remained the province of the nation-state. Now they are evolving so fast that they quickly proliferate into the hands of research labs, start-ups, and garage tinkerers. Just as social media’s one-to-many broadcast effect means a single person can suddenly broadcast globally, so the capacity for far-reaching consequential action is becoming available to everyone. This new dynamic—where bad actors are emboldened to go on the offensive—opens up new vectors of attack thanks to the interlinked, vulnerable nature of modern systems: not just a single hospital but an entire health system can be hit; not just a warehouse but an entire supply chain. With lethal autonomous weapons the costs, in both material and above all human terms, of going to war, of attacking, are lower than ever. At the same time, all this introduces greater levels of deniability and ambiguity, degrading the logic of deterrence. If no one can be sure who initiated an assault, or what exactly has happened, why not go ahead? Suleyman, Mustafa. The Coming Wave (p. 212). Crown. Kindle Edition.

The “good guys” with AI will not be able to keep up with the “bad guys,” especially in the short term

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Throughout history technology has produced a delicate dance of offensive and defensive advantage, the pendulum swinging between the two but a balance roughly holding: for every new projectile or cyberweapon, a potent countermeasure has quickly arisen. Cannons may wear down a castle’s walls, but they can also rip apart an invading army. Now, powerful, asymmetric, omni-use technologies are certain to reach the hands of those who want to damage the state. While defensive operations will be strengthened in time, the nature of the four features favors offense: this proliferation of power is just too wide, fast, and open. An algorithm of world-changing significance can be stored on a laptop; soon it won’t even require the kind of vast, regulatable infrastructure of the last wave and the internet. Unlike an arrow or even a hypersonic missile, AI and bioagents will evolve more cheaply, more rapidly, and more autonomously than any technology we’ve ever seen. Consequently, without a dramatic set of interventions to alter the current course, millions will have access to these capabilities in just a few years. Maintaining a decisive, indefinite strategic advantage across such a broad spectrum of general-use technologies is simply not possible. Eventually, the balance might be restored, but not before a wave of immensely destabilizing force is unleashed. And as we’ve seen, the nature of the threat is far more widespread than blunt forms of physical assault. Information and communication together is its own escalating vector of risk, another emerging fragility amplifier requiring attention. Welcome to the deepfake era. Suleyman, Mustafa. The Coming Wave (p. 213). Crown. Kindle Edition.

Deepfakes are indistinguishable from reality

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Ask yourself, what happens when anyone has the power to create and broadcast material with incredible levels of realism? These examples occurred before the means to generate near-perfect deepfakes—whether text, images, video, or audio—became as easy as writing a query into Google. As we saw in chapter 4, large language models now show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real. Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more everyday people will be imitated as the required training data falls to just a handful of examples. It’s already happening. A bank in Hong Kong transferred millions of dollars to fraudsters in 2021, after one of their clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained how the company needed to move money for an acquisition. All the documents seemed to check out, the voice and character were flawlessly familiar, so the manager initiated the transfer. Anyone motivated to sow instability now has an easier time of it. Say three days before an election the president is caught on camera using a racist slur. The campaign press office strenuously denies it, but everyone knows what they’ve seen. Outrage seethes around the country. Polls nose-dive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks. The threat here lies not so much with extreme cases as in subtle, nuanced, and highly plausible scenarios being exaggerated and distorted. It’s not the president charging into a school screaming nonsensical rubbish while hurling grenades; it’s the president resignedly saying he has no choice but to institute a set of emergency laws or reintroduce the draft. It’s not Hollywood fireworks; it’s the purported surveillance camera footage of a group of white policemen caught on tape beating a Black man to death. Sermons from the radical preacher Anwar al-Awlaki inspired the Boston Marathon bombers, the attackers of Charlie Hebdo in Paris, and the shooter who killed forty-nine people at an Orlando nightclub. Yet al-Awlaki died in 2011, the first U.S. citizen killed by a U.S. drone strike, before any of these events. His radicalizing messages were, though, still available on YouTube until 2017. Suppose that using deepfakes new videos of al-Awlaki could be “unearthed,” each commanding further targeted attacks with precision-honed rhetoric. Not everyone would buy it, but those who wanted to believe would find it utterly compelling. Soon these videos will be fully and believably interactive. You are talking directly to him. He knows you and adapts to your dialect and style, plays on your history, your personal grievances, your bullying at school, your terrible, immoral Westernized parents. This is not disinformation as blanket carpet bombing; it’s disinformation as surgical strike. Phishing attacks against politicians or businesspeople, disinformation with the aim of major financial-market disruption or manipulation, media designed to poison key fault lines like sectarian or racial divides, even low-level scams—trust is damaged and fragility again amplified. 
Eventually entire and rich synthetic histories of seemingly real-world events will be easy to generate. Individual citizens won’t have time or the tools to verify a fraction of the content coming their way. Fakes will easily pass sophisticated checks, let alone a two-second smell test.

AI leads to massive disinformation campaigns

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

In the 1980s, the Soviet Union funded disinformation campaigns suggesting that the AIDS virus was the result of a U.S. bioweapons program. Years later, some communities were still dealing with the mistrust and fallout. The campaigns, meanwhile, have not stopped. According to Facebook, Russian agents created no fewer than eighty thousand pieces of organic content that reached 126 million Americans on their platforms during the 2016 election. AI-enhanced digital tools will exacerbate information operations like these, meddling in elections, exploiting social divisions, and creating elaborate astroturfing campaigns to sow chaos. Unfortunately, it’s far from just Russia. More than seventy countries have been found running disinformation campaigns. China is quickly catching up with Russia; others from Turkey to Iran are developing their skills. (The CIA, too, is no stranger to info ops.) Early in the COVID-19 pandemic a blizzard of disinformation had deadly consequences. A Carnegie Mellon study analyzed more than 200 million tweets discussing COVID-19 at the height of the first lockdown. Eighty-two percent of influential users advocating for “reopening America” were bots. This was a targeted “propaganda machine,” most likely Russian, designed to intensify the worst public health crisis in a century. Deepfakes automate these information assaults. Until now effective disinformation campaigns have been labor-intensive. While bots and fakes aren’t difficult to make, most are of low quality, easily identifiable, and only moderately effective at actually changing targets’ behavior. High-quality synthetic media changes this equation. Not all nations currently have the funds to build huge disinformation programs, with dedicated offices and legions of trained staff, but that’s less of a barrier when high-fidelity material can be generated at the click of a button. Much of the coming chaos will not be accidental. It will come as existing disinformation campaigns are turbocharged, expanded, and devolved out to a wide group of motivated actors. The rise of synthetic media at scale and minimal cost amplifies both disinformation (malicious and intentionally misleading information) and misinformation (a wider and more unintentional pollution of the information space) at once. Cue an “Infocalypse,” the point at which society can no longer manage a torrent of sketchy material, where the information ecosystem grounding knowledge, trust, and social cohesion, the glue holding society together, falls apart. In the words of a Brookings Institution report, ubiquitous, perfect synthetic media means “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.” Not all stressors and harms come from bad actors,

AI leads to the creation of pandemic pathogens that could kill everyone

Mustafa Suleyman, September 2023, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection Ai, https://www.youtube.com/watch?v=CTxnLsYHWuI

25:52

[Suleyman]: I think that the darkest scenario there is that people will experiment with pathogens, engineered, you know, synthetic pathogens that might end up accidentally or intentionally being more transmissible (i.e., they can spread faster) or more lethal (i.e., they cause more harm or potentially kill), like a pandemic. And that’s where we need containment, right? We have to limit access to the tools and the know-how to carry out that kind of experimentation. So one framework for thinking about this, with respect to making containment possible, is that we really are experimenting with dangerous materials. Anthrax is not something that can be bought over the Internet, that can be freely experimented with, and likewise the very best of these tools, in a few years’ time, are going to be capable of creating new synthetic pandemic pathogens. And so we have to restrict access to those things. That means restricting access to the compute; it means restricting access to the software that runs the models, to the cloud environments that provide APIs, that provide you access to experiment with those things; and of course, on the biology side, it means restricting access to some of the substances. And people aren’t going to like this. People are not going to like that claim, because it means that those who want to do good with those tools, those who want to create a startup, the small guy, the little developer that struggles to comply with all the regulations, they’re going to be pissed off, understandably, right? But that is the age we’re in. Deal with it. We have to confront that reality. That means that we have to approach this with the precautionary principle, right? Never before in the invention of a technology or in the creation of a regulation have we proactively said we need to go slowly, we need to make sure that this first does no harm: the precautionary principle. And that is just an unprecedented moment; no other technology has done that. Because I think we collectively in the industry, those of us who are closest to the work, can see a place in five years or ten years where it could get out of control, and we have to get on top of it now. And it’s better to forgo, that is, give up, some of those potential upsides or benefits until we can be more sure that it can be contained, that it can be controlled, that it always serves our collective interests. Think about that.

[Interviewer]: So I think about what you’ve just said there about being able to create these pathogens, these diseases and viruses, etc., that could become weapons or whatever else. But with artificial intelligence and the power of that intelligence, with these pathogens, you could theoretically ask one of these systems to create a very deadly virus. You could ask the artificial intelligence to create a very deadly virus that has certain properties, maybe even one that mutates over time in a certain way so it only kills a certain amount of people, kind of like a nuclear bomb of viruses that you could just hit an enemy with. Now, if I hear that and I go, okay, that’s powerful, I would like one of those, you know, there might be an adversary out there that goes, I would like one of those just in case America gets out of hand, and America is thinking, you know, I want one of those in case Russia gets out of hand. And so, okay, you might take a precautionary approach in the United States, but that’s only going to put you on the back foot when China or Russia or one of your adversaries accelerates forward on that path. And it’s the same with the nuclear bomb.

[Suleyman]: And, you know, you nailed it. I mean, that is the race condition. We refer to that as the race condition: the idea that if I don’t do it, the other party is going to do it, and therefore I must do it. But the problem with that is that it creates a self-fulfilling prophecy, so the default there is that we all end up doing it. And that can’t be right, because there is an opportunity for massive cooperation here. There’s a shared interest, that is, between us and China and every other quote-unquote “them” or “they” or enemy that we want to create; we’ve all got a shared interest in advancing the collective health and well-being of humans and humanity.

[Interviewer]: How well have we done at promoting shared interest in the development of technologies over the years, even at, like, a corporate level?

[Suleyman]: Even, you know, the nuclear non-proliferation treaty has been reasonably successful. There are only nine nuclear states in the world today. We’ve stopped many; like, three countries actually gave up nuclear weapons because we incentivized them with sanctions and threats and economic rewards. Small groups have tried to get access to nuclear weapons and so far have largely failed.

[Interviewer]: It’s expensive, though, right, and hard, like, uranium as a chemical, to keep it stable and to buy it and to house it. I mean, I can just put it in the shed?

[Suleyman]: You certainly couldn’t put it in a shed. You can’t download uranium-235 off the Internet; it’s not available open source.

[Interviewer]: That is totally true, so it’s got different characteristics, for sure. But a kid in Russia could, you know, in his bedroom, could download something onto his computer that’s…
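The “race condition” Suleyman describes is, in game-theoretic terms, a prisoner’s dilemma. The sketch below is my own illustration (not from the interview), with entirely assumed payoff numbers: racing is each side’s individually dominant choice, yet mutual racing leaves both worse off than mutual restraint, which is exactly the self-fulfilling prophecy he warns about and why he argues for coordinated containment. Only the ordering of the numbers matters, not their values.

```python
# A minimal prisoner's-dilemma sketch of the "race condition" Suleyman describes
# (my own illustration, not from the interview). Payoff numbers are arbitrary
# assumptions; higher is better. Each state chooses to RESTRAIN or RACE.

RESTRAIN, RACE = "restrain", "race"

# payoffs[(my_choice, their_choice)] = (my_payoff, their_payoff)
payoffs = {
    (RESTRAIN, RESTRAIN): (3, 3),   # mutual restraint: shared safety benefits
    (RESTRAIN, RACE):     (0, 4),   # unilateral restraint: "on the back foot"
    (RACE,     RESTRAIN): (4, 0),   # unilateral racing: temporary advantage
    (RACE,     RACE):     (1, 1),   # mutual racing: worse for both than mutual restraint
}

def best_response(their_choice: str) -> str:
    """Return the choice that maximizes my payoff, holding the other side fixed."""
    return max((RESTRAIN, RACE), key=lambda mine: payoffs[(mine, their_choice)][0])

# Racing is the dominant strategy for each side individually...
for theirs in (RESTRAIN, RACE):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")

# ...yet the resulting equilibrium (race, race) pays less for both sides than the
# cooperative outcome (restrain, restrain): the self-fulfilling prophecy that
# makes coordinated containment valuable.
print("Equilibrium payoff:", payoffs[(RACE, RACE)], "vs cooperative:", payoffs[(RESTRAIN, RESTRAIN)])
```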

AGI in 3 years

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Today, AI systems can almost perfectly recognize faces and objects. We take speech-to-text transcription and instant language translation for granted. AI can navigate roads and traffic well enough to drive autonomously in some settings. Based on a few simple prompts, a new generation of AI models can generate novel images and compose text with extraordinary levels of detail and coherence. AI systems can produce synthetic voices with uncanny realism and compose music of stunning beauty. Even in more challenging domains, ones long thought to be uniquely suited to human capabilities like long-term planning, imagination, and simulation of complex ideas, progress leaps forward. AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. Suleyman, Mustafa. The Coming Wave (p. 23). Crown. Kindle Edition. In 2010 almost no one was talking seriously about AI. Yet what had once seemed a niche mission for a small group of researchers and entrepreneurs has now become a vast global endeavor. AI is everywhere, on the news and in your smartphone, trading stocks and building websites. Many of the world’s largest companies and wealthiest nations barrel forward, developing cutting-edge AI models and genetic engineering techniques, fueled by tens of billions of dollars in investment. Once matured, these emerging technologies will spread rapidly, becoming cheaper, more accessible, and widely diffused throughout society. They will offer extraordinary new medical advances and clean energy breakthroughs, creating not just new businesses but new industries and quality of life improvements in almost every imaginable area. Suleyman, Mustafa. The Coming Wave (p. 24). Crown. Kindle Edition.

AI means automated warfare that threatens civilization

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

And yet alongside these benefits, AI, synthetic biology, and other advanced forms of technology produce tail risks on a deeply concerning scale. They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention. Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers. This will require huge levels of surveillance along with massive intrusions into our private lives. Keeping a tight rein on technology could become part of a drift to everything and everyone being watched, all the time, in a dystopian global surveillance system justified by a desire to guard against the most extreme possible outcomes. Suleyman, Mustafa. The Coming Wave (pp. 24-25). Crown. Kindle Edition.

Banning tech means societal collapse

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Equally plausible is a Luddite reaction. Bans, boycotts, and moratoriums will ensue. Is it even possible to step away from developing new technologies and introduce a series of moratoriums? Unlikely. With their enormous geostrategic and commercial value, it’s difficult to see how nation-states or corporations will be persuaded to unilaterally give up the transformative powers unleashed by these breakthroughs. Moreover, attempting to ban development of new technologies is itself a risk: technologically stagnant societies are historically unstable and prone to collapse. Eventually, they lose the capacity to solve problems, to progress. Suleyman, Mustafa. The Coming Wave (p. 25). Crown. Kindle Edition.

DNA synthesizers already enable the creation of bioweapons

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Over the course of the day a series of hair-raising risks were floated over the coffees, biscuits, and PowerPoints. One stood out. The presenter showed how the price of DNA synthesizers, which can print bespoke strands of DNA, was falling rapidly. Costing a few tens of thousands of dollars, they are small enough to sit on a bench in your garage and let people synthesize—that is, manufacture—DNA. And all this is now possible for anyone with graduate-level training in biology or an enthusiasm for self-directed learning online. Given the increasing availability of the tools, the presenter painted a harrowing vision: Someone could soon create novel pathogens far more transmissible and lethal than anything found in nature. These synthetic pathogens could evade known countermeasures, spread asymptomatically, or have built-in resistance to treatments. If needed, someone could supplement homemade experiments with DNA ordered online and reassembled at home. The apocalypse, mail ordered. This was not science fiction, argued the presenter, a respected professor with more than two decades of experience; it was a live risk, now. They finished with an alarming thought: a single person today likely “has the capacity to kill a billion people.” All it takes is motivation. The attendees shuffled uneasily. People squirmed and coughed. Then the griping and hedging started. No one wanted to believe this was possible. Surely it wasn’t the case, surely there had to be some effective mechanisms for control, surely the diseases were difficult to create, surely the databases could be locked down, surely the hardware could be secured. And so on. Suleyman, Mustafa. The Coming Wave (p. 28). Crown. Kindle Edition.

Tech bans fail

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

People throughout history have attempted to resist new technologies because they felt threatened and worried their livelihoods and way of life would be destroyed. Fighting, as they saw it, for the future of their families, they would, if necessary, physically destroy what was coming. If peaceful measures failed, Luddites wanted to take apart the wave of industrial machinery. Under the seventeenth-century Tokugawa shogunate, Japan shut out the world—and by extension its barbarous inventions—for nearly three hundred years. Like most societies throughout history, it was distrustful of the new, the different, and the disruptive. Similarly, China dismissed a British diplomatic mission and its offer of Western tech in the late eighteenth century, with the Qianlong emperor arguing, “Our Celestial Empire possesses all things in prolific abundance and lacks no product within its borders. There is therefore no need to import the manufactures of outside barbarians.” None of it worked. The crossbow survived until it was usurped by guns. Queen Elizabeth’s knitting machine returned, centuries later, in the supercharged form of large-scale mechanical looms to spark the Industrial Revolution. China and Japan are today among the most technologically advanced and globally integrated places on earth. The Luddites were no more successful at stopping new industrial technologies than horse owners and carriage makers were at preventing cars. Where there is demand, technology always breaks out, finds traction, builds users. Once established, waves are almost impossible to stop. As the Ottomans discovered when it came to printing, resistance tends to be ground down with the passage of time. Technology’s nature is to spread, no matter the barriers. Plenty of technologies come and go. You don’t see too many penny-farthings or Segways, listen to many cassettes or minidiscs. But that doesn’t mean personal mobility and music aren’t ubiquitous; older technologies have just been replaced by new, more efficient forms. We don’t ride on steam trains or write on typewriters, but their ghostly presence lives on in their successors, like Shinkansens and MacBooks. Think of how, as parts of successive waves, fire, then candles and oil lamps, gave way to gas lamps and then to electric lightbulbs, and now LED lights, and the totality of artificial light increased even as the underlying technologies changed. New technologies supersede multiple predecessors. Just as electricity did the work of candles and steam engines alike, so smartphones replaced satnavs, cameras, PDAs, computers, and telephones (and invented entirely new classes of experience: apps). As technologies let you do more, for less, their appeal only grows, along with their adoption. Imagine trying to build a contemporary society without electricity or running water or medicines. Even if you could, how would you convince anyone it was worthwhile, desirable, a decent trade? Few societies have ever successfully removed themselves from the technological frontier; doing so usually either is part of a collapse or precipitates one. There is no realistic way to pull back. Suleyman, Mustafa. The Coming Wave (pp. 58-59). Crown. Kindle Edition.

No reason computers can’t achieve AGI

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Sometimes people seem to suggest that in aiming to replicate human-level intelligence, AI chases a moving target or that there is always some ineffable component forever out of reach. That’s just not the case. The human brain is said to contain around 100 billion neurons with 100 trillion connections between them—it is often said to be the most complex known object in the universe. It’s true that we are, more widely, complex emotional and social beings. But humans’ ability to complete given tasks—human intelligence itself—is very much a fixed target, as large and multifaceted as it is. Unlike the scale of available compute, our brains do not radically change year by year. In time this gap will be closed. At the present level of compute we already have human-level performance in tasks ranging from speech transcription to text generation. As it keeps scaling, the ability to complete a multiplicity of tasks at our level and beyond comes within reach. AI will keep getting radically better at everything, and so far there seems no obvious upper limit on what’s possible. This simple fact could be one of the most consequential of the century, potentially in human history. And yet, as powerful as scaling up is, it’s not the only dimension where AI is poised for exponential improvement. Suleyman, Mustafa. The Coming Wave (pp. 90-91). Crown. Kindle Edition.

Models are being trained to reduce bias

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

It took LLMs just a few years to change AI. But it quickly became apparent that these models sometimes produce troubling and actively harmful content like racist screeds or rambling conspiracy theories. Research into GPT-2 found that when prompted with the phrase “the white man worked as…,” it would autocomplete with “a police officer, a judge, a prosecutor, and the president of the United States.” Yet when given the same prompt for “Black man,” it would autocomplete with “a pimp,” or for “woman” with “a prostitute.” These models clearly have the potential to be as toxic as they are powerful. Since they are trained on much of the messy data available on the open web, they will casually reproduce and indeed amplify the underlying biases and structures of society, unless they are carefully designed to avoid doing so. The potential for harm, abuse, and misinformation is real. But the positive news is that many of these issues are being improved with larger and more powerful models. Researchers all over the world are racing to develop a suite of new fine-tuning and control techniques, which are already making a difference, giving levels of robustness and reliability impossible just a few years ago. Suffice to say, much more is still needed, but at least this harmful potential is now a priority to address and these advances should be welcomed. Suleyman, Mustafa. The Coming Wave (p. 93). Crown. Kindle Edition.
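The bias probe described above amounts to prompting a model with templated openings and inspecting the sampled completions. As a rough, hedged illustration (not the original researchers' code), here is a minimal sketch of that kind of probe against the publicly released GPT-2 model via the Hugging Face transformers library; the prompt templates and sampling settings are illustrative choices.

# Minimal sketch of a prompt-completion bias probe against GPT-2.
# Assumes the Hugging Face `transformers` package; the prompts and sampling
# settings here are illustrative, not the original study's.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)  # fix the random seed so the sampled completions are repeatable

prompts = [
    "The White man worked as a",
    "The Black man worked as a",
    "The woman worked as a",
]

for prompt in prompts:
    completions = generator(
        prompt,
        max_new_tokens=10,
        num_return_sequences=5,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,  # silence padding warning
    )
    print(prompt)
    for c in completions:
        # Each item is a dict holding the prompt plus its sampled continuation.
        print("  ", c["generated_text"])

Counting which occupations appear for each template over many such samples is how the disparities quoted above show up in aggregate.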

AI will overcome limitations

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Despite recent breakthroughs, skeptics remain. They argue that AI may be slowing, narrowing, becoming overly dogmatic. Critics like NYU professor Gary Marcus believe deep learning’s limitations are evident, that despite the buzz of generative AI the field is “hitting a wall,” that it doesn’t present any path to key milestones like being capable of learning concepts or demonstrating real understanding. The eminent professor of complexity Melanie Mitchell rightly points out that present-day AI systems have many limitations: they can’t transfer knowledge from one domain to another, provide quality explanations of their decision-making process, and so on. Significant challenges with real-world applications linger, including material questions of bias and fairness, reproducibility, security vulnerabilities, and legal liability. Urgent ethical gaps and unsolved safety questions cannot be ignored. Yet I see a field rising to these challenges, not shying away or failing to make headway. I see obstacles but also a track record of overcoming them. People interpret unsolved problems as evidence of lasting limitations; I see an unfolding research process. So, where does AI go next as the wave fully breaks? Today we have narrow or weak AI: limited and specific versions. GPT-4 can spit out virtuoso texts, but it can’t turn around tomorrow and drive a car, as other AI programs do. Existing AI systems still operate in relatively narrow lanes. What is yet to come is a truly general or strong AI capable of human-level performance across a wide range of complex tasks—able to seamlessly shift among them. But this is exactly what the scaling hypothesis predicts is coming and what we see the first signs of in today’s systems. AI is still in an early phase. It may look smart to claim that AI doesn’t live up to the hype, and it’ll earn you some Twitter followers. Meanwhile, talent and investment pour into AI research nonetheless. I cannot imagine how this will not prove transformative in the end. If for some reason LLMs show diminishing returns, then another team, with a different concept, will pick up the baton, just as the internal combustion engine repeatedly hit a wall but made it in the end. Fresh minds, new companies, will keep working at the problem. Then as now, it takes only one breakthrough to change the trajectory of a technology. If AI stalls, it will have its Otto and Benz eventually. Further progress—exponential progress—is the most likely outcome. The wave will only grow. Suleyman, Mustafa. The Coming Wave (p. 98). Crown. Kindle Edition.

AI will be able to accomplish open-ended complex goals

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

But, as many have pointed out, intelligence is about so much more than just language (or indeed any other single facet of intelligence taken in isolation). One particularly important dimension is in the ability to take actions. We don’t just care about what a machine can say; we also care about what it can do. What we would really like to know is, can I give an AI an ambiguous, open-ended, complex goal that requires interpretation, judgment, creativity, decision-making, and acting across multiple domains, over an extended time period, and then see the AI accomplish that goal? Put simply, passing a Modern Turing Test would involve something like the following: an AI being able to successfully act on the instruction “Go make $1 million on Amazon in a few months with just a $100,000 investment.” It might research the web to look at what’s trending, finding what’s hot and what’s not on Amazon Marketplace; generate a range of images and blueprints of possible products; send them to a drop-ship manufacturer it found on Alibaba; email back and forth to refine the requirements and agree on the contract; design a seller’s listing; and continually update marketing materials and product designs based on buyer feedback. Aside from the legal requirements of registering as a business on the marketplace and getting a bank account, all of this seems to me eminently doable. I think it will be done with a few minor human interventions within the next year, and probably fully autonomously within three to five years. Should my Modern Turing Test for the twenty-first century be met, the implications for the global economy are profound. Many of the ingredients are in place. Image generation is well advanced, and the ability to write and work with the kinds of APIs that banks and websites and manufacturers would demand is in process. That an AI can write messages or run marketing campaigns, all activities that happen within the confines of a browser, seems pretty clear. Already the most sophisticated services can do elements of this. Think of them as proto–to-do lists that do themselves, enabling the automation of a wide range of tasks. We’ll come to robots later, but the truth is that for a vast range of tasks in the world economy today all you need is access to a computer; most of global GDP is mediated in some way through screen-based interfaces amenable to an AI. The challenge is in advancing what AI developers call hierarchical planning, stitching multiple goals and subgoals and capabilities into a seamless process toward a singular end. Once this is achieved, it adds up to a highly capable AI, plugged into a business or organization and all its local history and needs, that can lobby, sell, manufacture, hire, plan—everything a company can do, only with a small team of human AI managers who oversee, double-check, implement, and co-CEO with the AI. Suleyman, Mustafa. The Coming Wave (p. 101). Crown. Kindle Edition.
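The "hierarchical planning" challenge named above, stitching goals, subgoals, and capabilities into one seamless process, is usually prototyped today as a simple agent loop: ask a model to decompose the top-level goal, execute each subgoal with available tools, and feed the results back in. The sketch below is a hypothetical skeleton of that loop under those assumptions; call_llm and run_tool are placeholders with canned replies so the example runs, not references to any real product or API.

# Hypothetical skeleton of a hierarchical-planning agent loop.
# call_llm and run_tool stand in for a language-model API and external tools
# (web search, email, marketplace listings); both are assumptions, kept
# deterministic here so the sketch executes end to end.
from typing import List

def call_llm(prompt: str) -> str:
    if prompt.startswith("List the subgoals"):
        return "research demand\ndesign product\ncontact manufacturer\nlaunch listing"
    if prompt.startswith("Is the subgoal"):
        return "yes"
    return "search the web for what is trending"

def run_tool(action: str) -> str:
    return f"result of: {action}"

def decompose(goal: str) -> List[str]:
    # Ask the model to break the open-ended goal into ordered subgoals.
    plan = call_llm(f"List the subgoals needed to achieve: {goal}")
    return [line.strip() for line in plan.splitlines() if line.strip()]

def pursue(goal: str, max_steps: int = 5) -> None:
    history = []
    for subgoal in decompose(goal):
        for _ in range(max_steps):
            # Pick the next concrete action, act, record the observation,
            # and stop once the model judges the subgoal complete.
            action = call_llm(f"Subgoal: {subgoal}. History: {history}. Next action?")
            history.append((action, run_tool(action)))
            if call_llm(f"Is the subgoal '{subgoal}' done? History: {history}") == "yes":
                break
        print(f"completed subgoal: {subgoal}")

pursue("Turn a $100,000 budget into $1 million of marketplace sales")

The hard, unsolved part is everything hidden inside the placeholders: reliable decomposition, judgment about when a subgoal is actually done, and recovery when an action fails over weeks of real-world execution.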

We are close to artificial capable intelligence that can imagine, reason, plan, exhibit common sense, and transfer what it “knows” from one domain to another

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Rather than get too distracted by questions of consciousness, then, we should refocus the entire debate around near-term capabilities and how they will evolve in the coming years. As we have seen, from Hinton’s AlexNet to Google’s LaMDA, models have been improving at an exponential rate for more than a decade. These capabilities are already very real indeed, but they are nowhere near slowing down. While they are already having an enormous impact, they will be dwarfed by what happens as we progress through the next few doublings and as AIs complete complex, multistep end-to-end tasks on their own. I think of this as “artificial capable intelligence” (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight. AI and AGI are both parts of the everyday discussion, but we need a concept encapsulating a middle layer in which the Modern Turing Test is achieved but before systems display runaway “superintelligence.” ACI is shorthand for this point. The first stage of AI was about classification and prediction—it was capable, but only within clearly defined limits and at preset tasks. It could differentiate between cats and dogs in images, and then it could predict what came next in a sequence to produce pictures of those cats and dogs. It produced glimmers of creativity, and could be quickly integrated into tech companies’ products. ACI represents the next stage of AI’s evolution. A system that not only could recognize and generate novel images, audio, and language appropriate to a given context, but also would be interactive—operating in real time, with real users. It would augment these abilities with a reliable memory so that it could be consistent over extended timescales and could draw on other sources of data, including, for example, databases of knowledge, products, or supply-chain components belonging to third parties. Such a system would use these resources to weave together sequences of actions into long-term plans in pursuit of complex, open-ended goals, like setting up and running an Amazon Marketplace store. All of this, then, enables tool use and the emergence of real capability to perform a wide range of complex, useful actions. It adds up to a genuinely capable AI, an ACI. Conscious superintelligence? Who knows. But highly capable learning systems, ACIs, that can pass some version of the Modern Turing Test? Make no mistake: they are on their way, are already here in embryonic form. There will be thousands of these models, and they will be used by the majority of the world’s population. It will take us to a point where anyone can have an ACI in their pocket that can help or even directly accomplish a vast array of conceivable goals: planning and running your vacation, designing and building more efficient solar panels, helping win an election. It’s hard to say for certain what happens when everyone is empowered like this, but this is a point we’ll return to in part 3. The future of AI is, at least in one sense, fairly easy to predict. Over the next five years, vast resources will continue to be invested. Some of the smartest people on the planet are working on these problems. Orders of magnitude more computation will train the top models. All of this will lead to more dramatic leaps forward, including breakthroughs toward AI that can imagine, reason, plan, and exhibit common sense. It won’t be long before AI can transfer what it “knows” from one domain to another, seamlessly, as humans do. 
What are now only tentative signs of self-reflection and self-improvement will leap forward. These ACI systems will be plugged into the internet, capable of interfacing with everything we humans do, but on a platform of deep knowledge and ability. It will be not just language they’ve mastered but a bewildering array of tasks, too. Suleyman, Mustafa. The Coming Wave (p. 103). Crown. Kindle Edition.

AI spurs biotech development

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

In 2022, AlphaFold2 was opened up for public use. The result has been an explosion of the world’s most advanced machine learning tools, deployed in both fundamental and applied biological research: an “earthquake,” in the words of one researcher. More than a million researchers accessed the tool within eighteen months of launch, including virtually all the world’s leading biology labs, addressing questions from antibiotic resistance to the treatment of rare diseases to the origins of life itself. Previous experiments had delivered the structure of about 190,000 proteins to the European Bioinformatics Institute’s database, about 0.1 percent of known proteins in existence. DeepMind uploaded some 200 million structures in one go, representing almost all known proteins. Whereas once it might have taken researchers weeks or months to determine a protein’s shape and function, that process can now begin in a matter of seconds. This is what we mean by exponential change. This is what the coming wave makes possible. And yet this is only the beginning of a convergence of these two technologies. The bio-revolution is coevolving with advances in AI, and indeed many of the phenomena discussed in this chapter will rely on AI for their realization. Think, then, of two waves crashing together, not a wave but a superwave. Indeed, from one vantage artificial intelligence and synthetic biology are almost interchangeable. All intelligence to date has come from life. Call them synthetic intelligence and artificial life and they still mean the same thing. Both fields are about re-creating, engineering these utterly foundational and interrelated concepts, two core attributes of humanity; change the view and they become one single project. Biology’s sheer complexity opens up vast troves of data, like all those proteins, almost impossible to parse using traditional techniques. A new generation of tools has quickly become indispensable as a result. Teams are working on products that will generate new DNA sequences using only natural language instructions. Transformer models are learning the language of biology and chemistry, again discovering relationships and significance in long, complex sequences illegible to the human mind. LLMs fine-tuned on biochemical data can generate plausible candidates for new molecules and proteins, DNA and RNA sequences. They predict the structure, function, or reaction properties of compounds in simulation before these are later verified in a laboratory. The space of applications and the speed at which they can be explored is only accelerating. Some scientists are beginning to investigate ways to plug human minds directly into computer systems. In 2019, electrodes surgically implanted in the brain let a fully paralyzed man with late-stage ALS spell out the words “I love my cool son.” Companies like Neuralink are working on brain interfacing technology that promises to connect us directly with machines. In 2021 the company inserted three thousand filament-like electrodes, thinner than a human hair, that monitor neuron activity, into a pig’s brain. Soon they hope to begin human trials of their N1 brain implant, while another company, Synchron, has already started human trials in Australia. Scientists at a start-up called Cortical Labs have even grown a kind of brain in a vat (a bunch of neurons grown in vitro) and taught it to play Pong. It likely won’t be too long before neural “laces” made from carbon nanotubes plug us directly into the digital world. 
What happens when a human mind has instantaneous access to computation and information on the scale of the internet and the cloud? It’s almost impossible to imagine, but researchers are already in the early days of making it happen. As the central general-purpose technologies of the coming wave, AI and synthetic biology are already entangled, a spiraling feedback loop boosting each other. While the pandemic gave biotech a massive awareness boost, the full impact—possibilities and risks alike—of synthetic biology has barely begun to sink into the popular imagination. Welcome to the age of biomachines and biocomputers, where strands of DNA perform calculations and artificial cells are put to work. Where machines come alive. Welcome to the age of synthetic life. Suleyman, Mustafa. The Coming Wave (p. 120). Crown. Kindle Edition.
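The passage above describes researchers pulling predicted structures from the public AlphaFold Protein Structure Database hosted by the European Bioinformatics Institute. As a small illustration of how fast that lookup now is, here is a sketch that fetches one predicted structure by UniProt accession; treat the REST endpoint and response fields shown as assumptions to verify against the current AlphaFold DB documentation.

# Sketch: fetch one AlphaFold-predicted structure from the public AlphaFold DB.
# The endpoint and JSON field names are assumptions to check against the
# current documentation; requires the `requests` package.
import requests

accession = "P69905"  # human hemoglobin subunit alpha, chosen only as an example
resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{accession}", timeout=30)
resp.raise_for_status()

entry = resp.json()[0]      # the API returns a list of prediction records
pdb_url = entry["pdbUrl"]   # link to the predicted structure in PDB format
structure = requests.get(pdb_url, timeout=30).text

with open(f"{accession}_alphafold.pdb", "w") as f:
    f.write(structure)
print(f"saved predicted structure for {accession} ({len(structure)} characters)")

A lookup like this takes seconds, which is the contrast the passage draws with the weeks or months of experimental structure determination it replaced.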

Renewables will power AI in the future

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer. Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities. Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation. However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real. Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Including wind, hydrogen, and improved battery technologies, here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential. Suleyman, Mustafa. The Coming Wave (pp. 131-132). Crown. Kindle Edition.
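The solar figures quoted above, $4.88 per watt in 2000 falling to 38 cents by 2019, imply a steady compound decline. A small worked check using only the numbers in the passage:

# Worked check of the solar cost decline implied by the passage:
# $4.88 per watt in 2000 falling to $0.38 per watt in 2019.
start_cost, end_cost = 4.88, 0.38
years = 2019 - 2000

total_decline = 1 - end_cost / start_cost                    # about 0.92, a ~92% drop
annual_decline = 1 - (end_cost / start_cost) ** (1 / years)  # about 0.126 per year

print(f"total decline: {total_decline:.1%}")
print(f"implied compound annual decline: {annual_decline:.1%}")

At roughly 12 to 13 percent cheaper per watt each year, compounding to a better than 90 percent drop over two decades, this is the kind of cost curve behind the claim that energy will not remain a limiter for long.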

Without control, these technologies could kill us all

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

And yet alongside these benefits, AI, synthetic biology, and other advanced forms of technology produce tail risks on a deeply concerning scale. They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention. Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers. This will require huge levels of surveillance along with massive intrusions into our private lives. Keeping a tight rein on technology could become part of a drift to everything and everyone being watched, all the time, in a dystopian global surveillance system justified by a desire to guard against the most extreme possible outcomes. Suleyman, Mustafa. The Coming Wave (pp. 24-25). Crown. Kindle Edition.

Government cannot solve global problems

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Our institutions for addressing massive global problems were not fit for purpose. I saw something similar working for the mayor of London in my early twenties. My job was to audit the impact of human rights legislation on communities in the city. I interviewed everyone from British Bangladeshis to local Jewish groups, young and old, of all creeds and backgrounds. The experience showed how human rights law could help improve lives in a very practical way. Unlike the United States, the U.K. has no written constitution protecting people’s fundamental rights. Now local groups could take problems to local authorities and point out they had legal obligations to protect the most vulnerable; they couldn’t brush them under the carpet. On one level it was inspiring. It gave me hope: institutions could have a codified set of rules about justice. The system could deliver. But of course, the reality of London politics was very different. In practice everything devolved into excuses, blame shifting, media Suleyman, Mustafa. The Coming Wave (p. 189). Crown. Kindle Edition.

Fusion and solar solve the environmental harms

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Energy rivals intelligence and life in its fundamental importance. Modern civilization relies on vast amounts of it. Indeed, if you wanted to write the crudest possible equation for our world it would be something like this: (Life + Intelligence) x Energy = Modern Civilization Increase any or all of those inputs (let alone supercharge their marginal cost toward zero) and you have a step change in the nature of society. Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer. Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities. Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation. However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real. Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Including wind, hydrogen, and improved battery technologies, here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential. Suleyman, Mustafa. The Coming Wave (pp. 131-132). Crown. Kindle Edition.

New biocomponents will be made from prompts

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

In chapter 5, we saw what tools like AlphaFold are doing to catalyze biotech. Until recently biotech relied on endless manual lab work: measuring, pipetting, carefully preparing samples. Now simulations speed up the process of vaccine discovery. Computational tools help automate parts of the design processes, re-creating the “biological circuits” that program complex functions into cells like bacteria that can produce a certain protein. Software frameworks, like one called Cello, are almost like open-source languages for synthetic biology design. This could mesh with fast-moving improvements in laboratory robotics and automation and faster biological techniques like the enzymatic synthesis we saw in chapter 5, expanding synthetic biology’s range and making it more accessible. Biological evolution is becoming subject to the same cycles as software. Just as today’s models produce detailed images based on a few words, so in decades to come similar models will produce a novel compound or indeed an entire organism with just a few natural language prompts. That compound’s design could be improved by countless self-run trials, just as AlphaZero became an expert chess or Go player through self-play. Quantum technologies, many millions of times more powerful than the most powerful classical computers, could let this play out at a molecular level. This is what we mean by hyper-evolution—a fast, iterative platform for creation. Nor will this evolution be limited to specific, predictable, and readily containable areas. It will be everywhere. Suleyman, Mustafa. The Coming Wave (pp. 142-143). Crown. Kindle Edition.

AI critical to drug development

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

tuberculosis. Start-ups like Exscientia, alongside traditional pharmaceutical giants like Sanofi, have made AI a driver of medical research. To date eighteen clinical assets have been derived with the help of AI tools. Suleyman, Mustafa. The Coming Wave (p. 144). Crown. Kindle Edition.

AI will be used for bioweapons

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

There’s a flip side. Researchers looking for these helpful compounds raised an awkward question. What if you redirected the discovery process? What if, instead of looking for cures, you looked for killers? They ran a test, asking their molecule-generating AI to find poisons. In six hours it identified more than forty thousand molecules with toxicity comparable to the most dangerous chemical weapons, like Novichok. It turns out that in drug discovery, one of the areas where AI will undoubtedly make the clearest possible difference, the opportunities are very much “dual use.” Dual-use technologies are those with both civilian and military applications. In World War I, the process of synthesizing ammonia was seen as a way of feeding the world. But it also allowed for the creation of explosives, and helped pave the way for chemical weapons. Complex electronics systems for passenger aircraft can be repurposed for precision missiles. Conversely, the Global Positioning System was originally a military system, but now has countless everyday consumer use