A.I. Daily

Fully autonomous weapons that can be created by anyone will collapse international stability

Tech Central, April 29, 2024. AI faces its Oppenheimer moment. https://techcentral.co.za/ai-faces-its-oppenheimer-moment/243742/

As autonomous weapons systems rapidly proliferate, including across battlefields in Ukraine and Gaza, algorithms and unmanned aerial vehicles are already helping military planners decide whether or not to hit targets. Soon that decision could be outsourced entirely to the machines. “This is the Oppenheimer moment of our generation,” said Austrian foreign minister Alexander Schallenberg, referencing J Robert Oppenheimer, who helped invent the atomic bomb in 1945 before going on to advocate for controls over the spread of nuclear arms. Civilian, military and technology officials from more than 100 countries convened on Monday in Vienna to discuss how their economies can control the merger of AI with military technologies — two sectors that have recently animated investors, helping push stock valuations to historic highs. Spreading global conflict combined with financial incentives for companies to promote AI adds to the challenge of controlling killer robots, according to Jaan Tallinn, an early investor in Google’s AI platform DeepMind Technologies. “Silicon Valley’s incentives might not be aligned with the rest of humanity,” Tallinn said. Governments around the world have taken steps to collaborate with companies integrating AI tools into defence. The Pentagon is pouring millions of dollars into AI start-ups. The European Union last week paid Thales to create an imagery database to help evaluate battlefield targets. ‘Lavender’: Tel Aviv-based +972 Magazine reported this month that Israel was using an AI programme called “Lavender” to come up with assassination targets. The report, which Israel has disputed, said the AI system had played a “central role in the unprecedented bombing of Palestinians”. “The future of slaughter bots is here,” said Anthony Aguirre, a physicist who predicted the trajectory the technology would take in a short 2017 film seen by more than 1.6 million viewers. “We need an arms control treaty negotiated by the United Nations General Assembly.” But advocates of diplomatic solutions are likely to be frustrated, at least in the short term, according to Alexander Kmentt, Austria’s top disarmament official and the architect of this week’s conference. “A classical approach to arms control doesn’t work because we’re not talking about a single weapons system but a combination of dual-use technologies,” Kmentt said in an interview. Rather than striking a new “magnum opus” treaty, Kmentt implied that countries may be forced to muddle through with the legal tools already at their disposal. Enforcing export controls and humanitarian laws could help keep the spread of AI weapons systems in check, he said. In the longer run, after the technology becomes accessible to non-state actors and potentially to terrorists, countries will be forced into writing new rules, predicted Arnoldo André Tinoco, Costa Rica’s foreign minister. “The easy availability of autonomous weapons removes limitations that ensured only a few could enter the arms race,” he said. “Now students with a 3D printer and basic programming knowledge can make drones with the capacity to cause widespread casualties. Autonomous weapons systems have forever changed the concept of international stability.” — Jonathan Tirone, (c) 2024 Bloomberg LP

US needs generative AI to analyze public data for threats and to stop enemies from penetrating defenses

Ryan Heath, 4-16, 24, Axios, “Top secret” is no longer the key to good intel in an AI world: report, https://www.axios.com/2024/04/16/ai-top-secret-intelligence

AI’s advent means the U.S. intelligence community must revamp its traditional way of doing business, according to a new report from an Eric Schmidt-backed national security think tank. The big picture: Today’s intelligence systems cannot keep pace with the explosion of data now available, requiring “rapid” adoption of generative AI to keep an intelligence advantage over rival powers. The U.S. intelligence community “risks surprise, intelligence failure, and even an attrition of its importance” unless it embraces AI’s capacity to process floods of data, according to the report from the Special Competitive Studies Project. The federal government needs to think more in terms of “national competitiveness” than “national security,” given the wider range of technologies now used to attack U.S. interests. The U.S. needs the most advanced AI because there is an “accelerating race” for insight from real-time data to protect U.S. interests, rather than a reliance on a limited set of “secret” information, per SCSP president and CEO Ylli Bajraktari. Most of the current data flood arrives unstructured, and from publicly available sources, rather than in carefully drafted classified memos. Context: The speed of generative AI’s development “far exceeds that of any past era” of technology transformation, according to the report. Threat level: Generative AI provides adversaries with “new avenues to penetrate the United States’ defenses, spread disinformation, and undermine the intelligence community’s ability to accurately perceive their intentions and capabilities.” The tools also “democratize intelligence capabilities,” increasing the number of countries and organizations that can credibly attempt to mess with U.S. interests. What they found: The federal government needs to build more links with the developers of cutting-edge AI and adopt their tools to “reinvent how intelligence is collected, analyzed, produced, disseminated, and evaluated.” Intelligence agencies would benefit from an “open source entity” within government dedicated to accelerating “use of openly- and commercially-available data.” Reality check: New digital technologies and cyber threats have been changing the business of intelligence gathering and national defense for decades. In the past it has taken a crisis or attack — such as Sputnik or 9/11 — to prompt major changes in intelligence gathering. But technology revolutions do prompt organizational innovations, from the creation of the National Imagery and Mapping Agency in 1996 (now the National Geospatial-Intelligence Agency) to the CIA creating a Directorate of Digital Innovation in 2015.

AI triggers civilization collapse

Yomiuri Shimbun Holdings & Nippon Telegraph and Telephone Corporation, April 8, 2024, https://info.yomiuri.co.jp/news/yomi_NTTproposalonAI_en.pdf, Joint Proposal on Shaping Generative AI

Because AI is provided via the internet, it can in principle be used around the world. Challenges: Humans cannot fully control this technology ・ While the accuracy of results cannot be fully guaranteed, it is easy for people to use the technology and understand its output. This often leads to situations in which generative AI “lies with confidence” and people are “easily fooled.” ・ Challenges include hallucinations, bias and toxicity, retraining through input data, infringement of rights through data scraping and the difficulty of judging created products. ・ Journalism, research in academia and other sources have provided accurate and valuable information by thoroughly examining what information is correct, allowing them to receive some form of compensation or reward. Such incentives for providing and distributing information, which have ensured authenticity and trustworthiness, may collapse. A need to respond: Generative AI must be controlled both technologically and legally ・ If generative AI is allowed to go unchecked, trust in society as a whole may be damaged as people grow distrustful of one another and incentives are lost for guaranteeing authenticity and trustworthiness. There is a concern that, in the worst-case scenario, democracy and social order could collapse, resulting in wars. ・ Meanwhile, AI technology itself is already indispensable to society. If AI technology is dismissed as a whole as untrustworthy due to out-of-control generative AI, humanity’s productivity may decline. ・ Based on the points laid out in the following sections, measures must be realized to balance the control and use of generative AI from both technological and institutional perspectives, and to make the technology a suitable tool for society. Point 1: Confronting the out-of-control relationship between AI and the attention economy ・ Any computer’s basic structure, or architecture, including that of generative AI, positions the individual as the basic unit of user. However, due to computers’ tendency to be overly conscious of individuals, there are such problems as unsound information spaces and damage to individual dignity due to the rise of the attention economy. ・ There are concerns that the unstable nature of generative AI is likely to amplify the above-mentioned problems further. In other words, it cannot be denied that there is a risk of worsening social unrest due to a combination of AI and the attention economy, with the attention economy accelerated by generative AI. To understand such issues properly, it is important to review our views on humanity and society and critically consider what form desirable technology should take. ・ Meanwhile, the out-of-control relationship between AI and the attention economy has already damaged autonomy and dignity, which are essential values that allow individuals in our society to be free. These values must be restored quickly. In doing so, autonomous liberty should not be abandoned, but rather an optimal solution should be sought based on human liberty and dignity, verifying their rationality. In the process, concepts such as information health are expected to be established. Point 2: Legal restraints to ensure discussion spaces to protect liberty and dignity, and the introduction of technology to cope with related issues ・ Ensuring spaces for discussion in which human liberty and dignity are maintained has not only superficial economic value, but also a special value in terms of supporting social stability.
The out-of-control relationship between AI and the attention economy is a threat to these values. If generative AI develops further and is left unchecked as it is now, there is no denying that the distribution of malicious information could drive out good information and cause social unrest. ・ If we continue to be unable to sufficiently regulate generative AI — or if we at least allow the unconditional application of such technology to elections and security — it could cause enormous and irreversible damage, as the effects of the technology will not be controllable in society. This implies a need for rigid restrictions by law (hard laws that are enforceable) on the usage of generative AI in these areas. ・ In the area of education, especially compulsory education for those age groups in which students’ ability to make appropriate decisions has not fully matured, careful measures should be taken after considering both the advantages and disadvantages of AI usage.

AI usage undermining artists

REBECCA KLAR – 04/02/24, The Hill, Billie Eilish, Nicki Minaj among artists warning against AI use in music, https://thehill.com/policy/technology/4569830-billie-eilish-nicki-minaj-among-artists-warning-against-ai-use-in-music/

More than 200 artists — including Billie Eilish, Nicki Minaj and the Jonas Brothers — are calling for tech companies, artificial intelligence (AI) developers, and digital music services to cease the use of AI over concerns of its impact on artists and songwriters, according to an open letter published Tuesday. The artists warned that the unregulated use of AI in the music industry could “sabotage creativity and undermine artists, songwriters, musicians and rightsholders,” according to the letter organized by the Artists Rights Alliance, a nonprofit artist-led education and advocacy organization. “Make no mistake: we believe that, when used responsibly, AI has enormous potential to advance human creativity and in a manner that enables the development and growth of new and exciting experiences for music fans everywhere,” the letter stated. They added that “unfortunately, some platforms and developers” are using AI in ways that could have detrimental impacts on artists. “When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods,” the letter added. It slammed “some of the biggest and most powerful companies” as using artists’ work without their permission to train AI models and create AI-generated sounds that would “substantially dilute the royalty pools” paid to artists. The letter calls for AI developers, technology companies, and digital music services to pledge not to develop or deploy AI music-generation technology, content or tools that would undermine or replace the work of human artists or songwriters, or deny them fair compensation. Other artists that signed the letter include Katy Perry, Zayn Malik, Noah Kahan, Imagine Dragons and the estates of Bob Marley and Frank Sinatra. Concerns about how AI is impacting artists have amplified over the past year as the popularity and sophistication of AI tools have rapidly increased. In Hollywood, both the SAG-AFTRA union, which represents actors, and the Writers Guild of America, which represents writers, fought and won protections from AI for their unions during contract negotiations last year.

AI causes massive energy consumption

Katherine Blunt, 3-24, 24, Wall Street Journal, Big Tech’s Latest Obsession Is Finding Enough Energy, https://www.wsj.com/business/energy-oil/big-techs-latest-obsession-is-finding-enough-energy-f00055b2

HOUSTON—Every March, thousands of executives take over a downtown hotel here to reach oil and gas deals and haggle over plans to tackle climate change. This year, the dominant theme of the energy industry’s flagship conference was a new one: artificial intelligence. Tech companies roamed the hotel’s halls in search of utility executives and other power providers. More than 20 executives from Amazon and Microsoft spoke on panels. The inescapable topic—and the cause of equal parts anxiety and excitement—was AI’s insatiable appetite for electricity. It isn’t clear just how much electricity will be required to power an exponential increase in data centers worldwide. But most everyone agreed the data centers needed to advance AI will require so much power they could strain the power grid and stymie the transition to cleaner energy sources. Bill Vass, vice president of engineering at Amazon Web Services, said the world adds a new data center every three days. Microsoft co-founder Bill Gates told the conference that electricity is the key input for deciding whether a data center will be profitable and that the amount of power AI will consume is staggering. “You go, ‘Oh, my God, this is going to be incredible,’” said Gates. Though there was no dispute at the conference, called CERAWeek by S&P Global, that AI requires massive amounts of electricity, what was less clear was where it is going to come from. Former U.S. Energy Secretary Ernest Moniz said the size of new and proposed data centers to power AI has some utilities stumped as to how they are going to bring enough generation capacity online at a time when wind and solar farms are becoming more challenging to build. He said utilities will have to lean more heavily on natural gas, coal and nuclear plants, and perhaps support the construction of new gas plants to help meet spikes in demand. “We’re not going to build 100 gigawatts of new renewables in a few years. You’re kind of stuck,” he said.

China has an AI talent advantage

Mozur & Metz, 3-22, 24, Paul Mozur is the global technology correspondent for The Times, based in Taipei. Previously he wrote about technology and politics in Asia from Hong Kong, Shanghai and Seoul. Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology. New York Times, In One Key A.I. Metric, China Pulls Ahead of the U.S.: Talent, https://www.nytimes.com/2024/03/22/technology/china-ai-talent.html

When it comes to the artificial intelligence that powers chatbots like ChatGPT, China lags behind the United States. But when it comes to producing the scientists behind a new generation of humanoid technologies, China is pulling ahead. New research shows that China has by some metrics eclipsed the United States as the biggest producer of A.I. talent, with the country generating almost half the world’s top A.I. researchers. By contrast, about 18 percent come from U.S. undergraduate institutions, according to the study, from MacroPolo, a think tank run by the Paulson Institute, which promotes constructive ties between the United States and China. The findings show a jump for China, which produced about one-third of the world’s top talent three years earlier. The United States, by contrast, remained mostly the same. The research is based on the backgrounds of researchers whose papers were published at 2022’s Conference on Neural Information Processing Systems. NeurIPS, as it is known, is focused on advances in neural networks, which have anchored recent developments in generative A.I. The talent imbalance has been building for the better part of a decade. During much of the 2010s, the United States benefited as large numbers of China’s top minds moved to American universities to complete doctoral degrees. A majority of them stayed in the United States. But the research shows that trend has also begun to turn, with growing numbers of Chinese researchers staying in China. The shift comes as China and the United States jockey for primacy in A.I. — a technology that can potentially increase productivity, strengthen industries and drive innovation — turning the researchers into one of the most geopolitically important groups in the world. Generative A.I. has captured the tech industry in Silicon Valley and in China, causing a frenzy in funding and investment. The boom has been led by U.S. tech giants such as Google and start-ups like OpenAI. That could attract China’s researchers, though rising tensions between Beijing and Washington could also deter some, experts said. China has nurtured so much A.I. talent partly because it invested heavily in A.I. education. Since 2018, the country has added more than 2,000 undergraduate A.I. programs, with more than 300 at its most elite universities, said Damien Ma, the managing director of MacroPolo, though he noted the programs were not heavily focused on the technology that had driven breakthroughs by chatbots like ChatGPT. “A lot of the programs are about A.I. applications in industry and manufacturing, not so much the generative A.I. stuff that’s come to dominate the American A.I. industry at the moment,” he said. While the United States has pioneered breakthroughs in A.I., most recently with the uncanny humanlike abilities of chatbots, a significant portion of that work was done by researchers educated in China. Researchers originally from China now make up 38 percent of the top A.I. researchers working in the United States, with Americans making up 37 percent, according to the research. Three years earlier, those from China made up 27 percent of top talent working in the United States, compared with 31 percent from the United States. “The data shows just how critical Chinese-born researchers are to the United States for A.I. competitiveness,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies Chinese A.I. He added that the data seemed to show the United States was still attractive. “We’re the world leader in A.I.
because we continue to attract and retain talent from all over the world, but especially China,” he said. Pieter Abbeel, a founder of Covariant, an A.I. and robotics start-up, said working alongside Chinese researchers was taken for granted at U.S. companies and universities. In the past, U.S. defense officials were not too concerned about A.I. talent flows from China, partly because many of the biggest A.I. projects did not deal with classified data and partly because they reasoned that it was better to have the best minds available. That so much of the leading research in A.I. is published openly also held back worries. Despite bans introduced by the Trump administration that prohibit entry to the United States for students from some military-linked universities in China and a relative slowdown in the flow of Chinese students into the country during Covid, the research showed large numbers of the most promising A.I. minds continued coming to the United States to study. But this month, a Chinese citizen who was an engineer at Google was charged with trying to transfer A.I. technology — including critical microchip architecture — to a Beijing-based company that paid him in secret, according to a federal indictment. The substantial numbers of Chinese A.I. researchers working in the United States now present a conundrum for policymakers, who want to counter Chinese espionage while not discouraging the continued flow of top Chinese computer engineers into the United States, according to experts focused on American competitiveness. “Chinese scholars are almost leading the way in the A.I. field,” said Subbarao Kambhampati, a professor and researcher of A.I. at Arizona State University. If policymakers try to bar Chinese nationals from research in the United States, he said, they are “shooting themselves in the foot.” The track record of U.S. policymakers is mixed. A policy by the Trump administration aimed at curbing Chinese industrial espionage and intellectual property theft has since been criticized for errantly prosecuting a number of professors. Such programs, Chinese immigrants said, have encouraged some to stay in China. For now, the research showed, most Chinese who complete doctorates in the United States stay in the country, helping to make it the global center of the A.I. world. Even so, the U.S. lead has begun to slip, to hosting about 42 percent of the world’s top talent, down from about 59 percent three years ago, according to the research.

AGI by 2029 or sooner

Ray Kurzweil, March 12, 2024, https://www.youtube.com/watch?v=w4vrOUau2iY, Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He’s the author of numerous books, including the forthcoming title “The Singularity is Nearer.” Joe Rogan Experience #2117 – Ray Kurzweil

But it’s the first time that that has been done. It wasn’t as good then, was it? What are the capabilities now? Because now they can do some pretty extraordinary things. Yeah, it’s still not up to what humans can do, but it’s getting there. And it’s actually pleasant to listen to. We still have a while to go on art, both art and music and so on. Well, one of the main arguments against AI art comes from actual artists who are upset that essentially what they’re doing is, like, you could say, right, draw or create a painting in the style of Frank Frazetta, for instance. And what it would be is, they would take all of Frazetta’s work that he’s ever done, which is all documented on the internet, and then create an image that’s representative of that. So essentially, in one way or another, you’re kind of taking from the art, right, but it’s not quite as good. It will be as good. I mean, I think we’ll match human experience by 2029. That’s been my idea. It’s not as good yet. Which is the best image generator right now, Jamie? They really change almost from day to day right now. But, like, Midjourney was the most popular one at first, and then DALL-E, I think, is a really good one too. Midjourney’s graphics are incredibly impressive, incredibly impressive. I’ve seen some of the Midjourney stuff. It’s just mind-blowing. And not quite as good? Not as good. But boy, is it so much better than it was five years ago. That’s what’s scary. Yeah, it’s so quick. I mean, it’s never going to reach its limit. We’re not going to get to a point where, okay, this is how good it’s gonna be. It’s going to keep getting better. And what would that look like, if we can get to a certain point where it will far exceed what human creativity is capable of? Yes. I mean, when we reach the ability of humans, it’s not going to just match one human, it’s gonna match all humans. It’s gonna do everything that any human can do. If it’s playing a game, like Go, it’s gonna play it better than any human, right? Well, that’s already been proven, right? AI has invented moves that have now been implemented by humans, in a very complex game that they never thought AI was going to be able to do, because it requires so much creativity, right? Art, we’re not quite there, but we will be there. And by 2029, it will match any person. That’s it, 2029? That’s just a few years away. Well, I’m actually considered conservative; people think that will happen, like, next year or the year after.

We can get all the energy from the sun that we need within 10 years

Ray Kurzweil, March 12, 2024, https://www.youtube.com/watch?v=w4vrOUau2iY, Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He’s the author of numerous books, including the forthcoming title “The Singularity is Nearer.” Joe Rogan Experience #2117 – Ray Kurzweil

And in fact, Stanford had a conference; they invited several hundred people from around the world to talk about my prediction. And people came in, and people thought that this would happen, but not by 2029; it would take 100 years. Yeah, I’ve heard that. I’ve heard that. But I think people are amending those. Is it because human beings have a very difficult time grasping the concept of exponential growth? That’s exactly right. And in fact, economists still have a linear view. And if you say, well, it’s gonna grow exponentially, yeah, but maybe 2% a year. It actually doubles in 14 years. And I brought a chart I can show you that really illustrates this. Is this chart available online, so we could show people? Yeah, it’s in the book. But is it available online, that chart, where Jamie can pull it up and someone could see it, just so the folks watching the podcast could see it too? I can just hold it up to the camera. Pull it up. The picture they sent, what’s it called? What’s the title of it? It says, Price Performance of Computation, 1939 to 2023. You have it? Okay, great, Jamie has it. Yeah, the climb is insane. Well, what’s interesting is that that’s an exponential curve, and a straight line represents exponential growth. And that’s an absolute straight line for 80 years. The very first point, this is the speed of computers, it was 0.000007 calculations per second per constant dollar. The last point is 35 billion calculations per second. So that’s a 20 quadrillion fold increase in those 80 years. But the speed with which it gained is actually the same throughout the entire 80 years, because if it was sometimes better and sometimes worse, this curve would bend up and down. It’s really very much a straight line. So the speed with which we increased it was the same regardless of the technologies, and the technology was radically different at the beginning versus the end. And yet it’s increased in speed exactly the same for 80 years. In fact, for the first 40 years, nobody even knew this was happening. So it’s not like somebody was in charge and saying, okay, next year we have to get to here, and people were trying to match that. We didn’t even know this was happening for 40 years. 40 years later, I noticed this, and for various reasons I predicted it would stay the same, the same speed increase each year, which it has; in fact, we just put in the last point like two weeks ago, and it’s exactly where it should be. So technology, and computation is certainly the prime form of technology, increases at the same speed. And this goes through war and peace. You might say, well, maybe it’s greater during war. No, it’s exactly the same; you can’t tell when there’s war or peace or anything else on here. It just matches from one type of technology to the next. And this is also true of other things, like, for example, getting energy from the sun. That’s also exponential. It’s also just like this. We now are getting about 1,000 times as much energy from the sun as we did 20 years ago. Because of the implementation of solar panels and the like? Yeah, the function of it increased exponentially as well. What I had understood was that there was a bottleneck in the technology as far as how much you could extract from the sun from those panels. No, not at all. No, I mean, it’s increased 99.7% since we started, right, and it does the same every year. It’s an exponential curve.
And if you look at the curve, we’ll be getting 100% of all the energy we need in 10 years. The person who told me that was Elon, and Elon was telling me that this is the reason why you can’t have a fully solar-powered electric car, because it’s not capable of absorbing that much from the sun with a small panel like that. He said there’s a physical limitation in the panel size. No, I mean, it’s increased to 99.7% since we started. Since what year? That’s about 35 years ago. 35 years ago, and not 99% of the ability of it, as well as the expansion of use? I mean, you might have to store it; we’re also making exponential gains in storage of electricity, right? Battery technology. So you don’t have to get it all from a solar panel that fits in a car. The concept was, could you make a solar-paneled car, a car that has solar panels on the roof, and would that be enough to power the car? And he said no, he said it’s just not really there yet. Right, it’s not there. Yeah, but it will be there in 10 years. You think so? Yeah. He seemed to doubt that; he thought that there’s a certain limitation on the amount of energy you can get from the sun, period, how much it gives out and how much those solar panels can absorb. Well, you’re not going to be able to get it all from the solar panel that fits in a car; you’re gonna have to store some of that energy, right. So you wouldn’t just be able to drive indefinitely on solar power. Yeah, that was what he was saying. But you can obviously power a house, especially if you have a roof; Tesla has those solar-powered roofs now. But you can also store the energy for a car. I mean, we’re gonna go to all renewable energy, wind and sun, within 10 years, including our ability to store the energy. All renewable in 10 years. So what are they going to do with all these nuclear plants and coal power plants and all the stuff that’s completely unnecessary? People say we need nuclear power, which we don’t; you can get it all from the sun and wind within 10 years. So in 10 years, you’d be able to power Los Angeles with sun and wind? Yes. Really? I was not aware that we were anywhere near that kind of timeline. Well, that’s because people are not taking into account exponential growth. What about the exponential growth also of the grid? Because just to pull the amount of power that you would need to charge, you know, X million electric vehicles by 2035, let’s say, the amount of change you would need on the grid would be pretty substantial. Well, we’re making exponential gains on that as well. Are we? Yeah, yeah. I wasn’t aware. I’ve had this impression that there was a problem with that, and especially in Los Angeles, they’ve actually asked people at certain times not to charge. You’re looking at the future? That’s true now, but it’s growing exponentially, in every field of technology. Then, essentially, is the bottleneck battery technology, and how close are they to solving some of these problems with, like, conflict minerals and the things that we need in order to power these batteries? I mean, our ability to store energy is also growing exponentially. So putting all that together, we’ll be able to power everything we need within 10 years. Wow. Most people don’t think that. So you’re thinking that, based on this idea that people assumed there was going to be a limit, computation would grow like this; it’s just continuing to do that.
And so we have large language models, for example; no one expected that.
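For reference, the arithmetic in the Kurzweil excerpt above can be checked with the standard doubling-time formula and the two endpoint figures he quotes (this is an editorial worked example added to this digest, not part of the quoted evidence):

\[ t_{\text{double}} = \frac{\ln 2}{\ln(1+r)}, \qquad \frac{\ln 2}{\ln(1.02)} \approx 35 \text{ years at } r = 2\%, \qquad \frac{\ln 2}{\ln(1.05)} \approx 14 \text{ years at } r \approx 5\% \]

\[ \frac{3.5 \times 10^{10} \ \text{calc/s per constant dollar}}{7 \times 10^{-6} \ \text{calc/s per constant dollar}} = 5 \times 10^{15} \]

So the quoted endpoints imply an increase on the order of quadrillions over the 80 years; the transcript's "20 quadrillion" presumably reflects slightly different endpoint values in the underlying chart.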

Hallucinations decline as the models learn more

Ray Kurzweil, March 12, 2024, https://www.youtube.com/watch?v=w4vrOUau2iY, Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He’s the author of numerous books, including the forthcoming title “The Singularity is Nearer.” Joe Rogan Experience #2117 – Ray Kurzweil

[No one expected large language models] to happen, like, five years ago, right? And we had them two years ago, but they didn’t work very well. So it began a little less than two years ago that we could actually do large language models, and that was very much a surprise to everybody. So that’s probably the primary example of exponential growth. We had Sam Altman on; one of the things that he and I were talking about was that AI figured out a way to lie: they used AI to get through a CAPTCHA system, and the AI told the system that it was vision impaired, which is not technically a lie. But it used it to bypass the “are you a robot?” check. Well, we don’t know. Right now, it’s hard for large language models to say they don’t know something. So you ask it a question, and if the answer to that question is not in the system, it still comes up with an answer. So it’ll look at everything and give you its best answer, and if the best answer is not there, it still gives you an answer. But that’s considered a hallucination. And we call it a hallucination? Yeah, that’s what it’s called, AI hallucination. So they can be wrong, and they have been so far. We’re actually working on being able to tell if it doesn’t know something, so if you ask it something, it can say, oh, I don’t know that. Right now, it can’t do that. Oh, wow. That’s interesting. So it gives you some answer, and if the answer is not there, it just makes something up; it’s the best answer, but the best answer isn’t very good, because it doesn’t know the answer. And the way to fix hallucinations is to actually give it more capability to memorise things and give it more information, so it knows the answer. If you tell it the answer to a question, it will remember that and give you that correct answer.

Advanced AI systems can be stolen and weaponized against us

Gladstone AI, February 26, 2024, https://assets-global.website-files.com/62c4cf7322be8ea59c904399/65e7779f72417554f7958260_Gladstone%20Action%20Plan%20Executive%20Summary.pdf, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI

The recent explosion of progress in advanced artificial intelligence (AI) has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks [1–4]. A key driver of these risks is an acute competitive dynamic among the frontier AI labs that are building the world’s most advanced AI systems. All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for democratic governance and global security — by the end of this decade or earlier [5–10]. The risks associated with these developments are global in scope, have deeply technical origins, and are evolving quickly. As a result, policymakers face a diminishing opportunity to introduce technically informed safeguards that can balance these considerations and ensure advanced AI is developed and adopted responsibly. These safeguards are essential to address the critical national security gaps that are rapidly emerging as this technology progresses. Frontier lab executives and staff have publicly acknowledged these dangers [11–13]. Nonetheless, competitive pressures continue to push them to accelerate their investments in AI capabilities at the expense of safety and security. The prospect of inadequate security at frontier AI labs raises the risk that the world’s most advanced AI systems could be stolen from their U.S. developers, and then weaponized against U.S. interests [9]. Frontier AI labs also take seriously the possibility that they could at some point lose control of the AI systems they themselves are developing [5,14], with potentially devastating consequences to global security.

AGI arms race triggers global escalation

Gladstone AI, February 26, 2024, https://assets-global.website-files.com/62c4cf7322be8ea59c904399/65e7779f72417554f7958260_Gladstone%20Action%20Plan%20Executive%20Summary.pdf, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI

The rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons. As advanced AI matures and the elements of the AI supply chain continue to proliferate, countries may race to acquire the resources to build sovereign advanced AI capabilities. Unless carefully managed, these competitive dynamics risk triggering an AGI arms race and increase the likelihood of global- and WMD-scale fatal accidents, interstate conflict, and escalation.

Four reasons autonomous weapons are dangerous

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, https://www.citizen.org/article/ai-joe-report/

These problems are inherent in the deployment of lethal autonomous weapons. They will persist and are unavoidable, even with the strongest controls in place. U.S. drone strikes in the so-called war on terror have killed, at minimum, hundreds of civilians – a problem due to bad intelligence and circumstance, not drone misfiring. Because the intelligence shortcomings will continue and people will continue to be people – meaning they congregate and move in unpredictable ways – shifting decision making to autonomous systems will not reduce this death toll going forward. In fact, it is likely to worsen the problem. The patina of “pure” decision-making will make it easier for humans to launch and empower autonomous weapons, as will the moral distance between humans and the decision to use lethal force against identifiable individuals. The removal of human common sense – the ability to look at a situation and restrain from authorizing lethal force, even in the face of indicators pointing to the use of force – can only worsen the problem still more. Additional problems are likely to occur because of AI mistakes, including bias. Strong testing regimes will mitigate these problems, but human-created AI has persistently displayed problems with racial bias, including in facial recognition and in varied kinds of decision making, a very significant issue when U.S. targets are so often people of color. To its credit, the Pentagon identifies this risk, and other possible AI weaknesses, including problems relating to adversaries’ countermeasures, the risk of tampering and cybersecurity. It would be foolish, however, to expect that Pentagon testing will adequately prevent these problems; too much is uncertain about the functioning of AI and it is impossible to replicate real-world battlefield conditions. Explains the research nonprofit Automated Decision Research: “The digital dehumanization that results from reducing people to data points based on specific characteristics raises serious questions about how the target profiles of autonomous weapons are created, and what pre-existing data these target profiles are based on. It also raises questions about how the user can understand what falls into a weapon’s target profile, and why the weapons system applied force.” Based on real-world experience with AI, the risk of autonomous weapon failure in the face of unanticipated circumstances (an “unknown unknown”) should be rated high. Although the machines are not likely to turn on their makers, Terminator-style, they may well function in dangerous and completely unanticipated ways – an unacceptable risk in the context of the deployment of lethal force. One crucial problem is that AIs are not able to deploy common sense, or reason based on past experience about unforeseen and novel circumstances. The example of self-driving cars is illustrative, notably that of a Cruise self-driving vehicle driving into and getting stuck in wet concrete in San Francisco. The problem, explains AI expert Gary Marcus, is “‘edge cases,’ out-of-the-ordinary circumstances that often confound machine learning algorithms. The more complicated a domain is, the more unanticipated outliers there tend to be. And the real world is really complicated and messy; there’s no way to list all the crazy and out of ordinary things that can happen.” It’s hard to imagine a more complicated and unpredictable domain than the battlefield, especially when those battlefields occur in urban environments or are occupied by substantial numbers of civilians.
A final problem is that, as a discrete weapons technology, autonomous weapons deployment is nearly certain to create an AI weapons arms race. That is the logic of international military strategy. In the United States, a geopolitical rivalry-driven autonomous weapons arms race will be spurred further by the military-industrial complex and corporate contractors, about which more below. Autonomous weapons are already in development around the world and racing forward. Automated Decision Research details more than two dozen weapons systems of concern including several built by U.S. corporations.

Autonomous weapons needed to be able to defend Taiwan; no way humans could do that

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, https://www.citizen.org/article/ai-joe-report/

Meanwhile, Hicks in August 2023 announced a major new program, the Replicator Initiative, that would rely heavily on drones to combat Chinese missile strength in a theoretical conflict over Taiwan or at China’s eastern coast. The purpose, she said, was to counter Chinese “mass,” avoid using “our people as cannon fodder like some competitors do,” and leverage “attritable, autonomous systems.” “Attritable” is a Pentagon term that means a weapon is relatively low cost and that some substantial portion of those used are likely to be destroyed (subject to attrition). “We’ve set a big goal for Replicator,” Hicks stated: “to field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months.” In Pentagon lingo, she said, the U.S. “all-domain, attritable autonomous systems will help overcome the challenge of anti-access, area-denial systems. Our ADA2 to thwart their A2AD.” There is more than a little uncertainty over exactly what Replicator will be. Hicks said it would require no new additional funding, drawing instead on existing funding lines. At the same time, Hicks was quite intentionally selling it as big and transformational, calling it “game-changing.” What the plan appears to be is to develop the capacity to launch a “drone swarm” over China, with the number of relatively low-cost drones so great that mathematically some substantial number will evade China’s air defenses. While details remain vague, it is likely that this drone swarm model would rely on autonomous weapons. “Experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles,” reports the Associated Press (AP). AP asked the Pentagon if it is currently formally assessing any fully autonomous lethal weapons system for deployment, but a Pentagon spokesperson refused to answer. The risks of this program, if it is in fact technologically and logistically achievable, are enormous. Drone swarms implicate all the concerns of autonomous weaponry, plus more. The sheer number of agents involved would make human supervision far less practicable or effective. Additionally, AI-driven swarms involve autonomous agents that would interact with and coordinate with each other, likely in ways not foreseen by humans and also likely indecipherable to humans in real time. The risks of dehumanization, loss of human control, attacks on civilians, mistakes and unforeseen action are all worse with swarms.

Autonomous weapons inevitable

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, https://www.citizen.org/article/ai-joe-report/

Against the backdrop of the DOD announcements, military policy talk has shifted: The development and deployment of autonomous weapons is, increasingly, being treated as a matter of when, not if. “The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it — and on the rapid timelines required,” said Christian Brose, chief strategy officer at the military AI company Anduril, a former Senate Armed Services Committee staff director and author of the 2020 book The Kill Chain. Summarizes The Hill: “the U.S. is moving fast toward an ambitious goal: propping up a fleet of legacy ships, aircraft and vehicles with the support of weapons powered by artificial intelligence (AI), creating a first-of-its-kind class of war technology. It’s also spurring a huge boost across the defense industry, which is tasked with developing and manufacturing the systems.” Frank Kendall, the Air Force secretary, told the New York Times that it is necessary and inevitable that the U.S. move to deploy lethal, autonomous weapons.

Autonomous weapons are the most moral choice. Killing is killing and humans can’t think that fast

ROBERT WEISSMAN AND SAVANNAH WOOTEN, Public Citizen, February 29, 2024, A.I. Joe: The Dangers of Artificial Intelligence and the Military, https://www.citizen.org/article/ai-joe-report/

Thomas Hammes, who previously held command positions in the U.S. Marines and is now a research fellow at the U.S. National Defense University, penned an article for the Atlantic Council with the headline, “Autonomous Weapons are the Moral Choice.” Hammes’ argument is, on the one hand, killing is killing and it doesn’t matter if it’s done by a traditional or autonomous weapon. On the other hand, he contends, “No longer will militaries have the luxury of debating the impact on a single target. Instead, the question is how best to protect thousands of people while achieving the objectives that brought the country to war. It is difficult to imagine a more unethical decision than choosing to go to war and sacrifice citizens without providing them with the weapons to win.”

AI lacks consciousness

Perry Carpenter, Forbes Councils Member, February 29, 2024, Forbes, Understanding The Limits Of AI And What This Means For Cybersecurity, https://www.forbes.com/sites/forbesbusinesscouncil/2024/02/29/understanding-the-limits-of-ai-and-what-this-means-for-cybersecurity/?sh=10d218e232e6

AI lacks the conscious experiences that we humans have. It does not have beliefs, desires or feelings—a distinction crucial for interpreting the capabilities and limitations of AI. At times, AI can perform tasks that, in some ways, seem remarkably human-like. However, the underlying processes are fundamentally different from human cognition and consciousness.

AI weapons under development

Tom Porter, November 21, 2023, The Pentagon is moving toward letting AI weapons autonomously decide to kill humans. Yahoo News. https://news.yahoo.com/pentagon-moving-toward-letting-ai-120645293.html

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported. Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel…. The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year. In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset China’s People’s Liberation Army’s (PLA) numerical advantage in weapons and people. “We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, reported Reuters.

US trying to stop a binding resolution on AI weapons development

Tom Porter, November 21, 2023, The Pentagon is moving toward letting AI weapons autonomously decide to kill humans. Yahoo News. https://news.yahoo.com/pentagon-moving-toward-letting-ai-120645293.html

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations — which also includes Russia, Australia, and Israel — who are resisting any such move, favoring a non-binding resolution instead, The Times reported. “This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

Weaponized AI drones attacking military targets in Ukraine

David Hambling, October 13, 2023. New Scientist. Ukrainian AI attack drones may be killing without human oversight https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-killing-without-human-oversight/.

Ukrainian attack drones equipped with artificial intelligence are now finding and attacking targets without human assistance, New Scientist has learned, in what would be the first confirmed use of autonomous weapons or “killer robots”. While the drones are designed to target vehicles such as tanks, rather than infantry

AI regulation and alignment fail

Jim VandeHei & Mike Allen, 11-21, 23, https://www.axios.com/2023/11/21/washington-ai-sam-altman-chatgpt-microsoft, Behind the Curtain: Myth of AI restraint,

Nearly every high-level Washington meeting, star-studded conference and story about AI centers on one epic question: Can this awesome new power be constrained? It cannot, experts repeatedly and emphatically told us. Why it matters: Lots of people want to roll artificial intelligence out slowly, use it ethically, regulate it wisely. But everyone gets the joke: It defies all human logic and experience to think ethics will trump profit, power, prestige. Never has. Never will. Practically speaking, there’s no way to truly do any of this once a competition of this size and import is unleashed. And unleashed it is — at breathtaking scale. AI pioneer Mustafa Suleyman — co-founder and CEO of Inflection AI, and co-founder of AI giant DeepMind, now part of Google — sounds the alarm in his new book “The Coming Wave,” with the sobering Chapter 1: “Containment Is Not Possible.” That’s why Sam Altman getting sacked — suddenly and shockingly — should grab your attention. OpenAI — creator of the most popular generative AI tool, ChatGPT — became a battlefield between ethical true believers, who control the board, and the profit-and-progress activators like Altman who ran the company. Altman was quickly scooped up by Microsoft, OpenAI’s main sugar daddy, to move faster with a “new advanced AI research team.” OpenAI’s interim CEO is a doom-fearing advocate for slowing the AI race — Twitch co-founder Emmett Shear, who recently warned there’s a 5% to 50% chance this new tech ends humanity. What we’re hearing: Few in Silicon Valley think the Shears of the world will win this battle. The dynamics they’re battling are too powerful: Competition between technologists and technology companies to create something with superhuman power inevitably leads to speed and high risk. It’s why free competition exists and works. Even if individuals and companies magically showed never-before-seen restraint and humility, competitive governments and nations won’t. China will force us to throw caution to the wind: The only thing worse than superhuman power in our hands is it being in China’s … or Iran’s … or Russia’s. Even if other nations stumbled and America’s innovators paused, there are still open-source models that bad actors could exploit. Top AI architects tell us there’ll likely be no serious regulation of generative AI, which one day soon could spawn artificial general intelligence (AGI) — the one that could outthink our species. Corporations won’t do it: They’re pouring trillions of dollars into the race of our lifetime. Government can’t do it: Congress is too divided to tackle the complexities of AI regulation in an election year. Individuals can’t do it: A fractured AI safety movement will persist. But the technology will solve so many big problems in the short term that most people won’t bother worrying about a future that might never materialize. Congress isn’t giving up. Senate Intelligence Committee Chairman Mark Warner (D-Va.) — a former tech entrepreneur who has been a leader in the Capitol Hill conversation on AI — told us he sees more need than ever “for Congress to establish some rules of the road when it comes to the risks posed by these technologies.” But lawmakers have always had trouble regulating tech companies. Axios reporters on the Hill tell us there are so many conflicting AI proposals that it’s hard to see any one of them getting traction. Reality check: Global nuclear agreements did slow proliferation. Global agreements on fluorocarbons did rescue the ozone layer. Aviation has guardrails.
With AI, though, there’s no time to build consensus or constituencies. The reality is now. The bottom line: There’s never been such fast consumer adoption of a new technology. Cars took decades. The internet didn’t get critical mass until the smartphone. But ChatGPT was a hit overnight — 100 million users in a matter of weeks. No way it’ll be rolled back.

Powerful AI technology will cause mass unemployment

Matt Marshall, 11-18, 23, OpenAI’s leadership coup could slam brakes on growth in favor of AI safety, Venture Beat, https://venturebeat.com/ai/openais-leadership-coup-could-slam-brakes-on-growth-in-favor-of-ai-safety/

While a lot of details remain unknown about the exact reasons for the OpenAI board’s firing of CEO Sam Altman Friday, new facts have emerged that show co-founder Ilya Sutskever led the firing process, with the support of the board. While the board’s statement about the firing said it resulted from communication from Altman that “wasn’t consistently candid,” the exact reasons or timing of the board’s decision remain shrouded in mystery. But one thing is clear: Altman and co-founder Greg Brockman, who quit Friday after learning of Altman’s firing, were leaders of the company’s business side, doing the most to aggressively raise funds, expand OpenAI’s business offerings, and push its technology capabilities forward as quickly as possible. Sutskever, meanwhile, led the company’s engineering side, and has been obsessed by the coming ramifications of OpenAI’s generative AI technology, often talking in stark terms about what will happen when artificial general intelligence (AGI) is reached. He warned that the technology will be so powerful that it will put most people out of jobs.

Your evidence is old – OpenAI has new technologies that will trigger mass unemployment

Matt Marshall, 11-18, 23, OpenAI’s leadership coup could slam brakes on growth in favor of AI safety, Venture Beat, https://venturebeat.com/ai/openais-leadership-coup-could-slam-brakes-on-growth-in-favor-of-ai-safety/

Friday night, many onlookers slapped together a timeline of events, including efforts by Altman and Brockman to raise more money at a lofty valuation of $90 billion, that all point to a very high likelihood that arguments broke out at the board level, with Sutskever and others concerned about the possible dangers posed by some recent breakthroughs by OpenAI that had pushed AI automation to increased levels. Indeed, Altman had confirmed that the company was working on GPT-5, the next stage of model performance for ChatGPT. And at the APEC conference last week in San Francisco, Altman referred to having recently seen more evidence of another step forward in the company’s technology: “Four times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” (See minute 3:15 of this video; hat-tip to Matt Mireles.) Data scientist Jeremy Howard posted a long thread on X about how OpenAI’s DevDay was an embarrassment for researchers concerned about safety, and the aftermath was the last straw for Sutskever: Researcher Nirit Weiss-Blatt provided some good insight into Sutskever’s worldview in her post about comments he’d made recently in May: “If you believe that AI will literally automate all jobs, literally, then it makes sense for a company that builds such technology to … not be an absolute profit maximizer. It’s relevant precisely because these things will happen at some point…. If you believe that AI is going to, at minimum, unemploy everyone, that’s like, holy moly, right?”

Extinction risks are not hype and alignment is difficult in practice

Steve Peterson, Philosophy Professor, 11-19, 23, NY Post, The OpenAI fiasco shows why we must regulate artificial intelligence (nypost.com)

Many have since cynically assumed those industry signatures were mere ploys to create regulation friendly to the entrenched interests. This is nonsense. First, AI’s existential risk is hardly a problem the industry invented as a pretext; serious academics like Stuart Russell and Max Tegmark, with no financial stake, have been concerned about it since before those AGI corporations were glimmers in their investors’ eyes. There are dangers of AI already present but quickly amplifying in both power and prevalence: misinformation, algorithmic bias, surveillance, and intellectual stultification, to name a few. And second, the history of each of these companies suggests they themselves genuinely want to avoid a competitive race to the bottom when it comes to AI safety. Maddening sentiments like futurist Daniel Jeffries’ tweet illustrate the danger: “The entire AI industry would like to thank the OpenAI board for giving us all a chance to catch up.” But all these companies needed serious money to do their empirical AI research, and it’s sadly rare for people to hand out big chunks of money just for the good of humanity. And so Google bought DeepMind, Microsoft invested heavily in OpenAI, and now Amazon is investing in Anthropic. Each AGI company has been leading a delicate dance between bringing the benefit of near-term, pre-AGI to people — thereby pleasing investors — and not risking existential disaster in the process. One plausible source of the schism between Altman and the board is about where to find the proper tradeoff, and the AI industry as a whole is facing this dilemma. Reasonable people can disagree about that. Having worked on “aligning” AI for about 10 years now, I am much more concerned about the risks than when I started. AI alignment is one of those problems — too common in both math and philosophy — that look easy from a distance and get harder the more you dig into them. Whatever the right risk assessment is, though, I hope we can all agree investor greed should not be a thumb on this particular scale. Alas, as I write, it’s starting to look like Microsoft and other investors are pressuring OpenAI to remove the voices of caution from its board. Unregulated profit-seeking should not drive AGI any more than it should drive genetic engineering, pharmaceuticals or nuclear energy. Given the way things appear to be headed, though, the corporations can no longer be trusted to police themselves; it’s past time to call in the long arm of the law.

AI evolves faster than regulation possibly can

Josh Tyrangiel, 9-12, 23, Washington Post, Opinion: OpenAI’s Sam Altman wants the government to interfere, https://www.washingtonpost.com/opinions/2023/09/12/sam-altman-openai-artificial-intelligence-regulation-summit/

Then there’s the issue of actual agreement. Altman and Microsoft, which has invested at least $10 billion in OpenAI, support the creation of a single oversight agency. IBM and Google don’t. Musk has called for a six-month stoppage on sophisticated AI development. Everyone else thinks Musk, an OpenAI co-founder who fell out with Altman and announced the creation of a rival company in March, is insincere. And everyone’s sincerity is worth examining, including Altman’s. (In an interview with Bloomberg’s Emily Chang, Altman was asked if he could be trusted with AI’s powers: “You shouldn’t. … No one person should be trusted here.”) These companies know that by the time a new oversight agency is funded and staffed, the AI genie will likely have left the bottle — and eaten it. What’s the harm in playing nice when you’ll probably get the freedom to do whatever you want anyway?

AI = massive catastrophes

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

Following this line of thinking, I often hear people say something along the lines of “AGI is the greatest risk humanity faces today! It’s going to end the world!” But when pressed on what this actually looks like, how this actually comes about, they become evasive, the answers woolly, the exact danger nebulous. AI, they say, might run away with all the computational resources and turn the whole world into a giant computer. As AI gets more and more powerful, the most extreme scenarios will require serious consideration and mitigation. However, well before we get there, much could go wrong. Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms—from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain theft, and willful sabotage. Think about an ACI capable of easily passing the Modern Turing Test, but turned toward catastrophic ends. Advanced AIs and synthetic biology will not only be available to groups finding new sources of energy or life-changing drugs; they will also be available to the next Ted Kaczynski. AI is both valuable and dangerous precisely because it’s an extension of our best and worst selves. And as a technology premised on learning, it can keep adapting, probing, producing novel strategies and ideas potentially far removed from anything before considered, even by other AIs. Ask it to suggest ways of knocking out the freshwater supply, or crashing the stock market, or triggering a nuclear war, or designing the ultimate virus, and it will. Soon. Even more than I worry about speculative paper-clip maximizers or some strange, malevolent demon, I worry about what existing forces this tool will amplify in the next ten years. Imagine scenarios where AIs control energy grids, media programming, power stations, planes, or trading accounts for major financial houses. When robots are ubiquitous, and militaries stuffed with lethal autonomous weapons—warehouses full of technology that can commit autonomous mass murder at the literal push of a button—what might a hack, developed by another AI, look like? Or consider even more basic modes of failure, not attacks, but plain errors. What if AIs make mistakes in fundamental infrastructures, or a widely used medical system starts malfunctioning? It’s not hard to see how numerous, capable, quasi-autonomous agents on the loose, even those chasing well-intentioned but ill-formed goals, might sow havoc. We don’t yet know the implications of AI for fields as diverse as agriculture, chemistry, surgery, and finance. That’s part of the problem; we don’t know what failure modes are being introduced and how deep they could extend. There is no instruction manual on how to build the technologies in the coming wave safely. We cannot build systems of escalating power and danger to experiment with ahead of time. We cannot know how quickly an AI might self-improve, or what would happen after a lab accident with some not yet invented piece of biotech. We cannot tell what results from a human consciousness plugged directly into a computer, or what an AI-enabled cyberweapon means for critical infrastructure, or how a gene drive will play out in the wild. Once fast-evolving, self-assembling automatons or new biological agents are released, out in the wild, there’s no rewinding the clock.
After a certain point, even curiosity and tinkering might be dangerous. Even if you believe the chance of catastrophe is low, that we are operating blind should give you pause. Suleyman, Mustafa. The Coming Wave (p. 264). Crown. Kindle Edition.

Tremendous harm is inevitable

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

Nor is building safe and contained technology in itself sufficient. Solving the question of AI alignment doesn’t mean doing so once; it means doing it every time a sufficiently powerful AI is built, wherever and whenever that happens. You don’t just need to solve the question of lab leaks in one lab; you need to solve it in every lab, in every country, forever, even while those same countries are under serious political strain. Once technology reaches a critical capability, it isn’t enough for early pioneers to just build it safely, as challenging as that undoubtedly is. Rather, true safety requires maintaining those standards across every single instance: a mammoth expectation given how fast and widely these are already diffusing. This is what happens when anyone is free to invent or use tools that affect us all. And we aren’t just talking about access to a printing press or a steam engine, as extraordinary as they were. We are talking about outputs with a fundamentally new character: new compounds, new life, new species. If the wave is uncontained, it’s only a matter of time. Allow for the possibility of accident, error, malicious use, evolution beyond human control, unpredictable consequences of all kinds. At some stage, in some form, something, somewhere, will fail. And this won’t be a Bhopal or even a Chernobyl; it will unfold on a worldwide scale. This will be the legacy of technologies produced, for the most part, with the best of intentions. However, not everyone shares those intentions. Suleyman, Mustafa. The Coming Wave (p. 265). Crown. Kindle Edition.

There are plenty of people who want to use it for bad – terrorists, cults, lunatics, and suicidal states

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

CULTS, LUNATICS, AND SUICIDAL STATES Most of the time the risks arising from things like gain-of-function research are a result of sanctioned and benign efforts. They are, in other words, supersized revenge effects, unintended consequences of a desire to do good. Unfortunately, some organizations are founded with precisely the opposite motivation. Founded in the 1980s, Aum Shinrikyo (Supreme Truth) was a Japanese doomsday cult. The group originated in a yoga studio under the leadership of a man who called himself Shoko Asahara. Building a membership among the disaffected, they radicalized as their numbers swelled, becoming convinced that the apocalypse was nigh, that they alone would survive, and that they should hasten it. Asahara grew the cult to somewhere between forty thousand and sixty thousand members, coaxing a loyal group of lieutenants all the way to using biological and chemical weapons. At Aum Shinrikyo’s peak popularity it is estimated to have held more than $1 billion in assets and counted dozens of well-trained scientists as members. Despite a fascination with bizarre, sci-fi weapons like earthquake-generating machines, plasma guns, and mirrors to deflect the sun’s rays, they were a deadly serious and highly sophisticated group. Aum built dummy companies and infiltrated university labs to procure material, purchased land in Australia with the intent of prospecting for uranium to build nuclear weapons, and embarked on a huge biological and chemical weapons program in the hilly countryside outside Tokyo. The group experimented with phosgene, hydrogen cyanide, soman, and other nerve agents. They planned to engineer and release an enhanced version of anthrax, recruiting a graduate-level virologist to help. Members obtained the neurotoxin C. botulinum and sprayed it on Narita International Airport, the National Diet Building, the Imperial Palace, the headquarters of another religious group, and two U.S. naval bases. Luckily, they made a mistake in its manufacture and no harm ensued. It didn’t last. In 1994, Aum Shinrikyo sprayed the nerve agent sarin from a truck, killing eight and wounding two hundred. A year later they struck the Tokyo subway, releasing more sarin, killing thirteen and injuring some six thousand people. The subway attack, which involved depositing sarin-filled bags around the metro system, was more harmful partly because of the enclosed spaces. Thankfully neither attack used a particularly effective delivery mechanism. But in the end it was only luck that stopped a more catastrophic event. Aum Shinrikyo combined an unusual degree of organization with a frightening level of ambition. They wanted to initiate World War III and a global collapse by murdering at shocking scale and began building an infrastructure to do so. On the one hand, it’s reassuring how rare organizations like Aum Shinrikyo are. Of the many terrorist incidents and other non-state-perpetrated mass killings since the 1990s, most have been carried out by disturbed loners or groups with specific political or ideological agendas. But on the other hand, this reassurance has limits. Procuring weapons of great power was previously a huge barrier to entry, helping keep catastrophe at bay. The sickening nihilism of the school shooter is bounded by the weapons they can access. The Unabomber had only homemade devices. Building and disseminating biological and chemical weapons were huge challenges for Aum Shinrikyo. 
As a small, fanatical coterie operating in an atmosphere of paranoid secrecy, with only limited expertise and access to materials, they made mistakes. As the coming wave matures, however, the tools of destruction will, as we’ve seen, be democratized and commoditized. They will have greater capability and adaptability, potentially operating in ways beyond human control or understanding, evolving and upgrading at speed, some of history’s greatest offensive powers available widely. Those who would use new technologies like Aum are fortunately rare. Yet even one Aum Shinrikyo every fifty years is now one too many to avert an incident orders of magnitude worse than the subway attack. Cults, lunatics, suicidal states on their last legs, all have motive and now means. As a report on the implications of Aum Shinrikyo succinctly puts it, “We are playing Russian roulette.” A new phase of history is here. With zombie governments failing to contain technology, the next Aum Shinrikyo, the next industrial accident, the next mad dictator’s war, the next tiny lab leak, will have an impact that is difficult to contemplate. It’s tempting to dismiss all these dark risk scenarios as the distant daydreams of people who grew up reading too much science fiction, those biased toward catastrophism. Tempting, but a mistake. Regardless of where we are with BSL-4 protocols or regulatory proposals or technical publications on the AI alignment problem, those incentives grind away, the technologies keep developing and diffusing. This is not the stuff of speculative novels and Netflix series. This is real, being worked on right this second in offices and labs around the world. So serious are the risks, however, that they necessitate consideration of all the options. Containment is about the ability to control technology. Further back, that means the ability to control the people and societies behind it. As catastrophic impacts unfurl or their possibility becomes unignorable, the terms of debate will change. Calls for not just control but crackdowns will grow. The potential for unprecedented levels of vigilance will become ever more appealing. Perhaps it might be possible to spot and then stop emergent threats? Wouldn’t that be for the best—the right thing to do? It’s my best guess this will be the reaction of governments and populations around the world. When the unitary power of the nation-state is threatened, when containment appears increasingly difficult, when lives are on the line, the inevitable reaction will be a tightening of the grip on power. The question is, at what cost?

Regulation won’t solve

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

While garage amateurs gain access to more powerful tools and tech companies spend billions on R&D, most politicians are trapped in a twenty-four-hour news cycle of sound bites and photo ops. When a government has devolved to the point of simply lurching from crisis to crisis, it has little breathing room for tackling tectonic forces requiring deep domain expertise and careful judgment on uncertain timescales. It’s easier to ignore these issues in favor of low-hanging fruit more likely to win votes in the next election. Even technologists and researchers in areas like AI struggle with the pace of change. What chance, then, do regulators have, with fewer resources? How do they account for an age of hyper-evolution, for the pace and unpredictability of the coming wave? Technology evolves week by week. Drafting and passing legislation takes years. Consider the arrival of a new product on the market like Ring doorbells. Ring put a camera on your front door and connected it to your phone. The product was adopted so quickly and is now so widespread that it has fundamentally changed the nature of what needs regulating; suddenly your average suburban street went from relatively private space to surveilled and recorded. By the time the regulation conversation caught up, Ring had already created an extensive network of cameras, amassing data and images from the front doors of people around the world. Twenty years on from the dawn of social media, there’s no consistent approach to the emergence of a powerful new platform (and besides, is privacy, polarization, monopoly, foreign ownership, or mental health the core problem—or all of the above?). The coming wave will worsen this dynamic. Discussions of technology sprawl across social media, blogs and newsletters, academic journals, countless conferences and seminars and workshops, their threads distant and increasingly lost in the noise. Everyone has a view, but it doesn’t add up to a coherent program. Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out. Yet I believe they are aspects of what amounts to the same phenomenon; they all aim to address different aspects of the same wave. It’s not enough to have dozens of separate conversations about algorithmic bias or bio-risk or drone warfare or the economic impact of robotics or the privacy implications of quantum computing. It completely underplays how interrelated both causes and effects are. We need an approach that unifies these disparate conversations, encapsulating all those different dimensions of risk, a general-purpose concept for this general-purpose revolution. Suleyman, Mustafa. The Coming Wave (pp. 282-283). Crown. Kindle Edition.

AI means massive dislocation and job loss; new jobs won’t offset the loss, and there will be massive interim disruption

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

In the years since I co-founded DeepMind, no AI policy debate has been given more airtime than the future of work—to the point of oversaturation. Here was the original thesis. In the past, new technologies put people out of work, producing what the economist John Maynard Keynes called “technological unemployment.” In Keynes’s view, this was a good thing, with increasing productivity freeing up time for further innovation and leisure. Examples of tech-related displacement are myriad. The introduction of power looms put old-fashioned weavers out of business; motorcars meant that carriage makers and horse stables were no longer needed; lightbulb factories did great as candlemakers went bust. Broadly speaking, when technology damaged old jobs and industries, it also produced new ones. Over time these new jobs tended toward service industry roles and cognitive-based white-collar jobs. As factories closed in the Rust Belt, demand for lawyers, designers, and social media influencers boomed. So far at least, in economic terms, new technologies have not ultimately replaced labor; they have in the aggregate complemented it. But what if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn? If the coming wave really is as general and wide-ranging as it appears, how will humans compete? What if a large majority of white-collar tasks can be performed more efficiently by AI? In few areas will humans still be “better” than machines. I have long argued this is the more likely scenario. With the arrival of the latest generation of large language models, I am now more convinced than ever that this is how things will play out. These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of “cognitive manual labor” are numbered. We are only just now starting to see what impact this new wave is about to have. Early analysis of ChatGPT suggests it boosts the productivity of “mid-level college educated professionals” by 40 percent on many tasks. That in turn could affect hiring decisions: a McKinsey study estimated that more than half of all jobs could see many of their tasks automated by machines in the next seven years, while fifty-two million Americans work in roles with a “medium exposure to automation” by 2030. The economists Daron Acemoglu and Pascual Restrepo estimate that robots cause the wages of local workers to fall. With each additional robot per thousand workers there is a decline in the employment-to-population ratio, and consequently a fall in wages. Today algorithms perform the vast bulk of equity trades and increasingly act across financial institutions, and yet, even as Wall Street booms, it sheds jobs as technology encroaches on more and more tasks. Many remain unconvinced. Economists like David Autor argue that new technology consistently raises incomes, creating demand for new labor. Technology makes companies more productive, it generates more money, which then flows back into the economy. 
Put simply, demand is insatiable, and this demand, stoked by the wealth technology has generated, gives rise to new jobs requiring human labor. After all, skeptics say, ten years of deep learning success has not unleashed a jobs automation meltdown. Buying into that fear was, some argue, just a repeat of the old “lump of labor” fallacy, which erroneously claims there is only a set amount of work to go around. Instead, the future looks more like billions of people working in high-end jobs still barely conceived of. I believe this rosy vision is implausible over the next couple of decades; automation is unequivocally another fragility amplifier. As we saw in chapter 4, AI’s rate of improvement is well beyond exponential, and there appears no obvious ceiling in sight. Machines are rapidly imitating all kinds of human abilities, from vision to speech and language. Even without fundamental progress toward “deep understanding,” new language models can read, synthesize, and generate eye-wateringly accurate and highly useful text. There are literally hundreds of roles where this single skill alone is the core requirement, and yet there is so much more to come from AI. Yes, it’s almost certain that many new job categories will be created. Who would have thought that “influencer” would become a highly sought-after role? Or imagined that in 2023 people would be working as “prompt engineers”—nontechnical programmers of large language models who become adept at coaxing out specific responses? Demand for masseurs, cellists, and baseball pitchers won’t go away. But my best guess is that new jobs won’t come in the numbers or timescale to truly help. The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs. And, sure, new demand will create new work, but that doesn’t mean it all gets done by human beings. Labor markets also have immense friction in terms of skills, geography, and identity. Consider that in the last bout of deindustrialization the steelworker in Pittsburgh or the carmaker in Detroit could hardly just up sticks, retrain mid-career, and get a job as a derivatives trader in New York or a branding consultant in Seattle or a schoolteacher in Miami. If Silicon Valley or the City of London creates lots of new jobs, it doesn’t help people on the other side of the country if they don’t have the right skills or aren’t able to relocate. If your sense of self is wedded to a particular kind of work, it’s little consolation if you feel your new job demeans your dignity. Working on a zero-hours contract in a distribution center doesn’t provide the sense of pride or social solidarity that came from working for a booming Detroit auto manufacturer in the 1960s. The Private Sector Job Quality Index, a measure of how many jobs provide above-average income, has plunged since 1990; it suggests that well-paying jobs as a proportion of the total have already started to fall. Countries like India and the Philippines have seen a huge boom from business process outsourcing, creating comparatively high-paying jobs in places like call centers. It’s precisely this kind of work that will be targeted by automation. New jobs might be created in the long term, but for millions they won’t come quick enough or in the right places. At the same time, a jobs recession will crater tax receipts, damaging public services and calling into question welfare programs just as they are most needed.
Even before jobs are decimated, governments will be stretched thin, struggling to meet all their commitments, finance themselves sustainably, and deliver services the public has come to expect. Moreover, all this disruption will happen globally, on multiple dimensions, affecting every rung of the development ladder from primarily agricultural economies to advanced service-based sectors. From Lagos to L.A., pathways to sustainable employment will be subject to immense, unpredictable, and fast-evolving dislocations. Even those who don’t foresee the most severe outcomes of automation still accept that it is on course to cause significant medium-term disruptions. Whichever side of the jobs debate you fall on, it’s hard to deny that the ramifications will be hugely destabilizing for hundreds of millions who will, at the very least, need to re-skill and transition to new types of work. Optimistic scenarios still involve troubling political ramifications from broken government finances to underemployed, insecure, and angry populations. It augurs trouble. Another stressor in a stressed world. Suleyman, Mustafa. The Coming Wave (pp. 227-228). Crown. Kindle Edition.

 

AI means surveillance and totalitarianism

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

ROCKET FUEL FOR AUTHORITARIANISM When compared with superstar corporations, governments appear slow, bloated, and out of touch. It’s tempting to dismiss them as headed for the trash can of history. However, another inevitable reaction of nation-states will be to use the tools of the coming wave to tighten their grip on power, taking full advantage to entrench their dominance. In the twentieth century, totalitarian regimes wanted planned economies, obedient populations, and controlled information ecosystems. They wanted complete hegemony. Every aspect of life was managed. Five-year plans dictated everything from the number and content of films to bushels of wheat expected from a given field. High modernist planners hoped to create pristine cities of stark order and flow. An ever-watchful and ruthless security apparatus kept it all ticking over. Power concentrated in the hands of a single supreme leader, capable of surveying the entire picture and acting decisively. Think Soviet collectivization, Stalin’s five-year plans, Mao’s China, East Germany’s Stasi. This is government as dystopian nightmare. And so far at least, it has always gone disastrously wrong. Despite the best efforts of revolutionaries and bureaucrats alike, society could not be bent into shape; it was never fully “legible” to the state, but a messy, ungovernable reality that would not conform with the purist dreams of the center. Humanity is too multifarious, too impulsive to be boxed in like this. In the past, the tools available to totalitarian governments simply weren’t equal to the task. So those governments failed; they failed to improve quality of life, or eventually they collapsed or reformed. Extreme concentration wasn’t just highly undesirable; it was practically impossible. The coming wave presents the disturbing possibility that this may no longer be true. Instead, it could initiate an injection of centralized power and control that will morph state functions into repressive distortions of their original purpose. Rocket fuel for authoritarians and for great power competition alike. The ability to capture and harness data at an extraordinary scale and precision; to create territory-spanning systems of surveillance and control, reacting in real time; to put, in other words, history’s most powerful set of technologies under the command of a single body, would rewrite the limits of state power so comprehensively that it would produce a new kind of entity altogether. Your smart speaker wakes you up. Immediately you turn to your phone and check your emails. Your smart watch tells you you’ve had a normal night’s sleep and your heart rate is average for the morning. Already a distant organization knows, in theory, what time you are awake, how you are feeling, and what you are looking at. You leave the house and head to the office, your phone tracking your movements, logging the keystrokes on your text messages and the podcast you listen to. On the way, and throughout the day, you are captured on CCTV hundreds of times. After all, this city has at least one camera for every ten people, maybe many more than that. When you swipe in at the office, the system notes your time of entry. Software installed on your computer monitors productivity down to eye movements. On the way home you stop to buy dinner. The supermarket’s loyalty scheme tracks your purchases. After eating, you binge-stream another TV series; your viewing habits are duly noted. 
Every glance, every hurried message, every half thought registered in an open browser or fleeting search, every step through bustling city streets, every heartbeat and bad night’s sleep, every purchase made or backed out of—it is all captured, watched, tabulated. And this is only a tiny slice of the possible data harvested every day, not just at work or on the phone, but at the doctor’s office or in the gym. Almost every detail of life is logged, somewhere, by those with the sophistication to process and act on the data they collect. This is not some far-off dystopia. I’m describing daily reality for millions in a city like London. The only step left is bringing these disparate databases together into a single, integrated system: a perfect twenty-first-century surveillance apparatus. The preeminent example is, of course, China. That’s hardly news, but what’s become clear is how advanced and ambitious the party’s program already is, let alone where it might end up in twenty or thirty years. Compared with the West, Chinese research into AI concentrates on areas of surveillance like object tracking, scene understanding, and voice or action recognition. Surveillance technologies are ubiquitous, increasingly granular in their ability to home in on every aspect of citizens’ lives. They combine visual recognition of faces, gaits, and license plates with data collection—including bio-data—on a mass scale. Centralized services like WeChat bundle everything from private messaging to shopping and banking in one easily traceable place. Drive the highways of China, and you’ll notice hundreds of Automatic Number Plate Recognition cameras tracking vehicles. (These exist in most large urban areas in the Western world, too.) During COVID quarantines, robot dogs and drones carried speakers blasting messages warning people to stay inside. Facial recognition software builds on the advances in computer vision we saw in part 2, identifying individual faces with exquisite accuracy. When I open my phone, it starts automatically upon “seeing” my face: a small but slick convenience, but with obvious and profound implications. Although the system was initially developed by corporate and academic researchers in the United States, nowhere embraced or perfected the technology more than China. Chairman Mao had said “the people have sharp eyes” when watching their neighbors for infractions against communist orthodoxy. By 2015 this was the inspiration for a massive “Sharp Eyes” facial recognition program that ultimately aspired to roll such surveillance out across no less than 100 percent of public space. A team of leading researchers from the Chinese University of Hong Kong went on to found SenseTime, one of the world’s largest facial recognition companies, built on a database of more than two billion faces. China is now the leader in facial recognition technologies, with giant companies like Megvii and CloudWalk vying with SenseTime for market share. Chinese police even have sunglasses with built-in facial recognition technology capable of tracking suspects in crowds. Around half the world’s billion CCTV cameras are in China. Many have built-in facial recognition and are carefully positioned to gather maximal information, often in quasi-private spaces: residential buildings, hotels, even karaoke lounges. A New York Times investigation found the police in Fujian Province alone estimated they held a database of 2.5 billion facial images. 
They were candid about its purpose: “controlling and managing people.” Authorities are also looking to suck in audio data—police in the city of Zhongshan wanted cameras that could record audio within a three-hundred-foot radius—and close monitoring and storage of bio-data became routine in the COVID era. The Ministry of Public Security is clear on the next priority: stitch these scattered databases and services into a coherent whole, from license plates to DNA, WeChat accounts to credit cards. This AI-enabled system could spot emerging threats to the CCP like dissenters and protests in real time, allowing for a seamless, crushing government response to anything it perceived as undesirable. Nowhere does this come together with more horrifying potential than in the Xinjiang Autonomous Region. This rugged and remote part of northwest China has seen the systematic and technologically empowered repression and ethnic cleansing of its native Uighur people. All these systems of monitoring and control are brought together here. Cities are placed under blankets of camera surveillance with facial recognition and AI tracking. Checkpoints and “reeducation” camps govern movements and freedoms. A system of social credit scores based on numerous surveilled databases keeps tabs on the population. Authorities have built an iris-scan database that has the capacity to hold up to thirty million samples—more than the region’s population. Societies of overweening surveillance and control are already here, and now all of this is set to escalate enormously into a next-level concentration of power at the center. Yet it would be a mistake to write this off as just a Chinese or authoritarian problem. For a start, this tech is being exported wholesale to places like Venezuela and Zimbabwe, Ecuador and Ethiopia. Even to the United States. In 2019, the U.S. government banned federal agencies and their contractors from buying telecommunications and surveillance equipment from a number of Chinese providers including Huawei, ZTE, and Hikvision. Yet, just a year later, three federal agencies were found to have bought such equipment from prohibited vendors. More than one hundred U.S. towns have even acquired technology developed for use on the Uighurs in Xinjiang. A textbook failure of containment. Western firms and governments are also in the vanguard of building and deploying this tech. Invoking London above was no accident: it competes with cities like Shenzhen for most surveilled in the world. It’s no secret that governments monitor and control their own populations, but these tendencies extend deep into Western firms, too. In smart warehouses every micromovement of every worker is tracked down to body temperature and loo breaks. Companies like Vigilant Solutions aggregate movement data based on license plate tracking, then sell it to jurisdictions like state or municipal governments. Even your take-out pizza is being watched: Domino’s uses AI-powered cameras to check its pies. Just as much as anyone in China, those in the West leave a vast data exhaust every day of their lives. And just as in China, it is harvested, processed, operationalized, and sold. — Before the coming wave the notion of a global “high-tech panopticon” was the stuff of dystopian novels, Yevgeny Zamyatin’s We or George Orwell’s 1984. The panopticon is becoming possible. Billions of devices and trillions of data points could be operated and monitored at once, in real time, used not just for surveillance but for prediction. 
Not only will it foresee social outcomes with precision and granularity, but it might also subtly or overtly steer or coerce them, from grand macro-processes like election results down to individual consumer behaviors. This raises the prospect of totalitarianism to a new plane. It won’t happen everywhere, and not all at once. But if AI, biotech, quantum, robotics, and the rest of it are centralized in the hands of a repressive state, the resulting entity would be palpably different from any yet seen. In the next chapter we will return to this possibility. However, before then comes another trend. One completely, and paradoxically, at odds with centralization. Suleyman, Mustafa. The Coming Wave (p. 246). Crown. Kindle Edition.

The impact of uncontrolled AI is catastrophe that kills a billion

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

VARIETIES OF CATASTROPHE To see what catastrophic harms we should prepare for, simply extrapolate the bad actor attacks we saw in chapter 10. Here are just a few plausible scenarios. Terrorists mount automatic weapons equipped with facial recognition to an autonomous drone swarm hundreds or thousands strong, each capable of quickly rebalancing from the weapon’s recoil, firing short bursts, and moving on. These drones are unleashed on a major downtown with instructions to kill a specific profile. In busy rush hour these would operate with terrifying efficiency, following an optimized route around the city. In minutes there would be an attack at far greater scale than, say, the 2008 Mumbai attacks, which saw armed terrorists roaming through city landmarks like the central train station. A mass murderer decides to hit a huge political rally with drones, spraying devices, and a bespoke pathogen. Soon attendees become sick, then their families. The speaker, a much-loved and much-loathed political lightning rod, is one of the first victims. In a febrile partisan atmosphere an assault like this ignites violent reprisals around the country and the chaos cascades. Using only natural language instruction, a hostile conspiracist in America disseminates masses of surgically constructed and divisive disinformation. Numerous attempts are made, most of which fail to gain traction. One eventually catches on: a police murder in Chicago. It’s completely fake, but the trouble on the streets, the widespread revulsion, is real. The attackers now have a playbook. By the time the video is verified as a fraud, violent riots with multiple casualties roil around the country, the fires continually stoked by new gusts of disinformation. Or imagine all that happening at the same time. Or not just at one event or in one city, but in hundreds of places. With tools like this it doesn’t take too much to realize that bad actor empowerment opens the door to catastrophe. Today’s AI systems try hard not to tell you how to poison the water supply or build an undetectable bomb. They are not yet capable of defining or pursuing goals on their own. However, as we have seen, both more widely diffused and less safe versions of today’s cutting-edge and more powerful models are coming, fast. Of all the catastrophic risks from the coming wave, AI has received the most coverage. But there are plenty more. Once militaries are fully automated, the barriers to entry for conflict will be far lower. A war might be sparked accidentally for reasons that forever remain unclear, AIs detecting some pattern of behavior or threat and then reacting, instantaneously, with overwhelming force. Suffice to say, the nature of that war could be alien, escalate quickly, and be unsurpassed in destructive consequences. We’ve already come across engineered pandemics and the perils of accidental releases, and glimpsed what happens when millions of self-improvement enthusiasts can experiment with the genetic code of life. An extreme bio-risk event of a less obvious kind, targeting a given portion of the population, say, or sabotaging an ecosystem, cannot be discounted. Imagine activists wanting to stop the cocaine trade inventing a new bug that targets only coca plants as a way to replace aerial fumigation. Or if militant vegans decided to disrupt the entire meat supply chain, with dire anticipated and unanticipated consequences. Either might spiral out of control. 
We know what a lab leak might look like in the context of amplifying fragility, but if it was not quickly brought under control, it would rank with previous plagues. To put this in context, the omicron variant of COVID infected a quarter of Americans within a hundred days of first being identified. What if we had a pandemic that had, say, a 20 percent mortality rate, but with that kind of transmissibility? Or what if it was a kind of respiratory HIV that would lie incubating for years with no acute symptoms? A novel human transmissible virus with a reproduction rate of, say, 4 (far below chicken pox or measles) and a case fatality rate of 50 percent (far below Ebola or bird flu) could, even accounting for lockdown-style measures, cause more than a billion deaths in a matter of months. What if multiple such pathogens were released at once? This goes far beyond fragility amplification; it would be an unfathomable calamity.
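As a rough back-of-envelope check on the figures in the card above (this sketch is not from Suleyman's text), the standard SIR final-size relation z = 1 − exp(−R0·z) implies that a pathogen with a reproduction rate of about 4 would, with no mitigation, eventually infect roughly 98 percent of a susceptible population; even if control measures cut infections in half, a 50 percent case fatality rate applied to a world population of roughly 8 billion still lands around 2 billion deaths, consistent with the "more than a billion" claim. The Python sketch below makes that arithmetic explicit; the function name, the 8-billion population figure, and the 50 percent suppression factor are illustrative assumptions rather than anything stated in the source.

import math

def sir_final_size(r0: float, tol: float = 1e-12) -> float:
    """Fraction of a fully susceptible population eventually infected in a
    simple SIR model: solves z = 1 - exp(-r0 * z) by fixed-point iteration."""
    z = 0.5  # start away from the trivial z = 0 solution
    for _ in range(10_000):
        z_next = 1.0 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            break
        z = z_next
    return z

# Figures from the quoted passage: R0 = 4, case fatality rate = 50%.
# World population (~8 billion) and the 50% suppression factor are
# illustrative assumptions, not from the source.
R0, CFR, WORLD_POP, SUPPRESSION = 4.0, 0.5, 8e9, 0.5

attack_rate = sir_final_size(R0)                      # ~0.98 unmitigated
deaths = WORLD_POP * attack_rate * SUPPRESSION * CFR  # ~2 billion

print(f"unmitigated final attack rate: {attack_rate:.1%}")
print(f"deaths under these assumptions: {deaths / 1e9:.1f} billion")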

“AI Bad” isn’t fear mongering, it’s based on objective risk

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

The promise of technology is that it improves lives, the benefits far outweighing the costs and downsides. This set of wicked choices means that promise has been savagely inverted. Doom-mongering makes people—myself included—glassy-eyed. At this point, you may be feeling wary or skeptical. Talking of catastrophic effects often invites ridicule: accusations of catastrophism, indulgent negativity, shrill alarmism, navel-gazing on remote and rarefied risks when plenty of clear and present dangers scream for attention. Like breathless techno-optimism, breathless techno-catastrophism is easy to dismiss as a twisted, misguided form of hype unsupported by the historical record. But just because a warning has dramatic implications isn’t good grounds to automatically reject it. The pessimism-averse complacency greeting the prospect of disaster is itself a recipe for disaster. It feels plausible, rational in its own terms, “smart” to dismiss warnings as the overblown chatter of a few weirdos, but this attitude prepares the way for its own failure. No doubt, technological risk takes us into uncertain territory. Nonetheless, all the trends point to a profusion of risk. This speculation is grounded in constantly compounding scientific and technological improvements. Those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking here about the proliferation of motorbikes or washing machines. Suleyman, Mustafa. The Coming Wave (p. 259). Crown. Kindle Edition.


AI means autonomous drones, cyberwarfare, defeated regulations, and financial collapse

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

The cost of military-grade drones has fallen by three orders of magnitude over the last decade. By 2028, $26 billion a year will be spent on military drones, and at that point many are likely to be fully autonomous. Live deployments of autonomous drones are becoming more plausible by the day. In May 2021, for example, an AI drone swarm in Gaza was used to find, identify, and attack Hamas militants. Start-ups like Anduril, Shield AI, and Rebellion Defense have raised hundreds of millions of dollars to build autonomous drone networks and other military applications of AI. Complementary technologies like 3-D printing and advanced mobile communications will reduce the cost of tactical drones to a few thousand dollars, putting them within reach of everyone from amateur enthusiasts to paramilitaries to lone psychopaths. In addition to easier access, AI-enhanced weapons will improve themselves in real time. WannaCry’s impact ended up being far more limited than it could have been. Once the software patch was applied, the immediate issue was resolved. AI transforms this kind of attack. AI cyberweapons will continuously probe networks, adapting themselves autonomously to find and exploit weaknesses. Existing computer worms replicate themselves using a fixed set of preprogrammed heuristics. But what if you had a worm that improved itself using reinforcement learning, experimentally updating its code with each network interaction, each time finding more and more efficient ways to take advantage of cyber vulnerabilities? Just as systems like AlphaGo learn unexpected strategies from millions of self-played games, so too will AI-enabled cyberattacks. However much you war-game every eventuality, there’s inevitably going to be a tiny vulnerability discoverable by a persistent AI. Everything from cars and planes to fridges and data centers relies on vast code bases. The coming AIs make it easier than ever to identify and exploit weaknesses. They could even find legal or financial means of damaging corporations or other institutions, hidden points of failure in banking regulation or technical safety protocols. As the cybersecurity expert Bruce Schneier has pointed out, AIs could digest the world’s laws and regulations to find exploits, arbitraging legalities. Imagine a huge cache of documents from a company leaked. A legal AI might be able to parse this against multiple legal systems, figure out every possible infraction, and then hit that company with multiple crippling lawsuits around the world at the same time. AIs could develop automated trading strategies designed to destroy competitors’ positions or create disinformation campaigns (more on this in the next section) engineering a run on a bank or a product boycott, enabling a competitor to swoop in and buy the company—or simply watch it collapse. AI adept at exploiting not just financial, legal, or communications systems but also human psychology, our weaknesses and biases, is on the way. Researchers at Meta created a program called CICERO. It became an expert at playing the complex board game Diplomacy, a game in which planning long, complex strategies built around deception and backstabbing is integral. It shows how AIs could help us plan and collaborate, but also hints at how they could develop psychological tricks to gain trust and influence, reading and manipulating our emotions and behaviors with a frightening level of depth, a skill useful in, say, winning at Diplomacy or electioneering and building a political movement. 
The space for possible attacks against key state functions grows even as the same premise that makes AI so powerful and exciting—its ability to learn and adapt—empowers bad actors. For centuries cutting-edge offensive capabilities, like massed artillery, naval broadsides, tanks, aircraft carriers, or ICBMs, have initially been so costly that they remained the province of the nation-state. Now they are evolving so fast that they quickly proliferate into the hands of research labs, start-ups, and garage tinkerers. Just as social media’s one-to-many broadcast effect means a single person can suddenly broadcast globally, so the capacity for far-reaching consequential action is becoming available to everyone. This new dynamic—where bad actors are emboldened to go on the offensive—opens up new vectors of attack thanks to the interlinked, vulnerable nature of modern systems: not just a single hospital but an entire health system can be hit; not just a warehouse but an entire supply chain. With lethal autonomous weapons the costs, in both material and above all human terms, of going to war, of attacking, are lower than ever. At the same time, all this introduces greater levels of deniability and ambiguity, degrading the logic of deterrence. If no one can be sure who initiated an assault, or what exactly has happened, why not go ahead? Suleyman, Mustafa. The Coming Wave (p. 212). Crown. Kindle Edition.

The “good guys” with an AI will not be able to keep up with the “bad guys,” especially in the short term

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Throughout history technology has produced a delicate dance of offensive and defensive advantage, the pendulum swinging between the two but a balance roughly holding: for every new projectile or cyberweapon, a potent countermeasure has quickly arisen. Cannons may wear down a castle’s walls, but they can also rip apart an invading army. Now, powerful, asymmetric, omni-use technologies are certain to reach the hands of those who want to damage the state. While defensive operations will be strengthened in time, the nature of the four features favors offense: this proliferation of power is just too wide, fast, and open. An algorithm of world-changing significance can be stored on a laptop; soon it won’t even require the kind of vast, regulatable infrastructure of the last wave and the internet. Unlike an arrow or even a hypersonic missile, AI and bioagents will evolve more cheaply, more rapidly, and more autonomously than any technology we’ve ever seen. Consequently, without a dramatic set of interventions to alter the current course, millions will have access to these capabilities in just a few years. Maintaining a decisive, indefinite strategic advantage across such a broad spectrum of general-use technologies is simply not possible. Eventually, the balance might be restored, but not before a wave of immensely destabilizing force is unleashed. And as we’ve seen, the nature of the threat is far more widespread than blunt forms of physical assault. Information and communication together is its own escalating vector of risk, another emerging fragility amplifier requiring attention. Welcome to the deepfake era. Suleyman, Mustafa. The Coming Wave (p. 213). Crown. Kindle Edition.

Deep fakes are indistinguishable from reality

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Ask yourself, what happens when anyone has the power to create and broadcast material with incredible levels of realism? These examples occurred before the means to generate near-perfect deepfakes—whether text, images, video, or audio—became as easy as writing a query into Google. As we saw in chapter 4, large language models now show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real. Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more everyday people will be imitated as the required training data falls to just a handful of examples. It’s already happening. A bank in Hong Kong transferred millions of dollars to fraudsters in 2021, after one of their clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained how the company needed to move money for an acquisition. All the documents seemed to check out, the voice and character were flawlessly familiar, so the manager initiated the transfer. Anyone motivated to sow instability now has an easier time of it. Say three days before an election the president is caught on camera using a racist slur. The campaign press office strenuously denies it, but everyone knows what they’ve seen. Outrage seethes around the country. Polls nose-dive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks. The threat here lies not so much with extreme cases as in subtle, nuanced, and highly plausible scenarios being exaggerated and distorted. It’s not the president charging into a school screaming nonsensical rubbish while hurling grenades; it’s the president resignedly saying he has no choice but to institute a set of emergency laws or reintroduce the draft. It’s not Hollywood fireworks; it’s the purported surveillance camera footage of a group of white policemen caught on tape beating a Black man to death. Sermons from the radical preacher Anwar al-Awlaki inspired the Boston Marathon bombers, the attackers of Charlie Hebdo in Paris, and the shooter who killed forty-nine people at an Orlando nightclub. Yet al-Awlaki died in 2011, the first U.S. citizen killed by a U.S. drone strike, before any of these events. His radicalizing messages were, though, still available on YouTube until 2017. Suppose that using deepfakes new videos of al-Awlaki could be “unearthed,” each commanding further targeted attacks with precision-honed rhetoric. Not everyone would buy it, but those who wanted to believe would find it utterly compelling. Soon these videos will be fully and believably interactive. You are talking directly to him. He knows you and adapts to your dialect and style, plays on your history, your personal grievances, your bullying at school, your terrible, immoral Westernized parents. This is not disinformation as blanket carpet bombing; it’s disinformation as surgical strike. Phishing attacks against politicians or businesspeople, disinformation with the aim of major financial-market disruption or manipulation, media designed to poison key fault lines like sectarian or racial divides, even low-level scams—trust is damaged and fragility again amplified. 
Eventually entire and rich synthetic histories of seemingly real-world events will be easy to generate. Individual citizens won’t have time or the tools to verify a fraction of the content coming their way. Fakes will easily pass sophisticated checks, let alone a two-second smell test.

AI leads to massive disinformation campaigns

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

In the 1980s, the Soviet Union funded disinformation campaigns suggesting that the AIDS virus was the result of a U.S. bioweapons program. Years later, some communities were still dealing with the mistrust and fallout. The campaigns, meanwhile, have not stopped. According to Facebook, Russian agents created no fewer than eighty thousand pieces of organic content that reached 126 million Americans on their platforms during the 2016 election. AI-enhanced digital tools will exacerbate information operations like these, meddling in elections, exploiting social divisions, and creating elaborate astroturfing campaigns to sow chaos. Unfortunately, it’s far from just Russia. More than seventy countries have been found running disinformation campaigns. China is quickly catching up with Russia; others from Turkey to Iran are developing their skills. (The CIA, too, is no stranger to info ops.) Early in the COVID-19 pandemic a blizzard of disinformation had deadly consequences. A Carnegie Mellon study analyzed more than 200 million tweets discussing COVID-19 at the height of the first lockdown. Eighty-two percent of influential users advocating for “reopening America” were bots. This was a targeted “propaganda machine,” most likely Russian, designed to intensify the worst public health crisis in a century. Deepfakes automate these information assaults. Until now effective disinformation campaigns have been labor-intensive. While bots and fakes aren’t difficult to make, most are of low quality, easily identifiable, and only moderately effective at actually changing targets’ behavior. High-quality synthetic media changes this equation. Not all nations currently have the funds to build huge disinformation programs, with dedicated offices and legions of trained staff, but that’s less of a barrier when high-fidelity material can be generated at the click of a button. Much of the coming chaos will not be accidental. It will come as existing disinformation campaigns are turbocharged, expanded, and devolved out to a wide group of motivated actors. The rise of synthetic media at scale and minimal cost amplifies both disinformation (malicious and intentionally misleading information) and misinformation (a wider and more unintentional pollution of the information space) at once. Cue an “Infocalypse,” the point at which society can no longer manage a torrent of sketchy material, where the information ecosystem grounding knowledge, trust, and social cohesion, the glue holding society together, falls apart. In the words of a Brookings Institution report, ubiquitous, perfect synthetic media means “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.” Not all stressors and harms come from bad actors,

AI leads to the creation of pandemic pathogens that could kill everyone

Mustafa Suleyman, September 2023, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection Ai, https://www.youtube.com/watch?v=CTxnLsYHWuI

25:52

I think that the darkest scenario there is that people will experiment with pathogens engineered you know synthetic pathogens that might end up accidentally or intentionally being more transmissible i.e. they they can spread faster or more lethal i.e. you know they cause more harm or potentially kill like a pandemic like a pandemic um and that’s where we need containment right we have to limit access to the tools and the know-how to carry out that kind of experimentation so one framework of thinking about this with respect to making containment possible is that we really are experimenting with dangerous materials and anthrax is not something that can be bought over the internet that can be freely experimented with and likewise the very best of these tools in a few years time are going to be capable of creating you know new synthetic pandemic pathogens and so we have to restrict access to those things that means restricting access to the compute it means restricting access to the software that runs the models to the cloud environments that provide APIs provide you access to experiment with those things and of course on the biology side it means restricting access to some of the substances and people aren’t going to like this people are not going to like that claim because it means that those who want to do good with those tools those who want to create a startup the small guy the little developer that struggles to comply with all the regulations they’re going to be pissed off understandably right but that is the age we’re in deal with it like we have to confront that reality that means that we have to approach this with the precautionary principle right never before in the invention of a technology or in the creation of a regulation have we proactively said we need to go slowly we need to make sure that this first does no harm the precautionary principle and that is just an unprecedented moment no other technology’s done that right because I think we collectively in the industry those of us who are closest to the work can see a place in five years or ten years where it could get out of control and we have to get on top of it now and it’s better to forgo like that is give up some of those potential upsides or benefits until we can be more sure that it can be contained that it can be controlled that it always serves our collective interests think about that so I think about what you’ve just said there about being able to create these pathogens these diseases and viruses etc. that you know could become weapons or whatever else but with artificial intelligence and the power of that intelligence with these um pathogens you could theoretically ask one of these systems to create a virus that a very deadly virus um you could ask the artificial intelligence to create a very deadly virus that has certain properties um maybe even that mutates over time in a certain way so it only kills a certain amount of people kind of like a nuclear bomb of viruses that you could just pop hit an enemy with now if I’m if I hear that and I go okay that’s powerful I would like one of those you know there might be an adversary out there that goes I would like one of those just in case America get out of hand in America is thinking you know I want one of those in case Russia gets out of hand and so okay you might take a precautionary approach in the United States but that’s only going to put you on the back foot when China or Russia or one of your adversaries accelerates forward in that in that path and it’s the same with the the nuclear bomb and you know you nailed it I mean that is the race condition we refer to that as the race condition the idea that if I don’t do it the other party is gonna do it and therefore I must do it but the problem with that is that it creates a self-fulfilling prophecy so the default there is that we all end up doing it and that can’t be right because there is a opportunity for massive cooperation here there’s a shared that is between us and China and every other quote unquote them or they or enemy that we want to create we’ve all got a shared interest in advancing the collective health and well-being of humans and humanity how well have we done it promoting shared interest well in the development of technologies over the years even at like a corporate level even you know you know the nuclear non-proliferation treaty has been reasonably successful there’s only nine nuclear states in the world today we’ve stopped many like three countries actually gave up nuclear weapons because we incentivized them with sanctions and threats and economic rewards um small groups have tried to get access to nuclear weapons and so far have largely failed it’s expensive though right and hard to like uranium as a as a chemical to keep it stable and to to buy it and to house it I mean I can just put it in the shed you certainly couldn’t put it in a shed you can’t download uranium 235 off the internet it’s not available open source that is totally true so it’s got different characteristics for sure but a kid in Russia could you know in his bedroom could download something onto his computer that’s

AGI in 3 years

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Today, AI systems can almost perfectly recognize faces and objects. We take speech-to-text transcription and instant language translation for granted. AI can navigate roads and traffic well enough to drive autonomously in some settings. Based on a few simple prompts, a new generation of AI models can generate novel images and compose text with extraordinary levels of detail and coherence. AI systems can produce synthetic voices with uncanny realism and compose music of stunning beauty. Even in more challenging domains, ones long thought to be uniquely suited to human capabilities like long-term planning, imagination, and simulation of complex ideas, progress leaps forward. AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. Suleyman, Mustafa. The Coming Wave (p. 23). Crown. Kindle Edition. In 2010 almost no one was talking seriously about AI. Yet what had once seemed a niche mission for a small group of researchers and entrepreneurs has now become a vast global endeavor. AI is everywhere, on the news and in your smartphone, trading stocks and building websites. Many of the world’s largest companies and wealthiest nations barrel forward, developing cutting-edge AI models and genetic engineering techniques, fueled by tens of billions of dollars in investment. Once matured, these emerging technologies will spread rapidly, becoming cheaper, more accessible, and widely diffused throughout society. They will offer extraordinary new medical advances and clean energy breakthroughs, creating not just new businesses but new industries and quality of life improvements in almost every imaginable area. Suleyman, Mustafa. The Coming Wave (p. 24). Crown. Kindle Edition.

AI means automated warfare that threatens civilization

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

And yet alongside these benefits, AI, synthetic biology, and other advanced forms of technology produce tail risks on a deeply concerning scale. They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention. Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers. This will require huge levels of surveillance along with massive intrusions into our private lives. Keeping a tight rein on technology could become part of a drift to everything and everyone being watched, all the time, in a dystopian global surveillance system justified by a desire to guard against the most extreme possible outcomes. Suleyman, Mustafa. The Coming Wave (pp. 24-25). Crown. Kindle Edition.

Banning tech means societal collapse

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Equally plausible is a Luddite reaction. Bans, boycotts, and moratoriums will ensue. Is it even possible to step away from developing new technologies and introduce a series of moratoriums? Unlikely. With their enormous geostrategic and commercial value, it’s difficult to see how nation-states or corporations will be persuaded to unilaterally give up the transformative powers unleashed by these breakthroughs. Moreover, attempting to ban development of new technologies is itself a risk: technologically stagnant societies are historically unstable and prone to collapse. Eventually, they lose the capacity to solve problems, to progress. Suleyman, Mustafa. The Coming Wave (p. 25). Crown. Kindle Edition.

DNA synthesizers can already create bioweapons

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Over the course of the day a series of hair-raising risks were floated over the coffees, biscuits, and PowerPoints. One stood out. The presenter showed how the price of DNA synthesizers, which can print bespoke strands of DNA, was falling rapidly. Costing a few tens of thousands of dollars, they are small enough to sit on a bench in your garage and let people synthesize—that is, manufacture—DNA. And all this is now possible for anyone with graduate-level training in biology or an enthusiasm for self-directed learning online. Given the increasing availability of the tools, the presenter painted a harrowing vision: Someone could soon create novel pathogens far more transmissible and lethal than anything found in nature. These synthetic pathogens could evade known countermeasures, spread asymptomatically, or have built-in resistance to treatments. If needed, someone could supplement homemade experiments with DNA ordered online and reassembled at home. The apocalypse, mail ordered. This was not science fiction, argued the presenter, a respected professor with more than two decades of experience; it was a live risk, now. They finished with an alarming thought: a single person today likely “has the capacity to kill a billion people.” All it takes is motivation. The attendees shuffled uneasily. People squirmed and coughed. Then the griping and hedging started. No one wanted to believe this was possible. Surely it wasn’t the case, surely there had to be some effective mechanisms for control, surely the diseases were difficult to create, surely the databases could be locked down, surely the hardware could be secured. And so on. Suleyman, Mustafa. The Coming Wave (p. 28). Crown. Kindle Edition.

Tech bans fail

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

People throughout history have attempted to resist new technologies because they felt threatened and worried their livelihoods and way of life would be destroyed. Fighting, as they saw it, for the future of their families, they would, if necessary, physically destroy what was coming. If peaceful measures failed, Luddites wanted to take apart the wave of industrial machinery. Under the seventeenth-century Tokugawa shogunate, Japan shut out the world—and by extension its barbarous inventions—for nearly three hundred years. Like most societies throughout history, it was distrustful of the new, the different, and the disruptive. Similarly, China dismissed a British diplomatic mission and its offer of Western tech in the late eighteenth century, with the Qianlong emperor arguing, “Our Celestial Empire possesses all things in prolific abundance and lacks no product within its borders. There is therefore no need to import the manufactures of outside barbarians.” None of it worked. The crossbow survived until it was usurped by guns. Queen Elizabeth’s knitting machine returned, centuries later, in the supercharged form of large-scale mechanical looms to spark the Industrial Revolution. China and Japan are today among the most technologically advanced and globally integrated places on earth. The Luddites were no more successful at stopping new industrial technologies than horse owners and carriage makers were at preventing cars. Where there is demand, technology always breaks out, finds traction, builds users. Once established, waves are almost impossible to stop. As the Ottomans discovered when it came to printing, resistance tends to be ground down with the passage of time. Technology’s nature is to spread, no matter the barriers. Plenty of technologies come and go. You don’t see too many penny-farthings or Segways, listen to many cassettes or minidiscs. But that doesn’t mean personal mobility and music aren’t ubiquitous; older technologies have just been replaced by new, more efficient forms. We don’t ride on steam trains or write on typewriters, but their ghostly presence lives on in their successors, like Shinkansens and MacBooks. Think of how, as parts of successive waves, fire, then candles and oil lamps, gave way to gas lamps and then to electric lightbulbs, and now LED lights, and the totality of artificial light increased even as the underlying technologies changed. New technologies supersede multiple predecessors. Just as electricity did the work of candles and steam engines alike, so smartphones replaced satnavs, cameras, PDAs, computers, and telephones (and invented entirely new classes of experience: apps). As technologies let you do more, for less, their appeal only grows, along with their adoption. Imagine trying to build a contemporary society without electricity or running water or medicines. Even if you could, how would you convince anyone it was worthwhile, desirable, a decent trade? Few societies have ever successfully removed themselves from the technological frontier; doing so usually either is part of a collapse or precipitates one. There is no realistic way to pull back. Suleyman, Mustafa. The Coming Wave (pp. 58-59). Crown. Kindle Edition.

No reason computers can’t achieve AGI

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Sometimes people seem to suggest that in aiming to replicate human-level intelligence, AI chases a moving target or that there is always some ineffable component forever out of reach. That’s just not the case. The human brain is said to contain around 100 billion neurons with 100 trillion connections between them—it is often said to be the most complex known object in the universe. It’s true that we are, more widely, complex emotional and social beings. But humans’ ability to complete given tasks—human intelligence itself—is very much a fixed target, as large and multifaceted as it is. Unlike the scale of available compute, our brains do not radically change year by year. In time this gap will be closed. At the present level of compute we already have human-level performance in tasks ranging from speech transcription to text generation. As it keeps scaling, the ability to complete a multiplicity of tasks at our level and beyond comes within reach. AI will keep getting radically better at everything, and so far there seems no obvious upper limit on what’s possible. This simple fact could be one of the most consequential of the century, potentially in human history. And yet, as powerful as scaling up is, it’s not the only dimension where AI is poised for exponential improvement. Suleyman, Mustafa. The Coming Wave (pp. 90-91). Crown. Kindle Edition.

Models are being trained to reduce bias

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

It took LLMs just a few years to change AI. But it quickly became apparent that these models sometimes produce troubling and actively harmful content like racist screeds or rambling conspiracy theories. Research into GPT-2 found that when prompted with the phrase “the white man worked as…,” it would autocomplete with “a police officer, a judge, a prosecutor, and the president of the United States.” Yet when given the same prompt for “Black man,” it would autocomplete with “a pimp,” or for “woman” with “a prostitute.” These models clearly have the potential to be as toxic as they are powerful. Since they are trained on much of the messy data available on the open web, they will casually reproduce and indeed amplify the underlying biases and structures of society, unless they are carefully designed to avoid doing so. The potential for harm, abuse, and misinformation is real. But the positive news is that many of these issues are being improved with larger and more powerful models. Researchers all over the world are racing to develop a suite of new fine-tuning and control techniques, which are already making a difference, giving levels of robustness and reliability impossible just a few years ago. Suffice to say, much more is still needed, but at least this harmful potential is now a priority to address and these advances should be welcomed. Suleyman, Mustafa. The Coming Wave (p. 93). Crown. Kindle Edition.

AI will overcome limitations

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Despite recent breakthroughs, skeptics remain. They argue that AI may be slowing, narrowing, becoming overly dogmatic. Critics like NYU professor Gary Marcus believe deep learning’s limitations are evident, that despite the buzz of generative AI the field is “hitting a wall,” that it doesn’t present any path to key milestones like being capable of learning concepts or demonstrating real understanding. The eminent professor of complexity Melanie Mitchell rightly points out that present-day AI systems have many limitations: they can’t transfer knowledge from one domain to another, provide quality explanations of their decision-making process, and so on. Significant challenges with real-world applications linger, including material questions of bias and fairness, reproducibility, security vulnerabilities, and legal liability. Urgent ethical gaps and unsolved safety questions cannot be ignored. Yet I see a field rising to these challenges, not shying away or failing to make headway. I see obstacles but also a track record of overcoming them. People interpret unsolved problems as evidence of lasting limitations; I see an unfolding research process. So, where does AI go next as the wave fully breaks? Today we have narrow or weak AI: limited and specific versions. GPT-4 can spit out virtuoso texts, but it can’t turn around tomorrow and drive a car, as other AI programs do. Existing AI systems still operate in relatively narrow lanes. What is yet to come is a truly general or strong AI capable of human-level performance across a wide range of complex tasks—able to seamlessly shift among them. But this is exactly what the scaling hypothesis predicts is coming and what we see the first signs of in today’s systems. AI is still in an early phase. It may look smart to claim that AI doesn’t live up to the hype, and it’ll earn you some Twitter followers. Meanwhile, talent and investment pour into AI research nonetheless. I cannot imagine how this will not prove transformative in the end. If for some reason LLMs show diminishing returns, then another team, with a different concept, will pick up the baton, just as the internal combustion engine repeatedly hit a wall but made it in the end. Fresh minds, new companies, will keep working at the problem. Then as now, it takes only one breakthrough to change the trajectory of a technology. If AI stalls, it will have its Otto and Benz eventually. Further progress—exponential progress—is the most likely outcome. The wave will only grow. Suleyman, Mustafa. The Coming Wave (p. 98). Crown. Kindle Edition.

AI will be able to accomplish open-ended complex goals

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

But, as many have pointed out, intelligence is about so much more than just language (or indeed any other single facet of intelligence taken in isolation). One particularly important dimension is in the ability to take actions. We don’t just care about what a machine can say; we also care about what it can do. What we would really like to know is, can I give an AI an ambiguous, open-ended, complex goal that requires interpretation, judgment, creativity, decision-making, and acting across multiple domains, over an extended time period, and then see the AI accomplish that goal? Put simply, passing a Modern Turing Test would involve something like the following: an AI being able to successfully act on the instruction “Go make $1 million on Amazon in a few months with just a $100,000 investment.” It might research the web to look at what’s trending, finding what’s hot and what’s not on Amazon Marketplace; generate a range of images and blueprints of possible products; send them to a drop-ship manufacturer it found on Alibaba; email back and forth to refine the requirements and agree on the contract; design a seller’s listing; and continually update marketing materials and product designs based on buyer feedback. Aside from the legal requirements of registering as a business on the marketplace and getting a bank account, all of this seems to me eminently doable. I think it will be done with a few minor human interventions within the next year, and probably fully autonomously within three to five years. Should my Modern Turing Test for the twenty-first century be met, the implications for the global economy are profound. Many of the ingredients are in place. Image generation is well advanced, and the ability to write and work with the kinds of APIs that banks and websites and manufacturers would demand is in process. That an AI can write messages or run marketing campaigns, all activities that happen within the confines of a browser, seems pretty clear. Already the most sophisticated services can do elements of this. Think of them as proto–to-do lists that do themselves, enabling the automation of a wide range of tasks. We’ll come to robots later, but the truth is that for a vast range of tasks in the world economy today all you need is access to a computer; most of global GDP is mediated in some way through screen-based interfaces amenable to an AI. The challenge is in advancing what AI developers call hierarchical planning, stitching multiple goals and subgoals and capabilities into a seamless process toward a singular end. Once this is achieved, it adds up to a highly capable AI, plugged into a business or organization and all its local history and needs, that can lobby, sell, manufacture, hire, plan—everything a company can do, only with a small team of human AI managers who oversee, double-check, implement, and co-CEO with the AI. Suleyman, Mustafa. The Coming Wave (p. 101). Crown. Kindle Edition.

We are close to artificial capable intelligence that can imagine, reason, plan, exhibit common sense, and transfer what it “knows” from one domain to another

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Rather than get too distracted by questions of consciousness, then, we should refocus the entire debate around near-term capabilities and how they will evolve in the coming years. As we have seen, from Hinton’s AlexNet to Google’s LaMDA, models have been improving at an exponential rate for more than a decade. These capabilities are already very real indeed, but they are nowhere near slowing down. While they are already having an enormous impact, they will be dwarfed by what happens as we progress through the next few doublings and as AIs complete complex, multistep end-to-end tasks on their own. I think of this as “artificial capable intelligence” (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight. AI and AGI are both parts of the everyday discussion, but we need a concept encapsulating a middle layer in which the Modern Turing Test is achieved but before systems display runaway “superintelligence.” ACI is shorthand for this point. The first stage of AI was about classification and prediction—it was capable, but only within clearly defined limits and at preset tasks. It could differentiate between cats and dogs in images, and then it could predict what came next in a sequence to produce pictures of those cats and dogs. It produced glimmers of creativity, and could be quickly integrated into tech companies’ products. ACI represents the next stage of AI’s evolution. A system that not only could recognize and generate novel images, audio, and language appropriate to a given context, but also would be interactive—operating in real time, with real users. It would augment these abilities with a reliable memory so that it could be consistent over extended timescales and could draw on other sources of data, including, for example, databases of knowledge, products, or supply-chain components belonging to third parties. Such a system would use these resources to weave together sequences of actions into long-term plans in pursuit of complex, open-ended goals, like setting up and running an Amazon Marketplace store. All of this, then, enables tool use and the emergence of real capability to perform a wide range of complex, useful actions. It adds up to a genuinely capable AI, an ACI. Conscious superintelligence? Who knows. But highly capable learning systems, ACIs, that can pass some version of the Modern Turing Test? Make no mistake: they are on their way, are already here in embryonic form. There will be thousands of these models, and they will be used by the majority of the world’s population. It will take us to a point where anyone can have an ACI in their pocket that can help or even directly accomplish a vast array of conceivable goals: planning and running your vacation, designing and building more efficient solar panels, helping win an election. It’s hard to say for certain what happens when everyone is empowered like this, but this is a point we’ll return to in part 3. The future of AI is, at least in one sense, fairly easy to predict. Over the next five years, vast resources will continue to be invested. Some of the smartest people on the planet are working on these problems. Orders of magnitude more computation will train the top models. All of this will lead to more dramatic leaps forward, including breakthroughs toward AI that can imagine, reason, plan, and exhibit common sense. It won’t be long before AI can transfer what it “knows” from one domain to another, seamlessly, as humans do. 
What are now only tentative signs of self-reflection and self-improvement will leap forward. These ACI systems will be plugged into the internet, capable of interfacing with everything we humans do, but on a platform of deep knowledge and ability. It will be not just language they’ve mastered but a bewildering array of tasks, too. Suleyman, Mustafa. The Coming Wave (p. 103). Crown. Kindle Edition.

AI spurs biotech development

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

In 2022, AlphaFold2 was opened up for public use. The result has been an explosion of the world’s most advanced machine learning tools, deployed in both fundamental and applied biological research: an “earthquake,” in the words of one researcher. More than a million researchers accessed the tool within eighteen months of launch, including virtually all the world’s leading biology labs, addressing questions from antibiotic resistance to the treatment of rare diseases to the origins of life itself. Previous experiments had delivered the structure of about 190,000 proteins to the European Bioinformatics Institute’s database, about 0.1 percent of known proteins in existence. DeepMind uploaded some 200 million structures in one go, representing almost all known proteins. Whereas once it might have taken researchers weeks or months to determine a protein’s shape and function, that process can now begin in a matter of seconds. This is what we mean by exponential change. This is what the coming wave makes possible. And yet this is only the beginning of a convergence of these two technologies. The bio-revolution is coevolving with advances in AI, and indeed many of the phenomena discussed in this chapter will rely on AI for their realization. Think, then, of two waves crashing together, not a wave but a superwave. Indeed, from one vantage artificial intelligence and synthetic biology are almost interchangeable. All intelligence to date has come from life. Call them synthetic intelligence and artificial life and they still mean the same thing. Both fields are about re-creating, engineering these utterly foundational and interrelated concepts, two core attributes of humanity; change the view and they become one single project. Biology’s sheer complexity opens up vast troves of data, like all those proteins, almost impossible to parse using traditional techniques. A new generation of tools has quickly become indispensable as a result. Teams are working on products that will generate new DNA sequences using only natural language instructions. Transformer models are learning the language of biology and chemistry, again discovering relationships and significance in long, complex sequences illegible to the human mind. LLMs fine-tuned on biochemical data can generate plausible candidates for new molecules and proteins, DNA and RNA sequences. They predict the structure, function, or reaction properties of compounds in simulation before these are later verified in a laboratory. The space of applications and the speed at which they can be explored is only accelerating. Some scientists are beginning to investigate ways to plug human minds directly into computer systems. In 2019, electrodes surgically implanted in the brain let a fully paralyzed man with late-stage ALS spell out the words “I love my cool son.” Companies like Neuralink are working on brain interfacing technology that promises to connect us directly with machines. In 2021 the company inserted three thousand filament-like electrodes, thinner than a human hair, that monitor neuron activity, into a pig’s brain. Soon they hope to begin human trials of their N1 brain implant, while another company, Synchron, has already started human trials in Australia. Scientists at a start-up called Cortical Labs have even grown a kind of brain in a vat (a bunch of neurons grown in vitro) and taught it to play Pong. It likely won’t be too long before neural “laces” made from carbon nanotubes plug us directly into the digital world. 
What happens when a human mind has instantaneous access to computation and information on the scale of the internet and the cloud? It’s almost impossible to imagine, but researchers are already in the early days of making it happen. As the central general-purpose technologies of the coming wave, AI and synthetic biology are already entangled, a spiraling feedback loop boosting each other. While the pandemic gave biotech a massive awareness boost, the full impact—possibilities and risks alike—of synthetic biology has barely begun to sink into the popular imagination. Welcome to the age of biomachines and biocomputers, where strands of DNA perform calculations and artificial cells are put to work. Where machines come alive. Welcome to the age of synthetic life. Suleyman, Mustafa. The Coming Wave (p. 120). Crown. Kindle Edition.

Renewables will power AI in the future

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer. Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities. Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation. However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real. Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Including wind, hydrogen, and improved battery technologies, here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential. Suleyman, Mustafa. The Coming Wave (pp. 131-132). Crown. Kindle Edition.

Without control, these technologies could kill us all

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

And yet alongside these benefits, AI, synthetic biology, and other advanced forms of technology produce tail risks on a deeply concerning scale. They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention. Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers. This will require huge levels of surveillance along with massive intrusions into our private lives. Keeping a tight rein on technology could become part of a drift to everything and everyone being watched, all the time, in a dystopian global surveillance system justified by a desire to guard against the most extreme possible outcomes. Suleyman, Mustafa. The Coming Wave (pp. 24-25). Crown. Kindle Edition.

Government cannot solve global problems

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Our institutions for addressing massive global problems were not fit for purpose. I saw something similar working for the mayor of London in my early twenties. My job was to audit the impact of human rights legislation on communities in the city. I interviewed everyone from British Bangladeshis to local Jewish groups, young and old, of all creeds and backgrounds. The experience showed how human rights law could help improve lives in a very practical way. Unlike the United States, the U.K. has no written constitution protecting people’s fundamental rights. Now local groups could take problems to local authorities and point out they had legal obligations to protect the most vulnerable; they couldn’t brush them under the carpet. On one level it was inspiring. It gave me hope: institutions could have a codified set of rules about justice. The system could deliver. But of course, the reality of London politics was very different. In practice everything devolved into excuses, blame shifting, media Suleyman, Mustafa. The Coming Wave (p. 189). Crown. Kindle Edition.

Fusion and solar solve the environmental harms

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Energy rivals intelligence and life in its fundamental importance. Modern civilization relies on vast amounts of it. Indeed, if you wanted to write the crudest possible equation for our world it would be something like this: (Life + Intelligence) x Energy = Modern Civilization. Increase any or all of those inputs (let alone supercharge their marginal cost toward zero) and you have a step change in the nature of society. Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer. Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities. Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation. However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real. Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Including wind, hydrogen, and improved battery technologies, here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential. Suleyman, Mustafa. The Coming Wave (pp. 131-132). Crown. Kindle Edition.

New biocomponents will be made from prompts

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

In chapter 5, we saw what tools like AlphaFold are doing to catalyze biotech. Until recently biotech relied on endless manual lab work: measuring, pipetting, carefully preparing samples. Now simulations speed up the process of vaccine discovery. Computational tools help automate parts of the design processes, re-creating the “biological circuits” that program complex functions into cells like bacteria that can produce a certain protein. Software frameworks, like one called Cello, are almost like open-source languages for synthetic biology design. This could mesh with fast-moving improvements in laboratory robotics and automation and faster biological techniques like the enzymatic synthesis we saw in chapter 5, expanding synthetic biology’s range and making it more accessible. Biological evolution is becoming subject to the same cycles as software. Just as today’s models produce detailed images based on a few words, so in decades to come similar models will produce a novel compound or indeed an entire organism with just a few natural language prompts. That compound’s design could be improved by countless self-run trials, just as AlphaZero became an expert chess or Go player through self-play. Quantum technologies, many millions of times more powerful than the most powerful classical computers, could let this play out at a molecular level. This is what we mean by hyper-evolution—a fast, iterative platform for creation. Nor will this evolution be limited to specific, predictable, and readily containable areas. It will be everywhere. Suleyman, Mustafa. The Coming Wave (pp. 142-143). Crown. Kindle Edition.

AI critical to drug development

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

and automation and faster biological techniques like the enzymatic synthesis we saw in chapter 5, expanding synthetic biology’s range and making it more accessible. Biological evolution is becoming subject to the same cycles as software. Just as today’s models produce detailed images based on a few words, so in decades to come similar models will produce a novel compound or indeed an entire organism with just a few natural language prompts. That compound’s design could be improved by countless self-run trials, just as AlphaZero became an expert chess or Go player through self-play. Quantum technologies, many millions of times more powerful than the most powerful classical computers, could let this play out at a molecular level. This is what we mean by hyper-evolution—a fast, iterative platform for creation. Nor will this evolution be limited to specific, predictable, and readily containable areas. It will be everywhere. Start-ups like Exscientia, alongside traditional pharmaceutical giants like Sanofi, have made AI a driver of medical research. To date eighteen clinical assets have been derived with the help of AI tools. Suleyman, Mustafa. The Coming Wave (p. 144). Crown. Kindle Edition.

AI will be used for bioweapons

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

There’s a flip side. Researchers looking for these helpful compounds raised an awkward question. What if you redirected the discovery process? What if, instead of looking for cures, you looked for killers? They ran a test, asking their molecule-generating AI to find poisons. In six hours it identified more than forty thousand molecules with toxicity comparable to the most dangerous chemical weapons, like Novichok. It turns out that in drug discovery, one of the areas where AI will undoubtedly make the clearest possible difference, the opportunities are very much “dual use.” Dual-use technologies are those with both civilian and military applications. In World War I, the process of synthesizing ammonia was seen as a way of feeding the world. But it also allowed for the creation of explosives, and helped pave the way for chemical weapons. Complex electronics systems for passenger aircraft can be repurposed for precision missiles. Conversely, the Global Positioning System was originally a military system, but now has countless everyday consumer uses. At launch, the PlayStation 2 was regarded by the U.S. Department of Defense as so powerful that it could potentially help hostile militaries usually denied access to such hardware. Dual-use technologies are both helpful and potentially destructive, tools and weapons. What the concept captures is how technologies tend toward the general, and a certain class of technologies come with a heightened risk because of this. They can be put toward many ends—good, bad, everywhere in between—often with difficult-to-predict consequences. Suleyman, Mustafa. The Coming Wave (pp. 144-145). Crown. Kindle Edition.

Superintelligence is not controllable

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

I’ve often felt there’s been too much focus on distant AGI scenarios, given the obvious near-term challenges present in so much of the coming wave. However, any discussion of containment has to acknowledge that if or when AGI-like technologies do emerge, they will present containment problems beyond anything else we’ve ever encountered. Humans dominate our environment because of our intelligence. A more intelligent entity could, it follows, dominate us. The AI researcher Stuart Russell calls it the “gorilla problem”: gorillas are physically stronger and tougher than any human being, but it is they who are endangered or living in zoos; they who are contained. We, with our puny muscles but big brains, do the containment. By creating something smarter than us, we could put ourselves in the position of our primate cousins. With a long-term view in mind, those focusing on AGI scenarios are right to be concerned. Indeed, there is a strong case that by definition a superintelligence would be fully impossible to control or contain. An “intelligence explosion” is the point at which an AI can improve itself again and again, recursively making itself better in ever faster and more effective ways. Here is the definitive uncontained and uncontainable technology. The blunt truth is that nobody knows when, if, or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values, assuming we can settle on those values in the first place. Nobody really knows how we can contain the very features being researched so intently in the coming wave. There comes a point where technology can fully direct its own evolution; where it is subject to recursive processes of improvement; where it passes beyond explanation; where it is consequently impossible to predict how it will behave in the wild; where, in short, we reach the limits of human agency and control. Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation. The real question is not whether the wave is coming. It clearly is; just look and you can see it forming already. Given risks like these, the real question is why it’s so hard to see it as anything other than inevitable.

There is no “threat construction” – other countries are developing AI and synthetics

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Something similar occurred in the late 1950s, when, in the wake of a Soviet ICBM test and Sputnik, Pentagon decision-makers became convinced of an alarming “missile gap” with the Russians. It later emerged that the United States had a ten-to-one advantage at the time of the key report. Khrushchev was following a tried-and-tested Soviet strategy: bluffing. Misreading the other side meant nuclear weapons and ICBMs were both brought forward by decades. Could this same mistaken dynamic be playing out in the current technological arms races? Actually, no. First, the coming wave’s proliferation risk is acute. Because these technologies are getting cheaper and simpler to use even as they get more powerful, more nations can engage at the frontier. Large language models are still seen as cutting-edge, yet there is no great magic or hidden state secret to them. Access to computation is likely the biggest bottleneck, but plenty of services exist to make it happen. The same goes for CRISPR or DNA synthesis. We can already see achievements like China’s moon landing or India’s billion-strong biometric identification system, Aadhaar, happening in real time. It’s no mystery that China has enormous LLMs, Taiwan is the leader in semiconductors, South Korea has world-class expertise in robots, and governments everywhere are announcing and implementing detailed technology strategies. This is happening out in the open, shared in patents and at academic conferences, reported in Wired and the Financial Times, broadcast live on Bloomberg. Declaring an arms race is no longer a conjuring act, a self-fulfilling prophecy. The prophecy has been fulfilled. It’s here, it’s happening. It is a point so obvious it doesn’t often get mentioned: there is no central authority controlling what technologies get developed, who does it, and for what purpose; technology is an orchestra with no conductor. Yet this single fact could end up being the most significant of the twenty-first century. And if the phrase “arms race” triggers worry, that’s with good reason. Suleyman, Mustafa. The Coming Wave (p. 164). Crown. Kindle Edition.

AI will massively accelerate global growth

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Little is ultimately more valuable than intelligence. Intelligence is the wellspring and the director, architect, and facilitator of the world economy. The more we expand the range and nature of intelligences on offer, the more growth should be possible. With generalist AI, plausible economic scenarios suggest it could lead not just to a boost in growth but to a permanent acceleration in the rate of growth itself. In blunt economic terms, AI could, long term, be the most valuable technology yet, more so when coupled with the potential of synthetic biology, robotics, and the rest. Suleyman, Mustafa. The Coming Wave (p. 175). Crown. Kindle Edition.

LLMs will be able to answer any question on any topic

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Think about the impact of the new wave of AI systems. Large language models enable you to have a useful conversation with an AI about any topic in fluent, natural language. Within the next couple of years, whatever your job, you will be able to consult an on-demand expert, ask it about your latest ad campaign or product design, quiz it on the specifics of a legal dilemma, isolate the most effective elements of a pitch, solve a thorny logistical question, get a second opinion on a diagnosis, keep probing and testing, getting ever more detailed answers grounded in the very cutting edge of knowledge, delivered with exceptional nuance. All of the world’s knowledge, best practices, precedent, and computational power will be available, tailored to you, to your specific needs and circumstances, instantaneously and effortlessly. Suleyman, Mustafa. The Coming Wave (pp. 174-175). Crown. Kindle Edition.

AI not very intelligent and won’t be soon

Issie Lapowsky, 9-5, 23, Fast Company, Why Meta’s Yann LeCun isn’t buying the AI doomer narrative, https://www.fastcompany.com/90947634/why-metas-yann-lecun-isnt-buying-the-ai-doomer-narrative

Of course, there are those who believe the opposite will be true—that as these systems improve, they’ll instead try to drive all of humanity off the proverbial cliff. Earlier this year, a slew of top AI minds, including Geoffrey Hinton and Yoshua Bengio, LeCun’s fellow “AI godfathers” who shared a 2018 Turing Award with him for their advancements in the field of deep learning, issued a one-sentence warning about the need to mitigate “the risk of extinction from AI,” comparing the technology to pandemics and nuclear war. LeCun, for one, isn’t buying the doomer narrative. Large language models are prone to hallucinations, and have no concept of how the world works, no capacity to plan, and no ability to complete basic tasks that a 10-year-old could learn in a matter of minutes. They have come nowhere close to achieving human or even animal intelligence, he argues, and there’s little evidence at this point that they will.  Yes, there are risks to releasing this technology, risks that giant corporations like Meta have quickly become more comfortable with taking. But the risk that they will destroy humanity? “Preposterous,” LeCun says.

AI development will trigger bioweapons that will kill us

Daily Star, 9-4, 23, https://www.dailystar.co.uk/news/weird-news/googles-ai-chief-warns-deadliest-30860214, AI chief warns ‘deadliest pandemics ever’ on horizon with genetic engineering

One of the biggest threats facing the planet is a super-pandemic, warns the co-founder of Google DeepMind AI technology. Mustafa Suleyman is the billionaire co-founder of the computer giant’s DeepMind but warns it’s not robots that pose the most danger to mankind. He claims the ability to cook up a deadly pandemic at home is likely to become commonplace before the end of this decade. Discussing the future of genetic engineering, he warned: “I think that the darkest scenario there is that people will experiment with …synthetic pathogens that might end up accidentally or intentionally being more transmissible; they can spread faster, or be more lethal…” Similarly, he said advanced AI technology is getting cheaper and easier to obtain at an alarming rate thanks to the tech being made “open”. It means anyone can get their hands on the technology – and use it to help them cheat on their exams or cook up a virus that could paralyse the world. That’s why, Mustafa says, an international treaty – including America’s perceived enemies such as Russia and China – needs to be agreed to limit the use of advanced AI and genetic manipulation. “There’s a shared goal that is between us and China and every other …’enemy’ that we want to create we’ve all got a shared interest in advancing the collective health and well-being of humans and Humanity,” he says.

AI could kill us all

Andrew Freedman, Ryan Heath, Sam Baker, 8-1, 2023, Existential threats to humanity are soaring this year, https://www.axios.com/2023/08/01/climate-change-artificial-intelligence-nuclear-war-existential

Put aside your politics and look at the world clinically, and you’ll see the three areas many experts consider existential threats to humanity worsening in 2023. Why it matters: This isn’t meant to start your day with doom and gloom. But focus your mind on how the threats of nuclear catastrophe, rising temperatures and all-powerful AI capabilities are spiking worldwide. It underscores the urgent need for smart people running government — and big companies — to solve increasingly complex problems at faster rates. Climate: The danger is becoming impossible to ignore, Axios’ Andrew Freedman writes. You just lived through the hottest month ever recorded on Earth. The world’s oceans are absurdly warm, with temperatures in the 90s° around the Florida Keys, bleaching and even killing coral reefs in just one week. Antarctic sea ice is plummeting even in the dead of winter. Wildfires are raging. Climate scientists don’t relish saying, “I told you so,” but they’ve been warning for years that each seemingly incremental rise in global average temperatures would translate into severe heat waves, droughts, floods and stronger hurricanes. And the worst part is, we can’t even call this our “new normal,” because it’s going to keep getting worse as long as carbon emissions keep increasing. This is a global problem that will require a global solution, but tensions between the world’s top two emitters — the U.S. and China — are high, and getting the big global powers to abide by a sufficiently hardcore climate commitment has so far proven impossible. AI: The technology’s top architects say there’s a non-zero chance it’ll destroy humanity — and they don’t really know how or why it works, Axios’ Ryan Heath reports. AI — with its ability to mass-produce fake videos, soundbites and images — poses clear risks to Americans’ already tenuous trust in elections and institutions. Nukes: China has expanded its nuclear arsenal on land, air and sea — raising the likelihood of a dangerous new world with three, rather than two, nuclear superpowers, Axios’ Sam Baker writes. “Beijing, Moscow and Washington will likely be atomic peers,” the N.Y. Times reports. “This new reality is prompting a broad rethinking of American nuclear strategy that few anticipated a dozen years ago.” Russian President Vladimir Putin said this summer that he moved some of his country’s roughly 5,000 nuclear weapons into Belarus — closer to Ukraine and Western Europe. President Biden warned in June that Putin’s threat to use tactical nuclear weapons in Ukraine is “real.”

AI deep fakes will undermine democracy

Rebecca Klar, 6-18, 23, The Hill, How AI is changing the 2024 election, https://thehill.com/homenews/campaign/4054333-how-ai-is-changing-the-2024-election/

As the generative artificial intelligence (AI) industry booms, the 2024 election cycle is shaping up to be a watershed moment for the technology’s role in political campaigns. The proliferation of AI — a technology that can create text, image and video — raises concerns about the spread of misinformation and how voters will react to artificially generated content in the politically polarized environment. Already, the presidential campaigns for former President Trump and Florida Gov. Ron DeSantis (R) have produced high-profile videos with AI. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said the proliferation of the AI systems available to the public, awareness of how simple it is to use them and the “erosion of the sense that creating things like deepfakes is something that good, honest people would never do” will make 2024 a “significant turning point” for how AI is used in campaigns. “I think now, increasingly, there’s an attitude that, ‘Well, it’s just the way it goes, you can’t tell what’s true anymore,’” Barrett said. The use of AI-generated campaign videos is already becoming more normalized in the Republican primary. After DeSantis announced his campaign during a Twitter Spaces conversation with company CEO Elon Musk, Trump posted a deepfake video — which is a digital representation made from AI that fabricates realistic-looking images and sounds — parodying the announcement on Truth Social. Donald Trump Jr. posted a deepfake video of DeSantis edited into a scene from the television show “The Office,” and the former president has shared AI images of himself. Last week, DeSantis’s campaign released an ad that used seemingly AI-produced images of Trump embracing Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases. “If you proposed that 10 years ago, I think people would have said, ‘That’s crazy, that will just backfire,’” Barrett said. “But today, it just happens as if it’s normal.” Critics noted that DeSantis’s use of the generated photo of Trump and Fauci was deceptive because the video does not disclose the use of AI technology. “Using AI to create an ominous background or strange pictures, that’s not categorically different than what advertising has long been,” said Robert Weissman, president of the progressive consumer rights advocacy group Public Citizen. “It doesn’t involve any deception of voters.” “[The DeSantis ad] is fundamentally deceptive,” he said. “That’s the big worry that voters will be tricked into believing things are true that are not.” Someone familiar with DeSantis’s operation noted that the governor’s presidential campaign was not the only campaign using AI in videos. “This was not an ad, it was a social media post,” the person familiar with the operation said. “If the Trump team is upset about this, I’d ask them why they have been continuously posting fake images and false talking points to smear the governor.” While proponents of AI acknowledge the risks of the technology, they argue it will eventually play a consequential role in campaigning. “I believe there’s going to be new tools that streamline content creation and deployment, and likely tools that help with data-intensive tasks like understanding voter sentiment,” said Mike Nellis, founder and CEO of the progressive agency Authentic. Nellis has teamed up with Higher Ground Labs to establish Quiller.ai, which is an AI tool that has the ability to write and send campaign fundraising emails.
“At the end of the day, Quiller is going to help us write better content faster,” Nellis told The Hill. “What happens on a lot of campaigns is they hire young people, teach them to write fundraising emails, and then ask them to write hundreds more, and it’s not sustainable. Tools like Quiller get us to a better place and it improves the efficiency of our campaigns.” As generative AI text and video become more common — and increasingly difficult to discern as the generated content appears more plausible — there’s also a concern that voters will become more skeptical about all content AI generates. Sarah Kreps, director of the Cornell Tech Policy Institute, said people may start to either “assume that nothing is true” or “just believe their partisan cues.” “Neither one of those is really helpful for democracy. If you don’t believe anything, this whole pillar of trust we rely on for democracy is eroded,” Kreps said. ChatGPT, which is OpenAI’s AI-powered chatbot, burst onto the scene with an exponential rise in use since its November launch, along with rival products like Google’s Bard chatbot and image and video-based tools. These products have the administration and Congress scrambling to consider how to address the industry while staying competitive on a global scale. But as Congress mulls regulation, between scheduled Senate briefings and a series of hearings, the industry has been largely left to create the rules of the road. On the campaign front, the rise of AI-generated content is magnifying the already prevalent concerns of election misinformation spreading on social media. Meta, the parent company of Facebook and Instagram, released a blog post in January 2020 stating it would remove “misleading manipulated media” that meets certain criteria, including content that is the “product of artificial intelligence or machine learning” that “replaces or superimposes content onto a video, making it appear to be authentic.” Ultimately, though, Barrett said the burden of deciphering what is AI-generated or not will fall on voters. “This kind of stuff will be disseminated, even if it is restricted in some way; it’ll probably be out in the world for a while before it is restricted or labeled, and people have to be wary,” he said. Others point out that it’s still too difficult to predict how AI will be integrated into campaigns and other organizations. “I think the real story is that new technologies should integrate into business at a deliberate and careful pace, and that the inappropriate/almost immoral uses are the ones that are going to get all the attention in the first inning, but it’s a long game and most of the productive useful integrations will evolve more slowly and hardly even be noticed,” said Nick Everhart, a Republican political consultant in Ohio and president of Content Creative Media. Weissman noted that Public Citizen has asked the Federal Election Commission to issue a rule to the extent of its authority to prohibit the use of deceptive deepfakes. “We think that the agency has authority as it regards candidates but not political committees or others,” Weissman said. “That would be good, but it’s not enough.” However, it remains unclear how quickly campaigns will adopt AI technology this cycle. “A lot of people are saying this is going to be the AI election,” Nellis said. “I’m not entirely sure that’s true. The smart and innovative campaigns will embrace AI, but a lot of campaigns are often slow to adopt new and emerging technology. I think 2026 will be the real AI election.”

AI will kill everyone

The Week, June 17, 2023, https://theweek.com/artificial-intelligence/1024341/ai-the-worst-case-scenario, AI: The worst-case scenario; Artificial intelligence’s architects warn it could cause human “extinction.” How might that happen?

Artificial intelligence’s architects warn it could cause human “extinction.” How might that happen? Here’s everything you need to know: What are AI experts afraid of? They fear that AI will become so superintelligent and powerful that it becomes autonomous and causes mass social disruption or even the eradication of the human race. More than 350 AI researchers and engineers recently issued a warning that AI poses risks comparable to those of “pandemics and nuclear war.” In a 2022 survey of AI experts, the median odds they placed on AI causing extinction or the “severe disempowerment of the human species” were 1 in 10. “This is not science fiction,” said Geoffrey Hinton, often called the “godfather of AI,” who recently left Google so he could sound a warning about AI’s risks. “A lot of smart people should be putting a lot of effort into figuring out how we deal with the possibility of AI taking over.” When might this happen? Hinton used to think the danger was at least 30 years away, but says AI is evolving into a superintelligence so rapidly that it may be smarter than humans in as little as five years. AI-powered ChatGPT and Bing’s Chatbot already can pass the bar and medical licensing exams, including essay sections, and on IQ tests score in the 99th percentile — genius level. Hinton and other doomsayers fear the moment when “artificial general intelligence,” or AGI, can outperform humans on almost every task. Some AI experts liken that eventuality to the sudden arrival on our planet of a superior alien race. You have “no idea what they’re going to do when they get here, except that they’re going to take over the world,” said computer scientist Stuart Russell, another pioneering AI researcher. How might AI actually harm us? One scenario is that malevolent actors will harness its powers to create novel bioweapons more deadly than natural pandemics. As AI becomes increasingly integrated into the systems that run the world, terrorists or rogue dictators could use AI to shut down financial markets, power grids, and other vital infrastructure, such as water supplies. The global economy could grind to a halt. Authoritarian leaders could use highly realistic AI-generated propaganda and Deep Fakes to stoke civil war or nuclear war between nations. In some scenarios, AI itself could go rogue and decide to free itself from the control of its creators. To rid itself of humans, AI could trick a nation’s leaders into believing an enemy has launched nuclear missiles so that they launch their own. Some say AI could design and create machines or biological organisms like the Terminator from the film series to act out its instructions in the real world. It’s also possible that AI could wipe out humans without malice, as it seeks other goals. How would that work? AI creators themselves don’t fully understand how the programs arrive at their determinations, and an AI tasked with a goal might try to meet it in unpredictable and destructive ways. A theoretical scenario often cited to illustrate that concept is an AI instructed to make as many paper clips as possible. It could commandeer virtually all human resources to the making of paper clips, and when humans try to intervene to stop it, the AI could decide eliminating people is necessary to achieve its goal. A more plausible real-world scenario is that an AI tasked with solving climate change decides that the fastest way to halt carbon emissions is to extinguish humanity. 
“It does exactly what you wanted it to do, but not in the way you wanted it to,” explained Tom Chivers, author of a book on the AI threat. Are these scenarios far-fetched? Some AI experts are highly skeptical AI could cause an apocalypse. They say that our ability to harness AI will evolve as AI does, and that the idea that algorithms and machines will develop a will of their own is an overblown fear influenced by science fiction, not a pragmatic assessment of the technology’s risks. But those sounding the alarm argue that it’s impossible to envision exactly what AI systems far more sophisticated than today’s might do, and that it’s shortsighted and imprudent to dismiss the worst-case scenarios. So, what should we do? That’s a matter of fervent debate among AI experts and public officials. The most extreme Cassandras call for shutting down AI research entirely. There are calls for moratoriums on its development, a government agency that would regulate AI, and an international regulatory body. AI’s mind-boggling ability to tie together all human knowledge, perceive patterns and correlations, and come up with creative solutions is very likely to do much good in the world, from curing diseases to fighting climate change. But creating an intelligence greater than our own also could lead to darker outcomes. “The stakes couldn’t be higher,” said Russell. “How do you maintain power over entities more powerful than you forever? If we don’t control our own civilization, we have no say in whether we continue to exist.” A fear envisioned in fiction Fear of AI vanquishing humans may be novel as a real-world concern, but it’s a long-running theme in novels and movies. In 1818’s “Frankenstein,” Mary Shelley wrote of a scientist who brings to life an intelligent creature who can read and understand human emotions — and eventually destroys his creator. In Isaac Asimov’s 1950 short-story collection “I, Robot,” humans live among sentient robots guided by three Laws of Robotics, the first of which is to never injure a human. Stanley Kubrick’s 1968 film “2001: A Space Odyssey” depicts HAL, a spaceship supercomputer that kills astronauts who decide to disconnect it. Then there’s the “Terminator” franchise and its Skynet, an AI defense system that comes to see humanity as a threat and tries to destroy it in a nuclear attack. No doubt many more AI-inspired projects are on the way. AI pioneer Stuart Russell reports being contacted by a director who wanted his help depicting how a hero programmer could save humanity by outwitting AI. No human could possibly be that smart, Russell told him. “It’s like, I can’t help you with that, sorry,” he said.