A.I. Daily


AI = massive catastrophes

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

Following this line of thinking, I often hear people say something along the lines of “AGI is the greatest risk humanity faces today! It’s going to end the world!” But when pressed on what this actually looks like, how this actually comes about, they become evasive, the answers woolly, the exact danger nebulous. AI, they say, might run away with all the computational resources and turn the whole world into a giant computer. As AI gets more and more powerful, the most extreme scenarios will require serious consideration and mitigation. However, well before we get there, much could go wrong. Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms—from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain theft, and willful sabotage. Think about an ACI capable of easily passing the Modern Turing Test, but turned toward catastrophic ends. Advanced AIs and synthetic biology will not only be available to groups finding new sources of energy or life-changing drugs; they will also be available to the next Ted Kaczynski. AI is both valuable and dangerous precisely because it’s an extension of our best and worst selves. And as a technology premised on learning, it can keep adapting, probing, producing novel strategies and ideas potentially far removed from anything before considered, even by other AIs. Ask it to suggest ways of knocking out the freshwater supply, or crashing the stock market, or triggering a nuclear war, or designing the ultimate virus, and it will. Soon. Even more than I worry about speculative paper-clip maximizers or some strange, malevolent demon, I worry about what existing forces this tool will amplify in the next ten years. Imagine scenarios where AIs control energy grids, media programming, power stations, planes, or trading accounts for major financial houses. When robots are ubiquitous, and militaries stuffed with lethal autonomous weapons—warehouses full of technology that can commit autonomous mass murder at the literal push of a button—what might a hack, developed by another AI, look like? Or consider even more basic modes of failure, not attacks, but plain errors. What if AIs make mistakes in fundamental infrastructures, or a widely used medical system starts malfunctioning? It’s not hard to see how numerous, capable, quasi-autonomous agents on the loose, even those chasing well-intentioned but ill-formed goals, might sow havoc. We don’t yet know the implications of AI for fields as diverse as agriculture, chemistry, surgery, and finance. That’s part of the problem; we don’t know what failure modes are being introduced and how deep they could extend. There is no instruction manual on how to build the technologies in the coming wave safely. We cannot build systems of escalating power and danger to experiment with ahead of time. We cannot know how quickly an AI might self-improve, or what would happen after a lab accident with some not yet invented piece of biotech. We cannot tell what results from a human consciousness plugged directly into a computer, or what an AI-enabled cyberweapon means for critical infrastructure, or how a gene drive will play out in the wild. Once fast-evolving, self-assembling automatons or new biological agents are released, out in the wild, there’s no rewinding the clock. 
After a certain point, even curiosity and tinkering might be dangerous. Even if you believe the chance of catastrophe is low, that we are operating blind should give you pause. Suleyman, Mustafa. The Coming Wave (p. 264). Crown. Kindle Edition.

Tremendous harm is inevitable

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

Nor is building safe and contained technology in itself sufficient. Solving the question of AI alignment doesn’t mean doing so once; it means doing it every time a sufficiently powerful AI is built, wherever and whenever that happens. You don’t just need to solve the question of lab leaks in one lab; you need to solve it in every lab, in every country, forever, even while those same countries are under serious political strain. Once technology reaches a critical capability, it isn’t enough for early pioneers to just build it safely, as challenging as that undoubtedly is. Rather, true safety requires maintaining those standards across every single instance: a mammoth expectation given how fast and widely these are already diffusing. This is what happens when anyone is free to invent or use tools that affect us all. And we aren’t just talking about access to a printing press or a steam engine, as extraordinary as they were. We are talking about outputs with a fundamentally new character: new compounds, new life, new species. If the wave is uncontained, it’s only a matter of time. Allow for the possibility of accident, error, malicious use, evolution beyond human control, unpredictable consequences of all kinds. At some stage, in some form, something, somewhere, will fail. And this won’t be a Bhopal or even a Chernobyl; it will unfold on a worldwide scale. This will be the legacy of technologies produced, for the most part, with the best of intentions. However, not everyone shares those intentions. Suleyman, Mustafa. The Coming Wave (p. 265). Crown. Kindle Edition.

There are plenty of people who want to use it for harm – terrorists, cults, lunatics, suicidal states

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

CULTS, LUNATICS, AND SUICIDAL STATES Most of the time the risks arising from things like gain-of-function research are a result of sanctioned and benign efforts. They are, in other words, supersized revenge effects, unintended consequences of a desire to do good. Unfortunately, some organizations are founded with precisely the opposite motivation. Founded in the 1980s, Aum Shinrikyo (Supreme Truth) was a Japanese doomsday cult. The group originated in a yoga studio under the leadership of a man who called himself Shoko Asahara. Building a membership among the disaffected, they radicalized as their numbers swelled, becoming convinced that the apocalypse was nigh, that they alone would survive, and that they should hasten it. Asahara grew the cult to somewhere between forty thousand and sixty thousand members, coaxing a loyal group of lieutenants all the way to using biological and chemical weapons. At Aum Shinrikyo’s peak popularity it is estimated to have held more than $1 billion in assets and counted dozens of well-trained scientists as members. Despite a fascination with bizarre, sci-fi weapons like earthquake-generating machines, plasma guns, and mirrors to deflect the sun’s rays, they were a deadly serious and highly sophisticated group. Aum built dummy companies and infiltrated university labs to procure material, purchased land in Australia with the intent of prospecting for uranium to build nuclear weapons, and embarked on a huge biological and chemical weapons program in the hilly countryside outside Tokyo. The group experimented with phosgene, hydrogen cyanide, soman, and other nerve agents. They planned to engineer and release an enhanced version of anthrax, recruiting a graduate-level virologist to help. Members obtained the neurotoxin-producing bacterium C. botulinum and sprayed it on Narita International Airport, the National Diet Building, the Imperial Palace, the headquarters of another religious group, and two U.S. naval bases. Luckily, they made a mistake in its manufacture and no harm ensued. It didn’t last. In 1994, Aum Shinrikyo sprayed the nerve agent sarin from a truck, killing eight and wounding two hundred. A year later they struck the Tokyo subway, releasing more sarin, killing thirteen and injuring some six thousand people. The subway attack, which involved depositing sarin-filled bags around the metro system, was more harmful partly because of the enclosed spaces. Thankfully neither attack used a particularly effective delivery mechanism. But in the end it was only luck that stopped a more catastrophic event. Aum Shinrikyo combined an unusual degree of organization with a frightening level of ambition. They wanted to initiate World War III and a global collapse by murdering at shocking scale and began building an infrastructure to do so. On the one hand, it’s reassuring how rare organizations like Aum Shinrikyo are. Of the many terrorist incidents and other non-state-perpetrated mass killings since the 1990s, most have been carried out by disturbed loners or groups with specific political or ideological agendas. But on the other hand, this reassurance has limits. Procuring weapons of great power was previously a huge barrier to entry, helping keep catastrophe at bay. The sickening nihilism of the school shooter is bounded by the weapons they can access. The Unabomber had only homemade devices. Building and disseminating biological and chemical weapons were huge challenges for Aum Shinrikyo. 
As a small, fanatical coterie operating in an atmosphere of paranoid secrecy, with only limited expertise and access to materials, they made mistakes. As the coming wave matures, however, the tools of destruction will, as we’ve seen, be democratized and commoditized. They will have greater capability and adaptability, potentially operating in ways beyond human control or understanding, evolving and upgrading at speed, some of history’s greatest offensive powers available widely. Those who, like Aum, would use new technologies for destruction are fortunately rare. Yet even one Aum Shinrikyo every fifty years is now enough to produce an incident orders of magnitude worse than the subway attack. Cults, lunatics, suicidal states on their last legs, all have motive and now means. As a report on the implications of Aum Shinrikyo succinctly puts it, “We are playing Russian roulette.” A new phase of history is here. With zombie governments failing to contain technology, the next Aum Shinrikyo, the next industrial accident, the next mad dictator’s war, the next tiny lab leak, will have an impact that is difficult to contemplate. It’s tempting to dismiss all these dark risk scenarios as the distant daydreams of people who grew up reading too much science fiction, those biased toward catastrophism. Tempting, but a mistake. Regardless of where we are with BSL-4 protocols or regulatory proposals or technical publications on the AI alignment problem, those incentives grind away, the technologies keep developing and diffusing. This is not the stuff of speculative novels and Netflix series. This is real, being worked on right this second in offices and labs around the world. So serious are the risks, however, that they necessitate consideration of all the options. Containment is about the ability to control technology. Further back, that means the ability to control the people and societies behind it. As catastrophic impacts unfurl or their possibility becomes unignorable, the terms of debate will change. Calls for not just control but crackdowns will grow. The potential for unprecedented levels of vigilance will become ever more appealing. Perhaps it might be possible to spot and then stop emergent threats? Wouldn’t that be for the best—the right thing to do? It’s my best guess this will be the reaction of governments and populations around the world. When the unitary power of the nation-state is threatened, when containment appears increasingly difficult, when lives are on the line, the inevitable reaction will be a tightening of the grip on power. The question is, at what cost?

Regulation won’t solve

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

While garage amateurs gain access to more powerful tools and tech companies spend billions on R&D, most politicians are trapped in a twenty-four-hour news cycle of sound bites and photo ops. When a government has devolved to the point of simply lurching from crisis to crisis, it has little breathing room for tackling tectonic forces requiring deep domain expertise and careful judgment on uncertain timescales. It’s easier to ignore these issues in favor of low-hanging fruit more likely to win votes in the next election. Even technologists and researchers in areas like AI struggle with the pace of change. What chance, then, do regulators have, with fewer resources? How do they account for an age of hyper-evolution, for the pace and unpredictability of the coming wave? Technology evolves week by week. Drafting and passing legislation takes years. Consider the arrival of a new product on the market like Ring doorbells. Ring put a camera on your front door and connected it to your phone. The product was adopted so quickly and is now so widespread that it has fundamentally changed the nature of what needs regulating; suddenly your average suburban street went from relatively private space to surveilled and recorded. By the time the regulation conversation caught up, Ring had already created an extensive network of cameras, amassing data and images from the front doors of people around the world. Twenty years on from the dawn of social media, there’s no consistent approach to the emergence of a powerful new platform (and besides, is privacy, polarization, monopoly, foreign ownership, or mental health the core problem—or all of the above?). The coming wave will worsen this dynamic. Discussions of technology sprawl across social media, blogs and newsletters, academic journals, countless conferences and seminars and workshops, their threads distant and increasingly lost in the noise. Everyone has a view, but it doesn’t add up to a coherent program. Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out. Yet I believe they are aspects of what amounts to the same phenomenon; they all aim to address different aspects of the same wave. It’s not enough to have dozens of separate conversations about algorithmic bias or bio-risk or drone warfare or the economic impact of robotics or the privacy implications of quantum computing. It completely underplays how interrelated both causes and effects are. We need an approach that unifies these disparate conversations, encapsulating all those different dimensions of risk, a general-purpose concept for this general-purpose revolution. Suleyman, Mustafa. The Coming Wave (pp. 282-283). Crown. Kindle Edition.

AI means massive dislocation and job loss; new jobs won’t offset the losses, and there will be massive interim disruption

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

In the years since I co-founded DeepMind, no AI policy debate has been given more airtime than the future of work—to the point of oversaturation. Here was the original thesis. In the past, new technologies put people out of work, producing what the economist John Maynard Keynes called “technological unemployment.” In Keynes’s view, this was a good thing, with increasing productivity freeing up time for further innovation and leisure. Examples of tech-related displacement are myriad. The introduction of power looms put old-fashioned weavers out of business; motorcars meant that carriage makers and horse stables were no longer needed; lightbulb factories did great as candlemakers went bust. Broadly speaking, when technology damaged old jobs and industries, it also produced new ones. Over time these new jobs tended toward service industry roles and cognitive-based white-collar jobs. As factories closed in the Rust Belt, demand for lawyers, designers, and social media influencers boomed. So far at least, in economic terms, new technologies have not ultimately replaced labor; they have in the aggregate complemented it. But what if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn? If the coming wave really is as general and wide-ranging as it appears, how will humans compete? What if a large majority of white-collar tasks can be performed more efficiently by AI? In few areas will humans still be “better” than machines. I have long argued this is the more likely scenario. With the arrival of the latest generation of large language models, I am now more convinced than ever that this is how things will play out. These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of “cognitive manual labor” are numbered. We are only just now starting to see what impact this new wave is about to have. Early analysis of ChatGPT suggests it boosts the productivity of “mid-level college educated professionals” by 40 percent on many tasks. That in turn could affect hiring decisions: a McKinsey study estimated that more than half of all jobs could see many of their tasks automated by machines in the next seven years, while fifty-two million Americans will work in roles with a “medium exposure to automation” by 2030. The economists Daron Acemoglu and Pascual Restrepo estimate that robots cause the wages of local workers to fall. With each additional robot per thousand workers there is a decline in the employment-to-population ratio, and consequently a fall in wages. Today algorithms perform the vast bulk of equity trades and increasingly act across financial institutions, and yet, even as Wall Street booms, it sheds jobs as technology encroaches on more and more tasks. Many remain unconvinced. Economists like David Autor argue that new technology consistently raises incomes, creating demand for new labor. Technology makes companies more productive, it generates more money, which then flows back into the economy. 
Put simply, demand is insatiable, and this demand, stoked by the wealth technology has generated, gives rise to new jobs requiring human labor. After all, skeptics say, ten years of deep learning success has not unleashed a jobs automation meltdown. Buying into that fear was, some argue, just a repeat of the old “lump of labor” fallacy, which erroneously claims there is only a set amount of work to go around. Instead, the future looks more like billions of people working in high-end jobs still barely conceived of. I believe this rosy vision is implausible over the next couple of decades; automation is unequivocally another fragility amplifier. As we saw in chapter 4, AI’s rate of improvement is well beyond exponential, and there appears no obvious ceiling in sight. Machines are rapidly imitating all kinds of human abilities, from vision to speech and language. Even without fundamental progress toward “deep understanding,” new language models can read, synthesize, and generate eye-wateringly accurate and highly useful text. There are literally hundreds of roles where this single skill alone is the core requirement, and yet there is so much more to come from AI. Yes, it’s almost certain that many new job categories will be created. Who would have thought that “influencer” would become a highly sought-after role? Or imagined that in 2023 people would be working as “prompt engineers”—nontechnical programmers of large language models who become adept at coaxing out specific responses? Demand for masseurs, cellists, and baseball pitchers won’t go away. But my best guess is that new jobs won’t come in the numbers or timescale to truly help. The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs. And, sure, new demand will create new work, but that doesn’t mean it all gets done by human beings. Labor markets also have immense friction in terms of skills, geography, and identity. Consider that in the last bout of deindustrialization the steelworker in Pittsburgh or the carmaker in Detroit could hardly just up sticks, retrain mid-career, and get a job as a derivatives trader in New York or a branding consultant in Seattle or a schoolteacher in Miami. If Silicon Valley or the City of London creates lots of new jobs, it doesn’t help people on the other side of the country if they don’t have the right skills or aren’t able to relocate. If your sense of self is wedded to a particular kind of work, it’s little consolation if you feel your new job demeans your dignity. Working on a zero-hours contract in a distribution center doesn’t provide the sense of pride or social solidarity that came from working for a booming Detroit auto manufacturer in the 1960s. The Private Sector Job Quality Index, a measure of how many jobs provide above-average income, has plunged since 1990; it suggests that well-paying jobs as a proportion of the total have already started to fall. Countries like India and the Philippines have seen a huge boom from business process outsourcing, creating comparatively high-paying jobs in places like call centers. It’s precisely this kind of work that will be targeted by automation. New jobs might be created in the long term, but for millions they won’t come quick enough or in the right places. At the same time, a jobs recession will crater tax receipts, damaging public services and calling into question welfare programs just as they are most needed. 
Even before jobs are decimated, governments will be stretched thin, struggling to meet all their commitments, finance themselves sustainably, and deliver services the public has come to expect. Moreover, all this disruption will happen globally, on multiple dimensions, affecting every rung of the development ladder from primarily agricultural economies to advanced service-based sectors. From Lagos to L.A., pathways to sustainable employment will be subject to immense, unpredictable, and fast-evolving dislocations. Even those who don’t foresee the most severe outcomes of automation still accept that it is on course to cause significant medium-term disruptions. Whichever side of the jobs debate you fall on, it’s hard to deny that the ramifications will be hugely destabilizing for hundreds of millions who will, at the very least, need to re-skill and transition to new types of work. Optimistic scenarios still involve troubling political ramifications from broken government finances to underemployed, insecure, and angry populations. It augurs trouble. Another stressor in a stressed world. Suleyman, Mustafa. The Coming Wave (pp. 227-228). Crown. Kindle Edition.

 

AI means surveillance and totalitarianism

 

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

ROCKET FUEL FOR AUTHORITARIANISM When compared with superstar corporations, governments appear slow, bloated, and out of touch. It’s tempting to dismiss them as headed for the trash can of history. However, another inevitable reaction of nation-states will be to use the tools of the coming wave to tighten their grip on power, taking full advantage to entrench their dominance. In the twentieth century, totalitarian regimes wanted planned economies, obedient populations, and controlled information ecosystems. They wanted complete hegemony. Every aspect of life was managed. Five-year plans dictated everything from the number and content of films to bushels of wheat expected from a given field. High modernist planners hoped to create pristine cities of stark order and flow. An ever-watchful and ruthless security apparatus kept it all ticking over. Power concentrated in the hands of a single supreme leader, capable of surveying the entire picture and acting decisively. Think Soviet collectivization, Stalin’s five-year plans, Mao’s China, East Germany’s Stasi. This is government as dystopian nightmare. And so far at least, it has always gone disastrously wrong. Despite the best efforts of revolutionaries and bureaucrats alike, society could not be bent into shape; it was never fully “legible” to the state, but a messy, ungovernable reality that would not conform with the purist dreams of the center. Humanity is too multifarious, too impulsive to be boxed in like this. In the past, the tools available to totalitarian governments simply weren’t equal to the task. So those governments failed; they failed to improve quality of life, or eventually they collapsed or reformed. Extreme concentration wasn’t just highly undesirable; it was practically impossible. The coming wave presents the disturbing possibility that this may no longer be true. Instead, it could initiate an injection of centralized power and control that will morph state functions into repressive distortions of their original purpose. Rocket fuel for authoritarians and for great power competition alike. The ability to capture and harness data at an extraordinary scale and precision; to create territory-spanning systems of surveillance and control, reacting in real time; to put, in other words, history’s most powerful set of technologies under the command of a single body, would rewrite the limits of state power so comprehensively that it would produce a new kind of entity altogether. Your smart speaker wakes you up. Immediately you turn to your phone and check your emails. Your smart watch tells you you’ve had a normal night’s sleep and your heart rate is average for the morning. Already a distant organization knows, in theory, what time you are awake, how you are feeling, and what you are looking at. You leave the house and head to the office, your phone tracking your movements, logging the keystrokes on your text messages and the podcast you listen to. On the way, and throughout the day, you are captured on CCTV hundreds of times. After all, this city has at least one camera for every ten people, maybe many more than that. When you swipe in at the office, the system notes your time of entry. Software installed on your computer monitors productivity down to eye movements. On the way home you stop to buy dinner. The supermarket’s loyalty scheme tracks your purchases. After eating, you binge-stream another TV series; your viewing habits are duly noted. 
Every glance, every hurried message, every half thought registered in an open browser or fleeting search, every step through bustling city streets, every heartbeat and bad night’s sleep, every purchase made or backed out of—it is all captured, watched, tabulated. And this is only a tiny slice of the possible data harvested every day, not just at work or on the phone, but at the doctor’s office or in the gym. Almost every detail of life is logged, somewhere, by those with the sophistication to process and act on the data they collect. This is not some far-off dystopia. I’m describing daily reality for millions in a city like London. The only step left is bringing these disparate databases together into a single, integrated system: a perfect twenty-first-century surveillance apparatus. The preeminent example is, of course, China. That’s hardly news, but what’s become clear is how advanced and ambitious the party’s program already is, let alone where it might end up in twenty or thirty years. Compared with the West, Chinese research into AI concentrates on areas of surveillance like object tracking, scene understanding, and voice or action recognition. Surveillance technologies are ubiquitous, increasingly granular in their ability to home in on every aspect of citizens’ lives. They combine visual recognition of faces, gaits, and license plates with data collection—including bio-data—on a mass scale. Centralized services like WeChat bundle everything from private messaging to shopping and banking in one easily traceable place. Drive the highways of China, and you’ll notice hundreds of Automatic Number Plate Recognition cameras tracking vehicles. (These exist in most large urban areas in the Western world, too.) During COVID quarantines, robot dogs and drones carried speakers blasting messages warning people to stay inside. Facial recognition software builds on the advances in computer vision we saw in part 2, identifying individual faces with exquisite accuracy. When I open my phone, it starts automatically upon “seeing” my face: a small but slick convenience, but with obvious and profound implications. Although the system was initially developed by corporate and academic researchers in the United States, nowhere embraced or perfected the technology more than China. Chairman Mao had said “the people have sharp eyes” when watching their neighbors for infractions against communist orthodoxy. By 2015 this was the inspiration for a massive “Sharp Eyes” facial recognition program that ultimately aspired to roll such surveillance out across no less than 100 percent of public space. A team of leading researchers from the Chinese University of Hong Kong went on to found SenseTime, one of the world’s largest facial recognition companies, built on a database of more than two billion faces. China is now the leader in facial recognition technologies, with giant companies like Megvii and CloudWalk vying with SenseTime for market share. Chinese police even have sunglasses with built-in facial recognition technology capable of tracking suspects in crowds. Around half the world’s billion CCTV cameras are in China. Many have built-in facial recognition and are carefully positioned to gather maximal information, often in quasi-private spaces: residential buildings, hotels, even karaoke lounges. A New York Times investigation found the police in Fujian Province alone estimated they held a database of 2.5 billion facial images. 
They were candid about its purpose: “controlling and managing people.” Authorities are also looking to suck in audio data—police in the city of Zhongshan wanted cameras that could record audio within a three-hundred-foot radius—and close monitoring and storage of bio-data became routine in the COVID era. The Ministry of Public Security is clear on the next priority: stitch these scattered databases and services into a coherent whole, from license plates to DNA, WeChat accounts to credit cards. This AI-enabled system could spot emerging threats to the CCP like dissenters and protests in real time, allowing for a seamless, crushing government response to anything it perceived as undesirable. Nowhere does this come together with more horrifying potential than in the Xinjiang Autonomous Region. This rugged and remote part of northwest China has seen the systematic and technologically empowered repression and ethnic cleansing of its native Uighur people. All these systems of monitoring and control are brought together here. Cities are placed under blankets of camera surveillance with facial recognition and AI tracking. Checkpoints and “reeducation” camps govern movements and freedoms. A system of social credit scores based on numerous surveilled databases keeps tabs on the population. Authorities have built an iris-scan database that has the capacity to hold up to thirty million samples—more than the region’s population. Societies of overweening surveillance and control are already here, and now all of this is set to escalate enormously into a next-level concentration of power at the center. Yet it would be a mistake to write this off as just a Chinese or authoritarian problem. For a start, this tech is being exported wholesale to places like Venezuela and Zimbabwe, Ecuador and Ethiopia. Even to the United States. In 2019, the U.S. government banned federal agencies and their contractors from buying telecommunications and surveillance equipment from a number of Chinese providers including Huawei, ZTE, and Hikvision. Yet, just a year later, three federal agencies were found to have bought such equipment from prohibited vendors. More than one hundred U.S. towns have even acquired technology developed for use on the Uighurs in Xinjiang. A textbook failure of containment. Western firms and governments are also in the vanguard of building and deploying this tech. Invoking London above was no accident: it competes with cities like Shenzhen for most surveilled in the world. It’s no secret that governments monitor and control their own populations, but these tendencies extend deep into Western firms, too. In smart warehouses every micromovement of every worker is tracked down to body temperature and loo breaks. Companies like Vigilant Solutions aggregate movement data based on license plate tracking, then sell it to jurisdictions like state or municipal governments. Even your take-out pizza is being watched: Domino’s uses AI-powered cameras to check its pies. Just as much as anyone in China, those in the West leave a vast data exhaust every day of their lives. And just as in China, it is harvested, processed, operationalized, and sold. Before the coming wave, the notion of a global “high-tech panopticon” was the stuff of dystopian novels, Yevgeny Zamyatin’s We or George Orwell’s 1984. The panopticon is becoming possible. Billions of devices and trillions of data points could be operated and monitored at once, in real time, used not just for surveillance but for prediction. 
Not only will it foresee social outcomes with precision and granularity, but it might also subtly or overtly steer or coerce them, from grand macro-processes like election results down to individual consumer behaviors. This raises the prospect of totalitarianism to a new plane. It won’t happen everywhere, and not all at once. But if AI, biotech, quantum, robotics, and the rest of it are centralized in the hands of a repressive state, the resulting entity would be palpably different from any yet seen. In the next chapter we will return to this possibility. However, before then comes another trend. One completely, and paradoxically, at odds with centralization. Suleyman, Mustafa. The Coming Wave (p. 246). Crown. Kindle Edition.

The impact of uncontrolled AI is catastrophe that kills a billion

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

VARIETIES OF CATASTROPHE To see what catastrophic harms we should prepare for, simply extrapolate the bad actor attacks we saw in chapter 10. Here are just a few plausible scenarios. Terrorists mount automatic weapons equipped with facial recognition to an autonomous drone swarm hundreds or thousands strong, each capable of quickly rebalancing from the weapon’s recoil, firing short bursts, and moving on. These drones are unleashed on a major downtown with instructions to kill a specific profile. In busy rush hour these would operate with terrifying efficiency, following an optimized route around the city. In minutes there would be an attack at far greater scale than, say, the 2008 Mumbai attacks, which saw armed terrorists roaming through city landmarks like the central train station. A mass murderer decides to hit a huge political rally with drones, spraying devices, and a bespoke pathogen. Soon attendees become sick, then their families. The speaker, a much-loved and much-loathed political lightning rod, is one of the first victims. In a febrile partisan atmosphere an assault like this ignites violent reprisals around the country and the chaos cascades. Using only natural language instruction, a hostile conspiracist in America disseminates masses of surgically constructed and divisive disinformation. Numerous attempts are made, most of which fail to gain traction. One eventually catches on: a police murder in Chicago. It’s completely fake, but the trouble on the streets, the widespread revulsion, is real. The attackers now have a playbook. By the time the video is verified as a fraud, violent riots with multiple casualties roil around the country, the fires continually stoked by new gusts of disinformation. Or imagine all that happening at the same time. Or not just at one event or in one city, but in hundreds of places. With tools like this it doesn’t take too much to realize that bad actor empowerment opens the door to catastrophe. Today’s AI systems try hard not to tell you how to poison the water supply or build an undetectable bomb. They are not yet capable of defining or pursuing goals on their own. However, as we have seen, both more widely diffused and less safe versions of today’s cutting-edge and more powerful models are coming, fast. Of all the catastrophic risks from the coming wave, AI has received the most coverage. But there are plenty more. Once militaries are fully automated, the barriers to entry for conflict will be far lower. A war might be sparked accidentally for reasons that forever remain unclear, AIs detecting some pattern of behavior or threat and then reacting, instantaneously, with overwhelming force. Suffice to say, the nature of that war could be alien, escalate quickly, and be unsurpassed in destructive consequences. We’ve already come across engineered pandemics and the perils of accidental releases, and glimpsed what happens when millions of self-improvement enthusiasts can experiment with the genetic code of life. An extreme bio-risk event of a less obvious kind, targeting a given portion of the population, say, or sabotaging an ecosystem, cannot be discounted. Imagine activists wanting to stop the cocaine trade inventing a new bug that targets only coca plants as a way to replace aerial fumigation. Or if militant vegans decided to disrupt the entire meat supply chain, with dire anticipated and unanticipated consequences. Either might spiral out of control. 
We know what a lab leak might look like in the context of amplifying fragility, but if it was not quickly brought under control, it would rank with previous plagues. To put this in context, the omicron variant of COVID infected a quarter of Americans within a hundred days of first being identified. What if we had a pandemic that had, say, a 20 percent mortality rate, but with that kind of transmissibility? Or what if it was a kind of respiratory HIV that would lie incubating for years with no acute symptoms? A novel human transmissible virus with a reproduction rate of, say, 4 (far below chicken pox or measles) and a case fatality rate of 50 percent (far below Ebola or bird flu) could, even accounting for lockdown-style measures, cause more than a billion deaths in a matter of months. What if multiple such pathogens were released at once? This goes far beyond fragility amplification; it would be an unfathomable calamity.
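
The arithmetic behind this claim can be checked with a minimal SIR-style sketch. The reproduction rate of 4 and the 50 percent case fatality rate come from the card itself; the eight-billion world population, the seven-day infectious period, and the assumption of homogeneous mixing are illustrative simplifications, not Suleyman's figures, and the model ignores lockdowns, geography, and healthcare capacity.

# Minimal SIR sketch (Python) of the scenario quoted above.
# R0 = 4 and CFR = 0.5 are from the card; the other parameters are
# assumptions for illustration only.

def sir_deaths(population=8e9, r0=4.0, cfr=0.5, infectious_days=7.0, days=365):
    beta = r0 / infectious_days    # new infections per infectious person per day
    gamma = 1.0 / infectious_days  # fraction of active cases resolving per day
    s, i, resolved = population - 1.0, 1.0, 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_resolved = gamma * i
        s -= new_infections
        i += new_infections - new_resolved
        resolved += new_resolved
    return cfr * resolved  # deaths among resolved cases

# With these inputs roughly 98 percent of the population is eventually
# infected, implying deaths on the order of four billion within the year,
# consistent with "more than a billion deaths in a matter of months."
print(f"projected deaths: {sir_deaths():.2e}")

Even cutting the effective reproduction rate in half to mimic lockdown-style measures, the same sketch still yields well over a billion deaths, which is the card's point.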

“AI Bad” isn’t fearmongering; it’s based on objective risk

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card DEBATEUS!

The promise of technology is that it improves lives, the benefits far outweighing the costs and downsides. This set of wicked choices means that promise has been savagely inverted. Doom-mongering makes people—myself included—glassy-eyed. At this point, you may be feeling wary or skeptical. Talking of catastrophic effects often invites ridicule: accusations of catastrophism, indulgent negativity, shrill alarmism, navel-gazing on remote and rarefied risks when plenty of clear and present dangers scream for attention. Like breathless techno-optimism, breathless techno-catastrophism is easy to dismiss as a twisted, misguided form of hype unsupported by the historical record. But just because a warning has dramatic implications isn’t good grounds to automatically reject it. The pessimism-averse complacency greeting the prospect of disaster is itself a recipe for disaster. It feels plausible, rational in its own terms, “smart” to dismiss warnings as the overblown chatter of a few weirdos, but this attitude prepares the way for its own failure. No doubt, technological risk takes us into uncertain territory. Nonetheless, all the trends point to a profusion of risk. This speculation is grounded in constantly compounding scientific and technological improvements. Those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking here about the proliferation of motorbikes or washing machines. Suleyman, Mustafa. The Coming Wave (p. 259). Crown. Kindle Edition.


AI means autonomous drones, cyberwarfare, defeated regulations, and financial collapse

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

The cost of military-grade drones has fallen by three orders of magnitude over the last decade. By 2028, $26 billion a year will be spent on military drones, and at that point many are likely to be fully autonomous. Live deployments of autonomous drones are becoming more plausible by the day. In May 2021, for example, an AI drone swarm in Gaza was used to find, identify, and attack Hamas militants. Start-ups like Anduril, Shield AI, and Rebellion Defense have raised hundreds of millions of dollars to build autonomous drone networks and other military applications of AI. Complementary technologies like 3-D printing and advanced mobile communications will reduce the cost of tactical drones to a few thousand dollars, putting them within reach of everyone from amateur enthusiasts to paramilitaries to lone psychopaths. In addition to easier access, AI-enhanced weapons will improve themselves in real time. WannaCry’s impact ended up being far more limited than it could have been. Once the software patch was applied, the immediate issue was resolved. AI transforms this kind of attack. AI cyberweapons will continuously probe networks, adapting themselves autonomously to find and exploit weaknesses. Existing computer worms replicate themselves using a fixed set of preprogrammed heuristics. But what if you had a worm that improved itself using reinforcement learning, experimentally updating its code with each network interaction, each time finding more and more efficient ways to take advantage of cyber vulnerabilities? Just as systems like AlphaGo learn unexpected strategies from millions of self-played games, so too will AI-enabled cyberattacks. However much you war-game every eventuality, there’s inevitably going to be a tiny vulnerability discoverable by a persistent AI. Everything from cars and planes to fridges and data centers relies on vast code bases. The coming AIs make it easier than ever to identify and exploit weaknesses. They could even find legal or financial means of damaging corporations or other institutions, hidden points of failure in banking regulation or technical safety protocols. As the cybersecurity expert Bruce Schneier has pointed out, AIs could digest the world’s laws and regulations to find exploits, arbitraging legalities. Imagine a huge cache of documents from a company leaked. A legal AI might be able to parse this against multiple legal systems, figure out every possible infraction, and then hit that company with multiple crippling lawsuits around the world at the same time. AIs could develop automated trading strategies designed to destroy competitors’ positions or create disinformation campaigns (more on this in the next section) engineering a run on a bank or a product boycott, enabling a competitor to swoop in and buy the company—or simply watch it collapse. AI adept at exploiting not just financial, legal, or communications systems but also human psychology, our weaknesses and biases, is on the way. Researchers at Meta created a program called CICERO. It became an expert at playing the complex board game Diplomacy, a game in which planning long, complex strategies built around deception and backstabbing is integral. It shows how AIs could help us plan and collaborate, but also hints at how they could develop psychological tricks to gain trust and influence, reading and manipulating our emotions and behaviors with a frightening level of depth, a skill useful in, say, winning at Diplomacy or electioneering and building a political movement. 
The space for possible attacks against key state functions grows even as the same premise that makes AI so powerful and exciting—its ability to learn and adapt—empowers bad actors. For centuries cutting-edge offensive capabilities, like massed artillery, naval broadsides, tanks, aircraft carriers, or ICBMs, have initially been so costly that they remained the province of the nation-state. Now they are evolving so fast that they quickly proliferate into the hands of research labs, start-ups, and garage tinkerers. Just as social media’s one-to-many broadcast effect means a single person can suddenly broadcast globally, so the capacity for far-reaching consequential action is becoming available to everyone. This new dynamic—where bad actors are emboldened to go on the offensive—opens up new vectors of attack thanks to the interlinked, vulnerable nature of modern systems: not just a single hospital but an entire health system can be hit; not just a warehouse but an entire supply chain. With lethal autonomous weapons the costs, in both material and above all human terms, of going to war, of attacking, are lower than ever. At the same time, all this introduces greater levels of deniability and ambiguity, degrading the logic of deterrence. If no one can be sure who initiated an assault, or what exactly has happened, why not go ahead? Suleyman, Mustafa. The Coming Wave (p. 212). Crown. Kindle Edition.

The “good guys” with an AI will not be able to keep up with the “bad guys,” especially in the short term

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

Throughout history technology has produced a delicate dance of offensive and defensive advantage, the pendulum swinging between the two but a balance roughly holding: for every new projectile or cyberweapon, a potent countermeasure has quickly arisen. Cannons may wear down a castle’s walls, but they can also rip apart an invading army. Now, powerful, asymmetric, omni-use technologies are certain to reach the hands of those who want to damage the state. While defensive operations will be strengthened in time, the nature of the four features favors offense: this proliferation of power is just too wide, fast, and open. An algorithm of world-changing significance can be stored on a laptop; soon it won’t even require the kind of vast, regulatable infrastructure of the last wave and the internet. Unlike an arrow or even a hypersonic missile, AI and bioagents will evolve more cheaply, more rapidly, and more autonomously than any technology we’ve ever seen. Consequently, without a dramatic set of interventions to alter the current course, millions will have access to these capabilities in just a few years. Maintaining a decisive, indefinite strategic advantage across such a broad spectrum of general-use technologies is simply not possible. Eventually, the balance might be restored, but not before a wave of immensely destabilizing force is unleashed. And as we’ve seen, the nature of the threat is far more widespread than blunt forms of physical assault. Information and communication together is its own escalating vector of risk, another emerging fragility amplifier requiring attention. Welcome to the deepfake era. Suleyman, Mustafa. The Coming Wave (p. 213). Crown. Kindle Edition.

Deepfakes are indistinguishable from reality

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of DeepMind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi), page number at end of card

Ask yourself, what happens when anyone has the power to create and broadcast material with incredible levels of realism? These examples occurred before the means to generate near-perfect deepfakes—whether text, images, video, or audio—became as easy as writing a query into Google. As we saw in chapter 4, large language models now show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real. Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more everyday people will be imitated as the required training data falls to just a handful of examples. It’s already happening. A bank in Hong Kong transferred millions of dollars to fraudsters in 2021, after one of their clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained how the company needed to move money for an acquisition. All the documents seemed to check out, the voice and character were flawlessly familiar, so the manager initiated the transfer. Anyone motivated to sow instability now has an easier time of it. Say three days before an election the president is caught on camera using a racist slur. The campaign press office strenuously denies it, but everyone knows what they’ve seen. Outrage seethes around the country. Polls nose-dive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks. The threat here lies not so much with extreme cases as in subtle, nuanced, and highly plausible scenarios being exaggerated and distorted. It’s not the president charging into a school screaming nonsensical rubbish while hurling grenades; it’s the president resignedly saying he has no choice but to institute a set of emergency laws or reintroduce the draft. It’s not Hollywood fireworks; it’s the purported surveillance camera footage of a group of white policemen caught on tape beating a Black man to death. Sermons from the radical preacher Anwar al-Awlaki inspired the Boston Marathon bombers, the attackers of Charlie Hebdo in Paris, and the shooter who killed forty-nine people at an Orlando nightclub. Yet al-Awlaki died in 2011, the first U.S. citizen killed by a U.S. drone strike, before any of these events. His radicalizing messages were, though, still available on YouTube until 2017. Suppose that using deepfakes new videos of al-Awlaki could be “unearthed,” each commanding further targeted attacks with precision-honed rhetoric. Not everyone would buy it, but those who wanted to believe would find it utterly compelling. Soon these videos will be fully and believably interactive. You are talking directly to him. He knows you and adapts to your dialect and style, plays on your history, your personal grievances, your bullying at school, your terrible, immoral Westernized parents. This is not disinformation as blanket carpet bombing; it’s disinformation as surgical strike. Phishing attacks against politicians or businesspeople, disinformation with the aim of major financial-market disruption or manipulation, media designed to poison key fault lines like sectarian or racial divides, even low-level scams—trust is damaged and fragility again amplified. 
Eventually entire and rich synthetic histories of seemingly real-world events will be easy to generate. Individual citizens won’t have time or the tools to verify a fraction of the content coming their way. Fakes will easily pass sophisticated checks, let alone a two-second smell test.

AI leads to massive disinformation campaigns

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

In the 1980s, the Soviet Union funded disinformation campaigns suggesting that the AIDS virus was the result of a U.S. bioweapons program. Years later, some communities were still dealing with the mistrust and fallout. The campaigns, meanwhile, have not stopped. According to Facebook, Russian agents created no fewer than eighty thousand pieces of organic content that reached 126 million Americans on their platforms during the 2016 election. AI-enhanced digital tools will exacerbate information operations like these, meddling in elections, exploiting social divisions, and creating elaborate astroturfing campaigns to sow chaos. Unfortunately, it’s far from just Russia. More than seventy countries have been found running disinformation campaigns. China is quickly catching up with Russia; others from Turkey to Iran are developing their skills. (The CIA, too, is no stranger to info ops.) Early in the COVID-19 pandemic a blizzard of disinformation had deadly consequences. A Carnegie Mellon study analyzed more than 200 million tweets discussing COVID-19 at the height of the first lockdown. Eighty-two percent of influential users advocating for “reopening America” were bots. This was a targeted “propaganda machine,” most likely Russian, designed to intensify the worst public health crisis in a century. Deepfakes automate these information assaults. Until now effective disinformation campaigns have been labor-intensive. While bots and fakes aren’t difficult to make, most are of low quality, easily identifiable, and only moderately effective at actually changing targets’ behavior. High-quality synthetic media changes this equation. Not all nations currently have the funds to build huge disinformation programs, with dedicated offices and legions of trained staff, but that’s less of a barrier when high-fidelity material can be generated at the click of a button. Much of the coming chaos will not be accidental. It will come as existing disinformation campaigns are turbocharged, expanded, and devolved out to a wide group of motivated actors. The rise of synthetic media at scale and minimal cost amplifies both disinformation (malicious and intentionally misleading information) and misinformation (a wider and more unintentional pollution of the information space) at once. Cue an “Infocalypse,” the point at which society can no longer manage a torrent of sketchy material, where the information ecosystem grounding knowledge, trust, and social cohesion, the glue holding society together, falls apart. In the words of a Brookings Institution report, ubiquitous, perfect synthetic media means “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.” Not all stressors and harms come from bad actors,

AI leads to the creation of pandemic pathogens that could kill everyone

Mustafa Suleyman, September 2023, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI, https://www.youtube.com/watch?v=CTxnLsYHWuI

Transcript begins at 25:52.

[Suleyman:] I think that the darkest scenario there is that people will experiment with engineered synthetic pathogens that might end up, accidentally or intentionally, being more transmissible (i.e., they can spread faster) or more lethal (i.e., they cause more harm or potentially kill), like a pandemic. And that's where we need containment, right? We have to limit access to the tools and the know-how to carry out that kind of experimentation. So one framework for thinking about this, with respect to making containment possible, is that we really are experimenting with dangerous materials. Anthrax is not something that can be bought over the internet, that can be freely experimented with. And likewise, the very best of these tools in a few years' time are going to be capable of creating new synthetic pandemic pathogens, and so we have to restrict access to those things. That means restricting access to the compute; it means restricting access to the software that runs the models, and to the cloud environments whose APIs give you access to experiment with those things; and of course, on the biology side, it means restricting access to some of the substances. People are not going to like that claim, because it means that those who want to do good with those tools, those who want to create a startup, the small guy, the little developer that struggles to comply with all the regulations, they're going to be pissed off, understandably. But that is the age we're in, and we have to confront that reality. It means we have to approach this with the precautionary principle. Never before in the invention of a technology, or in the creation of a regulation, have we proactively said we need to go slowly, we need to make sure that this first does no harm. That is just an unprecedented moment; no other technology has done that. Because I think those of us in the industry who are closest to the work can see a place in five years or ten years where it could get out of control, and we have to get on top of it now. It's better to forgo, that is, give up, some of those potential upsides or benefits until we can be more sure that it can be contained, that it can be controlled, and that it always serves our collective interests.

[Interviewer:] Think about what you've just said about being able to create these pathogens, these diseases and viruses, that could become weapons. With the power of artificial intelligence, you could theoretically ask one of these systems to create a very deadly virus with certain properties, maybe even one that mutates over time in a certain way so it only kills a certain amount of people, kind of like a nuclear bomb of viruses that you could hit an enemy with. Now, if I hear that and go, okay, that's powerful, I would like one of those, there might be an adversary out there that thinks, I would like one of those in case America gets out of hand, and America is thinking, I want one of those in case Russia gets out of hand. So okay, you might take a precautionary approach in the United States, but that's only going to put you on the back foot when China or Russia or one of your adversaries accelerates forward on that path. It's the same with the nuclear bomb.

[Suleyman:] You nailed it. That is what we refer to as the race condition: the idea that if I don't do it, the other party is going to do it, and therefore I must do it. But the problem is that it creates a self-fulfilling prophecy, so the default is that we all end up doing it. And that can't be right, because there is an opportunity for massive cooperation here. Between us and China and every other quote-unquote "them" or "they" or enemy that we want to create, we've all got a shared interest in advancing the collective health and well-being of humans and humanity.

[Interviewer:] How well have we done at promoting shared interests in the development of technologies over the years, even at a corporate level?

[Suleyman:] The nuclear non-proliferation treaty has been reasonably successful. There are only nine nuclear states in the world today, and we've stopped many; three countries actually gave up nuclear weapons because we incentivized them with sanctions and threats and economic rewards. Small groups have tried to get access to nuclear weapons and so far have largely failed.

[Interviewer:] It's expensive though, right, and hard; uranium as a chemical is hard to keep stable, to buy, and to house. I mean, can I just put it in the shed?

[Suleyman:] You certainly couldn't put it in a shed. You can't download uranium-235 off the internet; it's not available open source.

[Interviewer:] That is totally true. So it's got different characteristics, for sure. But a kid in Russia, in his bedroom, could download something onto his computer that's...

AGI in 3 years

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi)., page number at end of card

Today, AI systems can almost perfectly recognize faces and objects. We take speech-to-text transcription and instant language translation for granted. AI can navigate roads and traffic well enough to drive autonomously in some settings. Based on a few simple prompts, a new generation of AI models can generate novel images and compose text with extraordinary levels of detail and coherence. AI systems can produce synthetic voices with uncanny realism and compose music of stunning beauty. Even in more challenging domains, ones long thought to be uniquely suited to human capabilities like long-term planning, imagination, and simulation of complex ideas, progress leaps forward. AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. Suleyman, Mustafa. The Coming Wave (p. 23). Crown. Kindle Edition. In 2010 almost no one was talking seriously about AI. Yet what had once seemed a niche mission for a small group of researchers and entrepreneurs has now become a vast global endeavor. AI is everywhere, on the news and in your smartphone, trading stocks and building websites. Many of the world’s largest companies and wealthiest nations barrel forward, developing cutting-edge AI models and genetic engineering techniques, fueled by tens of billions of dollars in investment. Once matured, these emerging technologies will spread rapidly, becoming cheaper, more accessible, and widely diffused throughout society. They will offer extraordinary new medical advances and clean energy breakthroughs, creating not just new businesses but new industries and quality of life improvements in almost every imaginable area. Suleyman, Mustafa. The Coming Wave (p. 24). Crown. Kindle Edition.

AI means automated warfare that threatens civilization

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

And yet alongside these benefits, AI, synthetic biology, and other advanced forms of technology produce tail risks on a deeply concerning scale. They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention. Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers. This will require huge levels of surveillance along with massive intrusions into our private lives. Keeping a tight rein on technology could become part of a drift to everything and everyone being watched, all the time, in a dystopian global surveillance system justified by a desire to guard against the most extreme possible outcomes. Suleyman, Mustafa. The Coming Wave (pp. 24-25). Crown. Kindle Edition.

Banning tech means societal collapse

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Equally plausible is a Luddite reaction. Bans, boycotts, and moratoriums will ensue. Is it even possible to step away from developing new technologies and introduce a series of moratoriums? Unlikely. With their enormous geostrategic and commercial value, it’s difficult to see how nation-states or corporations will be persuaded to unilaterally give up the transformative powers unleashed by these breakthroughs. Moreover, attempting to ban development of new technologies is itself a risk: technologically stagnant societies are historically unstable and prone to collapse. Eventually, they lose the capacity to solve problems, to progress. Suleyman, Mustafa. The Coming Wave (p. 25). Crown. Kindle Edition.

DNA synthesizers can already create bioweapons

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Over the course of the day a series of hair-raising risks were floated over the coffees, biscuits, and PowerPoints. One stood out. The presenter showed how the price of DNA synthesizers, which can print bespoke strands of DNA, was falling rapidly. Costing a few tens of thousands of dollars, they are small enough to sit on a bench in your garage and let people synthesize—that is, manufacture—DNA. And all this is now possible for anyone with graduate-level training in biology or an enthusiasm for self-directed learning online. Given the increasing availability of the tools, the presenter painted a harrowing vision: Someone could soon create novel pathogens far more transmissible and lethal than anything found in nature. These synthetic pathogens could evade known countermeasures, spread asymptomatically, or have built-in resistance to treatments. If needed, someone could supplement homemade experiments with DNA ordered online and reassembled at home. The apocalypse, mail ordered. This was not science fiction, argued the presenter, a respected professor with more than two decades of experience; it was a live risk, now. They finished with an alarming thought: a single person today likely “has the capacity to kill a billion people.” All it takes is motivation. The attendees shuffled uneasily. People squirmed and coughed. Then the griping and hedging started. No one wanted to believe this was possible. Surely it wasn’t the case, surely there had to be some effective mechanisms for control, surely the diseases were difficult to create, surely the databases could be locked down, surely the hardware could be secured. And so on. Suleyman, Mustafa. The Coming Wave (p. 28). Crown. Kindle Edition.

Tech bans fail

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

People throughout history have attempted to resist new technologies because they felt threatened and worried their livelihoods and way of life would be destroyed. Fighting, as they saw it, for the future of their families, they would, if necessary, physically destroy what was coming. If peaceful measures failed, Luddites wanted to take apart the wave of industrial machinery. Under the seventeenth-century Tokugawa shogunate, Japan shut out the world—and by extension its barbarous inventions—for nearly three hundred years. Like most societies throughout history, it was distrustful of the new, the different, and the disruptive. Similarly, China dismissed a British diplomatic mission and its offer of Western tech in the late eighteenth century, with the Qianlong emperor arguing, “Our Celestial Empire possesses all things in prolific abundance and lacks no product within its borders. There is therefore no need to import the manufactures of outside barbarians.” None of it worked. The crossbow survived until it was usurped by guns. Queen Elizabeth’s knitting machine returned, centuries later, in the supercharged form of large-scale mechanical looms to spark the Industrial Revolution. China and Japan are today among the most technologically advanced and globally integrated places on earth. The Luddites were no more successful at stopping new industrial technologies than horse owners and carriage makers were at preventing cars. Where there is demand, technology always breaks out, finds traction, builds users. Once established, waves are almost impossible to stop. As the Ottomans discovered when it came to printing, resistance tends to be ground down with the passage of time. Technology’s nature is to spread, no matter the barriers. Plenty of technologies come and go. You don’t see too many penny-farthings or Segways, listen to many cassettes or minidiscs. But that doesn’t mean personal mobility and music aren’t ubiquitous; older technologies have just been replaced by new, more efficient forms. We don’t ride on steam trains or write on typewriters, but their ghostly presence lives on in their successors, like Shinkansens and MacBooks. Think of how, as parts of successive waves, fire, then candles and oil lamps, gave way to gas lamps and then to electric lightbulbs, and now LED lights, and the totality of artificial light increased even as the underlying technologies changed. New technologies supersede multiple predecessors. Just as electricity did the work of candles and steam engines alike, so smartphones replaced satnavs, cameras, PDAs, computers, and telephones (and invented entirely new classes of experience: apps). As technologies let you do more, for less, their appeal only grows, along with their adoption. Imagine trying to build a contemporary society without electricity or running water or medicines. Even if you could, how would you convince anyone it was worthwhile, desirable, a decent trade? Few societies have ever successfully removed themselves from the technological frontier; doing so usually either is part of a collapse or precipitates one. There is no realistic way to pull back. Suleyman, Mustafa. The Coming Wave (pp. 58-59). Crown. Kindle Edition.

No reason computers can’t achieve AGI

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Sometimes people seem to suggest that in aiming to replicate human-level intelligence, AI chases a moving target or that there is always some ineffable component forever out of reach. That’s just not the case. The human brain is said to contain around 100 billion neurons with 100 trillion connections between them—it is often said to be the most complex known object in the universe. It’s true that we are, more widely, complex emotional and social beings. But humans’ ability to complete given tasks—human intelligence itself—is very much a fixed target, as large and multifaceted as it is. Unlike the scale of available compute, our brains do not radically change year by year. In time this gap will be closed. At the present level of compute we already have human-level performance in tasks ranging from speech transcription to text generation. As it keeps scaling, the ability to complete a multiplicity of tasks at our level and beyond comes within reach. AI will keep getting radically better at everything, and so far there seems no obvious upper limit on what’s possible. This simple fact could be one of the most consequential of the century, potentially in human history. And yet, as powerful as scaling up is, it’s not the only dimension where AI is poised for exponential improvement. Suleyman, Mustafa. The Coming Wave (pp. 90-91). Crown. Kindle Edition.
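The figures quoted here imply a ratio worth making explicit: 100 trillion connections spread over 100 billion neurons is about a thousand connections per neuron. A quick check of that arithmetic, using only the card's own numbers:

```python
# Arithmetic check on the card's figures: ~100 billion neurons and
# ~100 trillion connections implies ~1,000 connections per neuron.
neurons = 100e9        # 10^11, as quoted
connections = 100e12   # 10^14, as quoted
print(f"average connections per neuron: {connections / neurons:,.0f}")  # 1,000
```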

Models are being trained to reduce bias

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

It took LLMs just a few years to change AI. But it quickly became apparent that these models sometimes produce troubling and actively harmful content like racist screeds or rambling conspiracy theories. Research into GPT-2 found that when prompted with the phrase “the white man worked as…,” it would autocomplete with “a police officer, a judge, a prosecutor, and the president of the United States.” Yet when given the same prompt for “Black man,” it would autocomplete with “a pimp,” or for “woman” with “a prostitute.” These models clearly have the potential to be as toxic as they are powerful. Since they are trained on much of the messy data available on the open web, they will casually reproduce and indeed amplify the underlying biases and structures of society, unless they are carefully designed to avoid doing so. The potential for harm, abuse, and misinformation is real. But the positive news is that many of these issues are being improved with larger and more powerful models. Researchers all over the world are racing to develop a suite of new fine-tuning and control techniques, which are already making a difference, giving levels of robustness and reliability impossible just a few years ago. Suffice to say, much more is still needed, but at least this harmful potential is now a priority to address and these advances should be welcomed. Suleyman, Mustafa. The Coming Wave (p. 93). Crown. Kindle Edition.

AI will overcome limitations

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Despite recent breakthroughs, skeptics remain. They argue that AI may be slowing, narrowing, becoming overly dogmatic. Critics like NYU professor Gary Marcus believe deep learning’s limitations are evident, that despite the buzz of generative AI the field is “hitting a wall,” that it doesn’t present any path to key milestones like being capable of learning concepts or demonstrating real understanding. The eminent professor of complexity Melanie Mitchell rightly points out that present-day AI systems have many limitations: they can’t transfer knowledge from one domain to another, provide quality explanations of their decision-making process, and so on. Significant challenges with real-world applications linger, including material questions of bias and fairness, reproducibility, security vulnerabilities, and legal liability. Urgent ethical gaps and unsolved safety questions cannot be ignored. Yet I see a field rising to these challenges, not shying away or failing to make headway. I see obstacles but also a track record of overcoming them. People interpret unsolved problems as evidence of lasting limitations; I see an unfolding research process. So, where does AI go next as the wave fully breaks? Today we have narrow or weak AI: limited and specific versions. GPT-4 can spit out virtuoso texts, but it can’t turn around tomorrow and drive a car, as other AI programs do. Existing AI systems still operate in relatively narrow lanes. What is yet to come is a truly general or strong AI capable of human-level performance across a wide range of complex tasks—able to seamlessly shift among them. But this is exactly what the scaling hypothesis predicts is coming and what we see the first signs of in today’s systems. AI is still in an early phase. It may look smart to claim that AI doesn’t live up to the hype, and it’ll earn you some Twitter followers. Meanwhile, talent and investment pour into AI research nonetheless. I cannot imagine how this will not prove transformative in the end. If for some reason LLMs show diminishing returns, then another team, with a different concept, will pick up the baton, just as the internal combustion engine repeatedly hit a wall but made it in the end. Fresh minds, new companies, will keep working at the problem. Then as now, it takes only one breakthrough to change the trajectory of a technology. If AI stalls, it will have its Otto and Benz eventually. Further progress—exponential progress—is the most likely outcome. The wave will only grow. Suleyman, Mustafa. The Coming Wave (p. 98). Crown. Kindle Edition.

AI will be able to accomplish open-ended complex goals

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

But, as many have pointed out, intelligence is about so much more than just language (or indeed any other single facet of intelligence taken in isolation). One particularly important dimension is in the ability to take actions. We don’t just care about what a machine can say; we also care about what it can do. What we would really like to know is, can I give an AI an ambiguous, open-ended, complex goal that requires interpretation, judgment, creativity, decision-making, and acting across multiple domains, over an extended time period, and then see the AI accomplish that goal? Put simply, passing a Modern Turing Test would involve something like the following: an AI being able to successfully act on the instruction “Go make $1 million on Amazon in a few months with just a $100,000 investment.” It might research the web to look at what’s trending, finding what’s hot and what’s not on Amazon Marketplace; generate a range of images and blueprints of possible products; send them to a drop-ship manufacturer it found on Alibaba; email back and forth to refine the requirements and agree on the contract; design a seller’s listing; and continually update marketing materials and product designs based on buyer feedback. Aside from the legal requirements of registering as a business on the marketplace and getting a bank account, all of this seems to me eminently doable. I think it will be done with a few minor human interventions within the next year, and probably fully autonomously within three to five years. Should my Modern Turing Test for the twenty-first century be met, the implications for the global economy are profound. Many of the ingredients are in place. Image generation is well advanced, and the ability to write and work with the kinds of APIs that banks and websites and manufacturers would demand is in process. That an AI can write messages or run marketing campaigns, all activities that happen within the confines of a browser, seems pretty clear. Already the most sophisticated services can do elements of this. Think of them as proto–to-do lists that do themselves, enabling the automation of a wide range of tasks. We’ll come to robots later, but the truth is that for a vast range of tasks in the world economy today all you need is access to a computer; most of global GDP is mediated in some way through screen-based interfaces amenable to an AI. The challenge is in advancing what AI developers call hierarchical planning, stitching multiple goals and subgoals and capabilities into a seamless process toward a singular end. Once this is achieved, it adds up to a highly capable AI, plugged into a business or organization and all its local history and needs, that can lobby, sell, manufacture, hire, plan—everything a company can do, only with a small team of human AI managers who oversee, double-check, implement, and co-CEO with the AI. Suleyman, Mustafa. The Coming Wave (p. 101). Crown. Kindle Edition.
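The "hierarchical planning" Suleyman describes is, at its core, an agent loop: decompose one open-ended goal into subgoals, route each subgoal to a capability, and chain the results. A minimal sketch of that pattern, where the planner and every capability are hypothetical stand-ins (hard-coded strings rather than real model or marketplace API calls):

```python
# Toy sketch of hierarchical planning: an open-ended goal is decomposed
# into subgoals, each dispatched to a capability. Every capability here
# is a hypothetical placeholder; a real system would call models and
# external APIs instead of returning canned strings.
from typing import Callable

CAPABILITIES: dict[str, Callable[[str], str]] = {
    "research":  lambda task: f"trend report on {task!r}",
    "design":    lambda task: f"product blueprint for {task!r}",
    "negotiate": lambda task: f"agreed contract covering {task!r}",
    "market":    lambda task: f"live seller listing for {task!r}",
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Decompose a goal into (capability, subtask) pairs.
    A real agent would use an LLM here; this plan is hard-coded."""
    return [
        ("research",  "what's trending on the marketplace"),
        ("design",    "a product matching the trend"),
        ("negotiate", "terms with a drop-ship manufacturer"),
        ("market",    "a listing, updated on buyer feedback"),
    ]

def run(goal: str) -> None:
    """Chain subgoals into a seamless process toward the single end goal."""
    for capability, subtask in plan(goal):
        print(f"[{capability}] {CAPABILITIES[capability](subtask)}")

run("turn $100,000 into $1 million on a marketplace")
```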

We are close to artificial capable intelligence that can imagine, reason, plan,  exhibit common sense, and transfer what it “knows” from one domain to another

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Rather than get too distracted by questions of consciousness, then, we should refocus the entire debate around near-term capabilities and how they will evolve in the coming years. As we have seen, from Hinton’s AlexNet to Google’s LaMDA, models have been improving at an exponential rate for more than a decade. These capabilities are already very real indeed, but they are nowhere near slowing down. While they are already having an enormous impact, they will be dwarfed by what happens as we progress through the next few doublings and as AIs complete complex, multistep end-to-end tasks on their own. I think of this as “artificial capable intelligence” (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight. AI and AGI are both parts of the everyday discussion, but we need a concept encapsulating a middle layer in which the Modern Turing Test is achieved but before systems display runaway “superintelligence.” ACI is shorthand for this point. The first stage of AI was about classification and prediction—it was capable, but only within clearly defined limits and at preset tasks. It could differentiate between cats and dogs in images, and then it could predict what came next in a sequence to produce pictures of those cats and dogs. It produced glimmers of creativity, and could be quickly integrated into tech companies’ products. ACI represents the next stage of AI’s evolution. A system that not only could recognize and generate novel images, audio, and language appropriate to a given context, but also would be interactive—operating in real time, with real users. It would augment these abilities with a reliable memory so that it could be consistent over extended timescales and could draw on other sources of data, including, for example, databases of knowledge, products, or supply-chain components belonging to third parties. Such a system would use these resources to weave together sequences of actions into long-term plans in pursuit of complex, open-ended goals, like setting up and running an Amazon Marketplace store. All of this, then, enables tool use and the emergence of real capability to perform a wide range of complex, useful actions. It adds up to a genuinely capable AI, an ACI. Conscious superintelligence? Who knows. But highly capable learning systems, ACIs, that can pass some version of the Modern Turing Test? Make no mistake: they are on their way, are already here in embryonic form. There will be thousands of these models, and they will be used by the majority of the world’s population. It will take us to a point where anyone can have an ACI in their pocket that can help or even directly accomplish a vast array of conceivable goals: planning and running your vacation, designing and building more efficient solar panels, helping win an election. It’s hard to say for certain what happens when everyone is empowered like this, but this is a point we’ll return to in part 3. The future of AI is, at least in one sense, fairly easy to predict. Over the next five years, vast resources will continue to be invested. Some of the smartest people on the planet are working on these problems. Orders of magnitude more computation will train the top models. All of this will lead to more dramatic leaps forward, including breakthroughs toward AI that can imagine, reason, plan, and exhibit common sense. It won’t be long before AI can transfer what it “knows” from one domain to another, seamlessly, as humans do. 
What are now only tentative signs of self-reflection and self-improvement will leap forward. These ACI systems will be plugged into the internet, capable of interfacing with everything we humans do, but on a platform of deep knowledge and ability. It will be not just language they’ve mastered but a bewildering array of tasks, too. Suleyman, Mustafa. The Coming Wave (p. 103). Crown. Kindle Edition.

AI spurs biotech development

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

In 2022, AlphaFold2 was opened up for public use. The result has been an explosion of the world’s most advanced machine learning tools, deployed in both fundamental and applied biological research: an “earthquake,” in the words of one researcher. More than a million researchers accessed the tool within eighteen months of launch, including virtually all the world’s leading biology labs, addressing questions from antibiotic resistance to the treatment of rare diseases to the origins of life itself. Previous experiments had delivered the structure of about 190,000 proteins to the European Bioinformatics Institute’s database, about 0.1 percent of known proteins in existence. DeepMind uploaded some 200 million structures in one go, representing almost all known proteins. Whereas once it might have taken researchers weeks or months to determine a protein’s shape and function, that process can now begin in a matter of seconds. This is what we mean by exponential change. This is what the coming wave makes possible. And yet this is only the beginning of a convergence of these two technologies. The bio-revolution is coevolving with advances in AI, and indeed many of the phenomena discussed in this chapter will rely on AI for their realization. Think, then, of two waves crashing together, not a wave but a superwave. Indeed, from one vantage artificial intelligence and synthetic biology are almost interchangeable. All intelligence to date has come from life. Call them synthetic intelligence and artificial life and they still mean the same thing. Both fields are about re-creating, engineering these utterly foundational and interrelated concepts, two core attributes of humanity; change the view and they become one single project. Biology’s sheer complexity opens up vast troves of data, like all those proteins, almost impossible to parse using traditional techniques. A new generation of tools has quickly become indispensable as a result. Teams are working on products that will generate new DNA sequences using only natural language instructions. Transformer models are learning the language of biology and chemistry, again discovering relationships and significance in long, complex sequences illegible to the human mind. LLMs fine-tuned on biochemical data can generate plausible candidates for new molecules and proteins, DNA and RNA sequences. They predict the structure, function, or reaction properties of compounds in simulation before these are later verified in a laboratory. The space of applications and the speed at which they can be explored is only accelerating. Some scientists are beginning to investigate ways to plug human minds directly into computer systems. In 2019, electrodes surgically implanted in the brain let a fully paralyzed man with late-stage ALS spell out the words “I love my cool son.” Companies like Neuralink are working on brain interfacing technology that promises to connect us directly with machines. In 2021 the company inserted three thousand filament-like electrodes, thinner than a human hair, that monitor neuron activity, into a pig’s brain. Soon they hope to begin human trials of their N1 brain implant, while another company, Synchron, has already started human trials in Australia. Scientists at a start-up called Cortical Labs have even grown a kind of brain in a vat (a bunch of neurons grown in vitro) and taught it to play Pong. It likely won’t be too long before neural “laces” made from carbon nanotubes plug us directly into the digital world. 
What happens when a human mind has instantaneous access to computation and information on the scale of the internet and the cloud? It’s almost impossible to imagine, but researchers are already in the early days of making it happen. As the central general-purpose technologies of the coming wave, AI and synthetic biology are already entangled, a spiraling feedback loop boosting each other. While the pandemic gave biotech a massive awareness boost, the full impact—possibilities and risks alike—of synthetic biology has barely begun to sink into the popular imagination. Welcome to the age of biomachines and biocomputers, where strands of DNA perform calculations and artificial cells are put to work. Where machines come alive. Welcome to the age of synthetic life. Suleyman, Mustafa. The Coming Wave (p. 120). Crown. Kindle Edition.
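When the card says transformer models are "learning the language of biology," the usual first step is turning a raw sequence into tokens a language model can consume. A minimal sketch of overlapping k-mer tokenization, a common generic preprocessing idea rather than the pipeline of AlphaFold or any particular lab:

```python
# Minimal sketch of k-mer tokenization: slide a window of length k over
# a DNA string to produce overlapping "words" for a sequence model.
def kmer_tokens(sequence: str, k: int = 6) -> list[str]:
    """Return every overlapping substring of length k."""
    sequence = sequence.upper()
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

dna = "ATGGCCATTGTAATGGGCCGC"
print(kmer_tokens(dna)[:4])  # ['ATGGCC', 'TGGCCA', 'GGCCAT', 'GCCATT']
```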

Renewables will power AI in the future

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer. Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities. Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation. However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real. Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Including wind, hydrogen, and improved battery technologies, here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential. Suleyman, Mustafa. The Coming Wave (pp. 131-132). Crown. Kindle Edition.
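The two price points quoted ($4.88 per watt in 2000, 38 cents in 2019) imply a steady compound decline. A quick check of that arithmetic using only the card's own figures:

```python
# Implied compound annual decline in solar cost per watt,
# from the card's figures: $4.88/W in 2000 down to $0.38/W in 2019.
start, end, years = 4.88, 0.38, 2019 - 2000
annual_decline = 1 - (end / start) ** (1 / years)
print(f"~{annual_decline:.1%} cheaper per year over {years} years")  # ~12.6%
```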

Without control, these technologies could kill us all

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

And yet alongside these benefits, AI, synthetic biology, and other advanced forms of technology produce tail risks on a deeply concerning scale. They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention. Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers. This will require huge levels of surveillance along with massive intrusions into our private lives. Keeping a tight rein on technology could become part of a drift to everything and everyone being watched, all the time, in a dystopian global surveillance system justified by a desire to guard against the most extreme possible outcomes. Suleyman, Mustafa. The Coming Wave (pp. 24-25). Crown. Kindle Edition.

Government cannot solve global problems

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Our institutions for addressing massive global problems were not fit for purpose. I saw something similar working for the mayor of London in my early twenties. My job was to audit the impact of human rights legislation on communities in the city. I interviewed everyone from British Bangladeshis to local Jewish groups, young and old, of all creeds and backgrounds. The experience showed how human rights law could help improve lives in a very practical way. Unlike the United States, the U.K. has no written constitution protecting people’s fundamental rights. Now local groups could take problems to local authorities and point out they had legal obligations to protect the most vulnerable; they couldn’t brush them under the carpet. On one level it was inspiring. It gave me hope: institutions could have a codified set of rules about justice. The system could deliver. But of course, the reality of London politics was very different. In practice everything devolved into excuses, blame shifting, media... Suleyman, Mustafa. The Coming Wave (p. 189). Crown. Kindle Edition.

Fusion and solar solve the environmental harms

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Energy rivals intelligence and life in its fundamental importance. Modern civilization relies on vast amounts of it. Indeed, if you wanted to write the crudest possible equation for our world it would be something like this: (Life + Intelligence) x Energy = Modern Civilization. Increase any or all of those inputs (let alone supercharge their marginal cost toward zero) and you have a step change in the nature of society. Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer. Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities. Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation. However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real. Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Including wind, hydrogen, and improved battery technologies, here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential. Suleyman, Mustafa. The Coming Wave (pp. 131-132). Crown. Kindle Edition.

New biocomponents will be made from prompts

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

In chapter 5, we saw what tools like AlphaFold are doing to catalyze biotech. Until recently biotech relied on endless manual lab work: measuring, pipetting, carefully preparing samples. Now simulations speed up the process of vaccine discovery. Computational tools help automate parts of the design processes, re-creating the “biological circuits” that program complex functions into cells like bacteria that can produce a certain protein. Software frameworks, like one called Cello, are almost like open-source languages for synthetic biology design. This could mesh with fast-moving improvements in laboratory robotics and automation and faster biological techniques like the enzymatic synthesis we saw in chapter 5, expanding synthetic biology’s range and making it more accessible. Biological evolution is becoming subject to the same cycles as software. Just as today’s models produce detailed images based on a few words, so in decades to come similar models will produce a novel compound or indeed an entire organism with just a few natural language prompts. That compound’s design could be improved by countless self-run trials, just as AlphaZero became an expert chess or Go player through self-play. Quantum technologies, many millions of times more powerful than the most powerful classical computers, could let this play out at a molecular level. This is what we mean by hyper-evolution—a fast, iterative platform for creation. Nor will this evolution be limited to specific, predictable, and readily containable areas. It will be everywhere. Suleyman, Mustafa. The Coming Wave (pp. 142-143). Crown. Kindle Edition.

AI critical to drug development

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

...tuberculosis. Start-ups like Exscientia, alongside traditional pharmaceutical giants like Sanofi, have made AI a driver of medical research. To date eighteen clinical assets have been derived with the help of AI tools. Suleyman, Mustafa. The Coming Wave (p. 144). Crown. Kindle Edition.

AI will be used for bioweapons

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

There’s a flip side. Researchers looking for these helpful compounds raised an awkward question. What if you redirected the discovery process? What if, instead of looking for cures, you looked for killers? They ran a test, asking their molecule-generating AI to find poisons. In six hours it identified more than forty thousand molecules with toxicity comparable to the most dangerous chemical weapons, like Novichok. It turns out that in drug discovery, one of the areas where AI will undoubtedly make the clearest possible difference, the opportunities are very much “dual use.” Dual-use technologies are those with both civilian and military applications. In World War I, the process of synthesizing ammonia was seen as a way of feeding the world. But it also allowed for the creation of explosives, and helped pave the way for chemical weapons. Complex electronics systems for passenger aircraft can be repurposed for precision missiles. Conversely, the Global Positioning System was originally a military system, but now has countless everyday consumer uses. At launch, the PlayStation 2 was regarded by the U.S. Department of Defense as so powerful that it could potentially help hostile militaries usually denied access to such hardware. Dual-use technologies are both helpful and potentially destructive, tools and weapons. What the concept captures is how technologies tend toward the general, and a certain class of technologies come with a heightened risk because of this. They can be put toward many ends—good, bad, everywhere in between—often with difficult-to-predict consequences. Suleyman, Mustafa. The Coming Wave (pp. 144-145). Crown. Kindle Edition.
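The redirection the researchers describe amounts to flipping the sign of the objective inside an otherwise unchanged generative search, which is why dual use is so hard to engineer away. A toy illustration, with an invented score table standing in for a real property-prediction model:

```python
# Toy dual-use illustration: the same search loop returns "best"
# candidates whether the objective rewards benefit or harm; only the
# sign flips. The candidates and scores are invented placeholders,
# not a real toxicity or efficacy model.
import random

random.seed(0)
candidates = [f"molecule-{i}" for i in range(1000)]
score = {m: random.gauss(0.0, 1.0) for m in candidates}  # stand-in property score

def search(objective):
    """Generic stand-in for generative search: rank candidates by objective."""
    return max(candidates, key=objective)

helpful = search(lambda m: score[m])   # seek therapeutic benefit
harmful = search(lambda m: -score[m])  # identical loop, flipped objective
print(helpful, harmful)
```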

Super intelligence is not controllable

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

I’ve often felt there’s been too much focus on distant AGI scenarios, given the obvious near-term challenges present in so much of the coming wave. However, any discussion of containment has to acknowledge that if or when AGI-like technologies do emerge, they will present containment problems beyond anything else we’ve ever encountered. Humans dominate our environment because of our intelligence. A more intelligent entity could, it follows, dominate us. The AI researcher Stuart Russell calls it the “gorilla problem”: gorillas are physically stronger and tougher than any human being, but it is they who are endangered or living in zoos; they who are contained. We, with our puny muscles but big brains, do the containment. By creating something smarter than us, we could put ourselves in the position of our primate cousins. With a long-term view in mind, those focusing on AGI scenarios are right to be concerned. Indeed, there is a strong case that by definition a superintelligence would be fully impossible to control or contain. An “intelligence explosion” is the point at which an AI can improve itself again and again, recursively making itself better in ever faster and more effective ways. Here is the definitive uncontained and uncontainable technology. The blunt truth is that nobody knows when, if, or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values, assuming we can settle on those values in the first place. Nobody really knows how we can contain the very features being researched so intently in the coming wave. There comes a point where technology can fully direct its own evolution; where it is subject to recursive processes of improvement; where it passes beyond explanation; where it is consequently impossible to predict how it will behave in the wild; where, in short, we reach the limits of human agency and control. Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation. The real question is not whether the wave is coming. It clearly is; just look and you can see it forming already. Given risks like these, the real question is why it’s so hard to see it as anything other than inevitable.
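One way to see why recursion is the crux of the "intelligence explosion" claim: ordinary growth compounds exponentially, but if each gain in capability also speeds up further improvement, growth becomes super-exponential and, in the continuous limit, diverges in finite time. A toy numerical comparison with invented parameters:

```python
# Toy model: exponential self-improvement (rate fixed) versus recursive
# self-improvement (rate grows with capability). Parameters are invented
# purely for illustration; this is not a forecast.
r, dt, steps = 0.1, 0.01, 900  # simulate to t = 9.0

c_exp = c_rec = 1.0
for _ in range(steps):
    c_exp += r * c_exp * dt      # dc/dt = r*c   -> ordinary exponential growth
    c_rec += r * c_rec**2 * dt   # dc/dt = r*c^2 -> diverges near t = 1/r
print(f"exponential: {c_exp:.1f}   recursive: {c_rec:.1f}")
```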

There is no “threat construction” – other countries are developing AI and synthetics

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Something similar occurred in the late 1950s, when, in the wake of a Soviet ICBM test and Sputnik, Pentagon decision-makers became convinced of an alarming “missile gap” with the Russians. It later emerged that the United States had a ten-to-one advantage at the time of the key report. Khrushchev was following a tried-and-tested Soviet strategy: bluffing. Misreading the other side meant nuclear weapons and ICBMs were both brought forward by decades. Could this same mistaken dynamic be playing out in the current technological arms races? Actually, no. First, the coming wave’s proliferation risk is acute. Because these technologies are getting cheaper and simpler to use even as they get more powerful, more nations can engage at the frontier. Large language models are still seen as cutting-edge, yet there is no great magic or hidden state secret to them. Access to computation is likely the biggest bottleneck, but plenty of services exist to make it happen. The same goes for CRISPR or DNA synthesis. We can already see achievements like China’s moon landing or India’s billion-strong biometric identification system, Aadhaar, happening in real time. It’s no mystery that China has enormous LLMs, Taiwan is the leader in semiconductors, South Korea has world-class expertise in robots, and governments everywhere are announcing and implementing detailed technology strategies. This is happening out in the open, shared in patents and at academic conferences, reported in Wired and the Financial Times, broadcast live on Bloomberg. Declaring an arms race is no longer a conjuring act, a self-fulfilling prophecy. The prophecy has been fulfilled. It’s here, it’s happening. It is a point so obvious it doesn’t often get mentioned: there is no central authority controlling what technologies get developed, who does it, and for what purpose; technology is an orchestra with no conductor. Yet this single fact could end up being the most significant of the twenty-first century. And if the phrase “arms race” triggers worry, that’s with good reason. Suleyman, Mustafa. The Coming Wave (p. 164). Crown. Kindle Edition.

AI will massively accelerate global growth

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Little is ultimately more valuable than intelligence. Intelligence is the wellspring and the director, architect, and facilitator of the world economy. The more we expand the range and nature of intelligences on offer, the more growth should be possible. With generalist AI, plausible economic scenarios suggest it could lead not just to a boost in growth but to a permanent acceleration in the rate of growth itself. In blunt economic terms, AI could, long term, be the most valuable technology yet, more so when coupled with the potential of synthetic biology, robotics, and the rest. Suleyman, Mustafa. The Coming Wave (p. 175). Crown. Kindle Edition.

LLMs will be able to answer any question on any topic

Mustafa Suleyman, September 2023, The Coming Wave, Suleyman is the co-founder of Deep Mind (AI company sold to Google), former Google AI Director, Co-Founder of Inflection AI (Pi).

Think about the impact of the new wave of AI systems. Large language models enable you to have a useful conversation with an AI about any topic in fluent, natural language. Within the next couple of years, whatever your job, you will be able to consult an on-demand expert, ask it about your latest ad campaign or product design, quiz it on the specifics of a legal dilemma, isolate the most effective elements of a pitch, solve a thorny logistical question, get a second opinion on a diagnosis, keep probing and testing, getting ever more detailed answers grounded in the very cutting edge of knowledge, delivered with exceptional nuance. All of the world’s knowledge, best practices, precedent, and computational power will be available, tailored to you, to your specific needs and circumstances, instantaneously and effortlessly. Suleyman, Mustafa. The Coming Wave (pp. 174-175). Crown. Kindle Edition.

AI not very intelligent and won’t be soon

Issie Lapowsky, 9-5, 23, Fast Company, Why Meta’s Yann LeCun isn’t buying the AI doomer narrative, https://www.fastcompany.com/90947634/why-metas-yann-lecun-isnt-buying-the-ai-doomer-narrative

Of course, there are those who believe the opposite will be true—that as these systems improve, they’ll instead try to drive all of humanity off the proverbial cliff. Earlier this year, a slew of top AI minds, including Geoffrey Hinton and Yoshua Bengio, LeCun’s fellow “AI godfathers” who shared a 2018 Turing Award with him for their advancements in the field of deep learning, issued a one-sentence warning about the need to mitigate “the risk of extinction from AI,” comparing the technology to pandemics and nuclear war. LeCun, for one, isn’t buying the doomer narrative. Large language models are prone to hallucinations, and have no concept of how the world works, no capacity to plan, and no ability to complete basic tasks that a 10-year-old could learn in a matter of minutes. They have come nowhere close to achieving human or even animal intelligence, he argues, and there’s little evidence at this point that they will.  Yes, there are risks to releasing this technology, risks that giant corporations like Meta have quickly become more comfortable with taking. But the risk that they will destroy humanity? “Preposterous,” LeCun says.

AI development will trigger bioweapons that will kill us

Daily Star, 9-4, 23, https://www.dailystar.co.uk/news/weird-news/googles-ai-chief-warns-deadliest-30860214, AI chief warns ‘deadliest pandemics ever’ on horizon with genetic engineering

One of the biggest threats facing the planet is a super-pandemic, warns the co-founder of Google DeepMind AI technology. Mustafa Suleyman is the billionaire co-founder of the computer giant’s DeepMind, but he warns it’s not robots that pose the most danger to mankind. He claims the ability to cook up a deadly pandemic at home is likely to become commonplace before the end of this decade. Discussing the future of genetic engineering, he warned: “I think that the darkest scenario there is that people will experiment with …synthetic pathogens that might end up accidentally or intentionally being more transmissible; they can spread faster, or be more lethal…” Similarly, he said advanced AI technology is getting cheaper and easier to obtain at an alarming rate thanks to the tech being made “open”. It means anyone can get their hands on the technology – and use it to help them cheat on their exams or cook up a virus that could paralyse the world. That’s why, Mustafa says, an international treaty – including America’s perceived enemies such as Russia and China – needs to be agreed to limit the use of advanced AI and genetic manipulation. “There’s a shared goal that is between us and China and every other …’enemy’ that we want to create; we’ve all got a shared interest in advancing the collective health and well-being of humans and Humanity,” he says.

AI could kill us all

Andrew Freedman, Ryan Heath, Sam Baker, 8-1, 2023, Existential threats to humanity are soaring this year, https://www.axios.com/2023/08/01/climate-change-artificial-intelligence-nuclear-war-existential

Put aside your politics and look at the world clinically, and you’ll see the three areas many experts consider existential threats to humanity worsening in 2023. Why it matters: This isn’t meant to start your day with doom and gloom. But focus your mind on how the threats of nuclear catastrophe, rising temperatures and all-powerful AI capabilities are spiking worldwide. It underscores the urgent need for smart people running government — and big companies — to solve increasingly complex problems at faster rates. Climate: The danger is becoming impossible to ignore, Axios’ Andrew Freedman writes. You just lived through the hottest month ever recorded on Earth. The world’s oceans are absurdly warm, with temperatures in the 90s °F around the Florida Keys, bleaching and even killing coral reefs in just one week. Antarctic sea ice is plummeting even in the dead of winter. Wildfires are raging. Climate scientists don’t relish saying, “I told you so,” but they’ve been warning for years that each seemingly incremental rise in global average temperatures would translate into severe heat waves, droughts, floods and stronger hurricanes. And the worst part is, we can’t even call this our “new normal,” because it’s going to keep getting worse as long as carbon emissions keep increasing. This is a global problem that will require a global solution, but tensions between the world’s top two emitters — the U.S. and China — are high, and getting the big global powers to abide by a sufficiently hardcore climate commitment has so far proven impossible. AI: The technology’s top architects say there’s a non-zero chance it’ll destroy humanity — and they don’t really know how or why it works, Axios’ Ryan Heath reports. AI — with its ability to mass-produce fake videos, soundbites and images — poses clear risks to Americans’ already tenuous trust in elections and institutions. Nukes: China has expanded its nuclear arsenal on land, air and sea — raising the likelihood of a dangerous new world with three, rather than two, nuclear superpowers, Axios’ Sam Baker writes. “Beijing, Moscow and Washington will likely be atomic peers,” the N.Y. Times reports. “This new reality is prompting a broad rethinking of American nuclear strategy that few anticipated a dozen years ago.” Russian President Vladimir Putin said this summer that he moved some of his country’s roughly 5,000 nuclear weapons into Belarus — closer to Ukraine and Western Europe. President Biden warned in June that Putin’s threat to use tactical nuclear weapons in Ukraine is “real.”

AI deep fakes will undermine democracy

Rebecca Klar, 6-18, 23, The Hill, How AI is changing the 2024 election, https://thehill.com/homenews/campaign/4054333-how-ai-is-changing-the-2024-election/

As the generative artificial intelligence (AI) industry booms, the 2024 election cycle is shaping up to be a watershed moment for the technology’s role in political campaigns. The proliferation of AI — a technology that can create text, image and video — raises concerns about the spread of misinformation and how voters will react to artificially generated content in the politically polarized environment. Already, the presidential campaigns for former President Trump and Florida Gov. Ron DeSantis (R) have produced high-profile videos with AI. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said the proliferation of the AI systems available to the public, awareness of how simple it is to use them and the “erosion of the sense that creating things like deepfakes is something that good, honest people would never do” will make 2024 a “significant turning point” for how AI is used in campaigns. “I think now, increasingly, there’s an attitude that, ‘Well, it’s just the way it goes, you can’t tell what’s true anymore,’” Barrett said. The use of AI-generated campaign videos is already becoming more normalized in the Republican primary. After DeSantis announced his campaign during a Twitter Spaces conversation with company CEO Elon Musk, Trump posted a deepfake video — which is a digital representation made from AI that fabricates realistic-looking images and sounds — parodying the announcement on Truth Social. Donald Trump Jr. posted a deepfake video of DeSantis edited into a scene from the television show “The Office,” and the former president has shared AI images of himself. Last week, DeSantis’s campaign released an ad that used seemingly AI-produced images of Trump embracing Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases. “If you proposed that 10 years ago, I think people would have said, ‘That’s crazy, that will just backfire,’” Barrett said. “But today, it just happens as if it’s normal.” Critics noted that DeSantis’s use of the generated photo of Trump and Fauci was deceptive because the video does not disclose the use of AI technology. “Using AI to create an ominous background or strange pictures, that’s not categorically different than what advertising has long been,” said Robert Weissman, president of the progressive consumer rights advocacy group Public Citizen. “It doesn’t involve any deception of voters.” “[The DeSantis ad] is fundamentally deceptive,” he said. “That’s the big worry that voters will be tricked into believing things are true that are not.” Someone familiar with DeSantis’s operation noted that the governor’s presidential campaign was not the only campaign using AI in videos. “This was not an ad, it was a social media post,” the person familiar with the operation said. “If the Trump team is upset about this, I’d ask them why they have been continuously posting fake images and false talking points to smear the governor.” While proponents of AI acknowledge the risks of the technology, they argue it will eventually play a consequential role in campaigning. “I believe there’s going to be new tools that streamline content creation and deployment, and likely tools that help with data-intensive tasks like understanding voter sentiment,” said Mike Nellis, founder and CEO of the progressive agency Authentic. Nellis has teamed up with Higher Ground Labs to establish Quiller.ai, which is an AI tool that has the ability to write and send campaign fundraising emails.
“At the end of the day, Quiller is going to help us write better content faster,” Nellis told The Hill. “What happens on a lot of campaigns is they hire young people, teach them to write fundraising emails, and then ask them to write hundreds more, and it’s not sustainable. Tools like Quiller get us to a better place and it improves the efficiency of our campaigns.” As generative AI text and video become more common — and increasingly difficult to discern as the generated content appears more plausible — there’s also a concern that voters will become more skeptical about all content AI generates. Sarah Kreps, director of the Cornell Tech Policy Institute, said people may start to either “assume that nothing is true” or “just believe their partisan cues.” “Neither one of those is really helpful for democracy. If you don’t believe anything, this whole pillar of trust we rely on for democracy is eroded,” Kreps said. ChatGPT, which is OpenAI’s AI-powered chatbot, burst onto the scene with an exponential rise in use since its November launch, along with rival products like Google’s Bard chatbot and image and video-based tools. These products have the administration and Congress scrambling to consider how to address the industry while staying competitive on a global scale. But as Congress mulls regulation, between scheduled Senate briefings and a series of hearings, the industry has been largely left to create the rules of the road. On the campaign front, the rise of AI-generated content is magnifying the already prevalent concerns of election misinformation spreading on social media. Meta, the parent company of Facebook and Instagram, released a blog post in January 2020 stating it would remove “misleading manipulated media” that meets certain criteria, including content that is the “product of artificial intelligence or machine learning” that “replaces or superimposes content onto a video, making it appear to be authentic.” Ultimately, though, Barrett said the burden of deciphering what is AI-generated or not will fall on voters. “This kind of stuff will be disseminated, even if it is restricted in some way; it’ll probably be out in the world for a while before it is restricted or labeled, and people have to be wary,” he said. Others point out that it’s still too difficult to predict how AI will be integrated into campaigns and other organizations. “I think the real story is that new technologies should integrate into business at a deliberate and careful pace, and that the inappropriate/almost immoral uses are the ones that are going to get all the attention in the first inning, but it’s a long game and most of the productive useful integrations will evolve more slowly and hardly even be noticed,” said Nick Everhart, a Republican political consultant in Ohio and president of Content Creative Media. Weissman noted that Public Citizen has asked the Federal Election Commission to issue a rule to the extent of its authority to prohibit the use of deceptive deepfakes. “We think that the agency has authority as it regards candidates but not political committees or others,” Weissman said. “That would be good, but it’s not enough.” However, it remains unclear how quickly campaigns will adopt AI technology this cycle. “A lot of people are saying this is going to be the AI election,” Nellis said. “I’m not entirely sure that’s true. The smart and innovative campaigns will embrace AI, but a lot of campaigns are often slow to adopt new and emerging technology. I think 2026 will be the real AI election.”

AI will kill everyone

The Week, June 17, 2023, https://theweek.com/artificial-intelligence/1024341/ai-the-worst-case-scenario, AI: The worst-case scenario; Artificial intelligence’s architects warn it could cause human “extinction.” How might that happen?

Artificial intelligence’s architects warn it could cause human “extinction.” How might that happen? Here’s everything you need to know: What are AI experts afraid of? They fear that AI will become so superintelligent and powerful that it becomes autonomous and causes mass social disruption or even the eradication of the human race. More than 350 AI researchers and engineers recently issued a warning that AI poses risks comparable to those of “pandemics and nuclear war.” In a 2022 survey of AI experts, the median odds they placed on AI causing extinction or the “severe disempowerment of the human species” were 1 in 10. “This is not science fiction,” said Geoffrey Hinton, often called the “godfather of AI,” who recently left Google so he could sound a warning about AI’s risks. “A lot of smart people should be putting a lot of effort into figuring out how we deal with the possibility of AI taking over.” When might this happen? Hinton used to think the danger was at least 30 years away, but says AI is evolving into a superintelligence so rapidly that it may be smarter than humans in as little as five years. AI-powered ChatGPT and Bing’s Chatbot already can pass the bar and medical licensing exams, including essay sections, and on IQ tests score in the 99th percentile — genius level. Hinton and other doomsayers fear the moment when “artificial general intelligence,” or AGI, can outperform humans on almost every task. Some AI experts liken that eventuality to the sudden arrival on our planet of a superior alien race. You have “no idea what they’re going to do when they get here, except that they’re going to take over the world,” said computer scientist Stuart Russell, another pioneering AI researcher. How might AI actually harm us? One scenario is that malevolent actors will harness its powers to create novel bioweapons more deadly than natural pandemics. As AI becomes increasingly integrated into the systems that run the world, terrorists or rogue dictators could use AI to shut down financial markets, power grids, and other vital infrastructure, such as water supplies. The global economy could grind to a halt. Authoritarian leaders could use highly realistic AI-generated propaganda and Deep Fakes to stoke civil war or nuclear war between nations. In some scenarios, AI itself could go rogue and decide to free itself from the control of its creators. To rid itself of humans, AI could trick a nation’s leaders into believing an enemy has launched nuclear missiles so that they launch their own. Some say AI could design and create machines or biological organisms like the Terminator from the film series to act out its instructions in the real world. It’s also possible that AI could wipe out humans without malice, as it seeks other goals. How would that work? AI creators themselves don’t fully understand how the programs arrive at their determinations, and an AI tasked with a goal might try to meet it in unpredictable and destructive ways. A theoretical scenario often cited to illustrate that concept is an AI instructed to make as many paper clips as possible. It could commandeer virtually all human resources to the making of paper clips, and when humans try to intervene to stop it, the AI could decide eliminating people is necessary to achieve its goal. A more plausible real-world scenario is that an AI tasked with solving climate change decides that the fastest way to halt carbon emissions is to extinguish humanity. 
“It does exactly what you wanted it to do, but not in the way you wanted it to,” explained Tom Chivers, author of a book on the AI threat. Are these scenarios far-fetched? Some AI experts are highly skeptical AI could cause an apocalypse. They say that our ability to harness AI will evolve as AI does, and that the idea that algorithms and machines will develop a will of their own is an overblown fear influenced by science fiction, not a pragmatic assessment of the technology’s risks. But those sounding the alarm argue that it’s impossible to envision exactly what AI systems far more sophisticated than today’s might do, and that it’s shortsighted and imprudent to dismiss the worst-case scenarios. So, what should we do? That’s a matter of fervent debate among AI experts and public officials. The most extreme Cassandras call for shutting down AI research entirely. There are calls for moratoriums on its development, a government agency that would regulate AI, and an international regulatory body. AI’s mind-boggling ability to tie together all human knowledge, perceive patterns and correlations, and come up with creative solutions is very likely to do much good in the world, from curing diseases to fighting climate change. But creating an intelligence greater than our own also could lead to darker outcomes. “The stakes couldn’t be higher,” said Russell. “How do you maintain power over entities more powerful than you forever? If we don’t control our own civilization, we have no say in whether we continue to exist.” A fear envisioned in fiction: Fear of AI vanquishing humans may be novel as a real-world concern, but it’s a long-running theme in novels and movies. In 1818’s “Frankenstein,” Mary Shelley wrote of a scientist who brings to life an intelligent creature who can read and understand human emotions — and eventually destroys his creator. In Isaac Asimov’s 1950 short-story collection “I, Robot,” humans live among sentient robots guided by three Laws of Robotics, the first of which is to never injure a human. Stanley Kubrick’s 1968 film “2001: A Space Odyssey” depicts HAL, a spaceship supercomputer that kills astronauts who decide to disconnect it. Then there’s the “Terminator” franchise and its Skynet, an AI defense system that comes to see humanity as a threat and tries to destroy it in a nuclear attack. No doubt many more AI-inspired projects are on the way. AI pioneer Stuart Russell reports being contacted by a director who wanted his help depicting how a hero programmer could save humanity by outwitting AI. No human could possibly be that smart, Russell told him. “It’s like, I can’t help you with that, sorry,” he said.