Scientists reveal the 13 dark technology scenarios that keep them up at night

The rise of AI has left many open-ended questions about the future of technology and the unintended consequences it may lead to. Yu Ruidong/China News Service/Visual China Group via Getty Images
  • The rapid advancement of technology is causing people plenty of anxiety — including the scientists, researchers, and futurists working on the technology itself.
  • The advent of artificial intelligence, drones, and self-driving vehicles raises many open-ended questions about the future of technology and the unintended consequences it may lead to.
  • We spoke to scientists and researchers to find out the 13 dark scenarios that keep them up at night.

Back in 1970, author Alvin Toffler coined the term "future shock."

He envisioned a dystopian future in which technological and sociological change would be too rapid to cope with, and people would be utterly overwhelmed by daily life. 

In the last decade or two, the pace of technological change has accelerated rapidly compared to the era in which Toffler was writing.

Artificial intelligence, social media, self-driving cars, genetic modification you can perform in a garage — even the most optimistic futurists may acknowledge that some technologies are heading toward a tipping point, past which it becomes difficult to predict how they'll be used or what unintended consequences may occur.


Shows like "Black Mirror" routinely take a fictional look at these dark technological scenarios, but the show might not be so far-fetched after all. We talked to scientists, technologists, and researchers to see what terrifying but plausible applications for emerging technologies keep them awake at night.


Computers could eventually learn to discriminate against human workers in hiring processes.


Thanks to modern machine learning, artificial intelligence can already defeat us at games like chess and Go.

And when one AI faces off against another, they can both become so good that humans don't stand a chance of winning. Today, AI is helping people perform a vast number of tasks — everything from improving photos as they're taken to making grammar recommendations in word processing apps.

That dynamic is at the heart of Rob Peterscheck's concern. Peterscheck is principal consultant at Small Scale AI, and he worries that as job seekers start to rely on AI-based tools to write résumés, those résumés will inevitably be submitted to employers who use their own AIs to help select candidates in the hiring process.

"The AI that is judging résumés has the effect of simultaneously training the AI that's writing résumés," Peterscheck told Business Insider. 

This feedback loop means the résumé-writing AI has an opportunity to learn from every submission, and eventually create résumés so likely to be selected that mere humans writing on their own can't possibly compete. 

"This means a human could never write a résumé better than an AI when an AI is judging the results," Peterscheck said.

In fact, in any scenario in which machine learning systems can be trained to do a task, the machines will inevitably get better than humans, Peterscheck said. 

"If a machine is judging results, over time this will reduce the available work for humans," he said.


A swarm of drones could easily obliterate America's infrastructure.

In this May 21, 2019 photo, two drones fly above Lake Street in downtown Reno, Nev., as part of a NASA simulation to test emerging technology that someday will be used to manage travel of hundreds of thousands of commercial, unmanned aerial vehicles (UAVs) delivering packages. It marked the first time such tests have been conducted in an urban setting. (AP Photo/Scott Sonner)

Drones have become commonplace, routinely flying missions for construction companies, civil engineers, security firms, real estate agents, and a hundred other applications. They're so common that the Federal Aviation Administration has a drone pilot certification program.

But aside from being convenient for business, drones may pose a serious security threat, slipping through domestic security loopholes like mosquitoes through a chain-link fence.

J. Luke Bennecke is a 30-year veteran civil engineer who specializes in transportation engineering and doubles as a techno-thriller novelist. It's the vulnerability of our national power infrastructure — and how easily drones could attack it — that keeps Bennecke awake at night.

"It's very easy to set up thousands of drones that cost maybe a thousand dollars each. That's a couple of million dollars, which is nothing for a terrorist," Bennecke told Business Insider. "You can make a block of Semtex with homemade materials from Home Depot and the local market, and affix it to the bottom of the drones. You can fly them with a pre-programmed route to crash into buildings, power plants, interchanges, whatever."

Semtex is a potent plastic explosive commonly used in commercial blasting operations as well as military applications, but it has also been used by terrorists.

The Department of Homeland Security has documented no fewer than 16 "critical infrastructure sectors […] considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, [or] national public health or safety."

Bennecke is concerned that a mass attack could cripple the entire nation, but to date, the government hasn't proactively addressed this risk — even though drone technology gets better and cheaper with each passing year.


Toy drones could reveal your favorite hiding places to people looking for you.

Activists Valerie Milner-Brown and Linda Davidsen fly drones near Heathrow Airport in London, Britain, September 12, 2019. REUTERS/Henry Nicholls

As drones and other remote-operated vehicles get smarter, cheaper, and better connected, toy drones are becoming commonplace. Both kids and adults play with ground-based drones, and connectivity to a remote server can enable them to upload video and rely on AI to train themselves to maneuver around the home.

James Song, a leading technologist at Shadow Foundry, is concerned about a particular nightmare scenario that can result when the data these toys collect is used maliciously. 

"The sensors and video feed of these drones will be able to train an AI to maneuver within the home. If the data mis-appropriated, that means you're buying toys that have the potential to teach military computers how to enter your home and find you. And because it's a toy, you're teaching it details like your personal favorite hiding places."  

Song added that AI systems can learn from that, and if killer robots are ever deployed, they'll arrive already knowing the layout of your home as well as "what you're thinking before you even think it."


In the 'paper-clip maximizer' scenario, humans are ground up into paper clips by overeager AI.


Sergey Yudovskiy, CEO of the robotics company electroNeek, is concerned about automation.

It's not a new concern. People have had existential worries about automation since the birth of the Industrial Revolution. But electroNeek develops robotic process automation technology, through which bots can be taught to imitate human workflow, automating and streamlining tasks. 

"This technology develops really fast. Lots of companies already talk about Cognitive RPA, which will be able to make decisions on its own," said Yudovskiy, which suggests that robots can find their own shortcuts to accomplish tasks faster without human training or intervention.

And that leads to the dreaded "paper-clip maximizer" nightmare scenario — a thought experiment from philosopher Nick Bostrom that AI scientists fret over late at night when they can't sleep.

In the paper-clip maximizer scenario, rather than AI exterminating humans out of malice in the style of popular movies like "The Terminator," it happens purely by accident.

Imagine an AI robot tasked with making as many paper clips as possible. It begins fulfilling its mission to the exclusion of all other criteria — in the process, annihilating all life on Earth because it didn't prioritize or understand the value of anything else. But on the plus side, it makes all the paper clips.

That exact scenario might not ever play out, but the lesson, according to Bostrom, is that "we need to be careful about what we wish for from a superintelligence, because we might get it."
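The core of the thought experiment is an objective with no competing terms: the agent maximizes one number and treats everything else as raw material. Here is a deliberately simplified, hypothetical sketch of that failure mode; the resource names, quantities, and conversion rate are all invented for illustration.

```python
# Toy illustration of Bostrom's paper-clip maximizer: the objective counts
# only paper clips, so the optimizer happily consumes every other resource.
# All quantities and the conversion rate below are made up for this sketch.

resources = {"steel": 100, "factories": 10, "everything_else": 1_000_000}
paper_clips = 0

def convert(resource_units: int) -> int:
    """Pretend any resource can be turned into clips at a fixed rate."""
    return resource_units * 5

# The "utility function" rewards clips and nothing else, so the loop never
# pauses to ask whether a resource was worth preserving.
for name in list(resources):
    paper_clips += convert(resources[name])
    resources[name] = 0

print(paper_clips, resources)  # plenty of clips, nothing else left
```

The point is not that anyone would deploy a loop like this, but that a far more capable optimizer given an equally lopsided objective would pursue it with the same indifference.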


Self-driving cars can be hacked and remotely controlled.


A future in which most cars are self-driving appears to be inevitable, with various semi-autonomous self-driving features available in models from virtually every major car maker, and companies like Uber investing heavily in self-driving research. It's just a matter of time before you don't drive your car — it drives you.

But self-driving cars are likely to rely heavily on mobile networks like 5G and connected services for their smarts, and concerns about hackers taking control of cars are common among researchers. Wired, for example, worked with a hacker to demonstrate that an ordinary Jeep Cherokee could be hacked remotely while it was driving down the freeway.

Bennecke thinks that anyone concerned about hackers taking control of a single car is thinking too small. He based his novel "Civil Terror: Gridlock" on this idea.

"If the road has a dedicated lane for self-driving cars, you can pack them together much more closely. You can have a line of cars all going 60 mph with 5-foot gaps between them. That's the equivalent of hundreds of billions of dollars of infrastructure improvement without actually having to do any widening of the freeways," Bennecke told Business Insider.

But if a terrorist could take control of this network of cars, he said, you could have a massive high-speed pileup, resulting in countless fatalities.

"The loss of life would be almost beyond comprehension," he said.


A massive ransomware attack could cripple the world economy.


Ransomware is a pressing issue in the tech world. In a ransomware attack, cybercriminals infect computers with malware that encrypts all the files, and then demand a ransom in exchange for the decryption key.

Some researchers worry about the Moby Dick of ransomware attacks: a hypothetical global outbreak that's been dubbed the Bashe scenario.

Michael Gillespie, a security researcher at Emsisoft, said "such a global ransomware attack would dwarf the WannaCry outbreak and be a true nightmare scenario."

A well-organized ransomware attack that targets mobile devices — which until now have been largely immune to ransomware — could encrypt data on about 30 million devices worldwide. 

"It could have an economic cost of $193 billion. Such an attack would impact every sector, from finance to transportation to energy production, causing global disruption on an unprecedented scale. Think 'The Walking Dead,' but without the zombies," Gillespie told Business Insider.


5G networks could unintentionally amplify cyberattacks and stop entire cities from functioning.

A market maker works on the trading floor at IG Index in London, Britain January 14, 2016. REUTERS/Stefan Wermuth

The next generation of wireless cellular, called 5G, has started to roll out across the US, delivering speeds orders of magnitude faster than the 4G networks in use today.

For consumers, that means better video playback on mobile devices, but for businesses, 5G is expected to enable all manner of connected devices to work together more efficiently — from connected cars to factory automation to smart buildings.

Joseph Cortese, associate director at the cybersecurity firm A-LIGN, said that will be a good thing. 

"We will see a rush of businesses attempting to be the first-to-market with 5G enabled devices," Cortese said. "This will lead to an enormous swell in the size of the Internet of Things, with thousands of new devices joining the network." The Internet of Things is the name given to networks of connected devices in homes, business, and across cities. 

But Cortese also said we need to be prepared for cyberattacks, which reliance on 5G could make unimaginably worse. 

"Distributed denial of service attacks have the potential to quickly overload 5G networks and impact critical services. In the past, DDoS attacks have troubled services like Netflix and Airbnb, but in the future, the Internet of Things will be used for things like directing traffic patterns and providing emergency services workers with critical information."

In such an attack, large parts of a city's infrastructure could be rendered useless. In a smart city that depends on the Internet of Things, operations could be brought to a halt.

Cortese said, "Even a simple attack has the potential to cripple a smart city that relies on 5G networks to function."


CRISPR could be used to engineer a virus that insidiously infects unborn children.


In recent months, science news has been filled with headlines about CRISPR, a technology that allows researchers to edit DNA with unprecedented ease and accuracy. Proponents herald CRISPR as a genetic tool that is now being used to remove malaria from mosquitoes, treat muscular dystrophy, and improve food.

Eventually, CRISPR could rewrite medical science at a fundamental level, eliminating genetically transmitted diseases, creating highly targeted drugs, and more.

But the very features that make CRISPR so attractive to medical researchers could also be used by medically savvy terrorists on a limited budget. Bennecke offers a terrifying scenario in which terrorists infect the water supply with a CRISPR-created virus.

"With CRISPR, it's possible to weaponize the water supply," Bennecke told Business Insider. "You could create a virus that actually modifies the human gametes in male. People think they have the flu, and get better. But unbeknownst to them, their sperm cells are changed, which will affect everyone's offspring."


Deepfake videos can end political and corporate careers.


Deepfakes are sophisticated video forgeries in which AI tools can replace people's faces in a video. For the most part, deepfakes are used maliciously, often to fabricate videos like revenge porn. The technology — which is already free and easy to use — is only getting better.

Matt Rahman, COO of the computer security company IOActive, said he is especially interested in the dangers of deepfakes. 

"In a few years, you won't be able to tell what's fake," he said, because such video will be indistinguishable from reality.

That means disinformation campaigns using deepfakes could derail politicians' careers and negatively affect voter turnout for elections, as well as foment crises in the private sector. 

"By the time the fake video comes out and spreads through media outlets, a publicly traded company stock can drop by several percent," Rahman said. "While the company tries to restore credibility and investor confidence, the adversary is already making money on the company's stock. The videos can cause fast and furious damage, since it takes a few days for the PR department, lawyers, and crisis teams to handle the situation."


And deepfakes may completely undermine democracy and the press.


Deepfakes pose a substantial existential threat to society. It's not just about the risk to politicians and other public figures who are being targeted by cybercriminals — the entire system is potentially at risk. 

That's the contention of Danika Laszuk, general manager of the venture capital fund Betaworks Camp.

"Deepfakes have the potential to completely undermine of truth," Laszuk told Business Insider. "If public figures can use the mere existence of the technology to cast doubt on the veracity of a video they don't like, the public will get exhausted trying to distinguish fact from fiction. This has the potential to undermine democracy here and around the world, and provide fodder for conspiracy theorists and others with opaque agendas."


Targeted advertising might become impossible to resist.


We're surrounded by advertising, and it's increasingly personal and targeted.

A number of technologies are converging to make advertising more effective, including "digital twins," which are virtual models of consumers that are built from datasets that illuminate everything that is knowable about a person, from their physical attributes and demographic information to their personal preferences. And this information will only get more precise and useful to advertisers as time goes on. 
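Concretely, a consumer "digital twin" is less a simulation than a merged profile record keyed to one person, which ad models can score against a campaign. The sketch below is a rough, hypothetical illustration; every field and the relevance formula are assumptions for this article, not any ad platform's real schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a consumer "digital twin": a single record merging
# demographic, behavioral, and preference data about one person. All fields
# and the scoring rule are invented for illustration.

@dataclass
class DigitalTwin:
    person_id: str
    age: int
    location: str
    income_band: str
    interests: list[str] = field(default_factory=list)
    recent_purchases: list[str] = field(default_factory=list)

    def ad_relevance(self, ad_tags: set[str]) -> float:
        """Crude relevance score: overlap between the ad's tags and known interests."""
        if not self.interests:
            return 0.0
        return len(ad_tags & set(self.interests)) / len(ad_tags)

twin = DigitalTwin("u123", 34, "Denver", "50-75k",
                   interests=["cycling", "coffee", "smart home"])
print(twin.ad_relevance({"coffee", "espresso machines"}))  # 0.5
```

The more datasets that get joined into a record like this, the finer-grained the targeting Cabot worries about becomes.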

To be sure, we're still in the early days of this kind of hyper-personalized advertising. Targeted ads, for example, continue to recommend the toaster you already bought.

But Charlie Cabot, research lead at Privitar, said he is concerned that if trends continue, hyper-personalized advertising will become "too manipulative."

"As AI improves, and the ad models can get access to more data, the capacity to persuade you to buy may reach unacceptable levels," Cabot told Business Insider, suggesting we may be literally unable to resist this kind of advertising. "Just imagine what will happen when this technology is also applied to political ads."


Self-driving cars could choose who to crash into based on net worth.


Science fiction is often fueled by computers and robots with self-aware artificial general intelligence — a sentient AI that has the capacity to understand the world in much the same way that humans do.

But some experts fear a greater threat from more narrow AIs — the kind we are getting good at creating today, optimized to solve a single kind of problem (like how to compose music or drive a car) and with no awareness beyond that.

These AIs often have access to vast amounts of data and their designers have a murky understanding of the AI's decision-making process, since machine learning systems are generally self-taught and too complex for human programmers to reverse engineer.

Shriram Ramanathan, a senior analyst at Lux Research, said he worries all of these factors are converging in ways we can't predict.

"Researchers are unleashing AI technologies into the marketplace with little understanding of the long-term impact," he told Business Insider. "Most of these AIs are trained for a narrow set of use cases and definitely not capable of handling black swan events," or events with severe but unexpected consequences.

The result, Ramanathan said, may be chilling. The trolley problem, for example, is a classic philosophical thought experiment in which a runaway trolley is barreling toward one of two groups of people, and a bystander at the switch must choose which group it will hit. This thought experiment is real for self-driving car engineers, who must equip a car to make real-time decisions with life-and-death consequences.

"What if a semi-autonomous car makes a decision on which person to crash into by analyzing LinkedIn and Facebook data to determine an individual's net worth?" Ramanathan told Business Insider.


People may choose to replace healthy limbs with prostheses.


Modern prosthetics would be barely recognizable to doctors from just a generation ago.

Today, thanks to computers that are integrated directly into prosthetics and read signals from nerve endings, people with cutting-edge artificial limbs can control their prosthetics in much the same way as natural arms and legs. A prototype prosthesis even allows patients to feel natural sensations from the device, which enables them to walk while blindfolded — an unprecedented advancement in artificial limbs.

Hiep Dang, senior director of product management at the software firm Cylance, said he imagines that these devices will continue to evolve and improve. Eventually, some people may want to augment their body to take advantage of the technology.

"With the growing sophistication in prosthetics, people may decide to have elective surgery to replace fully functioning body parts," Dang said. "Prostheses will continue to become stronger, more durable, reliable and dexterous. We will reach a tipping point where an organic body will be a liability." 

"To what extent will we develop prostheses to enhance and extend human performance?"
