
Can we keep our technology from slipping out of control?

In A Dangerous Master, a bioethicist argues for debate before tech is adopted.

A Dangerous Master is reminiscent of—and sometimes even references—about a million popular books and movies: Robert Heinlein's I Will Fear No Evil; Isaac Asimov's I, Robot; David Mitchell's The Bone Clocks; Kazuo Ishiguro's Never Let Me Go; Neal Stephenson's The Diamond Age; Gattaca; The Matrix; X-Men; The Phantom Menace.

But while these works and the various dystopias they depict are characterized as speculative fiction, Wendell Wallach's book and the various dystopias it depicts warrant neither qualifier. They reside firmly in the real world—or could imminently, if we do not heed his warning to vigilantly track technological developments and constantly assess whether the benefits they provide are worth the risks they inevitably engender.

All technological innovations, starting with the fire brought down from Olympus by Prometheus, are a hopelessly entangled mass of risks and benefits. Wallach is in a good position to know. He has chaired the Technology and Ethics study group at Yale University's Interdisciplinary Center for Bioethics for most of its thirteen-year existence. He knows that scientific inquiry and discovery will inevitably lead to technologies that can be used for ill as well as for good.

And he is OK with that—he does not want to curb scientific inquiry or discovery. He just doesn't want us adopting technologies blindly. As of now, that seems to be exactly what we're doing. Too many potentially dangerous, or at the very least highly controversial, technologies become facts on the ground before the public is even aware of them. The public—meaning us—never gets the chance to reflect on the utility of these innovations and debate whether or not they're worth it.

Wallach gives numerous examples of the tradeoffs inherent in adopting new technologies, some of which are familiar from headlines and some of which are less so. The push and pull between surveillance and privacy provides a perfect case study. After September 11, stopping terrorists became one of the US government's highest priorities. While that goal is obviously admirable, our desperation for security led to a surveillance program that is (a) hardly in line with the ideals of democracy and freedom upon which our society is based and (b) not even necessarily effective.

A mature, introspective society needs to reflect on the ramifications and potential uses and abuses of technologies like those employed for mass surveillance and decide whether they are worthwhile before those technologies are put into action. Times of crisis and communal panic, like the wake of the attacks on the World Trade Center, are exactly the wrong time for such reflection.

Some of the technologies Wallach considers, like the drastic extension of life via cryonic preservation or the uploading of personalities into "mind-files," are so speculative and far off that it seems almost silly to debate their merits. But today's science fiction tropes have a habit of turning into tomorrow's realities. And debating something that seems abstract can sometimes shed light on issues that are too hot-button to be debated calmly.

Will these mind-files have rights? Will only the wealthy be able to achieve this kind of techno-immortality? How will a glut of older minds and perhaps bodies affect the job prospects, creative impulses, and resources of younger generations? Wallach insists that the time to hammer out the extent to which we as a society are willing to accept the risks of a particular technology is precisely when it is so speculative that it seems unreal. Because once it is real, it is too late.

So, OK, we must sit down and weigh the costs and benefits of a given path and then decide whether that path should be taken. But Wallach also wonders whether there are certain paths we should never go down, ever, just on principle, no matter how great the potential benefits. If so, who gets to decide which paths those are?

This is a particularly confounding question in terms of biological innovations, which trigger people's "yuck" factor—they can't exactly articulate why something is wrong; they just know viscerally that it is. Although there are those who claim that this type of primal revulsion is a valuable evolutionary metric and shouldn't be discounted, it is too often used to condemn activities that run counter to the religious beliefs of one particular group. To give an example, the "yuck" argument has been used to vilify both homosexuality and genetically modified crops.

As of now, the same genetic engineering that can be used to make a synthetic energy-producing organism can also be used to make a synthetic pathogen; it can cure genetic disorders, but it can also be used to produce only blond, blue-eyed progeny. And this technology is not confined to tightly regulated laboratory settings—DIY biologists can already do some of these things in their garages.

No global consensus has been reached about whether this is "playing God" and should be absolutely verboten or a great therapeutic stride forward and should be embraced and promoted. That's largely because there has been barely any public debate over it.

The military arena is another where Wallach thinks there are lines that should not be crossed and things that should not be done. But rather than the "yuck" factor, here he cites the concept that Roman philosophers deemed mala in se—things that are evil in themselves. Rape and biological weapons, he says, are both mala in se (not that this has seemed to preclude their use in war). He thinks the same of killer robots—not the drones we have now, which are ultimately directed by human beings, but machines that will autonomously "decide" to take a human life without any input from a human agent.

He thinks this because machines cannot be held responsible for their actions. While that seems like a fairly uncontroversial claim (especially when applied to killer robots), what about self-driving cars? They are machines that will undoubtedly kill someone at some point. Before that point, we had better have some idea of who, precisely, can be held accountable when a machine kills a person.

Between the time of this book's writing and its publication, France expanded its domestic surveillance in the wake of the Charlie Hebdo attacks; the NSA's mass surveillance program revealed by Edward Snowden was declared illegal; Google was awarded a patent for downloading specific personalities and embalming them in robotic form; the genomes of human embryos were edited; and the idea that robots can take over almost any job previously thought to need a human touch reached the mainstream.

We clearly missed our chance to figure out whether or not we want these technologies. Wallach is adamant that we don't also miss the chance to figure out how best to harness them.

Listing image by Ged Carroll