Science, unlike technology, is an absolute good, and learning about the world is a kind of categorical imperative: an unconditional moral obligation that is its own justification. Those who expand human thought are especially heroic, because they replace obscurity with the truth, which, however shocking, is always salutary.
But science is only directly useful insofar as it leads to new technologies. In my new life, I often ask myself: With the time I have left, what novel technologies should I pursue? Which should I reject? Not long ago, the partners at my firm considered a technology that might prevent disease. But we chose to let someone else commercialize it, because its expansive powers and potential liability confounded us. Was our choice admirable or cowardly?
These are not easy questions, not least because there is no consensus—and surprisingly little systematic writing—about what technology is and how it develops. The best general book on the subject, Brian Arthur’s The Nature of Technology: What It Is and How It Evolves (2009), distinguishes between the singular use of the word technology as a means to fulfill a human purpose (for instance, a speech-recognition algorithm or a filtration process) and its generic use for an assemblage of practices and components (technological “domains,” like electronics or biotechnology). Arthur, an economist at the Santa Fe Institute who refined models of increasing returns, writes, “A technology is more than a mere means. It is … an orchestration of phenomena to our use.”
If technology is functional and its value instrumental, then it follows that not all singular applications of technological domains are equal. Nuclear fission can power a plant or detonate a bomb. The Haber-Bosch process, which converts atmospheric nitrogen to ammonia by a reaction with hydrogen, was used to manufacture munitions in Germany during World War I, but half the world’s population now depends on food grown with nitrogen fertilizers. (Fritz Haber, who was awarded the 1918 Nobel Prize in Chemistry for coinventing the process, was a conflicted technologist—the father of chemical warfare in World War I. His wife, also a chemist, killed herself in protest, in 1915.) What’s more, designs possess a moral direction, even if technologies can be put to different uses. You can hammer a nail with a pistol butt, although that’s not what it’s for; a spade can kill a man, but it’s better for digging. Therefore, the first commandment for technologists is: Design technologies to swell happiness. A corollary: Do not create technologies that might increase suffering and oppression, unless you’re very sure the technology will be properly regulated.
However, the regulation of new technologies presents a special problem. The future is unknowable, and any really revolutionary technology transforms what it means to be human and may threaten our survival or the survival of the species with whom we share the planet. Haber’s fertilizers fed the world’s people, but also fed algae in the sea: Fertilizer runoffs have created algae blooms, which poison fish. The problem of unpredictable effects is especially acute with some energy and all geoengineering technologies; with biotechnologies such as gene drives, which can force a genetic modification through an entire population in a few generations; and with artificial eggs and sperm, which might allow parents to augment their offspring with heritable traits.
One tool to regulate future technologies is the precautionary principle, which in its strongest form warns technologists to “first do no harm.” It’s an alluringly simple rule. But in an influential paper on the principle, the Harvard jurist Cass Sunstein cautions, “Taken in [its] strong form, the precautionary principle should be rejected … because it leads in no directions at all. The principle is literally paralyzing—forbidding inaction, stringent regulation, and everything in between.” A weaker version, adopted by the nations that attended the Earth Summit in Rio in 1992, stipulates, “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” The threshold for plausible harm is left worryingly undefined in most weak versions of the principle. Nonetheless, the weaker version suggests a second commandment for technologists: In regulating new technologies, balance costs and benefits, and work with your fellow citizens, your nation’s lawmakers, and the world’s diplomats to enact reasonable laws that limit the potential damage of a new technology, revising them as further evidence emerges. It’s good that Facebook invented a global social network, but the company must now cooperate with regulators to limit how malefactors can hack our heads, maddening populations and hijacking elections.
A final commandment helps technologists choose which technologies to pursue. In a complicated fashion, new technologies are not only the “orchestration of phenomena to our use” but are tools of scientific inquiry. Brian Arthur notes, “Science not only uses technology, it builds itself from technology.” High-throughput screening speeds drug discovery, but it also provides new understanding of cancer genomics. Deep learning may one day permit driverless cars, but it will also untangle the mysteries of brain development. Thus, the third commandment for technologists: The best technologies have utility but also provide fresh scientific insights. Prioritize those.
On my desk at work, I have a replica of the skull of La Ferrassie 1, the most complete Neanderthal skeleton ever found. The original belonged to an adult male who lived 50,000 to 70,000 years ago. He walked as upright as you or I, and had you met him on a Paleolithic hillside in what is now the Vézère Valley in France, he would have seemed hauntingly strange: obviously human but stockier, broad-nosed, and beetle-browed. In ways we can only dimly guess, his manners would have been strange too. Surely he could talk after a fashion, because he possessed the anatomy for speech and shared with us a gene, FOXP2, necessary for the development of language. But the archeological record tells us that he was also different from Homo sapiens. Around 70,000 years ago something switched on in the heads of modern humans—either a genetic mutation or a social adaptation; we don’t know which—that allowed us to design new stone tools that Neanderthals only clumsily imitated, as well as make cave art, flutes, wine, and, eventually, all the rest: the vault of King’s College Chapel, Cambridge; Darwin collecting his irrefutable facts; a cure for cancer; the mission to Mars.
Our Neanderthal cousins never evolved our powers of innovation. They died; we did not.