Internet Idea Books: Roundup, Review, and Response

What Technology Wants (Kevin Kelly, 2010) is a sweeping history of technology as a unified force that Kelly calls “the technium”. Kelly starts slowly, drawing ever-larger circles around human history, biological evolution, and the formation of planet Earth from starstuff. His scope, from the Big Bang to the Singularity, is unmatchable. But the purpose of this incredible breadth is not readily apparent, and doesn’t become so for the first half of the book, as Kelly talks about everything but technology. I advise the reader to sit back and enjoy the ride, even if it covers a lot of familiar ground.

In one of several chapters on evolution, Kelly argues that the tree of life is not random, but instead is constrained by chemistry, physics, geometry, and so on. The author points to many examples of convergent evolution, where the same “unlikely” anatomical feature evolved multiple times independently. For example, both bats and dolphins use echolocation, but their common ancestor did not. Kelly is careful to attribute this phenomenon to the constraints implicit in the system and not to supernatural intelligence. He argues that, in the broadest strokes, evolution is “preordained” even as the details are not.

Kelly begins the next chapter by noting that evolution itself was discovered by Alfred Russel Wallace independently of and concurrently with Charles Darwin. This becomes the segue into convergent invention and discovery: Kelly insists that the technium should be regarded as an extension of life, obeying most of its rules, although human decision replaces natural selection. Technology becomes an overpowering force that gives up adaptations about as willingly as animals devolve (which is to say, not very willingly).

The premise that technology extends life becomes central to Kelly’s predictions. He paints a grandiose picture of technologies that are as varied and awe-inspiring as the forms of life, encouraging ever-more opportunities in an accelerating dance of evolution. “Extrapolated, technology wants what life wants,” he claims, and lists the attributes technology aspires to. Generally speaking, Kelly predicts technological divergence, where your walls are screens and your furniture thinks, and the death of inert matter. Like the forms of life, technology will specialize into countless species and then become unnoticed, or even unnoticeable.

Much of what Kelly predicts has already happened for passive technologies. We don’t notice paper, government, roads, or agriculture. But I don’t think that information technology will achieve the same saturation. No matter how cheap an intelligent door becomes, a non-intelligent version will be cheaper still, and has inertia behind it. Kelly claims that such resistance can only delay the adoption of technology, not prevent it. Nevertheless, something about Kelly’s book disturbed me. It was wrong, I felt, but I couldn’t articulate why. So I read a trio of books that take a more cautious view of information and communication technologies. As I read, I asked of them: what has the internet taken from us, and how do we take it back?

You Are Not A Gadget: A Manifesto (Jaron Lanier, 2010) takes up the counterargument. As I leafed through it briefly, trying to decide if it would fit in this blog post, I was uncertain. Bold pronouncements in sub-headers mark every other page, including “Life on the Curved Surface of Moore’s Law,” “What Do You Do When the Techies are Crazier Than the Luddites?,” “Information Doesn’t Deserve to Be Free,” “Crashing Down Maslow’s Pyramid,” and “Do Edit Wars Have Casualties?”. But don’t let Lanier’s breezy, ranting style fool you. As I discovered on a proper reading, the book is remarkably coherent and thought-out.

To my balance-seeking delight, Lanier attacks many of the ideas Kelly espouses and even mentions him by name. Lanier was part of the same “merry band of idealists” in the 1980s but has since left the fold. In fact, he’s called What Technology Wants “the essential statement of the stuff I disagree with.” Both writers agree that we can control technology only to a limited extent. But where Kelly sees the march of technology as simply inevitable, Lanier describes a specific mechanism. “Lock-in” is the process by which once-fluid designs are cemented by often arbitrary and uninformed decisions. For example, the MIDI file format for music requires notes be expressed as discrete key-up, key-down events. It was intended for the “tile mosaic world of the keyboardist,” but became standard throughout music, despite its inability to convey the “watercolor world of the violin.” While Kelly placidly says that the details of technology can take one of many equally valid forms, Lanier urges us to choose these details to advance human expression while there is still time. Lock-in only appears inevitable in hindsight.
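
Since I write software for a living, I find Lanier’s MIDI example easiest to grasp as code. The sketch below is my own illustration, not something from the book; it uses plain Python and the real MIDI status bytes, but the function names are mine. The point it makes is Lanier’s: a note exists in MIDI only as a key-down/key-up pair with an integer pitch from 0 to 127, and anything continuous, like a violinist’s glide between pitches, has to be approximated with extra messages layered on top.

def note_on(pitch, velocity, channel=0):
    """Raw MIDI bytes for a key press: status byte 0x90, then 7-bit pitch and velocity."""
    return bytes([0x90 | channel, pitch & 0x7F, velocity & 0x7F])

def note_off(pitch, channel=0):
    """Raw MIDI bytes for releasing the same key."""
    return bytes([0x80 | channel, pitch & 0x7F, 0])

# Middle C, struck and released: the "tile mosaic world of the keyboardist."
# There is no single message for the "watercolor world of the violin"; a slide
# from C toward C-sharp would have to be faked with separate pitch-bend messages.
events = [note_on(60, 100), note_off(60)]
print([e.hex() for e in events])  # ['903c64', '803c00']

Every sequencer, synthesizer, and notation program that adopted this format inherited those two events as its atoms of music, which is exactly what Lanier means by lock-in.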

Lanier raises concerns that Web 2.0 locks in the human being, that the fluidity of human relationships is pigeonholed into Facebook’s dropdown menu of relationship statuses, just as musical notes became quantized by MIDI. By this process, “life is turned into a database,” under the false belief that “computers can presently represent human thought or human relationships.” Not only are humans getting locked in to this way of thinking, but Web 2.0 is becoming increasingly locked in to society. “The most effective young Facebook users,” he continues, are those who “tend their doppelgängers fastidiously.” The result is that “relationships take on the troubles of software engineering.” As a software engineer trying to maintain work/life balance, I find this nothing short of horrifying.

Lanier doesn’t stop with interpersonal relationships but expands to the economy and culture. He discusses the failure of the last decade to produce either significant innovations in music or financial institutions that support new types of artistic expression. (Weren’t we supposed to have live holographic party magicians or virtual reality concerts by now?) As a musician, Lanier derides how free culture turns his work into “just one dot in a vast pointillist spew of similar songs when it is robbed of its motivating context.” Cycling back to human relationships, he dismisses the possibility of “any connection between me and the mashers, or those who perceive the mashups. Empathy—connection—is then replaced by hive statistics.”

The Shallows: What the Internet is Doing to Our Brains (Nicholas Carr, 2010) weaves a number of subjects together into a coherent narrative. After a brief introduction, Carr gives a guided tour of neuroscience, specifically neuroplasticity, which is how the brain changes and rewires itself in response to stimuli. Although I was familiar with much of the material, I would see connections to his thesis poking through the facts, as Carr laid the groundwork for his arguments. He also goes through the history of “intellectual technology”, from writing to scrolls to books to the printing press to mass media. The author draws parallels, but also indicates when analog analogies are inappropriate.

Later chapters are more direct. He splits the internet’s effect on cognitive load and memory across chapters, but after he demonstrates how complex human memory is, how unlike static bits on a hard drive, the distinction seems somewhat arbitrary. His larger point about memory is that it is tied to cognition, rich in detail and associations, malleable over time. The human brain cannot be understood in terms of the Von Neumann distinction between code and data, processing and storage. Rather, every part of the brain influences every other part of the brain, often in still-mysterious ways.

Carr claims to walk a line between exuberance and criticism. While he does address the upsides of bountiful information, they seem to be apologetic afterthoughts. However, Carr does not just make vague or anecdotal claims (as Lanier often does) but rather draws on many psychological experiments throughout the book. Rather remarkably, science bears out what Transcendentalist philosophers have long known intuitively. For example, there are demonstrable benefits to memory gained by a walk in the park (Berman, Jonides, and Kaplan, “The Cognitive Benefits of Interacting with Nature,” Psychological Science 19, no. 12 (December 2008): 1207–12). Other experiments show the plasticity of the brain, the effect of information overload, and the costs and ineffectiveness of multitasking. They are supplemented by quotes from philosophers and media theorists, and by historical anecdotes. Nietzsche at his typewriter, anyone?

Reading The Shallows was a rather meta experience. It is a book about the demise of books, and of the brains willing and capable of reading them. I found myself staring into space after five minutes, daydreaming and slowly drifting away from the topic at hand. I also found that, like a smoker waiting for his fix, I had a certain nagging sensation that I should be checking email or getting online. Carr made a compelling case for me to fight that urge while providing an equally compelling read.

Writing in The Hedgehog Review, Chad Wellmon responds to Carr and similar critics. Wellmon positions himself between “alarmism and utopianism,” and points out that information overload is not a historically new phenomenon. “Beating Carr to the punch by more than two centuries,” writes Wellmon, “Immanuel Kant complained that such an overabundance of books encouraged people to ‘read a lot’ and ‘superficially’.” Neither are methods of dealing with information overload new: “eighteenth-century German readers had a range of technologies and methods at their disposal for dealing with the proliferation of print—dictionaries, bibliographies, reviews, note-taking, encyclopedias, marginalia, commonplace books, footnotes.”

Alone Together: Why We Expect More from Technology and Less from Each Other (Sherry Turkle, 2011) is a psychologist’s view of human interaction with technology. As the subtitle suggests, the book is divided into two parts. The first concerns itself with the “robotic moment,” from Tamagotchis to AIBO to androids, noting how we ascribe emotion, cognition, and friendship to inanimate objects incapable of these qualities. Turkle notes how quickly “better than nothing” becomes “better than anything,” as the convenience and lack of ambiguity provided by robots quickly become more valuable than human companionship.

The second part describes the opposite effect: as our devices allow us to be always connected, we have lost privacy, connection, and intimacy. I am reminded of the woman with bandaged thumbs who texts her housemate in the next room but thinks it would be rude to knock, or the man pushing his daughter on the swing while instant messaging the woman he “married” on Second Life. There is the high schooler who wants to abandon text messages for a rare conversation, not even in person but over a landline, and the college student so exhausted by keeping up with email that he wearily asks, “how long do I have to keep doing this?” Turkle weaves these and many other anecdotes into trends, supplementing them with psychoanalytic interpretation.

Alone Together is difficult to read at times because it accurately describes the sad state of our society. “I’d rather talk to a robot,” one of the men interviewed says. “Friends can be exhausting. The robot will always be there for me. And whenever I’m done, I can walk away.” Turkle points out, with what Kelly might call “human chauvinism,” that robots are capable only of imitating or performing emotions, and not actually feeling them. Even though Turkle is not constantly editorializing, there is always an undercurrent of loss and mild horror at what she describes, sharpened by how common and banal it has become.

If you identify with Carr’s stories about losing the ability to concentrate for an entire book, Turkle has written shorter pieces in The New York Times and given a TED talk. They come highly recommended. I’ll also plug Nathan Jurgenson’s essay The IRL Fetish, which offers an intriguing counterpoint. His thesis is that “nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection. The ease of digital distraction has made us appreciate solitude with a new intensity.”

Each of the critics comes from a different branch of psychology: Lanier representing humanism, Carr surveying cognitive psychology, and Turkle hailing from psychoanalysis. Armed with their ideas, I can return to and attempt to refute Kelly’s vision of global, ubiquitous intelligence. A world where every last object has artificial intelligence is a world where the tiniest thing is prone to have a bug, run out of batteries, or spy on you. These countless streams of information are impossible to assemble into a coherent whole, or, only slightly more optimistically, can be assembled into a coherent whole only with the help of more information technology. Even if the data exists, it may be impossible to integrate if it’s in semantically incompatible formats. If you thought Macintosh to Windows was a wide gap, try getting your ceiling fan to talk to your lamp. What Kelly describes and endorses is the end of general-purpose computing, which is also the end of general-purpose thinking.

Drawing on Kelly’s observations of the “Amish hackers,” I believe each individual should maintain a pool of technologies as small as possible, even as the global supply of technologies increases. We should use as few technologies as possible to create meaning, connection, and opportunity in our lives. That is, we should strive to do more with less. This applies not only to technology (Lanier) but to information (Carr) and to people (Turkle). What technologies and gadgets we retain should be concentrated, in the sense of gathering many functions and benefits into a small, consistent package. Similarly, we should work towards improving our relationships with our family and close friends, instead of accumulating hundreds of “friends” on Facebook.

Indeed, the technologies that are most destructive are those that present us with wide but shallow information, less about more. These include email, texting, and social networks. At any moment, something important, noteworthy, or even urgent can come to us through these media, but that rarely happens. Hence the tendency to “check” email or Facebook regularly, even though one is unlikely to find something of value. Internet media prizes aggregation, fleetingly funny images, and the “wisdom” of the crowd. We’ve traded steak for ground beef, fruit for candy. “More” and “better” are competing goals, and my preference ought to be clear.

To be fair, Kevin Kelly has gotten caught in the crossfire of a much larger philosophical debate. He doesn’t espouse the state of information technology of the last six months; his approach is more historical, concerned with the eventual rather than the immediate. As founding editor of Wired, Kelly is more the father of today’s Silicon Valley culture than the culture itself. But there are points raised by the other authors, and by my own judgement, that need to be addressed, even if Kelly did not spend much time endorsing them.

As I read these books, I questioned my own career path as a programmer and computer scientist. Ultimately, however, they renewed my passion for the field while providing me a framework on which to hang my dissatisfaction with a particular breed of startup. As the number of apps skyrockets and the space of online micro-media-sharing services becomes increasingly fleshed out, there is far more noise than signal. Many subscribe to the doctrine that technology can know everything and solve any problem; they are in the thrall of the social, the local, and the mobile. Lanier has choice words for these companies, where “rooms full of MIT PhD engineers [are] not seeking cancer cures or sources of safe drinking water for the underdeveloped world but schemes to send little digital pictures of teddy bears and dragons between adult members of social networks. At the end of the road of the pursuit of technological sophistication appears to lie a playhouse in which human kind regresses to nursery school.”

There’s a reason the quirky, made-up names of startups often sound like baby talk—they’re infantilizing. The immediate, unfiltered sharing of internet culture threatens our ability to grow as individuals, and therefore as family members, close friends, and lovers as well. We live in an era of immaturity.

But I have found that this is not a reason to abandon computer science but to embrace it. To know exactly what computers can and cannot do causes me to place more value on human beings. The constraints of a computer’s inner workings often reappear in the online culture they support; I am learning to recognize and work around these issues. The transition from computer science to philosophical media theory is quite natural.

Meanwhile, writing code is an excellent outlet for my drive to simplify. Good code is logically structured, semantically meaningful, and internally consistent. It does what it claims to do, and there is as little of it as possible. We should never force these qualities onto a human being, and yet that is exactly what we are doing.

One thing all the authors agree on is that information technology is not going away. Indeed, this essay was originally published on a blog, and you likely found it via social media. It’s Turkle who points out that clinical treatment for addiction is never about the addicting agent itself and always about the person’s relationship with it. If we have a technology addiction, getting rid of technology will be ineffective. Instead, as Lanier and Carr suggest, we can shape technology—and ourselves—in ways that promote value and nuance in human expression, cognition, and relationships.
