Archive for the ‘Media Theory’ Category

Type:Rider and Flappy Bird

I wouldn’t have thought typography could be the subject of a video game, but Type:Rider does just that. The levels are a tour of Western history from the Middle Ages onward, each corresponding to a different typeface in the context of its era. The Gothic type’s levels take cues from medieval churches, while the 1920s Futura feels like a modern art museum. The player’s avatar is a colon, two rolling dots bound together by some magnetic-seeming attraction. Gameplay consists of navigating through terrain that includes each letter of the alphabet rendered in that typeface. The letters are arranged to create interesting geometrical puzzles that make them memorable. The player also navigates through oversized versions of the printing technologies of the day, meanwhile collecting asterisks that unlock brief passages about the key figures and inventions of the time period.

There are a number of features that make Type:Rider stand out. It is highly polished, with beautiful visual environments and suitable thematic music. (Surprisingly, the typesetting of the informative passages is often found wanting; perhaps the English translation wasn’t proofed by the original European developers?) The controls are relatively expressive, in that with a few taps the skilled player can move the colon in one of many possible ways. The game has value: it took a team of experienced designers and developers time and money to create it, and the user must expend time and money to enjoy it. And yet the game carries a deeper message. Yes, it’s about typography, but mere type is the means by which we transfer knowledge; typography is the beatification of knowledge. Typography rewards diligence, attention to detail, graphical density, and knowledge of prior work. Typography is the wings on which intellectualism is borne.

Contrast this with the maddeningly weak and imprecise wings of Flappy Bird. Wired does a good job recounting the saga of the infamous iOS game and its creator, Dong Nguyen. Anyone can pick up the game and play it immediately, but playing well is exceedingly difficult: mastery and skill-building are sacrificed on the altar of ease-of-use. Play happens in all-too-brief bouts, which provide instant gratification with no time commitment. No depth of knowledge, skill, or artistic message is ever accumulated.

Papert distinguishes between children programming computers and computers programming children, and this is certainly the latter. Flappy Bird conditions one exact response, with no room for exploration or creativity. No justification is given as to why the world must be the way it so firmly is. More concretely, Flappy Bird is fake difficulty, riding on an artificially narrow method of control. It perniciously makes the smartphone, and the human, less smart.

Dong Nguyen made (and is likely still making) fifty thousand dollars a day off advertising shown to the game’s users. I highly doubt the users (largely teens) are spending anywhere close to that amount of money on the advertised products. Flappy Bird generates money but not wealth; like doomed financial products it is built on value that simply isn’t there. Sooner or later, this bubble must burst.

But despite the attention directed towards Flappy Bird, it is hardly unique. Only four of the top fifty grossing apps (as of when I checked) are not games (Pandora, Skype, and two dating apps). The rest are games, targeted at the under-20 crowd, driven by ads and in-app purchases (which include the removal of ads). The App Store has become a gigantic candy store for Western kids, and this has pushed adults and their fine intellectual cuisine off to the margins. The market has spoken: mass-produced, low-quality, ad-ridden software for entitled children is what sells, adults and society be damned.

I will quote (again) from Jaron Lanier, You Are Not A Gadget: “Rooms full of MIT PhD engineers [are] not seeking cancer cures or sources of safe drinking water for the underdeveloped world but schemes to send little digital pictures of teddy bears and dragons between adult members of social networks. At the end of the road of the pursuit of technological sophistication appears to lie a playhouse in which human kind regresses to nursery school.”

Even Type:Rider is not immune. It has the requisite Facebook and Twitter integration, though they are less prominent. It is also available as a Facebook game. What it offers, then, is not a completely pure solitary experience but rather a compromise given the nature of the market.

It is said that technology changes quickly and people change slowly, but the reality is more complex. People have shown a remarkable ability to adapt to new technologies without fundamentally altering how they think or what goals they have. Meanwhile, the face of technology changes, but many ideas remain timeless and fixed, old wine repackaged into new bottles. Furthermore, standards and protocols by which devices communicate with each other, once set, become incredibly difficult to change. We are in danger of not changing with technology, and then creating technology that prevents us from changing.


Nested Fractally

Recently I was struck by just how true an xkcd comic is. [xkcd comic; CC-BY-NC 2.5 Randall Munroe]

But there is more on the internet than crazy straw aficionados. (Understatement of the year.) There are groups for any interest you can think of, and subinterests within them that you can’t think of. You can buy anything. You can sell anything. You can, ostensibly, learn anything. You can read news articles from hundreds of sources in dozens of languages updated in nearly real time. You can browse the blogosphere, where most people get half a dozen readers on a good day, most of them algorithms. (Thank you, cherished human reader!) Nearly every book, song, and film published or distributed in another medium is available, legally or otherwise. There are more videos than you can watch of cats wearing clothes and people, well, not.

Even media theorist Clay Shirky acknowledges that there’s plenty of crud online. “If you’re going to look for where the change is happening,” he counters, “you have to look on the margins.” He goes on to talk about Git, the version control system used by computer programmers to track changes in their software, and how much potential it has for legal codes and budgets. When anyone can see exactly which lawmaker changed what line in the budget, we finally have accountability and transparency in government. Getting the bitterly partisan lawmakers in Washington to use the system is one problem. But is Git even the right system for them to use?
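The accountability scenario above maps naturally onto Git’s `blame` command, which annotates every line of a file with the commit and author that last touched it. Here is a minimal sketch; the file name, line items, and lawmaker names are all invented for illustration:

```shell
# Hypothetical sketch: line-level accountability for a budget file.
cd "$(mktemp -d)"
git init -q budget-demo
cd budget-demo

# "Rep. Smith" commits the original budget.
git config user.name "Rep. Smith"
git config user.email "smith@example.gov"
printf 'Line item 1: $100\nLine item 2: $200\n' > budget.txt
git add budget.txt
git commit -q -m "Initial budget"

# "Rep. Jones" raises line item 2 and commits the change.
git config user.name "Rep. Jones"
git config user.email "jones@example.gov"
printf 'Line item 1: $100\nLine item 2: $250\n' > budget.txt
git commit -q -am "Increase line item 2"

# Annotate each line with the commit and author that last changed it.
git blame budget.txt
```

In the `blame` output, line item 2 is attributed to Rep. Jones while line item 1 still points at Rep. Smith, along with the commit hash and timestamp of each change. Whether that level of granularity is what legislators actually need is, of course, the open question.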

Shirky advocates for taking the tools programmers have developed and repurposing them for other text documents. But Git is highly specialized for code, and has many assumptions about its users baked in. First, we need to distinguish between Git and Github. Git is an open-source version control system designed for the Linux operating system (and its cousin, Mac OS X). It is primarily controlled from the command line, and is quite difficult for people without the right background to use. Typical Git commands look like this:

git add .
git diff --staged > myfile.diff
git commit -m "Commit message"
git push origin master

There are a number of arcane details that can go wrong, prompting error messages inscrutable to laypeople. The UNIX command line is extremely powerful but doesn’t provide any indication as to how to use it, and is unforgiving of mistakes. Most lawyers and government employees are familiar only with the Windows graphical user interface. Git for Windows exists, but is not supported. You can read the Illustrated Guide to Git on Windows (not updated since 2009) and decide for yourself whether it’s something ordinary people can use. The lack of customer support and quality assurance is a deal breaker for financial, legal, and classified documents.

Enter Github. A San Francisco-based company founded in 2008, Github provides web hosting and graphical user interfaces for Git. Only through Github’s software does Git become usable to the general population. Tellingly, both the links in the last paragraph go to pages they host, since their business revolves around people using Git. As a programmer, their service allows me to work on code collaboratively with friends, share it with the rest of the world, track revisions, and have a backup “in the cloud.” And for that, it works great.

However, Github is too young and too unstable to trust with all our legal documents. In March, a hacker was able to add a file to a repository he did not have permission to modify. And just a few weeks ago (September 2012), an issue allowed 16 private repositories to be accessed by anyone. What if these had contained confidential financial, diplomatic, or military information? Github is not ready to handle government data. In fact, trusting any single corporation with government IT is a bad idea. (Microsoft is a necessary evil.) Whatever computer system handles government documents needs to be completely secure, bug-free, reliable, and usable, and have all the necessary functions. No wonder the government lags behind in technology adoption!

Don’t get me wrong, I personally love Github. But both Git and Github are designed for the open source movement, which is at least as complicated as the silly straw movement. Government will always have some closed source (and not silly) components. (Do you want the nuclear launch codes on WikiLeaks?) Don’t put software into use doing what it wasn’t designed to do. It won’t suit your needs, and it can easily be counterproductive. It may not even do the job it was designed for in the first place.

A few weeks ago, the Khan Academy released a new suite of computer science curricula. These are small samples of code that learners can interact with in real time, seeing both code and graphical output. At the time, I thought it was a step in the right direction. And, I guess, it is. But this week Bret Victor published an essay titled Learnable Programming that shows just how ineffective and confusing the KA system is, saying that it “ignore[s] decades of learning about learning.” His work was cited as an inspiration for the KA system, but he responds that what they created is confusing and obstructs learning. He provides glimpses of a programming language and environment designed specifically to teach “a way of thinking, not a rote skill.”

Midway through, he talks about Hypercard, a piece of software from the mid-1980s. Its salient feature was that “any user can remix their software with copy and paste, thereby subtly transitioning from user to creator, and often eventually from creator to programmer.” He explains that it is “seen by some as ‘what the web should have been’.” Before the web solidified in the 90s thanks to the work of people like Tim Berners-Lee, there were many different forms it could have taken. But that age of endless possibilities is over. The core structure of the web has hardened, probably for the duration of human civilization. Today, we’re stuck making “overlay networks” that use the Internet Protocol in ways it was never intended for, or standardizing a new system on a single website. On Facebook, it is possible to recombine and share content at the click of a button. But this isn’t built in to the fabric of the web itself…yet.

In the last two or three decades, computer scientists were given nearly unprecedented power to shape the technologies that now underlie society. The problem is that these people have no training or preparation in how to design for society. Computer scientists have little schooling in psychology (what they do have extends to single users or small groups, part of a field known as “Human-Computer Interaction”). They have no training in sociology, anthropology, or media theory. They had no idea how the saturation of cell phones would affect social interaction, or how the web would give rise to “hacktivist” groups like Anonymous. They didn’t know (how could anyone have known?) that the barrage of email and tweets would make us less able to have conversations. And it was beyond anyone’s imagination that connecting a few government and university computers would give rise to an online world consisting largely of advertising and vice. Which brings us full circle, back to xkcd. [xkcd comic; CC-BY-NC Randall Munroe]

Truly, there is no bottom.

The Pedestrian

“To enter out into that silence that was the city at eight o’clock of a misty evening in November, to put your feet upon that buckling concrete walk, to step over grassy seams and make your way, hands in pockets, through the silences, that was what Mr. Leonard Mead most dearly loved to do.” So begins Ray Bradbury’s 1951 short story, The Pedestrian. I recommend you print it out, read it, and then grab six colors of highlighter and spend 20 minutes annotating it like you supposedly learned how to in high school. Or that’s what I did, anyway, and it worked well for me. Read it on your screen, at the very least.

I remembered the story’s plot from elementary school, and it percolated out of my subconscious today, no doubt prompted by the day’s barrage of tweets against Twitter. But I found that it offered a much more mature and poetic reading, while remaining remarkably relevant more than sixty years after it was written.

There are a number of recurring images: light and darkness, nature, stone and metal, death, dry riverbeds, insects, cold, smells. They don’t form a neat set of binaries but instead a web of intricate relationships. Yes, nature images are positive and reflect Mead’s independence, and they’re contrasted with the stone houses and streets. But the car’s metallic images are more lifeless still. Bradbury combines the images; tombs tie together death and stone. Light subdivides into electric light and moonlight, and both are used positively and negatively. Then there’s the line, “the light held him fixed, like a museum specimen, needle thrust through his chest.” It combines light, insects, metal, and death into one powerfully visceral image.

But alas I’m rusty writing poetry essays, or perhaps I’m using that as an excuse not to write one. The simple reading of the story, the one I remember from grade school, is that in the future a man walking alone is so unusual it arouses police suspicion. There was something about television, ah yes: “The tombs, ill-lit by television light, where the people sat like the dead, the gray or multicolored lights touching their faces, but never really touching them.” And the man is arrested for not taking part.

Actually, he’s institutionalized for “regressive tendencies”. Not participating in technology, in Bradbury’s 2053, is a sign of insanity. The autonomous police car itself is technology manifest, with its “phonograph voice,” its “radio throat,” its computation “dropping card by punch-slotted card under electric eyes”. These technologies were sophisticated in 1951, but today the effect is to make the car seem antiquated and incompetent. Then there’s the car’s back seat, “which was a little cell, a little black jail with bars. It smelled of riveted steel. It smelled of harsh antiseptic; it smelled too clean and hard and metallic. There was nothing soft there.” (Can’t you just hear Bradbury’s voice narrating that?) It’s a world where technology has taken over, and forced humanity into tombs and prisons.

But back to the idea of not watching TV seen as signifying insanity – that’s ridiculous science fiction, right? Wrong. Psychologists have started to see people who are not on Facebook as suspicious. They’ve even pointed out that certain psychopaths had minimal online presences, trivializing a rather complex condition. By 2053, we may well live in a world scarily similar to that of Bradbury’s imagining, where everyone is glued to a screen in an extrovert’s playground. Mead, a bachelor, is not lonely in solitude, but his innocuous behavior is seen as deviant.

But there’s one other level here (at least). As Mead passes each house with occupants tuned in to the television, he asks mockingly, “where are the cowboys rushing, and do I see the United States Cavalry over the next hill to the rescue?” He isn’t one for exciting adventures. Instead, he likes to “imagine himself upon the center of a plain, a wintry, windless Arizona desert with no house in a thousand miles, and only dry river beds, the streets, for company.” Television and the internet hype and distort and exaggerate whatever passes through them. These media engage in brinksmanship to find the most shocking or compelling story. Mead’s walks are a return to the plain, the ordinary, the simple that is not simplistic — the pedestrian.

Writing in 1973, Umberto Eco describes “hyperreality,” where “the ‘completely real’ becomes identified with the ‘completely fake’. Absolute unreality is offered as a real presence.” The spectrum of realness is bent back on itself into a strange loop, circumscribing and barricading and caging the pedestrian, which had been free to wander aimlessly.

Ours is the age of manufactured excitement, with the 24-hour news cycle, reality television, and social media. The ordinary, the long-form, is boring. Instead, we smash everything into tiny pieces and let the data scientists reassemble pictures of what they claim to be human beings. Lost is that ineffable essence of humanity that comes from context. The sound of a friend’s laughter, the pat on the back, these transient and ordinary experiences are no longer good enough in a world where computers define humanity.