Archive for the ‘Internet’ Category

As We May Have Thought

Vannevar Bush wanted to build machines that made people smarter. His 1945 paper, As We May Think, described analog computers that captured and stored information, the earliest vision of today’s internet. All of Bush’s hopes for chemical photography have been surpassed by today’s digital cameras, and digital storage media are more compact than the most hopeful predictions for microfilm. He also predicted dictation, and though today’s software does a passable if imperfect job, it has not reached the ubiquity Bush foresaw. He was wrong about the form factor of cameras, too, predicting a walnut-sized lens worn like a miner’s lamp. The device he describes matches Google Glass, and no other product:

One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together.

As for selecting information from the ensuing gigantic database, Bush posits the “Memex”, a desk with displays built into it. The Memex is personal, allowing users to connect pieces of information into trails for later examination. Personal and built on connections, it is organized much like the mind.

The late Douglas Engelbart expanded on the purely hypothetical Memex with NLS, short for oNLine System. In “the mother of all demos”, he showed how users could traverse and manipulate trees of data, with rich transclusion of content. Unlike the Memex, NLS allowed real-time sharing by way of video chat. Like the Memex, NLS was primarily text, and its user-facing component was the size of a desk.

And yet … Bush and Engelbart’s systems are not psychologically or sociologically realistic. Though Bush was writing in 1945, his vision seems Victorian: a facade of proper intellectualism with no grounding in the less dapper side of human nature. One can hardly imagine multimedia beyond classical music and Old Master paintings emanating from the Memex. Bush used the effectiveness of Turkish bows in the Crusades as an example of what one could research on a Memex. He missed the target. The Memex and NLS were designed for a race of hyper-rational superhumans that does not exist.

The fictitious enlightened user would emphasize restraint, but today’s technology can, for all intents and purposes, do anything. The ability to do anything is less productive and more dangerous than it sounds. Intellectually, such a system encourages slapdash and incomplete thought. It does not force you to abandon weaker ways of thinking; it gives you no guidance towards what will work, what will work well, and what will not work at all. Sociologically, the availability of information on a scale beyond what Bush could have dreamed hasn’t made us an enlightened society. Having correct information about (for example) evolution readily available online has not made a dent in the number of people who read Genesis literally. And it gets worse.

Moore’s law may be the worst thing to happen to information technology. With computing so cheap and so ubiquitous, with the ability to do anything, we have overshot the island of scarcity inhabited by Bush and Engelbart and landed in the territory of social media, entertainment, and vice. The universal systems for the betterment of humanity have fallen to fragmented competitors in an open market. The emphasis on mobile these last six years has led to apps of reduced capability, used in bursts, on small screens with weak interaction paradigms. This is what happens when there’s more computing power in your pocket than Neil Armstrong took to the moon: we stop going to the moon.

Recreational computation is here to stay, but we may yet partially reclaim the medium. Clay Shirky is fond of pointing out that erotic novels appeared centuries before scientific journals. Analogously, we should not be deterred by the initial explosion of inanity accompanying the birth of a new, more open medium.

I can only hazard a guess as to how this can be done for recreational computing: teach the internet to forget. (h/t AJ Keen, Digital Vertigo) One’s entire life should not be online (contrary to Facebook’s Timeline – it’s always hard to change ideas when corporations are profiting from them). A good social website would limit the ways in which content can be produced and shared, in an attempt to promote quality over quantity. Snapchat is a promising experiment in this regard. There’s been talk of having links decay and die over time, but this seems like a patch on not having proper transclusion in the first place.

As for programming, though, the future is constrained, even ascetic. If Python embodies the ability to do anything, then the future is Haskell, the most widely-used [citation needed] functional programming language.

Functional programming is a more abstract way of programming than most traditional languages, which use the imperative paradigm. If I had to describe the difference to a layperson, it would be this: imperative programming is like prose, and functional programming is like poetry. In imperative programming, it’s easy to tack on one more thing; imperative code likes to grow. Functional code demands revision, an absolute clarity of ideas that must be reforged for each new feature. In functional languages, there are fewer tools available, so one needs to be familiar with most of them to be proficient. Needless to say, imperative languages predominate outside the ivory tower. Which is a shame, because imperative languages blithely let you think anything.
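
To make the contrast concrete, here is a minimal Haskell sketch (my own toy example, not drawn from any real codebase):

-- Sum the squares of the even numbers in a list.
-- The whole idea is stated at once: keep the evens, square them, add them up.
sumSquaredEvens :: [Int] -> Int
sumSquaredEvens = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumSquaredEvens [1 .. 10])   -- prints 220

There is no loop to quietly extend here; a new requirement means restating the whole pipeline, which is exactly the revision that poetry demands.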

The problem with thinking anything is similar to doing anything: there’s no structure. But if we can’t think anything, then some good ideas may remain unthought. There is a tension between thinking only good ideas and thinking any idea. In theory at least, this is the relationship between industry and academia. While companies want to produce a product quickly, academia has developed programming paradigms that are harder to use in the short term but create more manageable code over time. These are all various ways of imposing constraints, of moving away from the ability to do anything. Maybe then we’ll get something done.

A week without social media

The Jewish holiday of Passover requires adherents not to consume or possess any leaven during its eight days. Leaven symbolically represents excess: being puffed up and arrogant. As I became increasingly aware of the time I was wasting on social media, I hit upon the mechanism that sites like these use to identify you: cookies. Which are leaven, right? So I decided to spend Passover without Facebook or Twitter.

It started small: I would catch myself staring at my phone where the apps had been, idly looking to kill a few moments. Instead of needing to be on top of my newsfeeds, I found myself putting my phone away more readily and engaging in what I was doing. I began to notice details in the objects and buildings around me, but human contact proved elusive. Without sifting through other people’s half-baked ideas on an hourly basis, I felt like I had a tremendous amount of free time on my hands. The time I would have spent clicking “refresh” became time I spent walking around campus lost in thought. Unplugging led to a good deal of, not necessarily loneliness, but solitude.

Sometimes details matter, and a focus on trends leads to dangerous generalizations. The problem with social media is that it’s all details, with no underlying story or pattern to cling to. Our brains are taxed by trying to glue these shards together, but no two come from the same whole. Occasionally we manage to create a passable mosaic.

After all this introspection, I came to be more at peace with my religion. There are things that we know to be true, like evolution, and we should promote them. And there are things we know to be patently false, like homeopathy, and we should decry them. But a lot of things fall in the middle, not least of which is the human condition. The most visible parts of religion espouse certainty, but the Passover seder is really about uncertainty. It invites questioning and varying interpretations. Judaism embraces ambiguity and even contradiction. (Especially contradiction.) This is a welcome break from a world of pithy opinions crammed into 140 characters and relationships pigeonholed into a dropdown menu. It’s complicated, indeed.

Is slavery a presence or an absence? On one hand, I still feel that social media distracts us as a society from what really matters. It fragments our thought processes, and our relationships, by prizing efficiency over effort. On the other hand, technology connects and empowers us; without it we regress. Is social media a way to break down the interpersonal walls on campus, or a cause of their existence? Does the spread of information strengthen the spread of knowledge, or short-circuit it?

I have always felt that long-form writing helps me put together my thoughts on factual issues. In writing this piece, I realized that the same is true in the personal realm. True connection means giving people the freedom to think independently and then express themselves to others. Slavery is to uncritically compress and post whatever comes to mind.

Nested Fractally

Recently I was struck by just how true an xkcd comic is:

[Comic: xkcd.com/1095, “Crazy Straws”, CC-BY-NC 2.5 Randall Munroe]

But there is more on the internet than crazy straw aficionados. (Understatement of the year.) There are groups for any interest you can think of, and subinterests within them that you can’t think of. You can buy anything. You can sell anything. You can, ostensibly, learn anything. You can read news articles from hundreds of sources in dozens of languages updated in nearly real time. You can browse the blogosphere, where most people get half a dozen readers on a good day, most of them algorithms. (Thank you, cherished human reader!) Nearly every book, song, and film published or distributed in another medium is available, legally or otherwise. There are more videos than you can watch of cats wearing clothes and people, well, not.

Even media theorist Clay Shirky acknowledges that there’s plenty of crud online. “If you’re going to look for where the change is happening,” he counters, “you have to look on the margins.” He goes on to talk about Git, the version control system computer programmers use to track changes in their software, and its potential for legal codes and budgets. When citizens can see exactly which lawmaker changed which line in the budget, we finally have accountability and transparency in government. Getting the bitterly partisan lawmakers in Washington to use the system is one problem. But is Git even the right system for them to use?
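
The core capability Shirky wants does already exist. As a sketch, suppose a budget were kept as a plain-text file in a Git repository (budget.txt is my hypothetical name); two standard commands then expose exactly who changed what:

git blame budget.txt    # annotate every line with its last author, commit, and date
git log -p budget.txt   # the file's full change history, diff by diff

The trouble is everything wrapped around those two commands.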

Shirky advocates taking the tools programmers have developed and repurposing them for other text documents. But Git is highly specialized for code, and has many assumptions about its users baked in. First, we need to distinguish between Git and Github. Git is an open-source version control system designed for the Linux operating system (and its cousin, Mac OS X). It is primarily controlled from the command line, and it is quite difficult for people without the right background to use. Typical Git commands look like this:

git add .                          # stage every change in the current directory
git diff --staged > myfile.diff    # save a record of the staged changes
git commit -m "Commit message"     # record the changes in the local history
git push origin master             # publish them to the shared repository

There are a number of arcane details that can go wrong, prompting error messages inscrutable to laypeople. The UNIX command line is extremely powerful but doesn’t provide any indication as to how to use it, and is unforgiving of mistakes. Most lawyers and government employees are familiar only with the Windows graphical user interface. Git for Windows exists, but is not supported. You can read the Illustrated Guide to Git on Windows (not updated since 2009) and decide for yourself whether it’s something ordinary people can use. The lack of customer support and quality assurance is a deal breaker for financial, legal, and classified documents.

Enter Github. A San Francisco-based company founded in 2008, Github provides web hosting and graphical user interfaces for Git. Only through Github’s software does Git become usable by the general population. Tellingly, both the links in the last paragraph go to pages Github hosts, since its business revolves around people using Git. As a programmer, I use their service to work on code collaboratively with friends, share it with the rest of the world, track revisions, and keep a backup “in the cloud.” And for that, it works great.

However, Github is too young and too unstable to trust with all our legal documents. In March, a hacker was able to add a file to a repository he did not have permission to modify. And just a few weeks ago (September 2012), an issue allowed 16 private repositories to be accessed by anyone. What if these had contained confidential financial, diplomatic, or military information? Github is not ready to handle government data. In fact, trusting any single corporation with government IT is a bad idea. (Microsoft is a necessary evil.) Whatever computer system handles government documents needs to be completely secure, bug-free, reliable, and usable, and it needs to have all the necessary functions. No wonder the government lags behind in technology adoption!

Don’t get me wrong, I personally love Github. But both Git and Github are designed for the open source movement, which is at least as complicated as the crazy straw movement. Government will always have some closed source (and not silly) components. (Do you want the nuclear launch codes on WikiLeaks?) Don’t press software into service doing what it wasn’t designed to do. It won’t suit your needs, and it can easily be counterproductive. It may not even do the job it was designed for in the first place.

A few weeks ago, the Khan Academy released a new suite of computer science curricula: small samples of code that learners can interact with in real time, seeing both code and graphical output. At the time, I thought it was a step in the right direction. And, I guess, it is. But this week Bret Victor published an essay titled Learnable Programming that shows just how ineffective and confusing the KA system is, saying that it “ignore[s] decades of learning about learning.” His work was cited as an inspiration for the KA system, but he responds that what they created obstructs learning. He provides glimpses of a programming language and environment designed specifically to teach “a way of thinking, not a rote skill.”

Midway through, he talks about HyperCard, a piece of software from the late 1980s. Its salient feature was that “any user can remix their software with copy and paste, thereby subtly transitioning from user to creator, and often eventually from creator to programmer.” He explains that it is “seen by some as ‘what the web should have been’.” Before the web solidified in the 90s thanks to the work of people like Tim Berners-Lee, there were many different forms it could have taken. But that age of endless possibilities is over. The core structure of the web has hardened, probably for the duration of human civilization. Today, we’re stuck making “overlay networks” that use the Internet Protocol in ways it was never intended for, or standardizing a new system on a single website. On Facebook, it is possible to recombine and share content at the click of a button. But this isn’t built into the fabric of the web itself…yet.

In the last two or three decades, computer scientists were given nearly unprecedented power to shape the technologies that now underlie society. The problem is that these people have no training or preparation in how to design for society. Computer scientists have little schooling in psychology (what they do have extends to single users or small groups, part of a field known as “human-computer interaction”). They have no training in sociology, anthropology, or media theory. They had no idea how the saturation of cell phones would affect social interaction, or how the web would give rise to “hacktivist” groups like Anonymous. They didn’t know (how could anyone have known?) that the barrage of email and tweets would make us less able to have conversations. And it was beyond anyone’s imagination that connecting a few government and university computers would give rise to an online world consisting largely of advertising and vice. Which brings us full circle, back to xkcd:

[Comic: http://xkcd.com/624/, CC-BY-NC Randall Munroe]

Truly, there is no bottom.

A Response to Khan Academy Computer Science

Today the Khan Academy launched new computer science content. The centerpiece is an interactive editing environment using Processing and JavaScript, with real-time updating and number scrubbing in the style of Bret Victor. Anyone can make their own drawings or animations, and there’s the obligatory social component. The system is remarkably lightweight and does a good job of getting pesky details (include statements, integer arithmetic, cryptic error messages) out of the way. Pop-up sliders allow you to change numbers in the program and see the effect immediately (although at far too coarse a grain). The result is something like a children’s science museum crossed with Minecraft. Six-year-old me would have had a blast with this. I think that interactivity finally takes advantage of the medium the Khan Academy is working in.

Well, mostly. Each of the stock programs has a video lecture associated with it. A few of these are from the archives, but most were newly recorded by KA intern Jessica Liu. (Props on choosing a woman to fight gender stereotypes.) The tired claim that a confused student can pause the video has more weight here, since instead of puzzling it out on their own, the student can edit the code and see the changes. When the student resumes the video, the changes are undone.

But the problem is that the video is still (still!) a lecture. Liu works through what’s going on for the student, laying it out in plain sight, which is ironically the worst place to put it. Consider the example of a bouncing ball. Liu dives right into the code. It’s the best she can do, because for all the student’s interactivity with the program, she has no interactivity with the student. She has no way to know what he is thinking, what other programs he has looked at, what his math background is, or whether he’s about to click the back button.

In a traditional school, a teacher might use the program as follows. (Disclaimer: I’m not a teacher.) She waits until she’s made a few passes over parabolas and quadratics in other ways. Then she sits the class in front of the animation with no code. (Remember when teachers placed opaque paper on the overhead projector? Same idea. Resize the browser window.) She’ll ask the class to observe the program. “The ball goes up and down,” someone will say. “Yes – and does it always move at the same speed?” When the class isn’t sure, she can ask what information they’d need to know to find out. “What else do you notice?” “The green wall is behind the ball and always there,” another student will say. “It is. Can anyone tell us why?” “Because the wall is under the ball. Like those glass plates you showed us,” says another student recalling a previous classroom demonstration. “What else?” A usually quiet student speaks up: “The shadow gets larger and lighter as the ball goes up.” “Very good Johnny! Does everybody see that?”

After the class has found all the subtleties that the teacher has in mind, she’ll have them pair up and, on paper, come up with what they think the program is. Maybe each group will be assigned one aspect previously identified. After a few minutes, the class will reconvene and the teacher will help collate their responses into typed-up code. She may feign misunderstanding so the students clarify their ideas and highlight misconceptions for the class. And then she runs the code, which probably doesn’t work, and hilarity ensues while the class debugs it. Then, and only then, is the Khan Academy model held up as an example.

This method isn’t something the Khan Academy is choosing not to do – it’s something they simply cannot do. In a traditional classroom, there is continuity from lesson to lesson, so students aren’t confounded by terms like “draw loop”. The group dynamic means everyone is focused and not wasting each other’s time (well, in theory). There is a desire for students to show what they know to the class. And the teacher can intervene at any point if something goes wrong. Because we can’t expect every student to have every idea, collaboration is vital; it produces code that the students can own, instead of The Right Answer™. With a recorded lecture, that chance to collaborate is missed.

Yes, there will be the solitary deep thinker who will take this resource and teach himself (almost always himself) to think mathematically and to map the text to the picture. But this has been true in every generation. The technology removes some small but critical barriers to learning, perhaps enough to qualify as innovative. (Perhaps.) But the pedagogy lags behind. This is because high-fidelity media works far better interactively than when recorded. (More on that later.)

Takeaway: the new Khan Academy computer science content may inspire the next generation, but don’t expect it alone to build crack coders or mathematicians.

Anyone teaching students in person in a traditional classroom may use this lesson plan. If you’re a publishing company or an online resource, email me for approval (see the About page).

UPDATE: In late September, Bret Victor responded with an essay titled Learnable Programming. It attacks the claim that KA CS is as innovative as we all thought it was.

A chat with Jim Waldo

Jim Waldo, Chief Technology Officer at Harvard University, gave a talk at a private corporate event I attended earlier this week. My company builds some of the boxes that help make the internet work. Although there’s a lot going on from a technical perspective, it’s not very glamorous to the general public. Jim, I suspect, was brought in, in part, to tell us why we do what we do.

Waldo’s talk focused on new online education ventures, including Harvard and MIT’s edX and the Stanford-born Coursera and Udacity. At first he seemed to be one of many uncritical voices in the movement. Responding to the criticism that only 20,000 of the 150,000 students who enrolled in Stanford’s AI course finished it, he said “then half the people who have finished an AI course finished that one.” He also endorsed the flipped classroom, where students watch lectures as online videos and then do homework in class. (He poked fun at the oxymoron.) Afterwards I asked him a question along the lines of “the distribution methods have changed, but the content itself has not,” saying that “worksheets and lectures” are the same as “lectures and worksheets”. He responded by saying that the Khan Academy differed from that approach.

But when I caught up with Waldo during the lunch break, he backpedaled on that point. What differentiated Khan from traditional lectures, he said, was that Khan’s lessons were broken up into 10-minute chunks, interspersed with questions. He admitted that it wasn’t a huge distinction. When he asked me what I would do differently, I told him that I didn’t have the educator’s background to do something completely different, but I did have the same engineer’s background that Sal Khan has (ostensibly). Rather than run with an idea that seems to work and make thousands of videos, I told Jim, I would have picked out a particular course (say, high school physics) and done and redone it four or five times.

“You’ve picked up on the difference between Coursera and edX,” said Jim. I noticed some east coast/west coast rivalry. Places like the Khan Academy, he explained, tend to think they have The Answer. By contrast, what Harvard was doing was an experiment. “Anyone who says they have the answer in this field is lying. Possibly to themselves,” he said. “We don’t know what we’re doing, and that’s exciting.”

To his credit, Waldo made mention of this in his talk as well. He explained that no online school can beat the residential experience, but they can come a lot closer than they currently do. Rather than take an educator’s reasoned approach, he said that he’s open to trying anything, mining big data, and figuring out which techniques work using statistics. When I pressed him about the validity of conventional assessment, he replied that “the first set of statistics will tell us what statistics we should have been collecting.” He was even open to the possibility that it would all fall through in five years, but didn’t think that would happen.

Jim’s intellectual humility was refreshing. He didn’t give in to hype in the ways other online learning tools have, particularly the Khan Academy. For the good of humanity, I hope that he can find a secret sauce (or secret sauces) buried in the data he’ll collect over the coming years. I think he stands a better chance than the Khan Academy of bringing quality education, which creates deep understanding rather than blind memorization, to all.

Whether I think he’ll actually pull it off is a topic for another post. In the meantime, we agreed that even if we don’t have the right kind of content to put online yet, building the internet is a worthwhile goal. Maybe one day we’ll be able to use it for something more noble than LOLcats.

(Note: None of Waldo’s quotes should be taken as exactly verbatim, but they are close. Many were not recorded, except by me tweeting them, and I wrote this post a few days after it happened. I have tried to present his view accurately and fairly. Just trying to be careful here.)

Internet Idea Books: Roundup, Review, and Response

What Technology Wants (Kevin Kelly, 2010) is a sweeping history of technology as a unified force, which Kelly calls “the technium”. Kelly starts slowly, drawing ever larger circles around human history, biological evolution, and the formation of planet earth from starstuff. His scope, from the Big Bang to the Singularity, is unmatchable. But the purpose of this incredible breadth is not readily apparent, and isn’t for the first half of the book, as Kelly talks about everything but technology. I advise the reader to sit back and enjoy the ride, even if it covers a lot of familiar ground.

In one of several chapters on evolution, Kelly argues that the tree of life is not random, but instead is constrained by chemistry, physics, geometry, and so on. He points to many examples of convergent evolution, where the same “unlikely” anatomical feature evolved multiple times independently. For example, both bats and dolphins use echolocation, but their common ancestor did not. Kelly is careful to attribute this phenomenon to the constraints implicit in the system, not to supernatural intelligence. He argues that, in the broadest strokes, evolution is “preordained” even as the details are not.

Kelly begins the next chapter by noting that evolution itself was discovered by Alfred Russel Wallace independently of, and concurrently with, Charles Darwin. This becomes his segue into convergent invention and discovery; Kelly insists that the technium should be regarded as an extension of life, obeying most of its rules, although human decision replaces natural selection. Technology becomes an overpowering force that loses adaptations about as willingly as animals devolve (which is to say, not very).

The premise that technology extends life becomes central to Kelly’s predictions. He paints a grandiose picture of technologies as varied and awe-inspiring as the forms of life, encouraging ever more opportunities in an accelerating dance of evolution. “Extrapolated, technology wants what life wants,” he claims, and lists the attributes technology aspires to. Generally speaking, Kelly predicts technological divergence, where your walls are screens and your furniture thinks, and the death of inert matter. Like the forms of life, technology will specialize into countless species and then become unnoticed, or even unnoticeable.

Much of what Kelly predicts has already happened for passive technologies. We don’t notice paper, government, roads, or agriculture. But I don’t think that information technology will achieve the same saturation. No matter how cheap an intelligent door becomes, a non-intelligent version will be cheaper still, and it has inertia behind it. Kelly claims that such resistance can only delay the adoption of technology, not prevent it. Nevertheless, something about Kelly’s book disturbed me. It was wrong, I felt, but I couldn’t articulate why. So I read a trio of books that take a more cautious view of information and communication technologies. As I read, I asked of them: what has the internet taken from us, and how do we take it back?

How to save the world

The end of World War I was a bad time to be an optimist. It wasn’t that millions of young men had died or that western Europe had been transfigured into a hellish bombed-out landscape, although that was certainly true. It was the inescapable philosophical consideration that civilization had done this to itself. The “progress” of the industrial revolution and German unification led inexorably to total war. Civilization itself was fundamentally flawed and unsustainable; the only alternative was to admit Rousseau was right and go back to the trees.

Of course, that’s not what happened, and twenty years later they were at it again. The technology changed dramatically, but it didn’t change the fact that people were still killing each other, only how they did it. The changes that mattered were the social institutions built afterwards. Instead of the outrageous reparations in the Treaty of Versailles, there was the conciliatory Marshall Plan. Instead of the League of Nations, there was the United Nations. It wasn’t technological improvements that saved lives and improved the quality of living after the war. It was the people, with their resiliency, their forgiveness, and their intent not to make the same mistake twice.

We now find ourselves, once again, on the brink of destruction. It is not destruction by military means, but by economic and environmental ones. Natural resources are being depleted faster than they can be renewed, if they can be renewed at all. Industrialization has spread concrete, steel, and chemicals across previously untouched land. The established political institutions are being challenged by forces as diverse as the Arab Spring and the Occupy movement. The economy is still largely in shambles. And then there’s the small matter of climate change. And so on. We’ve heard it all before. At TED 2012, this grim view was presented by Paul Gilding (talk, follow-up blog post). He’s pretty blunt about it: the earth is full.

Around a third of the world lives on less than two dollars a day. These people have dramatically different cultures, education, living conditions, and access to technology from the typical American or European. Do you honestly think that they’re the ones who are going to fix the problems? That people who are illiterate, innumerate, and don’t know where their next meal is coming from are going to fix climate change?

Depending on your answer, I have two different responses. I’ll give both of them, but you might want to think about it first.