Archive for the ‘Computer Science’ Category

Using Simulation

Sherry Turkle’s Simulation and Its Discontents takes a disapproving stance towards software that mimics the real world. She surveys many fields of science and engineering where simulation takes many forms, including protein folding, architectural drawing, and models of physical phenomena. She highlights how older practitioners miss having their hands on their workpiece, while younger ones are anxious about knowledge they may never have. In the 1980s, simulation drove a wedge between MIT faculty and students; more recently it has been begrudgingly accepted by all.

There is certainly a generation gap here, but it exists as much in the technology itself as in the minds of the scientists. Turkle repeatedly emphasizes the “black box” nature of software, and how its users cannot examine the code themselves. She conveniently ignores the open source movement, which creates software that can be understood, modified, and redistributed by its users. True, much of science and engineering software is still proprietary, and open source offerings are frequently inferior to the paid versions, but she doesn’t even have that discussion.

Secondly, even if we users could see the source, understanding it is not trivial. Her book predates the “learn to code” movement by a few years, but the same objections apply: computer science is an engineering field in its own right, and software should be developed and maintained by specialized practitioners rather than done “on the side” by all engineers. Yes, domain experts should be brought in when necessary. Research into more advanced programming languages will likely only make the situation worse, as such languages typically rely on an ever-larger and more abstract body of knowledge, catering to the expert over the beginner.

Any simulation that performs calculations that could be done by hand is really an automation. A true simulation, arguably by definition, cannot be fully understood. I agree that all engineers should be able to dash off a few lines to automate a menial task, but simulations are harder. There are languages (Python and Ruby, in particular) that are easy to learn well enough to automate simple tasks, but most simulations aren’t written in these languages. The need for performance drives simulations to be written in C or C++, frequently incorporating many libraries (shared code written by someone else; even programmers don’t understand all of their own program). Unlike the command-line utilities of yore, graphical user interfaces and high-performance graphics rendering require specialized and complex programming techniques. Integrating with the internet or a database makes programs more complicated still. Finally, the size of programs has ballooned. There is so much code in a typical piece of software, and it is so complex, that I find it naive when non-programmers insist that if only they could see the code, they could understand it.
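To make the distinction concrete, here is the sort of menial-task automation I have in mind: a short Python sketch that averages a column of measurements (the file name and column are hypothetical). Every step could be done by hand; the script only removes the tedium.

```python
import csv

# Average the "pressure" column of a hypothetical results file.
# Every step here could be done with a calculator; the script
# merely automates the drudgery.
total = 0.0
count = 0
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += float(row["pressure"])
        count += 1

print("mean pressure:", total / count)
```

A simulation, by contrast, produces answers we could not have worked out ourselves in any reasonable amount of time.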

Programs and programming today are far more complicated than in the 1980s. The most advanced climate models consist of a million lines of FORTRAN code, simulating equations from many disparate fields of natural science. They are beyond the understanding of any single individual. And indeed, understanding is no longer the goal. Just as technology has allowed us to see things invisible to our eyes, and hear things inaudible to our ears, simulation allows us to think things incomprehensible to our brains.

Turkle certainly grasps that deep, full understanding is imperiled, but only in her conclusion does she so much as entertain the notion that this may be necessary or good. Simulation may open up new possibilities more than it closes them. Science has surpassed the point of what can be understood by crystalline thought; the future is noisier and approximate. Robert Browning wrote that “a man’s reach should exceed his grasp”. A fictionalized Nikola Tesla ponders this quote in The Prestige, with the context that what man can affect, he cannot always control. Turkle would do well to heed his next remark: “It’s a lie. Man’s grasp exceeds his nerve.”

How do we get the nerve to embrace the new future of simulation? In part, by addressing specific concerns raised by Turkle’s interviewees. Defaults are too tempting, so we shouldn’t provide them. A design can appear finalized before it actually is, preventing further iteration, so displays can mimic physical imprecision. High-level summaries should allow the user to see examples, to jump between layers, to see why the computer has classified or calculated the way that it did. Nevertheless I expect changes in simulation to come primarily from software engineering and culture rather than the technology itself.

Turkle gives an example of a protein folding simulation devising a molecule that is clearly wrong to the biologist, because it defies her understanding of how proteins work. But what is her thought process if not another simulation? Perhaps it is more correct than the software in this case, but in general, thinking and outlining should be considered the cheapest, fastest, and lowest-fidelity of the many simulation tools available to us. Intuition can be powerful, but it can also be wrong, and any claim in science or engineering requires more than intuition to support it. More formalized methods of thinking (how did they build skyscrapers in the 1920s?) are just the following of algorithms that a machine can execute today, faster and with (potentially!) fewer errors. If the biologist can articulate her thought process, it can be automated, and if she cannot, it’s mere intuition.

With regards to creativity, simulation (and I include the word processor here) is a double-edged sword. When the barrier to creation is low, we can get our thoughts out quickly and complete the dreaded first draft. Ideas and structure form and reflow on the fly. The work is crafted interactively, iteratively, in a continuous and tight feedback loop. Human and simulation play off each other. This is clearly better than the computer working in isolation, such as the protein folding program that merely produced the “right” answer. What I propose is that it may also be superior to humans working “blind”, creating ideas fully in their heads, attempting to simulate them there, in discrete drafts. That model is a relic of hand- or typewritten pages, technologies where copying wasn’t instant. The downside is that it’s easy to do less thinking before writing, and the end product may lack a harmonious global structure as a result. The compromise is to work with many tools of increasing fidelity and expense. When an idea does not work, we desire to “fail fast” in the least expensive medium in which the flaw is manifest.

Frequently ideas we think are brilliant fall over when confronted, and a simulation can fail fast, can confront them quickly. In user interface design, paper prototypes are created not for the designer but for a test subject, and all the intellectualized beauty in the world means nothing if the user can’t operate the interface. This echoes a fundamental tenet of engineering across the board: you are not designing for yourself. What you think doesn’t matter unless validated by a ruthlessly objective third party. Writing is the exception: it reifies human thought itself without the consent of the external world, which shifts the burden onto the author. Yet even though the writer struggles to crystallize his ideas as much as possible prior to composing them, he knows the value of trusted readers to provide feedback.

This leads us to the notion of software testing, which is all but absent from Turkle’s book. Provably correct software is an active area of research, so those shipping software today verify its correctness empirically. Testing exists on many scales, from system-wide routines of many actions to “unit tests” that cover as small a piece of functionality as possible. Although typically written by the developers, a user can also benefit from writing unit tests, as they force her to think about and articulate how the simulation should act in a very controlled instance. She will build confidence that the simulation matches her ideas. When a test fails, either the simulation, her understanding, or her ability to articulate her understanding, is incomplete.
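As a sketch of what such a user-written test might look like, here is a Python unittest with a made-up cooling model standing in for the real simulation; the function and its expected behavior are hypothetical.

```python
import unittest

def simulate_cooling(temp, minutes):
    # Hypothetical stand-in for a real simulation: Newtonian cooling
    # toward an ambient temperature of 20 degrees.
    ambient = 20.0
    for _ in range(minutes):
        temp = temp - 0.1 * (temp - ambient)
    return temp

class TestCooling(unittest.TestCase):
    def test_object_at_ambient_stays_put(self):
        # A controlled instance the user can reason about without the tool:
        # something already at room temperature should not change.
        self.assertAlmostEqual(simulate_cooling(20.0, 60), 20.0)

    def test_hot_object_cools(self):
        # Direction of change is something she can articulate in advance.
        self.assertLess(simulate_cooling(90.0, 60), 90.0)

if __name__ == "__main__":
    unittest.main()
```

Each test encodes one small claim she already believes about the world; the simulation either agrees or forces a conversation.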

Testing software is a philosophical shift from understanding it, moving programming away from science and towards engineering. Scientists will protest that experimenting on a designed thing is less effective than understanding the design directly, to which I respond by challenging the notion that software is designed. I have already discussed the staggering complexity of some software, and this should lead us to treat it as discovered rather than invented, and therefore investigated by experimentation rather than reverse-engineering. We explore simulation as an image of the real world: highly accurate but imperfect, yet faster, cheaper, and more versatile.

George Box, the statistician (what are statistics if not a simulation?), wrote that “all models are wrong, but some are useful”. He defuses Turkle’s claim that simulations mislead by agreeing with it, and then salvages utility. He deconstructs and reconstructs simulation as a practice in a sentence. It is certainly worthwhile to make our models less wrong or more transparent when possible (these are often competing aims), and to supplement simulations with the real. Still, as in all discussions on culture-shaping technology, we must conclude that the main approach must be to use the technology more prudently, rather than ban it or hope that it will go away. Whether simulation acts as a magnifying glass or a funhouse mirror depends on how you look at it.


Critical Complexity

Here’s a task for you: draw a circle of radius three around the origin.

What system do you use? Well, you could use an intuitive system like Papert’s turtle. Walk out three, turn ninety degrees, and then walk forward while turning inward. By identifying as a specific agent, you take advantage of having a brain that evolved to control a body. If it doesn’t seem intuitive, that’s because you’ve been trained to use other systems. Your familiarity is trumping what comes naturally, at least to children.

You’re probably thinking in Cartesian coordinates. You may even recall that x^2 + y^2 = 3^2 will give you the circle I asked for. But that’s only because you memorized it. Why this formula? It’s not obvious that it should be a circle. It doesn’t feel very circular, unless you fully understand the abstraction beneath it (in this case, the Pythagorean theorem) and how it applies to the situation.

Turtle geometry intuitively fits the human, but it’s limited and naive. Cartesian geometry accurately fits your monitor or graph paper, the technology, but it’s an awkward way to express circles. So let’s do something different. In polar coordinates, all we have to say is r=3 and we’re done. It’s not a compromise between the human and the technology, it’s an abstraction – doing something more elegant and concise than either native form. Human and technology alike  stretch to accommodate the new representation. Abstractions aren’t fuzzy and amorphous. Abstractions are crisp, and stacked on top of each other, like new shirts in a store.
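To make the contrast concrete, here is a rough Python sketch of the three representations; the step counts and starting point are arbitrary choices of mine, not part of the argument.

```python
import math

# Turtle-style: take a small step forward, turn inward a small angle, repeat.
def turtle_circle(radius, steps=360):
    x, y, heading = radius, 0.0, math.pi / 2   # start at (3, 0), facing "up"
    step = 2 * math.pi * radius / steps
    points = []
    for _ in range(steps):
        points.append((x, y))
        x += math.cos(heading) * step
        y += math.sin(heading) * step
        heading += 2 * math.pi / steps
    return points

# Cartesian: solve x^2 + y^2 = r^2 for y, once per branch.
def cartesian_circle(radius, steps=360):
    xs = [radius * (2 * i / steps - 1) for i in range(steps + 1)]
    upper = [(x, math.sqrt(radius**2 - x**2)) for x in xs]
    lower = [(x, -math.sqrt(radius**2 - x**2)) for x in xs]
    return upper + lower

# Polar: r = 3, full stop. Everything else is convention.
def polar_circle(radius, steps=360):
    return [(radius * math.cos(2 * math.pi * t / steps),
             radius * math.sin(2 * math.pi * t / steps)) for t in range(steps)]
```

The three functions trace the same shape, but only the polar one reads like the sentence “the radius is three.”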

We’ve invented notation that, for this problem, compresses the task as much as possible. The radius is specified; the fact that it’s a circle centered on the origin is implicit in the conventional meaning of r and the lack of other information. It’s been maximally compressed (related technical term: Kolmogorov complexity).

Compression is one of the best tools we have for fighting complexity. By definition, compression hides the meaningless while showing the meaningful. It’s a continuous spectrum, on which sits a point I’ll call critical complexity. Critical complexity is the threshold above which a significant abstraction infrastructure is necessary. But that definition doesn’t mean much to you — yet.

Think of knowledge as terrain. To get somewhere, we build roads, which in our metaphor are abstractions. Roads connect to each other, and take us to new places. It was trivial to abstract Cartesian coordinates into polar by means of conversions. This is like building a road, with one end connecting to the existing street grid and the other ending somewhere new. It’s trivial to represent a circle in polar coordinates. This is what we do at the newly accessible location. We’ve broken a non-trivial problem into two trivial pieces – although it wasn’t a particularly hard problem, as otherwise we wouldn’t have been able to do that.

Delivering these words to your machine is a hard problem. You’re probably using a web browser, which is written in software code, which is running on digital electronics, which are derived from analog electronics obeying Maxwell’s equations, and so on. But the great thing about abstractions is that you only need to understand the topmost one. You can work in polar coordinates without converting back to Cartesian, and you can use a computer without obtaining multiple engineering degrees first. You can build your own network of roads about how to operate a computer, disconnected from your road network about physics.

Or perhaps not disconnected, but connected by a tunnel through the mountain of what you don’t understand. A tunnel is a way to bypass ignorance to learn about other things based on knowledge you don’t have, but don’t need. Of course, someone knows those things – they’ve laboriously built roads over the mountain so that you can cruise under it. These people, known as scientists and engineers, slice hard problems into many layers of smaller ones. A hard problem may have so many layers that, even if each is trivial on its own, they are non-trivial collectively. That said, some problems are easier than they look because our own sensemaking abstractions blind us.

If you want to write an analog clock in JavaScript, your best bet is to configure someone else’s framework. That is, you say you want a gray clockface and a red second hand, and the framework magically does it. The user, hardly a designer, is reduced to muttering incantations at a black box, hoping the spell will work as expected. Inside the box are some 200 lines of code or more, most of them spent on things not at all related to the high-level description of an analog clock. The resulting clock is a cul-de-sac at the end of a tunnel, overlooking a precipice.

By contrast, the nascent Elm language provides a demo of the analog clock. Its eight lines of code effectively define the Kolmogorov complexity: each operation is significant. Almost every word or number defines part of the dynamic drawing in some way. To the programmer, the result is liberating. If you want to change the color of the clockface, you don’t have to ask the permission of a framework designer, you just do it. The abstractions implicit in Elm have pushed analog clocks under the critical complexity, which is the point above which you need to build a tunnel.

There’s still a tunnel involved, though: the compiler written in Haskell that converts Elm to JavaScript. But this tunnel is already behind us when we set out to make an analog clock. Moreover, this tunnel leads to open terrain where we can build many roads and reach many places, rather than the single destination offered by the framework. What’s important isn’t the avoidance of tunnels, but of tunnels to nowhere. Each abstraction should have a purpose, which is to open up new terrain where abstractions are not needed, because getting around is trivial.

However, the notion of what’s trivial is subjective. It’s not always clear what’s a road and what’s a tunnel. Familiarity certainly makes any abstraction seem simpler. Though we gain a better grasp on an abstraction by becoming familiar with it, we also lose sight of the underlying objective nature of abstractions: some are more intuitive or more powerful than others. Familiarity can come both from understanding where an idea comes from and how it relates to others, and from practicing using the idea on its own. I suspect that better than either one is both together. With familiarity comes automaticity, where we can quickly answer questions by relying on intuition, because we’ve seen them or something similar before. But depending on the abstraction, familiarity can mean never discarding naïveté (turtle), contorting into awkward mental poses (Cartesian) – or achieving something truly elegant and powerful.

It’s tempting to decry weak or crippling abstractions, but they too serve a purpose. Like the fancy algorithms that are slow when n is small, fancy abstractions are unnecessary for simple problems. Yes, one should practice using them on simple problems so as to have familiarity when moving into hard ones. But before that, one needs to see for oneself the morass that weak or inappropriately chosen abstractions create. Powerful abstractions, I am increasingly convinced, cannot be constructed on virgin mental terrain. For each individual, they must emerge from the ashes of an inferior system that provides both experience and motivation to build something stronger.

Visualizing Complexity

This post could be considered the third of three responding to Bret Victor’s trio of talks; the previous ones were Abstraction and Standardization and Media for Thinking the Eminently Thinkable.

Question: what makes a program complex? Answer: time.

There are multiple types of time. The obvious one, the one measured by stopwatches, I’ll call physical time. A small program promotes a tight feedback loop, where the effect of a change is visible in a few seconds. Large programs take longer to compile and run, as do those dealing with large amounts of data. To help offset this, programming languages developed threading and concurrency constructs: an expensive task can be queued up, so it will happen at some point in the future. This sort of parallelism makes programs much harder to reason about and test.
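As a sketch of what queuing an expensive task looks like (using Python’s standard thread pool; the slow task itself is a stand-in):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def expensive_task(n):
    time.sleep(1)          # stand-in for a slow computation
    return n * n

with ThreadPoolExecutor() as pool:
    future = pool.submit(expensive_task, 7)   # queued; runs "at some point"
    # ...other work happens here, without waiting on the result...
    print(future.result())                    # only now do we block on it
```

The answer arrives eventually, but the program’s order of events no longer matches the order of the source code, which is exactly where the reasoning gets harder.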

Then there’s logical time. This includes event-based programming, which usually involves a GUI. A user’s clicks and drags become events to which the system responds. Code is called when something happens, not when the thing before it ends. Rather than a procedural recipe racing towards the final answer, these programs are state machines, looping indefinitely and branching in response to an arbitrary sequence of events.
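Here is a toy Python sketch of that structure, with hypothetical events and handlers standing in for a real GUI:

```python
# A toy event loop: code runs when an event arrives, not when the
# previous step finishes. The event sequence is arbitrary.
state = {"count": 0}

def on_click(event):
    state["count"] += 1

def on_drag(event):
    state["count"] -= 1

handlers = {"click": on_click, "drag": on_drag}

for event in ["click", "click", "drag", "click"]:   # stands in for user input
    handlers[event](event)

print(state["count"])   # the final state depends on the whole event history
```

There is no single path to a final answer, only a state that reflects whatever sequence of events happened to occur.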

Finally, for completeness, there’s developer time. Memories fade, requirements change, people join or leave the project, the technology itself changes. Like people, code deteriorates as it ages, although measures can be taken to mitigate the decline. Any large codebase has annoying “legacy code” kept around for “historical reasons” and “backwards compatibility”.

In his talk Drawing Dynamic Visualizations, Bret Victor presents a drawing tool where the user creates a program by direct manipulation. This program can be used to create custom pictures for multiple datasets (different measurements of the same variables). The results feel very “PowerPoint-y”, the sort of thing you’d present in a boardroom or a print scientific journal. The method of creation is new, but the results emulate old media.

If you’re reading this, Bret, and think I’m a bit obsessed, (1) thanks for noticing (2) this will likely be the last post about you for a while.

Users can step through the drawing program, but it’s capable of running instantaneously (i.e., without time and therefore without much of the complexity). There’s no need for a visualization to wait for a network event, but a good visualization reacts to mouse motions, like hover, click, and drag. Interactivity is a crucial component of visualization. Victor has made a great way to make charts for boardroom presentations, but he hasn’t allowed the user to interact with the data. The visualization is “dynamic” in that it accommodates new data, but it doesn’t react to the user. I’m not asking to drag a bar to set the data to that point; we’re working under the assumption that the underlying data is fixed. And I can abstract any color or magic number as a parameter, so I can set data-independent features of the graph, like error bars, dynamically. But without interactivity, we can only accomplish the first third of Shneiderman’s mantra:

Overview first, zoom and filter, then details-on-demand.

Without interactivity, we can’t zoom or filter or otherwise adjust the visualization to get our details on demand. All we have is an overview. Our visualization sits there, dead. This is not good enough. Data today is so complex that even the best static image isn’t going to elucidate subtle meanings. Instead we need interactivity, so that the visualization reacts to the user, not just the designer. I should clarify straight away that I’m not talking about the interactivity Victor rails against in his paper Magic Ink, where the user has a specific, often mundane question she wants answered. (How long will I wait for the bus? What movie theater should I go to?) I’m talking about systems where the user doesn’t know what she wants, and needs to explore the data to develop an intuition or notice something strange.

There is a disconnect, an asymmetry, in Victor’s work. In Stop Drawing Dead Fish, we had a novel direct manipulation tool and interactivity with its results. In Drawing Dynamic Visualizations, we had a novel direct manipulation tool and dynamic data. In Media For Thinking The Unthinkable, specifically the part on signal processing, we had interactivity and dynamic data. Create with direct manipulation, load dynamic data, interact with the result: pick two.

How can we add interactivity to Victor’s tool? Let’s start with mouseover. As a function of the mouse position, which can be considered an input, hover is stateless. The tool can run geometric collision detection on its primitives and provide a boolean measurement (output) as to whether the shape is being hovered over. If you want to use this information to, say, change the shape’s color, you have to have shapes able to read their own outputs. This can lead to feedback loops. If we draw nothing on mouseover, then we’re not being moused over anymore, so we go back to drawing our shape, which is now being moused over…and suddenly our system has state, and “flickers” indefinitely. Worse, by creating a place for code that is executed only if certain external conditions are true, we create asynchronous, “jumpy” code. This is a large increase in physical and logical time.
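To see why that loop never settles, here is a toy Python model of the “draw nothing on mouseover” rule; the frame loop and variable names are my own stand-ins, not part of Victor’s tool.

```python
# A toy model of the hover feedback loop described above.
# If hovering hides the shape, the next frame sees no shape under the
# mouse, draws it again, and the system oscillates ("flickers").
shape_visible = True
mouse_over_shape_location = True   # the mouse sits where the shape would be

for frame in range(6):
    hovered = shape_visible and mouse_over_shape_location
    shape_visible = not hovered    # rule: "draw nothing on mouseover"
    print(f"frame {frame}: visible={shape_visible}")
# The output alternates True/False forever: the state never settles.
```

Whether a real tool flickers in exactly this way depends on details I don’t know; the point is only that the rule creates state, and time, where none existed.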

Selecting a shape runs into similar problems, and additionally requires a global variable (internal, not a parameter) to keep track of the selection. It gets even worse if we want to add collision detection between objects, as in a force-directed diagram. (Actually that’s not collisions, just proximity-based repulsion, but roughly the same idea.) Even if the system manages to detect when two subpictures touch, they will now need a place for code that is called when that is the case. Victor briefly demonstrated how physics can be simulated by using a guide vector to geometrically represent velocity, which we could presumably shift in reaction to the angle of line contact. But we’re starting to think of shapes as their own entities, having state represented in guides associated with them, rather than being drawn and forgotten about. This brings us into Object Oriented Programming, the traditional choice for this sort of interactivity. It’s a great paradigm, but it’s quite foreign to Victor’s tool. (Although his Stop Drawing Dead Fish tool lets its user do exactly this. Maybe the line between data scientists and visual artists, or perhaps their tools, is blurring…)

There’s a brief moment where Victor abstracts a color from a shape, and makes it a parameter. This parameter starts off as numeric but changes to a color, likely in response to a keyboard command. If the tool already has a notion of types, that a number is not a color, then there’s a piece of complexity already imported from traditional programming. Types, OOP, collision detection, encapsulated mutable state – all of these ideas have their place in code. There are also a number of mainstays of drawing tools we could include, like gradients, z-order, manual kerning, and the dropper tool. Deciding which of these to include and which to omit, and how to integrate them into the tool, will require some careful design decisions. I wonder how many of these ideas we can pile into the tool before either the graphical interface or the layperson’s acuity collapses under the weight of the complexity.

I, like Victor, am frustrated at the state of software and computing. Most artists are able to interact with their medium, to try something, revise it, and use mistakes as inspiration. Listen to how creators in fields as diverse as painting and carpentry discuss a problem. It’s always about sculpting the unruly workpiece material with capable and precise tools, man vs. nature. So why then are computers, where the tool and the medium are one and the same, so limiting that our society can’t see them as anything other than frustratingly complex? Instead of using the medium to find out what she wants to create, the creator has trouble merely realizing what’s in her head. We need a fundamental overhaul of the tools we use for thinking. These tools should let humans, data, and simulation work together. Victor produces tantalizing glimpses of such tools with ease.

Maybe there are true geniuses out there. People who effortlessly sift through complexity, and can make bold pronouncements with confidence. The rest of us have to fake it. We get mired in the cacophony of voices and ideas, paths to choose and tools to use. We do our best to make sense of them but always wonder, in a grass-is-greener sort of way, if we’re missing the point entirely.

Sherlock Holmes and Hard Problems

“With a few simple lines of computer code,” boasts Moriarty in BBC Sherlock, “I can crack any bank, open any door”. (Paraphrased from memory, shh.) Without any spoilers, I can tell you that Sherlock’s nemesis is portrayed as controlling every detail, foreseeing every possibility, and manipulating a web of individuals through blackmail, bribery, snipers, and sowing distrust. And he makes vague claims of having the ultimate computer hack, stronger than any security system.

What kind of software would this be? Most of computer security relies on mathematics that is computationally hard. Consider a traditional padlock. If you know the combination, it takes almost no time to open the lock. If you don’t know the combination, you have to try every possible code. The combination is easy to check, but difficult to discover.
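Here is a quick Python sketch of that asymmetry, using a made-up four-digit combination: verifying a guess is a single comparison, while discovering the combination means enumerating the whole space.

```python
import itertools

SECRET = (2, 7, 1, 9)   # a hypothetical 4-digit combination

def try_combination(guess):
    # Checking a guess is instant...
    return guess == SECRET

# ...but discovering the combination means trying up to 10**4 of them.
for guess in itertools.product(range(10), repeat=4):
    if try_combination(guess):
        print("opened with", guess)
        break
```

For a four-digit padlock the brute-force search is over in a blink; real security rests on making the search space astronomically larger while keeping each check just as cheap.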

A completely general computer hack of the sorts Moriarty claims to have would be like being able to open a padlock without the combination as fast as you could with the combination. Sherlock operates in much the same way. Anyone can verify his string of deductions after he’s made them; his genius is to devise them in the first place.

So that’s what separates fact from fiction. These portrayals of genius are unrealistic because they take the same amount of time to produce a solution as it takes to verify one. Right?

Not quite. “Can any answer that can be checked quickly also be created quickly?” is one of the great unsolved problems of computer science, known as P versus NP. We don’t know.

The Diamond Age: An Edtech Reading

I recently reread Neal Stephenson’s The Diamond Age. It’s a work of science fiction that depicts a future infused with nanotechnology, set in Shanghai and the surrounding areas. It offers some great material for a discussion on the role of technology in education and the limits of computers. Its themes are also relevant to edtech, which is pretty impressive for something published in 1995.

As a quick summary, Lord Finkle-McGraw asks engineer John Hackworth to create a computerized book (the Primer) to supplement his granddaughter Elizabeth’s schooling. Hackworth attempts to create a second copy for his own daughter illicitly, but is mugged and the book falls into the hands of the young street urchin Nell. The Primer guides Nell through leaving her abusive domestic situation and educates her using a customized fantasy story. Though the Primer is capable of reacting to voice commands and displaying a wealth of information, its narration is performed by a human actress Miranda whom Nell does not know. Hackworth, charged with intellectual property theft, makes a plea bargain to provide the source code of the Primer so copies may be distributed to tens of thousands of young abandoned Chinese girls. In the process of modifying the Primer to use a computerized voice, Hackworth is finally able to secure a copy for his daughter Fiona, before disappearing to serve his ten-year sentence.

That’s the first act. To do a proper analysis, I’m going to have to drop a few more spoilers from the most memorable parts of the book, so be warned.


Programming our Children

A generation ago, computers only understood text. You would program the computer in English text. You would ask your questions on punchcards encoding text. Your answer would be provided as monospaced, unadorned text. Since the early 1980s we have refined the graphical user interface, or GUI, to allow humans to communicate with computers on more familiar terms. Although a boon for the layperson, GUIs have been troublesome for computer scientists. They are hard to build because they are so open-ended. They are hard to test, because rather than printing a single correct answer there are many paths the user may take to accomplish the same goal.

Computer science still starts with a text editor and a compiler, because programming is better served by text. Text affords programmers absolute control over their programs. Written language is far more expressive than pointing and clicking, allowing for explicit and precise descriptions. Clean code is a clear explanation of an algorithm directed to a mindless worker. The struggle of a programmer is to achieve sufficient clarity for both the computer and him- or herself. It can be a very enlightening experience, to debug an algorithm and then discover it doesn’t quite do what you wanted it to do, and so adapt it further. That said, the sheer austerity of the task can make it daunting without the right training and motivation on the part of the programmer.

GUIs are quite the opposite. They show many available options, reward experimentation, and make complex actions easy by hiding detail.  GUIs make computing accessible to a wide audience. A user interacts with a GUI as a peer, clicking and dragging and seeing how the interface reacts. Ultimately, convinced the GUI is logical and predictable, they embrace it as a new way of thinking. But GUIs are limited. They make it very difficult to perform analogous actions repeatedly or store a sequence of actions for later use.

There is an analogy to be made with education. Programming is like direct instruction, where knowledge is relayed linguistically and authoritatively. (No wonder Bill Gates and Salman Khan like it.) GUIs are like constructionism, where feedback loops reveal non-arbitrary behavior of a system that the user/student slowly begins to internalize. (So I constructed my own definition. How meta.)

Both methods of interacting with a computer are valid and potentially productive, so it seems both educational philosophies are valid as well. But there is a critical flaw in the analogy. For GUIs, students are analogous to the user and the computer is akin to some representation of the material itself: manipulatives, an experiment, a video, a graph or plot. But for text-based programming, the student is not the programmer; they’re the computer! The teacher is the programmer, the direct instructor, who crafts clear explanations of algorithms for the students to mechanically follow.

Direct instruction is degrading. It robs students of their ability, desire, and right to explore and create. Knowledge transfer is not like copying a file, where we wait as it is methodically duplicated. Knowledge is personal, with idiosyncrasies and unique contexts. To insist on teaching children the same way we program a computer is simply wrong. It cuts to the core of what Dethorning STEM is about: our society treats people like computers and computers like people.

On a positive note, this analysis suggests that we should introduce computational thinking as another way for students to interact with the material in a constructionist setting. Having students write their own pseudocode for long division may be a viable way to teach it, if it needs to be taught at all. Computational literacy will play an increasing role in the next century as computers become more ingrained in our lives. In the future, following an algorithm won’t be good enough — you’ll have to be able to write one.
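As a sketch of the sort of activity I mean, here is long division written out as an algorithm in Python (the particular numbers are arbitrary); a student’s pseudocode would capture the same steps.

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division, the way it's done on paper."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient_digits.append(remainder // divisor)
        remainder = remainder % divisor
    quotient = int("".join(str(d) for d in quotient_digits))
    return quotient, remainder

print(long_division(7429, 6))   # (1238, 1)
```

The value is not in the Python; it is in forcing the student to articulate each “bring down the next digit” step precisely enough that a mindless worker could follow it.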

Unfortunately, the state of computer science education is in shambles. Basic computer classes often teach how to use Microsoft Office by following rote algorithms — truly the blind leading the blind. Computer science itself takes a back seat to all other subjects, and is only sometimes offered as an elective. But I think that computational literacy does not require a computer scientist, a computer lab, or even a computer. It’s not content; it’s a technique. By cleverly inserting the right activities into the existing curriculum, teachers can cover computational thinking alongside any subject. Training teachers how to do that, and getting the administrators to sign on, will prove difficult.

A new, innovative approach is needed, one that breaks from the ossified red tape and small scale of the classroom, and equally from the poor pedagogy underlying most edtech products. The next generation of children deserves no less.

The Anti-Mac User Interface

I came across this 1996 paper that, as a thought experiment, takes the principles of Macintosh user interface design and inverts them, just to see what would happen. For example, by accepting the Mac’s point-and-click interface, “it’s as if we have thrown away a million years of evolution, lost our facility with expressive language, and been reduced to pointing at objects in the immediate environment. Mouse buttons and modifier keys give us a vocabulary equivalent to a few different grunts.” They go on to show that metaphors can be crippling, direct manipulation can be tedious, consistency can be boring, and stability can be unhelpful. We expect our computers to stand still and not touch anything for fear of confusing us or breaking something, a relationship that paints the computer as an incompetent servant and the human as a weak-minded control freak.

What if your computer were actively helpful? What if it opened your mail every morning, and your webcomics only on MWF when they update? What if it cleaned off the 30 items on your desktop and put them into the right folders, and then changed the desktop picture to something it knows you like? This sort of user experience is extremely difficult to do well; it may even be AI-complete.

Nevertheless I feel that as computers become more prevalent and more capable our relationship with them needs to change. In 40 years nearly everyone will be a “digital native”, and this can be either a blessing or a curse. If we are locked into the interaction paradigms of our immigrant parents, we will be crippled by them forever. But we will reap benefits if we can raise a generation capable of enjoying lingual, contextualized, and diverse computing experiences.

What’s most interesting though is that the authors identify many of the user experiences seen today, more than a decade and a half later, on Linux (primarily) and Mac (ironically). Users encounter the desktop metaphor still, although with third-party apps creating some different experiences and branding. In the browser, the idea of contextualized locations is manifest, and it reflects reality. Different web pages look different because they are made by different people, much in the same way people have houses of different sizes, cleanliness, and decor. Far from having a single word processor interface, we deal with several on a daily basis, both in and out of the cloud. WordPress looks different from Gmail, which looks different from MS Word, which behaves very differently compared to vim. That’s okay, because they each have their own flavor and slightly different use case. Finally, the command line is resurgent, with new interest in programming, its tight integration with Linux, and the ubiquity of search bars. The command line is lingual, promoting automation and semantic meaning. Today’s user is presented with an arrangement of user interfaces, each tailored to a specific need, which in turn places requirements on the user. Or is it the other way around: our computers are “optimized for the category of users and data that we believe will be dominant in the future: people with extensive computer experience who want to manipulate huge numbers of complex information objects while being connected to a network shared by immense numbers of other users and computers.” Sounds like 2012 to me.

Operating in this diverse computing environment requires a large amount of cognitive work. We need to bring children through the evolution of language and thought quickly, teaching them to understand hierarchies, modularities, and contexts, the differences that characterize computing today. This will take time. The authors write that “it would not be unreasonable to expect a person to spend several years learning to communicate with computers, just as we now expect children to spend 20 years mastering their native language.” We should teach computers using computers. It’s time to use technology to strengthen and organize human thought. In order to do this, we need a new generation of educational technologies that not only do a better job teaching about math or history but also teach about thinking itself. Working a computer is a puzzle, and it can take years of practice. We need a way to simulate these nested challenges, and to promote the structured ways of thinking that solve them.