## Archive for the ‘Best of Dethorning STEM’ Category

### The Fallacy of the Right Answer

The Fallacy of the Right Answer is everywhere. With regard to education technology, it dates back at least to B. F. Skinner.

Skinner saw education as a series of definite, discrete, linear steps along a fixed, straight road; today this is called a curriculum. He referred to a child who guesses the password as “being right”. Khan Academy uses similar gatekeeping techniques in its exercises, limiting the context: students must meet one criterion before proceeding to the next, spoon-fed knowledge through a peephole not unlike those of Skinner’s machines. Furthermore, these steps are claimed to be objective, universal, and emotionless. Paul Lockhart calls this the “ladder myth”: the conception of mathematics as a clear hierarchy of dependencies. But the learning hierarchy is tangled, replete with strange loops.

It is fallacious yet popular to think that a concept, once learned, is never forgotten. But most educated adults I know (including myself) find value in rereading old material, and make connections back to what they have already learned. What was once understood narrowly or mechanically can, when revisited, be understood in a larger or more abstract context, or with new cognitive tools. There are two words for “to know” in French. Savoir means to know a fact, while connaître means to be familiar with, comfortable with; to know a person. The Right Answer loses sight of the importance, even the possibility, of knowing a piece of information like an old friend, of finding pleasure in knowing, of knowing for knowing’s sake, because you want to. Linear teaching is workable for teaching competencies but not for teaching insights, things like why those mechanical methods work, how they can be extended, and how they can fail.

Symbol manipulation according to fixed rules is not cognition but computation. The learners take on the properties of the machines, and of those who programmed them. As Papert observed, the computer programs the child, rather than the child programming the computer, as he would prefer. Much of this mechanical emphasis is driven by the SAT and other unreasonable standardized tests, which are nothing more than timed high-stakes guessing games. They are gatekeepers to the promised land of College. Proponents of education reform frequently cite distinct age-based grades as a legacy of the “factory line model” dating back to the industrial revolution. This model permeates not only how we raise children, but more importantly, what we raise them to do, what we consider necessary of an educated adult. Raising children to work machinery has given way to raising them to work like machinery. Tests like the SAT insist that we do reproducible, de-individualized work, compared against a clear, ideal, unachievable standard. Putting this methodology online does not constitute a revolution or a disruption.


Futurists have gone so far as to see the brain itself as programmable, in some mysteriously objective sense. At some point, Nicholas Negroponte veered off his illustrious decades-long path. Despite collaborating with Seymour Papert at the Media Lab, his recent work has been dropping tablets into rural villages. Instant education, just add internet! It’s great that the kids are teaching themselves, and have some autonomy, but who designed the apps they play with? What sort of biases and fallacies do they harbor? Does African children learning the ABCs qualify as cultural imperialism? His prediction for the next thirty years is even more troublesome: that we’ll acquire knowledge by ingesting it. Shakespeare will be encoded into some nano-molecular device that works its way through the blood-brain barrier, and suddenly: “I know King Lear!” Even if we could isolate the exact neurobiological processes that constitute reading the Bard, we all understand Shakespeare in different ways. All minds are unique, and therefore all brains are unique. Meanwhile, our eyes have spent a few hundred million years of evolutionary time adapting to carry information from the outside world into our minds at the speed of an Ethernet connection. Knowledge intake is limited not by perception but by cognition.

Tufte says: to simplify, add context. Confusion is not a property of information but of how it is displayed. He said these things about information graphics, but they apply to education as well. We are so concerned with information overload that we forget information underload, where our brain is starved for detail and context. It is not any particular fact, but the connections between facts, that constitute knowledge. The fallacy of reductionism is to insist that every detail matters: learn these things and then you are educated! The fallacy of holism is to say that no details matter: let’s just export amorphous, nebulous college-ness and call it universal education! Bret Victor imagines how we could use technology to move from a contrived, narrow problem to a deeper understanding of generalized, abstract notions, much as real mathematicians do. He also presents a mental model for working on a difficult problem:

I’m trying to build a jigsaw puzzle. I wish I could show you what it will be, but the picture isn’t on the box. But I can show you some of the pieces… If you are building a different puzzle, it’s possible these pieces won’t mean much to you. You might not have a spot for them to fit, or you might not yet. On the other hand, maybe some of these are just the pieces you’ve been looking for.

One concern with Skinner’s teaching machines and their modern-day counterparts is that they isolate each student and cut off human interaction. We learn from each other, and many of the things that we learn fall outside the curriculum ladder. Learning to share becomes working on a team; show-and-tell becomes leadership. Years later, in college, many of the most valuable lessons are unplanned, a result of meeting a person with very different ideas, or hearing exactly what you needed to at that moment. I found that college exposed me to brilliant people, and let me watch them analyze and discuss a problem. The methodology was much more valuable than the answer it happened to yield.

The hallmark of an intellectual is to create daily what has never existed before. This can be an engineer’s workpiece, a programmer’s software, a writer’s novel, a researcher’s paper, or an artist’s sculpture. None of these can be evaluated by comparison to a correct answer, because the correct answer is not known, or may not even exist. The creative intellectual must have something to say and know how to say it; ideas and execution must both be present. The bits and pieces of a curriculum can make for a good technician (a term I’ve heard applied to a poet capable of choosing the exact word). It’s not so much that “schools kill creativity” as that they replace the desire to create with the ability to create. Ideally schools would nurture and refine the former (assuming something-to-say is mostly innate) while instructing the latter (assuming saying-it-well is mostly taught).

What would a society look like in which everyone was this kind of intellectual? If everyone is writing and drawing, who will take out the trash, harvest food, etc? Huxley says all Alphas and no Epsilons doesn’t work. Like the American South adjusting to an economy without slaves, elevating human dignity leaves us with the question of who will do the undignified work. As much as we say that every child deserves an education, I think that the creative intellectual will remain in an elite minority for years to come, with society continuing to run on the physical labor of the uneducated. If civilization ever truly extends education to all, then either we will need to find some equitable way of sharing the dirty work (akin to utopian socialist communes), or we’ll invent highly advanced robots. Otherwise, we may need to ask ourselves a very unsettling question: can we really afford to extend education to all, given the importance of unskilled labor to keep society running?

If you liked this post, you should go read everything Audrey Watters has written. She has my thanks.

### Infographics and Data Graphics

I’d like to set the record straight about two types of graphical documents floating around the internet. Most people don’t make a distinction between infographics and data graphics. Here are some of each – open them in new tabs and see if you can tell them apart.

No peeking!

No, really, stop reading and do it. I can wait.

Okay, had a look and made your categorizations? As I see it, dog food, energy, and job titles are infographics, and Chicago buildings, movie earnings, and gay rights are data graphics. Why? Here are some distinctions to look for, which will make much more sense now that you’ve seen some examples. Naturally these are generalizations and some documents will be hard to classify, but not as often as you might think.

Infographics emphasize typography, aesthetic color choice, and gratuitous illustration.
Data graphics are pictorially muted and focused; color is used to convey data.

Infographics use many small paragraphs of text to communicate the information.
Data graphics are largely wordless except for labels and an explanation of the visual encoding.

In infographics, numeric data is scant, sparse, and piecemeal.
In data graphics, numeric data is plentiful, dense, and multivariate.

Infographics have many components that relate different datasets; sectioning is used.
Data graphics have a single detailed image, or less commonly, multiple windows into the same data.

An infographic is meant to be read through sequentially.
A data graphic is meant to be scrutinized for several minutes.

In infographics, the visual encoding of numeric information is either concrete (e.g. world map, human body), common (e.g. bar or pie charts), or nonexistent (e.g. tables).
In data graphics, the visual encoding is abstract, bespoke, and must be learned.

Infographics tell a story and have a message.
Data graphics show patterns and anomalies; readers form their own conclusions.

You may have heard the related term visualization – a data graphic is a visualization on steroids. (An infographic is a visualization on coffee and artificial sweetener.) A single bar, line, or pie chart is most likely a visualization but not a data graphic, unless it takes several minutes to absorb. However, visualizations and data graphics are both generated automatically, usually by code. It should be fairly easy to add new data to a visualization or data graphic; not so for infographics.

If you look at sites like visual.ly, which collect visualizations of all stripes, you’ll see that infographics far outnumber data graphics. Selection bias is partially at fault: data graphics require large amounts of data that companies likely want to keep private, while infographics are far better suited to marketing and social campaigns, so they tend to be more visible. Some datasets are also better suited to infographics than data graphics. Even accounting for those facts, however, I think we have too many infographics and too few data graphics. This is a shame, because the two have fundamentally different worldviews.

An infographic is meant to persuade or inspire action. Infographics drive an argument or relate a story in a way that happens to use data, rather than allowing the user to infer more subtle and multifaceted meanings. A well-designed data graphic can be an encounter with the sublime. It is visceral, non-verbal, profound; a harmony of knowledge and wonder.

Infographics already have all the answers, and serve only to communicate them to the reader. A data graphic has no obvious answers, and in fact no obvious questions. It may seem that infographics convey knowledge, and data graphics convey only the scale of our ignorance, but in fact the opposite is true. An infographic offers shallow justifications and phony authority; it presents the facts as they are. (“Facts” as they “are”.) A data graphic does not foist any conclusion upon its reader but, at one level of remove, provides its readers with the tools to draw conclusions. Pedagogically, infographics embrace the fundamentally flawed idea that learning is simply copying knowledge from one mind to another. Data graphics accept that learning is a process, which moves from mystery to complexity to familiarity to intuition. Epistemologically, infographics ask that knowledge be accepted on little to no evidence, while data graphics encourage using evidence to synthesize knowledge, with no prior conception of what this knowledge will be. It is the difference between memorizing a fact about the world and accepting the validity of the scientific method.

However, many of the design features that give data graphics these superior qualities can be exported back to infographics, with compelling results. Let’s take this example about ivory poaching. First off, it takes itself seriously: there’s no ostentatious typography, and the colors are muted and harmonious. Second, its subject is covered not by a single unified dataset but by multiple datasets describing a unified subject matter, supplemented with non-numeric diagrams and illustrations that embrace their eclectic nature. Unlike most infographics, this specimen makes excellent use of layout to achieve density of information. Related pieces are placed in close proximity rather than relying on sections; the reader is free to explore in any order. This is what an infographic should be, or perhaps it’s worthy of a different and more dignified name: information graphic. It may even approach what Tufte calls “beautiful evidence”.

It’s also possible to implement a data graphic poorly. Usually this comes down to a poor choice of visual encoding, although criticism is somewhat subjective. Take this example of hurricanes since 1960. The circular arrangement is best used for months or other cyclical data. Time proceeds unintuitively counterclockwise. The strength of hurricanes is not depicted, only the number of them (presumably – the radial axis is not labeled!). The stacked bars make it difficult to compare hurricanes from particular regions. If one wants to compare the total number of hurricanes, one is again stymied by the polar layout. Finally, the legend is placed at the bottom, where it will be read last. Data graphics need to explain their encoding first; even better is to explain the encoding on the diagram itself rather than in a separate legend. For example, if the data were rendered as a line chart (in Cartesian coordinates), labels could be placed alongside the lines themselves. (Here is a proper data graphic on hurricane history.)

An infographic typically starts with a message to tell, but designers intent on honesty must allow the data to support their message. This is a leap of faith: that their message will survive first contact with the data. The ivory poaching information graphic never says, in so many words, that poaching is bad and should be stopped. Rather it guides us to that conclusion without us even realizing it. Detecting bias in such a document becomes much more difficult, but the document also becomes much more persuasive (for sufficiently educated and skeptical readers). Similarly, poor data graphics obscure the data, either intentionally because the data don’t support the predecided message, or unintentionally because of poor visual encoding. In information visualization, as in any field, we must be open to the hard process of understanding the truth, rather than blithely accepting what someone else wants us to believe.

I know which type of document I want to spend my life making.

### Critical Complexity

Here’s a task for you: draw a circle of radius three around the origin.

What system do you use? Well, you could use an intuitive system like Papert’s turtle. Walk out three, turn ninety degrees, and then walk forward while turning inward. By identifying as a specific agent, you take advantage of having a brain that evolved to control a body. If it doesn’t seem intuitive, that’s because you’ve been trained to use other systems. Your familiarity is trumping what comes naturally, at least to children.

You’re probably thinking in Cartesian coordinates. You may even recall that $x^2 + y^2 = 3^2$ will give you the circle I asked for. But that’s only because you memorized it. Why this formula? It’s not obvious that it should be a circle. It doesn’t feel very circular, unless you fully understand the abstraction beneath it (in this case, the Pythagorean theorem) and how it applies to the situation.

Turtle geometry intuitively fits the human, but it’s limited and naive. Cartesian geometry accurately fits your monitor or graph paper, the technology, but it’s an awkward way to express circles. So let’s do something different. In polar coordinates, all we have to say is $r=3$ and we’re done. It’s not a compromise between the human and the technology, it’s an abstraction – doing something more elegant and concise than either native form. Human and technology alike stretch to accommodate the new representation. Abstractions aren’t fuzzy and amorphous. Abstractions are crisp, and stacked on top of each other, like new shirts in a store.

We’ve invented notation that, for this problem, compresses the task as much as possible. The radius is specified; the fact that it’s a circle centered on the origin is implicit in the conventional meaning of $r$ and the lack of other information. It’s been maximally compressed (related technical term: Kolmogorov complexity).
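The contrast between the three systems can be made concrete. Here is a minimal sketch in Python (the function names are my own invention, not part of any of these notations): the turtle walks the circle as a sequence of tiny steps, the Cartesian equation $x^2 + y^2 = 3^2$ can only check whether a point lies on the circle, and the polar form $r = 3$ generates the circle directly.

```python
import math

def turtle_circle(r, steps=360):
    """Papert-style turtle: walk forward a tiny arc, turn inward a tiny
    angle, repeat. Intuitive, but only ever a polygonal approximation."""
    x, y, heading = r, 0.0, math.pi / 2   # start at (r, 0), facing "up"
    step = 2 * math.pi * r / steps        # arc length covered per move
    turn = 2 * math.pi / steps            # constant inward turn
    points = [(x, y)]
    for _ in range(steps):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        heading += turn
        points.append((x, y))
    return points

def on_cartesian_circle(x, y, r):
    """Cartesian: x^2 + y^2 = r^2 tests a point; it doesn't produce one."""
    return math.isclose(x * x + y * y, r * r, abs_tol=1e-9)

def polar_circle(r, steps=360):
    """Polar: the whole circle is just r = 3, with theta ranging freely."""
    return [(r * math.cos(2 * math.pi * k / steps),
             r * math.sin(2 * math.pi * k / steps)) for k in range(steps)]

# Every polar point satisfies the Cartesian equation;
# the turtle's walk merely comes close.
assert all(on_cartesian_circle(x, y, 3) for x, y in polar_circle(3))
```

Note how the description shrinks as the abstraction improves: from a loop of motor commands, to an equation, to a single number.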

Compression is one of the best tools we have for fighting complexity. By definition, compression hides the meaningless while showing the meaningful. It’s a continuous spectrum, on which sits a point I’ll call critical complexity. Critical complexity is the threshold above which a significant abstraction infrastructure is necessary. But that definition doesn’t mean much to you — yet.

Think of knowledge as terrain. To get somewhere, we build roads, which in our metaphor are abstraction. Roads connect to each other, and take us to new places. It was trivial to abstract Cartesian coordinates into polar by means of conversions. This is like building a road, with one end connecting to the existing street grid and another ending somewhere new. It’s trivial to represent a circle in polar coordinates. This is what we do at the newly accessible location. We’ve broken a non-trivial problem into two trivial pieces – although it wasn’t a particularly hard problem, as otherwise we wouldn’t have been able to do that.

Delivering these words to your machine is a hard problem. You’re probably using a web browser, which is written in software code, which is running on digital electronics, which are derived from analog electronics obeying Maxwell’s equations, and so on. But the great thing about abstractions is that you only need to understand the topmost one. You can work in polar coordinates without converting back to Cartesian, and you can use a computer without obtaining multiple engineering degrees first. You can build your own network of roads about how to operate a computer, disconnected from your road network about physics.

Or perhaps not disconnected, but connected by a tunnel through the mountain of what you don’t understand. A tunnel is a way to bypass ignorance to learn about other things based on knowledge you don’t have, but don’t need. Of course, someone knows those things – they’ve laboriously built roads over the mountain so that you can cruise under it. These people, known as scientists and engineers, slice hard problems into many layers of smaller ones. A hard problem may have so many layers that, even if each is trivial on its own, they are non-trivial collectively. That said, some problems are easier than they look because our own sensemaking abstractions blind us.

If you want to write an analog clock in JavaScript, your best bet is to configure someone else’s framework. That is, you say you want a gray clockface and a red second hand, and the framework magically does it. The user, hardly a designer, is reduced to muttering incantations at a black box, hoping the spell will work as expected. Inside the box are some 200 lines of code or more, most of them spent on things not at all related to the high-level description of an analog clock. The resulting clock is a cul-de-sac at the end of a tunnel, overlooking a precipice.

By contrast, the nascent Elm language provides a demo of the analog clock. Its eight lines of code effectively define the Kolmogorov complexity: each operation is significant. Almost every word or number defines part of the dynamic drawing in some way. To the programmer, the result is liberating. If you want to change the color of the clockface, you don’t have to ask the permission of a framework designer, you just do it. The abstractions implicit in Elm have pushed analog clocks under the critical complexity, which is the point above which you need to build a tunnel.
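The irreducible core of an analog clock really is that small. A sketch in Python (the helper names are mine; Elm’s actual demo is written differently): map the time to three hand angles, and each angle to a line endpoint. Everything else a clock program does is drawing and plumbing.

```python
import math

def hand_angles(hours, minutes, seconds):
    """Rotation of each hand in radians, measured clockwise from 12.
    The minute and hour hands sweep continuously, as on a real clock."""
    sec = 2 * math.pi * seconds / 60
    mnt = 2 * math.pi * (minutes + seconds / 60) / 60
    hr = 2 * math.pi * ((hours % 12) + minutes / 60 + seconds / 3600) / 12
    return hr, mnt, sec

def hand_tip(angle, length):
    """Endpoint of a hand of the given length, with 12 o'clock pointing up."""
    return (length * math.sin(angle), length * math.cos(angle))
```

At 3:00:00 the hour hand points due right: `hand_tip(hand_angles(3, 0, 0)[0], 1.0)` is (1.0, 0.0) up to rounding. In a language whose abstractions fit the problem, these few lines are nearly the whole program.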

There’s still a tunnel involved, though: the compiler written in Haskell that converts Elm to JavaScript. But this tunnel is already behind us when we set out to make an analog clock. Moreover, this tunnel leads to open terrain where we can build many roads and reach many places, rather than the single destination offered by the framework. What’s important isn’t the avoidance of tunnels, but of tunnels to nowhere. Each abstraction should have a purpose, which is to open up new terrain where abstractions are not needed, because getting around is trivial.

However, the notion of what’s trivial is subjective. It’s not always clear what’s a road and what’s a tunnel. Familiarity certainly makes any abstraction seem simpler. Though we gain a better grasp on an abstraction by becoming familiar with it, we also lose sight of the underlying objective nature of abstractions: some are more intuitive or more powerful than others. Familiarity can be born both of understanding where an idea comes from and how it relates to others, and of practicing using the idea on its own. I suspect that better than either one is both together. With familiarity comes automaticity, where we can quickly answer questions by relying on intuition, because we’ve seen them or something similar before. But depending on the abstraction, familiarity can mean never discarding naïveté (turtle), contorting into awkward mental poses (Cartesian) – or achieving something truly elegant and powerful.

It’s tempting to decry weak or crippling abstractions, but they too serve a purpose. Like the fancy algorithms that are slow when n is small, fancy abstractions are unnecessary for simple problems. Yes, one should practice using them on simple problems so as to have familiarity when moving on to hard ones. But before that, one needs to see for oneself the morass that weak or inappropriately chosen abstractions create. Powerful abstractions, I am increasingly convinced, cannot be constructed on virgin mental terrain. For each individual, they must emerge from the ashes of an inferior system that provides both the experience and the motivation to build something stronger.

### Abstraction and Standardization

What is the future of art? What media will it use? Computers, obviously. Information technology is very good at imitating old media: drawing programs, music programs, word processors designed for playwrights or authors. But none of these tap into the intrinsic strength of the computer, which can do something no other medium can: simulate. Bret Victor, the man so demanding of user interfaces that he left Apple, is dissatisfied with the tools available to artists who want to simulate. So he made his own, and gave a one-hour talk on them.

Those interested should definitely take the time to watch it, but to summarize: he demonstrates the power of simulation in creating art that is part animation and part performance, with the human and computer reacting to one another. He then lifts the curtain and shows us the tools he used to simulate the characters in the scene, and it’s not code. Instead, it’s a drawing program, with lines and shapes, that he uses to define behavior. Code, he points out, is based on algebra, but his system is based on geometry. Finally, he concludes with a short performance that he built with these tools. Higher is the story of Earth, from the stars to cells to civilization to space travel and back to the stars.

What blew my mind about Higher is that a few years ago, I had independently created a short film on exactly that topic, with exactly the same background music (Kyle Gabler’s Best Of Times from World of Goo). Victor’s piece was far more polished, but we had both been inspired by the same music to express the same idea, the journey of life to the stars. Remember when I complained about not finding people who shared my narrative? So this is what that feels like.

What drove Victor to create his tools was the belief that art is an attempt to communicate that which cannot be put into words. By binding simulation to lingual code, we make it inaccessible and unsuitable for art and artists. Direct manipulation of the art, which is how art has been created going back to cave paintings, allows the artist to interact with and lend emotion to the art in ways not possible through code’s layer of indirection, of abstraction.

The reason artists’ needs have been neglected by developers is that, for the rest of the world, code works just fine. As I’ve previously blogged, language is one of humankind’s most powerful inventions. The direct manipulation that is liberating to the artist is confining to the engineer. Language is how we manage many layers of abstraction at once; without it we are reduced to pointing and grunting. It’s harder to communicate with a computer in code than a well-designed direct manipulation interface, but code is more powerful. In the sciences, a good result is consistent with what is already known; in art, a good piece is unexpected and shakes our established worldview. More fundamentally, the sciences observe and record some objective outside truth; art looks inward to offer one of many interpretations of the subjective human experience.

This tension that we see between science and art also shows up in schools. In a recent TED talk, Sir Ken Robinson extols diversity as a fundamental human trait, one which schools attempt to erase and replace with standardization. We agree that standardization has its place, but I personally think he downplays its importance. Standardization is writing, is language; those things can’t happen without common ways of thinking. At first, children need to explore concepts and use their own terms, without a top-down lesson plan imposed by school administrators. Nevertheless, the capstone is always learning what the rest of the world calls it. That isn’t smashing creativity, but rather empowering the child to learn more about the topic from others and from reference sources. It’s creating a minimum level of knowledge common to every adult member of society, which is assumed by all media. Being able to communicate facts with others isn’t just the result of education; it’s what makes education possible in the first place. With language, groups of people can unambiguously refer to things not present: a shared imagination. Verbalization is a form of abstraction.

Let’s get back to the role of diversity in school. Students should be able to explore what interests them, but the converse is not true: some topics must be taught to everyone, even if some people do not find them interesting. This is especially true before high school. I know you’re not passionate about fractions, Little Johnny, but you need to learn them. Society expects everyone to have a minimum level of competence in every subject. Additionally, passion for a field isn’t always “love at first sight”. The future mathematician isn’t always the first in the class to get basic arithmetic.

Although the curriculum needs to be largely standardized, the pedagogy does not. The neglect of diversity in schools is most heavily felt not in what kids are or are not learning, but in how they are learning it. The inflexibility imposed on lesson plans is degrading to teachers and failing our kids. Teachers should be trusted to adapt lessons to their class, and empowered with testing results they find useful, early enough to use them. Standardized testing as it exists today does not fit the bill. Every student needs to achieve the same core competencies, but the paths to doing so will be as diverse as the children themselves. A broad exposure to both methods and topics promotes the development not just of knowledge, but of personality and identity. The reason to have art in school isn’t to improve test scores but because it’s part of being human.

To be more precise, we should distinguish between “the arts” and “art”. The arts are how to create with the media classically used for art: paint, music, poetry, drama, dance, and so on. Like any other discipline, the arts require a standardized language to record and transfer this knowledge. Sometimes it’s plain English, sometimes it’s jargon, sometimes it’s symbols, but it’s still an agreed-upon abstraction. Diversity of ideas expressed in the language is inventive and healthy; diversity of the language itself is nonstandard and chaotic. With this in mind, the arts take their place at one end of a spectrum of knowledge: mathematics, natural science, social science, and history. And the arts.

But art is something entirely different. It is the personal and emotional perception of an experience that communicates without words. Art is direct and concrete; it is subjective and sublime. Much of the arts attempt to create art. Victor’s tools advance the arts; what he creates with them is art.

It’s a defensible position to say that art, because it does not rely on language as all the other fields of knowledge do, is not knowledge at all. But I’ll indulge Victor and say that not all knowledge can be verbalized. That doesn’t mean that art is beyond classification; Victor and I saw the same artistic ideas in the same piece of lyricless music. Conversely, just because something is written down doesn’t mean it’s standardized or useful knowledge. Recently, the mathematics community has been bewildered by an inscrutable set of papers which claim to prove a fundamental piece of number theory. No one can decipher them to tell if the proof is valid, and their author has not been forthcoming with an oral explanation. So in extreme cases, the analogy between language and standardization breaks down. The wordless expression is more coherent than words.

For all the knowledge that abstract language has brought us, ineffable art remains part of the human experience. It is important for our children to learn about art to become mature and thoughtful adults. It is equally important for us to provide tools that support the nonverbal side of thought, to engage the visual and auditory parts of our brains in ways words never can. Neglecting either is the same failure: the refuge in abstraction, the desire to have everything neat and orderly and predictable. Art exists to explore ambiguity and paradox; it does not demand simple answers but asks complex questions.

A lot of futurists imagine a time when technology makes everything easy. There is a faith in technological convergence, where everything speaks the same language and interacts intelligently and flawlessly. But historically we see technologies become incompatible. If there’s an open standard underneath, such as email, you still get dozens of providers and clients; and if there’s not, you get the walled gardens of social media, loosely tied together by third-party “integration”. What’s important to realize is that the path of technology is not fixed. Our gadgets don’t have to make us more productive and connected; they can make us more artistic and provide privacy, if we design them so. We should stop aspiring to a monoculture of technology because, not only will it not happen for technical and economic reasons, it shouldn’t happen. Standardized technology leads to standardized thinking, especially when coupled with standardized social institutions. Creativity drives not only technology, but art and humanity as well.

### This Is You: Agency in Education

This is the opening of the ambient puzzle game Osmos, by Hemisphere Games. “This is you,” is all it says, as if you’ve always been a glowing blue orb. Most games start by introducing the player to their avatar, but it’s usually a human character with a backstory. Puzzle games are an exception: they rarely give the player an avatar at all. Normally you play an unacknowledged manipulator of abstract blocks according to fixed rules and patterns. Osmos is an exception to the exception.

Osmos also makes masterful use of player training and learning curve. It begins in the simplest possible setting with the simplest possible task: “Move to the blue circle and come to a stop.” You accelerate by ejecting mass, which propels the rest of you in the opposite direction. The game tells you these things in the same order I relayed them to you: first the objective, then the means. Osmos could have said, “Hey, look how you can move by ejecting mass! Now use this ability to move to this outlined circle.” But it didn’t. The progression is guided, focused, and objective-based, especially at first. The levels build on each other, reinforcing knowledge from previous levels as the player gains experience.
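That first mechanic is just conservation of momentum. Here is a minimal sketch of the recoil principle (an illustrative model, not Osmos’s actual physics code):

```python
def eject(mass, velocity, ejected_mass, ejection_velocity):
    """Return the orb's new (mass, velocity) after ejecting a blob.

    Momentum is conserved:
      mass * velocity == (mass - ejected_mass) * v_new
                         + ejected_mass * ejection_velocity
    """
    remaining = mass - ejected_mass
    v_new = (mass * velocity - ejected_mass * ejection_velocity) / remaining
    return remaining, v_new

# Ejecting mass backwards (negative velocity) pushes the orb forwards,
# at the cost of shrinking it.
m, v = eject(10.0, 0.0, 1.0, -5.0)
```

The trade-off the game teaches implicitly is visible in the arithmetic: every burst of speed makes you smaller.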

Impasse 1 In a rare moment of explanation, Osmos introduces players to the idea of using ejected mass to move the red motes out of the way so they can get to the blue ones.

Impasse 2 Immediately afterwards, players are asked to apply that principle in a puzzle that looks harder than it is.

Osmos presents players with the Odyssey, a sequence of levels that introduce gameplay concepts in a logical order. The Odyssey runs from the tutorial described above up through medium-difficulty levels. After that, players gain access to a Sandbox mode where they can explore different game types at different difficulties. That is, a level of Osmos is distinguished not only quantitatively, by difficulty, but qualitatively, by the kinds of game mechanics and obstacles found in it. More fundamentally, Osmos is played in discrete levels that can be won, lost, restarted, and randomized, rather than as an endless arcade of increasing difficulty like Bejeweled. Players can skip to and play any level they have unlocked at will; a session of Osmos can last three minutes or three hours.

Players are incentivized to complete the Odyssey and get as far as they can in the Sandbox, but there’s no climactic end of the game. No explanation is given for why some levels seem to take place in a petri dish and others in orbit around a star; it’s wonderfully abstract in that regard. It’s impossible to “win” and there’s no victory cutscene. It’s neither so boring and open-ended that you don’t want to play nor so scripted that you only want to play once. There are achievements (badges) awarded, but they seem extraneous to me.

And now to the point of all this: what can we learn from Osmos when designing software for education?

By the structure of its gameplay and incentives, Osmos lends itself to the sporadic and time-limited technology access found in many schools. Instead of leaving behind students who didn’t win the game, or trying to pry away a child who’s “gotten far” from the computer or tablet, it’s easier to take a break from Osmos. Meanwhile, the nature of gameplay means that it’s very much a solitary experience, a personal journey of discovery. For all the hype given to social gaming over the last few years, it’s not conducive to deep thinking.

And yes: agency, in the sense of being a specific agent. In Osmos, the player is someone, or at least something: this is you. As I’ve said, most puzzle games don’t give the player anything to latch on to. Neither does formal arithmetic, nor algebra. Symbol manipulation provides no agency. It forces mathematics into the third-person perspective, which is unnatural from a human’s point of view. When I played with blocks as a child I would often imagine an ant climbing and exploring my structures. Pen-and-paper mathematics allows the mathematician to move blocks around, but not to be an ant inside his or her own creation.

Seymour Papert developed LOGO to provide children with agency. LOGO is a cross between a game and a programming language. Players manipulate the “Turtle” by telling it to turn left or right or move forward, where forward is relative to how it is turned. When children first encounter difficulties, they are told to “play turtle”. By moving their own bodies through space, they are able to debug and refine their program. And by thinking how they move their own bodies through space, they are given a tangible grip on both computation and thinking.
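The Turtle’s state fits in a few lines. The sketch below is a toy model of that state, with method names echoing LOGO’s FORWARD/RIGHT/LEFT, not LOGO itself:

```python
import math

class Turtle:
    """A minimal turtle: a position plus a heading, nothing more."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 90.0  # degrees; the turtle starts facing "up"

    def forward(self, distance):
        # Move in whatever direction the turtle currently faces:
        # every command is relative to the turtle's own state.
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)

    def right(self, angle):
        self.heading -= angle

    def left(self, angle):
        self.heading += angle

# Drawing a square: a child can "play turtle" by walking these four
# steps with their own body, which is exactly how they debug it.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)
```

After four sides and four right turns, the turtle is back where it started, a fact the child can verify by pacing out the square on the floor.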

Scratch was developed at the MIT Media Lab, which was co-founded by Papert. Scratch, though very much a descendant of LOGO, adds more commands to increase the user’s power and control. Many of the commands were discussed by Papert in his book Mindstorms or seem to be reasonable extensions of it. Others (thought balloons, sounds, arithmetic, color effects) are superfluous. Still others, like if-else statements, while loops, and Boolean operations, are taken from the nuts and bolts of programming. This comes at the cost of downplaying the two high-level skills which Papert thought were so vital to learning any subject: subprocedures and debugging. With LOGO, children learned to compartmentalize knowledge into reusable pieces, and to make incremental improvements until they saw the results they wanted.

One of LOGO’s defining characteristics was its limited set of commands, which are relative to the current position and heading of the Turtle. Osmos players can eject mass in any direction, but nothing more. In both cases, artificial scarcity of control forces users to think in a particular way. On the other hand, Scratch freely mixes LOGO-style “move forward” with Cartesian commands, both relative (“move up by”) and absolute (“move to”). It’s impossible to have agency with something that can be teleported across the map. Rather than force the user out of lazy and weak ways of thinking, Scratch offers multiple paths and users take the one of least resistance. Often this will be a hodgepodge of many different styles and systems of commands, reflecting incomplete and imprecise thinking.
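The contrast can be made concrete. The command names below are illustrative stand-ins, not Scratch’s actual blocks:

```python
import math

# Relative, LOGO-style motion: the result depends on the agent's current
# position and heading, so every step requires thinking as the agent.
x, y, heading = 0.0, 0.0, 0.0             # facing along the +x axis
x += 50 * math.cos(math.radians(heading))  # "move forward 50"
y += 50 * math.sin(math.radians(heading))

# Absolute, teleport-style motion: prior state is irrelevant; the agent
# simply appears at the target, and no reasoning about the journey remains.
x, y = 50.0, 0.0                           # "move to (50, 0)"
```

Both snippets end at the same point, but only the first demands that the user inhabit the agent; the second lets them stay outside it.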

The large number of commands creates a cluttered and unintuitive interface: 78% of Scratch’s default interface is controls, while only 22% of it is the canvas. The results, the thing the user cares about, are squished into a corner. Osmos has minimal controls that disappear when not in use, leaving the entire screen as the portal into the game world. Moreover, Osmos has just enough visual detail to be eye candy and not clutter. Games in general have excellent usability, because bad usability is by definition not fun.

Scratch’s default user interface, with overlaid percentages. A similar image for Osmos would be 100% purple.

The differences in the command sets and user interfaces reflect the different purposes of the software. Scratch is meant to provide a canvas for a play or an animation, and so gives the user plenty of options for control. Osmos and LOGO are both puzzles in the sense that the controls are extremely few, yet powerful. A tool is designed to give a competent user maximum power to create; a puzzle is designed to teach new ways of thinking under constraints. By this metric, Scratch has more in common with the CAD software engineers use to design mechanical parts than it has with Osmos and LOGO.

But there is another feature that groups the three differently. Both LOGO and Scratch are sandboxes; they enforce no requirements or value judgements on the player’s actions. Papert envisioned a teacher guiding the student and keeping her on task. Osmos takes a different route. As a game, it has clear objectives to complete and obstacles to avoid. There are good moves and bad moves. There are levels, with definite beginnings and ends. The Odyssey is a long tutorial: it presents each feature and some advanced ideas before handing the player full control. Scratch and LOGO hand over full control as soon as they’re opened. In particular, Scratch provides no guidance on its cockpit’s worth of controls.

There is a misconception, common among edtech types but not among traditional teachers, that the answer to all problems is better distribution. People are ignorant because they don’t have access to knowledge. People can’t code because they don’t have access to the software and documentation. But this is simply not true. Give people tools and they won’t know what to do with them or how to use them. Instead, we need to give students of all ages training, knowledge, and understanding. We need to force students to think about wrong ideas and make them right, and to see why they are right. We need to show students the metacognitive tools to solve problems. An educational game isn’t about what to think, but how to think.

Now read the follow-up post: Beyond Agency: Why Good Ideas Only Take Us So Far.

### Internet Idea Books: Roundup, Review, and Response

What Technology Wants (Kevin Kelly, 2010) is a sweeping history of technology as a unified force, which Kelly calls “the technium”. Kelly starts slowly, drawing ever larger circles around human history, biological evolution, and the formation of planet Earth from starstuff. His scope, from the Big Bang to the Singularity, is unmatchable. But the purpose of this incredible breadth is not readily apparent, and isn’t for the first half of the book, as Kelly talks about everything but technology. I advise the reader to sit back and enjoy the ride, even if it covers a lot of familiar ground.

In one of several chapters on evolution, Kelly argues that the tree of life is not random, but instead is constrained by chemistry, physics, geometry, and so on. He points to many examples of convergent evolution, where the same “unlikely” anatomical feature evolved multiple times independently. For example, both bats and dolphins use echolocation, but their common ancestor did not. Kelly is careful to attribute this phenomenon to the constraints implicit in the system, not to supernatural intelligence. He argues that, in the broadest strokes, evolution is “preordained” even as the details are not.

Kelly begins the next chapter by noting that evolution itself was discovered by Alfred Russel Wallace independently of, and concurrently with, Charles Darwin. This becomes his segue into convergent invention and discovery; he insists that the technium should be regarded as an extension of life, obeying most of its rules, with human decision replacing natural selection. Technology becomes an overpowering force that gives up adaptations about as willingly as animals devolve (which is to say, not very).

The premise that technology extends life becomes central to Kelly’s predictions. He paints a grandiose picture of technologies as varied and awe-inspiring as the forms of life, each opening ever more opportunities in an accelerating dance of evolution. “Extrapolated, technology wants what life wants,” he claims, and lists the attributes technology aspires to. Generally speaking, Kelly predicts technological divergence, where your walls are screens and your furniture thinks, and the death of inert matter. Like the forms of life, technology will specialize into countless species and then become unnoticed, or even unnoticeable.

Much of what Kelly predicts has already happened for passive technologies. We don’t notice paper, government, roads, or agriculture. But I don’t think that information technology will achieve the same saturation. No matter how cheap an intelligent door becomes, a non-intelligent version will be cheaper still, and has inertia behind it. Kelly claims that such resistance can only delay the adoption of technology, not prevent it. Nevertheless, something about Kelly’s book disturbed me. It was wrong, I felt, but I couldn’t articulate why. So I read a trio of books that take a more cautious view of information and communication technologies. As I read, I asked of them: what has the internet taken from us, and how do we take it back? Continue reading

### How to save the world

The end of World War I was a bad time to be an optimist. It wasn’t that millions of young men had died or that western Europe had been transfigured into a hellish bombed-out landscape, although that was certainly true. It was the inescapable philosophical consideration that civilization had done this to itself. The “progress” of the industrial revolution and German unification led inexorably to total war. Civilization itself was fundamentally flawed and unsustainable; the only alternative was to admit Rousseau was right and go back to the trees.

Of course, that’s not what happened, and twenty years later they were at it again. The technology changed dramatically, but it didn’t change the fact that people were still killing each other, only how they did it. The changes that mattered were the social institutions built afterwards. Instead of the outrageous reparations in the Treaty of Versailles, there was the conciliatory Marshall Plan. Instead of the League of Nations, there was the United Nations. It wasn’t technological improvements that saved lives and improved the quality of living after the war. It was the people, with their resiliency, their forgiveness, and their intent not to make the same mistake twice.

We now find ourselves, once again, on the brink of destruction: not destruction by military means, but by economic and environmental ones. Natural resources are being depleted faster than they can be renewed, if they can be renewed at all. Industrialization has spread concrete, steel, and chemicals across previously untouched land. The established political institutions are being challenged by forces as diverse as the Arab Spring and the Occupy movement. The economy is still largely in shambles. And then there’s the small matter of climate change. And so on. We’ve heard it all before. At TED 2012, this grim view was presented by Paul Gilding (talk, follow-up blog post). He’s pretty blunt about it: the earth is full.

Around a third of the world lives on less than two dollars a day. They have dramatically different cultures, education, living conditions, and access to technology from the typical American or European. You honestly think that they’re the ones who are going to fix the problems? That people who are illiterate, innumerate, and don’t know where their next meal is coming from are going to fix climate change?

Depending on your answer, I have two different responses. I’ll give both of them, but you might want to think about it first. Continue reading