Brief Thoughts On Scratch

Previously, I’ve lambasted the children’s programming language Scratch for its cockpit’s worth of controls. This encourages its users to try anything and see what works, rather than plan, predict, and understand exactly what each piece of code is doing. It’s instant gratification … and a tight feedback loop.

Scratch is not a tool to learn programming or metacognition; Scratch is a tool to create artistic displays that could not otherwise be created (by children). Scratch thus allows children to explore ideas not related to mathematics or programming. They have creative freedom, much like art class. And what elementary schooler produces anything particularly good, objectively speaking, in art class? So don’t judge the Scratch projects too harshly.

Scratch is a social platform, except that the socialization happens in real life. Get a few kids in a room using it, and they’ll share both creations and code, motivate each other, and change goals on the fly. This differs from more mature programming, where one has a specific goal in mind. The other key difference is that most languages discourage straight-up experimentation; one has to know what one is doing in order to do it. Scratch reverses this: a kid can learn what a command does through using it. This is because all the commands are displayed, ready to be used.

Not only displayed, but also labelled, unlike the Khan Academy programming language that drops down four numbers with no context. Scratch slyly introduces relative and absolute motion – move up by, move to – in a way that lets kids work out the rules. No, they won’t work out all the rules, but I think they’ll come to fewer incorrect conclusions (misconceptions) in a reactive medium than with marks on paper. They will figure it out later, much later.

Scratch is a way to put Lego bricks into the bucket. The kid will reassemble them into many different knowledge structures over the years before creating something strong and beautiful – an educated mind. It’s during that process, that struggle, that they can learn to program with planning and expressiveness, rather than tacking on bricks ad-hoc. It’s a stage everyone goes through, and Scratch can help a child make the most of it. But don’t confuse acquiring bricks with figuring out how to assemble them.

This isn’t to say that Scratch as it exists is perfect; far from it. We need to keep rethinking what tools are best to give to our children (and adults, for that matter). But I’m backing off my previous stance that guided minimalism is the answer. (Or is my new “wait and let them figure it out” view too fatalist?)

Critical Complexity

Here’s a task for you: draw a circle radius three around the origin.

What system do you use? Well, you could use an intuitive system like Papert’s turtle. Walk out three, turn ninety degrees, and then walk forward while turning inward. By identifying as a specific agent, you take advantage of having a brain that evolved to control a body. If it doesn’t seem intuitive, that’s because you’ve been trained to use other systems. Your familiarity is trumping what comes naturally, at least to children.

You’re probably thinking in Cartesian coordinates. You may even recall that x^2 + y^2 = 3^2 will give you the circle I asked for. But that’s only because you memorized it. Why this formula? It’s not obvious that it should be a circle. It doesn’t feel very circular, unless you fully understand the abstraction beneath it (in this case, the Pythagorean theorem) and how it applies to the situation.

Turtle geometry intuitively fits the human, but it’s limited and naive. Cartesian geometry accurately fits your monitor or graph paper, the technology, but it’s an awkward way to express circles. So let’s do something different. In polar coordinates, all we have to say is r=3 and we’re done. It’s not a compromise between the human and the technology; it’s an abstraction – doing something more elegant and concise than either native form. Human and technology alike stretch to accommodate the new representation. Abstractions aren’t fuzzy and amorphous. Abstractions are crisp, and stacked on top of each other, like new shirts in a store.
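To make the contrast concrete, here’s a quick sketch – my own illustration in Python with matplotlib, not something from the original post – of the same circle described both ways. The Cartesian axes still need a parameterization to trace the curve; the polar axes need nothing beyond r = 3:

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 200)

# Cartesian: the circle x^2 + y^2 = 3^2, traced via a parameterization.
ax1 = plt.subplot(1, 2, 1)
ax1.plot(3 * np.cos(theta), 3 * np.sin(theta))
ax1.set_aspect("equal")
ax1.set_title(r"$x^2 + y^2 = 3^2$")

# Polar: the entire description is r = 3.
ax2 = plt.subplot(1, 2, 2, projection="polar")
ax2.plot(theta, np.full_like(theta, 3.0))
ax2.set_title(r"$r = 3$")

plt.show()
```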

We’ve invented notation that, for this problem, compresses the task as much as possible. The radius is specified; the fact that it’s a circle centered on the origin is implicit in the conventional meaning of r and the lack of other information. It’s been maximally compressed (related technical term: Kolmogorov complexity).

Compression is one of the best tools we have for fighting complexity. By definition, compression hides the meaningless while showing the meaningful. It’s a continuous spectrum, on which sits a point I’ll call critical complexity. Critical complexity is the threshold above which a significant abstraction infrastructure is necessary. But that definition doesn’t mean much to you — yet.

Think of knowledge as terrain. To get somewhere, we build roads, which in our metaphor are abstractions. Roads connect to each other, and take us to new places. It was trivial to abstract Cartesian coordinates into polar by means of conversions. This is like building a road, with one end connecting to the existing street grid and the other ending somewhere new. It’s trivial to represent a circle in polar coordinates. This is what we do at the newly accessible location. We’ve broken a non-trivial problem into two trivial pieces – although it wasn’t a particularly hard problem, as otherwise we wouldn’t have been able to do that.
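To spell out that conversion step: substituting x = r\cos\theta and y = r\sin\theta into x^2 + y^2 = 3^2 gives

r^2\cos^2\theta + r^2\sin^2\theta = r^2 = 3^2

so r = 3; the Pythagorean identity does the collapsing, which is exactly the abstraction the Cartesian formula was hiding.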

Delivering these words to your machine is a hard problem. You’re probably using a web browser, which is written in software code, which is running on digital electronics, which are derived from analog electronics obeying Maxwell’s equations, and so on. But the great thing about abstractions is that you only need to understand the topmost one. You can work in polar coordinates without converting back to Cartesian, and you can use a computer without obtaining multiple engineering degrees first. You can build your own network of roads about how to operate a computer, disconnected from your road network about physics.

Or perhaps not disconnected, but connected by a tunnel through the mountain of what you don’t understand. A tunnel is a way to bypass ignorance: a way to learn about other things that rest on knowledge you don’t have, but don’t need. Of course, someone knows those things – they’ve laboriously built roads over the mountain so that you can cruise under it. These people, known as scientists and engineers, slice hard problems into many layers of smaller ones. A hard problem may have so many layers that, even if each is trivial on its own, they are non-trivial collectively. That said, some problems are easier than they look because our own sensemaking abstractions blind us.

If you want to write an analog clock in JavaScript, your best bet is to configure someone else’s framework. That is, you say you want a gray clockface and a red second hand, and the framework magically does it. The user, hardly a designer, is reduced to muttering incantations at a black box hoping the spell will work as expected. Inside the box is some 200 lines or more, most of it spent on things not at all related to the high-level description of an analog clock. The resulting clock is a cul-de-sac at the end of a tunnel, overlooking a precipice.

By contrast, the nascent Elm language provides a demo of an analog clock. Its eight lines of code effectively define the Kolmogorov complexity: each operation is significant. Almost every word or number defines part of the dynamic drawing in some way. To the programmer, the result is liberating. If you want to change the color of the clockface, you don’t have to ask the permission of a framework designer; you just do it. The abstractions implicit in Elm have pushed analog clocks under the critical complexity, the point above which you need to build a tunnel.
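I won’t reproduce the Elm demo here, but as a rough sketch of the same spirit – this is my own toy version in Python’s turtle module, not Elm – notice how nearly every line maps to a visible feature of the clock:

```python
import time
import turtle

def hand(length, fraction):
    # One hand: a line from the center, `fraction` of a full turn past 12.
    turtle.setheading(90 - 360 * fraction)  # 90 = straight up; clockwise from there
    turtle.forward(length)
    turtle.backward(length)                 # return to the center

turtle.hideturtle()
turtle.tracer(0)                            # redraw manually, once per tick
while True:
    turtle.clear()
    now = time.localtime()
    hand(60, (now.tm_hour % 12) / 12 + now.tm_min / 720)  # hour hand
    hand(90, now.tm_min / 60)                             # minute hand
    hand(110, now.tm_sec / 60)                            # second hand
    turtle.update()
    time.sleep(1)
```

The point isn’t the particular library; it’s that when the description is close to maximally compressed, changing the clock means changing the line that obviously corresponds to the thing you want changed.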

There’s still a tunnel involved, though: the compiler written in Haskell that converts Elm to JavaScript. But this tunnel is already behind us when we set out to make an analog clock. Moreover, this tunnel leads to open terrain where we can build many roads and reach many places, rather than the single destination offered by the framework. What’s important isn’t the avoidance of tunnels, but of tunnels to nowhere. Each abstraction should have a purpose, which is to open up new terrain where abstractions are not needed, because getting around is trivial.

However, the notion of what’s trivial is subjective. It’s not always clear what’s a road and what’s a tunnel. Familiarity certainly makes any abstraction seem simpler. Though we gain a better grasp on an abstraction by becoming familiar with it, we also lose sight of the underlying objective nature of abstractions: some are more intuitive or more powerful than others. Familiarity can be born both of understanding where an idea comes from and how it relates to others, and of practicing using the idea on its own. I suspect that better than either one is both together. With familiarity comes automaticity, where we can quickly answer questions by relying on intuition, because we’ve seen them or something similar before. But depending on the abstraction, familiarity can mean never discarding naïveté (turtle), contorting into awkward mental poses (Cartesian) – or achieving something truly elegant and powerful.

It’s tempting to decry weak or crippling abstractions, but they too serve a purpose. Like the fancy algorithms that are slow when n is small, fancy abstractions are unnecessary for simple problems. Yes, one should practice using them on simple problems so as to have familiarity when moving into hard ones. But before that, one needs to see for oneself the morass weak or inappropriately-chosen abstractions create. Powerful abstractions, I am increasingly convinced, cannot be constructed on virgin mental terrain. For each individual, they must emerge from the ashes of an inferior system that provides both experience and motivation to build something stronger.

As We May Have Thought

Vannevar Bush wanted to build machines that made people smarter. His 1945 paper, As We May Think, described analog computers that captured and stored information, the earliest vision of today’s internet. All of Bush’s hopes for chemical photography have been surpassed by today’s digital cameras, and digital storage media are more compact than the most hopeful predictions of microfilm. He also predicts dictation, and though today’s software does a passable but not perfect job, it has not reached the level of ubiquity Bush envisioned. He is also wrong about the form factor of cameras, predicting a walnut-sized lens mounted like a miner’s lamp. The result is similar to Google Glass, and no other product:

One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together.

As for selecting information from the ensuing gigantic database, Bush posits the “Memex”, a desk with displays built into it. The Memex is personal and built on connections, much like the mind: its user links pieces of information together into trails for later examination.

The late Douglas Engelbart expanded on the purely hypothetical Memex with NLS, short for oNLine System. In “the mother of all demos”, he showed how users traverse and manipulate trees of data, with rich transclusion of content. Unlike the Memex, NLS made real-time sharing possible by way of video chat. Like the Memex, it was primarily text, and the user-facing component was the size of a desk.

And yet … Bush and Engelbart’s systems are not psychologically or sociologically realistic. Though Bush was writing in 1945, his vision seemed Victorian: a facade of proper intellectualism with no grounding in the less dapper side of human nature. One can hardly imagine multimedia beyond classical music and Old Master paintings emanating from the Memex. Bush used the effectiveness of Turkish bows in the Crusades as an example of what one could research on a Memex. He missed the target. The Memex and NLS were designed for a race of hyper-rational superhumans that does not exist.

The fictitious enlightened user would emphasize restraint, but today’s technology can, for all intents and purposes, do anything. The ability to do anything is less productive and more dangerous than it sounds. Intellectually, such a system encourages slapdash and incomplete thought. It does not force you to abandon weaker ways of thinking; it gives you no guidance towards what will work, what will work well, and what will not work at all. Sociologically, the availability of information on a scale beyond what Bush could have dreamed hasn’t made us an enlightened society. Having correct information about (for example) evolution readily available online has not made a dent in the number of people who read Genesis literally. And it gets worse.

Moore’s law may be the worst thing to happen to information technology. With computing so cheap and so ubiquitous, with the ability to do anything, we have overshot the island of scarcity inhabited by Bush and Engelbart and sailed into the land of social media, entertainment, and vice. The universal systems for the betterment of humanity have fallen to fragmented competitors in an open market. The emphasis on mobile these last six years has led to apps of reduced capability, used in bursts, on small screens with weak interaction paradigms. This is what happens when there’s more computing power in your pocket than Neil Armstrong took to the moon: we stop going to the moon.

Recreational computation is here to stay, but we may yet partially reclaim the medium. Clay Shirky is fond of pointing out that erotic novels appeared centuries before scientific journals. Analogously, we should not be deterred by the initial explosion of inanity accompanying the birth of a new, more open medium.

I can only hazard a guess as to how this can be done for recreational computing: teach the internet to forget. (h/t AJ Keen, Digital Vertigo) One’s entire life should not be online (contrary to Facebook’s Timeline – it’s always hard to change ideas when corporations are profiting from them). A good social website would limit the ways in which content can be produced and shared, in an attempt to promote quality over quantity. Snapchat is a promising experiment in this regard. There’s been talk of having links decay and die over time, but this seems like a patch on not having proper transclusion in the first place.

As for programming, though, the future is constrained, even ascetic. If Python embodies the ability to do anything, then the future is Haskell, the most widely-used [citation needed] functional programming language.

Functional programming is a more abstract way of programming than most traditional languages, which use the imperative paradigm. If I had to describe the difference between imperative programming and functional programming to a layperson, it would be this: imperative programming is like prose, and functional programming is like poetry. In imperative programming, it’s easy to tack on one more thing. Imperative code likes to grow. Functional code demands revision, and absolute clarity of ideas that must be reforged for another feature to be added. In functional languages, there are fewer tools available, so one needs to be familiar with most of them in order to be proficient. Needless to say, imperative languages predominate outside of the ivory tower. Which is a shame, because imperative languages blithely let you think anything.
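A toy contrast, in Python rather than Haskell (my own example, not from any of the sources above): summing the squares of the even numbers in a list.

```python
# Imperative: a recipe of statements; it's easy to tack one more thing into the loop.
def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Functional style: one expression built from smaller expressions; adding a
# feature means rethinking the expression, not appending a statement.
def sum_even_squares_functional(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_imperative([1, 2, 3, 4]))  # 20
print(sum_even_squares_functional([1, 2, 3, 4]))  # 20
```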

The problem with thinking anything is similar to doing anything: there’s no structure. But if we can’t think anything, then some good ideas may remain unthought. There is a tension between thinking only good ideas and thinking any idea. In theory at least, this is the relationship between industry and academia. While companies want to produce a product quickly, academia has developed programming paradigms that are harder to use in the short term but create more manageable code over time. These are all various ways of imposing constraints, of moving away from the ability to do anything. Maybe then we’ll get something done.

Powerful Ways of Thinking

My principle, v0.1

Powerful Ways of Thinking:

Emphasize the meaningful and hide the meaningless

See the world as
comprehensible, not mysterious
malleable, not fixed

Use abstraction to make the complex simple, but never deceptively or impenetrably so, and never make the simple complex

Define notation that balances elegant orthogonality with practical readability

Are defined not by dogma but by goals and methodologies that lead to knowledge

Conform to neither the workings of the machine nor the naiveté of the human

Must be discovered, taught, and practiced; they are not intuitive

Permit objective comparisons and contrasts on which arguments may be based

Build diverse and meritocratic communities that grow the best creations and prune the worst

Provide large rewards in the long-term, often at the cost of short-term loss

Are a vision of what should be, not of what already exists

Define their own limits; they do not pretend to be universally or perpetually valid

Technologies and institutions must selectively promote powerful ways of thinking.

The Top 5 Things Done Wrong in Math Class

Sorry to jump on the top-n list bandwagon, as Vi Hart deliciously parodies, but that’s just how this one shakes out. Some of the reasons why these things are done wrong are pretty advanced, but if you’re a high school student who stumbled upon this blog, please stay and read. Know that it’s okay that you won’t get everything.

All of these gripes stem from the same source: they obfuscate what ought to be clear and profound ideas. They’re why math is hard. Like a smudge on a telescope lens, these practices impair the tool used to explore the world beyond us.

EDIT: This list focuses on notation and naming. There are other “things” done wrong in math class that any good teacher will agonize over with far more subtlety and care than this or any listicle.

5. Function Composition Notation

Specifically f \circ g, which is the same as g(f(x)). No wait, f(g(x)). Probably. This notation comes with a built-in “gotcha”, which requires mechanical memorization apart from the concept of function composition itself. The only challenge is to translate between conventions. In this case, nested parentheses are ready-made to represent composition without requiring any new mechanistic knowledge. They exploit the overloading of parentheses for both order of operations and function arguments; just work outwards as you’ve always done. We should not invent new symbols to describe something adequately described by the old ones.

Nested parentheses lend themselves to function iteration, f(f(x)). Iterated functions are described using exponents, which play nice with the parens to make the distinction between f^2(x) = f(f(x)) and f(x)^2 = (f(x))^2 = f(x)f(x). This distinction becomes critical when we say arcsine aka \sin^{-1} (the inverse under composition) and cosecant aka \frac{1}{\sin} (the inverse under multiplication) are both “the inverse” of sine. Of course, things get confusing again when we drop the parens and get \sin^2 x = (\sin x)^2, because \sin x^2 already means \sin (x^2). This notation also supports first-class functions: once we define a doubling function d(x) = 2x, what is meant by d(f)? I’d much rather explore this idea, which is “integral” to calculus (and functional programming), than quibble over a symbol.
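Here’s the same set of ideas as code – a sketch of my own in Python, not anything from the post – where composition, iteration, and first-class functions all fall out of ordinary nesting:

```python
def compose(f, g):
    """Return the function x -> f(g(x)): nested parentheses made explicit."""
    return lambda x: f(g(x))

def iterate(f, n):
    """Return f applied n times, the f^n of exponent notation."""
    def repeated(x):
        for _ in range(n):
            x = f(x)
        return x
    return repeated

double = lambda x: 2 * x      # the doubling function d(x) = 2x
succ = lambda x: x + 1

print(compose(double, succ)(3))   # double(succ(3)) = 8
print(compose(succ, double)(3))   # succ(double(3)) = 7 -- order matters
print(iterate(double, 3)(1))      # double(double(double(1))) = 8

# First-class functions: d(f) only makes sense once functions are values.
# One reasonable reading is "double the output of f":
double_fn = lambda f: (lambda x: 2 * f(x))
print(double_fn(succ)(3))         # 2 * (3 + 1) = 8
```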

4. The Word “Quadratic”

I’m putting “quadratic” where it belongs: number four. The prefix quadri- means four in every other context, dating back to Latin. (The synonym tetra- is Greek.) So why is x^2 called “quadratic”? Because of a quadrilateral, literally a four-sided figure. But the point isn’t the number of sides, it’s the number of dimensions. And dimensionality is tightly coupled with the notion of the right angle. And since x equals itself, we’re dealing with not just an arbitrary quadrilateral but a right-angled one with equal sides, otherwise known as a square. So just as x^3 is cubic growth, x^2 should be called squared growth. No need for any fancy new adjectives like “biatic”, just start using “square”. (Adverb: squarely.) It’s really easy to stop saying four when you mean two.

3.14 Pi

Unfortunately, there is a case when we have to invent a new term and get people to use it. We need to replace pi, because pi isn’t the circle constant. It’s the semicircle constant.

The thrust of the argument is that circles are defined by their radius, not their diameter, so the circle constant should be defined off the radius as well. Enter tau, \tau = \frac{C}{r}. Measuring radians in tau simplifies the unit circle tremendously. A fraction times tau is just the fraction of the total distance traveled around the circle. This wasn’t obvious with pi because the factor of 2 canceled half the time, producing \frac{5}{4}\pi instead of \frac{5}{8}\tau.
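For instance, five eighths of the way around the circle is

\frac{5}{8}\tau = \frac{5}{8}\cdot 2\pi = \frac{5}{4}\pi

and only the tau form keeps the fraction of the circle visible.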

If you’ve never heard of tau before, I highly recommend you read Michael Hartl’s Tau Manifesto. But my personal favorite argument comes from integrating in spherical coordinates. Just looking at the integral bounds for a sphere of radius R:

\int_{\theta=0}^{2\pi} \int_{\phi=0}^{\pi} \int_{\rho=0}^{R}

It’s immediately clear that getting rid of the factor of two for the \theta (theta) bound will introduce a factor of one-half for the \phi (phi) bound:

\int_{\theta=0}^{\tau} \int_{\phi=0}^{\frac{\tau}{2}} \int_{\rho=0}^{R}

However, theta goes all the way around the circle (think of a complete loop on the equator). Phi only goes halfway (think north pole to south pole). The half emphasizes that phi, not theta, is the weird one. It’s not about reducing the number of operations, it’s about hiding the meaningless and showing the meaningful.

2. Complex Numbers

This is a big one. My high school teacher introduced imaginary numbers as, well, imaginary. “Let’s just pretend negative one has a square root and see what happens.” This method is backwards. If you’re working with polar vectors, you’re working with complex numbers, whether you know it or not.

Complex addition is exactly the same as adding vectors in the xy plane. It’s also the same as just adding the two real parts, then the two imaginary parts, and writing i afterwards. In this case, you might as well just work in R^2. (Oh hey, another use of exponents.) You can use the unit vectors \hat{x} and \hat{y}, rather than i and j, which will get mixed up with the imaginary unit – and besides, you defined that hat to mean a unit vector. Use the notation you define, or don’t define it.

Complex numbers are natively polar. Every high school student (and teacher) should read and play through Steven Wittens’s jaw-dropping exploration of rotating vectors. (Again students, the point isn’t to understand it all, the point is to have your mind blown.) Once we’ve defined complex multiplication – angles add, lengths multiply – then 1 \angle 90^{\circ} falls out as the square root of 1 \angle 180^{\circ} completely naturally. You can’t help but define it. And moreover, (1 \angle -90^{\circ})^2 goes around the other way, and its alternate representation (1 \angle 270^{\circ})^2 goes around twice, but they all wind up at negative one. Complex numbers aren’t arbitrary and forced; they’re a natural consequence of simple rules.
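You can check the “angles add, lengths multiply” rule numerically – a quick sketch of my own using Python’s cmath, not from the post or the linked article:

```python
import cmath
import math

def from_polar(length, degrees):
    # Build the complex number "length at `degrees`" from its polar form.
    return cmath.rect(length, math.radians(degrees))

i = from_polar(1, 90)           # 1 at 90 degrees
print(i * i)                    # ~ -1: two quarter turns reach 1 at 180 degrees

print(from_polar(1, -90) ** 2)  # ~ -1 as well, going around the other way
print(from_polar(1, 270) ** 2)  # ~ -1 again, going around twice
```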

Even complex conjugates work better with angles. Instead of an algebraic argument and a formula to memorize, we can geometrically see that we need to add an angle that brings us back to horizontal, which is just the negative of the angle we already have. This is mathematically equivalent to changing the sign on the imaginary component of the vector, but cognitively it’s very different. You can, with clarity and precision, see what you are doing in a way numerals can never express.

1. Boxplots

Boxplots make the top of the list because they’re taught at a young age and never challenged. They are presented as the standard way to visualize data, even though the boxplot is a relatively recent invention of a single statistician, John Tukey. Edward Tufte has proposed variants which dramatically reduce the ink on the page. They are much easier to draw, which is important when you want to convince children that math isn’t about meticulous marks on the page. They have no horizontal component, so in addition to being more compact, they also do not encode non-information in their width.

[Figure: Tufte’s reduced-ink boxplot variants]
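For a sense of the difference, here’s a side-by-side sketch – my own approximation of one reduced-ink variant, in Python with matplotlib, not a reproduction of Tufte’s figure:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=200)
lo, q1, med, q3, hi = np.percentile(data, [0, 25, 50, 75, 100])

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(5, 4))

ax1.boxplot(data)                 # the conventional Tukey box plot
ax1.set_title("box plot")

# Reduced-ink version: whisker lines, a gap where the box was, a dot at the
# median -- no box and no horizontal strokes.
ax2.vlines(1, lo, q1)
ax2.vlines(1, q3, hi)
ax2.plot(1, med, "k.")
ax2.set_xlim(ax1.get_xlim())
ax2.set_title("reduced-ink variant")

plt.show()
```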

Boxplots infuriate me because they instill the idea that there is one way to do it, and that it is not up for discussion. More time is spent on where to draw the lines than on why quartiles are important, or on how to read what a boxplot says about the data. Boxplots epitomize math as a recipe book, where your ideas are invalid by default and improvisation is prohibited. Nothing could be further from the truth. Moreover, boxplots slap a one-size-fits-all visualization on the data without bothering to ask what other things we could do with them. Tukey’s plots don’t just obscure the data, they obscure data science.

Visualizing Complexity

This post could be considered the third of three responding to Bret Victor’s trio of talks; the previous ones were Abstraction and Standardization and Media for Thinking the Eminently Thinkable.

Question: what makes a program complex? Answer: time.

There are multiple types of time. The obvious one, the one measured by stopwatches, I’ll call physical time. A small program promotes a tight feedback loop, where the effect of a change is visible in a few seconds. Large programs take longer to compile and run, as do those dealing with large amounts of data. To help offset this, programming languages have added threading and concurrency. An expensive task can be queued up, so it will happen at some point in the future. This sort of parallelism makes programs much harder to reason about and test.
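A minimal sketch of that queuing, in generic Python (my own illustration, not tied to any program discussed here):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def expensive(n):
    time.sleep(1)        # stand-in for real work
    return n * n

with ThreadPoolExecutor() as pool:
    future = pool.submit(expensive, 7)    # queued: it will finish "at some point"
    print("doing other work in the meantime...")
    print("result:", future.result())     # only here do we block and wait
```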

Then there’s logical time. This includes event-based programming, which usually involves a GUI. A user’s clicks and drags become events to which the system responds. Code is called when something happens, not when the thing before it ends. Rather than a procedural recipe racing towards the final answer, these programs are state machines, looping indefinitely and branching in response to an arbitrary sequence of events.
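A toy version of that shape, again my own illustration: the program is a loop that branches on whatever event arrives next, holding state between events.

```python
def run(events):
    selected = None                          # state that persists across events
    for kind, payload in events:             # in a real GUI this loop never ends
        if kind == "click":
            selected = payload
        elif kind == "key" and payload == "Escape":
            selected = None
        elif kind == "draw":
            print("drawing, selection =", selected)

run([("click", "circle"), ("draw", None), ("key", "Escape"), ("draw", None)])
```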

Finally, for completeness, there’s developer time. Memories fade, requirements change, people join or leave the project, the technology itself changes. Like people, code deteriorates as it ages, although measures can be taken to mitigate the decline. Any large codebase has annoying “legacy code” kept around for “historical reasons” and “backwards compatibility”.

In his talk Drawing Dynamic Visualizations, Bret Victor presents a drawing tool where the user creates a program by direct manipulation. This program can be used to create custom pictures for multiple datasets (different measurements of the same variables). The results feel very “PowerPoint-y”, the sort of thing you’d present in a boardroom or publish in a print scientific journal. The method of creation is new, but the results emulate old media.

If you’re reading this, Bret, and think I’m a bit obsessed: (1) thanks for noticing, and (2) this will likely be the last post about you for a while.

Users can step through the drawing program, but it’s capable of running instantaneously (i.e., without time, and therefore without much complexity). There’s no need for a visualization to wait for a network event, but a good visualization reacts to mouse motions, like hover, click, and drag. Interactivity is a crucial component of visualization. Victor has made a great way to make charts for boardroom presentations, but he hasn’t allowed the user to interact with the data. The visualization is “dynamic” in that it accommodates new data, but it doesn’t react to the user. I’m not asking to drag a bar to set the data to that point; we’re working under the assumption that the underlying data is fixed. And I can abstract any color or magic number as a parameter, so I can set data-independent features of the graph, like error bars, dynamically. But without interactivity, we can only accomplish the first third of Shneiderman’s mantra:

Overview first, zoom and filter, then details-on-demand.

Without interactivity, we can’t zoom or filter or otherwise adjust the visualization to get our details on demand. All we have is an overview. Our visualization sits there, dead. This is not good enough. Data today is so complex that even the best static image isn’t going to elucidate subtle meanings. Instead we need interactivity, so that the visualization reacts to the user, not just the designer. I should clarify straight away that I’m not talking about the interactivity Victor rails against in his paper Magic Ink, where the user has a specific, often mundane question she wants answered. (How long will I wait for the bus? What movie theater should I go to?) I’m talking about systems where the user doesn’t know what she wants, and needs to explore the data to develop an intuition or notice something strange.

There is a disconnect, an asymmetry, in Victor’s work. In Stop Drawing Dead Fish, we had a novel direct manipulation tool and interactivity with its results. In Drawing Dynamic Visualizations, we had a novel direct manipulation tool and dynamic data. In Media For Thinking The Unthinkable, specifically the part on signal processing, we had interactivity and dynamic data. Create with direct manipulation, load dynamic data, interact with the result: pick two.

How can we add interactivity to Victor’s tool? Let’s start with mouseover. As a function of the mouse position, which can be considered an input, hover is stateless. The tool can run geometric collision detection on its primitives and provide a boolean measurement (output) as to whether the shape is being hovered over. If you want to use this information to, say, change the shape’s color, you have to have shapes able to read their own outputs. This can lead to feedback loops. If we draw nothing on mouseover, then we’re not being moused over anymore, so we go back to drawing our shape, which is now being moused over…and suddenly our system has state, and “flickers” indefinitely. Worse, by creating a place for code that is executed only if certain external conditions are true, we create asynchronous, “jumpy” code. This is a large increase in physical and logical time.
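Here’s the flicker reduced to a toy simulation (my own illustration, not Victor’s tool), assuming the mouse sits still over the spot where the shape would be:

```python
def next_frame(shape_was_drawn):
    hovered = shape_was_drawn     # the mouse is over the shape iff the shape exists
    return not hovered            # the rule "draw nothing on mouseover"

drawn = True
for frame in range(6):
    drawn = next_frame(drawn)
    print("frame", frame, "shape drawn:", drawn)   # alternates: the flicker
```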

Selecting a shape runs into similar problems, and additionally requires a global variable (internal, not a parameter) to keep track of the selection. It gets even worse if we want to add collision detection between objects, like a force-directed diagram. (Actually that’s not collisions, just proximity-based repulsion, but roughly the same idea.) Even if the system manages to detect when two subpictures touch, they now will need a place for code that is called when that is the case. Victor briefly demonstrated how physics can be simulated by using a guide vector to geometrically represent velocity, which we could presumably shift in reaction to the angle of line contact. But we’re starting to think of shapes as their own entities, having state represented in guides associated with them, rather than being drawn and forgotten about. This brings us into Object Oriented Programming, the traditional choice for this sort of interactivity. It’s a great paradigm but it’s quite foreign to Victor’s tool. (Although his Stop Drawing Dead Fish tool lets its user do exactly this. Maybe the line between data scientists and visual artists, or perhaps their tools, is blurring…)

There’s a brief moment where Victor abstracts a color from a shape, and makes it a parameter. This parameter starts off as numeric but changes to a color, likely in response to a keyboard command. If the tool already has a notion of types, that a number is not a color, then there’s a piece of complexity already imported from traditional programming. Types, OOP, collision detection, encapsulated mutable state – all of these ideas have their place in code. There are also a number of mainstays of drawing tools we could include, like gradients, z-order, manual kerning, and the dropper tool. Deciding which of these to include and which to omit, and how to integrate them into the tool, will require some careful design decisions. I wonder how many of these ideas we can pile into the tool before either the graphical interface or the layperson’s acuity collapses under the weight of the complexity.

I, like Victor, am frustrated at the state of software and computing. Most artists are able to react to their medium, to try something, revise it, and use mistakes as inspiration. Listen to how creators in fields as diverse as painting and carpentry discuss a problem. It’s always about sculpting the unruly workpiece material with capable and precise tools, man vs. nature. So why then are computers, where the tool and the medium are one and the same, so limiting that our society can’t see them as anything other than frustratingly complex? Instead of using the medium to find out what she wants to create, the creator has trouble merely realizing what’s in her head. We need a fundamental overhaul of the tools we use for thinking. These tools should let humans, data, and simulation work together. Victor produces tantalizing glimpses of such tools with ease.

Maybe there are true geniuses out there. People who effortlessly sift through complexity, and can make bold pronouncements with confidence. The rest of us have to fake it. We get mired in the cacophony of voices and ideas, paths to choose and tools to use. We do our best to make sense of them but always wonder, in a grass-is-greener sort of way, if we’re missing the point entirely.

Media for Thinking the Eminently Thinkable

Bret Victor has done it again. His latest talk, Media for Thinking the Unthinkable, gives some clues as to how scientists and engineers can better explore their systems, such as circuits or differential equations. He’s previously shown an artistic flair, demoing systems that allow artists to interact directly with their work, without the use of language. In both cases, the metric for success is subjective. What the novel user interface is doing is not determining the “right” outcome but allowing a human to better see and select an outcome. Victor is trying to help users develop intuition about these systems.

That strategy looks promising for sufficiently complex systems, but when it comes to pre-college education, the goal is not to instill semiconscious, unarticulatable hunches. This relatively simple material demands full, clear, and lingual understanding. This decidedly different goal will, or at least should, guide the design of educational technology in a different direction than Victor’s demos.

What is an educational technology? In today’s world, you’re probably thinking of a game, video, recorded demonstration, or something else with a screen. Those all qualify, but so do manipulatives (teacher-speak for blocks and other objects for kids to play with to grok a concept) and pencil and paper. For the simplest of problems, say 2+2, there’s no need to get out the iPad; just grab four blocks. For advanced math, a computer is necessary. So where is the meeting point, where the see-saw of objects and devices balances?

What follows is a case study of educational technology applied to a specific mathematical idea, multiplication of negative numbers, which is pretty close to that balance point. We’re trying to explain why a negative times a negative equals a positive, a classic grade school trouble spot. (Instead of fixing a specific misconception, we could have gone with open-ended exploration of a slightly larger topic. Touch Mathematics has some interesting material on that front.)

The MIND Research Institute has developed a series of visual math puzzles that all involve helping a penguin traverse the screen. In their multiplication unit, the penguin stands on a platform in front of a gap. Multiplying by a number stretches the platform proportionally. A negative number will rotate the platform 180 degrees, so two rotations cancel each other. When a student interacts with this system, they gain a useful metaphor for thinking about negative numbers: go left on the number line. However, many of the more advanced puzzles seem contrived, and are difficult to do without resorting to pencil and paper because they don’t provide a cognitively useful metaphor. Moreover, adopting the unit, setting up computers for the students, and setting aside time has significant overhead. Can we go simpler?

Let’s take the rotation metaphor and find physical objects for it. We could use plastic spinners with telescoping arms that allow them to stretch. The arm should lock into place at regular intervals, and the rotation should lock into the two opposite directions. I don’t think this will work because the system no longer reacts, the student has to provide the ideas, and it’s susceptible to mechanical breakage. We could have pentagonal pieces that look like squares with a slight arrow on one side that allows us to indicate direction. Then we place groups of them together, three groups of two for 2×3, and rotate them when we have a negative. But then we have to rotate all of them, which is time consuming. We could put blocks into “boats”, some way of grouping them and assigning a rotation, but that’s even more cumbersome. All of these methods require special manipulatives to be purchased, organized, stored, and cleaned. To summarize, I can’t think of a good way to translate the rotation metaphor into physical objects.

At even less fidelity, we can use square blocks. These are generic enough to justify keeping around for other lessons, and we can have magnetic squares for the teacher to put on the board and actual blocks for the kids to play with at their desks. We can use the grouping idea, and different colors for positive and negative blocks. Here’s how I envision it working:

[Figure: multiplication illustrated with rows of blue (positive) and red (negative) squares]

So the blue squares are positive numbers, and the red squares are negative ones, except in the second two examples when we’re on the other side of the number lines, in Negative Land. In Negative Land, the blue squares are negative! So that means that the red squares that were negative are now positive.

That doesn’t make much sense. We’ve created an abstract visualization, so that even as the kids grasp the squares they won’t grasp the concept. The Negative Land mumbo jumbo winds up hand waving away the crux of the issue: why is a negative times a negative positive? We’ve illustrated the idea of reversing something twice but haven’t done much to justify it. Even worse, we’ve created two representations of the same number. The middle two lines are both -2, and the outer two lines are +2, but they don’t look the same.

Even more low tech: repeat the same exercise on paper with Xs and Os standing in for red and blue squares. This is eminently practical but the students now lose the ability to hold something concrete in their hands. We’ve pared the metaphor of flipping twice down to its essence, but lost a lot of its effectiveness in the process.

To quote Edison, “I haven’t failed. I’ve found ten thousand ways that don’t work.” Well, not quite so many, but same idea. Hopefully I’ve illustrated some of the pitfalls in making educational technology both practical and effective. And maybe you learned something about negative numbers in the pro — wait a sec. If explaining different candidate explanations can actually teach something, then we may not have to come away empty-handed after all.

The computer is assisted imagination. We can take the metaphors expressed most clearly in software and give them to the kids directly. Tell them to imagine themselves on a stretching, rotating platform. Better yet, line up groups of students and have them all rotate together, sensing the math with their own bodies.

The hard part of crafting a lesson plan, whether in person or over technology, is devising the metaphor, a new way to think about a topic so that it makes sense. Once we see negative numbers as rotation, systems of inequalities as unbalanced mobiles, complex numbers as spinners, then the students can explore them. That can be in spoken or written word, symbols, movement, sculpture, drawing, or on the computer. That’s the easy part.

This doesn’t bode well for educational technology companies. If their products are effective, their central metaphors can be easily expatriated to classroom exercises. At best, the metaphor is wrapped in unnecessary packaging; at worst, the packaging is all there is, hawked by cargo cults worshipping motion and interactivity as if these things only exist on a screen.

* * *

An addendum, back to Bret Victor. In a note about the talk, he defines its subject matter as “a way of thinking. In particular — a way of using representations to think powerfully about systems.” (Emphasis his.) He is striving to create the next generation of tools that make unthinkable thoughts thinkable. Only these “powerful, general, technology-independent ways of thinking” will still be relevant a hundred years from now. It’s a daunting, open-ended task, especially considering how much trouble we got into just with arithmetic.

With the announcement of iOS 7, John Maeda criticized not just the OS but the debate it engendered. By phrasing interface design as a binary, of photorealism vs. flat abstraction, “To Skeu or Not To Skeu”, we lose sight of the possibilities that lie before us. Maeda writes, “What we need now is to move beyond the superficial conversation about styles and incremental adjustments to boldly invent the next frontier of interface design.” What do those new designs look like? “Something we haven’t even dreamed of yet.”

Reading visionaries like Victor and Maeda, it’s tempting to join the great quest to fundamentally alter how every person uses software, and by extension, how every person thinks. But part of me doubts the realism of their grandiose pronouncements. On the other side of the coin, Matt Gemmell is an iOS developer very much concerned with the present and its tools. He thoroughly deconstructs iOS 7, seeing it as an improvement, but not extravagantly so. It’s a needed update after six years. In another six we’ll be able to see the next UI paradigm, but he doesn’t waste breath trying to guess it now. Next century is of no concern. Write this month’s app, get this week’s paycheck, enjoy dinner tonight with friends and family. We’ll create the tiniest bit of the future in the morning.