Archive for the ‘Philosophy of Mind’ Category

Prefer Verbs to Nouns

My principle, v0.2

Prefer verbs to nouns.

When Bret Victor introduced the concept of a principle, he said a good principle can be applied “in a fairly objective way”. This is the biggest problem with my first draft, which took several sentences to define what a powerful way of thinking was. A principle must be general enough to apply to many situations, but also able to be operationalized to find meaning in any specific situation. Devising a principle requires inductive reasoning (specific to general), but applying it demands deductive reasoning (general to specific). Forging a principle resembles Paul Lockhart’s vision of mathematics: an idea that at first may be questioned and refined, but at some point begins to “talk back”, instructing its creator rather than being shaped by them.

I could have formulated the principle as verbs, not nouns or similar, but the principle itself demands a verb. I have chosen prefer, but I fear that may not be active enough; something closer to choose or emphasize verbs over nouns may be more fitting. As the principle predicts, identifying a dichotomy and even choosing one side is easy compared to selecting the verb that encompasses the process and relationship. This principle retains its status as a draft, although unlike its predecessor it does not have the glaring flaw of subjective application. The verb (and the preposition serving it) are still to be determined, and the possibility of cutting a new principle from whole cloth also remains open.

All of this without a discussion of the principle itself! Human language is endlessly versatile and adaptive, and therefore (in hindsight!) it is quite fitting that I use the terms of language itself. Of course the principle does not apply specifically to language, but to any field that involves structures and the relationships between them, which is to say, any field at all. It can apply to essays, presentations, or works of art. Finding the verbs and nouns of a particular field is often easy, even if the process is difficult to abstract. With that said, verbs are not always grammatically verbs; -ing and -tion nouns can be fine verbs for the purposes of the principle.

The verbs should be emphasized to your audience, but the setting will determine how you craft their experience. Most of the liberal arts require grappling with verbs directly; a good thesis is built around a verb that relates otherwise disparate observations or schools of thought. By emphasizing the verbs, one communicates causal mechanisms, transformations, relationships, and differences across time, location, demographics, and other variables. The goal is not merely to show that the nouns differ (“the a had x but the b had y”), but why: what acted on them to cause the differences. Frequently the base material (often historical events or written works) is already known to your audience, and you need to contribute more than a summary. You need to justify a distinction.

However, in the presence of detailed, substructured, and numeric nouns, it is often best to let them speak directly. Often the evidence itself is novel, such as a research finding, and you want to present it objectively. In such cases, more frequent in science and engineering, placing your audience’s focus on verbs requires that you place yours on presenting the nouns. The more nouns you have, the more ways they can relate to each other; the more detailed the nouns, the more nuanced those relationships can be. When the nouns are shown correctly, your audience will have a wide array of verbs available to them; Edward Tufte gives examples (Envisioning Information, 50):

select, edit, single out, structure, highlight, group, pair, merge, harmonize, synthesize, focus, organize, condense, reduce, boil down, choose, categorize, catalog, classify, list, abstract, scan, look into, idealize, isolate, discriminate, distinguish, screen, pigeonhole, pick over, sort, integrate, blend, inspect, filter, lump, skip, smooth, chunk, average, approximate, cluster, aggregate, outline, summarize, itemize, review, dip into, flip through, browse, glance into, leaf through, skim, refine, enumerate, glean, synopsize, winnow the wheat from the chaff, and separate the sheep from the goats

The ability to act in these ways is fragile. Inferior works destroy verb possibilities (in science and engineering) or never present them at all (in the liberal arts). Verbs are the casualties of PowerPoint bullets; nouns can often be picked out from the shrapnel, but the connections between them are lost. Conversely, a focus on verbs promotes reason and the human intellect. Verbs manifest cognition and intelligence. Emphasizing verbs is a proxy and litmus test for cogent thought.

Brief Thoughts On Scratch

Previously, I’ve lambasted the children’s programming language Scratch for its cockpit’s worth of controls.  This encourages its users to try anything and see what works, rather than plan, predict, and understand exactly what each piece of code is doing. It’s instant gratification … and a tight feedback loop.

Scratch is not a tool to learn programming or metacognition; Scratch is a tool to create artistic displays that could not otherwise be created (by children). Scratch thus allows children to explore ideas not related to mathematics or programming. They have creative freedom, much like art class. And what elementary schooler produces anything particularly good, objectively speaking, in art class? So don’t judge the Scratch projects too harshly.

Scratch is a social platform, except that the socialization happens in real life. Get a few kids in a room using it, and they’ll share both creations and code, motivate each other, and change goals on the fly. This differs from more mature programming, where one has a specific goal in mind. The other key difference is that most languages discourage straight-up experimentation; one has to know what one is doing in order to do it. Scratch reverses this: a kid can learn what a command does by using it, because all the commands are displayed, ready to be used.

Not only displayed, but also labelled, unlike the Khan Academy programming language that drops down four numbers with no context. It’s a way to slyly introduce relative and absolute motion – move up by, move to – in a way that lets kids work out the rules. No, they won’t work out all the rules, but I think they’ll come to fewer incorrect conclusions (misconceptions) in a reactive medium than with marks on paper. They will figure it out later, much later.
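
For concreteness, here is that distinction in miniature, written as ordinary Python rather than Scratch blocks (the variable and the numbers are made up for illustration):

    x = 50        # a sprite's current horizontal position

    # Relative motion: "move right by 10"; the result depends on where you start.
    x = x + 10    # x is now 60

    # Absolute motion: "go to x = 200"; the result is the same from any start.
    x = 200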

Scratch is a way to put Lego bricks into the bucket. The kid will reassemble them into many different knowledge structures over the years before creating something strong and beautiful – an educated mind. It’s during that process, that struggle, that they can learn to program with planning and expressiveness, rather than tacking on bricks ad-hoc. It’s a stage everyone goes through, and Scratch can help a child make the most of it. But don’t confuse acquiring bricks with figuring out how to assemble them.

This isn’t to say that Scratch as it exists is perfect; far from it. We need to keep rethinking what tools are best to give to our children (and adults, for that matter). But I’m backing off my previous stance that guided minimalism is the answer. (Or is my new “wait and let them figure it out” view too fatalist?)

As We May Have Thought

Vannevar Bush wanted to build machines that made people smarter. His 1945 paper, As We May Think, described analog computers that captured and stored information, the earliest vision of today’s internet. All of Bush’s hopes for chemical photography have been surpassed by today’s digital cameras, and digital storage media are more compact than the most hopeful predictions for microfilm. He also predicted dictation, and though today’s software does a passable but not perfect job, it has not reached the level of ubiquity Bush foresaw. He was also wrong about the form factor of cameras, predicting a walnut-sized lens mounted like a miner’s lamp. The result resembles Google Glass, and no other product:

One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together.

As for selecting information from the ensuing gigantic database, Bush posits the “Memex”, a desk with displays built into it. The Memex is personal, allowing its user to connect pieces of information into trails for later examination. Personal and built on connections, the Memex works much like the mind.

The late Douglas Engelbart expanded on the purely hypothetical Memex with NLS, short for oNLine System. In “the mother of all demos”, he showed how users could traverse and manipulate trees of data, with rich transclusion of content. Unlike the Memex, NLS allowed real-time sharing by way of video chat. Like the Memex, NLS was primarily text, and its user-facing component was the size of a desk.

And yet … Bush’s and Engelbart’s systems are not psychologically or sociologically realistic. Though Bush was writing in 1945, his vision seemed Victorian: a facade of proper intellectualism with no grounding in the less dapper side of human nature. One can hardly imagine multimedia beyond classical music and Old Master paintings emanating from the Memex. Bush used the effectiveness of Turkish bows in the Crusades as an example of what one could research on a Memex. He missed the target. The Memex and NLS were designed for a race of hyper-rational superhumans that does not exist.

The fictitious enlightened user would emphasize restraint, but today’s technology can, for all intents and purposes, do anything. The ability to do anything is less productive and more dangerous than it sounds. Intellectually, such a system encourages slapdash and incomplete thought. It does not force you to abandon weaker ways of thinking; it gives you no guidance towards what will work, what will work well, and what will not work at all. Sociologically, the availability of information on a scale beyond what Bush could have dreamed hasn’t made us an enlightened society. Having correct information about (for example) evolution readily available online has not made a dent in the number of people who read Genesis literally. And it gets worse.

Moore’s law may be the worst thing to happen to information technology. With computing so cheap and so ubiquitous, with the ability to do anything, we have overshot the island of scarcity inhabited by Bush and Engelbart and landed in the land of social media, entertainment, and vice. The universal systems for the betterment of humanity have fallen to fragmented competitors in an open market. The emphasis on mobile these last six years has led to apps of reduced capability, used in bursts, on small screens with weak interaction paradigms. This is what happens when there’s more computing power in your pocket than Neil Armstrong took to the moon: we stop going to the moon.

Recreational computation is here to stay, but we may yet partially reclaim the medium. Clay Shirky is fond of pointing out that erotic novels appeared centuries before scientific journals. Analogously, we should not be deterred by the initial explosion of inanity accompanying the birth of a new, more open medium.

I can only hazard a guess as to how this can be done for recreational computing: teach the internet to forget. (h/t AJ Keen, Digital Vertigo) One’s entire life should not be online (contrary to Facebook’s Timeline – it’s always hard to change ideas when corporations are profiting from them). A good social website would limit the ways in which content can be produced and shared, in an attempt to promote quality over quantity. Snapchat is a promising experiment in this regard. There’s been talk of having links decay and die over time, but this seems like a patch on not having proper transclusion in the first place.

As for programming, though, the future is constrained, even ascetic. If Python embodies the ability to do anything, then the future is Haskell, the most widely used [citation needed] functional programming language.

Functional programming is a more abstract way of programming than most traditional languages, which use the imperative paradigm. If I had to describe the difference between imperative programming and functional programming to a layperson, it would be this: imperative programming is like prose, and functional programming is like poetry. In imperative programming, it’s easy to tack on one more thing. Imperative code likes to grow. Functional code demands revision and absolute clarity of ideas, which must be reforged before another feature can be added. In functional languages there are fewer tools available, so one needs to be familiar with most of them in order to be proficient. Needless to say, imperative languages predominate outside the ivory tower. Which is a shame, because imperative languages blithely let you think anything.
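
To make the contrast concrete, here is a small sketch of the two styles, both written in Python for familiarity (Haskell would express the second more natively); the task and the names are mine, chosen only for illustration.

    # Imperative style: spell out the steps and mutate state as you go.
    # It is easy to tack one more thing onto the loop.
    def passing_students_imperative(scores):
        passing = []
        for name, score in scores.items():
            if score >= 60:
                passing.append(name)
        passing.sort()
        return passing

    # Functional style: describe the whole result as a single expression built
    # from smaller ones; adding a feature usually means rethinking the expression.
    def passing_students_functional(scores):
        return sorted(name for name, score in scores.items() if score >= 60)

    grades = {"Ada": 91, "Ben": 47, "Cam": 78}
    assert passing_students_imperative(grades) == passing_students_functional(grades) == ["Ada", "Cam"]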

The problem with thinking anything is similar to doing anything: there’s no structure. But if we can’t think anything, then some good ideas may remain unthought. There is a tension between thinking only good ideas and thinking any idea. In theory at least, this is the relationship between industry and academia. While companies want to produce a product quickly, academia has developed programming paradigms that are harder to use in the short term but create more manageable code over time. These are all various ways of imposing constraints, of moving away from the ability to do anything. Maybe then we’ll get something done.

Programming our Children

A generation ago, computers only understood text. You would program the computer in English text. You would ask your questions on punchcards encoding text. Your answer would be provided as monospaced, unadorned text. Since the early 1980s we have refined the graphical user interface, or GUI, to allow humans to communicate with computers on more familiar terms. Although a boon for the layperson, GUIs have been troublesome for computer scientists. They are hard to build because they are so open-ended. They are hard to test, because rather than printing a single correct answer there are many paths the user may take to accomplish the same goal.

Computer science still starts with a text editor and a compiler, because programming is better served by text. Text affords programmers absolute control over their programs. Written language is far more expressive than pointing and clicking, allowing for explicit and precise descriptions. Clean code is a clear explanation of an algorithm directed to a mindless worker. The struggle of a programmer is to achieve sufficient clarity for both the computer and him- or herself. It can be a very enlightening experience to debug an algorithm, discover it doesn’t quite do what you wanted it to do, and adapt it further. That said, the sheer austerity of the task can make it daunting without the right training and motivation on the part of the programmer.

GUIs are quite the opposite. They show many available options, reward experimentation, and make complex actions easy by hiding detail. GUIs make computing accessible to a wide audience. A user interacts with a GUI as a peer, clicking and dragging and seeing how the interface reacts. Ultimately, convinced the GUI is logical and predictable, the user embraces it as a new way of thinking. But GUIs are limited. They make it very difficult to perform analogous actions repeatedly or to store a sequence of actions for later use.
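
One way to see the limitation: renaming a single file in a GUI is a few clicks, but renaming a thousand the same way is a thousand rounds of clicking. In text, the action is written once and repeated mechanically. A minimal sketch in Python, assuming a hypothetical folder of photos:

    import os

    # Rename every .jpeg file in a (hypothetical) folder to .jpg: one loop,
    # however many files there are. A GUI offers no obvious way to store or
    # repeat this sequence of actions.
    folder = "photos"
    for name in os.listdir(folder):
        if name.endswith(".jpeg"):
            new_name = name[: -len(".jpeg")] + ".jpg"
            os.rename(os.path.join(folder, name), os.path.join(folder, new_name))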

There is an analogy to be made with education. Programming is like direct instruction, where knowledge is relayed linguistically and authoritatively. (No wonder Bill Gates and Salman Khan like it.) GUIs are like constructionism, where feedback loops reveal non-arbitrary behavior of a system that the user/student slowly begins to internalize. (So I constructed my own definition. How meta.)

Both methods of interacting with a computer are valid and potentially productive, so it seems both educational philosophies are valid as well. But there is a critical flaw in the analogy. For GUIs, students are analogous to the user and the computer is akin to some representation of the material itself: manipulatives, an experiment, a video, a graph or plot. But for text-based programming, the student is not the programmer; they’re the computer! The teacher is the programmer, the direct instructor, who crafts clear explanations of algorithms for the students to mechanically follow.

Direct instruction is degrading. It robs students of their ability, desire, and right to explore and create. Knowledge transfer is not like copying a file, where we wait as it is methodically duplicated. Knowledge is personal, with idiosyncrasies and unique contexts. To insist on teaching children the same way we program a computer is simply wrong. It cuts to the core of what Dethorning STEM is about: our society treats people like computers and computers like people.

On a positive note, this analysis suggests that we should introduce computational thinking as another way for students to interact with the material in a constructionist setting. Having students write their own pseudocode for long division may be a viable way to teach it, if it needs to be taught at all. Computational literacy will play an increasing role in the next century as computers become more ingrained in our lives. In the future, following an algorithm won’t be good enough — you’ll have to be able to write one.
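
As a sketch of what that pseudocode might firm up into, here is one way to write the paper-and-pencil procedure in Python (my own formulation, not a prescribed curriculum):

    def long_divide(dividend, divisor):
        """Digit-by-digit long division, the way it is done on paper."""
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):                       # bring down the next digit
            remainder = remainder * 10 + int(digit)
            quotient_digits.append(remainder // divisor)  # how many times does it fit?
            remainder = remainder % divisor               # what is left over?
        quotient = int("".join(str(d) for d in quotient_digits))
        return quotient, remainder

    print(long_divide(7432, 6))  # (1238, 4), since 6 * 1238 + 4 == 7432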

Unfortunately, the state of computer science education is in shambles. Basic computer classes often teach how to use Microsoft Office by following rote algorithms — truly the blind leading the blind. Computer science itself takes a back seat to all other subjects, and is only sometimes offered as an elective. But I think that computational literacy does not require a computer scientist, a computer lab, or even a computer. It’s not content; it’s a technique. By cleverly inserting the right activities into the existing curriculum, teachers can cover computational thinking alongside any subject. Training teachers how to do that, and getting the administrators to sign on, will prove difficult.

A new, innovative approach is needed. One that breaks from the ossified red tape and small scale of the classroom, and equally from the poor pedagogy underlying most edtech products. The next generation of children deserves no less.

Internet Idea Books: Roundup, Review, and Response

What Technology Wants (Kevin Kelly, 2010) is a sweeping history of technology as a unified force, which Kelly calls “the technium”. Kelly starts slowly, drawing ever larger circles of human history, biological evolution, and the formation of planet Earth from starstuff. His scope, from the Big Bang to the Singularity, is unmatchable. But the purpose of this incredible breadth is not readily apparent, and isn’t for the first half of the book, as Kelly talks about everything but technology. I advise the reader to sit back and enjoy the ride, even if it covers a lot of familiar ground.

In one of the book’s chapters on evolution, Kelly argues that the tree of life is not random, but is instead constrained by chemistry, physics, geometry, and so on. The author points to many examples of convergent evolution, where the same “unlikely” anatomical feature evolved multiple times independently. For example, both bats and dolphins use echolocation, but their common ancestor did not. Kelly is careful to attribute this phenomenon to the constraints implicit in the system and not to supernatural intelligence. He argues that, in the broadest strokes, evolution is “preordained” even as the details are not.

Kelly begins the next chapter by noting that evolution itself was discovered by Alfred Russel Wallace independently of, and concurrently with, Charles Darwin. This becomes the segue into convergent invention and discovery; Kelly insists that the technium should be regarded as an extension of life, obeying most of its rules, although human decision replaces natural selection. Technology becomes an overpowering force that loses adaptations about as willingly as animals devolve (which is to say, not very).

The premise that technology extends life becomes central to Kelly’s predictions. He paints a grandiose picture of technologies as varied and awe-inspiring as the forms of life, encouraging ever more opportunities in an accelerating dance of evolution. “Extrapolated, technology wants what life wants,” he claims, and lists the attributes technology aspires to. Generally speaking, Kelly predicts technological divergence, where your walls are screens and your furniture thinks, and the death of inert matter. Like the forms of life, technology will specialize into countless species and then become unnoticed, or even unnoticeable.

Much of what Kelly predicts has already happened for passive technologies. We don’t notice paper, government, roads, or agriculture. But I don’t think that information technology will achieve the same saturation. No matter how cheap an intelligent door becomes, a non-intelligent version will be cheaper still, and has inertia behind it. Kelly claims that such resistance can only delay the adoption of technology, not prevent it. Nevertheless, something about Kelly’s book disturbed me. It was wrong, I felt, but I couldn’t articulate why. So I read a trio of books that take a more cautious view of information and communication technologies. As I read, I asked of them: what has the internet taken from us, and how do we take it back?

The root of consciousness

I’d like to share another excerpt from Gödel, Escher, Bach. Author Douglas Hofstadter is going through a pretty involved mathematical proof/logical argument that I can’t quite follow. He has finally derived, following Gödel’s footsteps, the statement G, which shows that any sufficiently powerful, consistent formal system is incomplete.

More thoughts on educational videos

There’s a lot of disagreement surrounding Khan’s teaching style because there is a fundamental disagreement about the goals of education. I don’t claim to have the answers, but I’ll start by providing some vocabulary. A problem is solved mechanistically and has a right answer. A puzzle requires creativity, consideration of nuance, and the ability to work in multiple ways simultaneously. Puzzles have multiple routes to a single solution. Sometimes the difference between a problem and a puzzle is the person (or machine) solving it. By contrast, a wicked problem has no single correct solution, no clear set of rules, and no finite set of possible answers.

Just about every “big” issue of our time is a wicked problem: war, poverty, climate change, population growth, and yes, education. I’m inclined to say that the ultimate goal of education is to teach students how to tackle wicked problems, to the extent that such skills can be taught. You might think that this means encouraging creativity trumps everything, and you’d be half wrong.
