Using Simulation

Sherry Turkle’s Simulation and its Discontents takes a disapproving stance towards software that mimics the real world. She surveys many fields of science and engineering, where simulation takes many forms, including protein folding, architectural drawings, and physical phenomena. She highlights how older practitioners miss having their hands on their workpiece, while younger ones are anxious about knowledge they may never have. In the 1980s, simulation drove a wedge between MIT faculty and students; more recently it has been begrudgingly accepted by all.

There is certainly a generation gap here, but it exists as much in the technology itself as in the minds of the scientists. Turkle repeatedly emphasizes the “black box” nature of software, and how its users cannot examine the code themselves. She conveniently ignores the open source movement, which creates software that can be understood, modified, and redistributed by its users. True, much of science and engineering software is still proprietary, and open source offerings are frequently inferior to the paid versions, but she doesn’t even have that discussion.

Secondly, even if we users could see the source, understanding it is not trivial. Her book predates the “learn to code” movement by a few years, but the same objections apply: computer science is an engineering field in its own right, and software should be developed and maintained by specialized practitioners rather than done “on the side” by all engineers. Yes, domain experts should be brought in when necessary. Research into more advanced programming languages will likely only make the situation worse, as these languages typically demand an ever-larger and more abstract body of knowledge before one can work in them, catering to the expert over the beginner.

Any simulation that performs calculations that could be done by hand is really an automation. A true simulation, arguably by definition, cannot be fully understood. I agree that all engineers should be able to dash off a few lines to automate a menial task, but simulations are harder. In particular, there are languages (Python and Ruby) that are easy to learn well enough to automate simple tasks. But most simulations aren’t written in these languages. The need for performance drives simulations to be written in C or C++, frequently incorporating many libraries (shared code written by someone else; even programmers don’t understand all of their own programs). Unlike the command-line utilities of yore, graphical user interfaces and high-performance graphics rendering require specialized and complex programming techniques. Integrating with the internet or a database makes programs more complicated still. Finally, the size of programs has ballooned. There is so much code in a typical piece of software, and it is so complex, that I find it naive when non-programmers insist that if only they could see the code, they could understand it.
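To make the distinction concrete, here is the sort of “few lines” I have in mind, a made-up sketch of my own rather than anything from the book: a short Python script that tidies up a directory of output files. Everything it does could be done by hand; the computer merely does it faster and without boredom.

    # A hypothetical housekeeping script: rename every .dat output file in a
    # directory with a zero-padded index. Nothing here is beyond hand labor;
    # this is pure automation, not simulation.
    import os

    data_dir = "results"  # assumed location of the output files
    dat_files = sorted(f for f in os.listdir(data_dir) if f.endswith(".dat"))
    for i, name in enumerate(dat_files):
        os.rename(os.path.join(data_dir, name),
                  os.path.join(data_dir, f"run_{i:03d}.dat"))

A million-line climate model admits no comparable ten-line summary, and that is the gap I am describing.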

Programs and programming today are far more complicated than in the 1980s. The most advanced climate models consist of a million lines of FORTRAN code, simulating equations from many disparate fields of natural science. They are beyond the understanding of any single individual. And indeed, understanding is no longer the goal. Just as technology has allowed us to see things invisible to our eyes, and hear things inaudible to our ears, simulation allows us to think things incomprehensible to our brains.

Turkle certainly grasps that deep, full understanding is imperiled, but only in her conclusion does she so much as entertain the notion that this may be necessary or good. Simulation may open up new possibilities more than it closes them. Science has surpassed the point of what can be understood by crystalline thought; the future is noisier and approximate. Robert Browning wrote that “a man’s reach should exceed his grasp”. A fictionalized Nikola Tesla ponders this quote in The Prestige, with the context that what man can affect, he cannot always control. Turkle would do well to heed his next remark: “It’s a lie. Man’s grasp exceeds his nerve.”

How do we get the nerve to embrace the new future of simulation? In part, by addressing specific concerns raised by Turkle’s interviewees. Defaults are too tempting, so we shouldn’t provide them. A design can appear finalized before it actually is, preventing further iteration, so displays can mimic physical imprecision. High-level summaries should allow the user to see examples, to jump between layers, to see why the computer has classified or calculated the way that it did. Nevertheless I expect changes in simulation to come primarily from software engineering and culture rather than the technology itself.

Turkle gives an example of a protein folding simulation devising a molecule that is clearly wrong to the biologist, because it defies her understanding of how proteins work. But what is her thought process if not another simulation? Perhaps it is more correct than the software in this case, but in general, thinking and outlining should be considered the cheapest, fastest, and lowest-fidelity of the many simulation tools available to us. Intuition can be powerful, but it can also be wrong, and any claim in science or engineering requires more than intuition to support it. More formalized methods of thinking (how did they build skyscrapers in the 1920s?) are just algorithms followed by hand, algorithms a machine can run today, faster and with (potentially!) fewer errors. If the biologist can articulate her thought process, it can be automated, and if she cannot, it is mere intuition.

With regard to creativity, simulation (and I include the word processor here) is a double-edged sword. When the barrier to creation is low, we can get our thoughts out quickly and complete the dreaded first draft. Ideas and structure form and reflow on the fly. The work is crafted interactively, iteratively, in a continuous and tight feedback loop. Human and simulation play off each other. This is clearly better than the computer working in isolation, such as the protein folding program that merely produced the “right” answer. What I propose is that it may also be superior to humans working “blind”, creating ideas fully in their heads, attempting to simulate them there, in discrete drafts. This model is a relic of hand- or typewritten pages, technologies where copying wasn’t instant. The downside is that it is easy to put less thought in before writing, and the end product may lack a harmonious global structure as a result. The compromise is to work with many tools of increasing fidelity and expense. When an idea does not work, we desire to “fail fast” in the least expensive medium in which the flaw is manifest.

Frequently, ideas we think are brilliant fall over when confronted, and a simulation can fail fast, can confront them quickly. In user interface design, paper prototypes are created not for the designer but for a test subject, and all the intellectualized beauty in the world means nothing if the user can’t operate the interface. This echoes a fundamental tenet of engineering across the board: you are not designing for yourself. What you think doesn’t matter unless it is validated by a ruthlessly objective third party. Writing is the exception: it reifies human thought itself without the consent of the external world, shifting the burden onto the author. Yet even though the writer struggles to crystallize his ideas as much as possible prior to composing them, he knows the value of trusted readers to provide feedback.

This leads us to the notion of software testing, which is all but absent from Turkle’s book. Provably correct software is still an active area of research, so those shipping software today verify its correctness empirically. Testing exists on many scales, from system-wide routines of many actions down to “unit tests” that cover as small a piece of functionality as possible. Although tests are typically written by the developers, a user can also benefit from writing unit tests, as they force her to think about and articulate how the simulation should act in a very controlled instance. She will build confidence that the simulation matches her ideas. When a test fails, either the simulation, her understanding, or her ability to articulate her understanding is incomplete.
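As a sketch of what such a user-written test might look like (my own invented example, with a toy stand-in for whatever black box she actually runs): she picks a controlled instance she can reason about by hand, say a body falling from rest with no drag, and pins down what the simulation must report.

    # A hypothetical unit test written by a user of a simulation. The function
    # below is a toy stand-in for the real simulation; the point is the test,
    # which encodes one hand-checkable expectation.
    import unittest

    def simulate_fall(duration, dt=0.001, g=9.81, drag=0.0):
        """Toy simulation: integrate the velocity of a falling body."""
        v = 0.0
        for _ in range(int(duration / dt)):
            v += (g - drag * v) * dt
        return v

    class TestFreeFall(unittest.TestCase):
        def test_one_second_no_drag(self):
            # With no drag, v = g * t, which she can check on paper.
            self.assertAlmostEqual(simulate_fall(1.0), 9.81, places=2)

    if __name__ == "__main__":
        unittest.main()

If the assertion fails, one of the three things named above is incomplete: the simulation, her understanding of free fall, or the way she wrote the expectation down.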

Testing software is a philosophical shift from understanding it, moving programming away from science and towards engineering. Scientists will protest that experimenting on a designed thing is less effective than understanding the design directly, to which I respond by challenging the notion that software is designed. I have already discussed the staggering complexity of some software, and this should lead us to treat it as discovered rather than invented, and therefore investigated by experimentation rather than reverse-engineering. We explore simulation as an image of the real world: highly accurate but imperfect, yet faster, cheaper, and more versatile.

George Box, the statistician (what are statistics if not a simulation?), wrote that “all models are wrong, but some are useful”. He defuses Turkle’s claim that simulations mislead by agreeing with it, and then salvages their utility. He deconstructs and reconstructs simulation as a practice in a single sentence. It is certainly worthwhile to make our models less wrong or more transparent when possible (these are often competing aims), and to supplement simulations with the real. Still, as in all discussions of culture-shaping technology, we must conclude that the main approach should be to use the technology more prudently, rather than to ban it or hope that it will go away. Whether simulation acts as a magnifying glass or a funhouse mirror depends on how you look at it.
