Edward Dolnick
The Clockwork Universe, Page 16
Pinpointing a location was an old idea, as old as latitude and longitude. The new twist was to move beyond a static description of the present moment—the fly is 11 inches from here, 9 inches from there; Athens is at 38°N, 23°E—and to picture a moving point and the path it drew as it moved. Take a circle. It can be thought of in a static way, as a particular collection of points—all those points sitting precisely one inch from a given point, for instance. Descartes pictured circles, and other curves, in a more dynamic way. Think of an angry German Shepherd tethered to a stake and straining to reach the boys teasing him, just beyond his reach. The dog traces a circle—or, more accurately, an arc that forms part of a circle—as he moves back and forth at the end of his taut leash. A six-year-old on a swing, pumping with all his might, traces out part of a circle as the swing arcs down toward the ground and then up again.
From the notion of a curve as a path in time, it was but a step to the graphs that we see every day. The key insight was that the two axes did not necessarily have to show latitude and longitude; they could represent any two related quantities. If the horizontal axis depicted “time,” for instance, then a huge variety of numerical changes suddenly took on pictorial form.
The most ordinary graph—changes in housing prices over the last decade, rainfall this year, unemployment rates for the past six months—is an homage to Descartes. A table of numbers might contain the identical information, but a table muffles the patterns and trends that leap from a graph. We have grown so accustomed to graphs that show how something changes as time passes that we forget what a breakthrough they represent. (Countless expressions take this familiarity for granted: “off the charts,” “steep learning curve,” “a drop in the Dow.”) Any run-of-the-mill illustration in a textbook—a graph of a cannonball’s position, moment by moment, as it flies through the air, for example—is a sophisticated abstraction. It amounts to a series of stop-action photos. No such photos would exist for centuries after Descartes’ death. Only familiarity has dulled the surprise.41
Even in its humblest form (in other words, even aside from thinking of a curve as the trajectory of a moving point), Descartes’ discovery provided endless riches. With his horizontal and vertical axes in place, he could easily construct a grid—he could, in effect, tape a piece of graph paper to any spot he wanted. That assigned every point in the world a particular address: x inches from this axis, y inches from that one. Then, for the first time, Descartes could approach geometry in a new way. Rather than think of a circle, say, as a picture, he could treat it as an equation.
A circle consisted of all the points whose x’s and y’s combined in a particular way. A straight line was a different equation, a different combination of x’s and y’s, and so was every other curve. A curve was an equation; an equation was a curve. This was a huge advance, in the judgment of John Stuart Mill “the greatest single step ever made in the progress of the exact sciences.” Now, suddenly, all the tools of algebra—all the well-developed arsenal of techniques for manipulating equations—could be enlisted to solve problems in geometry.
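In modern notation, Descartes' dictionary between pictures and equations is easy to make concrete: the circle of radius r centered at the origin is exactly the set of points whose coordinates satisfy x² + y² = r². A minimal sketch in Python (the unit radius and the numerical tolerance are illustrative choices, not anything from Descartes):

```python
import math

def on_circle(x, y, r=1.0, tol=1e-9):
    """A point (x, y) lies on the circle of radius r centered at the
    origin exactly when x**2 + y**2 equals r**2 -- Descartes'
    translation of a picture into an equation."""
    return abs(x**2 + y**2 - r**2) < tol

# Trace the "moving point": sweep an angle around the circle and
# confirm that every position the point passes through satisfies
# the circle's equation.
for step in range(360):
    theta = math.radians(step)
    assert on_circle(math.cos(theta), math.sin(theta))
```

The same test, run on a point off the curve—say (1, 1)—fails, which is the algebraic way of saying the point is not on the circle.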
But it was not simply that algebra could be brought to bear on geometry. That would have been a huge practical breakthrough, but Descartes’ insight was a conceptual revolution as well. Algebra and geometry had always been seen as independent subjects. The distinction wasn’t subtle. The two fields dealt with different topics, and they looked different. Algebra was a forest of symbols, geometry a collection of pictures. Now Descartes had come along and shown that algebra and geometry were two languages that described a shared reality. This was completely unexpected and hugely powerful, as if today someone suddenly showed that every musical score could be converted into a scene from a movie and every movie scene could be translated into a musical score.
Chapter Thirty-Three
“Euclid Alone Has Looked on Beauty Bare”
Descartes unveiled his new graphs in 1637, in an appendix to a work called Discourse on Method. The book is a milestone in the history of philosophy, the source of one of the best known of all philosophical maxims. In the Discourse Descartes set out his determination to reject all beliefs that could possibly be incorrect and to build a philosophy founded on indisputable truths. The world and everything in it might be an illusion, Descartes argued, but even if the world was but a dream it was his dream, and so he himself could not be merely an illusion. “I think, therefore I am.”
In the same work he added three short afterwords, each meant to demonstrate the power of his approach to philosophy. In an essay called “Geometry,” Descartes talked about curves and moving points; he explained that a curve can be depicted in a picture or captured in an equation and showed how to translate between the two; he discussed graphs and the use of what are known today as Cartesian coordinates. He understood the value of what he had done. “I do not enjoy speaking in praise of myself,” he wrote in a letter to a friend, but he forced himself. His new, graph-based approach to geometry, he went on, represented a leap “as far beyond the treatment in the ordinary geometry as the rhetoric of Cicero is beyond the ABC of children.”
It did. The wonder is that something so useful and so obvious—in hindsight—should have eluded the world’s greatest thinkers for thousands of years. But this is an age-old story. In the making of the modern world, the same pattern has recurred time and again: some genius conceives an abstract idea that no one before had ever grasped, and in time it finds its way so deeply into our lives that we forget that it had to be invented in the first place.
Abstraction is always the great hurdle. Alfred North Whitehead argued that it was “a notable advance in the history of thought” when someone hit on the insight that two rocks and two days and two sticks all shared the abstract property of “twoness.” For countless generations no one had seen it.
The same holds for nearly every conceptual breakthrough. The idea that “zero” is a number, for instance, proved even more elusive than the notion of “two” or “seven.” Whitehead again: “The point about zero is that we do not need to use it in the operations of daily life. No one goes out to buy zero fish. It is in a way the most civilized of all the [numbers], and its use is only forced on us by the needs of cultivated modes of thought.” With zero in hand, we suddenly have a tool kit that lets us start building the conceptual world. Zero opens the way to place notation—we can distinguish 23 from 203 from 20,003—and to arithmetic and algebra and countless other spinoffs.
Negative numbers once posed similar mysteries. Today the concept of a $5 bill is easy to understand, and so is a $5 IOU. A temperature of 10 degrees is straightforward, and so is 10 degrees below zero. But in the history of the human race, for the greatest intellects over the course of millennia, the notion of negative numbers seemed as baffling as the idea of time travel does to us. (Descartes wrestled to make sense of how something could be “less than nothing.”) Numbers named amounts—1 goat, 5 fingers, 10 pebbles. What could negative 10 pebbles mean?
(Lest we grow too smug we should remember the dismay of today’s students when they meet “imaginary numbers.” The name itself [coined by Descartes, in the same essay in which he explained his new graphs] conveys the unease that surrounded the concept from the start. Small wonder. Students still learn, by rote, that “positive times positive is positive, and negative times negative is positive.” Thus, –2 × –2 = 4, and so is 2 × 2. Then they learn a new definition—an imaginary number is one that, when multiplied by itself, is negative! It took centuries and the labors of some of the greatest minds in mathematics to sort it out.)
The ability to conceive strange, unintuitive concepts like “twoness” and “zero fish” and “negative 10 pebbles” lies at the heart of mathematics. Above all else, mathematics is the art of abstraction. It is one thing to see two apples on the ground next to three apples. It is something else to grasp the universal rule that 2 + 3 = 5.
Reality versus abstraction. Photo of cow, left. Painting of cow by Dutch artist Theo van Doesburg, right. © The Museum of Modern Art/licensed by SCALA/Art Resources, NY.
In the history of science, abstraction was crucial. It was abstraction that made it possible to look past the chaos all around us to the order behind it. The surprise in physics, for instance, was that nearly everything was beside the point. Less detail meant more insight. A rock fell in precisely the same way whether the person who dropped it was a beauty in silk or an urchin in rags. Nor did it matter if the rock was a diamond or a chunk of brick, or if it fell yesterday or a hundred years ago, or in Rome or in London.
The skill that physics demanded was the ability to look past particulars to universals. Just as someone working on a geometry problem would not care whether a triangle was drawn in pencil or ink, so a scientist seeking to describe the world would dismiss countless details as true but irrelevant. Much of a modern physicist’s early training consists in learning to transform colorful questions about such things as elephants tumbling down mountainsides into abstract diagrams showing arrows and angles and masses.
The move from elephants to ten-thousand-pound masses echoes the transformation from Aristotle’s worldview to Galileo’s. The battle between the two approaches was as sweeping as a contest can be, far more than a debate over whether the sun circled the Earth or vice versa, big as that issue was. The broader questions had to do with how to study the physical world. For Aristotle and his followers, the point of science was to engage with the real world in all its complexity. To talk of weights plummeting through vacuums or perfect spheres rolling forever across infinite planes was to mistake idealized diagrams for reality. But the map was not the territory. Explorers needed to grapple with the world as it is, not with a desiccated and lifeless counterpart.
In Galileo’s view, this was exactly backward. The way to understand the world was not to focus on its every quirk and blemish but to look beyond those distractions to the deeper truths they obscured. When Galileo talked about whether heavy objects fall faster than light ones, for instance, he imagined ideal circumstances—objects falling in a vacuum rather than through the air—in order to avoid the complications posed by air resistance. But Aristotle insisted that no such thing as a vacuum could exist in nature (it was impossible, because objects fall faster in a thin medium, like water, than they do in a thick one, like syrup. If there were vacuums, then objects would fall infinitely fast, which is to say they would be in two places at once).42 Even if a vacuum could somehow be contrived, why would anyone think that the behavior of objects in those peculiar conditions bore any relation to ordinary life? To speculate about what might happen in unreal circumstances was an exercise in absurdity, like debating whether ghosts can get sunburns.
Galileo vehemently disagreed. Abstraction was not a distortion but a means of seeing truth unadorned. “Only by imagining an impossible situation can a clear and simple law of fall be formulated,” in the words of the late historian A. Rupert Hall, “and only by possessing that law is it possible to comprehend the complex things that actually happen.”
By way of explaining what the abstract, idealized world of mathematics has to do with the real world, Galileo made an analogy to a shopkeeper measuring and weighing his goods. “Just as the accountant who wants his calculations to deal with sugar, silk, and wool must discount the boxes, bales and other packings, so the mathematical scientist . . . must deduct the material hindrances” that might entangle him.
The importance of abstraction was a crucial theme, and Galileo came back to it often. At one point he exchanged his shopkeeper image for a more poetic one. With abstraction’s aid, he wrote, “facts which at first sight seem improbable will . . . drop the cloak which has hidden them and stand forth in naked and simple beauty.”
Galileo won his argument, and science has never turned back. Mathematics remains the language of science because, ever since Galileo, we have taken for granted that abstraction is the pathway to truth.
Chapter Thirty-Four
Here Be Monsters!
Science was now poised to confront one of its great taboos. The study of objects in motion had tempted and intimidated thinkers since ancient times. With his work on ramps and his discovery of the law of falling objects, Galileo had mounted the first successful assault. With his insights into graphs and the curves traced by moving points, Descartes had devised the tools that would make an all-out attack possible. Only one giant obstacle still blocked the way.
How did it happen that the Greeks, whose intellectual daring has never been surpassed, shied away from applying mathematics to objects moving through space? In part because, as we have seen, they deemed impermanence an unworthy subject for mathematics, which investigated eternal truths. But they were skittish, too. That uneasiness was largely due to one man, named Zeno, who lived in a middle-of-nowhere Greek colony in southern Italy sometime around 450 B.C. Zeno figured in one of Plato’s Dialogues (Plato called him “tall and fair to look upon”), but almost all the facts about his life have been lost to us. So has almost every scrap of his writing. The few snippets that have survived have tied philosophers in knots from his day to ours.
Zeno’s arguments sound silly at first, almost childish, for he was one of those philosophers who spoke not in polysyllables and abstractions but in stories. Only four of those tales have survived. Each is a tiny, paradoxical fable, a Borges parable unstuck in time by two millennia.
One story starts with a man standing in a room. His goal is to walk to the far side. Could anything be simpler? But before he can cross the room, Zeno points out, the man must first reach the halfway point. That will take a small but definite amount of time. And then he must cross half the distance that remains. Which will take a certain amount of time. Then half the still-remaining distance, and so on, forever. “Forever” is the key. A trip across the room, then, must pass through an infinite number of stages, each of which takes some definite, more-than-zero amount of time. And that could only mean, Zeno concluded gleefully, that a trip across the room would necessarily take an infinite amount of time.
Zeno certainly didn’t believe that a man in a room was doomed to die before he could reach a doorway on the other side. His challenge to his fellow philosophers was not to cross a room but to find a mistake in his reasoning. On the one hand, everyone knew how to walk from here to there. On the other hand, that seemed impossible. What was going on?
For two thousand years, no one came up with a satisfactory answer. Philosophers debated endlessly, for instance, whether it even made sense to talk of dividing time into ever tinier bits—is time continuous, like a ribbon, or is it more like a series of beads on a string? Can time be divided up forever or does it come in irreducible units, like atoms?
The Greeks quit in frustration early on. They did note that each of Zeno’s tales began as a commonplace story having to do with motion and ended up circling around the strange notion of infinity. The danger zone seemed clear enough. Motion meant infinity, and infinity meant paradox. Having failed to find Zeno’s error, Greek mathematicians opted to do the prudent thing. They put up emergency cones and yellow police tape and made a point of staying well clear of anything that involved the analysis of moving objects. “By instilling into the minds of the Greek geometers a horror of the infinite,” the mathematician Tobias Dantzig observed, “Zeno’s arguments had the effect of a partial paralysis of their creative imagination. The infinite was taboo, it had to be kept out, at any cost.”
That banishment lasted twenty centuries.
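Seen from the far side of that banishment, Zeno's arithmetic can be checked directly: the successive legs of the walk take up a series of times, 1/2 + 1/4 + 1/8 + . . . , with infinitely many positive terms whose sum is nevertheless finite. A numerical sketch (the one-second total crossing time is an assumption here, chosen purely for concreteness):

```python
# Zeno's crossing, checked with modern arithmetic. Assume (for
# illustration) that the whole walk would take one second at a steady
# pace: the successive legs then take 1/2 + 1/4 + 1/8 + ... seconds.
# Infinitely many positive terms -- but the running total never
# exceeds 1.
total = 0.0
leg = 0.5
for _ in range(50):   # after 50 halvings the remaining leg is ~1e-15 s
    total += leg
    leg /= 2

print(total)   # approaches 1.0: the trip takes finite time after all
```

The partial sums creep up on 1 but never pass it, which is exactly the escape hatch Zeno's contemporaries could not see: "forever many stages" does not imply "forever long."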
Occasionally, during the long hiatus, a particularly bold thinker tiptoed to the brink of infinity, glanced down, and then hurried away. Albert of Saxony, a logician who lived in the 1300s, was one of the most insightful of this small band. To demonstrate just how strange a concept infinity is, Albert proposed a thought experiment. Imagine an infinitely long wooden beam, one inch high and one inch deep. Now take a saw and cut the beam into identical one-inch cubes. Since the beam is infinitely long, you can cut an infinite number of cubes.
An infinitely long beam can be cut into blocks and then reassembled into bigger and bigger cubes.
What Albert did next was as surprising as a magic trick. The original beam, with a cross section of only one square inch, certainly did not take up much room. It went on forever, but you could easily hop over it. But if you took cubes from that beam and arranged them cleverly, Albert showed, you could fill the entire universe to the brim. The scheme was simple enough. All you had to do was build a small cube and then, over and over again, build a bigger cube around it.
First, you set a single cube on the ground. Then you made a 3 × 3 × 3 cube with the original cube at its center. That cube in turn became the center of a 5 × 5 × 5 cube, and so on. In time, the skinny beam that you began with would yield a series of colossal cubes that outgrew the room, the neighborhood, the solar system!
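Albert's bookkeeping can be checked with a few lines of arithmetic. The n-th cube in the sequence measures 2n + 1 inches on a side, so each enlargement calls for (2n + 1)³ − (2n − 1)³ new one-inch blocks, every one of them sawed from the inexhaustible beam. A sketch (the stage numbering is my own labeling of the construction, not Albert's):

```python
# Albert of Saxony's construction: wrap each cube in the next
# odd-sided cube. Stage n is a cube (2*n + 1) inches on a side, built
# entirely from one-inch blocks cut from the infinitely long beam.
def blocks_used(n):
    """Total one-inch blocks in the stage-n cube."""
    return (2 * n + 1) ** 3

for n in range(5):
    side = 2 * n + 1
    new_blocks = blocks_used(n) - (blocks_used(n - 1) if n else 0)
    print(f"side {side}: {blocks_used(n)} blocks ({new_blocks} new)")
```

The totals run 1, 27, 125, 343, 729, . . . and grow without bound, which is the paradox in miniature: a beam of negligible cross section supplies enough material to fill any volume you name.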
Once again, the moral was plain. To explore infinity was to tumble into paradox. Like the Greeks fifteen centuries before, medieval mathematicians edged away from the abyss.
Three hundred years later, Galileo ventured back toward forbidden territory. He began so innocuously that it seemed impossible he could fall into danger. Consider, Galileo said, one of the humblest of all intellectual activities: matching, a skill even more primitive than counting. How can we tell if two collections are the same size? By taking one item from the first collection and matching it with one from the other collection. Then we set those two aside and start over. How do we know that there are five vowels? Because we can match them up with our five fingers—the letter a with the thumb, say, and e with the index finger, i with the middle finger, o with the ring finger, and u with the pinky. Each vowel pairs off with a finger; each finger pairs off with a vowel; no member of either group is left over or left out.
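In modern terms, Galileo's matching procedure is a one-to-one correspondence: pairing two collections item by item tests whether they are the same size without counting either one. The vowel-and-finger pairing from the passage can be sketched directly:

```python
vowels = ["a", "e", "i", "o", "u"]
fingers = ["thumb", "index", "middle", "ring", "pinky"]

# Match item by item. Two collections are the same size exactly when
# the pairing leaves no member of either group left over or left out.
pairs = list(zip(vowels, fingers))
same_size = len(pairs) == len(vowels) == len(fingers)

print(pairs)       # [('a', 'thumb'), ('e', 'index'), ...]
print(same_size)   # True: each vowel pairs off with a finger
```

Nothing here requires knowing that the answer is "five"; the matching alone settles the question, which is precisely what makes the procedure more primitive than counting.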