

Proof settles a wickedly prickly question about unfurling crinkly shapes

Polygons come in all sorts of shapes: triangles, squares, hexagons, stars, and a host of other straight-edged forms.

Think of a polygon as a chain of rigid rods connected to each other in two dimensions with flexible joints. Start with any configuration, no matter how complex and intricately indented, or crinkly. Can you always find a sequence of moves that removes the indentations-unfurling the polygon into what mathematicians describe as a convex shape, like a triangle-without ever letting the rods cross each other?

That's not as easy to do as it may sound. Imagine, for example, the outline of a set of fearsome jaws with interlocking teeth.

Computational geometers and assorted others puzzled over this problem for more than a decade, ever since it came to the attention of robotics engineers who were trying to make a robot arm move from place to place. In recent years, the geometric speculation turned into a sort of game. Someone would propose a complicated configuration that appears to stay locked, and other enthusiasts would spend hours, even weeks, looking for the key to opening it up.

Most of those who tackled the polygon problem believed that someone ultimately would come up with a polygon that could not be unfurled, at least not in two dimensions.

No one ever came up with a stumper, however. Every tricky polygonal configuration anyone ever proposed was eventually cracked. "In a few cases, it took several months to find the answer," says Erik D. Demaine, a 19-year-old computer science graduate student at the University of Waterloo in Ontario.

Now, the question is finally settled. Demaine, Robert Connelly of Cornell University, and Günter Rote of the Free University of Berlin have proved that any polygon can be uncrinkled in two dimensions without any sides crossing each other during the unfolding.

Pulling apart this jaw-shaped polygon (blue) without allowing any segments to cross proved to be a tough exercise in computational geometry.

The researchers announced their proof last June in Minneapolis at a Society for Industrial and Applied Mathematics conference on discrete mathematics.

Related geometric problems have practical applications, such as checking the range of movements of a jointed, robotic arm, designing a complicated antenna that opens up properly in space, or studying how a protein strand folds into a compact blob. At the moment, however, the new result appears to have no obvious applications.

The puzzle's real appeal has been aesthetic rather than practical. "It's simply a natural question to ask and a beautiful problem," insists computer scientist Joseph O'Rourke of Smith College in Northampton, Mass.

Over the years, studies of robotic arm movements have suggested purely mathematical questions about morphing one geometric shape into another. One important group of problems concerns chains made up of line segments. Such chains may be closed, like a polygon, or open-ended, like a segmented arc. Lines can also be linked to form a branched structure, termed a tree, where jointed segments sprout from a common vertex.

Suppose, for example, that a tree has eight chains emanating from a central point. Suppose further that each of these chains is made up of three segments folded so that the entire tree looks like a stylized flower with eight petals.

Segment lengths determine whether it's possible to unfurl this eight-petal tree configuration without allowing segments to cross.

In 1998, Sue Whitesides of McGill University in Montreal and a large team of collaborators established that for certain segment lengths, the petals can't be straightened out without letting segments cross. Opening up one petal necessarily impinges on others.

Unlike a two-dimensional chain, this knotted, three-dimensional "knitting needle" chain in space can't be untangled.

Researchers also found examples of three-dimensional chains, both open and closed, that are locked, or impossible to unfold. On the other hand, O'Rourke and Smith College colleague Roxana Cocan proved last year that in the roominess of four- or higher-dimensional space, one can straighten out any open chain and uncrinkle any polygon.

This polygon can be unlocked. The tree on which its shape is based can't be when its branches are close together.

"That left the two-dimensional case as the major unsolved problem," O'Rourke says.

In July of last year, Demaine, Rote, and Connelly all happened to be at a geometry conference in Ascona, Switzerland. In considering the polygon puzzle, Rote suggested that uncrinkling polygons had to require some sort of expansion-as if a balloon were inflating inside the polygon and forcing its sides outward.


"His suggestion was crucial, though we didn't realize why it was so helpful until later," Demaine remarks.

The trio did observe, however, that if they could somehow find a sequence of movements in which the distance between any pair of joints stayed the same or increased, then the segments could never cross. So, the problem could be converted from one about avoiding intersections into one about expanding movements.
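The reformulated condition is easy to state computationally: a motion avoids crossings if no pairwise distance between joints ever decreases. Here is a minimal sketch of that expansiveness check in Python (the function names and the discrete sampling of the motion are our own illustration, not the researchers' method):

```python
import math

def pairwise_distances(config):
    """Distances between every pair of joints in one configuration."""
    return [math.dist(config[i], config[j])
            for i in range(len(config)) for j in range(i + 1, len(config))]

def is_expansive(motion, tol=1e-9):
    """True if no pairwise joint distance ever decreases along the
    sampled motion (a list of successive joint configurations)."""
    for before, after in zip(motion, motion[1:]):
        if any(d1 < d0 - tol for d0, d1 in
               zip(pairwise_distances(before), pairwise_distances(after))):
            return False
    return True
```

Unfolding a bent three-joint chain into a straight one passes this test, for instance, while folding it back up fails.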

The sequence of steps required to unlock the jaws configuration.

By the time Demaine, Rote, and Connelly met again in November, this time in Budapest, Connelly had realized that the notion of expansion could be studied in the context of his own field of expertise: the rigidity of structures. With this concept, the team could look at polygons as frameworks of rods and invisible struts between nonadjacent joints, where rods have to stay the same size but struts can increase in length. The researchers could then consider stress patterns within that structure.

"This allowed us to apply some beautiful theorems in rigidity theory," Demaine notes.

A proof that flat polygonal chains can't lock followed from that insight. Along the way, Demaine, Rote, and Connelly also established that any open chain can always be straightened.

The big surprise is not the proof itself, O'Rourke comments, but the conceptual breakthrough that the opening move in any successful uncrinkling process has to be one in which each joint moves apart or stays the same distance away from every other joint.

Last summer, Demaine, Connelly, and O'Rourke added another element to the original argument. They showed that the area inside an uncrinkling polygon must increase. "This seems almost obvious," Connelly notes, "but the proof that we have is not completely trivial."

Now that the two-dimensional case is solved, Demaine is tangling with other fierce geometric beasts. An origami enthusiast, he's tamed the hyperbolic paraboloid. Demaine developed instructions for folding and gluing this classic saddle shape into complex paper hats and starbursts.

A complex folded paper structure based on the hyperbolic paraboloid presents new geometric challenges.



Source: Smithsonian, Jun2001, Vol. 33 Issue 3, p48, 6p, 6c 

Author(s): Jablow, Valerie


On a wall of her office Margaret Geller has hung a picture of the stickman-her stickman. It is not large, perhaps a foot on each side. As stickmen go, in fact, this one is just average-or mind-bogglingly huge, depending on how you look at it. It is made up of astronomical structures extending for hundreds of millions of light-years. As most of us will see it, however, it's just this, a cartoon figure outlined by glowing galaxies set against the dark emptiness of space.

For the past 20 years Geller, an astronomer and professor at the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, has mapped the universe by plotting positions of galaxies. In 1986 the first of her maps, often called the stickman map, was evidence of something few had believed possible: on the largest scale, the universe has a distinct structure. With more plots, it became clear that the stickman was part of a pattern in which "walls" of galaxies surround vast areas with very few galaxies. Suddenly, the stickman map heralded a sea change in human perception.

In public lectures on her work, Geller likens the universe's 3-D pattern to soap bubbles or foam. (Imagine, if you will, a universe composed of your kitchen sponge, its air pockets delineated by walls of galaxies instead of sponge material.) Though astronomers have argued over which-bubbles or foam-is more accurate, Geller hasn't let that bother her. Having convinced the astronomical community with the stickman map, she is set on cracking the universe's next big puzzle: how it got this way.

"One of the great challenges of modern cosmology is to discover what the geometry of the universe really is."

Margaret Geller's clear, ringing voice, a remnant of childhood acting lessons, reaches even the farthest corners of a sloping lecture hall at the Harvard Science Center. Freshmen to seniors, English to economics majors, listen attentively. Their professor has notes, but rarely consults them. For 15 years, she has taught Astronomy 14, "The Universe and Everything," for anyone interested in the space we inhabit. The course is nearly always filled.

Today's lecture, on Einstein's general theory of relativity, at times seems far afield from Geller's mapping of the universe. She rolls a small metal ball across a suspended rubber mat. By pressing on the mat, she shows how changing the shape of space (the rubber mat) determines how the metal ball moves. Gravity, we learn, is not simply a force, but geometry--the heart of her quest.

"My father was a crystallographer," explains Geller in her CfA office half a mile from Harvard Yard. "He worked on the relationship between the arrangement of atoms in solids and their properties. He showed me the relation between geometry and nature, and I have always been fascinated with it. So it's no accident that I would do projects like these maps."

Now 53, the crystallographer's daughter is busy planning her next act, a map of galaxies seven billion light-years away. The idea is not to chart the nearby, current universe. The stickman map did that--albeit for only a tiny slice of the sky. The goal this time is to discern meaning from galactic geometry over time. Only great distances allow such time travel, for the farther out you look, the farther back in time you go. Andromeda, for instance, the closest large galaxy to our own Milky Way, is more than two million light-years away; its light takes more than two million years to reach our telescopes. Geller's new survey will look deep into space at a seven-billion-year-old universe. Seen at about half the universe's present age, that region should allow Geller to chart geometric changes over time by comparison with the newer universe closer by.

The stickman map indicated that the universe has distinct patterns, shown here by galaxies (dots) lining vast dark areas. Previously, only smaller portions of the universe had been mapped, revealing few patterns.

To illustrate, Geller pulls a physics book off shelves opposite her desk. She flips to a picture of the stickman. On the next page is another map, showing undifferentiated blotches and blobs. Created from measurements of background radiation remaining from the big bang by NASA's COBE (Cosmic Background Explorer) satellite, this map is our best picture of the very early universe, just 200,000 years after it was formed. Geller cradles the book. "The idea is that the COBE map shows that there was very little structure, and the stickman shows that there are very large, very well-defined structures. The contrast between these two specifies the puzzle. I think these are used as a kind of icon for the real problem: How do objects form and evolve in the universe?"

To even start toward an answer for that, you need a big telescope. Geller will use the Smithsonian's newly redesigned one on Mount Hopkins in Arizona. The instrument was initially named the Multiple Mirror Telescope (MMT) because it contained six 72-inch mirrors that acted like one giant mirror 176 inches (or 14 feet, 8 inches) wide. In spring 1999, a single 21-foot mirror replaced the smaller ones. The converted MMT (as it is still called) is now ideally suited for peering at galaxies seven billion light-years away.

But in mapping the universe, it isn't enough just to see galaxies. Their distance must also be understood so that their location in space can be mapped. To do that, astronomers take advantage of an old idea: redshift. In 1929 astronomer Edwin Hubble recognized that galaxies appear to move away from us at speeds proportional to their distances. As a result, the spectrum of light from a galaxy "shifts" toward the red end of the spectrum with the apparent speed, and distance, of that galaxy: the larger the redshift, the greater the galaxy's speed and distance. Thus, to calculate the distance of far-off galaxies, astronomers measure redshift.
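For nearby galaxies the arithmetic behind this is simple: the recession velocity is roughly the speed of light times the redshift, and Hubble's law converts velocity to distance. A rough sketch (the Hubble-constant value here is an assumption for illustration; at the billions-of-light-year distances Geller's new survey targets, this linear approximation breaks down and relativistic corrections are needed):

```python
C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def distance_mpc(z):
    """Approximate distance in megaparsecs for a small redshift z,
    combining v = c * z with Hubble's law v = H0 * d."""
    return C_KM_S * z / H0
```

A galaxy with redshift 0.023, for example, comes out near 100 megaparsecs, roughly 320 million light-years.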

An artist's cross-section depicts an even larger section of the 3-D "bubble" universe that Margaret Geller and colleagues mapped, containing the stickman (red). Galaxies are arrayed on the surfaces of the bubbles.

Geller is finding faster ways to plot redshifts with help from a fellow CfA scientist, Dan Fabricant. He created a hectospec to permit the MMT to quickly scan the skies. The crowded CfA lab where Fabricant's creation has been realized looks more like a machine shop, however, than a testing ground for delicate astronomical instrumentation. Most of the room, in fact, is taken up by a clumsy-looking circular structure. With 300 thin steel rods radiating evenly from its edge toward a two-foot-diameter stainless steel plate in the middle, it has the appearance of a large eye with a two-foot metal pupil. This ersatz "eye," Fabricant explains, will be placed behind the new MMT mirror, with the mirror's gathered light focused onto prisms at the ends of the rods.

The real stars of Fabricant's creation are two small, innocuous-looking "boxes" suspended from metal tracks. Around this CfA lab, they are better known as Fred and Ginger, though at this moment their progenitor refuses to distinguish one from the other. "Your choice," Fabricant laughs. "They look alike."

Named by Geller after that famously fluid pair of cinematic dancers, Fred and Ginger must in fact imitate the grace and skill of their namesakes while aiding in the observation of galaxies. Each robot glides along its track suspended above the steel eye. Moving as fast as three feet per second, the robots position the steel rods on the metal surface at points where observers want to get more information. Concealed within each rod is a fiber-optic strand of delicate synthetic quartz. Though each fiber-optic strand is only 250 micrometers in diameter--about the width of two human hairs--it can transmit a galaxy's light to machines that can determine its spectrum and redshift.

What all this ungainly machinery offers is speed. When Geller began mapping the universe, researchers counted themselves lucky to get redshifts for 30 galaxies a night. Fred and Ginger's swift telescopic "dance," however, will allow Geller and her team to get redshifts for several thousand a night.

For someone who dreams of other worlds, Margaret Geller began her journey to celestial cartography with more earthly concerns. Despite immersing herself from an early age in acting, by the time she went to college at the University of California at Berkeley, she says, "I realized that being an actress wasn't really what I thought, because you repeated the same thing over and over. But I was still fascinated by the idea of theater, performing and being some other character. I liked the attention."

Berkeley's big, impersonal classes were a turnoff for Geller. The ones in science, though, tended to be smaller, and Geller gravitated toward physics. Like her father, she planned to pursue graduate studies in solid-state physics, but was advised against it by one of her professors. "He said you want to look for a field that will be exciting ten years after you get your PhD, because that's when you'll be mature as a scientist." He suggested either astronomy or biophysics. Geller grins at the memory. "I couldn't imagine doing biophysics, whereas astronomy seemed sort of exciting."

Dan Fabricant developed this hectospec, which allows optical fibers to be placed on images of individual galaxies so their redshifts and thus distances can be measured.

After becoming only the second woman to get a physics PhD at Princeton, Geller came to the CfA in 1974 to continue work on galaxy clusters. Then, in the 1980s, researchers found a large, dark region in the universe where nothing could be seen. Like many, Geller and CfA coworker John Huchra did not believe that such empty regions could be common.

"John and I started to do a survey of nearby clusters of galaxies. It took me a long time to understand that the real issue was the existence of large-scale patterns. People thought that these large-scale patterns didn't exist, so why go look for them?" Geller laughs, her hands as animated as her voice. "One of the things that made us recognize that there might be bigger structures was that study of the dark region. Of course, in the great wisdom of thinking you know the answer, I and many others thought it must be a mistake."

So she set out to prove it so. She and her colleagues decided to think big and map not just a thousand galaxies in a strip across the sky, as was the fashion, but many thousands over a period of years. With few qualms about nights in the cold, remote confines of mountaintop telescopes, her collaborators would obtain much of the data while Geller interpreted it back in Cambridge.

Geller decided that if they were to find any pattern, it was most likely over a wide range. Thus, to get an idea of the shape and size of the universe's "continents and oceans," she figured that neither random sampling nor intensive study of one small patch, which previous surveys had used, would work well. Instead, they would examine thin "slices" running across the sky, each six degrees wide. They hoped the very width of the slices would give a good sampling of the universe's structure.

No one could have predicted what that first slice of the sky would show. "We were lucky that it was such a clean picture," Geller says, "because if it hadn't been, we wouldn't have seen it, as few believed in those kinds of big patterns in the universe." Today, with large mapping surveys under way, studying the structure of the universe has never been more in vogue. "The fact that there were such clear patterns," says Geller, "really captured people's imaginations."

Geller enjoys talking about her own captive imagination during a lecture she gave to a group of Hollywood TV producers shortly before winning a MacArthur Fellowship in 1990. "I sat at the head table," she remembers with a smile, "and was introduced to a number of people, one of whom was Michael Eisner. It did not register with me that it was the Michael Eisner. I explained how we did a strip of the universe to figure out if it had 'continents' and 'oceans,' so I did a demo by cutting up a map. I needed somebody to hold it, so I turned to Eisner and asked him. As soon as he stood up, flashbulbs went off, and I thought, 'Ooh, I must be doing a great job!'"

Her laughter echoes against the stickman picture and into the adjacent corridor, where pictures of the MMT hang.

"Where are they? Homework is due. Must be easy."

Margaret Geller sounds uncharacteristically anxious. In this hour before her class, she sits in the Harvard Science Center's café-cum-study hall, waiting for students to show up for help. Now, with only ten minutes to go, the first student finally comes around. A teaching assistant comes to his aid as Geller rushes off to make sure everything is in place for her class--her performance.

Today, most of the props from her previous class have returned, including a large, clear blue inflatable ball that Geller uses to show how parallel lines behave in a closed universe. (They intersect.) But as she spends the first ten minutes explaining how students should prepare for the midterm, just a week away, their tension becomes palpable. This is not an easy course by anyone's description, and what is needed, Geller explains now with carefully staged gesture, tone and smile, is not just correct answers but understanding. "If you show you don't understand," she warns, "we'll deduct points. We're looking to see that you really understand what you've been learning."

For a scientist engaged in work that often takes decades before coming to fruition, Geller has achieved moderate fame not only as an astronomer, lecturer, TV show guest and member of the National Academy of Sciences but also as a teacher and adviser. Today, in fact, one of her former graduate students, a new assistant professor at Brown, meets up with her at CfA after class.

"Ian has taken the most beautiful images of clusters," Geller says by way of introduction. Ian Dell'Antonio himself is far more modest. He is studying where dark matter resides in galaxies and why. But for now, he is concerned about the introductory astronomy course he teaches, where a student of Egyptology keeps him on his toes about what the ancient Egyptians did, and did not, know about the practice of astronomy.

As her former student talks, Geller looks happy, dreamy. In a short film she cowrote on her work as an astronomical mapmaker, Geller included a photograph of herself as a young child. In a light summer dress and blowing a dandelion gone to seed, she appears to be not of the moment but of worlds beyond. Geller wears the exact same look now, as if she is indeed among galaxies, gazing out only occasionally to those who remain earthbound.

"There's something really beautiful about science, that human beings can ask these questions and can answer them," Margaret Geller says about her life as a mapmaker of the universe. "You can make models of nature and understand how it works. There's something exquisite and beautiful about that."


Source: Science News, 12/23/2000 & 12/30/2000, Vol. 158 Issue 26 & 27, p410, 1/3p, 1bw

Author(s): Peterson, Ivars

Nowadays, mathematicians, computer scientists, and others have a variety of speedy computer-based methods for generating hyperbolic patterns and tilings. M.C. Escher didn't have such technology at his disposal. Neither did Henri Poincaré and other 19th-century mathematicians who drew various pictures of the hyperbolic plane. They relied on the traditional tools of geometry-compass and straightedge-to create their diagrams.

In an article to appear in the January 2001 American Mathematical Monthly, however, Chaim Goodman-Strauss of the University of Arkansas in Fayetteville suggests what these procedural details might have been. He offers techniques and instructions for drawing by hand some tilings of the Poincaré model of the hyperbolic plane.

"Remarkably, this may be the first detailed, explicit synthetic construction of triangle tilings of the hyperbolic plane to appear," Goodman-Strauss notes.

His directions are built out of basic tasks familiar to a student of Euclidean geometry: bisecting a line segment, drawing a parallel through a given point, drawing a perpendicular through a given point, constructing a circle through three given points, and a handful of other operations.

A suitable combination of those activities enables one to construct, for example, the hyperbolic line that passes through two given points. To achieve this, one must create a geometric scaffolding of lines and points outside a Poincaré disk to guide the drawing of arcs and points within the circle's boundary.
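In coordinates, that scaffolding has a compact description: the hyperbolic line through two points of the Poincaré disk is the arc of the circle through them that meets the unit circle at right angles, and the orthogonality condition reduces finding that circle to solving two linear equations. A sketch (our own coordinate translation of the idea, not Goodman-Strauss's compass-and-straightedge procedure):

```python
import math

def hyperbolic_line(p, q, tol=1e-12):
    """Center and radius of the circle through points p and q (inside the
    unit disk) that meets the unit circle orthogonally. Orthogonality means
    |c|^2 = r^2 + 1, which with |c - p| = |c - q| = r gives the linear
    system 2 c.p = |p|^2 + 1 and 2 c.q = |q|^2 + 1 for the center c."""
    (px, py), (qx, qy) = p, q
    a1, b1, r1 = 2 * px, 2 * py, px * px + py * py + 1
    a2, b2, r2 = 2 * qx, 2 * qy, qx * qx + qy * qy + 1
    det = a1 * b2 - a2 * b1
    if abs(det) < tol:
        return None  # p, q, and the origin are collinear: the line is a diameter
    cx = (r1 * b2 - r2 * b1) / det
    cy = (a1 * r2 - a2 * r1) / det
    radius = math.hypot(cx - px, cy - py)
    return (cx, cy), radius
```

For the points (0.5, 0) and (0, 0.5), for instance, the arc's center lands at (1.25, 1.25), outside the disk, exactly where the hand construction would place its scaffolding.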

Goodman-Strauss worked out the method by extending his expertise in Euclidean geometry to encompass the types of curves and angles necessary to represent hyperbolic structures. He admits that there's probably nothing original in his contribution. "Surely, this was all well-known at the end of the 19th century, just as it has long been forgotten at the dawn of the 21st," he remarks.

Nonetheless, reviving long-lost construction techniques has value. Such exercises offer an illuminating window on not only Escher's art but also the remarkable work of earlier mathematicians who explored non-Euclidean geometries.

"It is wonderfully satisfying to make these pictures by hand, patiently, with pencil and paper, compass and straightedge," Goodman-Strauss adds. "I encourage you to test this theorem for yourself!"


Source: Mathematics Teacher, Oct2000, Vol. 93 Issue 7, p600, 4p, 2 diagrams 

Author(s): Goetz, Albert

Although the subject of cost allocation has been extensively discussed in the literature of political economics, it has been generally neglected in mathematical literature. However, cost allocation affords a practical extension of fair-division techniques-one that is readily accessible to secondary school students and that gives them a simple yet powerful application of mathematics to real-world problem solving. A study of the concepts and the mathematics involved in cost allocation is most appropriate in a discrete mathematics course or a modeling course, but a case can be made for including this topic in other courses, as well. This article presents a typical cost-allocation problem with possible solutions and includes suggestions for presenting similar problems in the classroom. The basics of the problem follow closely from Young (1994).


Let us consider two towns, Amity and Bender, each of which needs to build a new sewage-treatment plant. Let us further suppose that the cost for Amity to build the sewage-treatment plant is $15 million and that the cost for Bender to construct the plant is $9 million. Were the two towns to pool their resources, the cost of one sewage-treatment plant, built to service both towns, would be $19 million. Should the two towns decide to build only one plant, and if so, how should the cost be divided?

I find that having small groups work on this problem is both productive and enjoyable for students. Each group is first given one of the two towns to represent and asked to plan a negotiating strategy for the town. Each group is then paired with a group that represents the other town so that the groups can work out a solution.

One question that students frequently ask concerns the populations of the towns. I deliberately withhold this information initially, and I instruct students to devise possible solutions without knowing the populations.

Students should recognize that splitting the cost equally is an inferior solution for Bender. Students should devise two preferable kinds of solutions, either on the basis of the cost or on the basis of the savings involved. Splitting the savings equally between the two towns is an example of the latter. Since $5 million, that is, $24 million minus $19 million, represents the amount saved, each town should save $2.5 million, so that the $19 million cost would be divided in the ratio of 12.5 to 6.5, that is, ($15 - $2.5) to ($9 - $2.5).
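The equal-savings arithmetic is quick to verify. A sketch with the article's figures (the variable names are ours):

```python
# Stand-alone and joint costs, in millions of dollars (from the article).
AMITY_ALONE, BENDER_ALONE, JOINT = 15.0, 9.0, 19.0

savings = AMITY_ALONE + BENDER_ALONE - JOINT   # 24 - 19 = 5
amity_pays = AMITY_ALONE - savings / 2         # 15 - 2.5 = 12.5
bender_pays = BENDER_ALONE - savings / 2       # 9 - 2.5 = 6.5
```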

A possible solution on the basis of cost is to allocate costs in proportion to opportunity, that is, stand-alone, costs. In this solution,

9/24 = 3/8

of the cost, or $7.125 million, should be borne by Bender; and

15/24 = 5/8,

or $11.875 million, by Amity. The same solution can be obtained by allocating savings in proportion to opportunity costs, so that the cost for Bender, for example, would be

9 - 9/24 x 5,

or $7.125 million. See table 1; where necessary, numbers in tables are rounded to three decimal places.
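Both routes to the proportional split can be checked in a few lines (again a sketch with the article's figures; variable names are ours):

```python
AMITY_ALONE, BENDER_ALONE, JOINT = 15.0, 9.0, 19.0  # millions of dollars
total = AMITY_ALONE + BENDER_ALONE                  # 24

# Allocate the joint cost in proportion to stand-alone costs...
amity_pays = JOINT * AMITY_ALONE / total            # 5/8 of 19 = 11.875
bender_pays = JOINT * BENDER_ALONE / total          # 3/8 of 19 = 7.125

# ...or, equivalently, allocate the savings in proportion to them.
savings = total - JOINT                             # 5
assert bender_pays == BENDER_ALONE - BENDER_ALONE / total * savings
```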

Students often rebel against first finding solutions without knowing the populations of the towns, and their concern is worthy of classroom discussion. But if the populations are cleverly constructed, the problem becomes more complex rather than easier. For example, if the population of Bender is 10 000 and the population of Amity is 40 000 and costs are allocated in proportion to population, then Amity should pay four-fifths of the cost, or $15.2 million. Such a solution is clearly not in Amity's best interest, just as splitting the cost equally is not in Bender's best interest. A question to ask students is, Under what circumstances does the ratio of the populations of the towns produce a solution that encourages each town to participate? However, if the savings are divided equally among the residents, then Amity pays $11 million, that is,

(15 - 4/5 x 5),

and Bender pays $8 million.
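The population-based variants are just as short to compute (a sketch; populations are from the article and variable names are ours):

```python
AMITY_ALONE, BENDER_ALONE, JOINT = 15.0, 9.0, 19.0  # millions of dollars
POP_A, POP_B = 40_000, 10_000
share_a = POP_A / (POP_A + POP_B)                   # 4/5

# Costs split in proportion to population: worse for Amity than going alone.
amity_by_pop = JOINT * share_a                      # 15.2 > 15

# Savings split equally among residents instead.
savings = AMITY_ALONE + BENDER_ALONE - JOINT        # 5
amity_pays = AMITY_ALONE - savings * share_a        # 15 - 4 = 11
bender_pays = BENDER_ALONE - savings * (1 - share_a)  # 9 - 1 = 8
```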

Three solutions appear to be in the best interests of both towns, as indicated in table 2:

Dividing the savings equally--A (Amity) pays $12.5 million, and B (Bender) pays $6.5 million

Dividing the savings equally among the residents--A pays $11 million and B pays $8 million on the basis of the given populations

Dividing the costs or the savings proportionally to opportunity costs or savings--A pays $11.875 million, and B pays $7.125 million

Which of the three solutions is the fairest? Young (1991) takes an interesting geometric approach to this question. Core is the term that game theorists and political economists give to the set of possible solutions in which neither player, or town, pays more than the opportunity costs. The colored segment in figure 1 represents the core. The x-axis represents Amity's payments; the y-axis, Bender's payments. The line segment joining the points (0, 19) and (19, 0) is the set of all possible allocations; the portion of that line segment between the horizontal at 9 and the vertical at 15 represents the core. Students can easily replicate this figure on a graphing calculator in a window that goes from 0 to 20 in each direction. The equation of the line segment in question is y = -x + 19, and the DRAW menu can be accessed from the home screen, as opposed to the graph, to obtain the desired horizontal and vertical segments. The previously discussed solutions, both those in the core and those outside it, are labeled in the figure.
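Since the core is defined by a pair of inequalities, students can also test candidate allocations numerically rather than graphically (a sketch; the function name is ours):

```python
def in_core(x_a, x_b, joint=19.0, a_alone=15.0, b_alone=9.0, tol=1e-9):
    """True if the allocation covers the joint cost exactly and neither
    town pays more than its stand-alone (opportunity) cost."""
    return (abs(x_a + x_b - joint) < tol
            and x_a <= a_alone + tol
            and x_b <= b_alone + tol)
```

Equal savings (12.5, 6.5) passes this test; splitting costs by population (15.2, 3.8) fails it.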

A good case can be made for choosing the midpoint of the line segment representing the core as the solution to the problem. That point corresponds to equal savings for each town. In that solution, A pays $12.5 million and B pays $6.5 million. When students try to negotiate an equitable settlement in their groups, this solution is often the most appealing.
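
The two-town arithmetic above can be verified with a short script. The following is a sketch in Python, not part of the original activity; the function names and the core test are mine, using the costs and populations given in the article:

```python
# Two-town cost sharing for Amity and Bender. Stand-alone and joint costs
# (millions of $) are those given in the article; function names are mine.
AMITY_ALONE = 15.0
BENDER_ALONE = 9.0
JOINT = 19.0
SAVINGS = AMITY_ALONE + BENDER_ALONE - JOINT  # 5 million saved by cooperating

def split_costs():
    """Each town pays half the joint cost."""
    return JOINT / 2, JOINT / 2

def split_savings():
    """Each town pays its stand-alone cost minus half the savings."""
    return AMITY_ALONE - SAVINGS / 2, BENDER_ALONE - SAVINGS / 2

def split_savings_per_capita(pop_amity, pop_bender):
    """Savings divided equally among all residents of the two towns."""
    share = pop_amity / (pop_amity + pop_bender)
    return AMITY_ALONE - share * SAVINGS, BENDER_ALONE - (1 - share) * SAVINGS

def in_core(pay_amity, pay_bender, tol=1e-9):
    """True when the payments cover the joint cost and neither town
    pays more than it would by building alone."""
    return (abs(pay_amity + pay_bender - JOINT) < tol
            and pay_amity <= AMITY_ALONE and pay_bender <= BENDER_ALONE)

print(split_savings())                          # (12.5, 6.5)
print(split_savings_per_capita(40_000, 10_000)) # approximately (11.0, 8.0)
print(in_core(*split_savings_per_capita(40_000, 10_000)))  # True
```

Allocating the whole cost by population, in_core(15.2, 3.8), fails the test, which is exactly the objection raised above on Amity's behalf.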


We next suppose that a third town, Cordial, is involved. The stand-alone cost for Cordial is $7 million, and the cost for a sewage-treatment plant that would service all three towns is $23 million.

Before students can break up into groups to decide how to solve this problem, costs for all possible coalitions must be assigned. One possible way follows:

The cost for Amity and Bender together remains as before, $19 million.

Were Amity and Cordial to participate together, the cost would be $17 million.

Were Bender and Cordial to participate together, their cost would be $13 million.

If we divide costs among the residents in proportion to population, then Amity contributes $15.862 million, that is, 40/58 of $23 million, a solution that is not in the core, since Amity could build its own plant for $15 million. Dividing savings equally among the residents also fails to fall within the core, because Bender and Cordial can form a coalition that leaves Amity out and build a plant for roughly $0.5 million less than the $13.518 million that this method assigns them jointly. Table 3 summarizes results from the other methods used in the two-town game. For these results, we assume that the population of Cordial is 8000 and that the populations of the other towns are as stated initially. Students can investigate which of these methods fall within the core and which are outside it.
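
One way students might automate the core check is with a short program. The sketch below is mine (the names and structure are assumptions; the coalition costs and populations are the article's). An allocation is in the core when it covers the $23 million joint cost and no town or pair of towns pays more than that coalition could spend by building alone:

```python
# Core test for the three-town game. Coalition costs (millions of $) and
# populations are from the article; A = Amity, B = Bender, C = Cordial.
COST = {
    "A": 15, "B": 9, "C": 7,
    "AB": 19, "AC": 17, "BC": 13,
    "ABC": 23,
}
POP = {"A": 40_000, "B": 10_000, "C": 8_000}

def in_core(pay, tol=1e-6):
    """pay maps each town letter to its payment in millions of $."""
    if abs(sum(pay.values()) - COST["ABC"]) > tol:
        return False  # the allocation must exactly cover the joint cost
    return all(sum(pay[t] for t in coalition) <= cost + tol
               for coalition, cost in COST.items())

def prorated_costs(pop):
    """Joint cost divided among residents, i.e., in proportion to population."""
    total = sum(pop.values())
    return {t: pop[t] / total * COST["ABC"] for t in pop}

def prorated_savings(pop):
    """Each town pays its stand-alone cost minus its per-capita share of savings."""
    savings = COST["A"] + COST["B"] + COST["C"] - COST["ABC"]  # 8
    total = sum(pop.values())
    return {t: COST[t] - pop[t] / total * savings for t in pop}

print(in_core(prorated_costs(POP)))   # False: Amity pays more than its 15 alone
print(in_core(prorated_savings(POP))) # False: Bender and Cordial together pay over 13
print(in_core({"A": 11.129, "B": 6.677, "C": 5.194}))  # True: prop. to stand-alone costs
```

Running each allocation from table 5 through the same test reproduces the sign pattern in the bottom half of the spreadsheet, and adds the pairwise coalition checks that the individual-savings columns cannot show.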

In the classroom, letting students play with the problem before analyzing it in this fashion is advisable; fascinating student interactions can result. If the class is divided into three groups, each representing one of the towns, students can caucus among themselves to determine a "strategy," or method that is equitable from their point of view, to divide costs. Pairs of students from each group are then randomly assigned to negotiate a settlement; in other words, two students from A (Amity), two from B (Bender), and two from C (Cordial) work as one group; another two from A, two from B, and two from C work in a second group; and so on.

Young (1991) presents a geometric analog to the line segment that denoted the core in the two-town game. We construct an equilateral triangle with its altitude numerically equal to the cost if all three towns cooperate. Each vertex of the triangle represents one town's payment of the full cost, and any point in the interior of the triangle represents the towns' splitting the $23 million in some fashion. The core in this game is the shaded area in figure 2.


The game theorist Lloyd S. Shapley developed a cost-allocation method (Shapley 1981) that is similar to his approach to power indices in voting games. We consider all possible permutations of the three towns. In each permutation, the towns are treated as joining the coalition sequentially, with each arriving town paying the difference between the cost of the coalition it creates and the cost of the coalition that preceded it. For example, in the permutation ABC, A joins first and must contribute 15. When B joins the coalition, it must contribute 4, the difference between A's 15 and the cost for AB, which is 19. When C joins, C must also contribute 4, the difference between 23 and 19. The Shapley value for a town is the average of its contributions over all permutations. The values for the problem are summarized in table 4.
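
The permutation procedure just described is easy to mechanize. A minimal sketch, with the article's coalition costs hard-coded and function names of my choosing:

```python
# Shapley-value computation for the three-town game: each town's value is
# its marginal cost contribution averaged over all orders of joining.
from itertools import permutations

COST = {
    frozenset(): 0,
    frozenset("A"): 15, frozenset("B"): 9, frozenset("C"): 7,
    frozenset("AB"): 19, frozenset("AC"): 17, frozenset("BC"): 13,
    frozenset("ABC"): 23,
}

def shapley(towns="ABC"):
    totals = {t: 0.0 for t in towns}
    orders = list(permutations(towns))
    for order in orders:
        joined = frozenset()
        for town in order:
            # The joining town pays the increase in the coalition's cost.
            totals[town] += COST[joined | {town}] - COST[joined]
            joined |= {town}
    return {t: totals[t] / len(orders) for t in towns}

values = shapley()
print({t: round(v, 2) for t, v in values.items()})  # {'A': 11.67, 'B': 6.67, 'C': 4.67}
```

The six inner loops correspond to the six rows of table 4, and the averages match its last row.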


The Shapley solution obtained previously is within the core and is thus a valid solution to the problem, but in general we have no guarantee that the Shapley value will be in the core (Young 1991). Can we guarantee a solution that is in the core of a three-player game whenever a core exists? (Situations in which the core is empty are easy to construct, so the guarantee can apply only when it is not.) One approach extends the midpoint solution of the two-player game, called the standard solution, to three players. Consider the core in figure 2. Here the core is a triangle, although it need not be one: if we move the line designated "A and B pay 19" parallel to itself and away from vertex C, the core changes from a triangle to a quadrilateral to a pentagon. The upper vertex of the core triangle represents B's paying a share of 9, which is B's maximum payment within the core. B's minimum payment, 6, is represented by the line designating "A and C pay 17." We average those payments to obtain 7.5 and construct through that point the horizontal segment with endpoints on the borders of the core. See figure 3. The left endpoint of the segment represents C's minimum cost, and therefore A's maximum cost, given that B pays 7.5. The right endpoint represents A's minimum cost and C's maximum cost. Averaging the maximum and minimum costs for A and for C, we obtain the solution in which A pays 10.75, B pays 7.5, and C pays 4.75.
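
The averaging in this geometric construction reduces to a few lines of arithmetic. The following sketch (variable names are mine) reproduces the standard solution under the article's coalition costs:

```python
# Arithmetic behind the geometric "standard solution" described above.
# Coalition costs (millions of $) are the article's.
TOTAL = 23
A_ALONE, B_ALONE, C_ALONE = 15, 9, 7
AB, AC, BC = 19, 17, 13

# B's extreme payments inside the core, then their average:
b_max = B_ALONE              # 9, the upper vertex of the core triangle
b_min = TOTAL - AC           # 6, from the line "A and C pay 17"
b = (b_max + b_min) / 2      # 7.5

# With B fixed at 7.5, A and C split the remainder; the core bounds each share:
rest = TOTAL - b             # 15.5 to be divided between A and C
a_max = min(A_ALONE, AB - b) # 11.5: limited by A alone and by "A and B pay 19"
c_max = min(C_ALONE, BC - b) # 5.5: limited by C alone and by "B and C pay 13"
a_min = rest - c_max         # 10.0
c_min = rest - a_max         # 4.0

a = (a_max + a_min) / 2      # 10.75
c = (c_max + c_min) / 2      # 4.75
print(a, b, c)               # 10.75 7.5 4.75, which sums to 23
```

The two averages correspond to the two endpoints of the horizontal segment in figure 3.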

A spreadsheet that neatly summarizes all these solutions in the three-town game can be constructed. Such a spreadsheet appears as table 5. Entries in the top half of the spreadsheet represent the costs to each town or coalition of towns for each possible solution. Entries in the bottom half of the spreadsheet represent the savings for each coalition. Any negative entry in the bottom half of the table indicates that the solution does not fall within the core of the game.

Students usually need help in arriving at either the geometric solution or the Shapley value. They do have quite a bit to say about these and the other solutions that they may generate on their own, and talking through the solutions in class has always been interesting and provocative.

Problems of cost allocation are inherently interesting to students and are rich in mathematical applications. Those that come to mind most readily include graphing straight lines, geometric constructions, parallelism, combinatorics, and proportions. The aspect that makes cost-allocation problems so valuable in the classroom, however, is that students are motivated to talk about mathematics with one another and to experience a real-life application of the mathematics that they know.

TABLE 1 Payments by Town on the Basis of Costs or Savings

                        Amity Share        Bender Share
                        (Millions of $)    (Millions of $)
Stand-alone costs           15                  9
Split costs                  9.5                9.5
Split savings               12.5                6.5

TABLE 2 Three Solutions in the Best Interests of Both Towns

                                        Amity Share        Bender Share
                                        (Millions of $)    (Millions of $)
Dividing savings equally                    12.5                6.5
Dividing savings equally
  among residents                           11                  8
Dividing costs or savings in
  proportion to opportunity costs           11.875              7.125

TABLE 3 Payments by Town for the Three-Town Game (Millions of $)

                                                   Amity     Bender    Cordial
Stand-alone costs                                  15         9         7
Split costs                                         7.67      7.67      7.67
Split savings                                      12.33      6.33      4.33
Costs divided in proportion to stand-alone costs   11.129     6.677     5.194
Costs divided among residents                      15.862     3.966     3.172
Savings divided among residents                     9.483     7.621     5.897


TABLE 4 Allocation Using a Combinatoric Approach

                        Individual Contributions (Millions of $)
Coalition order            A         B         C
ABC                       15         4         4
ACB                       15         6         2
BAC                       10         9         4
BCA                       10         9         4
CAB                       10         6         7
CBA                       10         6         7
Total contribution        70        40        28
Shapley value             11.67      6.67      4.67

TABLE 5 Summary of All Solutions in the Three-Town Game

Costs to each town (Millions of $)

                            Amity      Bender     Cordial
Stand-alone costs           15          9          7
Split-cost solution          7.67       7.67       7.67
Split-savings solution      12.33       6.33       4.33
Costs prop. to oppty.       11.129      6.677      5.194
Prorated costs              15.862      3.966      3.172
Prorated savings             9.483      7.621      5.897
Geometric solution          10.75       7.5        4.75
Shapley solution            11.67       6.67       4.67

Savings to each town relative to stand-alone costs (Millions of $)

                            Amity      Bender     Cordial
Split cost                   7.33       1.33      -0.67
Split savings                2.67       2.67       2.67
Costs prop. to oppty.        3.871      2.323      1.806
Prorated costs              -0.862      5.034      3.828
Prorated savings             5.517      1.379      1.103
Geometric                    4.25       1.5        2.25
Shapley                      3.33       2.33       2.33

DIAGRAM: Fig. 1; A diagram of possible solutions in a two-town game

DIAGRAM: Fig. 2; A geometric diagram of possible solutions in a three-town game


Source: Christian Science Monitor, 09/12/2000, Vol. 92 Issue 203, p18, 0p, 13c 

Author(s): Jacobsen, Pamela D.

How a simple paper loop became a major breakthrough - in theory and in practice

Everyone knows that a flat piece of paper has two sides - a back and a front. But there's a way to turn a two-sided strip of paper into a one-sided object. This is not a magic trick. It's called a Mobius (pronounced "MAY-bee-uss" or "MOH-bee-uss") strip.

Jeff Weeks is a geometrician (someone who studies geometry). He's an expert on the Mobius strip, and he also enjoys discussing his favorite math topic, "understanding the three-dimensional world we live in."

Making a Mobius strip from a sheet of paper, Dr. Weeks says, is a good way to start to understand one-sidedness. It illustrates the concept, he says, but it's not perfect. Paper has thickness. A perfect Mobius strip has no thickness.

Imagine a two-dimensional person, Weeks says - someone with height and width but no depth. Now imagine that person walking along a perfect Mobius strip. "They would go around and come back reversed," he says. When they returned to their starting point, "they would be their own mirror image." That's weird.

It gets weirder. Try making Mobius strips and cutting them lengthwise. (See diagrams at left.)

To check whether the figures that result are one-sided or two-sided, use a colored marker. Start coloring one side of the strip, and keep going until you return to where you began. Then look at the strip. If there's a side that is not colored, the figure is two-sided, so it's not a Mobius strip. If you don't see an uncolored side, it is one-sided. Why? Because being one-sided means that what we think of as the "outside" and the "inside" are continuous - the same.

The Mobius strip belongs to a field of geometry called topology (toh-POHL-uh-jee). Those who study it are called topologists. They examine the mathematical properties of various surfaces. One way to describe what topology is all about is to think of an imaginary rubber doughnut.

You can stretch, bend or even shrink this doughnut. What's important to a topologist is not how long or wide the doughnut gets - only that the resulting figure still has a hole.

A perfect Mobius strip can also be stretched or distorted. And as long as it is not cut the wrong way, it remains one-sided. (To learn more about topology, check out Dr. Weeks's Web site: www.northnet.org/weeks/TorusGames/TorusGames.htm)

Who thinks up this stuff?

Nov. 17, 1790, was an important time in history. The French Revolution was being fought. George Washington was serving his first term as president of the United States. And August Ferdinand Mobius had just been born in Saxony (Germany).

Until he was 13, Mobius was home-schooled. His father, a master dancing instructor, had died when he was 3. His mother was a descendant of religious reformer Martin Luther. In 1813, Mobius left for Gottingen University. There he studied under brilliant mathematician Karl Friedrich Gauss.

Mobius was a patient man. He liked to work alone solving math problems. Unfortunately, mathematicians at that time were poorly paid. To earn a living, Mobius became an astronomer and professor. By 1848, he was director of the Leipzig Observatory in Germany.

But mathematics remained his real love. In 1858, Mobius was given credit for discovering the one-sided strip.

Historians, however, don't all agree that this breakthrough should have been named after him. Months before Mobius announced his finding, another mathematician, Johann Benedict Listing, produced an unpublished paper. In it he described Mobius's strip. Biographers suggest that Mobius and Listing were unaware of each other's work.

Was Mobius the first person to think of this shape? The mystery may never be solved. Ten years after announcing his discovery, Mobius died.

Making paper strips is fun. But is there any practical use for Mobius's band? You may be surprised to hear that the answer is yes.

Back in 1949, an inventor patented an abrasive belt that used the principle of one-sidedness. Another 1952 patent showed a conveyor belt that could transport hot objects - and it, too, used Mobius's concept.

And what's it good for, anyway?

Pamela Clute is a mathematics professor at the University of California at Riverside. She points to a number of other applications, including fan belts, typewriter ribbons, conveyor belts, even exercise equipment. Think of it: If the conveyor belt at the grocery checkout had a half-twist in it (and it probably does), the surface of the belt would last twice as long.

Even astrophysicists are interested in the Mobius strip.

Researchers at the University of Warwick (England) have examined an electromagnetic region near the earth they call a tail. Scientists think the tail's charged particles follow a path that looks like a Mobius strip. Mobius may have even answered a bigger question: What is the shape of the universe? Some scientists think the universe may be like one giant Mobius strip!

An old saying goes, "There are two sides to every story." Do you suppose it was written by someone who'd never heard of a Mobius strip?

A piece of paper that has only one side

If you don't believe paper can be one-sided, prove it for yourself! You'll need: notebook paper, tape or a glue stick, scissors, and a pencil. Make the simple one first, at left. Then try the one on the right.

Step 1. Cut a strip, lengthwise, from a piece of notebook paper. Make it about two inches wide.

Step 1A. Cut another strip, same as the first. This time, fold it into thirds, lengthwise.

Step 2. Twist one end of the strip halfway (180 degrees). Keeping the paper twisted, tape or glue the ends together. The resulting band should NOT look like a simple round cylinder.

Step 2A. Now unfold it. Twist and tape (or glue) the paper into a Mobius strip as before. The folds will be your guides for cutting the Mobius strip.

Step 3. Using a pencil, put a dot halfway between the two edges of the strip. Beginning at the dot, draw a line lengthwise down the middle of your Mobius strip. Don't lift the pencil off the paper. Keep drawing until you reach the dot again. See? The strip has only one side!

Step 3A. You're ready to cut again. Use the folds as a guide for your scissors. But before you start, think. What will happen? Look again at the figures at the bottom of the page. Which one do you think the cut-in-thirds Mobius strip will look like?

Step 4. Look at your Mobius strip. What will happen if you cut it in half, lengthwise? Think about it, and look at the figures at the bottom of the page. Choose the one you think the cut-in-half Mobius strip will look like. Now carefully cut your strip in half. Did you guess correctly?

Step 4A. Start cutting. What happened? Did you guess correctly? One-sided figures sure don't behave the way two-sided figures do! Are you ready to try to predict what will happen if you cut a Mobius strip in fourths?

What do you think it will look like?

After you've cut the Mobius strips above, they'll look like one of these four figures. Which ones? (Answers on page 18.)


Source: Representations, Fall 2000, Issue 72, p145, 22p, 4bw

 Author(s): Galison, Peter

Purest Soul

FOR MOST OF THE TWENTIETH century, Paul Dirac stood as the theorist's theorist. Though less known to the general public than Albert Einstein, Niels Bohr, or Werner Heisenberg, for physicists Dirac was revered as the "theorist with the purest soul," as Bohr described him. Perhaps Bohr called him that because of Dirac's taciturn and solitary demeanor, perhaps because he maintained practically no interests outside physics and never feigned engagement with art, literature, music, or politics. Known for the fundamental equation that now bears his name--describing the relativistic electron--Dirac put quantum mechanics into a clear conceptual structure, explored the possibility of magnetic monopoles, generalized the mathematical concept of function, launched the field of quantum electrodynamics, and predicted the existence of antimatter.

In this paper I will explore the meaning of drawing for Dirac in his work. In the thirteen hundred or so pages of his published work between 1924 and World War II, aside from a few graphs and a diagram in a paper that he coauthored with an experimentalist, Dirac had practically no use at all for diagrams. He never used them publicly for calculation, and I know of only two, almost trivial, cases in which he even exploited a figure for pedagogical purposes. His elegant book on general relativity contained not a single figure; his famous textbook on quantum mechanics never departed from words and equations.(n1) If anything, diagrams appear to be antithetical to what Dirac wanted to be "visible" in his thinking. Dirac was known for the austerity of his prose, his rigorous and fundamentally algebraic solution to every physical problem he approached. (Even his fellow physicists found his ascetic style sometimes to be too terse--in response to questions, he would repeat himself verbatim; other physicists sometimes complained that his papers lacked words.) Now it is not the case that diagrams are simply absent from physics. To cite one famous example, there is the diagrammatic-visual reasoning of theorists like James Clerk Maxwell, who insisted that full understanding would only come when joined to imagined, visualizable machines running with gears, straps, pulleys, and handles. Maxwell wanted objects described and drawn that could, in the mind's eye, be grasped with the hands and pulled with the muscles. Similarly visual were Einstein's thought experiments, his use of hurtling trains, spinning disks, and accelerating elevators. Dirac's papers contain none of this. Not even schematic diagrams appear in his writings, visualizations of the sort that Richard Feynman introduced to facilitate calculation and impart intuition about colliding, scattering, splitting, and recombining particles.(n2)

It would seem, then, that the corpus of Dirac's work would be the last place to look for pictures. But in the Dirac archives something remarkable emerges. I was astonished, for example, to find these comments penned by Dirac as he prepared a lecture in 1972: "There are basically two kinds of math [ematical] thinking, algebraic and geometric." This sounds like the theoretical twin of a contrast I have long pursued between laboratory methods that yielded images (analogous here to Dirac's geometric thinking) and those methods predicated on the logical or statistical compilations of data points (analogous to Dirac's algebraic thinking).(n3) So I was intrigued. Given Dirac's austere public predilection for sparse prose, crystalline equations, and the complete absence of diagrams of any sort, I assumed that in the next sentences he would go on to class himself among the algebraists. On the contrary, he wrote in longhand,

A good mathematician needs to be a master of both.

But still he will have a preference for one rather than the other.

I prefer the geometric method. Not mentioned in published work

  because it is not easy to print diagrams.

With the algebraic method one deals with equ[ations] between

  algebraic quantities.

Even tho I see the consistency and logical connections of

  the eq[uations], they do not mean very much to me.

I prefer the relationships which I can visualize

  in geometric terms.

Of course with complicated equations one may not be able to

  visualize the relationships e.g. it may need too

  many dimensions.

But with the simpler relationships one can often get

  help in understanding them by geometric pictures.(n4)

These pictures were not for pedagogical purposes: Dirac kept them hidden. They were not for popularization--even when speaking to the wider public, Dirac never used the diagrams to explain anything. Astonishing: across the great divide of visualization and formalism that has, for generations, split both physics and mathematics, we read here that Dirac published on one side and worked on the other.

The poverty of print technologies in and of itself seems rather insufficient as an explanation for the privacy of Dirac's diagrams, but in another (undated) account his characterization may be more apt: "The most exciting thing I learned [in mathematics in secondary school at Bristol] was projective geometry. This had a strange beauty and power which fascinated me." Projective geometry provided this Bristolean student new insight into Euclidean space and into special relativity. Dirac added, "I frequently used ideas of projective geometry in my research work in later life, but did not refer to them in my published work because I was doubtful whether the average physicist would know enough about them to appreciate them."(n5) Lecturing in Varenna, also in the early 1970s, he recalled the "profound influence" that the power and beauty of projective geometry had on him. It gave results "apparently by magic; theorems in Euclidean geometry which you have been worrying about for a long time drop out by the simplest possible means" under its sway. Relativistic transformations of mathematical quantities suddenly became easy using this geometrical reformulation. "My research work was based in pictures--I needed to visualise things--and projective geometry was often most useful--e.g, in figuring out how a particular quantity transforms under Lorentz transf[ormation]. When I came to publish the results I suppressed the projective geometry as the results could be expressed more concisely in analytic form."(n6)

So Dirac had one way of producing his physics in his private sphere (using geometry) and another of presenting the results to the wider community of physicists (using algebra). Nor is this a purely retrospective account. For there remains among his papers a thick folder of geometrical constructions documenting Dirac's extensive exploration of the way objects transform relativistically. These drawings are not dated but on their reverse sides are writings dated from 1922 forward. None of these drawings were ever published or, as far as I can tell, even shown to anyone (figs. 1 and 2).

The question arises: how ought we to think about Dirac's "suppressed" geometrical work? Dirac himself saw projective geometry as key to his entrance into a new field: "One wants very much to visualize the things which we are dealing with."(n7) Should one therefore split scientific reasoning, as Hans Reichenbach did, between a "logic of discovery" and a "logic of justification"? For Reichenbach there were some patterns of reasoning that were, in and of themselves, sufficient for public demonstration. Other procedures, more capricious and idiosyncratic, could not count as demonstrations though they might serve the acquisition of new ideas.(n8) This distinction saturates the philosophy of science of the postwar era. In Karl Popper's hands it helped to ground his demarcation criterion between science and non-science: only scientific theories, in the context of justification, were falsifiable, only in the realm of the justifiable was there anything dignified of the word logic. "My view," Popper wrote, "may be expressed by saying that every discovery contains 'an irrational element', or 'a creative intuition', in Bergson's sense."(n9) By contrast, Gerald Holton took the private-scientific domain to have a sharply articulable structure that can be characterized by commitments to particular thematic pairs (such as continuum/discretum or waves/particles). According to Holton, this rich, three-dimensional space of private thought is then "projected" onto the plane of public science (defined by the restricted axes of the empirical and the logical). In this empirical-analytic public plane, much of the private dynamic of science is necessarily lost.(n10) Recent work in science studies has either denied the force of the Reichenbachian distinction, or maintained the public/private distinction in other terms. 
For example, Bruno Latour, in his early work with Steve Woolgar, characterized private science by a different grammar: the private is filled with modifiers, modal qualifications that slowly are filtered out until only a public, assertoric language remains.(n11)

Certainly the common view of drawing as preparation would fit this sharp separation of public and private. Private sketches, in virtue of their schematic and exploratory form, would count as the precursors to the completed painting; private scientific visualization and sketches would, without requiring rigor, precede the public, published scientific paper. In such a picture the interior is psychological, aleatory, hermetic, and unrigorous while the exterior is fixed, formally constrained, communicable, and defensible. One thinks here of Sigmund Freud for whom the visual was primary, preceding and conditioning the development of language. To the extent that primitive reasoning is supplanted by language, the pictorial, unconscious form of reason is of a different species from that of conscious, logical, language-based thought.

For some analysts of science, the advantages of the radical public/private distinction is that it brought the private into a psychological domain that opened it up to studies of creativity. For others, the separation permitted a more formal analysis of the context of justification through schemes of confirmation, falsification, or verification. For those who saw published science as merely the last step of private science, the distinction helped shift the balance of interest toward "science-in-the-making" and away from the published end product.

I want here to pose the question differently and, specifically, to challenge the search for intrinsic markers of scientific drawing that would make it in some instances "private" and in others "public." As we learn from Jacques de Caso's essay on Theophile Bra, Bra's drawings surely cannot be understood as the expression of a purely interior or subjective sensibility. For example, at least one of Bra's cosmological sketches was clearly tied to his views of public discussion about changes in the structure of Saturn's rings; Bra even wrote to the French astronomer and optician, Dominique-Francois-Jean Arago, about the problem.(n12) Nor does the geometry of Dirac issue from an isolated form of reasoning. Dirac's fascination with projective geometry is anything but a private language in Ludwig Wittgenstein's sense--as we will see momentarily (fig. 3).

In both instances (Bra's cosmologies, Dirac's geometry) the drawings neither issue entirely from the public domain nor are they sourceless fountains from a reservoir of pure subjectivity. Tracking Bra's worldly iconological sources or Dirac's public sources in geometry would surely prove both possible and profitable. And yet there is something important in the circumstance that both Dirac and Bra constructed a domain of interiority around these practices. It is not that Dirac's geometric drawing or Bra's cosmogenic images were intrinsically interior or psychological--there is no separate logic here that could provide a universal demarcation criterion splitting the public from the private. Rather, both Dirac and Bra drew a line (so to speak) around their drawings. Both assiduously hid their pictures from the public gaze, and refused (in the case of Dirac) even to admit them into his published arguments. One suggestive concept helpful in capturing this delineation of the private might be Gilles Deleuze's notion of the fold. For Deleuze the "content" of what is infolded is not intrinsically separate from the exterior; there is no metaphysical otherness dividing inside from outside. Instead, interiority is itself the product of an outside pulled in, a process that Michel Foucault called subjectivation because it makes contingent, not inevitable, the formation of what is understood as self.(n13)

I want to push this notion of infolding or subjectivation in two directions. First, my concern here is with an aspect of the private that bears on the epistemic, rather than one that posits lines of individuation that separate a self from others and the world. That is, what interests me is the historical production of a kind of reason that comes to count as private (rather than, for example, the production of the psychological sense of self more generally).(n14) Second, building on this epistemic form of subjectivation, my concern is to explore the historical process by which this takes place. On such a view, the question shifts: How does a form of public inquiry and argument (geometry) come to count as private, cordoned-off reason?

Public Geometry, Private Geometry 

The issue, therefore, is not what makes the interior or the private metaphysically distinct from the exterior and public, but rather how this inbound folding occurs over time. How, in our instance, did projective geometry pass from the status of a state religion at the time of the French Revolution to become, for Dirac, a repressed form of knowledge production that must remain consummately private--that is, how was geometry infolded to become, for Dirac, quintessentially an interior form of reasoning? What are the conditions of visibility that govern its place (or suppression) in demonstration?

So a new set of questions displaces those with which I began. Not the philosophical-psychological question: How do interior rules of combination differ from exterior rules of combination? But rather: What are the specific conditions that govern the separation of certain practices from the public domain? Not: How, linguistically or psychologically, does public science get created by successive transformations of the private domain? But rather the inverse: How do the "private" structures of visibility (specifically in drawing) get pulled in from the public arena to form a domain aimed, in the first instance, at the inward regulation of thought (rather than outward communication)? Consequently what we have is not quite the Deleuzian question either--not the transhistorical elucidation of what he calls the topology of the fold, but rather the historical process of the folding itself. What happens, over time and across places, such that features of public demonstration become private forms of reasoning?

During the late eighteenth century, descriptive geometry (later known as projective geometry) was first heralded by Gaspard Monge, preeminent mathematician, political revolutionary, and director of the Parisian Ecole polytechnique. As Lorraine Daston and Ken Alder have shown, Monge's texts and the Polytechnique curriculum more generally were all oriented toward the school's mission to train engineers.(n15) Descriptive geometry, the science of a mathematical characterization of three-dimensional objects in two-dimensional projections, was supposed to serve not only mathematicians and engineers but also the Polytechniciens who would become the nation's future high-level carpenters, stonecutters, architects, and military engineers.(n16) For a generation of Monge's successors--Polytechnicien engineers including Charles Dupin, Michel Chasles, and Jean-Victor Poncelet--descriptive geometry became much more than a useful tool. Geometry, they contended, would hold together reason and the world.

For Monge and his school, physical processes including projection, section, duality, and deformation became means of discovery, proof, and generalization. This physicalized geometry defined a new role for the engineer as an intermediary lodged between the state and the artisan. Geometrical technical drawing, "the geometry of the workshop," became at one and the same time a way of organizing the component parts of complex machines and a scheme for structuring a social and workplace order.(n17) Geometry became a way of being as well as the proper way of founding a basis for mathematics. Indeed, at the Ecole polytechnique, geometry became an empirical science. Auguste Comte came to speak of an empirical mathematics; Lazare Carnot exploited physically motivated transformations in geometry and identified correlates between mathematical entities and their geometrical twins.

Geometry was practical and more than practical. Certainly for Dupin, Chasles, Poncelet, and their students, geometry towered above all other forms of knowledge as the paragon of well-grounded argumentation, better grounded, in particular, than algebra. Projective geometry came to stand at that particular place where engineering and reason crossed paths, and so provided a perfect site for pedagogy. As Monge insisted, projective geometry could play a central role in the "improvement" of the French working class--"Every Frenchman of sufficient intelligence" should learn it, and, more specifically, geometry would be of great value to "all workmen whose aim is to give bodies certain forms."(n18) Henri Saint-Simon and his followers enthusiastically adopted the cause in their utopian planning. Descriptive geometers established classes across Paris, joined the geometrical cause to republicanism, and launched a wider commitment to worker education. In 1825, Dupin proclaimed in his textbook that geometry "is to develop, in industrials of all classes, and even in simple workers, the most precious faculties of intelligence, comparison, memory, reflection, judgment, and imagination.... It is to render their conduct more moral while impressing upon their minds the habits of reason and order that are the surest foundations of public peace and general happiness."(n19) Both before and after the French Revolution, geometry, as Alder notes, became the foundational skill in the training of workers--several thousand passed through the various popular art training programs. Geometry would teach both transferable skills crossing the trades and at the same time stabilize society by locking workers into the social roles previously occupied by fathers.(n20)

Geometry did not, however, survive with the elevated status it had held in France at the high-water mark of the Polytechniciens' dominance. Analysts displaced the geometers. Among their successors was Pierre-Simon Laplace, for whom pictures were anathema and algebra was dogma. It was not in France, therefore, but rather in Britain and Germany that educators, scientists, and even politicians took up the cause of descriptive geometry with the conjoint promise of epistemic and pedagogical improvement. So although the French mathematical establishment had turned decisively to analysis in the last third of the nineteenth century, the British did not. Euclid had long reigned over British education as an exemplar of good sense and a pillar of mental training. By 1870, however, there was a widespread and disquieting sense that the British were losing to the Continent in the race for science-based industry. Geometry was no exception. In January 1871, leading mathematicians of the British Association for the Advancement of Science joined a committee known as The Association for the Improvement of Geometrical Teaching. Their goal was to produce a reform geometry better suited to technical and scientific education, in a form less rigid than that demanded by the purer mathematicians and enforced on schools. New methods of geometrical argument were introduced, and teachers began to step away from the definitions, forms of argument, and order of theorems dictated by the historical Euclidean texts. Such a loosening of Euclid's hold over the schoolchild's mind did not go undisputed. By 1901 the reformers (aiming to join geometry to the practical arts) and conservatives (hoping to preserve its purity) had settled into such powerfully opposed camps that separation seemed inevitable.(n21)

These, then, were some of the nineteenth century's territories of geometry: Up until the 1860s or so, the French celebrated projective geometry as joining high reason with practical engagement of the working class; then this physicalized geometry faded from the scene. In Britain, accompanying the rapid expansion of industrial, technical education, Victorian descriptive geometry became the symbol and means of socio-educational uplift, improving the lot of young workers, including those of the working class. For the mathematician-logician Augustus De Morgan, for example, geometry was a route to knowledge in general--as he argued in 1868: "Geometry is intended, in education,... to [unmask] the tricks which reason plays on all but the cautious, plus the dangers arising out of caution itself."(n22)

Over the last decades of the nineteenth century, the teaching of geometry in Britain gradually moved away from a rigid Euclid-based textual tradition toward a more expansive interpretation of geometry's basis. In part this shift issued from the marketplace. No longer would it be adequate for the teaching of geometry to exemplify sound reasoning as an end utterly unto itself. Instead, geometry came to have a practical significance as well--crucial for the upbringing of engineers, the upper tier of tradesmen, and scientists. One widely distributed encyclopedia of technical education put it bluntly: "It is impossible to overstate the importance of a knowledge of Geometry forming as it does the basis of all mechanical and decorative arts, constituting, in fact, the grand highway from which the various branches of drawing diverge."(n23) At the same time, part of the freeing of geometry from its purely descriptive roots was an increasing emphasis by reformers on "modern" methods including, prominently, non-Euclidean and projective geometry of higher dimensions. Pressured by both practical and research exigencies, geometry came to illustrate sound reasoning not by being purely descriptive of an ideal world, but rather by instantiating a reason best captured by a multiplicity of approaches.(n24)

So much for the general historical condition of geometry as a very public epistemic ideal and educational method: as a defining feature first of republican and then working-class French pedagogy, it continued into the 1870s and beyond in Germany, and re-emerged within the technical education movement of Victorian England. What, then, are the specific historical conditions under which drawing came to count for Dirac both as a reliable home of reason and as a "private" science, judged by him variously as too hard to print, too arcane for physicists to understand, insufficiently persuasive, or insufficiently concise to merit publication?

Dirac's trajectory in mathematical physics took him across several of geometry's territories, temporal-spatial regions where geometrical drawing was laid out differently from one to the next. The goal in following that arc is to see how it came to pass that what had been the most public of mathematical regimes could become, for Dirac as he moved across this shifting map of geometry's fortune, a most private refuge of thought. Here is an account that begins not with an assumed intrinsic dynamics of interior (psychological) style, but rather with the historical creation of a kind of science judged private: the epistemic subjectivation of the geometrical. This is, therefore, not so much an attempt to follow Dirac's biography, but rather to observe Dirac as a kind of movable marker in order to track the conditions under which reasoning through drawing came to be classed as something to be, in his word, "suppressed," interiorized, made to constitute the private scientific subject.

Zero in on Dirac as we turn from the generic Victorian British trade school to Dirac's secondary school, the Merchant Venturers' Technical College, in Bristol. This was where Dirac's father, Charles Dirac, taught, and where Dirac himself received his primary and secondary scientific-engineering education. Created out of various mergers of the Free Grammar and Writing School, the Merchant Venturers' Navigation School, and various forms of the Bristol Diocesan Trade and Mining School, Dirac's school had stabilized both its structure and name in 1894.(n25) Charles Dirac took his degree at the University of Geneva and then, in 1896, came to Merchant Venturers' where he pursued a long career teaching French. A feared figure on the faculty ("a scourge and a terror" according to some of the students), Charles Dirac clearly reveled in the disciplined teaching of language--especially French, but others too, including Esperanto.(n26) Dirac the younger often claimed that he simply stopped speaking to avoid having to perform at home in perfect, grammatically correct French. Dirac's wife put it this way: "His domineering father made it a rule to be spoken to only in French. Often he had to stay silent, because he was unable to express his needs in French. Having been forced to remain silent may have been the traumatic experience that made him a very silent man for life."(n27)

Merchant Venturers' aimed from its outset, as such schools did across Britain, to provide a passage for students into specific trades including bricklaying, plasterwork, plumbing, metalwork, and shoemaking. Navigation had been central to its mission for decades, and continued to be of importance, as did mathematics, chemistry, and physics.(n28) In every way distant from British public education, this school was not, in mission, in curriculum, or in student body, designed to prepare the upper class for their stations in empire through a study of the classics. In the school archives of 1912, for example, there survives correspondence between Merchant Venturers' and the nascent University College, about the advisability of teaching firemen and preparing students for their Mine Manager's Certificates. "The more we do for the working classes," the then headmaster wrote, "the better for the university."(n29) Like so many technical colleges around England, Merchant Venturers' held geometry front and center as a site for training in an appropriate, practical reason.

Paul Dirac entered Merchant Venturers' in 1914, at the age of 12, passing from it immediately into his study of electrical engineering at Bristol University, where the university's program was, in fact, run by Merchant Venturers' as an extension of their primary and secondary programs. Young Paul took up electrical engineering under the supervision of David Robertson; Dirac's notebooks show a diligent student, adept in the technical drawing that had accompanied geometry from France to Germany to England. Month after month, Dirac trained himself to confront the constant stream of practical problems: electrical motors, currents, shunts, circuits, generators. Graduating in June 1921, he had as his principal subjects electrical machinery, mathematics, strength of materials, and heat engines (fig. 4).(n30)

While he was in the midst of this engineering program, Arthur Eddington's 1919 eclipse expedition "hit the world with tremendous impact," and Dirac, along with his fellow engineering students, desperately immersed himself in the new theory of relativity. They picked up what physics they could from Eddington; Dirac even took a relativity course with the philosopher Charlie D. Broad. The relativity Dirac seized upon was not that presented in Einstein's 1905 paper--it was not a relativity of neo-Machian arguments and Gedankenexperimenten about trains and clocks. No, what enthralled Dirac was Hermann Minkowski's spacetime, relativity cast into the diagrams in which startling relativistic results issued from reasoning through well-defined, if not-quite Euclidean, geometry. The appeal of this geometrized relativity was no doubt doubled in virtue of the fact that Dirac himself had struggled, in vain, to formulate a consistent, physically meaningful four-dimensional space-time.(n31)

While a student, Dirac did some practical engineering work with the British Thomson-Houston Works in Rugby and on graduation applied there for a job, for which he was rejected. But Robertson was impressed by young Dirac and, with his engineering colleagues at Merchant Venturers', tried to lure him further into their field. They were bested by the mathematicians, who offered to include Dirac, gratis, in their courses for two years.(n32) Entranced by his Bristol mathematics instructor, Peter Fraser, Dirac seized on projective geometry as his favorite subject and immediately began applying it to relativity. More specifically, Dirac turned his attention to the geometrical version of relativity that Minkowski had developed and made so popular; with projective geometry Dirac could simplify the new space-time geometry even further.(n33)

In 1923 Dirac moved out of Bristol and up to Cambridge, where as a physics research student at St. John's, he entered the research group of Ralph H. Fowler. Fowler immediately introduced Dirac to Bohr's theory of the atom. But it took no time at all for Dirac to gravitate, on the side, back to the geometry he had come to love at Bristol. At 4:15, once a week, aspiring geometers would join the afternoon geometry tea parties held by the acknowledged Cambridge master of the subject, Henry Frederick Baker. Baker himself had just authored the first volume of his multitome text on projective geometry where he announced that whatever algebra was included, the geometry was sufficient unto itself. It was a form of mathematics that, Baker judged, would naturally appeal to engineers and physicists.(n34) Certainly this proved to be the case with Dirac; as Olivier Darrigol, Jagdish Mehra, and Helmut Rechenberg have shown, even Dirac's notation seems to follow in some detail the choices made by Baker in his 1922 text.(n35)

Sometime in 1924--the date cannot be deduced exactly from the handwritten fragment--Dirac delivered a talk to Baker's tea party. This was a tough audience to please. All of Baker's students and associates understood that silences would promptly be filled by grilling, and no quarter would be given in discussion.(n36) Dirac immediately turned to the intersection of relativity with geometry and expressed his heartfelt sense that pure mathematics had nothing over the applied. On the contrary, so Dirac contended, there was a deep mathematical beauty in the specificity of the "actual world" that was obscure to the pure mathematician.(n37) "I think," Dirac penciled onto his handwritten notes,

the general opinion among pure mathematicians is that applied mathematics consists of finding solutions of certain differential equations which are the mathematical expression of the laws of nature. To the pure mathematician these equations appear arbitrary. He can write down many other equations which are equally interesting to him, but which do not happen to be laws of nature. The modern physicist does not regard the equations he has to deal with as being arbitrarily chosen by nature. There is a reason, {which he has to find} why the equations are what they are, of such a nature that, when it is found, the study of these equations will be more interesting than that of any of the others.

Old Newtonian gravity had a force that varied as the inverse square of the distance--but from the pure mathematician's view, there was nothing special about the square: it could have been the cube or the fourth power. But the new theory of gravity, built out of Riemannian geometry, was (from the physicist's perspective) anything but arbitrary.

"Again," Dirac added, "the geometrician at present is no more interested in a space of 4 dim[ensions] than space of any other number of dimensions. There must, however, be some fundamental reason why the actual universe is 4 dim[ensional], and I feel sure that when the reason is discovered 4 dimensional space will be of more interest to the geometrician than any other." Questions of applied mathematics, questions from the physical world, would, he believed, become of central concern to the mathematician. That which is arbitrary in pure terms became fixed, definite, and unique when put into the frame of a real-world geometry.(n39) To draw diagrams, to picture relationships--these were the starting points for grasping why the universe was as it was.

These words would have been music to Baker's ears, for he had little truck with the new, vastly more abstract, rigorous, and algebraic mathematics that was coming into prominence. For example, when the Indian abstract number theorist Ramanujan wrote to the leading mathematicians at Cambridge, Baker had evinced no particular interest in him or his work. G. H. Hardy and J. E. Littlewood welcomed the unknown Indian number theorist as something of a mathematical prophet.(n40) Hardy, who helped shape a generation of British mathematics, emphasized rigor, axiomatic presentations, and perfect clarity in definitions. By stark contrast, Baker began volume 5 of his famous series of works on geometry with the words, "The study of the fundamental notions of geometry is not itself geometry; this is more an Art than a Science, and requires the constant play of an agile imagination, and a delight in exploring the relations of geometrical figures; only so do the exact ideas find their value."(n41)

Dirac's fascination with the confluence of physical reasoning, geometrical pictures, and mathematical aesthetics became a theme to which he returned throughout his life. In a fragment called "The Physicist and the Engineer," Dirac contended that mathematical beauty existed in the approximate reality of the engineer, not in the realm of pure and exact proof. Mathematical beauty was the guide, but it was a guide through the approximate reality of the engineer's world, the one actual world in which we live. Many times Dirac insisted that all physical laws--Isaac Newton's, Einstein's, his own--were but approximations. "I think I owe a lot to my engineering training because it did teach me to tolerate approximations," Dirac recalled. "Previously to that I thought any kind of an approximation was really intolerable.... Then I got the idea that in the actual world all our equations are only approximate.... In spite of the equations' being approximate they can be beautiful."(n42)

In a sense, Dirac's trajectory can be seen as a series of flights from world to world, flights away from home, no doubt from his dominating father specifically. Margit Dirac, his wife, recalled after Paul's death, that "The first letter he wrote to me [in 1935] after his father's death was to say, 'I feel much freer now.'"(n43) But my interest is not in reducing Dirac's views to his familial relations, but rather in following Dirac's path as it traversed a series of worlds of learning, a path that left mechanisms for circuits, circuits for geometry, projective geometry for physics, and eventually projective geometry and engineering for an algebra-inflected physics. It was a path at once ever further from trade work and from home. Schematically, one might summarize Dirac's trajectory as taking him across a surface that folded the geometrical, drawn world of pictures into a private space beneath the algebraic structures of the new quantum physics:

Merchant Venturers' (technical drawing)

Bristol Electrical Engineering (mechanical and circuit diagrams)

Bristol Mathematics (projective geometry)

Cambridge (relativity/projective geometry)

Cambridge (algebraic structures of quantum mechanics).

It was in the final transition beginning in 1925, just a few months after his tea party talk, that Dirac interiorized and privatized geometry, making public presentation purely in the mode of algebra. From this moment on, Dirac spoke the public ascetic language in which he couched all of his great contributions to quantum mechanics. But he had no affective relation to algebra--it was, in his words, an equation language that for him "meant nothing." Reflecting back on the years since his Bristol days in projective geometry, Dirac told an interviewer: "All my work since then has been very much of a geometrical nature, rather than of an algebraic nature."(n44) These are statements characterizing Dirac as a subject in mathematical physics, carving out what is simultaneously a language, an affective structure, a form of argumentation, and a means of exploring the unknown.

The final step toward abstraction and toward the algebraic world for which he came to be considered a heroic figure in physics began in 1925 when his thesis advisor, Fowler, received the proof sheets for a new article from young Werner Heisenberg. The crux was this: Heisenberg had dispensed with the Bohr orbits, he had developed a consistent calculus of the spectra emitted by various atomic transitions, and he had extended Bohr's "old" quantum theory of 1913 to cover a vastly more general domain. For Dirac there was something else that fascinated him in Heisenberg's paper--the mathematics. In the course of his calculations Heisenberg had noted that there were certain quantities for which A times B was not equal to B times A. Heisenberg was rather concerned by this peculiarity. Dirac seized on it as the key to the departure of quantum physics from the classical world. He believed that it was precisely in the modification of this mathematical feature that Heisenberg's achievement lay. It may well be, as Darrigol, Mehra, and Rechenberg have argued, that the very idea of a multiplication that depends on order came from Dirac's prior explorations in projective geometry.(n45) Perhaps it was here that Dirac began to feel that he could recreate the public algebraic world in an interior geometrical one. In any case, from there Dirac was off and running with a new mathematics, accurate predictions, no (public) visualization at any level. On the side, geometry ruled.

Dirac's steps into the unvisualizable domain of quantum mechanics were taken with a certain ambivalence. As he generalized the basic equation of quantum mechanics to include relativity, as he accrued a sense of departing from safe land, the cost to him was movingly captured in an essay he wrote repeatedly over several years titled "Hopes and Fears in Theoretical Physics." In an early fragment Dirac scribbled:

The effect of fears are perhaps not so obvious.

The fears are of two kinds.

The first one is the fear of putting forward a new

  idea which may turn out to be quite wrong.

The fear of sticking one's neck out.

perhaps having to retract and being exposed to humiliation.

It may be that such a fear acts largely subconsciously

and inhibits one from making a bold step forward.

A man may get close to a great discovery and fail to

  make the last vital step.

Possibly it is such a fear that blocks this step.(n46)

In these highly inflected lines, Dirac explicitly touched on his own terror of the humiliating failure that abutted any chance of success, a terror expressed in an ambivalence at once drawn toward risk and success (in the form of the quantum theory he helped create) and yet recoiling with fear from possible failure and "sticking his neck" out from his own place of security. There is here a psychological story of the ambivalence of leaving home, a "home" that is conjointly familial, social, and epistemic--Merchant Venturers' was the workplace of his father, his training ground in engineering, and the place of his first encounter with the projective geometry to which Fraser (and later Baker) had introduced him.

But there is a further story that is only incompletely lodged in this geography of the psychological. This other narrative entails an account of how the logic of drawing was "suppressed"; how thinking through drawing diagrams went from being celebrated across Europe in the mid-nineteenth century to being marginalized at the beginning of the twentieth. To complete this broader narrative properly would take us into the shifting fortunes of geometry in France and Germany, and into fundamental changes in pedagogy at Cambridge.(n47) I have only begun to sketch here the shifting role of persuasive visibilities in physics and their function in shaping an epistemological interior life for Dirac.

The Suppression of Geometry 

To the mathematical generation that came of age after 1900 in England, geometry was no longer a science with claims to being descriptive of the world. Instead geometry, once the sun in the scientific sky, was being eclipsed by the formalized, devisualized system of logical relations exemplified on the Continent by mathematicians associated with David Hilbert and by physicists linked to Heisenberg. In Cambridge, it was Hardy who epitomized this new world of rigor--expressing the new mathematics in the formal relations of number theory, not in a descriptive, physicalized, and drawn geometry. By the early 1920s, drawn diagrams felt ever more like a disappearing trace, a vestige of a system of inquiry, pedagogy, and values that was fast fading from the Cambridge scene. For the historian of mathematics Herbert Mehrtens, the geometrical-intuitive mathematicians in many ways stood for a Gegen-Moderne, an antimodernism fighting to bind mathematics to the physical world and beyond--to psychology, pedagogy, and progressive technology. The moderns, he argues, wanted to bound and restrict mathematics, guarding their authority through a professional autonomy; mathematics, they argued, was not "about" anything exterior to its own formal structure.(n48)

Dirac stood with one foot in the Cambridge of the older sort (through his association with Baker) and the other in the "new" Continent-leaning Cambridge (through his alliance with Heisenberg, Hilbert, and Hardy). It was a choice between Victorian geometrical tea parties and a post-Victorian modernism. Even as Dirac gave his own tea party talk in 1924, Baker's projective geometry was on the wane. Dirac had moved into the wing of Cambridge mathematics that had already lost the war to set the exam standards for the next generation of students and the mathematical standards for the next generation of researchers. Drawing diagrams gave Dirac an older safe point from which to venture into the new and, as he repeatedly emphasized, more fearsome unknown.

Heisenberg's paper of 1925 was antivisual without being, for that, formally and rigorously mathematical. It was physical and yet completely unvisual. Here was a final step away from the legacy of the Ecole polytechnique's physicalized geometry, away from Felix Klein's tactile mathematical models that formed part of his Erlanger program, away from the British Victorian effort to make descriptive geometry into the centerpiece of skilled reason binding head and hand. And yet, as Dirac launched a long and extraordinarily successful career expressed entirely in the language of algebra, there was another Dirac, privately sketching, figuring, reasoning with diagrams, translating the results back into algebra, and all but burying the scaffolding around an interior furnished with formerly public effects.

My inclination, then, is to use the biographical-psychological story not as an end in itself, but rather as a registration of Dirac's arc from Bristol to Cambridge, to an identification with Bohr's and Heisenberg's Continental physics. In that trajectory, Dirac was sequentially immersed in a series of territories in which particular strategies of demonstration were valued. Bristol University was a step away from the technical drawing of Merchant Venturers'; the whole electrical engineering curriculum, with its codified, abstracted, applied physics, removed drawing to a form of depiction less tied to quasi-mimetic technical renderings and linked instead to more functional, topological circuit diagrams. Bristol's applied mathematics again took Dirac further away from engineering, as did Heisenberg's matrix mechanics.

Technical drawings idealize by removing nonfunctional textures; circuit drawings drop any pretense of mimetic depiction--they are topological insofar as they represent relationships and use icons to refer to component parts. Actual spatial positions and distances do not matter. Projective geometry is also topological in this sense--the distances are eliminated from consideration and only intersections and their relative locations count. Projective geometry began in the domain of the physical, crept somewhat away in higher dimensions and its representation of non-Euclidean geometries. But Dirac kept bringing projective geometry back to the world, using it to track each new topic in mathematical physics across a long career.

When Dirac moved to Cambridge to begin studying physics, he took with him this projective geometry and used it to think. But that thinking had now to be conducted only on the inside of a subject newly self-conscious of its separation from the scientific world. Dirac's maturity was characterized again by flight, this time to Heisenberg's algebra, an antivisual calculus that at once broke with the visual tradition in physics and with the legacy of an older school of visualizable, intuition-grounded descriptive geometry. With an austere algebra and Heisenberg's quantum physics, Dirac stabilized his thought through instability: working through a now infolded projective geometry joined by carefully hidden passageways to the public sphere of symbols without pictures.

Freud often argued that what cannot be expressed in private is manifested in public. In a sense I am suggesting the contrary here: at the turn of the century in Britain, projective geometry was shifting away from the status of a state-endorsed liberal epistemology that joined university to factory and toward a form of knowledge that was distinctly second class. Physicalized geometry--geometry grounded in spatial intuitions, visualizations, diagrammatics--collapsed under the language of an autonomous science. In a sense Dirac's suppressed drawings were the hidden remnants of an infolded Victorian world. Public geometry became private reason.


Source: Mathematical Intelligencer, Summer 2000, Vol. 22, Issue 3, p. 60. Authors: Eric Bainville and Bernard Geneves

The classical problems of constructibility using ruler and compass (duplication of the cube, trisection of an angle, quadrature of the circle, construction of the regular polygons) have been solved through the works of René Descartes (1637), Carl Friedrich Gauss (1796), Pierre Laurent Wantzel (1837), and Ferdinand Lindemann (1882) (see [3, 8]).

In a recent paper, Videla [11] characterizes the points constructible by ruler, compass, and a "conic drawing tool." In this article, we present constructions using these three tools. The effective realization of these constructions is possible using the Cabri-Geometry software, which integrates the conics as base objects.

To begin, we will recall the definitions of "constructible" using different tools. Then we give some theorems characterizing constructible objects. Next, we discuss construction of regular polygons. We show some known constructions of the polygons with 5, 7, 9, 13, and 17 sides, and some new constructions of the polygons with 19, 37, 73, and 97 sides. We close with some remarks on the automated construction of regular polygons.



Two distinct points a and b define a unique line (containing a and b) and a unique circle (centered at a and containing b). No line or circle can be defined by two identical points.

DEFINITION 1 (RC-constructible point). RC stands for "ruler & compass." Let A Subset R[sup 2] be a set of points. Let RC(A) be the smallest set containing A such that the intersection points of two primitives (line and circle) defined from points of RC(A) are in RC(A). Trivial intersections (when the number of intersection points is infinite) are considered empty. RC(A) is called the set of points RC-constructible from A.

A complex number x + iy is RC-constructible if the corresponding point (x, y) is RC-constructible from the set {(0,0), (1,0)}.

DEFINITION 2 (RC-constructible number). RC = RC({(0,0), (1,0)}) denotes the set of RC-constructible numbers.

We can now add conics as primitives. Five points in general position lie on a unique algebraic curve of degree 2 (conic). When more than one conic passes through five given points, we will say that these points define no conic. The definitions of RC-constructibility can then be extended as follows.

DEFINITION 3 (C[sub 2]-constructible point). Let A Subset R[sup 2] be a set of points. Let C[sub 2](A) be the smallest set containing A such that the intersection points of two primitives (line, circle, conic) defined from points of C[sub 2](A) are in C[sub 2](A). Trivial intersections are considered empty. C[sub 2](A) is called the set of points C[sub 2]-constructible from A.

DEFINITION 4 (C[sub 2]-constructible number). C[sub 2] = C[sub 2]({(0,0), (1,0)}) denotes the set of C[sub 2]-constructible numbers.

In the definition of "conic-constructible" by Videla [11], conics are defined from a point F (focus), a line L (directrix), and a number e (eccentricity). The conic is then the set of points M of the plane satisfying dist(M, F) = e dist(M, L). These definitions are equivalent (five distinct points of the conic can be RC-constructed from these elements and, conversely, the elements of the conic can be RC-constructed from five points defining the conic uniquely).


THEOREM 1 (Wantzel, 1837). RC is the smallest subfield of C stable under conjugation and square root.

THEOREM 2 (Videla, 1997). C[sub 2] is the smallest subfield of C stable under conjugation, square root, and cube root.

These theorems can be proven using the tools of Galois theory, as explained by Stewart in [10].


Sum and product, bisection and trisection of angles Given two complex numbers x and y, RC-constructions of x + y, xy, 1/x, -x, and the conjugate of x are well known. Given a positive real r, the RC-construction of r[sup 1/2] is also well known. The C[sub 2]-construction of r[sup 1/3] was first presented by Menaechmus (350 BC): given two numbers a and b, he showed how to construct two numbers x and y such that a/x = x/y = y/b using two parabolas (see [3, 11]). The trisection of an arbitrary angle using conics was first accomplished by Pappus (third century); see [11].
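Menaechmus's double-mean-proportional construction can be checked numerically. The coordinate framing below is ours, not the article's: from a/x = x/y = y/b one gets x[sup 2] = ay and y[sup 2] = bx, so x[sup 3] = a[sup 2]b, and with a = 1, b = r the intersection abscissa is the cube root of r. A minimal sketch:

```python
import math

# A sketch of Menaechmus's idea in modern coordinates: the parabolas
# x^2 = a*y and y^2 = b*x meet away from the origin where
# x^4 = a^2*y^2 = a^2*b*x, i.e. x^3 = a^2*b.  With a = 1, b = r this
# gives x = r^(1/3).
def menaechmus_cbrt(r):
    a, b = 1.0, r
    # locate the nonzero intersection by bisection on f(x) = x^3 - a^2*b
    f = lambda x: x ** 3 - a * a * b
    lo, hi = 0.0, max(1.0, r)
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = menaechmus_cbrt(2.0)
y = x * x                              # the point (x, y) lies on x^2 = y ...
assert abs(y * y - 2.0 * x) < 1e-9     # ... and on y^2 = b*x as well
```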

Roots of polynomials of degree 2 and 3 Given reals s and p, the two real zeros of P = X[sup 2] - sX + p, whose sum is s and product is p, are RC(s, p)-constructible. A simple construction uses a Carlyle circle, named after Thomas Carlyle though found earlier by Descartes (see [4]). For A(0, 1) and B(s, p), let c be the circle of diameter [AB] (see Fig. 1). c intersects the x axis if and only if P has one or two real zeros, and in that case, the abscissas of the intersection points are the zeros of P [4].
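The reasoning behind the Carlyle circle can be verified directly: the circle with diameter [AB], A(0, 1), B(s, p), has equation x(x - s) + (y - 1)(y - p) = 0, which on the x axis (y = 0) reduces to x[sup 2] - sx + p = 0. A small numerical sketch (ours, not from the article):

```python
import math

# Carlyle's circle for P = X^2 - s*X + p: diameter from A(0, 1) to B(s, p).
def carlyle_roots(s, p):
    cx, cy = s / 2.0, (1.0 + p) / 2.0        # center of the circle
    r2 = (s * s + (p - 1.0) ** 2) / 4.0      # squared radius
    h2 = r2 - cy * cy                        # squared half-chord on y = 0
    if h2 < 0:
        return []                            # P has no real zeros
    h = math.sqrt(h2)
    return [cx - h, cx + h]

# zeros of X^2 - X - 1: the golden ratio and its conjugate
roots = carlyle_roots(1.0, -1.0)
assert all(abs(x * x - x - 1.0) < 1e-12 for x in roots)
```

Note that h2 works out to (s[sup 2] - 4p)/4, a quarter of the discriminant, which is why the circle meets the x axis exactly when P has real zeros.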

Given reals a, b, and c, let P = X[sup 3] + aX[sup 2] + bX + c. The real roots of P are constructible as the abscissas of the intersection points between two conics defined from points RC-constructible from {a, b, c}. Several convenient choices of the pair of conics can be made, using either a fixed hyperbola (the right hyperbola XY = 1) or a fixed parabola (Y = X[sup 2]):

XY = 1 and cY[sup 2] + X + bY + a = 0. This parabola has Y = -b/(2c) as axis (dashed line) and passes through the points (-a + b - c, -1), (-a - b - c, 1), and (-a, 0) (black dots). See Figure 2.

XY = 1 and X[sup 2] + aX + cY + b = 0, a parabola with axis X = -a/2. See Figure 3.

Y = X[sup 2] and XY + bX + aY + c = 0. This hyperbola has axes parallel to the x and y axes (dashed lines) and the point (-a, -b) as center. Its equation can be rewritten (X + a)(Y + b) = ab - c; that is, X'Y' = ab - c in a coordinate system with origin at the center. See Figure 4.
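Substituting Y = 1/X (or Y = X[sup 2]) into each second equation and clearing denominators recovers the cubic, so every intersection abscissa is a root. A numerical sketch of all three pairs, with an arbitrary sample cubic of ours:

```python
# For each conic pair above, check that a point on the first curve whose
# abscissa is a root of X^3 + a*X^2 + b*X + c also satisfies the second
# equation.  The sample coefficients below are ours (three real roots).
a, b, c = 2.0, -5.0, 1.0
cubic = lambda x: x ** 3 + a * x ** 2 + b * x + c

def bisect(f, lo, hi, it=80):
    # standard bisection; assumes a sign change on [lo, hi]
    for _ in range(it):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

xs = [i / 100.0 for i in range(-1000, 1001)]
roots = [bisect(cubic, u, v) for u, v in zip(xs, xs[1:])
         if cubic(u) * cubic(v) < 0]

for x in roots:
    y = 1.0 / x                                   # on the hyperbola XY = 1
    assert abs(c * y * y + x + b * y + a) < 1e-6  # parabola of Fig. 2
    assert abs(x * x + a * x + c * y + b) < 1e-6  # parabola of Fig. 3
    y2 = x * x                                    # on the parabola Y = X^2
    assert abs(x * y2 + b * x + a * y2 + c) < 1e-6  # hyperbola of Fig. 4
```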

Descartes used such methods involving circles and parabolas to find the roots of third-degree polynomials. The methods presented here can easily be defined as macroconstructions and used as building blocks for complex figures. The constructions of the regular polygons with 73 and 97 sides presented at the end of this article would have been far more difficult to carry out without using these macroconstructions.

Regular Polygons

Let R[sub p] be the regular p-gon having the points (cos(2kpi/p), sin(2kpi/p)) as vertices, k = 0, 1, ..., p - 1.

Gauss [6] has shown that the RC-constructible regular polygons have p = 2[sup n]p[sub 1]p[sub 2] ... p[sub k] sides, where n, k >= 0 and p[sub i] are distinct prime numbers of the form 2[sup a] + 1 (numbers known as Fermat primes). Up to 300 sides, this corresponds to the 38 values 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, and 272 (the primes among them being 2, 3, 5, 17, and 257). This list is given by Gauss in [6] (item 366).

Videla [11] has shown that the C[sub 2]-constructible regular polygons have p = 2[sup n]3[sup m]p[sub 1]p[sub 2] ... p[sub k] sides, where m, n, k >= 0 and p[sub i] are distinct prime numbers of the form 2[sup a]3[sup b] + 1. Up to 300 sides, this corresponds to the 130 values 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 24, 26, 27, 28, 30, 32, 34, 35, 36, 37, 38, 39, 40, 42, 45, 48, 51, 52, 54, 56, 57, 60, 63, 64, 65, 68, 70, 72, 73, 74, 76, 78, 80, 81, 84, 85, 90, 91, 95, 96, 97, 102, 104, 105, 108, 109, 111, 112, 114, 117, 119, 120, 126, 128, 130, 133, 135, 136, 140, 144, 146, 148, 152, 153, 156, 160, 162, 163, 168, 170, 171, 180, 182, 185, 189, 190, 192, 193, 194, 195, 204, 208, 210, 216, 218, 219, 221, 222, 224, 228, 234, 238, 240, 243, 247, 252, 255, 256, 257, 259, 260, 266, 270, 272, 273, 280, 285, 288, 291, 292, and 296.
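Both lists can be regenerated mechanically from the two characterizations. The sketch below (ours) hard-codes the primes of the required forms up to 300 and enumerates the admissible products:

```python
from itertools import combinations

# RC: n = 2^k times a product of distinct Fermat primes.
# C2: n = 2^k * 3^m times a product of distinct primes 2^a*3^b + 1.
def constructible(limit, primes, allow_3_power):
    out = set()
    for r in range(len(primes) + 1):
        for combo in combinations(primes, r):
            prod = 1
            for q in combo:
                prod *= q
            m = prod
            while m <= limit:          # multiply in powers of 2
                t = m
                while t <= limit:      # and, for C2, powers of 3
                    out.add(t)
                    if not allow_3_power:
                        break
                    t *= 3
                m *= 2
    out.discard(1)
    return sorted(out)

rc = constructible(300, [3, 5, 17, 257], False)
c2 = constructible(300, [5, 7, 13, 17, 19, 37, 73, 97, 109, 163, 193, 257], True)
assert len(rc) == 38 and rc[:6] == [2, 3, 4, 5, 6, 8]
assert len(c2) == 130 and set(rc) <= set(c2)
```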

If R[sub p] is known, R[sub 2p] can be RC-constructed from it by bisection of the sides, and R[sub 3p] can be C[sub 2]-constructed from it by trisection of the angles. If R[sub p] and R[sub q] are known, their superposition generates at least one side of R[sub m] where m is the least common multiple of p and q; this side can be replicated to obtain R[sub m].
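The superposition argument is Bezout's identity in disguise: with uq + vp = gcd(p, q), dividing by pq gives u/p + v/q = 1/m where m = lcm(p, q), so u steps of 2pi/p combined with v steps of 2pi/q rotate by exactly one step of R[sub m]. A sketch (our own framing):

```python
import math

# Extended gcd: returns (g, x, y) with a*x + b*y = g.
def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 3, 5
g, u, v = egcd(q, p)                 # u*q + v*p = g = gcd(p, q)
m = p * q // g                       # lcm(3, 5) = 15
angle = u * 2 * math.pi / p + v * 2 * math.pi / q
# the combined rotation equals one step of R_m, up to whole turns
delta = angle - 2 * math.pi / m
assert abs(delta - 2 * math.pi * round(delta / (2 * math.pi))) < 1e-9
```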

Consequently, we have only to give RC-constructions for prime numbers of the form 2[sup a] + 1, which are 3, 5, 17, 257, 65537, .... For 2[sup a] + 1 to be prime, a must be a power of 2; numbers of the form 2[sup 2[sup a]] + 1 are called Fermat numbers. It is considered very unlikely that another prime of this form exists. In June 1998, the smallest Fermat number 2[sup 2[sup a]] + 1 not yet checked for primality was 2[sup 2[sup 24]] + 1 [7]; this number has more than 5 million digits.
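The count of 36 primes 2[sup a]3[sup b] + 1 with b > 0 below 10[sup 6], quoted in the next paragraph, is small enough to reproduce with trial division. A sketch (ours):

```python
# Enumerate primes 2^a * 3^b + 1 with b > 0, up to 10^6.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

limit = 10 ** 6
found = []
b3 = 3
while b3 < limit:                  # b3 = 3^b with b > 0
    a2 = 1
    while a2 * b3 < limit:         # a2 = 2^a with a >= 0
        n = a2 * b3 + 1
        if is_prime(n):
            found.append(n)
        a2 *= 2
    b3 *= 3

found.sort()
assert len(found) == 36
assert found[:4] == [7, 13, 19, 37] and found[-1] == 995329
```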

Similarly, we have only to give C[sub 2]-constructions for prime numbers of the form 2[sup a]3[sup b] + 1, with b > 0; there are 36 values up to 10[sup 6]: 7, 13, 19, 37, 73, 97, 109, 163, 193, 433, 487, 577, 769, 1153, 1297, 1459, 2593, 2917, 3457, 3889, 10,369, 12,289, 17,497, 18,433, 39,367, 52,489, 139,969, 147,457, 209,953, 331,777, 472,393, 629,857, 746,497, 786,433, 839,809, 995,329. There are 8 more candidates in [10[sup 6], 10[sup 7]], 8 in [10[sup 7], 10[sup 8]], 7 in [10[sup 8], 10[sup 9]], 7 in [10[sup 9], 10[sup 10]], and a total of 231 values in [1, 10[sup 30]].

Constructions of R[sub 2], R[sub 3], and R[sub 5] (and, consequently, R[sub n] for n = 2[sup k], n = 3 · 2[sup k], n = 5 · 2[sup k], n = 15 · 2[sup k]) were known in antiquity. For larger polygons, the only geometric constructions known are transpositions of algebraic solutions.

The complex numbers corresponding to the vertices of R[sub n] are zeros of the polynomial Z[sup n] - 1. For n an odd prime, the irreducible factors in Q[Z] of this polynomial are Z - 1 and P[sub n] = Z[sup n - 1] + Z[sup n - 2] + ... + Z + 1.

The idea of the constructions is to decompose the extension of Q by the zeros of P[sub n] into successive extensions of degrees 2 and 3, whose elements can be obtained by C[sub 2]-constructions from the elements of the previous extension, starting with rationals. We will illustrate this in the following paragraphs. For the definition of field extensions, Galois groups, and their connection with geometric constructions, see [10].

P[sub n] has no real zero and (n - 1)/2 pairs of conjugate zeros. The monic polynomial Q[sub n] whose zeros are 2 cos(2kpi/n) has degree (n - 1)/2 and integer coefficients. It is defined by the equation Z[sup (n-1)/2]Q[sub n](Z + 1/Z) = P[sub n](Z). It is easier to construct the zeros of Q[sub n]. In that case, the polygon obtained will be inscribed in the circle of radius 2 centered at the origin. Unless specified, this will be the case in all the following constructions.
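Since the zeros of Q[sub n] are the numbers 2 cos(2kpi/n), the polynomial can be rebuilt numerically and its integer coefficients recovered by rounding; this reproduces the polynomials used in the constructions below. A sketch (ours):

```python
import math

# Build Q_n as the monic polynomial with zeros 2*cos(2*k*pi/n),
# k = 1, ..., (n-1)/2, then round the coefficients to integers.
def Q(n):
    coeffs = [1.0]                           # leading coefficient
    for k in range(1, (n - 1) // 2 + 1):
        r = 2.0 * math.cos(2.0 * math.pi * k / n)
        coeffs = coeffs + [0.0]              # multiply by (X - r)
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]
    return [round(c) for c in coeffs]

assert Q(5) == [1, 1, -1]                    # X^2 + X - 1
assert Q(7) == [1, 1, -2, -1]                # X^3 + X^2 - 2X - 1
assert Q(13) == [1, 1, -5, -4, 6, 3, -1]     # degree (13 - 1)/2 = 6
```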

Gauss has given in [6] an efficient algorithm to build the sequence of equations defining the zeros of Q[sub n]. In the following paragraphs, we try to introduce this algorithm step by step, with some simplifications proposed later in the literature.

R[sub 5]

Q[sub 5] = X[sup 2] + X - 1 has zeros (-1 +/- Square root of 5)/2. These zeros can be constructed with the Carlyle algorithm, as shown in Figure 5. The Carlyle circle has diameter [JB] and center L, with J(0, 1), B(-1, -1), and L(-1/2, 0).

R[sub 7]

We can construct the zeros of Q[sub 7] = X[sup 3] + X[sup 2] - 2X - 1 using one of the methods in the previous section. Figure 6 shows a construction using the method of Figure 2.

R[sub 9]

Although 9 is not a prime and can be constructed from R[sub 3] by trisection, we can give a simple construction of it (Fig. 7). Q[sub 9] = (X + 1)(X[sup 3] - 3X + 1). The zero -1 corresponds to the nontrivial vertices of R[sub 3]. The three other zeros can be obtained by intersecting the hyperbola XY = 1 and the parabola Y[sup 2] + X - 3Y = 0. The axis of this parabola is Y = 3/2, and it contains the points (-4, -1), (2, 1), and (0, 0) (dark dots).

R[sub 13]

With 13 sides, interesting things begin to happen. Q[sub 13] = X[sup 6] + X[sup 5] - 5X[sup 4] - 4X[sup 3] + 6X[sup 2] + 3X - 1 has degree 6. As we can only construct zeros of polynomials of degree 2 and 3, we have two choices:

1. Arrange the six zeros of Q[sub 13] in three groups of two values, each pair being the zeros of a polynomial of degree 2. In that case, the coefficients of the polynomials of degree 2 are in an extension of Q of degree 3.

2. Arrange the six zeros of Q[sub 13] in two groups of three values, each triple being the zeros of a polynomial of degree 3. In that case, the coefficients of the polynomials of degree 3 are in an extension of Q of degree 2.

Three groups of two values There are 15 ways of grouping the 6 zeros x[sub k] of Q[sub 13] into 3 pairs, as enumerated in the following array [the pair (x[sub i], x[sub j]) is denoted i.j]:

         1.2, 3.4, 5.6;      1.2, 3.5, 4.6;      1.2, 3.6, 4.5

         1.3, 2.4, 5.6;      1.3, 2.5, 4.6;      1.3, 2.6, 4.5

         1.4, 2.3, 5.6;      1.4, 2.5, 3.6;      1.4, 2.6, 3.5

         1.5, 2.3, 4.6;      1.5, 2.4, 3.6;      1.5, 2.6, 3.4

         1.6, 2.3, 4.5;      1.6, 2.4, 3.5;      1.6, 2.5, 3.4

Let omega be a primitive 13th root of 1; in this array, k represents the zero x[sub k] = omega[sup k] + omega[sup 13-k] of Q[sub 13]. If we consider, for example, the pair (x[sub 1], x[sub 3]) denoted 1.3, its two values are zeros of X[sup 2] - sX + p, with s = x[sub 1] + x[sub 3] = omega + omega[sup 3] + omega[sup 10] + omega[sup 12] and p = x[sub 1]x[sub 3] = omega[sup 2] + omega[sup 4] + omega[sup 9] + omega[sup 11]. By taking the successive powers of s, reduced modulo P[sub 13](omega), and looking for vanishing rational linear combinations of these powers, we can find a polynomial of minimal degree with rational coefficients having s as a zero: X[sup 6] + 2X[sup 5] - 7X[sup 4] - 6X[sup 3] + 5X[sup 2] + 5X + 1.

Let theta = e[sup 2 pi i/13]. When omega takes the 12 values {theta, theta[sup 2], ..., theta[sup 12]} of the primitive 13th roots of 1, x[sub 1] + x[sub 3] takes only 6 values. These values are the zeros of the polynomial we obtained.

Consequently, we have to select the pairs taking only 3 values of s when omega takes all its 12 possible values. We can restrict our search to the five pairs 1.k for k = 2, 3, 4, 5, 6 (because the other pairs are obtained from these when omega varies). It appears that only the pair 1.5 takes three values of s, which are theta[sup 12] + theta[sup 8] + theta[sup 5] + theta for omega an element of {theta, theta[sup 5], theta[sup 8], theta[sup 12]}, theta[sup 11] + theta[sup 10] + theta[sup 3] + theta[sup 2] for omega an element of {theta[sup 2], theta[sup 3], theta[sup 10], theta[sup 11]}, and theta[sup 9] + theta[sup 7] + theta[sup 6] + theta[sup 4] for omega an element of {theta[sup 4], theta[sup 6], theta[sup 7], theta[sup 9]}. The pairs corresponding to the last two values of s are, respectively, 2.3 and 4.6. The polynomial having these three values of s as zeros is H = X[sup 3] + X[sup 2] - 4X + 1.

We see that among the 15 possible choices, only 1.5, 2.3, 4.6 meets our needs. We can factor Q[sub 13] in the extension of Q by the zeros of H. Using Maple, one would type

  >  alias(alpha = RootOf(_Z^3 + _Z^2 - 4*_Z + 1)):

                                                   # This is H

  > factor(X^6 + X^5 - 5*X^4 - 4*X^3 + 6*X^2 + 3*X - 1, alpha);

                                             # This is Q_13

         (X[sup 2] - alphaX + alpha[sup 2] + alpha - 3)

         (X[sup 2] + (alpha[sup 2] + 2alpha - 2)X + alpha)

         (X[sup 2] + (-alpha[sup 2] - alpha + 3)X
                   - alpha[sup 2] - 2alpha + 2)

The last two factors are obtained from the first one (X[sup 2] - alphaX + alpha[sup 2] + alpha - 3) when alpha takes its three possible values (the zeros of H).

We can now construct R[sub 13] (Fig. 8). First, we construct the zeros of H using the intersection of the hyperbola XY = 1 and the parabola p[sub 1]: Y[sup 2] + X - 4Y + 1 = 0 (dark gray). This parabola has axis Y = 2 (dark gray dashed line) and contains the points (2, 1), (-1, 0), and (3, 2) (dark gray points).

For each of the resulting values of alpha, we have to compute the zeros of X[sup 2] - alphaX + alpha[sup 2] + alpha - 3; that is, X[sup 2] - sX + p with s = alpha and p = alpha[sup 2] + alpha - 3.

For this, we construct the parabola p[sub 2]: Y = X[sup 2] + X - 3 (light gray), giving Y = p from X = s. The parabola p[sub 2] has axis X = -1/2 and contains (0, -3), (1, -1), and (2, 3) (light gray points). Using this parabola, we obtain the points (s, p = s[sup 2] + s - 3) used in Carlyle's method. The three Carlyle circles (lighter gray) give the six zeros of Q[sub 13] (lighter gray vertical dashed lines).
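The whole three-groups-of-two decomposition can be checked numerically: the three zeros alpha of H = X[sup 3] + X[sup 2] - 4X + 1, fed into X[sup 2] - alphaX + (alpha[sup 2] + alpha - 3), must reproduce the six zeros 2 cos(2kpi/13) of Q[sub 13]. A sketch (ours):

```python
import math

def bisect(f, lo, hi, it=100):
    for _ in range(it):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

H = lambda x: x ** 3 + x ** 2 - 4 * x + 1
# H has three real zeros, one in each bracket below
alphas = [bisect(H, a, b) for a, b in [(-4, -1), (0, 1), (1, 2)]]

zeros = []
for al in alphas:
    s, p = al, al * al + al - 3          # quadratic X^2 - s*X + p
    d = math.sqrt(s * s - 4 * p)
    zeros += [(s - d) / 2, (s + d) / 2]

expected = [2 * math.cos(2 * math.pi * k / 13) for k in range(1, 7)]
assert all(abs(z - e) < 1e-9
           for z, e in zip(sorted(zeros), sorted(expected)))
```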

Two groups of three values As we have seen earlier, we have to find a triple 1.u.v of zeros whose associated sum x[sub 1] + x[sub u] + x[sub v] = omega + omega[sup 12] + omega[sup u] + omega[sup 13 - u] + omega[sup v] + omega[sup 13 - v] takes only two values when omega takes all the values theta[sup k] for k = 1, 2, ..., 12. After computations, it appears that only the triple 1.3.4 satisfies this criterion. The two values it takes are the zeros of H = X[sup 2] + X - 3. One factor of Q[sub 13] in the extension of Q by the zeros of H is

             X[sup 3] - alphaX[sup 2] - X - 1 + alpha,

where alpha is an arbitrary zero of H (taking the other zero gives the other factor).

The construction of R[sub 13] (Fig. 9) deduced from this decomposition begins with the construction of the two zeros of H using the Carlyle circle centered at (- 1/2, - 1) and containing J(0, 1).

For each zero alpha of H, we then construct the three zeros of X[sup 3] - alphaX[sup 2] - X - 1 + alpha using the intersection of the hyperbola XY = 1 and the parabola (alpha - 1)Y[sup 2] + X - Y - alpha = 0. This parabola contains the points (0, -1), (0, alpha + 3), (2, 1), (2, alpha + 1), (alpha, 0), and (alpha, alpha + 2). The points (0, -1) and (2, 1) are shared by the two parabolas and are shown in black in Figure 9; the other points used in the construction of the parabolas are filled dots.

R[sub 17]

Gauss established the RC-constructibility of R[sub 17] in 1796 (see [1, 4, 5, 9] for historical details about this discovery and other constructions proposed afterward).

As we have seen for R[sub 13], the construction is obtained by finding the zeros of three successive polynomials of degree 2, the coefficients of a polynomial being constructed from the zeros of the previous polynomial.

We have Q[sub 17] = X[sup 8] + X[sup 7] - 7X[sup 6] - 6X[sup 5] + 15X[sup 4] + 10X[sup 3] - 10X[sup 2] - 4X + 1. Only the pair 1.4 takes four values: those of 1.4, 2.8, 3.5, and 6.7. They are the four zeros of H[sub 2] = X[sup 4] + X[sup 3] - 6X[sup 2] - X + 1. Continuing to the next level, we look for a pair of these pairs taking only two values. The only choices are 1.4 + 2.8 and 3.5 + 6.7, the two zeros of H[sub 1] = X[sup 2] + X - 4. Let alpha be a zero of H[sub 1]; we can factor H[sub 2] as H[sub 2] = (X[sup 2] + (1 + alpha)X - 1)(X[sup 2] - alphaX - 1). Let beta be a zero of X[sup 2] - alphaX - 1; Q[sub 17] can be factored in Q[beta], one of the factors being X[sup 2] - betaX - 3/2 + beta/2 + alphabeta/2 - alpha/2 [the four factors are obtained by substituting the four possible values of (alpha, beta) in this factor].

In the construction (Fig. 10), the values of alpha (1.4 + 2.8 and 3.5 + 6.7) are obtained using the circle centered at (-1/2, -3/2) containing J(0, 1) (dark gray). The values of beta (1.4, 2.8, 3.5, and 6.7) are constructed using two circles, centered at (alpha/2, 0) and passing through J (medium gray). The eight roots of Q[sub 17] are obtained by four circles (light gray). To simplify, instead of constructing -3/2 + beta/2 + alphabeta/2 - alpha/2, we can remark that x[sub 1]x[sub 4] = x[sub 3] + x[sub 5], x[sub 2]x[sub 8] = x[sub 6] + x[sub 7], x[sub 3]x[sub 5] = x[sub 2] + x[sub 8], and x[sub 6]x[sub 7] = x[sub 1] + x[sub 4]. Gauss used sign tests to match the sums with the different roots. We have used numerical approximations.

R[sub 19]

Q[sub 19] = X[sup 9] + X[sup 8] - 8X[sup 7] - 7X[sup 6] + 21X[sup 5] + 15X[sup 4] - 20X[sup 3] - 10X[sup 2] + 5X + 1. As in the case of 17 sides (8 = 2 · 2 · 2), we do not have a choice of the decomposition (9 = 3 · 3). We have to find the zeros of a third-degree polynomial, and then the zeros of other third-degree polynomials whose coefficients depend on the first three zeros.

Gauss proposed a way to find directly the suitable triples or pairs, which we did by systematic search in the last paragraphs. We first find a number p in 1, 2, ..., 18 whose successive powers modulo 19 generate a permutation of 1, 2, ..., 18; for example, p = 2 can be chosen because the successive powers of 2 modulo 19 are 1, 2, 4, 8, 16, 13, 7, 14, 9, 18, 17, 15, 11, 3, 6, 12, 5, 10.

Then, for a divisor m of 18, the successive powers of p[sup m] take only 18/m values; for example, with m = 3, p[sup 3] = 8 and the powers of 8 modulo 19 are 1, 8, 7, 18, 11, 12. We can associate to this sequence the polynomial Z + Z[sup 8] + Z[sup 7] + Z[sup 18] + Z[sup 11] + Z[sup 12], which takes only three values when Z varies among the 19th roots of 1.

More generally, let n be a prime number and p a generator of the multiplicative group {1, 2, ..., n - 1} modulo n. For a divisor m of n - 1, and d satisfying 0 <= d < (n - 1)/m, we define the sequence [m, d] as the polynomial [m, d] = Sigma[sub k = 0, 1, ..., m - 1] Z[sup p[sup d + k(n - 1)/m] mod n]. This notation is slightly different from Gauss's notation in [6].

As an example, with n = 19 and p = 2, we have the following sequences (the sequences of even length can be identified as the sums previously defined):

    [18, 0] = Z[sup 1] + Z[sup 2] + Z[sup 4] + Z[sup 8]

              + Z[sup 16] + Z[sup 13] + Z[sup 7] + Z[sup 14]

              + Z[sup 9] + Z[sup 18] + Z[sup 17] + Z[sup 15]

              + Z[sup 11] + Z[sup 3] + Z[sup 6] + Z[sup 12]

              + Z[sup 5] + Z[sup 10],

    [9, 0] = Z[sup 1] + Z[sup 4] + Z[sup 16] + Z [sup 7]

             + Z [sup 9] + Z[sup 17 ]+ Z[sup 11] + Z[sup 6]

             + Z[sup 5],

    [9, 1] = Z[sup 2] + Z[sup 8] + Z[sup 13] + Z[sup 14]

            + Z[sup 18] + Z[sup 15]+ Z[sup 3] + Z[sup 12]

            + Z[sup 10],

    [6, 0] = 1.7.8 = Z[sup 1] + Z[sup 8] + Z[sup 7] + Z[sup 18]

             + Z[sup 11] + Z[sup 12],

    [6, 1] = 2.3.5 = Z[sup 2] + Z[sup 16] + Z[sup 14]

             + Z[sup 17] + Z[sup 3] + Z[sup 5],

    [6, 2] = 4.6.9 = Z[sup 4] + Z[sup 13] + Z[sup 9]

            + Z[sup 15] + Z[sup 6] + Z[sup 10],

    [3, 0] = Z[sup 1] + Z[sup 7] + Z[sup 11],

    [3, 1] = Z[sup 2] + Z[sup 14] + Z[sup 3],

    [3, 2] = Z[sup 4] + Z[sup 9] + Z[sup 6],

    [3, 3] = Z[sup 8] + Z[sup 18] + Z[sup 12],

    [3, 4] = Z[sup 16] + Z[sup 17] + Z[sup 5],

    [3, 5] = Z[sup 13] + Z[sup 15] + Z[sup 10],

    [2, 0] = x[sub 1] = Z[sup 1] + Z[sup 18],

    [2, 1] = x[sub 2] = Z[sup 2] + Z[sup 17],

    [2, 2] = x[sub 4] = Z[sup 4] + Z[sup 15],

       ...

    [2, 8] = x[sub 9] = Z[sup 9] + Z[sup 10],

    [1, 0] = Z[sup 1],

    [1, 1] = Z[sup 2],

       ...

    [1, 17] = Z[sup 10].

When Z varies among the nth roots of 1, [m, d] takes only (n - 1)/m values, which are [m, 0], ..., [m, (n - 1)/ m - 1].
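The sequence machinery above is easy to mechanize; the helper names below are ours. The exponents of [m, d] are p[sup d + k(n - 1)/m] mod n, and evaluating at a primitive nth root of 1 gives the (real) period values used in the constructions:

```python
import cmath

# Exponents of the sequence [m, d] for prime n and generator p.
def period_exponents(n, p, m, d):
    step = (n - 1) // m
    return [pow(p, d + k * step, n) for k in range(m)]

# n = 19, p = 2: [6, 0] uses Z^1, Z^8, Z^7, Z^18, Z^11, Z^12
assert sorted(period_exponents(19, 2, 6, 0)) == [1, 7, 8, 11, 12, 18]

# Value of [m, d] at Z = a primitive nth root of 1 (a real number,
# since the exponents come in pairs e, n - e for even m).
def period_value(n, p, m, d):
    z = cmath.exp(2j * cmath.pi / n)
    return sum(z ** e for e in period_exponents(n, p, m, d))

# the (n-1)/m values [m, 0], ..., [m, (n-1)/m - 1] sum to -1,
# the sum of all nontrivial nth roots of 1
total = sum(period_value(19, 2, 6, d) for d in range(3))
assert abs(total - (-1)) < 1e-9
```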

For n = 19, it follows that the sums 1.7.8, 2.3.5, and 4.6.9 are the zeros of H[sub 1] = X[sup 3] + X[sup 2] - 6X - 7. Let alpha be a zero of H[sub 1]; one of the three factors of Q[sub 19] in Q[alpha] is H[sub 2] = X[sup 3] - alphaX[sup 2] + (alpha[sup 2] - 5)X + alpha[sup 2] - 6.

The zeros of H[sub 1] are found using the parabola 7Y[sup 2] - X + 6Y - 1 = 0. Its axis is Y = -3/7 and it contains the points (-1, 0), (0, -1), and (15/4, 1/2). The zeros of H[sub 2] are found using the parabola (alpha[sup 2] - 6)Y[sup 2] + X + (alpha[sup 2] - 5)Y - alpha = 0. Its axis is Y = -alpha/2 - 1 and it contains the points (alpha, 0), (-alpha, alpha), and (alpha + 1, -1). The corresponding construction is given in Figure 11.
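The two-level decomposition for 19 sides can be verified end to end: the three zeros of H[sub 1], pushed through H[sub 2], must reproduce the nine zeros 2 cos(2kpi/19) of Q[sub 19]. A numerical sketch (ours):

```python
import math

def bisect(f, lo, hi, it=100):
    for _ in range(it):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def real_roots(f, lo=-10.0, hi=10.0, steps=4000):
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return [bisect(f, u, v) for u, v in zip(xs, xs[1:]) if f(u) * f(v) < 0]

H1 = lambda x: x ** 3 + x ** 2 - 6 * x - 7
zeros = []
for al in real_roots(H1):
    # H_2 with coefficients depending on the zero al of H_1
    H2 = lambda x, al=al: (x ** 3 - al * x ** 2
                           + (al * al - 5) * x + (al * al - 6))
    zeros += real_roots(H2)

expected = [2 * math.cos(2 * math.pi * k / 19) for k in range(1, 10)]
assert len(zeros) == 9
assert all(abs(z - e) < 1e-8
           for z, e in zip(sorted(zeros), sorted(expected)))
```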

R[sub 37]

Q[sub 37] has degree 18, and we have the choice between three decompositions: 2·3·3, 3·2·3, and 3·3·2. We will present a construction using the first one. With the notations of the previous section, we take n = 37 and p = 2. The 2 sequences [18, *] are zeros of H[sub 1] = Z[sup 2] + Z - 9, the 6 sequences [6, *] are zeros of H[sub 2] = Z[sup 6] + Z[sup 5] - 15Z[sup 4] - 28Z[sup 3] + 15Z[sup 2] + 38Z - 1, and the 18 sequences [2, *] are zeros of Q[sub 37]. Let alpha be a zero of H[sub 1]; the zeros beta of H[sub 2] are zeros of Z[sup 3] - alphaZ[sup 2] + (-4 - 2alpha)Z + (7 + 2alpha); the zeros of Q[sub 37] are zeros of 11Z[sup 3] - 11betaZ[sup 2] + (4beta[sup 5] - 2beta[sup 4] - 57beta[sup 3] - 21beta[sup 2] + 97beta - 21)Z + (8beta[sup 5] - 4beta[sup 4] - 114beta[sup 3] - 53beta[sup 2] + 194beta + 2). The coefficients of this last polynomial are not easy to construct. Fortunately, they can be expressed using linear combinations of longer sequences. We have the following relations, which allow us to construct R[sub 37]:

  [18, 0], [18, 1] zeros of Z[sup 2] + Z - 9,

  [18, 1] < [18, 0],

  [6, 0], [6, 2], [6, 4] zeros of

       Z[sup 3] - [18, 0] Z[sup 2] + (-4 - 2 [18, 0])Z

       + (7 + 2[18, 0]),

  [6, 4] < [6, 5] < [6, 1] < [6, 3] < [6, 0] < [6, 2],

  [2, 0], [2, 6], [2, 12] zeros of

       Z[sup 3] - [6, 0] Z[sup 2] + ([6, 0] + [6, 4])Z

       + (-2 - [6, 1]),

  [2, 17] < [2, 7] < [2, 4] < [2, 15] < [2, 13] < [2, 11]

          < [2, 10] < [2, 12] < [2, 6] < [2, 16] < [2, 3]

          < [2, 14]

          < [2, 9] < [2, 5] < [2, 2] < [2, 8] < [2, 1]

           < [2, 0].

The other relations are obtained by "shifting" the sequences: [m, d] is replaced by [m, (d + 1) mod (n - 1)/m].

In the proposed construction (Fig. 12), [18, *] are constructed using a circle (dark gray) of center (-1/2, - 4). [6, *] are constructed using two parabolas (medium gray). [2, *] are constructed using the six other parabolas (light gray).

R[sub 73] and R[sub 97]

As suggested by Bishop in [2], the construction can be restricted to give only one of the zeros of Q[sub n]. All the vertices of a regular polygon with a prime number of sides can be obtained by reflections from any pair of vertices. The reflections correspond to products of nth roots of 1.

For n = 73, we can choose p = 5 and the decomposition 36 = 2·2·3·3. This leads to the following equations:

  [36, 0], [36, 1]          zeros of Z[sup 2] + Z - 18,

  [18, 0], [18, 2]          zeros of
                          Z[sup 2] - [36, 0] Z + 4 [36, 0] + 5 [36, 1],

  [6, 0], [6, 4], [6, 8]    zeros of

                Z[sup 3] - [18, 0] Z[sup 2] - (2 + [18, 0]

                    + [18, 3]) Z + 3 + 2 [18, 0] - 2 [18, 3],

  [2, 0], [2, 12], [2, 24] zeros of

                Z[sup 3] - [6, 0] Z[sup 2] + ([6, 0] + [6, 9])

                                               Z - 3 - [6, 8].

Using these equations, "shifted" when needed, we can construct (Fig. 13) the following 15 values in 6 steps (each line corresponds to the construction of the zeros of a second- or third-degree polynomial):

    [36, 0] = 3.772,      [36, 1] = -4.772,

    [18, 0] = 5.397,      [18, 2] = -1.625,

    [18, 1] = 0.047,      [18, 3] = -4.819,

     [6, 0] = 4.966,       [6, 4] = -1.967, [6, 8] = 2.398,

     [6, 1] = -1.580,      [6, 5] = -1.538, [6, 9] = 3.166,

     [2, 0] = 1.992,      [2, 12] = 1.429, [2, 24] = 1.544.

For n = 97, we can use p = 5 and the decomposition 48 = 2·2·2·2·3, which corresponds to the equations

  [48, 0], [48, 1]       zeros of Z[sup 2] + Z - 24,

  [24, 0], [24, 2]       zeros of

                            Z[sup 2] - [48, 0]Z + 2[48, 0] - 5,

  [12, 0], [12, 4]       zeros of

        Z[sup 2] - [24, 0] Z + 2 [48, 1] + 3 [24, 2] - [24, 1],

  [6, 0], [6, 8]         zeros of

                      Z[sup 2] - [12, 0] Z + [24, 1] + [12, 7],

  [2, 0], [2, 16], [2, 32] zeros of

Z[sup 3] - [6, 0] Z[sup 2] + ([6, 0] + [6, 11]) Z - 2 - [6, 2].

From these equations, we can deduce the following construction (Fig. 14) in 11 steps involving 23 values:

    [48, 0] = 4.424,   [48, 1] = -5.424,

    [24, 0] = 1.189,   [24, 2] = 3.234,

    [24, 1] = 2.104,   [24, 3] = -7.529,

    [12, 0] = 2.493,   [12, 4] = -1.303,

    [12, 1] = 5.304,   [12, 5] = -3.199,

    [12, 2] = 0.079,   [12, 6] = 3.155,

    [12, 3] = -3.318,  [12, 7] = -4.210,

     [6, 0] = -0.666,   [6, 8] = 3.159,

     [6, 2] = 1.531,    [6, 10] = -1.452,

     [6, 3] = -0.441,   [6, 11] = -2.877,

     [2, 0] = 1.995,    [2, 16] = -1.379, [2, 32] = -1.282.

Toward Automatic Construction

In the previous sections, we have presented progressively more efficient ways to build the relations leading to constructions of the regular polygons. The final version can be summarized as follows.

Let n be an odd prime number of the form 2[sup a]3[sup b] + 1. For example, n = 2[sup 8] · 3 + 1 = 769. The first step is to find p such that the powers of p modulo n generate the set {1, 2, ..., n - 1}. For example, p = 11 for n = 769. Then, choose the decomposition of (n - 1)/2 into an ordered product of 2's and 3's. For example, for n = 97 = 2[sup 5] · 3 + 1, we have five choices: 2·2·2·2·3, 2·2·2·3·2, 2·2·3·2·2, 2·3·2·2·2, and 3·2·2·2·2. In the general case, we have C(a + b - 1, b) possible choices (a binomial coefficient). It turns out to be more convenient to first solve second-degree polynomial equations.
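The first step, finding a generator p, is a direct search; the function name below is ours. p works exactly when its multiplicative order modulo n is n - 1:

```python
# p generates {1, ..., n-1} modulo n iff its powers sweep the whole set.
def is_generator(p, n):
    seen, x = set(), 1
    for _ in range(n - 1):
        x = (x * p) % n
        seen.add(x)
    return len(seen) == n - 1

assert is_generator(2, 19)      # used above for R_19 and, with n = 37, R_37
assert is_generator(5, 73) and is_generator(5, 97)
assert is_generator(11, 769)    # the example for n = 769
```

A full implementation would try p = 2, 3, ... until `is_generator` succeeds.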

The next step is to find the second- and third-degree polynomials whose zeros are the sequences corresponding to the previous decomposition. For n = 433 = 2[sup 4] · 3[sup 3] + 1, with p = 5 and the decomposition 216 = 2·2·2·3·3·3, the lengths of the sequences are 216, 108, 54, 18, 6, and 2. We have to find the polynomials whose zeros are {[216, 0], [216, 1]}, {[108, 0], [108, 2]}, {[54, 0], [54, 4]}, {[18, 0], [18, 8], [18, 16]}, {[6, 0], [6, 24], [6, 48]}, and {[2, 0], [2, 72], [2, 144]}. The coefficients of these polynomials can be expressed as linear combinations of longer sequences, with integer coefficients. The first polynomial has integer coefficients.

The whole polygon can be deduced from the knowledge of only one of the sequences [2, *], say [2, 0]. We have to find the smallest set (or at least a reasonably small set) of sequences allowing the computation of [2, 0]. See the constructions of R[sub 73] and R[sub 97] for example (in the construction of R[sub 37], we constructed all the [2, *] sequences, without this simplification).

Finally, use the constructions of the zeros of second- and third-degree polynomials to build the successive sequences and, eventually, [2, 0]. This value gives a second vertex--we already have the point (2, 0)--of the polygon, which can be used to build all the others using reflections.


After recalling definitions and results about the constructibility of a geometric object, we have shown by more and more efficient methods how the works of Gauss, computer algebra systems (Maple), and dynamic geometry software (Cabri-Geometry, distributed by Texas Instruments) could be used together to construct regular polygons, using ruler, compass, and simple conics. In particular, we have given the list of small C[sub 2]-constructible polygons, and presented new C[sub 2]-constructions of the regular polygons with 19, 37, 73, and 97 sides.

The ancient Greeks gave precedence to constructions using only ruler and compass, not because they did not know about the other curves (they invented a number of mechanical devices drawing some algebraic curves of degrees 2, 3, 4, and more), but for the neatness, perfection of reasoning, and the simplicity of the shapes involved (circle and straight line).

Today's tools such as Cabri-Geometry enlarge the notion of geometric simplicity by allowing the manipulation of algebraic expressions (the sequences defined by Gauss) and complex geometric objects (the conic sections).

Some generalizations of the questions treated here may be considered:

1. What does the set of constructible numbers become if we consider algebraic curves of higher degrees?

2. What is the asymptotic distribution of the primes of the form 2[sup a]3[sup b] + 1?

3. Can the C[sub 2]-constructions of the regular polygons be fully automated?

4. Given n, what is the most efficient way of C[sub 2]-constructing R[sub n], in terms of number of steps and in terms of precision of the intersections involved (avoiding intersection between near-tangent curves)?


Source: Mathematics Teacher, Apr 2000, Vol. 93 Issue 4, p276, 4p, 7 diagrams

Author(s): Kolpas, Sidney J.; Massion, Gary R.

On 27 June 1916, The Educational Toy Manufacturing Company of Springfield, Massachusetts, patented "Consul," the Educated Monkey, a tin mathematical toy. According to the instructions accompanying the toy, the Educated Monkey was designed to

teach the multiplication tables to 12s, associated elementary division, and associated elementary factoring and

teach the addition tables to 12s and associated elementary subtraction.

When the monkey's feet are set to point at two numbers, its fingers locate their product. The photograph shows the left foot, from the reader's point of view, pointing to 4; the right foot, from the reader's point of view, pointing to 9; and the hands pointing to the product, 36. The entire multiplication table appears to form a 45 degrees-45 degrees-90 degrees triangle (fig. 1), and the triangle outlined by the monkey also appears to be a 45 degrees-45 degrees-90 degrees triangle, with the product 36 at the vertex of the right angle, as shown in figure 2. Figure 3 shows what appears to be another outlined 45 degrees-45 degrees-90 degrees triangle resulting from 7 x 11, with the product, 77, at the vertex of the right angle.

To square a number, the user sets the left foot to point to the number and sets the right foot to point to the symbol of a square (see fig. 1). The fingers then locate the square of the number.

To divide, the user sets one foot to point to the divisor and arranges the fingers to point to the dividend. The other foot then points to the quotient. To factor, the user makes the fingers point at a product. The feet then point to the factors.

For addition or subtraction, the toy comes with a cardboard addition table, shown in figure 4, that is slipped under the monkey and secured by paper fasteners to two slots on the plate of the toy. Addition and subtraction proceed in a manner similar to multiplication and division.

The instructions indicate that the toy works because the monkey is constructed around a plane mechanical linkage and because the products and sums are arranged on the plate in a "special" order. The linkage, which ensures that moving the feet to different factors forces the hands to move, consists of two upper arm-leg pieces; two arm-hand pieces; and a "tail" with an answer window that moves up and down with different products or sums. The two arm-hand pieces are attached below the answer window to the tail. The two upper arm-leg pieces are attached to the two arm-hand pieces at the "elbow" and to the upper part of the tail. The monkey's head is attached on top of this piece. Moreover, the feet on the upper arm-leg pieces slide along the straight-line opening at the bottom of the toy's plate. The instructions do not give any mathematical explanation indicating why the linkage and the "special" placement of products or sums work the way they do.

This article explains why the Educated Monkey works by looking at the geometry of the linkage, as well as at the special placement of the products and sums on the plate of the toy. We believe that this problem is an interesting one to present to plane geometry students, since it reviews many important geometric, algebraic, and arithmetic concepts. Moreover, students may want to recreate the linkage, or create their own linkages, from strips of cardboard and paper fasteners.

In explaining why the toy works, we eventually show that the entire multiplication table forms a 45 degrees-45 degrees-90 degrees triangle (see fig. 5). Moreover, we show that for any particular choice of factors, the product is found at the right-angle vertex of the 45 degrees-45 degrees-90 degrees triangle defined by the factors.


We first refer to the photograph and figure 6. Points A and B are at the tips of the monkey's feet; they can move only along the straight-line opening, line AB, at the bottom of the toy's plate. Point C is directly below the window where the monkey's hands point to the product or sum. Points D and F are at the monkey's elbows. Point E is hidden behind the monkey's nose. At all these points, the linkage can rotate. The toy is constructed so that angle ADE and angle BFE are constant, congruent right angles. However, most--but, surprisingly, not all--of the other angles vary as the monkey's hands and feet move. Segments AD, DE, EF, FB, DC, and FC are congruent in the toy; we mark these segments as congruent in figures 6 and 7. No physical links AD, BF, AC, or BC exist; these segments are auxiliary ones for the explanation.

We assume that A is stationary; that is, we have chosen one factor using the left foot and are about to slide the right foot at B to the other factor to obtain a product; the product will appear in a window directly above C, as shown in the photograph. We wish to show that for a fixed choice of A--that is, the origin, or left factor--as B points to different second factors beyond--that is, to the right of A--triangle ACB is always a 45 degrees-45 degrees-90 degrees triangle and the product appears directly above C in the window on the tail.

Without loss of generality, we assume that A is at the origin of a Cartesian coordinate system. We are given that

(1) m angle ADC + m angle CDE = m angle BFC + m angle CFE = 90 degrees.

Because all four of its sides, segments CD, DE, EF, and FC, are congruent, CDEF is a parallelogram; more specifically, it is a rhombus. Therefore, its opposite angles, which are labeled 1 and 3, are respectively congruent.

m angle ADE - m angle 1 = m angle BFE - m angle 1;

therefore, the angles labeled 2 are congruent. Triangles ACD and BCF are congruent by SAS. Moreover, they are both isosceles. Therefore, all the angles labeled 4 are congruent. Since triangles ACD and BCF are congruent, segments AC and BC are congruent by corresponding parts. Therefore, triangle ABC is isosceles, and the angles labeled 5 are congruent by the isosceles triangle theorem.

We have established that triangle ABC is isosceles. We next establish that it is a 45 degrees-45 degrees-90 degrees triangle. Since the sum of the measures of the angles of a triangle equals 180 degrees,

(2) m angle 2 + m angle 4 + m angle 4 = 180 degrees.

Since consecutive angles of a parallelogram are supplementary,

(3) m angle 1 + m angle 3 = 180 degrees.

Because ABFED is a pentagon, its interior angles sum to (n - 2)180 degrees = (5 - 2)180 degrees = 540 degrees, where n is the number of sides. Thus,

(4) m angle 3 + m angle 1 + m angle 1 + m angle 2 + m angle 2 + m angle 4 + m angle 4 + m angle 5 + m angle 5 = 540 degrees.

Looking at equation (4) and substituting equation (3) for the first two terms, equation (1) for the next two terms, and equation (2) for the next three terms gives us

180 degrees + 90 degrees + 180 degrees + 2m angle 5 = 540 degrees,

so

m angle 5 = 45 degrees.

Therefore, triangle ABC is always a 45 degrees-45 degrees-90 degrees triangle. Although angles 1, 2, 3, and 4 vary as the monkey's feet and hands move, angle CAB remains a 45 degree angle no matter where we move B, the second factor. Since we assumed that A is at the origin, then C, the product or sum, must move on the line y = x--on a line through the origin, A, at 45 degrees from the x-axis--as we move B to different factors beyond A. The multiplication table on the toy, shown in figure 1, is arranged so that

the 1's tables from 1 x 2 to 1 x 12 are on a 45 degree line whose origin is the location of point A when the left foot is directly above 1.

the 2's tables from 2 x 3 to 2 x 12 are on a 45 degree line whose origin is the location of point A when the left foot is directly above 2.

the 3's tables from 3 x 4 to 3 x 12 are on a 45 degree line whose origin is the location of point A when the left foot is directly above 3.

the 10's tables from 10 x 11 to 10 x 12 are on a 45 degree line whose origin is the location of point A when the left foot is directly above 10.

the 11's table for 11 x 12 is on a 45 degree line whose origin is the location of point A when the left foot is directly above 11.

the 12's tables are obtained by the commutative property.

Thus, the multiplication tables on the plate are a family of 45 degree lines, each going from N x (N + 1) to N x 12, where N is the number to which the left foot, A, points, and 1 ≤ N ≤ 11. In fact, the entire multiplication table is itself arranged as a 45 degrees-45 degrees-90 degrees right triangle, as shown in figure 5. Any missing products are obtained through the commutative property. For example, although 3 x 2 cannot be computed directly with the monkey, 2 x 3 can.

In reality, the toy is arranged so that the product or sum lies directly above C, in the window between the monkey's hands. Therefore, the entire table is raised up one unit.
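The linkage geometry above can also be checked numerically. The sketch below uses the article's point names but assumes a unit rod length, places A at the origin and B at (b, 0), and builds the symmetric configuration the toy uses: the right isosceles triangle ADE, its mirror image at F, and the rhombus CDEF. For every foot separation b, the hands' point C lands at (b/2, b/2), which is on the line y = x:

```python
import math

# Numerical check of the linkage proof: with rods of length L, right
# angles at D and F, and the symmetric configuration, C = (b/2, b/2).
# The coordinates and rod length are illustrative assumptions.

def hands_point(b, L=1.0):
    """Return C for foot separation b (requires 0 < b < 2*L*sqrt(2))."""
    # E sits on the perpendicular bisector of AB; since ADE is a right
    # isosceles triangle with legs of length L, |AE| = L * sqrt(2).
    ex = b / 2.0
    ey = math.sqrt(2 * L * L - ex * ex)
    # D is the apex of the right angle: the midpoint of AE offset by a
    # perpendicular of length |AE| / 2 (choosing the elbow-up solution).
    dx = ex / 2.0 - ey / 2.0
    dy = ey / 2.0 + ex / 2.0
    # F mirrors D across the bisector; C completes the rhombus CDEF,
    # so C = D + F - E.
    fx, fy = b - dx, dy
    return dx + fx - ex, dy + fy - ey

for b in (0.4, 1.0, 1.7, 2.3):
    cx, cy = hands_point(b)
    # C on y = x means angle CAB = 45 degrees, and since AC = BC the
    # right angle of triangle ACB sits at C.
    assert abs(cx - b / 2) < 1e-12 and abs(cy - b / 2) < 1e-12
    print(f"b = {b}: C = ({cx:.4f}, {cy:.4f})")
```

Because C's height above line AB always equals its horizontal distance from A, sliding B traces exactly the family of 45 degree lines on which the products are printed.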


We pose the following challenges to students and teachers:

1. Prove why the toy works if you keep B stationary and vary A. (The commutative property gives the explanation.)

2. Prove why squaring a number works using the square symbol. (Note the position of the perfect squares on the toy's plate.)

3. Work with and construct such other plane linkages as the pantograph, which draws figures similar to those traced, and explore their geometric properties.

4. How many different 45 degrees-45 degrees-90 degrees triangles can you find in the entire multiplication or addition table?

5. Prove that line segment DF is always parallel to line segment AB.

6. Prove that the monkey's tail, line segment CE, is perpendicular to line segment AB.


"Consul," the Educated Monkey, is an outstanding, practical example of a plane linkage. In learning why the monkey works the way it does, students are required to review many important concepts from plane geometry, algebra, and arithmetic. Making their own "monkey" linkage similar to Consul, which one of the authors has done with construction paper and paper fasteners, would give students additional, hands-on experience with many important mathematical concepts. An outstanding primary resource for mathematical models, including linkages, is Cundy and Rollett (1961). When examining why this toy works, your students will not just be monkeying around; they will be learning some very interesting, highly motivating hands-on mathematics.

The authors would like to thank Susan Cisco for preparing all the figures used in this article.


Source: Mathematics Teaching in the Middle School, Jan2000, Vol. 5 Issue 5, p330, 5p, 1 chart, 5 graphs 

Author(s): Beigie, Darin

Students in a middle school mathematics club used the zooming technology of Green Globs and Graphing Equations (Dugdale and Kibbey 1996) to study slope in curved graphs. Seventh and eighth graders investigated some elementary curved graphs by zooming in on evenly spaced points along a graph until the graph appeared linear and slope could be calculated. The slopes at the various points were then plotted on a separate grid and joined to make a graph of the slope itself and to discover the algebraic equation describing the new graph. The zooming technology gave the students a concrete, visual context in which to learn about the idea of slope in a curved graph and to study how slope varies along a curved graph.

Slope of a Curved Graph

When studying two-point calculations of the slope of a line, a seventh grader once asked in class, "What do you do if the graph is curved?" Another student responded without hesitation, "If you make the two points really close together, the graph will look straight and you can still find the slope." Such a penetrating insight led me to wonder if an age-appropriate way was available to introduce middle schoolers to the idea of slope in curved graphs. Indeed, after playing such a central role in middle school study of linear graphs, the idea of slope is effectively abandoned with nonlinear graphs until the introduction of calculus later in high school. An appropriate middle school exposure to slope in its more general context would be helpful in conveying the utility and flexibility of the concept.

The zooming technology of graphing calculators and certain graphing software offers an ideal environment for middle schoolers to extend their understanding of slope to curved graphs. Such technology allows a student to explicitly see a curved graph becoming effectively straight as one zooms in closer and closer on any point on the graph, much as the curved surface of the earth can appear flat from close range. The idea of slope of a curved graph can be illustrated by a skateboard on a curved ramp (see fig. 1). Even though the ramp is curved, the flat skateboard has a well-defined slope anywhere on the ramp, one that changes with location along the ramp. In a similar manner, one can always make a first definition of slope in a curved graph by simply selecting any two points on the graph and calculating the slope of the line segment joining the two points. By zooming in on any point on a curved graph, the student sees the graph becoming increasingly straight and the two-point slope calculation making sense.

For example, consider the graph of the quadratic equation y = x[sup 2] and the result of successive zooms on the point (2, 4), shown in figure 2. The graph appears less and less curved as the number of zooms increases, resulting in an effectively straight graph by the eighth zoom. Within the window of the eighth zoom, the student selects two points and registers the coordinates with the trace feature of the software: A = (1.98513, 3.93898) and B = (2.01511, 4.05906). The slope m of the graph at the point (2, 4) is then calculated as follows, rounded to the nearest thousandth:

m = (change in y)/(change in x) = (4.05906 - 3.93898)/(2.01511 - 1.98513) = 4.005

By using this method, the mathematics-club members calculated the slope along various points of a curved graph.
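The zoom-and-trace procedure can be imitated numerically. In the sketch below, each "zoom" halves the visible window around x = 2 and two trace points are read inside it; the window sizes and trace points are illustrative stand-ins for the software's actual trace grid:

```python
# Numerical stand-in for zoom-and-trace on y = x^2 at the point (2, 4):
# each zoom halves the visible window, and the two-point slope of a
# segment read inside the window settles toward 4.

def two_point_slope(f, x1, x2):
    """Slope of the segment joining (x1, f(x1)) and (x2, f(x2))."""
    return (f(x2) - f(x1)) / (x2 - x1)

f = lambda x: x ** 2
width = 1.0
for zoom in range(1, 9):
    width /= 2  # each zoom halves the visible window around x = 2
    m = two_point_slope(f, 2.0, 2.0 + width / 4)
    print(f"zoom {zoom}: slope = {m:.5f}")
```

As the window shrinks, the printed slopes approach 4, matching the students' hand calculation of 4.005.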

Investigating Slopes and Finding Patterns

By using the zoom and trace features of the graphing software, students investigated some elementary curved graphs in detail. The students had gained some familiarity with graphs of different powers of x through previous open-ended explorations using graphing calculators and software. They could recognize, for example, the graphs of the quadratic equation y = x[sup 2] and the cubic equation y = x[sup 3]. The study began with these two graphs, and the students zoomed in and performed two-point calculations to determine the slopes of these graphs at integral values of x ranging from -4 to 4. As in the previous example, the slopes were quite close to integral values, and the students were asked to round their answers to the nearest integer; for example, in the previous calculation, 4.005 would be rounded to 4. Once the slopes were determined, both the original equation and the corresponding slopes were graphed by hand, as shown in figure 3. Hand-drawn graphs helped students absorb the meaning of their calculations and the relationship between the original graphs and the slope graphs.

The students discovered that the quadratic graph y = x[sup 2] had a slope graph that was a line with a slope of 2 passing through the origin. The equation for the slope graph was thus determined to be m = 2x. The students then discovered that the cubic graph y = x[sup 3] had a slope graph with the familiar shape of a parabola. Determining the equation of this slope graph was like solving a puzzle. The students suspected that the parabolic shape meant that the equation involved an x[sup 2] somehow, and they tried to find an equation that would match their calculated table of values for slope. Soon they caught on that the slope m was triple the familiar x[sup 2] pattern, and they were able to deduce the slope equation m = 3x[sup 2].

After investigating more graphs, for example, y = x and y = x[sup 4], the students picked up on some general patterns between equations of the form y = x[sup n] and the corresponding slope equation:

The exponent in the slope equation is one less than the exponent in the original equation.

The coefficient in the slope equation is equal to the exponent in the original equation.

Their principal finding, summarized in table 1, was that the graph of the equation y = x[sup n] has a corresponding slope graph described by the equation m = nx[sup n-1]. Although such a finding is certainly advanced for a middle schooler, it was the result of a concrete activity involving two-point slope calculations along curved graphs that had been magnified by the zooming technology to look effectively linear. Discovering the equation patterns for these slope graphs was quite manageable for the students, who enjoyed pattern problems and were comfortable working with algebraic expressions. Some had even guessed the general pattern after studying the cubic equation.
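The students' pattern can be tested with the same two-point idea. The sketch below (the tiny window half-width h is an arbitrary stand-in for a deep zoom) checks that for y = x[sup n] the zoomed slope at each integer point matches m = nx[sup n-1]:

```python
# Checking the pattern of table 1: for y = x^n, a two-point slope over
# a tiny window at x should match m = n * x^(n - 1).  The half-width h
# stands in for a deep zoom and is an arbitrary choice.

def zoomed_slope(f, x, h=1e-6):
    """Two-point slope across a tiny window centered at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

for n in range(1, 5):
    f = lambda x, n=n: x ** n
    for x in range(-4, 5):
        predicted = n * x ** (n - 1)
        assert abs(zoomed_slope(f, x) - predicted) < 1e-3
print("m = n * x^(n-1) matches the zoomed slopes at x = -4, ..., 4")
```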

Qualitative Understanding through Rate Problems

In conjunction with their quantitative study of curved graphs, the students solidified their understanding of slope with qualitative study of rate problems involving distance-time graphs. The club members were presented with sketches of distance-time graphs (see fig. 4) and asked to think of a video character moving along a line on the video screen, with the distance variable measuring how far the character is from the leftmost part of the screen. For distance-time graphs, the slope represents the ratio of the change in distance to the change in time, or the velocity of the character.

Beneath each distance-time graph, the students were asked to make a graph of velocity versus time. For example, in figure 4, the character moves forward at a constant speed (A to B) and then slows down (B to C) until it eventually sits motionless for a while (C to D). The character then moves back with increasing speed (D to E), meaning increasingly negative velocity, until reaching a constant speed coming back (E to F). Although a verbal description of these graphs can be a bit cumbersome to read, making the velocity-time graphs was fairly straightforward for the students. Indeed, in my seventh- and eighth-grade classes I have presented the students with such distance-time graphs and asked them to walk in the front of the classroom according to the graph. The students have an intuitive understanding of how to translate a distance-time graph into walking speed, realizing, for example, that a curve in a graph means that one is slowing down or speeding up.

Connecting Qualitative Understanding with Slope Patterns

A natural comparison of the students' quantitative and qualitative studies created a final activity for the project. The club members were asked to graph some simple polynomial equations using the computer, then make a qualitative prediction of the corresponding slope graph, just as they had done in the previous rate problems. The students next checked their predictions against a computer-generated slope graph, made by entering the slope equation deduced by the patterns that they had found in table 1.

An example is shown in figure 5a with the computer-generated graph of the equation y = x[sup 2] - 4x. Having performed many slope calculations through zooming, the students were comfortable enough to make qualitative predictions about the slope at various points along the curve. For example, zooming in would give a negative slope at the point (-1, 5), zero slope at the point (2, -4), and positive slope at the point (5, 5). Indeed, the students were able to predict qualitatively how slope would vary throughout the graph: going from left to right, the slope starts off very negative and keeps increasing to become very positive, passing through 0 at x = 2. The qualitative study in the previous rate problems helped the students step back and see the variation of slope throughout an entire graph.

The qualitative predictions were then confirmed by an exact calculation of the slope using the patterns of table 1, that is, m = nx[sup n-1] to deduce the actual slope equation. Knowing that the x[sup 2] term has a slope of 2x and that the 4x term has a slope of 4, the students deduced a slope equation of m = 2x - 4, whose computer-generated graph is shown in figure 5b. Comparing the slope of the resulting graph with the qualitative predictions, one indeed sees the slope increasing from left to right, passing through 0 at x = 2.
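This final comparison can also be mirrored numerically: the zoomed slope of y = x[sup 2] - 4x should be negative at (-1, 5), zero at (2, -4), and positive at (5, 5), and should agree with the deduced equation m = 2x - 4 everywhere (the window half-width h is again an arbitrary choice):

```python
# Checking the qualitative predictions for y = x^2 - 4x against the
# deduced slope equation m = 2x - 4.  The half-width h stands in for a
# deep zoom and is an arbitrary choice.

def zoomed_slope(f, x, h=1e-6):
    """Two-point slope across a tiny window centered at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 - 4 * x

for x in (-1, 2, 5):
    print(f"slope at x = {x}: {zoomed_slope(f, x):.3f}")

# Agreement with m = 2x - 4 across the whole graph:
for x in range(-4, 7):
    assert abs(zoomed_slope(f, x) - (2 * x - 4)) < 1e-3
```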


The zooming technology of graphing calculators and some computer software allows a concrete and visual setting for middle schoolers not only to be introduced to the idea of slope in a curved graph but also to determine these slopes using familiar two-point calculations. The exploration described here would, in its entirety, certainly be an advanced topic at the middle school level. Portions of the exploration, however, could fit comfortably in any middle school study of graphing where zooming technology is available. A good starting point would be two-point calculations of slope along a single curved graph, such as y = x[sup 2] with the analogy of a skateboard ramp in mind, or any other nonlinear equation representing a real-world situation.

TABLE 1: Slope Pattern Discovered by Students



y = 1               m = 0

y = x               m = 1

y = x[sup 2]        m = 2x

y = x[sup 3]        m = 3x[sup 2]

y = x[sup 4]        m = 4x[sup 3]

y = x[sup n]        m = nx[sup n-1]


Source: Mathematics Teacher, May2001, Vol. 94 Issue 5, p342, 6p 

Author(s): Keller, Rod; Davidson, Doris

Mathematical imagination and imagery, closely linked, provide the vision that allows us to see the hidden but exquisite structure below the surface.
--ROBERT OSSERMAN, Poetry of the Universe

It began simply enough, with a conversation between two high school teachers standing beside a copying machine:

Mathematics teacher. Have you ever thought about having your students write a math poem?

English teacher. No, but I like the idea.

Mathematics teacher. What would you think of doing something together--letting the students get credit in your class and in mine? We do share a lot of the same students. It might be fun for us and for them.

English teacher. I agree.

Mathematics teacher. We would be having them put together two subjects that they usually do not think of as having any connection. Sometimes it seems like interdisciplinary work just means English and history or mathematics and science.

English teacher. I know what you mean.

Mathematics teacher. But how would we do it?

English teacher. I do not know, right off, but I am reminded of this great poem by Stanley Kunitz, "The Science of the Night."(n1) It has all these terms from astrophysics, but it is a love poem. I do not want the students to write math poems about mathematics. The results would be predictable.

Mathematics teacher. Poems about how hard and boring mathematics is.

English teacher. I am afraid so. Same thing would happen with a poem about any class. But mathematics is a way to look at, well, almost anything. That should be the point.

Mathematics teacher. Then let's think about it and come up with something.

When we discussed the project a few days later, we articulated our objectives more clearly. We wanted students to apply their knowledge of mathematics to another field and give evidence that at least some of what they had learned had become a useful and easily accessible part of their general experience. In English, we wanted a method that would help young people avoid bland and hackneyed ideas; write fresh, clever, and memorable poems; and gain more of the skills and confidence necessary to approach new topics in new ways.

We formulated an assignment that helped us meet these objectives. Various factors inherent in our situation--including school climate, courses taught, and time of year--influenced our thinking. We teach at Lebanon High School (current enrollment, 790) in Lebanon, New Hampshire. Our school's administrators treat teachers with respect, grant them a great deal of independence, and encourage new approaches to classroom instruction, so we did not need to seek special permission for a project linking mathematics and English or expect anything from the administration other than support.

About 60 percent of our graduating seniors go to four-year colleges, with another 10 percent going to two-year schools or into the military. Most of our students want to succeed and know that they have to put forth effort to accomplish their goals; like many teenagers, however, they often exhibit a feisty independence and a keen desire to avoid routine assignments. They might be skeptical about the kind of project that we had in mind, but they would probably be intrigued by a new challenge.

The project involved two mathematics classes that had mostly ninth-grade students: transition mathematics, the focus of which was basic geometry integrated with arithmetic, algebra, and problem solving; and math topics 2 (honors), which covered such topics as inductive and deductive reasoning and proof, probability, statistics, matrices, coordinate geometry, and quadratic and cubic equations. The four ninth-grade English classes that also participated were general English courses that included instruction in such areas as vocabulary, grammar, and spelling; public speaking; classic and contemporary literature; composition; and analytical thinking.

The idea for a math poem came near the end of the school year. Although we felt rushed, the timing actually worked to our advantage. Our students were accustomed to unusual and creative assignments. They had almost a year's worth of mathematics topics and concepts to draw from, and the students who had this English teacher were about to begin the poetry unit.

We based the assignment on mathematics terms that had been taught during that particular school year, and we chose from a long list the words that seemed most appropriate for the assignment. Careful logic was not always used in selecting the words. Indeed, the English-teacher half of the team, who was only vaguely familiar with some terms and completely ignorant of others, simply liked the way some of them sounded. The students were expected to incorporate a certain number of words from the list in their poems. A minimum length would encourage students to develop their idea in some depth. We tried to offer easy, difficult, and ambiguous terms in a flexible mix that was long enough for students to find the right words to describe and explore their subject but narrow enough to force creative selectivity and the use of important terms learned during the school year. Because some advanced students knew all the words on the list and because many of them would be getting credit in both English and mathematics, we required them to use four more terms than students who were taking other mathematics courses.

We emphasized one other requirement--to write about anything other than mathematics class itself. Although we recognized the importance of students' feelings, we believed that writing about the class might lead to a trite description of it as difficult, boring, and purposeless--even when students actually enjoyed it. In contrast to our approach, Peggy A. House and Nancy S. Desmond, editors of the anthology of poems and stories Mathematics Write Now! (1994), have inspired students to write about mathematics itself with wit and intelligence.

We distributed the assignment sheets. The one shown in figure 1a was for the more advanced students, and the one in figure 1b was for the less advanced classes. We allowed students several days to complete their first drafts.

The results were diverse, delightful, and thought-provoking. Students wrote about a wide range of topics, from amorous encounters to loneliness to shooting a basketball. The styles were as varied as the topics. Some poems barely reached the minimum number of lines; others were several pages long. Most used free verse, but students probably used as many styles and patterns of free verse as the number of poems without rhyme. The moods were many. We enjoyed--but were not particularly surprised by--the humorous pieces. We had not expected so many sincere, thoughtful, deeply personal poems.

The students have revised the sample poems included in this article. Thus, the poems do not necessarily have the number of mathematics terms or lines of poetry specified in the assignment.

When given the chance to write a poem--any kind of poem--young people often become reflective. In this situation, attempting to include the mathematics terms encouraged a healthy distance from, and verbal control over, emotions that might have otherwise run in deep, but narrow and predictable, patterns.

Although the phrasing was sometimes awkward and although the use of the mathematics terms was occasionally forced, the language was often fresh and fascinating. When one student, in a poem called "My World," wrote, "There is no need / to coordinate everything / Things don't need to add up / and balance the equation," terms usually associated with precision helped profess a desire for imprecision, as if a freely roaming imagination is itself a kind of calculation. In another poem, the terms helped a student compress the immensity of sadness into a small and vulnerable shape:

"I wish I knew where I am, to stop my endless rotation around the globe. The variation of the sky seems no longer of any importance, as if it was transformed into a box."

Even the well-worn use of box to suggest loneliness, conformity, and entrapment seemed justified somehow in the context of geometrical terms.

The completed poems were also instructive. Many students desperate for adjectives employed such clumsy expressions as parabola-shaped and cylinder-shaped. Suddenly obvious to us were the grace and efficiency of parabolic and cylindrical. We saw the importance of emphasizing the different forms of mathematics terms as they were introduced during the school year and requiring students to use them in writing. The next year, we added the following instructions to our assignment sheet: "You may use the words in whatever form you wish. (For example, you may use 'parabolic' instead of 'parabola' and 'matrices' instead of 'matrix.')"

Because some students also pointed out mathematics words that were not included but could have been, we encourage our students to help us create the list. This refinement indicates the mathematics words that the students know best and are most comfortable using in a different context. Their advice has promoted the inclusion of such marvelous words as quartic and factorial.

The students received credit for the assignment, but since it was only one of many given during the quarter in both mathematics and English, its impact on their grade for the quarter was minor. In mathematics, the poem counted as a quiz that received full credit if typed or written neatly, turned in on time, and completed according to the guidelines. The poems were then displayed on a bulletin board. Sharing the work gave students a glimpse of their peers' mathematical knowledge and creativity. In English, the math poem was one of seven or eight first drafts inspired by different guidelines during a three-week-long poetry unit. Each first draft turned in on time received a "check plus" and had a positive effect on the student's class-participation grade for the quarter. Students had to choose four of those first drafts to revise and include in a poetry portfolio, which was a major project and counted for a major grade. Some students did not choose to rework their math poems, but many of them did. As part of the revision process, some students cut lines or one or two mathematics terms to make their poems as coherent and compelling as possible.

The second year that we assigned the math poem, we asked students to assess the project in journals in their English classes. We were especially interested in how students felt about the assignment and what, if anything, they thought it had taught them about mathematics. Of the fifty-eight students who responded, forty-two enjoyed the assignment, some after initial skepticism; eleven did not; and five expressed no strong feeling either way.

Those students who liked the assignment did so for various reasons. Many appreciated the unusual nature of the task and the challenge of integrating mathematics terms in a poem. One student summed up the thoughts of many others when she wrote, "It makes you use your imagination. It also gets you to use vocabulary that usually wouldn't be used. Another reason for why I like the math poems is that when they are read out loud they sound really good."

Teachers who might have reservations about asking students to write a poem with such particular requirements should know that one student "appreciated having something to start with. Many of my poems have a problem with using the same set of words, over and over, even if used in a greatly different context." Another student thought that "having guidelines made it a little easier for me. It gave me a starting point to work from. I ended up using even more math words than required just because they worked in so well." Some students also enjoyed the impact that their work had on others. As one said, "Usually my poems are understood and make you think a little. But the one I did or wrote made people think a lot, which is a nice change for me." Another student reflected an important part of the assignment when she said, "I thought it was fun."

The students also learned about mathematics through its connection with poetry. One student expressed a common realization: "There are so many math terms that connect with everything." Another student articulated this idea from a slightly different angle, stressing the complex, ambiguous, and often unnoticed impact of mathematics on everyday existence: "A lot of math terms fit into real life, and some of them are things that when you see them surrounded by different words other than their math definitions, you would never think they would even be associated with math." One student mentioned a new awareness of the multiple meanings of many words in the English language.

Other students gained a new appreciation for the mathematics terms themselves. "This poem let me notice that all the frustrating vocabulary of mathematics can be used in different and beautiful ways." "Using the math words helps one to understand the words better and see how they can be applied to life." One student claimed that using the terms in her poem helped her remember their definitions, whereas another student wrote, "Math vocab. is a great resource of words to use in poems, stories, etc. They really help to give the detail you want to show when you're writing." Finally, one student summed up the assignment's potential significance with a grand perspective: "I realized that math is a much larger subject than we've always thought it to be. Math includes the whole world and we understand the world by math."

Some students did experience difficulties and frustrations. Several of them noted that the assignment was not easy. One said, for example, that it "was definitely the hardest poem I've had to do because we had to include the eight math terms, which made me stop and really think what went well and where." That student's problem was one of the teachers' goals.

Other students did not like the guidelines. "I tend to write a certain way with a certain feeling, and these words just weren't normal. I couldn't fit them in the story line." Another "couldn't write about anything which I was actually feeling because none of my emotions fit with 'parabola' and 'variation.'" A third student believed that he "was trying too hard to include math words." A student who wrote poems on her own expressed a position that will always be held by a few: "I just prefer to write poetry when I have no boundaries or guidelines." Some students who did not initially appreciate the guidelines came to do so by the time the writing process was completed. Others who maintained their opposition still wrote poems that seemed extraordinary to us.

The writers of Principles and Standards for School Mathematics (NCTM 2000) want high school students to "recognize and use connections among mathematical ideas" and to "apply mathematics in contexts outside of mathematics" (NCTM 2000, p. 354). Moreover, they call for students to "organize and consolidate their mathematical thinking through communication" (NCTM 2000, p. 348). Combining poetry and mathematics offers a fresh perspective on both disciplines and a chance to use mathematical terms to convey ideas and feelings in a creative and, at times, personally satisfying way.

Although nothing should stop an individual mathematics or English teacher from assigning the math poem, working together has advantages. By sharing our expertise, our students' feedback, and our impressions of our students' efforts, we have been able to refine the activity, promote high-quality student work, learn about and from each other's teaching methods, and simply enjoy the resulting poetry. The math poem affords an opportunity for a narrowly focused, yet rewarding, interdisciplinary project.

(n1) Stanley Kunitz, "The Science of the Night," in The Poems of Stanley Kunitz, 1928-1978 (Boston: Little, Brown & Co., 1979), pp. 97-98. When we first spoke of integrating mathematics and poetry, we were not familiar with Kunitz's elegant "Geometry of Moods," pp. 188-89, a more obvious example of a math poem.

Positive Energy

 Holding a basketball at mid court,

   Positive energy overflowing,

   I reach the three-point line.

The angles that my players have taken are


But the probability of my making the shot is


          When I pass the ball

My parallel teammate is dribbling along the

               base line.

            He lets the ball fly.

   It hits the square on the backboard.


       The positive energy multiplies.

The team makes a beeline to the locker room.


Seeing Stars

A star is born

 With magnitude proportional to the heavens.

Its radical beauty radiates cylindrical beams of


 A midpoint of the celestial matrix within a

  parallel universe,

A locus for those who have left us to spend eternity,

 This fiery ball is an exponent of God.

Is it probable that I will one day unite with my


Or forever admire it from our rotating Earth?

It only seems logical that my destiny should be


 And with death the equation of my life



My Victory

You won't find my coordinates.

 I am in constant translation.

 My position is unknown.

You can use all your fancy formulas and


 but you'll never be able

 to put a function to my name.

Your seemingly radical moves

 were only repeated rotations.

Now your pyramids are crushed,

 and your cylinders are empty.

You were once at the zenith

 of your parabolic dominance.

Now you're on your way down.


The Cylinder of Our Love

The probability that

I'm at the midpoint of life is very high.

Our love is parallel,

never intersecting.

The formula for our

relationship is never to see each other again.

Yet our rotation is not

looking good.

The radical thing

is how your mood is like a parabola,

      going up, down,

      and then up again.

The coordinate of my fate is almost like a


starting at a base, then shooting up to a

     point and never

     going any farther.

I'm not equilateral because everything is

out of proportion.

I think I need help.


Midnight at Sea

I caught a glimpse of light at dusk

Before the rotation of the earth

 Concealed the sun

And darkened the exponentially vast sea.

Looking to the horizon

The sea seemed to broaden

 Like a parabola,

My eyes the vortex.

Sailors were out tonight,

Their boats gently rocking,

 Bright waving sails always perpendicular

To the prominent cylindrical masts.

Gentle breezes

Interrupted the calm night.

They ran parallel to my sails,

 Propelling me away

From the worries of the mainland.

The radio crackled with static,

Transmitting an unimportant message.

I would have heard it if the radio functioned:

 The weatherman, a pessimist,

Merely guessing the high probability of showers.

With the light diminished,

And my boat seemingly at the midpoint

 Between civilization and eternity,

I dropped the anchor.

Alone without my map

I was at the mercy of the midnight seas.

My coordinates were a mystery.

 But why let that bother me

When I was where I wanted to be?


The Equation of Poetry

Mr. Keller + Ms. Davidson = Bad Poetry Assignment

   Period 3 + 4 = 7

Poetry and math don't mix.

   The probability that I get an A is low.

I only run 7 out of 8 cylinders.

   I just can't function.

This is a formula for disaster.

    It will blow the earth out of its rotation.

I'm not coordinated enough to do this.

    It is perpendicular to my nature.

This is the midpoint of my poem.

    Like a negative parabola, it can only go down.

Maybe if I don't think about it, I can parallel park.

    This poem is so bad, you'll need me to translate it.

This is no great wonder.

    The Egyptian pyramids were better.

Maybe it would look better if it were printed on a dot

matrix printer.

   It would look exponentially worse.

This assignment is radical.

    I hope an equilateral locus finds a congruent

    isosceles locus.

There, I used all the words.

    Now I can divide my grade in both classes by

    radical pi.



To put my life on a graph

would be pointless.

 It wouldn't just be a few parabolic lines;

 it would consist of so many peaks and valleys.

The most complex formula couldn't locate

all the coordinates.

I find that the people around me

are a function of my moods.

 People whose personalities parallel mine

 create an environment where I'm comfortable

 and content.

There are, however, the ill-accepted exponents

in the daily equation that give off

perpendicular vibes and overflow the

quadrants surrounding me with crooked

isosceles-shaped auras.

I wish I could engulf these outliers of my daily

scatterplot with a mighty radical and cut

them down to size.

 Then maybe I wouldn't feel quite so much

 like a rotation in which my midpoint is the

 people I interact with, who influence me

 when I least want them to.

My life might then possibly be a legible grid,

representing my plunging and leaping



Fig. 1

Poem Incorporating Mathematics Terms

Write a poem, at least fifteen lines long, about any subject, in which you incorporate at least eight of the words on the following list. If you have Ms. Davidson for mathematics, you must use at least twelve of these words.


The advanced writing assignment

Poem Incorporating Math Terms

Write a poem, at least twelve lines long, about any subject, in which you incorporate at least six of the words on the following list. If you have Ms. Davidson for mathematics, you must use at least ten of these words.


The regular writing assignment


Source: Mathematics Teacher, May2001, Vol. 94 Issue 5, p430, 3p 

Author(s): Isleb, Jo Ann; Albert, Maureen; Kasten, Peggy

Project CLIMB

Project CLIMB (Creating Links in Math and Business) is a teacher-developed project that was designed to help answer the students' question, When are we ever going to use this? The project allows precalculus students to communicate with people in the business world by using e-mail. Students are put into groups of three or four and assigned a business contact. The students determine from this contact person exactly what the company does, how teams are used in the company, and how specific mathematics topics are used by the contact person on the job. The student project includes six e-mail requests for information during a semester. The information requested centers on the precalculus topics of matrices, statistics, linear programming, logarithms, trigonometry, and probability. These broad topics are used by people in a variety of fields. The business contact uses e-mail to respond.

Project CLIMB was successfully initiated by starting small and expanding gradually. The first year of the project began with two classes and about twelve business contacts. The mathematics teachers who were developing the project carefully selected the business contacts. The contacts could easily access the Internet, used mathematics in their jobs, and could be relied on to give prompt and thorough responses. In the second year, the project was expanded from two to eight classes. An additional forty business contacts were located in a variety of ways, including referrals by individuals who were already involved in the project and solicitation over the Internet from businesses located in the community.

To participate in Project CLIMB, students must have an e-mail account. On the first day of the project, classes go to the computer lab and are taught how to use e-mail. At that time, one student in each group sends the business contact an e-mail message identifying himself or herself and asking for a description of the contact's job and company. As soon as the student receives a response, he or she shares it with the group and gives the teacher a copy. Another person in the group then takes responsibility for sending a second question. This procedure is followed until all questions have been asked and replies have been received.

The project was designed to make evaluation easy and to use little class time. Each group gets a folder to store paper copies of the e-mails sent and the e-mails received. Dates that the group sent or received e-mail are recorded on a cover sheet stapled inside the folder. Students have a few minutes of class time to discuss their progress. The assessment includes both individual and group components. Each student must send e-mail, communicate information to the group, prepare a summary describing what she or he learned, and participate in the group's presentation. The group's grade is based on the completeness of the folder.

The responses that students receive are rich and informative. For example, when asked by a student about real-life use of logs and exponents, a chemist replied as follows:

Chemists use pH as a measure of how acidic or basic a compound is. The scale for pH is from 1 to 14, where pH 1 is very acidic and pH 7 is neutral like water. pH is really a measure of how many hydrogen ions the compound releases into the water. Some compounds keep their hydrogen ions so that they are not very acidic even though lots of the compound is in the water. Other compounds completely break apart, like hydrochloric acid and sulfuric acid, and so are quite acidic for the same amount of compound in the water. It also matters how much of the compound is in the water. It may be very dilute and not so acidic. For instance, the acetic acid in vinegar in your salad is pretty dilute and won't hurt you, but full-strength acetic acid will cause a very serious burn.

pH is really a negative log of the number of hydrogen ions.

pH = -log (hydrogen ions)

The base for the logs is base 10. If you have pH 7 water and add enough acid to make pH 6, it has 10 times as many hydrogen ions free in the water, so to go from 7 to 6 isn't a jump of just 1, but a jump of 10 when you consider the number of hydrogen ions.
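The chemist's formula is easy to check with a few lines of Python (a sketch; the concentrations below are illustrative, in moles per litre):

```python
import math

def ph(hydrogen_ion_concentration):
    """pH is the negative base-10 logarithm of the hydrogen ion
    concentration (mol/L)."""
    return -math.log10(hydrogen_ion_concentration)

# Each tenfold increase in hydrogen ions lowers the pH by exactly one unit.
print(ph(1e-7))  # neutral water: pH 7
print(ph(1e-6))  # ten times as many hydrogen ions: pH 6
```

So the move from pH 7 to pH 6 does indeed correspond to a tenfold jump in hydrogen ions, just as the chemist says.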

Project CLIMB offers the following benefits to participating students:

The content addresses national, state, and district goals in an innovative and interesting way. For example, the Illinois Learning Standards state that

[s]tudents must have experiences which require them to make such connections among mathematics and other disciplines. They will then see the power and utility that mathematics brings to expressing, understanding and solving problems in diverse settings beyond the classroom.

High school students learn to use e-mail before they are required to use this technology in college or in the workplace.

Students learn from people in the real world how the mathematics that they are studying is applied.

Students develop an ongoing relationship with their business contact, and they learn to communicate in a professional manner.

Students enjoy the project.


Source: Newsweek, 03/05/2001, Vol. 137 Issue 10, p45, 2/3p, 1 diagram 

Author(s): Levy, Steven

The secret is a key that disappears when you use it

"It may roundly be asserted..." wrote amateur cryptography maven Edgar Allan Poe, "that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve." Harvard professor Michael Rabin begs to differ. Last week he revealed details of a scheme called "hyper-encryption" which purportedly delivers a means to protect information that's mathematically guaranteed to be unbreakable.

Rabin's system, which ignited heated debates on Internet discussion groups after an article in The New York Times, is far from being implemented. In fact, Rabin has yet to even publish a paper on the idea (developed with his doctoral student Yan Zong Ding). But it's worth thinking about because it addresses an important problem in protecting our private messages and conversations over the Internet and on mobile phones. How can we know that those systems can't be broken? It's true that there are plenty of ways to crack a code in addition to attacking the mathematical system that actually scrambles messages: there are many potential pitfalls in implementation that may be exploited. And of course, if all else fails, you could pummel the sender or recipient until he coughs up the goods. But the mathematical formulas that make up the heart of those systems are critical, and Rabin's idea might cast light on how to make these permanently secure.

The idea begins with a source of an unending stream of random numbers, perhaps a satellite blasting huge volumes of bits in rapid fire. So many, in fact, that it's impossible for the most advanced storage systems imaginable to capture them all. When people want to communicate with hyper-encryption, their computers "agree" on a way to grab certain of those numbers, "like plucking raisins out of a vast pudding," says Rabin. Those random numbers (the equivalent of the normally impractical "one time pad," the only previous form of provably unbreakable cipher) are used to help the sender scramble the message--then the recipient uses them to help restore it to the original form. As the sender and recipient use those numbers, the computer discards them: think of the tape recording in "Mission: Impossible" when the message self-destructs. So even if a foe captures the scrambled message, then learns which pattern was used to grab numbers from the stream, the snoop won't be able to decode the message, because the crucial random numbers from the stream will be gone.

Policy issues, such as whether unbreakable codes will give terrorists an unbeatable edge in hiding their activities, can come later, when and if hyper-encryption is put into practice. For now, "we can prove secrecy," says Rabin, obviously delighted at going public with his brainchild.

How hyper-encryption works

Professor Michael Rabin's mathematically secure scheme allows people to pass secret messages that stay secret, forever.

A huge stream of random bits is broadcast by a source. Alice and Bob pluck bits in a secret, prearranged pattern from the stream. When Alice sends her message to Bob, those bits help scramble the message. Bob also knows which bits were taken so he can unscramble the cipher. Alice and Bob don't retain the random bits, and an eavesdropper can't break the code because the random stream can't be duplicated or completely stored.
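Rabin's full construction has not been published, but the one-time-pad step it builds on — XOR the message with random bits, then discard those bits — can be sketched in Python. This is only an illustration of the pad mechanics: the message is made up, and a real system would pluck the pad from the public random stream rather than generate it locally.

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte.
    Applying the same pad twice restores the original message."""
    return bytes(m ^ p for m, p in zip(data, pad))

message = b"meet at midnight"

# Alice and Bob would pluck these bytes from the broadcast stream in a
# secret, prearranged pattern; here we just generate them for illustration.
pad = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, pad)   # Alice scrambles
recovered = xor_bytes(ciphertext, pad)  # Bob unscrambles with the same pad
assert recovered == message

del pad  # both sides discard the key bits -- the "self-destruct" step
```

Once `pad` is gone, the ciphertext alone is useless to an eavesdropper: every possible plaintext of the same length is equally consistent with it.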


Source: WorldLink, Mar/Apr2001, p10, 2p 

Author(s): Matthews, Robert

What connects the number of war dead or the intensity of an earthquake with mounds of grain? Robert Matthews describes research that shows a numerical correlation among the three

A December night in New York City: a taxi stops on Fifth Avenue, near Central Park. The passenger gets out and starts to cross the road. He is English and for a moment forgets that in the US they drive on the right. It is already too late: he is struck by a car travelling about 50 kilometres an hour.

By some miracle, he is not killed, but it takes him months to recuperate fully. His recovery was more than a personal victory. The year was 1931, and the man was Winston Churchill. Had the car been travelling just a little faster, the 57-year-old future British prime minister almost certainly would have died.

That road accident in Manhattan 70 years ago was one of those hinges of fate on which the history of entire nations has rested. It is far from unique, of course. The torrential rain that allowed the lightly armed force of England's Henry V to defeat the far larger French army at Agincourt in 1415 is a famed example, as is the heap of unwashed dishes in the laboratory of Alexander Fleming that led to the discovery of antibiotics. Churchill himself wrote of how in 1920 the king of Greece died after being bitten on the nose by a monkey, setting into motion events that led to a war in which 250,000 Greeks and Turks died.

In the past, some historians have seen such anecdotes as evidence that historical events are the result of actions by just a few key people. Or, as the 19th century British historian Thomas Carlyle put it, "history is the biography of great men".

Most modern historians see this as too glib, and the hinge-of-fate view of history has fallen into disrepute. Events of historical significance are now typically viewed as the product of a host of interacting forces, none of which can be identified as indisputably crucial.

But is this modern view of historical events anything more than current academic fashion? Is there a way of deciding which -- if any -- of these two views is closer to the truth? Intriguingly, a number of scientists are starting to claim that there is. They are talking of an astonishing possibility: of using mathematics to cast light on the causes of history.


To many, the suggestion that mathematics can be applied to human affairs is absurd. People and events surely do not follow the dictates of an equation to four decimal places. Of course, those making the claims are saying no such thing. What they are saying, however, is that mathematics, if used judiciously, can cast an altogether new light on historical arguments.

While the full implications are still being explored, the first hints of this intriguing possibility emerged more than 70 years ago in the work of English physicist Lewis Fry Richardson.

Eclectic in his interests, Richardson is now recognised as one of the pioneers of multidisciplinary research, one who saw no barriers between the physical and social sciences. Trained as a physicist at Cambridge University, he completed a degree in psychology in his late 40s, and spent the last years of his life investigating how mathematics might shed light on armed conflict.

Richardson also wrote two books based on some of his results. The first, Arms and Insecurity, was an attempt to understand arms races, and how they spiral out of control and lead to war. While Richardson was able to derive some support for his ideas from military expenditure figures of opposing forces before World War I, the predictions of the theory have proved hard to square with subsequent wars.

His second book was less ambitious, but is now emerging as potentially far more important. Entitled The Statistics of Deadly Quarrels, it attempted to bring together data on all wars since 1820.

Among the book's various tables is one listing the number of wars in which deaths exceeded a given figure. For example, he found that between the years of 1820 and 1929 there were 24 conflicts in the world, each of which resulted in more than 30,000 deaths. More accurately, Richardson stated the loss of life in terms of "magnitudes", calculated by converting the raw figure into logarithms, with 30,000 becoming a magnitude of 4.5.
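Richardson's conversion is just a base-10 logarithm, so the 30,000-death figure in the passage can be checked in one line of Python:

```python
import math

# Richardson's "magnitude" of a conflict is the base-10 logarithm of its
# death toll, in direct analogy with earthquake magnitudes.
deaths = 30_000
magnitude = math.log10(deaths)
print(round(magnitude, 1))  # 30,000 deaths -> magnitude 4.5
```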

His terminology is suggestive, for it hints at a link with another form of global catastrophe, earthquakes. The parallels go deeper than mere words, however. Plotting a graph of the numbers of wars against the death-toll magnitudes they produced, Richardson made a surprising discovery: they followed a straight line. At its extremes, the line reflected common sense, showing that there have been relatively few wars producing huge death-tolls. Skirmishes resulting in several thousand deaths are far more common. But, remarkably, conflicts between these two extremes also lie on the same straight line.

As such, Richardson's warfare graph bears a striking resemblance to another, more famous graph: the Law of Earthquake Violence. The law was first identified in 1956 by American geophysicist Charles Richter and his colleague, Beno Gutenberg.

Like conflicts, devastating quakes are mercifully rare, while minor tremors occur all the time. But, as Richardson also discovered with conflicts, quakes of intermediate magnitude also lie along the same straight line.

What could be the connection between the two? According to new research the answer lies, surprisingly, in the behaviour of heaps of grain.

If you empty a bag of rice onto a plate, you'll end up with a more or less stable conical heap once the grains have settled. Now slowly start adding more grains, one by one. At first it makes no difference. But some of the falling grains will eventually trigger avalanches down the sides of the heap. And every so often, a single grain will cause a whole side of the heap of rice to collapse.

In 1995, Kim Christensen and his colleagues at Imperial College, London, carried out a detailed study of the precise size and number of these avalanches. The results of the study showed that they too followed the same straight-line law as earthquakes and wars, and that the big ones are rare and the small ones common.

Some scientists believe this is significant. For the behaviour of grains is known to be a manifestation of so-called self-organising criticality (SOC), in which systems teetering on the brink of instability suddenly organise themselves into a more stable state.

There's no telling which of the grains will cause the sudden change, or how big the change will be. All that can be said is that the number of such changes and their magnitude will follow a straight-line law.
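The grain-heap behaviour described above can be reproduced with a toy model. The sketch below is a minimal Bak-Tang-Wiesenfeld-style sandpile in Python (the grid size, grain count, and avalanche-size buckets are arbitrary choices, not from the Imperial College study): a cell topples when it holds four or more grains, shedding one grain to each neighbour, and the number of topplings triggered by a single dropped grain is that avalanche's size. Small avalanches vastly outnumber large ones, as the straight-line law predicts.

```python
import random

def sandpile_avalanches(size=20, grains=5000, seed=1):
    """Drop grains one at a time onto a grid; any cell holding 4 or more
    grains topples, sending one grain to each neighbour (grains fall off
    at the edges).  Returns the avalanche size (total topplings) caused
    by each dropped grain."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        grid[random.randrange(size)][random.randrange(size)] += 1
        topplings = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(size):
                for j in range(size):
                    if grid[i][j] >= 4:
                        grid[i][j] -= 4
                        topplings += 1
                        unstable = True
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            if 0 <= i + di < size and 0 <= j + dj < size:
                                grid[i + di][j + dj] += 1
        sizes.append(topplings)
    return sizes

sizes = sandpile_avalanches()
small = sum(1 for s in sizes if 1 <= s <= 5)
large = sum(1 for s in sizes if s > 100)
print(small, "small (1-5 topplings) avalanches vs", large, "large (>100) ones")
```

Plotting the avalanche-size counts on log-log axes would produce the same kind of straight line that Richardson found for wars and Richter and Gutenberg found for earthquakes.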

A growing number of geophysicists suspect that the Richter-Gutenberg law of earthquakes is evidence that the earth's crust is in a critical state; the slightest disturbance is capable of generating an earthquake of any magnitude.

But now some researchers think that Richardson's law of the correlation between the size and frequency of conflicts is also linked to criticality. The implications are intriguing. For, as physicist and writer Mark Buchanan points out in his new book, Ubiquity: The Science of History, Richardson's law would then mean that "the world's political and social fabric tends to be organised on the very edge of instability."

It would also mean that, just as a single grain can bring on any size of avalanche, the ultimate size of a conflict cannot be predicted. Trivial causes could lead to a border skirmish or all-out war.

Such implications seem to strike a chord with some historians, who are starting to describe events as social "earthquakes" that follow the same basic law as their geologic counterparts. Niall Ferguson of Oxford University has said that the war-torn period between 1914 and 1945 "may be likened to the slipping of a continental plate, and to the resultant season of earthquakes". The mathematics of criticality adds new and surprising depth to such metaphors.

It also casts light on the key question of whether it makes any sense to search for the cause of a given historical event, in the hope that this same cause may be identified again one day.

If historical events are the result of society being in some form of critical state, then the answer is no. There is no more chance of being able to identify the key stimulus unleashing historical upheaval than there is of being able to say which piece of falling rice will trigger a huge avalanche of grain.

Scientists working on this fascinating question are quick to insist that they are not claiming a mathematical proof of Henry Ford's famous dictum that "history is bunk". For a start, the evidence that any aspect of society is in a critical state is still tentative, though the search is on for more.

Even so, Richardson's law is certainly suggestive. His 70-year-old graph has since been updated and refined by a number of researchers. But its basic form remains unchanged, and its message is the same: some aspect of society is constantly teetering on the brink of immense change.

Carlyle's view that history boils down to the actions of a few key people is not so much wrong as incomplete. For the picture now emerging with mathematical clarity is that history is not merely the result of the actions of people. They must also be in the right place at the right time if they are to trigger a revolution rather than raise a few eyebrows.


Source: ETC: A Review of General Semantics, Spring2001, Vol. 58 Issue 1, p22, 14p


"Investigators have begun to address in earnest the effect that language has on mathematical development"

RESEARCH NOW SEEMS POISED to start in earnest to fill the great academic gap between English and mathematics.

If you're a regular reader of this series, you'll understand the Mathsemantic Monitor's glee in making this announcement. You know that most of the previous twenty-three pieces have deplored in one way or another the total separation of our two most basic disciplines.

The very first piece told of how reports in English said that students had "read 80,000 books" at a school in New Jersey, whose library held only 9,500 books, so that the 80,000 actually applied to the readings rather than to the books. (1) This kind of numerical displacement is a standard feature of good English. To illustrate the significance of such displacements, the article went on to note a Federal Aviation Administration report that it had handled "about 143 million aircraft" in fiscal 1992. Now what makes this statement remarkable is that the entire U.S. scheduled air-carrier passenger fleet at the time amounted to only about five thousand aircraft. With that in mind, you can see that the words, "143 million aircraft," cloak the fact that for the year any particular aircraft had about one chance in seven of encountering an FAA operational error.(2)
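The scale of the numerical displacement is easy to see from the article's own figures (a sketch; the fleet size is the approximate number quoted above):

```python
# Figures from the passage: the FAA reported handling "about 143 million
# aircraft" in fiscal 1992, but the entire U.S. scheduled air-carrier
# passenger fleet held only about 5,000 aircraft.  The 143 million must
# therefore count handlings, not aircraft.
handlings = 143_000_000
fleet = 5_000
per_aircraft = handlings / fleet
print(f"each aircraft was handled about {per_aircraft:,.0f} times that year")
```

Dividing out the displacement shows each aircraft was "handled" tens of thousands of times, which is what makes the per-aircraft odds of meeting an operational error so much higher than the raw phrasing suggests.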

What's really significant here is that the cloaking of the one-in-seven chance per aircraft results not from any misadventure of English or math alone, but only from their combination. We fail to penetrate the cloak not because we're inadequate in either math or English, but because we've studied math and English only separately, each in its own special academic compartment. As a result, this kind of numerical displacement by English has not been brought to our attention. We have no inkling that there's a cloak to remove.

The most recent piece more clearly targeted the academic separation. (3) It contrasted our locally general language, English, with the globally special language, math, which happens to communicate relationships, such as interrelated rates of change, particularly well. Unfortunately the impact of the changes, whether environmental or of other practical concern, can't be appreciated without blending both languages, English and math, a blending ability few people have. This situation so concerned the Mathsemantic Monitor that he promulgated a 30th proposition, a summary to go with the 29 earlier ones featured in his book.(4)

30. Mathsemantic competence requires a reasonable combination of simple math and ordinary English (or whatever ordinary language you speak).

To remedy the situation, he pleaded for academic English-math cooperation. "How about it? Let's start tearing down the barrier."

So why the sudden glee? Surely the world has not changed that much. No, but it has changed a little, and so also has the Mathsemantic Monitor's outlook. "Perhaps," he now imagines smilingly to himself, "I'll get to see some English-math rapprochement in my own lifetime." (5)

Older Studies

Interest in the meanings of mathematics has existed since ancient times. Pythagoras, for example, around 540 B.C., presumably after discovering the "wonderful harmonic progressions in the notes of the musical scale, by finding the relation between the length of a string and the pitch ... saw in numbers the element of all things." For Pythagoreans doing geometry, for example, the number "one came to be identified with the point; two with the line; three with the surface, and four with the solid." However, it didn't stop there. "One was further identified with reason; two with opinion"; and "four with justice." "Five suggested marriage, the union of the first even [=2] with the first genuine odd number [=3]." The "attachment of seven to the maiden goddess Athene," seems to have stemmed from the fact that seven isn't involved with multiplication as much as six, eight, nine, and ten. (6)

A quite different interest in mathematical meanings began twenty-five centuries later with the efforts of child psychologist Jean Piaget. He wanted to know how a child came to develop various abilities, including mathematical ones. He and his associates, notably Bärbel Inhelder, studied experimentally the child's "conception of" time, number, space, geometry, speed, and logic, among other topics. The many books produced by Piaget and the Piaget laboratories in Switzerland, starting (in French) in the 1920s, reached a flood several decades later. Their most basic finding in each case, always based on experimental results, was that a child's understanding develops in particular stages. (7)

In a child's development of a conception of relationship, for example, Piaget found three stages. To take a nonmathematical instance, in the first stage (at about age six), a child says simply that a brother is a boy. In the second and clearly intermediate stage (at about age nine) a child says that a brother is, "When there is a boy and another boy," but also says that only "the second brother that comes is called brother." Only in the third stage does the child reach an adult conception, "a brother is a relation, one brother to another." (8)

Despite his early emphasis on a child's language (9), Piaget did not question specifically how language might affect a child's acquisition of mathematics. Nor did his followers. For the most part they regarded language as a natural product of the child's thought, and the child's thoughts as a natural product of the child's stage of development. Their research concentrated on determining what these stages were, when they appeared, and how the child moved from one stage to another.

In an unrelated line of studies, ethologists demonstrated, starting with birds, that animals have at least some innate sense of number. The experiments of Otto Koehler (1889-1974) showed that pigeons, for example, could distinguish the differences between numbers of objects (or actions) only to five, after which they became confused. Ravens and parrots could get to seven. Humans do better because they can count aloud or to themselves in words, "one, two, three,..." To determine the human ability excluding language, Koehler flashed different numbers of objects on a screen too briefly to allow a count. Under these circumstances, people's abilities were remarkably similar to those of birds. Some human subjects could distinguish only to five, and few reached as far as eight. (10)

More recent studies have used the window into nonverbal cognitive capacities afforded by attention time. (11) It is now well established that animals and human infants will gaze at unexpected scenes longer than at expected ones. Researchers can thus investigate expectations ("thoughts," "built-in beliefs") without language. If, for example, a caged monkey pays more attention to an object that freefalls on a slant (controlled by invisible wires, say) than to one that falls straight down, then one can conclude that the monkey expects the straight drop. This means that the monkey has a kind of prelinguistic grasp of the effects of gravity.

Using attention-time experimental designs, researchers have shown that children develop some abilities much sooner than Piagetian research had shown. For example, such studies reveal that human infants as young as four months expect ("think") that one and one will equal two. A typical experiment goes like this: build a miniature stage with a movable curtain. With the curtain open, show the infant a single doll approaching, entering into, and then appearing in the stage area. Close the curtain. Now show a second doll doing the same thing. Open the curtain. If the stage now has two dolls where only one had been before, the infant quickly loses interest. If, however, the stage (which has an unseen back door through which the experimenter can "alter reality," so to speak) now shows only one doll, the infant will gaze at the stage for a longer time. In this way the experiment demonstrates that the infant expects one and one to make two.

From such clever experiments it has now been shown that certain arithmetical abilities are clearly prelinguistic, probably innate, and at the least develop without known instruction. These abilities include subitizing (the ability, already noted, to distinguish precisely between small numbers), the adding and subtracting of very small numbers, and approximating (the ability to make reliable distinctions between larger collections of quite different size, such as between 20 and 12).

These modern studies, then, focused on how mathematical ability relates to factors other than language.

A More Recent Trend

More recently, however, language has come to the fore. Investigators have begun to address in earnest the effect that language has on mathematical development. I'll mention five books that illustrate this trend. Eleanor Wilson Orr's 1987 book, Twice as Less, focused on the effect that non-standard English usage might have on learning mathematics. (12) It specifically asked on the dust jacket, "Does Black English stand between black students and success in math and science?" It's not an easy book to read, because it deals with algebraic word problems, many people's worst dread. It suffers also from its narrow concentration on one group. For example, it attributes some usages to Black English (such as, "two times smaller than") that are also found in the scientific writings of astronomers, microbiologists, and computer scientists. That the book arrived while some people were pushing Black English (Ebonics) didn't help, and it has apparently not shaken its unfortunate status as politically incorrect.

The Mathsemantic Monitor's own book Mathsemantics: Making Numbers Talk Sense appeared in 1994. (13) It said there's a field available for scholarly study where math and the meanings of ordinary language interact, but which both disciplines have neglected, and it dubbed this area "mathsemantics." It described the dimensions of this field as involving both "the math side and the semantics side, childhood beliefs and stages, errors of all types, education and math anxiety, linguistic and cultural differences, evolution and history, math notation and number-memory, games and sports, childhood exercises that develop mathsemantic savvy, physical science and its philosophy, money and jobs, politics and the media, business and the professions, population and the environment, estimating and accounting, punctuality and time frames, gender differences, statistics and surveys, the future, what we can do about it, and so on; you name it." It wouldn't be fair, perhaps, to cite it, by itself, as particularly indicative of any trend.

Stanislas Dehaene's book The Number Sense: How the Mind Creates Mathematics appeared in 1997. (14) It presents neurological and other studies of innate mathematical abilities, the so-called "primitive module," and also the developmental effects of language.

Though mathematical language and culture have obviously enabled us to go way beyond the limits of the animal numerical representation, this primitive module still stands at the heart of our intuitions about numbers.

Dehaene asks, "How did Homo sapiens alone ever move beyond approximation?" and answers, "The uniquely human ability to devise symbolic numeration systems was probably the most crucial factor." He then traces some of the development. Languages distinguish through inflection the differences between one and two (in English, singular and plural), "but no language ever developed special grammatical devices beyond 3." Counting depends on the body, so that "in countless languages ... the etymology of the word 'five' evokes the word 'hand'"; "children spontaneously discover that their fingers can be put into one-to-one correspondence with any set of items"; and in New Guinea the counting by body parts has evolved so far that "the word six is literally 'wrist,' while nine is 'left breast.'" Try counting with your right hand on your left hand, follow through on your wrist and arm, and you'll see how this works quite naturally.

Keith Devlin's The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip appeared in 2000. (15) It quickly disavows the idea of a single math gene, saying it is a purely metaphorical expression for the idea that one has "an innate facility for mathematics." Devlin then makes language the central player.

My argument that you possess the math gene -- i.e., that you have an innate facility for mathematics -- is simply this: your genetic predisposition for language is precisely what you require to do mathematics.

Devlin defines mathematical thought more narrowly than most people would. He excludes subitizing, counting, adding, subtracting, and approximating, whether these are the products of innate abilities or aided by language. For him, mathematics requires a level of abstraction beyond ordinary language.

As he sees it, language is the third level of abstraction, the first one that permits off-line thinking. (16) (To follow Devlin's arguments, one must regard language not as communication but as a representational system.) This level is found only in humans and permits them to have "imaginary versions of real objects," which is "to all intents and purposes, equivalent to having language."

What Devlin says distinguishes the fourth (or mathematical) level is that "mathematical objects are entirely abstract; they have no simple or direct link to the real world, other than being abstracted" from it. The simplest example is algebra, where letters (such as a, b, x, y) stand for numbers: not particular numbers, but variable numbers, numbers in the abstract.

Devlin then discusses how particular characteristics of language, as given mainly in the works of Bickerton (17) and Chomsky (18), enable the further step to mathematics.

A Korzybskian could with good reason dispute some of Devlin's distinctions. Nevertheless, one should applaud his attempt to show that mathematics is an outgrowth of ordinary language abilities.

Another book published in 2000, the fifth and final book I wish to mention here, is Lakoff and Nunez's Where Mathematics Comes From. (19) The authors are, respectively, a linguist and a psychologist. They claim with some justification that their book is the first real attempt to ground mathematics totally in the human brain, body, and language.

It starts by reviewing the by-now familiar territory of innate abilities but quickly moves to more fundamental embodied-mind abilities, such as basic motor control, which gives rise to the source-path-goal schema. What makes the schema important are the distinctions it involves: a starting point, a moving trajector, an intended destination, an intended route, the actual trajectory, position at a given time, direction at that time, and the actual final location.

From this and simpler schemata (such as "container"), the book argues, each human creates categories and basic logical relations. The creations arise through metaphorical mappings. For example, take the "categories are containers" metaphor, which maps things true of containers onto categories. If one puts an item in container A, and puts container A in container B, then the item is in container B. Change "container" to "category," and you have the metaphorical mapping. The argument proceeds with laudable care.
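The "categories are containers" mapping can be made concrete with a minimal sketch (not from the source; the bird names and category labels are invented for illustration), using Python sets to stand in for containers. What is true of containers, that an item inside an inner container is also inside the outer one, maps onto categories as the transitivity of category membership.

```python
# Illustrative sketch of the "categories are containers" metaphor.
# Sets play the role of containers; subset stands for "inside."

item = "robin"
container_a = {"robin", "sparrow"}           # inner container: "songbirds"
container_b = container_a | {"hawk", "owl"}  # outer container: "birds"

# If the item is in container A, and A lies inside container B,
# then the item is in container B.
assert item in container_a
assert container_a <= container_b   # A is "inside" B (subset)
assert item in container_b          # hence the item is in B
```

Swapping "container" for "category" in the comments gives the metaphorical mapping the book describes: a robin is a songbird, songbirds are birds, so a robin is a bird.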

From such beginnings, with the critical addition of the ability to use symbols (language), the book argues that humans have created mathematics by a long (and unended) series of metaphorical mappings (such as of "object collection" and "motion along a path" onto arithmetic). A critical metaphorical mapping was that of "iterative processes" ("John jumped and jumped again, and jumped again") onto the unending series of integers used for enumeration. This permitted a numerical characterization (but not the only one) of mathematical infinity.

The metaphorical mappings grow ever more complicated until they encompass algebra, symbolic logic, sets and hypersets, real numbers, transfinite numbers, infinitesimals, and the discretization program (in which innate geometry goes out the window, calculus and space become motionless arithmetic, and "points" are not "in" space; rather the numbers for the points define space). It's a veritable tour de force.

The book, as the authors say, is not mathematics; it's a cognitive study of mathematics. It tries to trace mathematics back to human experience, an embodied mind. In a way it's like Ernst Mach's program of showing that abstract concepts draw their ultimate power from sensuous sources, but with a sophistication Mach might never have imagined.

Now, what these five books have in common is an emphasis on the relationship of mathematics and ordinary language. (20) Orr shows how non-standard English can disrupt mathematical learning. MacNeal identifies the area where mathematics and ordinary meanings mix and names it "mathsemantics." Dehaene says we share innate, primitive, mathematical abilities with many animals that language permits us to transcend. Devlin says mathematical ability (in his more abstract sense) depends on a further abstraction from ordinary language. Lakoff and Nunez detail the progression from innate math and language, via schemata and metaphor, to algebra and higher math.

Perhaps at no other time have so many books appeared that investigate, each in its own way, the relationship of math to ordinary language.

A Current Research Project

The Mathsemantic Monitor's book addressed a combined math language overlap field and asked a question outside the purview of most present-day math and English teachers. It is a question, however, that readers of this journal might see as central; for it is in the Korzybskian tradition. The question is this: In what ways does an unconscious reliance on the meanings implicit in ordinary language disturb mathematical understandings that conscious training could avoid? The book presents examples showing many kinds of disturbance at work, which reviewers have generally applauded. (21)

That much done, the next question would seem to be this: Where should the necessary conscious training take place? The answer is not easy, for neither of the usual candidates, math and English departments, has jurisdiction over mathsemantics. It's apparent that what's needed is some kind of alliance of English-math interests. Sadly, until a short time ago, the Mathsemantic Monitor had had no success in generating any such alliance. (22)

The turning point was an e-mail from a Dr. Kurtis H. Lemmert, Associate Professor of Mathematics at Frostburg State University (FSU), the westernmost institution of the University System of Maryland. Dr. Lemmert fired off his e-mail as a shot in the dark. He wanted to know if the author of Mathsemantics had any ideas that he (Lemmert) might pursue on a sabbatical he intended to seek. The answer was yes.

A few e-mails later, Dr. Lemmert had proposed a program of mathsemantics research for his sabbatical, and, through friends in the FSU English Department, had found a Dr. Glynn Baugher, who had recently become Professor Emeritus of English. Dr. Baugher was interested. Math had been his undergraduate major. The three of us now constitute a mathsemantics research team representing English, math, and business.

Our first objective is to determine where mathsemantics might be taught. The research plan approaches this by a survey that asks (a) whether, and in what courses, respondents have studied mathsemantic problems in their own classroom education and (b) whether, and in what courses, they think such problems should be addressed in formal education.

The medium is a questionnaire listing fourteen different mathsemantic problems. The questionnaire names each problem, gives an example, states why it is a problem, and then asks the two questions, "Did you study this in class; if so in what class?" and "Do you think such instruction should be given?" (with "English," "Math," and "Other" listed as choices, and room given to write in the others).

The questionnaire makes no attempt to resolve the problems. Its purpose is to collect information, not to teach. However, to avoid a single bias toward math or English, the questionnaire has two versions, one slanted toward each subject. Thus, where the math version speaks of "multiples," "adding and multiplying," "downward comparisons," and "countable and uncountable things," the English version refers to "plurals," "combining," "marked adjectives," and "mass versus count nouns." Both versions, however, provide examples in exactly the same wording, ask the same two questions, and request the same classifying information (age, sex, educational status and attainment, field of study or work, geographic location, etc.).

By now, the initial surveys will have been run at FSU, the primary site, and (for comparative purposes) at a private school near Philadelphia. An internet version will have collected responses from other places, which, with the assistance of Dr. John Gough of Deakin University, should include a batch from Australian math educators.

Preliminary results of the survey have probably also been announced by now in a symposium, "Making Mathematics Meaningful," scheduled for May 4, 2001, at FSU, the 30th in the mathematics series held there annually.

The initial returns (fewer than 100) already show some interesting results. In total, the "should be taught" responses run about double the "have studied" responses, although this varies by problem. Both English and math figure to some degree in the answers to every problem, but respondents assign some (like "approximations") more, but never exclusively, to math, and others (like "plurals versus collectives") more to English. Some interesting problems, like "adding (combining) unlike things," have many respondents picking both math and English, a choice made possible but nowhere mentioned in the questionnaires.

If you have access to the internet, you can see the English questionnaire at http://www.mathsemantics.com/QNRE.shtml. For the math version, just change QNRE to QNRM.

The survey is by invitation only, one questionnaire per person. Therefore, if you'd like to send in a response, please pick just one or the other, whichever you like. Then, in the box for group invitation code, enter TMM-24 (which means "The Mathsemantic Monitor, article #24," the number of this one). If you want to receive e-mail information on the survey's results, there's a box for that also, near the end.

If you wish to participate in any other way in the research, you can mention that in a final "comments" box. Please note that, to the extent possible, we hope all on-campus distributions of the questionnaires will involve two-person teams, one member to represent the more math-related fields and the other to represent the more English- or humanities-related fields.

Dr. Lemmert, assisted by Dr. Baugher and me, is planning to give a short mathsemantics seminar this summer or fall. This will permit collection of after-seminar responses, to show the extent to which the seminar raises mathsemantics awareness and how this alters views regarding where corrective instruction should be given.

Drs. Lemmert, Baugher, and I, as a team, intend to write reports for our respective audiences; and Dr. Gough, for his. If all goes well, you'll be receiving reports in future articles in this journal series. Look for them. Meanwhile, you could check at http://www.mathsemantics.com/MS-Research.html for news.

In his persona of aviation consultant, demalogician, etc., Edward MacNeal is a regular contributor to these pages. Viking published his Mathsemantics: Making Numbers Talk Sense in 1994. The International Society for General Semantics published his MacNeal's Master Atlas of Decision Making in 1997. Copyright (C) 2001 Edward MacNeal.


Source: Journal for Research in Mathematics Education, Mar2001, Vol. 32 Issue 2, p195, 28p, 1 chart, 1 graph 

Author(s): Hershkowitz, Rina; Schwarz, Baruch B.; Dreyfus, Tommy

We propose an approach to the theoretical and empirical identification processes of abstraction in context. Although our outlook is theoretical, our thinking about abstraction emerges from the analysis of interview data. We consider abstraction an activity of vertically reorganizing previously constructed mathematics into a new mathematical structure. We use the term activity to emphasize that abstraction is a process with a history; it may capitalize on tools and other artifacts, and it occurs in a particular social setting. We present the core of a model for the genesis of abstraction. The principal components of the model are three dynamically nested epistemic actions: constructing, recognizing, and building-with. To study abstraction is to identify these epistemic actions of students participating in an activity of abstraction.

Key Words: Abstraction; Activity; Construction of knowledge; Context

Abstraction has been the focus of extensive interest in several domains, including mathematics education. Many researchers have taken a predominantly theoretical stance and have described abstraction as some type of decontextualization. In this article, we propose a different view of abstraction and show how our view leads to a fresh approach to research on abstraction.

We are practitioners who are informed about recent theoretical research, but we are also deeply involved in a curriculum design, development, and implementation project. This curriculum has been built around extended problem situations. We have been considering not only what abstraction could mean in the framework of this curriculum project but also how processes of abstraction manifest themselves empirically in project classrooms. Thus, although the outlook of this article is theoretical, our thinking about abstraction has emerged from the analysis of experimental data.

Our empirical approach led us to focus primarily on process aspects of abstraction rather than on outcomes. We see abstraction as a process in which students vertically reorganize previously constructed mathematics into a new mathematical structure. We also pay careful attention to the multifaceted context in which processes of abstraction occur: A process of abstraction is influenced by the task(s) on which students work; it may capitalize on tools and other artifacts; it depends on the personal histories of students and teachers; and it takes place in a particular social and physical setting. We thus take a sociocultural point of view, as opposed to a purely cognitive or a purely situationist one.

While investigating processes of abstraction in a number of case studies, we identified three observable epistemic actions that are characteristic for abstraction: constructing, recognizing, and building-with. We present one case study, a teaching interview of a Grade 9 student. We designed a task to encourage this student to capitalize on outcomes of previous abstractions to construct new knowledge. We show how the three epistemic actions emerged during the teaching interview. Finally, we propose a model of abstraction that integrates the three epistemic actions in a dynamically nested manner; the nesting takes into account how the student makes use of previous abstractions. Moreover, the context in which the process of abstraction occurs is vital in the characterization of the components of the model. The model constitutes the main result of this article. It is operational in the sense that it provides means to empirically study abstraction.


Abstraction has been an object of intense inquiry in philosophy. Not only did Plato and his followers see in abstraction a way to reach "eternal truths," but modern philosophers such as Russell (1926) characterized abstraction as one of the highest human achievements. Rather than provide a review of the vast domain of research on abstraction, we will refer only to work that is immediately relevant to this article.

Cognitivist Approaches to Abstraction

Classical cognitive psychologists considered (a) the extraction of commonalties from a set of concrete exemplars and (b) the corresponding categorization as the main features of abstraction (e.g., Rosch & Mervis, 1975). To them, abstraction is the transition from concrete to abstract, that is, to the set of commonalties. Piaget's (1970) idea of reflective abstraction led to a remarkable extension of this classical approach. It allowed him to deal with the categorization of mental operations and thus with abstraction of mental objects. The outcomes of reflective abstraction, the schemes, are the building blocks of knowledge at every level of development. Reflective abstraction extracts schemes from a pattern of related actions. This process leads to constructive theoretical models that are logically consistent.

Following Piaget, several mathematics educators have proposed descriptions of the process or mechanism by which students shift their foci from the concrete to the abstract (see Dreyfus, 1991, for a brief review). For most of these educators, abstraction proceeds from a set of mathematical objects (or processes) and consists of focusing on some distinguishing properties and relationships of these objects rather than on the objects themselves. The product of abstraction consists of the class of all objects that have the distinguishing properties and enter into the distinguishing relationships. This process of abstraction is thus a process of decontextualization--of ignoring both the objects and some of their features and relations, often those linked to a particular realization or representation. The process is linear, proceeding from the objects to the class or the structure, which may then be considered an object on a higher level. In the classical approach, abstract is considered an intrinsic property of this new object; this property is, however, not directly accessible.

In spite of the animated theoretical debate that has taken place on the nature of abstraction, little experimental research is available. For example, Stevenson (1998) has recently stated, "Although there is little or no empirical support for [von Glasersfeld's] specific assertions about the progressive abstraction of concepts, he does provide a clear account of how they might develop" (p. 94). We surmise that the lack of experimental evidence is due to the difficulty of observing the processes of abstraction (as opposed to the products, for which there is more evidence). A notable exception is a study conducted by Goodson-Espy (1998), who observed abstraction during problem solving. Although firmly anchoring her study in the framework proposed by Sfard (1991), Goodson-Espy used the notion of the levels of abstraction theoretically proposed by Cifarelli (1988/1989). For example, the lowest level, namely the ability to recognize characteristics of a previously solved problem in a new situation, is called recognition, a notion we will discuss; in other words, abstraction depends on the personal history of the solver.

This view of dependence on the personal history of the solver conforms to the views of theorists who recognize the importance of context in processes of abstraction. Not only personal history but also the use of tools and social interactions are contextual factors that may influence abstraction processes. The contradiction between decontextualization and the dependence of the abstraction process on context is only apparent. Two separate notions of context are involved: the context of mathematical objects and a set of external factors. The person who abstracts gradually ignores the context of the various mathematical objects. However, the set of external factors may influence this process of abstraction. In the cognitivist approach, the context that may influence the process of abstraction is thus considered as a set of external factors. In the next subsection, we take a different position with respect to context.

Recently, several authors have criticized the classical approach and proposed other approaches. For example, Ohlsson and Lehtinen (1997) stated that to identify an object as an instance of an abstraction, the knower must already possess that abstraction in some way. The cognitive mechanism of abstraction is the assembly of existing ideas into more complex ideas. Thus, the process does not lead unidirectionally from concrete to abstract. Concrete and abstract are not separate entities but are linked rather than detached during the process of abstraction.

Even more fundamentally, Confrey and Costa (1996) criticized the primacy given in the classical approach to the very notion of mathematical object. They claimed that this primacy may reinforce a narrow perspective of the mathematics community because it separates mathematical thinking from its origins in social contexts and neglects the development and use of mathematical tools. Others have expressed criticism of the fact that abstraction is considered as a mental activity of a solipsistic character in which the role of the environment (social interactions, tools) is disregarded (e.g., Greeno, 1997). Noss and Hoyles (1996) situated abstraction in relation to the conceptual resources students have at their disposal: When students progress through a succession of activities (in a social context, in the presence of tools), they learn to attune practices from previous contexts to new ones. Therefore, according to Noss and Hoyles, students do not detach from concrete referents at all. On the contrary, there is a process of webbing, in which students connect to previous similar activities and capitalize on the tools they have at their disposal to construct new mathematical knowledge. However, Noss and Hoyles did not clearly articulate the link between webbing and the construction of new knowledge and thus did not provide a framework within which to investigate the process of abstraction.

In the perspective on abstraction we take in this article, we may similarly be seen as critical of the cognitivist approach. We will discuss two essential differences between our approach and the cognitivist approach. One difference is rooted in how context is conceived, and the other is whether processes of abstraction are linear or dialectic.

Context From a Sociocultural Perspective

Regarding context, the most salient difference between cognitivist and sociocultural approaches concerns the unit of analysis of human behavior. In his analysis of human development, Vygotsky (1934/1986) pointed out that the study of individual actions is doomed to failure. Rather, one must identify the meaningful cultural activities in the course of which individual actions occur. For example, when defining the zone of proximal development, Vygotsky made clear that the learner does not imitate isolated actions modeled by a more capable peer but participates in activities that are meaningful for him. In activity theory, Leont'ev (1981) articulated Vygotsky's implicit view on context. According to this theory, context can be defined as the interconnected collection of factors that frame the structure and meaning of human actions. The activity rather than the individual human action is the unit of analysis because it is "the minimal meaningful context for understanding individual actions" (Kuutti, 1996, p. 28). Activities are chains of actions related by a common content and carried out cooperatively or individually; the common content is designated by the term object. The object can be material, but it can also be intangible (e.g., a problem to be solved or a common idea), as long as the participants in the activity can share it for manipulation or transformation. The activity is driven by the participants' overall goals, which are termed motives. Participants are aware of their motives. Although the motives of participants may vary, the motives of all participants in an activity must be compatible. An activity always includes various artifacts (e.g., instruments, ideas, signs, procedures) through which actions are mediated. Artifacts may be created, manipulated, and transformed during an activity. An outcome of an activity can be an artifact to be used again in later activities (Bodker, 1997). In particular, ideas, strategies, or conceptions may be such outcomes.

The context of an activity is not only an external, objective description of the material conditions of the activity but also includes subjective components such as a participant's personal history, conceptions, and social relationships. As a consequence, context becomes an inseparable component of the activity because participants choose to carry out actions that seem relevant to them in the given context. This inseparability between context and activity is in contrast to the role of conditions that facilitate or alter (mental) actions, a role assigned by cognitivist researchers to contextual factors.

Van Oers (1998) used the activity-theory view of context for conceptualizing abstraction: "Starting from an assumption that conceives of context as constitutive of meaning, it becomes clear that the notion of 'decontextualization' is a poor concept that provides little explanation for the developmental process toward meaningful abstract thinking" (p. 135). One way to describe abstraction without reference to decontextualization has been proposed by Davydov (1972/1990). Davydov's theory leads us to the second major difference between the cognitivist and the sociocultural views of abstraction.

The Dialectic Nature of Processes of Abstraction

Davydov (1972/1990) developed an epistemological theory to account for a dialectical connection between abstract and concrete. Some of the tenets of his theory are similar to principles of activity theory: Practical activity serves as a basis for human thought; during such activity, people are aware of a motive and use tools. They take into consideration not only the properties of artifacts but also their potentialities, and the corresponding thought processes are different in nature. Thought processes that relate to the properties of artifacts may concern similarities and differences and lead to categorization; in contrast, thought processes that relate to potentialities inherent in artifacts include hypothetical thinking and the construction of justifications.

According to Davydov, cognition thus functions at two levels, the level of empirical thought and the level of theoretical thought. One's goal in empirical thought is to interconnect features of reality (for example, by observing similarities of and differences between things), whereas one's goal in theoretical thought is to reproduce reality. As posited by Davydov, everyday conceptions are generally attained through empirical thought. In contrast, scientific concepts are often attained through theoretical thought.

Davydov (1972/1990) described theoretical thought as "an idealization of the basic aspect of practical activity involving objects[1] and of the reproduction in that activity of the universal forms of things, their measures, and their laws" (p. 298). This activity gradually turns into a cognitive experimentation characterized by the fact that one (a) mentally transforms objects during the activity and (b) forms a system of connections between these objects. Theoretical thought consists then of the expression of the symbol-mediated being of objects, of their universality, or (as worded by Davydov) a "theoretical reproduction of reality" (p. 302).

For scientific concepts, empirical thought does not lead to the attainment of abstract knowledge, because this knowledge consists of connections throughout a whole system. To proceed to the construction of abstract knowledge, one needs dialectical logic. The learner needs to consider the links of the new theoretical knowledge with other components within a comprehensive whole, taking into account possible contradictions and integration. Davydov, in his method of ascent, proposed a description of the genesis of abstraction.

Abstraction starts from an initial, simple, undeveloped first form, which need not be internally and externally consistent. The development of abstraction proceeds from analysis, at the initial stage of the abstraction, to synthesis. It ends with a consistent and elaborate final form. It does not lead from concrete to abstract but from an undeveloped to a developed form of the abstract in which new features of the concrete are emphasized.

Davydov's theory is incompatible with most cognitivist theories, because most cognitivist theorists consider abstraction as a move from the concrete to the abstract. An exception is the approach proposed by Ohlsson and Lehtinen (1997): Their view of abstraction as an organization of existing abstract entities into a more complex structure is compatible with Davydov's move from an undeveloped to an elaborate form of abstraction. There is, however, a difference. Davydov, on the one hand, starts from a single, undifferentiated abstract entity. His process of abstraction consists of establishing an internal structure with internal links and results in a differentiated and structured entity. Ohlsson and Lehtinen, on the other hand, start from two (or more) entities that have been previously constructed; these entities have internal structure (that may be ignored for the present purpose). The process of abstraction consists of one's establishing external connections between the two existing entities with the aim of integrating them into a single, more complex structure.

The dialectical approach described by Davydov is highly relevant to educational research and practice. Much school learning concerns scientific concepts that are not directly attainable through empirical thought. Designers and teachers need to elaborate ready-made sequences of tasks that are intended to mediate the construction of students' scientific knowledge. In this process, they could conceivably gain from relying on Davydov's theory. However, the theory remains far removed from practitioners, designers, and policymakers. Issues such as how to design sequences of activities that lead students to abstraction or how to assess the abstractions they make cannot be addressed by a theory that is epistemological in essence and does not provide tools for studying processes of abstraction. This shortcoming exists not because the theory is wrong but rather because practitioners, designers, and educational psychologists have not reworked the theory in light of lessons drawn in classrooms, from the development of materials (written, software, methods, etc.), and from the description of learning episodes.

In the following sections, we will build on Davydov's theory from our practitioners' point of view and develop it into a functional definition of abstraction and a model to facilitate one's observation of processes of abstraction.


As mentioned in the introduction, we come to this research from the point of view of practitioners of mathematics education, and this position confers particular characteristics upon the research. The research presented in this article is an integral part of a long-term research and development project; our main goal in this project is to design and create a learning environment in which students will be engaged in meaningful mathematics. We view our research as being within a comprehensive setting, which includes all aspects of curriculum development from design considerations to large-scale implementation.

A main component of the present cycle of the project (Hershkowitz et al., in press) is an introductory course on functions for Grade 9. The course consists of a sequence of tasks around problem situations with a contextual frame that is thought to facilitate the growth of the function concept; through these problem situations, the mathematical structure of the topic of function is transparent. Most of the inquiry is done in small groups with computational tools at the students' disposal, and students are asked to write group or individual reports in which they report, compare, critique, and reflect on their hypotheses and solution processes.

We were overwhelmed and surprised by what we observed in trial classrooms. What occurred there was different in nature from what we had observed in the previous two decades of development and research. The need to describe the observed learning practices meaningfully led us to adopt the perspective of activity theory. We attempt to analyze students' construction of knowledge when they are investigating problem situations in context. As such, our research is bottom-up research: We do not hypothesize a theory and then collect data to check their consistency with that theory. Rather, from an activity-theory perspective, we begin with naive observation and documentation of the students' actions while they are doing meaningful mathematics. During the analysis and interpretation, we then adopt theories that fit our overall approach to mathematics learning, as well as additional theoretical ideas that emerge from the data and from the need to explain these data meaningfully. The purpose of the data is therefore often to serve as a basis, or partial basis, for the emergence of a theoretical idea. This is the perspective for the present study.

Our goal is to experimentally investigate abstraction; we aim to identify processes in which mathematical abstractions occur and situations that enable learners to appreciate and make efficient use of abstractions. The term abstraction is thus being used to refer both to a process and to an outcome; to distinguish between them, we will also use the terms process of abstraction or abstracting, on the one hand, and abstracted entity, on the other.

Our definition for abstraction will later serve as the theoretical guide to the establishment of an operational model for studying abstraction experimentally. This definition is a result of the dialectical bottom-up approach described above; by means of this approach, we reached successively refined descriptions of abstraction. Our definition is thus a product of our oscillating between our theoretical perspective on abstraction and experimental observations of students' actions, actions we judged to be evidence of abstracting. This oscillation between theoretical principles and experimental data is highly nonlinear. However, our presentation of the oscillation needs to be linear. Thus, the model and the ways in which it emerged from data will be described in the following section, whereas the definition itself and its theoretical roots will be discussed here. These roots, listed below, are based on the epistemological principles and the sociocultural background presented earlier.

Abstraction is an activity (in the sense of activity theory), a chain of actions undertaken by an individual or a group and driven by a motive that is specific to a context.

Context is a personal and social construct that includes the student's social and personal histories, conceptions, artifacts, and social interaction.

Abstraction requires theoretical thought, in the sense of Davydov; it may also include elements of empirical thought.

A process of abstraction leads from initial unrefined abstract entities to a novel structure.

The novel structure comes into existence through reorganization of abstract entities and through the establishment of new internal links within the initial entities and of external links among them.

In view of our experience in classrooms and our need for an operational definition, we translated these theoretical principles into the following more applicable definition:

Abstraction is an activity of vertically reorganizing previously constructed mathematics into a new mathematical structure.

We will argue that this definition integrates all five of the epistemological principles, although only three of them appear explicitly in its wording. First, the term activity is to be taken in the sense of activity theory, implying that context needs to be fully taken into account.

Next, the term previously constructed mathematics refers to two points: first, that outcomes of previous processes of abstraction may be used during the present abstraction activity and, second, that the present activity starts from an initial unrefined form of abstraction as posited by Davydov (1972/1990) as well as by Ohlsson and Lehtinen (1997). These two points show the recursive nature of abstraction.

The term reorganizing into a new structure, which implies the establishment of mathematical connections, includes mathematical actions such as (a) formulating a new hypothesis and (b) inventing or reinventing a mathematical generalization, a proof, or a new strategy for solving a problem. These actions require theoretical thought. Although theoretical thought is necessary, empirical thought cannot and should not be excluded. As will be seen in the next section, empirical thought, such as observing similarities and differences, may make an essential contribution to abstraction.

We borrowed the term vertical from the Dutch culture of Realistic Mathematics Education, in which researchers speak about vertical mathematization as opposed to horizontal mathematization (de Lange, 1996; Treffers & Goffree, 1985). Horizontal mathematization refers to relations between nonmathematical situations and mathematical ideas. Vertical mathematization is "an activity in which mathematical elements are put together, structured, organized, developed, etc. into other elements, often in more abstract or formal form than the originals" (Hershkowitz, Parzysz, & van Dormolen, 1996, p. 177). Although these authors did not exclusively associate verticality with abstraction, they did include abstraction, and they emphasized the integrative role of vertical mathematization. It is mainly this integration, which comes about through the establishment of new connections during processes of abstraction, that we wanted to describe by means of the term vertical.[2]

Finally, we return to the important role of the seemingly unimportant word new in the definition. We intentionally used this word to express that as a result of abstraction, participants in the activity perceive something that was previously inaccessible to them. Although the newly perceived feature may consist only of connections between entities (which as isolated entities were previously available), it is exactly the focus on these connections that is often most important for abstraction (Dreyfus, 1991).

The study of abstraction raises an additional challenge. Whichever definition is used, abstraction implies mental activity, which is not observable. Because we want to empirically investigate processes of abstraction, we need to devise a way to make them observable. Put another way, we need to use (theoretical) spectacles that allow us to see processes of abstraction. As has been noted above, we consider processes of abstraction as they occur during students' activities. And it is precisely this view that provides us with the desired spectacles: Activities are composed of actions, and actions are frequently observable. We answer the question "Which actions are relevant for abstraction?" with reference to Pontecorvo and Girardet (1993): Epistemic actions are mental actions by means of which knowledge is used or constructed. Epistemic actions are often revealed in suitable settings. Therefore, settings with rich social interactions are good frameworks for observing epistemic actions. In the next section, we show that we can identify three particular epistemic actions that are constituent of abstraction and provide a strong indication that a process of abstraction is taking place. In conclusion, we consider epistemic actions because they are both characteristic of abstraction and observable. In other words, they provide us with an operational description of processes of abstraction.


This section is devoted to the experimental identification of epistemic actions involved in processes of abstraction. To this end, we analyze an activity during which, we claim, processes of abstraction occurred. Our aim in the analysis is to identify and illustrate specific epistemic actions that are constituent of processes of abstraction. We will also show that these epistemic actions occur nested in a particular manner. The epistemic actions and the manner in which they are nested are experimentally accessible; thus they constitute an operational model of abstraction.

We characterized abstraction as a process that takes place in a complex context that incorporates tasks, tools, and other artifacts; the personal histories of the participants; and the social and physical settings. Such processes may take place during work with any group of students and teachers. For simplicity, we focus in this section on a single student working on a specifically designed task in an interview situation. Our reason for choosing a single student in an interview situation as a first paradigmatic case is that in this setup the epistemic actions are relatively easy to identify. Even for this relatively simple setup, we had to separate, for clarity of presentation, aspects of abstraction that are intimately linked and present them one by one; these aspects thus appear more isolated in our writing than they are in reality. We deal with only one aspect of context, the task on which the student worked. Questions concerning other aspects of context, such as different social settings, will be briefly considered in later sections and taken up in more detail in future articles.

Experimental Setting and Task

The student (BL) who participated in the study was a ninth grader; the experiment took place toward the end of the school year during which she had participated in the functions course described in the previous section. She was asked to work on a task that was similar in spirit to tasks in the course, although the experimental task was somewhat more structured. She had access to a computer with a function-graphing program while she worked. The interviewer's task was to ask BL questions with two complementary aims: (a) to cause BL to explain what she was doing and why and (b) to induce her to reflect on what she was doing and thus possibly progress beyond the point she would have reached without the interviewer. In other words, the interviewer had some didactic intentions, and for this reason we term the interview a teaching interview. BL seemed to be at ease during the teaching interview; she was willing to explain what she meant and clearly expressed her thoughts when she was self-confident. She was ready to make conjectures when she was less sure; in these cases, she usually mentioned that she was not sure and tended to turn to the computer, hoping to get confirmation. She was quite familiar with functional representations and their interpretations; she comfortably handled graphs, tables, and building tables from graphs; she was somewhat less eager to use formulas, except for the simplest ones. She was proficient in the use of the function grapher that was available, and she easily interpreted its products.

The interview task deals with the development over time of three populations in an animal park. The animal populations were given as functions of time. Two functions were linear, one decreasing and presented graphically (zebras) and the other increasing and presented verbally as a story (lions); the third function was quadratic with a maximum and was presented algebraically (eagles). The exact task presented to BL is shown in Figure 1.

The task is structured into four parts that were presented to BL one by one, on four separate worksheets. Each of the first three parts introduces the development of one animal population during the first 10 years of the park's existence. All three parts start with brief periods of familiarization with the newly introduced population in different representations. Questions relate to the size of the populations at various times as well as to different settings to describe the populations and their development. The first part of the teaching interview contains no further questions beyond the introductory ones. In Part II, the student is asked to compare[3] the number of zebras to the number of lions during the 10-year period. At this stage in the student's mathematical education, comparing the values of two functions, for example by first finding the intersection point (e.g., by "walking on the graph"), was standard. In Part III, however, the student is asked to compare the (varying) rate of growth of the eagle population with that of the lion population. The functions are chosen so that the rate of growth of the eagles' population starts at a higher value than the constant rate of growth of the lion population and then decreases to zero. Comparing the rate of change of a linear function with that of a nonlinear function was an unusual task for the student and required the use of new concepts, notably a notion of rate of change as a varying function. The means by which BL dealt with the comparison will be central to our argument. Part III ends with the question of whether there is a point at which all three populations are equal (there is not). Finally, in Part IV, the student is asked to change the development of one of the populations so that such a point exists. This was a rather challenging task for the student and far more open than the previous tasks. Its completion requires problem-solving behavior rather than only activation of learned or new concepts.
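The mathematical structure that drives the comparisons in Parts II and III can be checked with a short computation. The sketch below is an editor's illustration, not part of the original task materials; it assumes the lion and eagle formulas quoted later in the analysis (y = 60x and f(x) = 5x(20 - x)). The zebra population is presented only graphically in the task, so it is omitted here.

```python
# Population formulas as quoted in the analysis of the teaching interview.
def lions(t):
    return 60 * t                # linear: constant rate of growth, 60 per year

def eagles(t):
    return 5 * t * (20 - t)      # quadratic with a maximum at t = 10

def eagle_rate(t):
    return 100 - 10 * t          # rate of change of 5t(20 - t): falls to 0 at t = 10

# Part II-style comparison: the populations are equal at t = 8 (besides the
# trivial t = 0), the point (8, 480) that BL later locates on the graph.
assert lions(8) == eagles(8) == 480

# Part III comparison: the eagles' rate of growth starts above the lions'
# constant rate of 60, equals it at t = 4, and is smaller afterwards.
assert eagle_rate(0) > 60 and eagle_rate(4) == 60 and eagle_rate(5) < 60
```

The assertions confirm why Part III is harder than Part II: the population comparison reduces to one intersection point, whereas the rate comparison requires treating the eagles' rate as a quantity that itself varies over time.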

BL was videotaped during the approximately 40 minutes she worked on the task. The entire teaching interview was transcribed and translated from Hebrew into English. We believe that the translation did not appreciably change the ideas and the spirit of the discourse, possibly with one exception.[3]

After about 20 minutes, during which BL had had the opportunity to familiarize herself with the three populations in several representations and to (mostly correctly) answer some comparison questions about them, the following dialogue took place. The 7-minute excerpt is taken from Part III of the teaching interview and was the main focus of our analysis. The excerpt is presented in its entirety to allow the reader to understand the continuity of events. B refers to the student, I refers to the interviewer, and numbers in braces refer to the time elapsed since the beginning of the teaching interview.

I177: Okay, let me ask you the following: The rate of growth of the eagle population ... {21:00}

B178: Yes?

I179: ... is it bigger than that of the lions or smaller than that of the lions? The lions, I remind you, you can look here [points to lions graph].

B180: Yes.

I181: You don't need reminders!

B182: The growth is bigger.

I183: The rate.

B184: Bigger and then it decreases.

I185: Then what?

B186: It gets to a point where it decreases. Not decreases but less, ...

I187: Let's, ...; there is a lot; the question is very subtle. Let's try, because you said a lot of things, and I want to understand them precisely. If I understood you correctly, you said that in the beginning the rate of growth ...? Once again!

B188: It is not equal; it is not, ...

I189: It is not equal, but which one is bigger?

B190: It changes all the time.

I191: [Nods.] The rate of growth of the eagles changes all the time. {22:00}

B192: [Nods.]

I193: Now, the question was to compare it [rate of growth of eagles' population] to the rate of growth of the lions' [population].

B194: In the beginning, it seems to me that it is bigger.

I195: Okay.

B196: And you see the point [points to the screen] when it meets.

I197: [Nods.]

B198: And then, it seems to me that it becomes smaller.

I199: Now, you say that in the beginning, the rate of growth of the lions is bigger--of the eagles is bigger. How do you see this? From what do you conclude this?

B200: That the graph is closer to the y-axis, the graph of the eagles.

I201: And then, at a later stage, the rate of growth of the eagles becomes smaller than that of the lions, yes?

B202: [Nods.]

I203: Where does this happen?

B204: One can move on the graph ...

I205: [Nods.]

B206: ... and see at which point. Should I? {23:00}

I207: Yes.

B208: [Enters at keyboard and then moves on graph.] Here it is, this, (8, 480).

I209: Now, what happens at this point?

B210: [Hesitates.]

I211: What's its interpretation?

B212: That maybe the quantity of the ... decreases.

I213: The quantity of the eagles or of the lions? About the eagles you are speaking?

B214: Yes. Just a second; I want to enlarge the function. [Works at the keyboard.]

I215: You're enlarging? What do you make larger?

B216: The function.

I217: You're making the function larger? You're making the domain of the function larger, in fact. {24:00}

B218: [Terminates working at the keyboard and obtains the graphs of the same functions in a larger domain.]

I219: Again, now explain it to me. You are doing lots of things; you're thinking a lot, and I am trying to follow your thoughts, but I don't always succeed. You made here the scales larger, so that we can see more, right?

B220: [Nods.]

I221: For what purpose did you do this?

B222: In order to see what happens later and ... [thinks] ... see what happens later.

I223: Okay, what happens later?

B224: They continue and decrease, their quantity.

I225: The eagles?

B226: Yes.

I227: The quantity of eagles, ...

B228: Decreases.

I229: Right. Now, I want to return to the same point. What happens at this point [(8,480)]?

B230: There is the same quantity of eagles and lions, right?

I231: [Nods.]

B232: It's ... it's the lions, right? {25:00}

I233: And now, I go another step backwards, to the question I asked you. I asked you the question concerning the rate of change of the lions as compared to the rate of change of the eagles. I asked where the rate of change is bigger, whether the one of the eagles or the one of the lions, and you answered very nicely that the one of the eagles is not constant.

B234: Yes. I think one has to make a table in order to see the rate of change.

I235: [Nods.] Now, I prepared a table for you so you don't need to work so hard. [Gives her the tables for the eagles and lions.]

Time            0   1    2    3    4    5    6    7    8    9   10
No. of eagles   0  95  180  255  320  375  420  455  480  495  500

Time            0   1    2    3    4    5    6    7    8    9   10
No. of lions    0  60  120  180  240  300  360  420  480  540  600

B236: So, it changes. If there was a calculator here ... [turns to the computer].

I237: Maybe tell us what you plan to compute.

B238: Faster than I can compute?

I239: Yes, no. What do you want?

B240: The rate of change.

I241: How do you compute rate of change?

B242: This minus that [points to two consecutive numbers in the table]. Say, in each year, by how much it increases.

I243: Okay, I am willing to serve as your calculator.

B244: This is 95 [writes 95 between the initial 0 and the 95 of the first year]. Here [between the first and the second year] it is 180 minus 95.

I245: Eighty-five.

B246: [Writes at the appropriate place and points to the next one.]

I247: Seventy-five.

B248: [Writes.]

I249: Sixty-five, 55, 45.

B250: [Writes.]

I251: Now you understand how it continues, right?

B252: It decreases by 10.

I253: What's this say, this 65 here [points to the number 65 written in B250]?

B254: What? Here?

I255: Yes.

B256: That the rate of change decreases all the time.

I257: What's the interpretation of the number 65? Do you say that this is the rate of change?

B258: That each year fewer eagles joined.

I259: Okay, fine. What about the lions?

B260: The lions? They increased. There is ... [looks at the table]. They increase all the time.

I261: [Nods.]

B262: That's it.

I263: Once more, let me ask the question. I want to compare. I want to know whether the rate of change of the eagles is bigger, or that of the lions is bigger.

B264: [Thinks.] It's up to a certain point. [Points to the tables.] Here [in the lions table] all the time it [the rate of change] is 60, and here [in the eagles table] the rate of change is bigger until some point between 4 and 5. One can see this [points to the computer].

I265: How can one see this? {28:00}

B266: It's not easy to see.

I267: It's not easy to see? Why is it not easy to see?

B268: Because, it is here [points to the eagles graph], the rate of change, but it curves all the time.

In the following subsections, we analyze several excerpts from BL's teaching interview in more detail. Each excerpt has been chosen to illustrate one particular epistemic action of abstraction.


We start from the beginning of this episode. BL was asked whether the rate of growth of the eagle population was bigger than that of the lions or smaller than that of the lions {III.b} (from here on, numbers in braces refer to the interview tasks as listed in Figure 1). At that moment, from the story she was told, she had already generated the formula y = 60x for the lion population, conjectured from the given expression f(x) = 5x(20 - x) when the eagle population increases and when it decreases, typed both expressions into the computer, and obtained their graphs on the screen (Figure 2a). When asked to compare the rates, BL first answered intuitively that "the growth [of the eagles] is bigger, ... bigger and then it decreases ...; it gets to a point when it decreases, not decreases but less ..." (B182-B186). The interviewer's probing about what she meant induced three processes that occurred in parallel.

BL explained why she thought the rate of growth of the eagles' population was bigger than that of the lions: "The graph is closer to the y-axis, the graph of the eagles" (B200).

BL turned to the computer for details and reassurance. For example, when asked where the rate of growth of the eagles' population becomes smaller than that of the lions, she said, "One can move on the graph ... and see at which point. Should I? Here it is, this, (8, 480)" (B204-B208; see also Figure 2a). She turned to the computer again later when she took the initiative to zoom out and investigate the long-term behavior of the populations (B214-B228); she was much more familiar with population behavior than with rates of change.

BL's confusion between rate-of-growth of the population and size of the population, first apparent in the intuitive answer (B182-B186), became explicit when, for example, BL identified the intersection point (8,480) and interpreted that at this point "maybe the quantity of the eagles decreases" (B212). We note that the qualifying maybe expresses her uncertainty.

In these processes BL did not progress toward the goal of comparing rates of growth. In her favorite setting (graphs on the computer) she found identifying rates of change difficult--at times she confused rate with quantity--and she found comparing rates of change impossible. The structures she needed to progress were not available to her.

The interviewer decided to find out whether simply repeating questions would help BL. He started with the elementary question (I229) "What happens at this point [(8, 480)]?" which BL immediately answered correctly. From then on, she did not confuse rate and quantity. The interviewer followed up by repeating the central question (I233) "I asked where the rate of change is bigger, whether the one of the eagles or the one of the lions." This repetition led to the important sequence B234 to B268. Although the interviewer did not refer to settings at all, BL proposed, in B234, to use tables of values. Using a table, she could locally compute a rate of change between two successive data points (B242); in the sequel, she used this knowledge to construct, step by step, the more complex notion of rate of change as a function taking on different values at different points in time and being amenable to comparison with the (constant) rate of change of another population. During this process, BL used structural elements at her disposal to build the new, more complex structure of a sequence of changing rates, that is, of rate as a function, the value of which can vary. She built the more complex structure from simpler structures. She clearly needed and used the number sequence 95, 85, 75, ... and her understanding of its interpretation to build up the more complex structure. Once she had built the structure, she was easily and clearly able to make the requested comparison (B264). She was even able to switch back to the graphical setting, in which, though, she still had difficulty pinpointing exactly the criterion of bigger, equal, or smaller rate of change (as she expressed in B268, which concluded this episode).
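BL's table work can be reproduced directly. The following sketch is an editor's illustration under the assumption that the eagle counts come from the formula f(x) = 5x(20 - x) given earlier; the year-on-year differences it produces are exactly the numbers 95, 85, 75, ... that BL wrote into her table.

```python
# Yearly eagle counts for years 0..10, from f(x) = 5x(20 - x).
counts = [5 * t * (20 - t) for t in range(11)]        # 0, 95, 180, ..., 500

# First differences: the local rate of change BL computes in B242-B250
# ("This minus that ... in each year, by how much it increases").
diffs = [b - a for a, b in zip(counts, counts[1:])]   # 95, 85, 75, 65, 55, ...

# "It decreases by 10" (B252): consecutive differences drop by a constant 10.
assert all(a - b == 10 for a, b in zip(diffs, diffs[1:]))

# The lions' rate is a constant 60; the eagles' rate first falls below it in
# the interval between years 4 and 5, matching BL's answer in B264.
first_below = next(i for i, d in enumerate(diffs) if d < 60)
print(first_below)  # prints 4: the difference over the interval [4, 5] is 55
```

Treating `diffs` as a list of values of a single quantity, indexed by time, is precisely the reorganization described above: rate of change as a function whose value varies.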

The above sequence shows clearly how BL reorganized her knowledge in response to the need to deal with varying rates of change. The reorganization was a vertical piecing together of elements that helped BL refine her notion of rate of change (rates of change can vary, and the varying values can be read from a functional table). The process included an integration: BL dealt with the many values of the rate of change as different values of a single quantity (the rate function). As a result, her conception of rate of change became deeper and more structured. Although our data are insufficient to show that this structure is novel to BL, the rather detailed information we have about the day-to-day activities of BL's class shows that, at least within her mathematics classroom, she had no prior opportunity to deal with varying rates of change. We also know that, at this stage, the new structure was rather fragile for BL: She found coordinating the varying rate of change with a transition to the graphical representation difficult (B268), and we have no indication whether she would be able to apply her new knowledge to a different situation. As we will show, further stages of the process of abstraction are needed to consolidate such newly constructed knowledge.

We presented the process during which BL constructed a functional conception of rate of change as paradigmatic. Constructing in this sense is the first and most important of three epistemic actions that together constitute our proposed notion of abstraction. More generally, people may be constructing new methods, strategies, or concepts. Novelty implies construction. When a novel structure "enters the mind," it has to be cognized, or pieced together from components, usually simpler structures. According to the notion of abstraction proposed by Ohlsson and Lehtinen (1997), this constructive process that requires theoretical thought and implies vertical reorganization of knowledge is the central step of abstraction. From an activity-theory point of view, the participants who cognize a mathematical notion in this sense are assembling artifacts to produce a new structure.

We note that BL not only had reorganized her knowledge but also had become able to verbally express her reorganized knowledge. She had developed a language in which to compare rates and to explain her initial statement that the rate of growth of the eagles' population at first was greater than that of the lions' population and later was less. The reorganized structure could be and was used for explanation. Generally, if the construction is an abstraction, learners develop in parallel a language for expressing their new knowledge and for using it to explain or justify.

Observing the construction of structures presents a methodological problem because construction is a relatively rare event. Designing an experiment aimed at observing such events is also difficult. In fact, these events might often occur when students sit alone and think hard about mathematics. When the process is slow and incremental, the methodological problems are compounded. We consider ourselves fortunate to have encountered the above segment in an experiment that was designed to observe the use of abstractions before we knew exactly for which epistemic actions we were looking.


Identifying cases in which a student makes use of a construct or structure that has been constructed earlier is easier than identifying cases of construction. BL used a preconstructed structure when she

linearly interpolated the zebra population between the points (0, 400) and (10, 200) to find (5, 300), "because it decreases in steps of 100" {I. b};

described the development of the eagle population [with the graph on the screen in the domain 0 < x < 15 but not beyond], saying, "And the eagles, at some point, it seems, will die out or get to. ... It's possible to enlarge the function [meaning the domain] and see. ... Up to a point, to the 11th year, I think, or 10th, they grew, and from this year on they started to decrease." {III. e};

wrote "y = 60x" on the worksheet (B118) while the interviewer was still struggling to find the right words to describe the development of the lion population {before II. a};

volunteered information and focused on intersection points of the graphs when asked to compare populations {II. b}. In addition, she explained and interpreted her statements:

I127: Now, what we want is to compare between the zebra and the lion populations.

B128: [Nods.] So it is possible to put them [populations] on the computer and see.

I129: Yes.

B130: [Enters functions; obtains graph with both functions; see Figure 2b.]

I131: [Nods.]

B132: So, the lion population ... What was it [the population] before, zebras?

I133: [Nods.]

B134: So the zebras all the time became less, and the lions all the time grew.

I135: [Nods.]

B136: Should I tell you what else I see?

I137: Yes, maybe you can tell me something else about the comparison.

B138: [Points to the intersection point of the graphs.] In the fifth year, the number of lions and zebras was equal.

I139: [Nods.] How do you know that this was in the fifth year?

B140: Because, according to the ... [points to the number 5 on the horizontal axis and moves up to the point of intersection]. {17:00}

I141: According to the ... point, yes? [Laugh]

B142: The point is (5,300). There were 300 zebras and 300 lions.

Similar sequences occurred when BL was asked to compare the eagle population to the zebra population and when she was asked whether at some point all three populations were equal {III. f}.
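The intersection point BL identified in B138-B142 can be checked numerically. The sketch below assumes the zebra graph is the straight line through (0, 400) and (10, 200), as BL read it ("it decreases in steps of 100" over each five years); the lion model, 60 lions per year, is given in Part II of the worksheet.

```python
# Checking BL's intersection point (5, 300) from B138-B142.
# Assumed zebra model: the line through (0, 400) and (10, 200),
# i.e. z(x) = 400 - 20x; lions (Part II of the worksheet): l(x) = 60x.

def zebras(x):
    return 400 - 20 * x

def lions(x):
    return 60 * x

# Solve 400 - 20x = 60x  ->  80x = 400  ->  x = 5.
x = 400 / (20 + 60)
print(x, zebras(x), lions(x))   # -> 5.0 300.0 300.0
```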

The cases presented in this section are quite different, in terms of the nature of BL's knowledge, from the case of rate of change as a function presented in the previous section. She did not, here, need to construct new knowledge because she recognized structures that she had presumably used previously in other situations and was able to adapt them, at a structural level, to the present situation and make use of them as needed. This is the second epistemic action we associate with abstraction: recognizing.

Recognition of a familiar mathematical structure occurs when a student realizes that the structure is inherent in a given mathematical situation. Just as we used the word cognizing for describing what occurs when constructing, we propose to use the word recognizing here to make the point that this is not the first time the corresponding structure "enters the mind" of the student.

During the process, the recognizing subject may have been the single or most active participant, but conceivably she or he may simply have assisted and observed a process in which others were the main actors. This description may fit a rather passive student in a group working in a problem-solving-oriented investigative classroom and most students in classrooms in which the teacher tends to "chalk and talk." Even externally passive students may be able to recognize and meaningfully use some of what the teacher demonstrated.

Recognizing is often, though not always, at the level of empirical thought. For example, in the excerpt presented above (B134-B142), BL mainly observed, reported her observations, and classified them into categories she had formed at some earlier stage. It will become apparent in a later section that abstraction makes use of recognizing, and thus of empirical thought, but that no abstraction can take place without constructing, which requires theoretical thought.

We also emphasize the subjectivity of the recognizing process. Others (e.g., Chi, Feltovich, & Glaser, 1981; Lowe, 1993) have shown that when experts see deep structure in a problem situation or a diagram, novices often notice only surface structure. Whereas for the experts, this process is a matter of recognizing, for a suitably prepared novice, it might be an opportunity for engaging in a process of constructing a deep structure.


In the last part of the teaching interview, somewhat more elaborate questions were presented to BL. At that time, she had already obtained all three population graphs on a single screen (see Figure 2c). She was asked to change parameters in the function describing the development of one of the populations so as to generate a point in time when all three populations would be of equal size. For example {IV. a}

I287: Now let's assume that our same planners plan a similar park, a new one, but they want to plan it so that there will be a time when the three populations are equal; and they make proposals, and I want you to help them to realize these proposals. {31:00} The first planner proposes the following: He says he wants to change the living conditions of the lions so that their rate of growth changes and that a time will occur when all three populations are equal.

B288: [Nods and looks at the screen.]

I289: How can he do this? Can you help him do this?

B290: I'll try. There is a time here [points to screen] where the two populations of the zebras and the eagles meet, which is (4, 320).

I291: Excuse me, to keep order ... [hands her the worksheet with Part IV of Figure 1].

B292: So there is (4, 320) [writes (4, 320)], so we have to find a point ... [turns to the computer].

I293: [The computer] falls asleep, sometimes. {32:00}

B294: So we have to find an appropriate point, for the lions [takes the lions' table; see I235 in the protocol]. So one can tell, just a second [thinks, turns to computer]. I make them grow by 80 each year then ... [adds the graph of y = 80x].

I295: How did you so quickly enter 80x here? I am somewhat amazed from where this came. Where did it come from?

B296: Because I wanted a point; really, I just tried and it came out; no, but really I thought here.

I297: You tried by chance?

B298: No, there was a basis. {33:00}

I299: What's the basis?

B300: I computed at which point it is because they all the time increase by the same number and at the fourth point it needs to be 320; so 320 divided by 4.

I301: That's how you found the 80?

B302: [Nods.]

Several aspects of this excerpt are of interest. First, we noticed BL's excellent problem-solving behavior. In B290 she immediately focused on the point of intersection of the zebra and eagle populations. She clearly realized that the intersection point (4, 320) is the one through which all three population graphs would have to pass because the lion population is the one whose behavior would be modified. She was thus recognizing the logical deep structure of the problem that was posed to her; she was reorganizing the available information so she could effectively deal with the particular question at hand.
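BL's computation in B290-B302 can be retraced in a few lines. This is our reconstruction, again assuming the zebra population is linear through (0, 400) and (10, 200); the eagle formula f(x) = 5x(20 − x) is given in Part III of the worksheet.

```python
# Retracing B290-B302: the zebra and eagle graphs meet at (4, 320),
# so a lion line y = m*x through that point needs m = 320 / 4 = 80,
# exactly the 80x that BL entered.

def eagles(x):
    return 5 * x * (20 - x)

def zebras(x):
    # Assumed linear zebra model through (0, 400) and (10, 200).
    return 400 - 20 * x

# The intersection point BL read off the screen.
assert eagles(4) == zebras(4) == 320

# Slope of the modified lion line through (4, 320).
m = 320 / 4
print(m)   # -> 80.0
```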

But BL did much more during this short episode. She invoked and combined many structural elements related in a dialectical manner to the question: the logical structure of the problem; the knowledge that equal populations appeared as intersection points in the case of two as well as in the case of three populations; the relation, at least for the lion population, between the linear graph through the origin and the corresponding formula y = mx; the relationship between the slope of the graph and the value of the coefficient m; a (presumably dynamic) view of the family y = mx and the corresponding graphs; and the relationship between the point (4, 320) and the value 80 for the slope.

During this episode much more was involved in BL's actions than her recognizing structural elements; nevertheless, BL was not constructing a new, more elaborate structure out of the given elements; rather, she was using the given elements in different and appropriate combinations to answer the question she was asked, to progress toward one of the goals of the activity in which she was engaged. Combining structural elements to achieve a given goal is the third epistemic action of abstraction, and we call it building-with. When building-with, the student is not enriched with new, more complex structural knowledge; however, she or he uses available structural knowledge to build with it a viable solution to the problem at hand.

Building-with is most likely to occur when students are engaged in achieving a goal such as solving a problem, understanding and explaining a situation, or reflecting on a process. For these purposes, students may appeal to strategies, rules, or theorems. For example, students passing to a new representation to find the solution of a problem on functions are building-with the problem-solving strategy of passing to a new representation. Building-with has a connotation of applying: To achieve the goal, students use structures that they recognize from earlier activity as artifacts for further action. As mentioned above, the recognized structure may be the outcome of other participants' activity. The artifacts used in building-with are tools to adapt to a new situation, to a new instantiation, to a modification of an existing method, or to greater complexity.

Building-with may take place when the teacher reminds students of a resource and the students take up the idea. For example, the teacher may hint to the student to notice how the graph looks. Students may also engage in building-with when they are hypothesizing. In that case, students may appeal to numerical data or to other resources, and the idea or hint stems from these resources. If the idea is some evidence for a new mathematical structure, the building-with is nested in a more global constructing activity. These interrelationships among the three epistemic actions are discussed in the next section.

An important difference between constructing a new structure and building-with is that in constructing, the process of construction and the structure to be created are themselves part of the goal: To reach the goal (solving a problem, justifying a solution, or making a hypothesis), students must use a new mathematical structure. In building-with, by contrast, one can attain one's goal by combining existing structures. The goals students have (or are given) and their personal histories thus strongly influence whether they are building-with or constructing. If they solve a standard problem, they are likely to alternate between recognizing and building-with previously acquired structures. If they solve a nonstandard problem, they might be constructing: finding a new (to them) phenomenon and reflecting on it, on its internal structure, and on its external relationship to things they know already. Constructing is thus not at all independent of recognizing and building-with.

The Nested Relationships Among the Three Epistemic Actions

Up to this point, the three epistemic actions have been described separately. In this section, we discuss relationships among them. We claim that constructing often includes actions of building-with and of recognizing. In other words, constructing is a combination of the three epistemic actions whereas recognizing actions are nested in the other two, and building-with actions are nested in constructing actions. To show these relationships, we scrutinize a chain of actions undertaken with a single overall goal. In other words, we analyze a whole activity. Specifically, we focus on the episode we used to exemplify constructing.

At the very beginning of this episode {III. b}, the interviewer spelled out the main goal for the activity by asking, "The rate of growth of the eagle population ..., is it bigger than that of the lions or smaller than that of the lions?" (I177-I179). The idea units[4] BL expressed initially regarding the rate of growth of the eagle population were (a) "The growth is bigger" (B182); (b) "Bigger and then decreases" (B184); (c) "It gets to a point where it decreases. Not decreases but less ..." (B186); (d) "It is not equal" (B188); and (e) "It changes all the time" (B190). These utterances seemed to be immediate, intuitive answers. They were evidence of BL's recognizing already existing structures that rely on outcomes of her previous learning and serve as artifacts for constructing the growth of the eagle population. At the same time, they constitute her initial, immediate but undeveloped, fuzzy abstract image of the rate of change of the eagle population--Davydov's (1972/1990) initial form of abstraction.

In I193 the interviewer reminded BL of the main goal of this activity, namely to answer the question "Which rate of growth is bigger?" At this stage, she still answered the question intuitively: "In the beginning it seems to me that it [rate of eagles' population growth] is bigger [than that of the lions]" (B194). But soon she began a process of reorganizing the already existing structures to achieve the main goal. This process started with a conscious, analytic focus on the growth rate of the eagle population and terminated with the synthesis by which the notion of varying rate of change emerged.

There were four existing structures she successively recognized and analyzed to approach her goal: (a) the steepness of the graph, (b) an interpretation of the intersection point of two graphs (B196, B204, B206, and B208), (c) the idea that different representatives stand for a function (Schwarz & Dreyfus, 1995), and (d) tables of the varying quantities of the two animal populations (as given in the protocol, I235). BL borrowed the idea unit of the steepness of the graph from her knowledge of increasing linear functions: "That the graph is closer to the y-axis" (B200)--the closer the graph is to the y-axis, the steeper it is and the bigger is its rate of change.

As we have mentioned, BL confused the comparison of the populations' rates of growth (structures she presumably had never encountered before) with a comparison of the populations' quantities (structures with which she was familiar). In other words, she recognized the intersection point of the graphs, a structure of comparison between quantities (rather than rates) and built-with it, even though it was the inappropriate structure. This building-with did not help her progress toward her goal. She was probably aware that she was not "on the right track." She hesitated in B210, continued with "maybe" in B212, and then explicitly mentioned, still in B212, quantities rather than rates.

Part of the structured knowledge BL had from her history of learning about functions includes the fact that a function has many representatives and that the operation of changing scales may be used to produce other representatives. By choosing different scales, she changed the representative of the function (B214-B218) to see more of its graph. This is another case of recognizing a structure and building-with it (in this case a new graph). When the interviewer asked her, "For what purpose did you do this?" (I221), she replied, "In order to see what happens later" (B222), and, in more detail, "They continue and decrease, their quantity" (B224). The fact that she built a wider picture by changing the representatives is evidence that she saw, at least implicitly, the connection between the change of quantities and the change of their rates of growth. It is also a sign of the dialectic nature of BL's thinking during the abstraction activity.

Up to this point, we could see a process of reorganization of artifacts or already existing structures through the action of recognizing and then building-with these structures some additional structure, namely the change of the populations' quantities. But this process was insufficient to answer the main question concerning the comparison between the rates of growth of the two populations. BL was aware of this insufficiency (perhaps with the help of the interviewer in I233) and tried to overcome it by again using the structures she knew and moving to a different functional setting: a table of values of the changing quantities of the two animal populations. This switch of setting finally helped her to add the additional structure of the sequences of the differences of the populations' quantities along successive years.

In other words, she constructed a sequence of values of the rates of change between successive years. We identify here a precursor of the construction of quite a new structure, the notion of the rate of change as a function taking on different values at different points in time. At this stage the synthesis phase of abstraction occurred, and a novel structure was constructed. This constructing action was not sudden but stemmed from the recognition and restructuring of an already existing structure, the table of values of two changing quantities. At the same time, it was driven by BL's awareness of the motive for the activity, namely comparing the rates of growth.


Our analysis of BL's work exhibits a process of abstraction. In this process, structures constructed earlier in the student's learning history are recognized and reorganized into a new structure to fulfill the demands of the activity. The actions undertaken by the student include the three epistemic actions recognizing, building-with, and constructing, not as a chain but in a nested way. In other words, the action of constructing does not merely follow recognition and building-with in a linear fashion but simultaneously requires recognition of and building-with already constructed structures. We call this mechanism dynamic nesting of the epistemic actions.

These relationships among the epistemic actions naturally give rise to a model of abstraction, in which one can identify general mechanisms. The constructing of a novel structure stemming from and based on recognizing and building-with is a first effort in this direction. We take the occurrence of the three epistemic actions, nested during the construction of a new structure in the manner above (or in more complex ways), as a clear indication that a process of abstraction is occurring and is constituted by these dynamically nested epistemic actions.

We assume that BL's construction, as is true of any new construction, was rather fragile at first. We surmise that when recognized in a further activity, such structures will progressively become more consolidated. Because of this consolidation, the student will be able to recognize the structure more easily, just as BL recognized and built-with earlier structures in the present activity. The consolidation of the newly constructed structure will allow the student to recognize this structure in further activities and to build-with it with increasing ease. Hence we hypothesize that tracing the genesis of an abstraction passes through three stages: (a) a need for a new structure, (b) the constructing of a new abstract entity in which recognizing and building-with already existing structures are nested dialectically, and (c) the consolidation of the abstract entity, facilitating one's recognizing it with increased ease and building-with it in further activities.

The general mechanism described here outlines the functioning of our model of abstraction. The core of the model relates mainly to the second stage of the model, namely the constructing of a new abstract entity, and to a lesser extent, to the first one, the need for a new structure. We consider this core of the model to be established and supported by data from BL's teaching interview. Extending the model to include consolidation requires further elaboration and the analysis of more data.

To limit the complexity of this article, we have emphasized the cognitive components of the model more than the contextual ones. But the model is inherently contextual. In the remainder of this article, we briefly treat the multiple facets of context in the model. Although this treatment is predominantly theoretical, we do point out appropriate supportive data from BL's teaching interview.

The structure of the dynamically nested model itself has a contextual nature. The fact that building-with and recognizing are nested in constructing shows that construction is grounded in and concurrent with other epistemic actions. In other words, when a new structure is constructed, it already exists in a rudimentary form, and it develops through other structures that the learner has already constructed. This description of the development of abstraction echoes Davydov's theory (1972/1990), according to which the abstraction grows from an unarticulated form through a dialectical process.

In this description of the growth of abstraction, we emphasize the historical and subjective character of epistemic actions. For example, using rate of change as a function is a constructing action for BL but may be a building-with action for another learner who had constructed this concept earlier. Constructions become artifacts that may be used in further actions of recognizing and building-with; in these further epistemic actions, the use of such artifacts is part of the essence of the epistemic action. Epistemic actions may thus be nested over several activities, and the students contribute to the construction of the context in which further activities will take place. For example, in B290-B302, BL generated a modified development for the lion population by using several artifacts she had presumably constructed earlier; these artifacts include the strategy of passing to a new representation to add information and her knowledge that the values of two functions are equal when their graphs intersect.

The learner's history, embodied in the artifacts on which the learner capitalizes, forms the basis for the genesis of abstraction. However, abstraction will not occur without the need for a new structure. This need may stem from an intrinsic motivation to overcome obstacles such as contradictions, surprises, or uncertainty. Educators may purposely set such obstacles by designing appropriate series of activities. Similarly, common practices and sociomathematical norms that are accepted in the classroom are important in this connection (Yackel & Cobb, 1996). We have general information on sociomathematical norms established in BL's class (Hershkowitz & Schwarz, 1999). For example, intuitive reasons or isolated data alone did not count as acceptable evidence. Also, students' actions were driven by the students' eagerness to construct meaning. This fact is reflected in BL's urge to justify her answer to the question of whether the rate of growth of the eagle population is bigger than that of the lion population. The central step of BL's effort to justify her answer (B264, in response to I263) established conclusively that her initial intuition was correct because "the rate of change [of the eagles' population] is bigger until some point between 4 and 5" and then dips below that of the lions' population. It is this important step that led her to construct rate of change as a function. BL's need for justification was thus crucial for the process of abstraction.

A further component of context in BL's teaching interview was social interaction. Although BL constructed a mathematical structure that was new to her, she was not the only actor. For example, epistemic actions sometimes stemmed from the interviewer's probing questions (in I177-I179, repeated in I233, and yet again in I263). In fact, the interviewer played an essential role in mediating most of the epistemic actions. We point out just a few examples: When BL started to describe the rate of growth of the eagle population and raised a few ideas, the interviewer told her (I187), "You said a lot of things, and I want to understand them precisely." This utterance makes clear the interviewer's expectations concerning some norms for BL's statements: They had to be clear and well explained. Similarly, in I253 he coached her to interpret the changing numerical differences in terms of the rate of change of the population. Also, he expressed agreement or readiness to help when BL adopted a desirable track, for example, by providing her with a ready-made table of values (I235). The role of the interviewer in BL's construction of rate of growth as a function was thus not confined to uncovering BL's mental states. Constructing is mediated by human interaction and by a material tool.

More generally, the epistemic actions involved in processes of abstraction may be distributed among the participants. A teacher may bring to attention a fact or a method, leading to recognition by one participant, followed by building-with by another participant and by the collective construction of a new structure by other participants. In conformity with Vygotsky's (1934/1986) theory of human development, our hypothesis is that the individual who is participating in an activity of abstraction in which epistemic actions are mediated and distributed gradually interiorizes social interactions as well as material manipulations.


In this article, we presented the core of a model for the genesis of abstraction. We showed how the principal components of the model, the three epistemic actions of constructing, recognizing, and building-with, emerged from the analysis of a teaching interview. We also suggested how they are dynamically nested. The epistemic actions may use artifacts that are outcomes of earlier activities, and constructing leads to artifacts available in later epistemic actions. We then showed that the nested structure of the model confers upon this model an inherent contextual nature. In addition, in the teaching interview, many of the epistemic actions were mediated by the interviewer and some, by tools. In activities in which more participants are involved, the model sustains the social distribution of abstraction: Different participants can undertake different epistemic actions.

The extension of the model in this section was based on interpretations of our empirical data, generalizations drawn from them, and hypotheses concerning the genesis of abstraction. These generalizations and hypotheses are theoretically grounded. However, more research is needed to validate and refine the full model. For example, the case of distributed abstraction among interacting peers, some being more active than others, raises theoretical and methodological issues about abstraction as a collective or an individual process (Hershkowitz, 1999). Another important issue concerns the mediation of (computer) tools in abstraction. Finally, to provide an adequate experimental basis for the nesting of epistemic actions over several activities, sequences of several activities need to be investigated. The span of activities is to be oriented not only backward but also forward in time: Apprehending a construction means observing not only from what it stemmed and which artifacts are being used but also how the newly created structures are used as artifacts in further activities. We hypothesize that the traces of a construction in later activities are intimately connected to the consolidation or the absence of consolidation following a constructing action.

The empirical study we presented in this article and other studies we are currently conducting lead us to anticipate that the model will facilitate further research and that the research will guide the development of the model into a tool suitable to describe, in a comprehensive manner, processes of mathematical abstraction.

Animal Park

Ten years after its opening, the board of the Belangoo animal park ordered a survey of the development of various animal populations in order to be better able to plan for the future.

I. The variation of the park's zebra population is described in the following graph:

a) How many zebras were there in the park when it first opened?

b) How many zebras were there in the park after three years?

c) Could you describe the variation of the zebra population over the years in a different manner?

d) Do you know still other ways to describe the variation?

II. When the park opened, there were no lions. In the course of the first year, 60 lions were brought in, and then the lion population continued to grow at the constant rate of 60 lions per year.

a) How many lions were there in the park after three and a half years?

b) Compare the number of lions to the number of zebras during the first ten years.

c) What can the planners say about the two populations in the future?

III. The eagle population in the park varied according to the expression f(x) = 5x(20 - x) (x denotes the time, in years).

a) Do you think the living conditions for the eagles in the park are good?

b) Is the rate of growth of the eagle population larger or smaller than that of the lion population?

c) Is your conclusion valid for the entire first ten-year period?

d) Compare the number of eagles to the number of zebras during the first ten years.

e) What can the planners say about the two populations in the future?

f) Is there any time at which the three populations (zebras, lions, and eagles) were equal?

IV. A park is being planned that will be similar in all aspects to the existing one, except that the planners want the three populations to be exactly equal at some point in time.

a) The first planner proposed to change the living conditions of the lions so that the rate of growth of this population will change. What exactly did he propose?

b) The second planner proposed to achieve the aim by means of a change in the number of zebras present at the time the park is opened. What exactly did he propose?

We thank Ruhama Even, Anna Sfard, and several anonymous reviewers for their thoughtful comments on an earlier version of this article. They helped us to theoretically situate our research and improve its presentation.

1 Davydov, writing before the activity theorists, used the term object to designate what activity theorists would probably have called artifacts.

2 Note that van Oers (1998) also used the term vertical in connection with processes of abstraction (although in a slightly different theoretical framework) to express the "added mathematical value" that is gained in processes of abstraction.

3 The teaching interview was conducted in Hebrew. In Hebrew, there is no distinction between the words for compare and equate; as a result, some students interpreted some questions as relating to equate when our intention was to ask about the more general compare.

4 The term idea units is employed in the analysis of written or oral explanations to designate a minimal grain size pertaining to understanding (e.g., Chi, 1997; Mayer, 1982).


Source: Dr. Dobb's Journal: Software Tools for the Professional Programmer, Mar2001, Vol. 26 Issue 3, p121, 3p 

Author(s): Swaine, Michael

The length of the representation of a number in Roman numerals increases fractally with the size of the number.

That's a factoid from one of the mathematicians I write about this month. Here's another: The idea of using the Greek letter pi for the ratio of the circumference of a circle to its diameter was originally suggested by some character named William Jones, who thought of it as a shorthand for the word perimeter.

One more: During the writing of Principia Mathematica, Bertrand Russell could be seen pushing wheelbarrows full of specially designed lead type to the Cambridge University Press--if true, an early example of an author's technological needs outstripping a publisher's capabilities.
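The first factoid, about the length of Roman-numeral representations, is easy to poke at with a few lines of code. This is a minimal sketch; standard subtractive notation (IV, IX, XL, and so on) is assumed, which keeps the table valid up to 3999:

```python
# Sketch: compute a number's Roman-numeral representation and its length.
# Standard subtractive notation; values above 3999 are outside this table.

PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def roman(n: int) -> str:
    out = []
    for value, symbol in PAIRS:
        while n >= value:          # greedily take the largest value that fits
            out.append(symbol)
            n -= value
    return "".join(out)

if __name__ == "__main__":
    for n in (8, 38, 88, 388, 888, 1888, 3888):
        print(n, roman(n), len(roman(n)))
```

Printing the lengths for numbers like 888 and 3888 shows the jagged, self-similar growth the factoid alludes to: length depends on the digits, not just the magnitude.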

Stephen Wolfram, the mathematical and scientific prodigy who wrote the program Mathematica to help him with his research and founded Wolfram Research to sell it, and who not so long ago crossed the threshold into his 40s, is the source of those factoids. The Russell factoid is of some personal interest to Wolfram, who is hoping to publish his own Principia Mathematica this year, a book whose title A New Kind of Science is no overstatement of its ambitious scope. Like Russell, Wolfram is planning to push current printing capabilities with this massive tome, using cutting-edge printing technology and high-quality paper to render the gigabyte plus of graphics in the 992-page book with the highest resolution possible.

Massive and intellectually weighty this opus is likely to be, although perhaps not as physically hefty as the manual for Mathematica.

Mathematica, Java, and Linux

Four point one, the latest version of Wolfram Research's Mathematica, recently rolled off the UPS truck, and I dutifully hand-carted it to my office and installed it. Okay, it doesn't really take a hand cart to move the documentation, but at roughly 1500 pages, The Mathematica Book is pretty hefty. And it still bears a single byline--Stephen Wolfram. This Wolfram guy is one of the reasons I always load up the latest version of Mathematica and play with it for a while before putting it away until the next version rolls off the UPS truck. Stephen Wolfram is a genius. He published his first scientific paper at age 15; got his Ph.D. in theoretical physics from Caltech at 20; became at 21 or 22 the youngest ever recipient of a MacArthur Prize Fellowship; and in the 1980s did foundational research in cellular automata, complexity theory, and artificial life.

Then, at the top of his game, he decided that the tools available to do mathematics were not adequate to his needs. So he took some time out to write his own mathematical software, calling it "Mathematica." One is reminded of Ted Nelson inventing hypertext because he needed a way to organize his index cards. But while Ted was not programmer enough to implement his concept himself, and while the goal that got sidetracked while Ted pursued Xanadu was to become the next Orson Welles, Wolfram wrote his own code, and the goal that he sidelined was, the best evidence suggests, to become the next Isaac Newton.

The product Mathematica led to the company Wolfram Research, and the product and the company occupied a lot of Wolfram's time for the next five years. Soon, though, he was back to the research, running the company by day and doing science by night.

More about his moonlighting work shortly, but the day job produced some impressive results. The company is actually a group of four companies with over 300 employees and an enviable record of profitability. The product that feeds the profits and the research and development, Mathematica, is an impressive piece of work, clearly reflecting Wolfram's no-compromises approach. When users report some limitation of Mathematica, the development team sees it as an opportunity to generalize solutions, to dig deeper into the math, rather than to come up with a quick fix. That's not just the corporate line; you can tell from using the product that those are the priorities at Wolfram Research.

The product itself is the other reason I feel I have to load it up and play with it for a while every time the UPS truck delivers a new version. Among other things, Mathematica is a classroom in programming paradigms. You can program with Mathematica in a C-like procedural fashion, with the usual assignments and loops and such, or you can treat it as a rule-based language like Prolog, or as a string-based language like Snobol, or as a pure functional language--pretty much everything you type in Mathematica is a function and returns a value. You can write programs that look something like Lisp. You can do object-oriented programming in Mathematica, to an extent. Certainly, you can do things like overloading the standard addition operator to accommodate new kinds of addition on new mathematical objects that you create. (Or discover?)

The latest version extends the list of supported platforms with two more Linux implementations: LinuxPPC and AlphaLinux. There are speed improvements and improved algorithms and specific new capabilities in various components of the program. Then there's the Java integration. J/Link 1.1 lets Mathematica call Java functions and lets any Java program control the Mathematica kernel.
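The operator-overloading idea is easy to illustrate in Python, standing in here for Mathematica's own language; the `Interval` type is an invented example, not anything shipped with either product:

```python
# Sketch: overloading the standard addition operator for a new
# mathematical object. "Interval" is a hypothetical example type.

class Interval:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

    def __add__(self, other: "Interval") -> "Interval":
        # Interval addition: [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __repr__(self) -> str:
        return f"Interval({self.lo}, {self.hi})"

print(Interval(1, 2) + Interval(10, 20))  # Interval(11, 22)
```

The ordinary `+` sign now does the right thing for the new object, which is exactly the kind of extension Mathematica encourages with its `Plus` function.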

And there's the increased support for MathML, the W3C standard for displaying and reusing mathematics on the Web. Mathematica is better at dealing with MathML; for example, you can grab MathML code from a web browser, paste it into Mathematica for evaluation, and copy the result back to a web page as MathML.

Math on the Web

Pulling together Java and web publishing of mathematics, Wolfram Research has come up with webMathematica, a server-based technology for supporting math on the Web, built on top of Java servlets. Basically, it's a collection of tools that let you embed Mathematica commands in HTML code. When the page is requested from the server, the embedded commands are routed to Mathematica for processing. No special technology on the browser side is required. One intriguing application is courseware: You could develop some pretty nifty online courses if you could have all the calculational, programming, typesetting, and display capabilities of Mathematica serverside.

Outside the halls of Wolfram Research, one researcher is doing work that could change the "feel" part of the Mathematica look and feel rather dramatically.

The Mathematica manual says, "You absolutely must know how to type your input to Mathematica." And so you must, but maybe not forever. Mathematicians since Euclid have done math by hand, and even the advent of the digital computer, Mathematica, and the computational solution to the four-color problem haven't cured mathematicians of their pigheaded preference for handwriting their work on blackboards or in notebooks (or on napkins or the backs of their hands when no better medium is available). They'd handwrite on graphics tablets if it would do them any good, but that would require software that knew how to interpret that input, and such software doesn't exist--yet. Soon, maybe, it will, and mathematicians will be able to handwrite their input to Mathematica.

Masakazu Suzuki at Kyushu University is working on a system to edit mathematical expressions from a handwriting interface. The input and output formats he is initially trying to support are LaTeX and MathML. "Mathematical expressions written on the display of the system can be evaluated, factorized, expanded, or presented by graphs, etc., by Mathematica linked to the system by the protocol MathLink," Suzuki said at a recent MathML conference. The handwriting recognition component is good enough that less than an hour's training lets a user write complex expressions at a quite reasonable speed, including the time for the system to interpret the input and the user to enter any corrections. But that's the work of an independent developer. What's Stephen Wolfram himself been up to in his spare time? Ah, that would be A New Kind of Science.

The Man Who Would Be Newton

"Almost all the science that's been done for the past three hundred or so years has been based in the end on the idea that things in our universe somehow follow rules that can be represented by traditional mathematical equations. The basic idea that underlies A New Kind of Science is that that's much too restrictive, and that in fact one should consider the vastly more general kinds of rules that can be embodied, for example, in computer programs."

That's Stephen Wolfram, explaining the motivation behind A New Kind of Science. The computer programs that got Wolfram thinking along these lines back before he took time out to write Mathematica are simple indeed--cellular automata, simple systems like the game of Life, that start with some simple starting state and recursively apply some simple rule. The simplicity of such programs is what caught Wolfram's attention, or rather the fact that from extremely simple initial conditions and rules, enormous complexity can be derived.
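A minimal sketch of such a program is the one-dimensional cellular automaton Wolfram numbered Rule 30; the grid size and starting state below are arbitrary choices:

```python
# Sketch: an elementary cellular automaton (Wolfram's Rule 30).
# Each cell's next state is looked up from its 3-cell neighborhood
# in the bits of the rule number -- a simple rule, complex output.

def step(cells, rule=30):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((rule >> index) & 1)              # the rule's bit for it
    return out

cells = [0] * 31
cells[15] = 1              # a single "on" cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

From one "on" cell, Rule 30 produces an irregular, seemingly random triangle of activity, which is precisely the complexity-from-simplicity that caught Wolfram's attention.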

Others have been intrigued by this complexity-from-simplicity, like the researchers in complexity theory and chaos theory and explorers of fractal geometry in nature and those who tend their flocks of a-life critters. Wolfram's view, characteristically, is that they are all just chipping away at the edges of something much bigger and more important. Something that he plumbed to a greater depth, and that has great promise for reshaping our approach to doing science. At least he thinks it does, but he's not quite there yet. Sometime this year, though, he hopes to open our minds with his big book.

The Man Who Loved Only Numbers

This forthcoming book of Wolfram's raises intriguing questions. Is it still possible for one person to make a significant contribution to such a wide range of scientific disciplines? Is it possible to discover something as fundamental as Wolfram hints his new approach to science is? And has this child prodigy still got it now that he's past 40?

As evidence that child prodigies can stay prodigies, at least if they stay childlike, I point to the case of Paul Erdos. I recently revisited a wonderful book about this mathematical genius, Paul Hoffman's The Man Who Loved Only Numbers (Hyperion, 1998; ISBN 0-7868-8406-1). I recommend it highly. The Erdos story is well known among mathematicians, less so among computer scientists and engineers and physicists, but not completely unknown among the general educated book-reading public.

Born in Budapest, Hungary, in 1913, the son of two mathematics teachers, a child prodigy who could multiply three-digit numbers in his head at the age of three, Erdos lived for mathematics and only mathematics. He never married or had children, had no hobbies and no possessions to speak of except his notebooks. He didn't even have a real job or a home. He relied on his mother for many things--he first buttered bread for himself at the age of 21, and that may also have been the last time. For years, he and his mother traveled together. When she died, others took over the mundane matters that he couldn't be bothered with--like driving, cooking, arranging plane flights, getting him to the plane on time, providing a place to stay when he got to his destination, getting him to that place to stay.

The flights and the places to stay were crucial because from 1934 on, Erdos really had no fixed home. His life had become, and remained until his death in 1996, a series of guest lectures and visiting-scholar appointments all over the world. For all this traveling that he did, Erdos lacked any of the skills that other professional travelers acquire out of necessity. He arranged to have others make all his arrangements, he wangled invitations to stay in the homes of the mathematicians who made the arrangements, he got them to pay for his meals. What money he did have from speaking engagements and awards he tended to give away, as prizes for problems he set, as "loans" to promising young mathematicians, to charities, or, often, to beggars on the street.

One could say that Erdos never grew up. And in a way, even his professional career supports this view. In mathematics, it is the young prodigies who solve problems. As mathematicians mature, they either burn out or become system builders, leaving the mere puzzles behind for grander schemes of thought.

Not Erdos. Ignoring all social conventions, Erdos also ignored this intellectual convention. Throughout his life, Erdos remained mathematics' most formidable formulator--and solver--of problems. During the last 25 years of his life, he worked on mathematics 19 hours a day. How does an 80-year-old man keep up such a pace? The caffeine and amphetamines were a factor, I'm sure, but there was more to it than that. I think it had to do with staying childlike. Erdos was the premier mathematical problem solver of our time. He wrote or coauthored 1475 academic papers, all of them substantial. He collaborated with more mathematicians than any other mathematician in history, 485 coauthors; and this led to an interesting tradition among mathematicians--computing one's Erdos number. Those 485 have an Erdos number of 1, other mathematicians who have collaborated with them have an Erdos number of 2, and so forth.
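The Erdos-number tradition is really just graph theory: an Erdos number is the shortest-path distance from Erdos in the coauthorship graph, computable with a breadth-first search. The toy graph below is invented for illustration:

```python
# Sketch: Erdos numbers as shortest-path distances in a coauthorship
# graph, found by breadth-first search. The graph here is made up.

from collections import deque

def erdos_numbers(coauthors, root="Erdos"):
    dist = {root: 0}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for other in coauthors.get(person, ()):
            if other not in dist:              # first visit = shortest path
                dist[other] = dist[person] + 1
                queue.append(other)
    return dist

graph = {
    "Erdos": ["A", "B"],
    "A": ["Erdos", "C"],
    "B": ["Erdos"],
    "C": ["A", "D"],
    "D": ["C"],
}
print(erdos_numbers(graph))
```

In this made-up graph, A and B have Erdos number 1, C has 2, and D has 3, mirroring how the real numbers propagate through the 485 direct coauthors.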

I don't know what Stephen Wolfram's Erdos number may be. But I do know that Wolfram's plan to remake all science has an unworldly, childlike naivete about it, and that could be its strength. If Wolfram, like Erdos, has managed the trick of keeping his mind somehow childlike, maybe this Newton-like scheme of remaking science has a chance. Or maybe not. Anyway, I've got my order in at Amazon.com.

eBusiness Essentials

And now for something completely different--a clever bit from those wacky British comedians, the boys of BT, or British Telecom. While BT richly deserves to be laughed at for claiming to own hyperlinking, and even filing a lawsuit against an ISP to try to enforce its claim, the company is not all patent lawyers, and some useful work does come out of the company (I mean besides the invention of hyperlinking). As evidence of this, I mention eBusiness Essentials: Technology and Network Requirements for the Electronic Marketplace, by Mark Norris, Steve West, and Kevin Gaughan (John Wiley & Sons Ltd. BT Series, Chichester, 2000; ISBN 0-471-85203-1).

The book is written for "planners, engineers, managers, and developers" and it seems to hit the right technical level for that mix of readers most of the time. The range of material it covers is pretty broad: the different models of the electronic marketplace, many different kinds of catalogs and how to implement them, payment systems, trust and security, B2B and the need to move beyond traditional EDI in the supply chain, integration of diverse elements of an e-business, and an overview of underlying technologies and standards.

The overviews are helpful to one who is new to some or all of the subjects. I had just finished reading the book when my partner Nancy called and said she was filling out a form and needed to know what our EDI strategy was. Before reading the book, I could have told her that EDI stood for Electronic Data Interchange and that was about it. After reading the book, I could chat knowledgeably about different EDI strategies, and tell her that we don't have one, and why we don't want one.

The book isn't a cookbook on setting up an e-business; the cookbook approach necessarily constrains your options, and that's not what this book is about. I was impressed with the variety of approaches presented and pleased to see some very low-tech implementations presented as serious options. For a retail business that already has systems in place for most of the operations of an e-commerce site, it may make sense to piggyback the e-commerce site onto the existing system initially with little or no effort to take advantage of the efficiencies of e-commerce, even to the point of keying credit-card numbers into a card processor manually. Converting a working manual system into an electronic one may be easier than building the electronic one from scratch.

The two case studies worked through in the book are from the other end of the spectrum--monster e-commerce implementations from Federal Express and Cisco. It would be daunting to delineate the e-commerce strategy of either company, and the book scarcely tries; the case studies chapter is pretty skimpy and unenlightening.

Overall, though, the book is accurate, broad in its coverage, and willing to take a stand when appropriate. I think it's worth a look by anyone starting an e-business.

Of course, the most important piece of advice for anyone starting an e-business is never, ever refer to it as a dot-com. But you already knew that.


Source: Scientific American, Feb2001, Vol. 284 Issue 2, p68, 8p 

Author(s): Tegmark, Max; Wheeler, John Archibald

As quantum theory celebrates its 100th birthday, spectacular successes are mixed with persistent puzzles

"In a few years, all the great physical constants will have been approximately estimated, and...the only occupation which will then be left to the men of science will be to carry these measurements to another place of decimals." As we enter the 21st century amid much brouhaha about past achievements, this sentiment may sound familiar. Yet the quote is from James Clerk Maxwell and dates from his 1871 University of Cambridge inaugural lecture expressing the mood prevalent at the time (albeit a mood he disagreed with). Three decades later, on December 14, 1900, Max Planck announced his formula for the blackbody spectrum, the first shot of the quantum revolution.

This article reviews the first 100 years of quantum mechanics, with particular focus on its mysterious side, culminating in the ongoing debate about its consequences for issues ranging from quantum computation to consciousness, parallel universes and the very nature of physical reality. We virtually ignore the astonishing range of scientific and practical applications that quantum mechanics undergirds: today an estimated 30 percent of the U.S. gross national product is based on inventions made possible by quantum mechanics, from semiconductors in computer chips to lasers in compact-disc players, magnetic resonance imaging in hospitals, and much more.

In 1871 scientists had good reason for their optimism. Classical mechanics and electrodynamics had powered the industrial revolution, and it appeared as though their basic equations could describe essentially all physical systems. But a few annoying details tarnished this picture. For example, the calculated spectrum of light emitted by a glowing hot object did not come out right. In fact, the classical prediction was called the ultraviolet catastrophe, according to which intense ultraviolet radiation and x-rays should blind you when you look at the heating element on a stove.

The Hydrogen Disaster

In his 1900 paper Planck succeeded in deriving the correct spectrum. His derivation, however, involved an assumption so bizarre that he distanced himself from it for many years afterward: that energy was emitted only in certain finite chunks, or "quanta." Yet this strange assumption proved extremely successful. In 1905 Albert Einstein took the idea one step further. By assuming that radiation could transport energy only in such lumps, or "photons," he explained the photoelectric effect, which is related to the processes used in present-day solar cells and the image sensors used in digital cameras.
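The contrast with the classical prediction can be made concrete. Planck's formula for the blackbody spectral radiance agrees with the divergent classical (Rayleigh-Jeans) result only at low frequencies:

$$
B_\nu(T) \;=\; \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T}-1}
\;\;\longrightarrow\;\; \frac{2\nu^2 k_B T}{c^2}
\quad\text{when } h\nu \ll k_B T,
$$

while at high frequencies the exponential factor suppresses the radiation, taming the ultraviolet catastrophe.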

Physics faced another great embarrassment in 1911. Ernest Rutherford had convincingly argued that atoms consist of electrons orbiting a positively charged nucleus, much like a miniature solar system. Electromagnetic theory, though, predicted that orbiting electrons would continuously radiate away their energy and spiral into the nucleus in about a trillionth of a second. Of course, hydrogen atoms were known to be eminently stable. Indeed, this discrepancy was the worst quantitative failure in the history of physics--underpredicting the lifetime of hydrogen by some 40 orders of magnitude.

In 1913 Niels Bohr, who had come to the University of Manchester in England to work with Rutherford, provided an explanation that again used quanta. He postulated that the electrons' angular momentum came only in specific amounts, which would confine them to a discrete set of orbits. The electrons could radiate energy only by jumping from one such orbit to a lower one and sending off an individual photon. Because an electron in the innermost orbit had no orbits with less energy to jump to, it formed a stable atom. Bohr's theory also explained many of hydrogen's spectral lines--the specific frequencies of light emitted by excited atoms. It worked for the helium atom as well, but only if the atom was deprived of one of its two electrons. Back in Copenhagen, Bohr got a letter from Rutherford telling him he had to publish his results. Bohr wrote back that nobody would believe him unless he explained the spectra of all the elements. Rutherford replied: Bohr, you explain hydrogen and you explain helium, and everyone will believe all the rest.

Despite the early successes of the quantum idea, physicists still did not know what to make of its strange and seemingly ad hoc rules. There appeared to be no guiding principle. In 1923 Louis de Broglie proposed an answer in his doctoral thesis: electrons and other particles act like standing waves. Such waves, like vibrations of a guitar string, can occur only with certain discrete (quantized) frequencies. The idea was so unusual that the examining committee went outside its circle for advice. Einstein, when queried, gave a favorable opinion, and the thesis was accepted.

In November 1925 Erwin Schrödinger gave a seminar on de Broglie's work in Zurich. When he was finished, Peter Debye asked, You speak about waves, but where is the wave equation? Schrödinger went on to produce his equation, the master key for so much of modern physics. An equivalent formulation using matrices was provided by Max Born, Pascual Jordan and Werner Heisenberg around the same time. With this powerful mathematical underpinning, quantum theory made explosive progress. Within a few years, physicists had explained a host of measurements, including spectra of more complicated atoms and properties of chemical reactions.

But what did it all mean? What was this quantity, the "wave function," that Schrödinger's equation described? This central puzzle of quantum mechanics remains a potent and controversial issue to this day.

Born had the insight that the wave function should be interpreted in terms of probabilities. When experimenters measure the location of an electron, the probability of finding it in each region depends on the magnitude of its wave function there. This interpretation suggested that a fundamental randomness was built into the laws of nature. Einstein was deeply unhappy with this conclusion and expressed his preference for a deterministic universe with the oft-quoted remark, "I can't believe that God plays dice."
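Born's rule can be sketched numerically: the probability of each region is the squared magnitude of the wave function's complex amplitude there, normalized so the probabilities sum to one. The three amplitudes below are invented for illustration:

```python
# Sketch of the Born rule: probability of region i is
# |psi_i|^2 / sum_j |psi_j|^2. These amplitudes are made up.

amplitudes = [1 + 1j, 2 + 0j, 0 + 1j]   # wave-function values in 3 regions

norm = sum(abs(a) ** 2 for a in amplitudes)       # |.|^2 totals: 2 + 4 + 1
probs = [abs(a) ** 2 / norm for a in amplitudes]  # roughly [2/7, 4/7, 1/7]

print(probs)
print(sum(probs))   # approximately 1
```

The middle region, with the largest amplitude, is where the electron is most likely to be found, even though the wave function itself assigns no definite location.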

Curious Cats and Quantum Cards

Schrödinger was also uneasy. Wave functions could describe combinations of different states, so-called superpositions. For example, an electron could be in a superposition of several different locations. Schrödinger pointed out that if microscopic objects such as atoms could be in strange superpositions, so could macroscopic objects, because they are made of atoms. As a baroque example, he described the now well-known thought experiment in which a nasty contraption kills a cat if a radioactive atom decays. Because the radioactive atom enters a superposition of decayed and not decayed, it produces a cat that is both dead and alive in superposition.

The illustration on the opposite page shows a simpler variant of this thought experiment. You take a card with a perfectly sharp edge and balance it on its edge on a table. According to classical physics, it will in principle stay balanced forever. According to the Schrödinger equation, the card will fall down in a few seconds even if you do the best possible job of balancing it, and it will fall down in both directions--to the left and the right--in superposition.

If you could perform this idealized thought experiment with an actual card, you would undoubtedly find that classical physics is wrong and that the card falls down. But you would always see it fall down to the left or to the right, seemingly at random, never to the left and to the right simultaneously, as the Schrödinger equation might have you believe. This seeming contradiction goes to the very heart of one of the original and enduring mysteries of quantum mechanics.

The Copenhagen interpretation of quantum mechanics, which grew from discussions between Bohr and Heisenberg in the late 1920s, addresses the mystery by asserting that observations, or measurements, are special. So long as the balanced card is unobserved, its wave function evolves by obeying the Schrödinger equation--a continuous and smooth evolution that is called "unitary" in mathematics and has several very attractive properties. Unitary evolution produces the superposition in which the card has fallen down both to the left and to the right. The act of observing the card, however, triggers an abrupt change in its wave function, commonly called a collapse: the observer sees the card in one definite classical state (face up or face down), and from then onward only that part of the wave function survives. Nature supposedly selects one state at random, with the probabilities determined by the wave function.

The Copenhagen interpretation provided a strikingly successful recipe for doing calculations that accurately described the outcomes of experiments, but the suspicion lingered that some equation ought to describe when and how this collapse occurred. Many physicists took this lack of an equation to mean that something was intrinsically wrong with quantum mechanics and that it would soon be replaced by a more fundamental theory that would provide such an equation. So rather than dwell on ontological implications of the equations, most physicists forged ahead to work out their many exciting applications and to tackle pressing unsolved problems of nuclear physics.

That pragmatic approach proved stunningly successful. Quantum mechanics was instrumental in predicting antimatter, understanding radioactivity (leading to nuclear power), accounting for the behavior of materials such as semiconductors, explaining superconductivity, and describing interactions such as those between light and matter (leading to the invention of the laser) and of radio waves and nuclei (leading to magnetic resonance imaging). Many successes of quantum mechanics involve its extension, quantum field theory, which forms the foundations of elementary particle physics all the way to the present-day experimental frontiers of neutrino oscillations and the search for the Higgs particle and supersymmetry.

Many Worlds

By the 1950s this ongoing parade of successes had made it abundantly clear that quantum theory was far more than a short-lived temporary fix. And so, in the mid-1950s, a Princeton University student named Hugh Everett III decided to revisit the collapse postulate in his doctoral thesis. Everett pushed the quantum idea to its extreme by asking the following question: What if the time evolution of the entire universe is always unitary? After all, if quantum mechanics suffices to describe the universe, then the present state of the universe is described by a wave function (an extraordinarily complicated one). In Everett's scenario, that wave function would always evolve in a deterministic way, leaving no room for mysterious nonunitary collapse or God playing dice.

Instead of being collapsed by measurements, microscopic superpositions would rapidly get amplified into byzantine macroscopic superpositions. Our quantum card would really be in two places at once. Moreover, a person looking at the card would enter a superposition of two different mental states, each perceiving one of the two outcomes. If you had bet money on the queen's landing face up, you would end up in a superposition of smiling and frowning. Everett's brilliant insight was that the observers in such a deterministic but schizophrenic quantum world could perceive the plain old reality that we are familiar with. Most important, they could perceive an apparent randomness obeying the correct probability rules [see illustration above].

Everett's viewpoint, formally called the relative-state formulation, became popularly known as the many-worlds interpretation of quantum mechanics, because each component of one's superposition perceives its own world. This viewpoint simplifies the underlying theory by removing the collapse postulate. But the price it pays for this simplicity is the conclusion that these parallel perceptions of reality are all equally real.

Everett's work was largely disregarded for about two decades. Many physicists still hoped that a deeper theory would be discovered, showing that the world was in some sense classical after all, free from oddities like big objects being in two places at once. But such hopes were shattered by a series of new experiments.

Could the seeming quantum randomness be replaced by some kind of unknown quantity carried about inside particles--so-called hidden variables? CERN theorist John S. Bell showed that in this case quantities that could be measured in certain difficult experiments would inevitably disagree with the standard quantum predictions. After many years, technology allowed researchers to conduct the experiments and to eliminate hidden variables as a possibility.

A "delayed choice" experiment proposed by one of us (Wheeler) in 1978 was successfully carried out in 1984, showing another quantum feature of the world that defies classical descriptions: not only can a photon be in two places at once, but experimenters can choose, after the fact, whether the photon was in both places or just one.

The simple double-slit interference experiment, in which light or electrons pass through two slits and produce an interference pattern, hailed by Richard Feynman as the mother of all quantum effects, was successfully repeated for ever larger objects: atoms, small molecules and, most recently, 60-atom buckyballs. After this last feat, Anton Zeilinger's group in Vienna even started discussing conducting the experiment with a virus. In short, the experimental verdict is in: the weirdness of the quantum world is real, whether we like it or not.

Quantum Censorship--Decoherence

The experimental progress of the past few decades was paralleled by great advances in theoretical understanding. Everett's work had left two crucial questions unanswered. First, if the world actually contains bizarre macroscopic superpositions, why don't we perceive them?

The answer came in 1970 with a seminal paper by H. Dieter Zeh of the University of Heidelberg, who showed that the Schrödinger equation itself gives rise to a type of censorship. This effect became known as decoherence, because an ideal pristine superposition is said to be coherent. Decoherence was worked out in great detail by Los Alamos scientist Wojciech H. Zurek, Zeh and others over the following decades. They found that coherent superpositions persist only as long as they remain secret from the rest of the world. Our fallen quantum card is constantly bumped by snooping air molecules and photons, which thereby find out whether it has fallen to the left or to the right, destroying ("decohering") the superposition and making it unobservable [see box on preceding page].

It is almost as if the environment acts as an observer, collapsing the wave function. Suppose that your friend looked at the card without telling you the outcome. According to the Copenhagen interpretation, her measurement collapses the superposition into a definite outcome, and your best description of the card changes from a quantum superposition to a classical representation of your ignorance of what she saw. Loosely speaking, decoherence calculations show that you do not need a human observer (or explicit wave-function collapse) to get much the same effect--even an air molecule bouncing off the fallen card will suffice. That tiny interaction rapidly changes the superposition to a classical situation for all practical purposes.

Decoherence explains why we do not routinely see quantum superpositions in the world around us. It is not because quantum mechanics intrinsically stops working for objects larger than some magic size. Instead macroscopic objects such as cats and cards are almost impossible to keep isolated to the extent needed to prevent decoherence. Microscopic objects, in contrast, are more easily isolated from their surroundings so that they retain their quantum behavior.

The second unanswered question in the Everett picture was more subtle but equally important: What mechanism picks out the classical states--face up and face down for our card--as special? Considered as abstract quantum states, there is nothing special about these states as compared to the innumerable possible superpositions of up and down in various proportions. Why do the many worlds split strictly along the up/down lines that we are familiar with and never any of the other alternatives? Decoherence answered this question as well. The calculations showed that classical states such as face up and face down were precisely the ones that are robust against decoherence. That is, interactions with the surrounding environment would leave face-up and face-down cards unharmed but would drive any superposition of up and down into classical face-up/face-down alternatives.

Decoherence and the Brain

Physicists have a tradition of analyzing the universe by splitting it into two parts. For example, in thermodynamics, theorists may separate a body of matter from everything else around it (the "environment"), which may supply prevailing conditions of temperature and pressure. Quantum physics traditionally separates the quantum system from the classical measuring apparatus. If unitarity and decoherence are taken seriously, then it is instructive to split the universe into three parts, each described by quantum states: the object under consideration, the environment, and the observer, or subject [see box at left].

Decoherence caused by the environment interacting with the object or the subject ensures that we never perceive quantum superpositions of mental states. Furthermore, our brains are inextricably interwoven with the environment, and decoherence of our firing neurons is unavoidable and essentially instantaneous. As Zeh has emphasized, these conclusions justify the long tradition of using the textbook postulate of wave function collapse as a pragmatic "shut up and calculate" recipe: compute probabilities as if the wave function collapses when the object is observed. Even though in the Everett view the wave function technically never collapses, decoherence researchers generally agree that decoherence produces an effect that looks and smells like a collapse.

The discovery of decoherence, combined with the ever more elaborate experimental demonstrations of quantum weirdness, has caused a noticeable shift in the views of physicists. The main motivation for introducing the notion of wave-function collapse had been to explain why experiments produced specific outcomes and not strange superpositions of outcomes. Now much of that motivation is gone. Moreover, it is embarrassing that nobody has provided a testable deterministic equation specifying precisely when the mysterious collapse is supposed to occur.

An informal poll taken in July 1999 at a conference on quantum computation at the Isaac Newton Institute in Cambridge, England, suggests that the prevailing viewpoint is shifting. Out of 90 physicists polled, only eight declared that their view involved explicit wave function collapse. Thirty chose "many worlds or consistent histories (with no collapse)." (Roughly speaking, the consistent-histories approach analyzes sequences of measurements and collects together bundles of alternative results that would form a consistent "history" to an observer.)

But the picture is not clear: 50 of the researchers chose "none of the above or undecided." Rampant linguistic confusion may contribute to that large number. It is not uncommon for two physicists who say that they subscribe to the Copenhagen interpretation, for example, to find themselves disagreeing about what they mean.

This said, the poll clearly suggests that it is time to update the quantum textbooks: although these books, in an early chapter, invariably list explicit nonunitary collapse as a fundamental postulate, the poll indicates that today many physicists--at least in the burgeoning field of quantum computation--do not take this seriously. The notion of collapse will undoubtedly retain great utility as a calculational recipe, but an added caveat clarifying that it is probably not a fundamental process violating the Schrödinger equation could save astute students many hours of confusion.

Looking Ahead

After 100 years of quantum ideas, what lies ahead? What mysteries remain? How come the quantum? Although basic issues of ontology and the ultimate nature of reality often crop up in discussions about how to interpret quantum mechanics, the theory is probably just a piece in a larger puzzle. Theories can be crudely organized in a family tree where each might, at least in principle, be derived from more fundamental ones above it. Almost at the top of the tree lie general relativity and quantum field theory. The first level of descendants includes special relativity and quantum mechanics, which in turn spawn electromagnetism, classical mechanics, atomic physics, and so on. Disciplines such as computer science, psychology and medicine appear far down in the lineage.

All these theories have two components: mathematical equations and words that explain how the equations are connected to what is observed in experiments. Quantum mechanics as usually presented in textbooks has both components: some equations and three fundamental postulates written out in plain English. At each level in the hierarchy of theories, new concepts (for example, protons, atoms, cells, organisms, cultures) are introduced because they are convenient, capturing the essence of what is going on without recourse to the theories above it. Crudely speaking, the ratio of equations to words decreases as one moves down the tree, dropping near zero for very applied fields such as medicine and sociology. In contrast, theories near the top are highly mathematical, and physicists are still struggling to comprehend the concepts that are encoded in the mathematics.

The ultimate goal of physics is to find what is jocularly referred to as a theory of everything, from which all else can be derived. If such a theory exists, it would take the top spot in the family tree, indicating that both general relativity and quantum field theory could be derived from it. Physicists know something is missing at the top of the tree, because we lack a consistent theory that includes both gravity and quantum mechanics, yet the universe contains both phenomena.

A theory of everything would probably have to contain no concepts at all. Otherwise one would very likely seek an explanation of its concepts in terms of a still more fundamental theory, and so on in an infinite regress. In other words, the theory would have to be purely mathematical, with no explanations or postulates. Rather, an infinitely intelligent mathematician should be able to derive the entire theory tree from the equations alone, by deriving the properties of the universe that they describe and the properties of its inhabitants and their perceptions of the world.

The first 100 years of quantum mechanics have provided powerful technologies and answered many questions. But physics has raised new questions that are just as important as those outstanding at the time of Maxwell's inaugural speech--questions regarding both quantum gravity and the ultimate nature of reality. If history is anything to go by, the coming century should be full of exciting surprises.


By Max Tegmark and John Archibald Wheeler

Inset Article



According to quantum physics, an ideal card perfectly balanced on its edge will fall down in both directions at once, in what is known as a superposition. The card's quantum wave function (blue) changes smoothly and continuously from the balanced state (left) to the mysterious final state (right) that seems to have the card in two places at once. In practice, this experiment is impossible with a real card, but the analogous situation has been demonstrated innumerable times with electrons, atoms and larger objects. Understanding the meaning of such superpositions, and why we never see them in the everyday world around us, has been an enduring mystery at the very heart of quantum mechanics. Over the decades, physicists have developed several ideas to resolve the mystery, including the competing Copenhagen and many-worlds interpretations of the wave function and the theory of decoherence.

Inset Article


IDEA: Observers see a random outcome; probability given by the wave function.

ADVANTAGE: A single outcome occurs, matching what we observe.

PROBLEM: Requires wave functions to "collapse," but no equation specifies when.

When a quantum superposition is observed or measured, we see one or the other of the alternatives at random, with probabilities controlled by the wave function. If a person has bet that the card will fall face up, when she first looks at the card she has a 50 percent chance of happily seeing that she has won her bet. This interpretation has long been pragmatically accepted by physicists even though it requires the wave function to change abruptly, or collapse, in violation of the Schrödinger equation.

Inset Article


IDEA: Superpositions will seem like alternative parallel worlds to their inhabitants.

ADVANTAGE: The Schrödinger equation always works: wave functions never collapse.

PROBLEMS: The bizarreness of the idea. Some technical puzzles remain.

If wave functions never collapse, the Schrödinger equation predicts that the person looking at the card's superposition will herself enter a superposition of two possible outcomes: happily winning the bet or sadly losing. These two parts of the total wave function (of person plus card) carry on completely independently, like two parallel worlds. If the experiment is repeated many times, people in most of the parallel worlds will see the card falling face up about half the time. Stacked cards (right) show 16 worlds that result when a card is dropped four times.

Inset Article


IDEA: Tiny interactions with the surrounding environment rapidly dissipate the peculiar quantumness of superpositions.

ADVANTAGES: Experimentally testable. Explains why the everyday world looks "classical" instead of quantum.

CAVEAT: Decoherence does not completely eliminate the need for an interpretation such as many-worlds or Copenhagen.

The uncertainty of a quantum superposition (left) is different from the uncertainty of classical probability, as occurs after a coin toss (right). A mathematical object called a density matrix illustrates the distinction. The wave function of the quantum card corresponds to a density matrix with four peaks. Two of these peaks represent the 50 percent probability of each outcome, face up or face down. The other two indicate that these two outcomes can still, in principle, interfere with each other. The quantum state is still "coherent." The density matrix of a coin toss has only the first two peaks, which conventionally means that the coin is really either face up or face down but that we just haven't looked at it yet.

Decoherence theory reveals that the tiniest interaction with the environment, such as a single photon or gas molecule bouncing off the fallen card, transforms a coherent density matrix very rapidly into one that, for all practical purposes, represents classical probabilities such as those in a coin toss. The Schrödinger equation controls the entire process.
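The difference between the four-peaked quantum density matrix and the two-peaked classical one can be sketched numerically. The model below is an assumption for illustration only: each environmental "bump" is treated as multiplying the off-diagonal (interference) terms by a fixed factor, rather than being derived from the Schrödinger equation:

```python
import math

# Density matrix of the balanced card, |psi> = (|up> + |down>)/sqrt(2):
# two diagonal peaks (the 50/50 probabilities) and two off-diagonal
# peaks (the coherences that allow interference).
amp = 1 / math.sqrt(2)
psi = [amp, amp]
rho = [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

def decohere(rho, damping, n_bumps):
    # Toy model: every photon or air molecule that "peeks" at the card
    # shrinks the off-diagonal terms; the probabilities are untouched.
    d = damping ** n_bumps
    return [[rho[0][0], rho[0][1] * d],
            [rho[1][0] * d, rho[1][1]]]

rho_after = decohere(rho, damping=0.5, n_bumps=50)
print(rho_after[0][0])  # ~0.5: face-up probability unchanged
print(rho_after[0][1])  # ~0: interference terms gone -- a classical coin toss
```

After enough bumps the density matrix is indistinguishable from that of an ordinary coin toss, which is the sense in which decoherence makes the superposition unobservable without ever invoking a collapse.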


It is instructive to split the universe into three parts: the object under consideration, the environment, and the quantum state of the observer, or subject. The Schrödinger equation that governs the universe as a whole can be divided into terms that describe the internal dynamics of each of these three subsystems and terms that describe interactions among them. These terms have qualitatively very different effects.

The term giving the object's dynamics is typically the most important one, so to figure out what the object will do, theorists can usually begin by ignoring all the other terms. For our quantum card, its dynamics predict that it will fall both left and right in superposition. When our observer looks at the card, the subject-object interaction extends the superposition to her mental state, producing a superposition of joy and disappointment over winning and losing her bet. She can never perceive this superposition, however, because the interaction between the object and the environment (such as air molecules and photons bouncing off the card) causes rapid decoherence that makes this superposition unobservable.

Even if she could completely isolate the card from the environment (for example, by doing the experiment in a dark vacuum chamber at absolute zero), it would not make any difference. At least one neuron in her optic nerves would enter a superposition of firing and not firing when she looked at the card, and this superposition would decohere in about 10^-20 second, according to recent calculations. If the complex patterns of neuron firing in our brains have anything to do with consciousness and how we form our thoughts and perceptions, then decoherence of our neurons ensures that we never perceive quantum superpositions of mental states. In essence, our brains inextricably interweave the subject and the environment, forcing decoherence on us.

M. T. and J.A.W.

The Authors

MAX TEGMARK and JOHN ARCHIBALD WHEELER discussed quantum mechanics extensively during Tegmark's three and a half years as a postdoc at the Institute for Advanced Study in Princeton, N.J. Tegmark is now an assistant professor of physics at the University of Pennsylvania. Wheeler is professor emeritus of physics at Princeton, where his graduate students included Richard Feynman and Hugh Everett III (inventor of the many-worlds interpretation). He received the 1997 Wolf Prize in physics for his work on nuclear reactions, quantum mechanics and black holes. In 1934 and 1935 Wheeler had the privilege of working on nuclear physics in Niels Bohr's group in Copenhagen. On arrival at the institute he asked a workman who was trimming vines running up a wall where he could find Bohr. "I'm Niels Bohr," the man replied. The authors wish to thank Emily Bennett and Ken Ford for their help with an earlier manuscript on this topic and Jeff Klein, Dieter Zeh and Wojciech H. Zurek for their helpful comments.


Source: Science News, 01/27/2001, Vol. 159 Issue 4, p56, 1/3p 

Author(s): Peterson, Ivars

From New Orleans at the Joint Mathematics Meetings

The famous Mesopotamian clay tablet known as Plimpton 322 has tantalized historians of mathematics ever since its discovery more than 60 years ago. Scholars have considered the tablet to be an anomalous mathematical exercise well in advance of its time. They have variously interpreted the cryptic columns of numbers, written in the wedge-shaped script called cuneiform, as a trigonometric table or a sophisticated scheme for generating Pythagorean triples. A Pythagorean triple is a set of three whole numbers, a, b, and c, such that a^2 + b^2 = c^2.

Now, Eleanor Robson of the Oriental Institute at the University of Oxford in England offers an alternative explanation of the tablet's purpose. The tablet served as a guide for a teacher preparing exercises involving squares and reciprocals, she suggests. Robson also pinpoints the tablet's date to within 40 years of 1800 B.C. and says that it probably came from Larsa, a Mesopotamian city about 100 miles southeast of Babylon.

Previous historians had typically failed to consider the tablet's cultural context and relied on later mathematical developments to infer its purpose. For example, the concept of angle measurement, which is essential for a trigonometric table, was not developed until nearly 2,000 years after the tablet was made. New scholarly approaches to Mesopotamian mathematics, however, combine historical, linguistic, and mathematical techniques to address questions such as, How did Mesopotamians approach mathematical problems, and what role did these problems play in their society? "We need to understand the document in its historical and cultural context," Robson says. "Neglecting these factors can hinder our interpretations."

By comparing Plimpton 322 with other ancient tablets, Robson established that its style is consistent with temple records and documents of about 1800 B.C. in Larsa. Scrutiny of various mathematical tablets revealed the importance of computational methods based on reciprocals (1/x) and squares (x^2) of numbers. Robson also found examples of student exercises that consisted of problem lists, each one registering essentially the same problem with slightly different numbers.
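A common modern reconstruction of the reciprocal-pair idea (an illustration in the spirit of Robson's reading, not a claim about the scribe's exact procedure) turns a number x and its reciprocal 1/x into a Pythagorean triple: the half-difference v = (x - 1/x)/2 and half-sum u = (x + 1/x)/2 satisfy u^2 - v^2 = 1 exactly, so clearing denominators yields whole numbers with a^2 + b^2 = c^2:

```python
from fractions import Fraction
from math import lcm

def triple_from_reciprocal_pair(x: Fraction):
    # Half-difference and half-sum of x and its reciprocal
    v = (x - 1 / x) / 2
    u = (x + 1 / x) / 2
    assert u * u - v * v == 1               # holds identically for any x
    s = lcm(v.denominator, u.denominator)   # common scale clears the fractions
    a = v.numerator * (s // v.denominator)
    b = s
    c = u.numerator * (s // u.denominator)
    assert a * a + b * b == c * c           # a genuine Pythagorean triple
    return a, b, c

print(triple_from_reciprocal_pair(Fraction(2)))     # (3, 4, 5)
print(triple_from_reciprocal_pair(Fraction(9, 5)))  # (28, 45, 53)
```

In base 60, choosing x among "regular" numbers (those whose reciprocals terminate) keeps every intermediate value exact, which is why reciprocal tables would have been a natural starting point for such exercises.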

Such evidence enables modern mathematicians to view Plimpton 322 "not as a freakish anomaly in the history of early mathematics but as the epitome of Mesopotamian mathematical culture at its best," Robson says. "It's a well-organized, well-executed, beautiful piece of mathematics." Robson describes her findings in a report scheduled for publication in HISTORIA MATHEMATICA.


Source: Science News, 12/02/2000, Vol. 158 Issue 23, p357, 1/2p 

Author(s): Peterson, I.

Fermat's last theorem is just one of many examples of innocent-looking problems that can long stymie even the most astute mathematicians. It took about 350 years to prove Fermat's tantalizing conjecture.

Now, Preda Mihailescu of the Swiss Federal Institute of Technology in Zurich has proved a theorem that is likely to lead to a solution of Catalan's conjecture, another venerable problem involving relationships among whole numbers. He describes his result in a paper to be published in the Journal of Number Theory.

"This is a very important contribution," says mathematician Andrew Granville of the University of Georgia in Athens. Mihailescu's work probably puts the resolution of Catalan's problem into the foreseeable future, he notes.

Named for Belgian mathematician Eugène Charles Catalan, the conjecture concerns powers of whole numbers. For example, the sequence of all squares and cubes of whole numbers greater than 1 begins with the integers 4, 8, 9, 16, 25, 27, and 36. In this sequence, 8 (the cube of 2) and 9 (the square of 3) are not only powers but also consecutive whole numbers.

In 1844, Catalan asserted that among powers of whole numbers, the only pair of consecutive numbers that arises is 8 and 9. Since then, Catalan's conjecture has posed a challenge to number theorists akin to that provided by Fermat's last theorem (SN: 11/5/94, p. 295).

Solving Catalan's problem amounts to a search for whole-number solutions to the equation x^p - y^q = 1, where x, y, p, and q are all greater than 1. The conjecture suggests that there is only one such solution: 3^2 - 2^3 = 1.
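The conjecture is easy to probe by brute force over a small range (this proves nothing beyond the search bound, of course; it is only an illustration of the statement):

```python
def perfect_powers(limit):
    # All numbers m**k with m > 1 and k > 1, up to the limit
    powers = set()
    for m in range(2, int(limit ** 0.5) + 1):
        v = m * m
        while v <= limit:
            powers.add(v)
            v *= m
    return sorted(powers)

ps = perfect_powers(1_000_000)
consecutive = [(a, b) for a, b in zip(ps, ps[1:]) if b - a == 1]
print(consecutive)  # [(8, 9)] -- the only consecutive pair below a million
```

Even though perfect powers thin out only slowly, no second consecutive pair ever turns up, which is what made the conjecture so tantalizing for a century and a half.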

In a major step toward resolving Catalan's conjecture, Robert Tijdeman of the University of Leiden in the Netherlands showed in 1976 that even if it is not true, there is a finite rather than an infinite number of solutions to the equation. In effect, each of the exponents p and q must be less than a certain value.

Last year, Maurice Mignotte of the Université Louis Pasteur in Strasbourg, France, demonstrated that p had to be less than 7.15 x 10^11 and q less than 7.78 x 10^16. Meanwhile, computations showed that no consecutive powers other than 8 and 9 occur below 10^7.

In the latest advance, Mihailescu proved that, if additional solutions to the equation exist, the exponents p and q are a pair of what are known as double Wieferich primes. These pairs obey the following relationship: p^(q - 1) must leave a remainder of 1 when divided by q^2, and q^(p - 1) must leave a remainder of 1 when divided by p^2. The pair of prime numbers 2 and 1,093 fits this relationship.
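The double Wieferich condition is cheap to check with modular exponentiation (Python's three-argument pow). The pair (2, 1,093) is the one named in the text; (2, 5) is just an arbitrary non-example:

```python
def is_double_wieferich(p, q):
    # p**(q-1) leaves remainder 1 modulo q**2, and vice versa
    return pow(p, q - 1, q * q) == 1 and pow(q, p - 1, p * p) == 1

print(is_double_wieferich(2, 1093))  # True: the pair cited in the text
print(is_double_wieferich(2, 5))     # False: an ordinary pair of primes
```

Because pow with a modulus never forms the huge intermediate power, the test runs instantly even for the enormous exponent bounds in Mignotte's result, which is what makes the distributed search for further pairs feasible.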

Only six examples of double Wieferich primes have been identified so far. All of these pairs are below the range specified by the computations addressing Catalan's conjecture. A major collaborative computational effort (http://www.ensor.org) has now been mounted to find additional double Wieferich primes, but mathematicians are betting that a theoretical approach to proving Catalan's conjecture will beat out the computers.


Source: Scholastic Parent & Child, Dec2000/Jan2001, Vol. 8 Issue 3, p50, 5p, 3c 

Author(s): Church, Ellen Booth

Math and music unite the two hemispheres of the brain--a powerful force for learning.

Did you ever consider the skills your child uses when she sings a song such as "This Old Man"? She is matching and comparing (through pitch, volume, and rhythm), patterning and sequencing (through melody, rhythm, and lyrics), and counting numbers and adding. Add dramatic hand movements or clapping to the beat, and you have created an entire package of learning rolled into one song!

In recent years, there has been a considerable amount of research on the effect of music on brain development and thinking. Neurological research has found that the higher brain functions of abstract reasoning as well as spatial and temporal conceptualization are enhanced by music activities. Activities with music can generate the neural connections necessary for using important math skills.

Music and math seem to create a connection between the two hemispheres of the brain. Music is considered a right-brain activity, while math is a left-brain activity. When combined, the whole child is engaged not only in the realm of thinking but in all the other domains of social-emotional, creative, language, and physical development. Music and math: Together they make a complete developmental package.

The melody of math

The next time your child is singing a song she learned at school, join in. Clap or tap a beat to go with it--even if you don't know the words. Rhythm is made up of patterns--just like math. By focusing on the beat, you will be making the structure of math audible. Tap the beat to a favorite song and see if your child can guess it. Clap the rhythm of her name.

Make up a rhythm for your child to echo back to you.

Take time to sing counting songs too. Remember "One Potato, Two Potato" or "10 Little Monkeys"? Songs like these help children learn about numbers by giving them a "hands-on" experience. Instead of counting by rote, children can count to a beat, a tune, a motion, or an object--or all of the above.

Math around the house

Besides making music, other "homegrown" activities can help your child understand matching, comparing, sorting, and making patterns and sequences and build a foundation for future math learning.

Match them up. Matching and comparing are essential skill activities in math development. Before your child can understand that 3 is more than 2, she needs to be able to recognize more than (bigger than), less than (smaller than), and same as (equal to) in the world around her.

Invite your child to build a tower that is as tall as the coffee table (or the couch or the dining table). How many blocks did she use? Is her tower bigger or smaller than the table? By how many blocks? Is her "couch tower" bigger than, the same as, or shorter than her "table tower"? By how many blocks?

Have a pizza party! Invite your child to match one piece to each person (one-to-one correspondence). You can extend the learning by asking, "How many slices will we need for everyone to have two?"

Patterns all around. One of the important skills in math is the ability to see (or "read") and verbalize a pattern.

Look for the patterns in your environment. Is there a pattern on the wallpaper, the parking lot, even the stripes on your child's shirt? Point these out and invite your child to say or clap the pattern with you: "red, blue, red, blue, red, blue." What color comes next?

Make patterns with the shells you collected at the beach this year, with the coins in her piggy bank, with socks--anything you have multiples of!

How big? How long? Measurement is a natural extension of matching and estimating. To measure, your child has to match a series of objects to the length or width of something. The first step in measuring is to create a standard of measure--but the item you use to measure with doesn't have to be standard at all.

How many (clean!) socks long is the kitchen counter? Count them and see. Then measure the counter with soup spoons. Suggest another item--toy cars, cereal boxes, magazines--and ask your child to guess (estimate) how many of these will be equal to the length of the counter. (Just remember that all your "measuring items" should be around the same size.)

Your child can use nonstandard measuring items for practical purposes too. How much room will the new picture take up on the wall? Why not measure around the frame with erasers? (And for comparison's sake, show your child the equivalent measurement in inches on a ruler.)

Exactly the opposite. The concept of opposites is important to math: For instance, in order for your child to understand low, she needs to experience high. When you use the comparative language of opposites, you are helping your child learn about proportion and number relationships.

Ask your child: "Can you put this can on a low shelf and reach up to put the cereal box on a high shelf?"

While you're taking a walk, say, "Can you take BIG steps? Now do the opposite."

Tally the score. Tally marks are up-and-down lines that are made in sets of four with the fifth mark made as a slash through the set of four. Children learn tally marks quickly because they are connected with an action or event.

Show your child how to keep score in family games such as go fish or tic-tac-toe.

For children over 3 who no longer put objects in their mouth, instead of tally marks, use objects such as buttons, beads, or coins and count five of each into egg carton sections.

As your child becomes comfortable with tally marks or objects, you can introduce a "shopping cart tally." Use a child-size calculator to add up how much each item costs as it goes into the shopping cart. Your child may not really understand what the numbers mean, but she will see that they get larger and larger as the cart gets fuller and fuller.

When you get home, play with the change that is left over. Your child may be becoming interested in money, and although she may not understand the value of each coin, she can begin to sort the pennies and other coins she is collecting. (This activity is for children age 4 years and older.) She can match them by size or color. Eventually your child will be able to match pennies to nickels and dimes just like she matched the tally marks to objects!

Numbers are everywhere in your child's world. You might find them on speed limit signs, route number signs, posters, and houses. Point out the numbers on household electronic devices. Set the timer together on the coffee machine, microwave, alarm clock, or VCR. It's through simple day-to-day activities--from singing songs to slicing pizza--that your child will have her first important math experiences.

PHOTO (COLOR): Rat-a-tat-tat, toot, toot. When playing musical instruments, children experiment with rhythmic patterns.

PHOTO (COLOR): "This tower is eight blocks higher than the coffee table." Parents can help children practice the language of math

PHOTO (COLOR): "The picture is five erasers wide." Your child can estimate and measure without using a ruler.

Great Books about numbers, size, and counting

Beep Beep, Vroom Vroom!

by Stuart J. Murphy, illustrated by Chris L. Demarest

HarperCollins, 2000; $4.95, paper. Ages 4-8.

Benny's Pennies

by Pat Brisson, illustrated by Bob Barner

Yearling, 1995; $5.99. Ages 4-8.

The Cheerios Counting Book: 1, 2, 3

by Barbara Barbieri McGrath, illustrated by Rob Bolster and

Frank Mazzola, Jr.

Scholastic Inc., 2000; $6.99. Ages 2-4.

Eating Fractions

by Bruce McMillan

Scholastic Inc., 1991; $15.95. Ages 4-8.

Learn to Count, Funny Bunnies

by Cyndy Szekeres

Scholastic Inc., 2000; $6.99. Ages 2-4.

More, Fewer, Less

by Tana Hoban

Greenwillow, 1998; $15. Ages 4-8.

The 1, 2, 3's of Math Learning

Your child's acquisition of math skills follows a developmental sequence: just as children (most, anyway) crawl before they walk, they learn math relationships before they learn the names of numbers and how to use them. Children need to learn the structure of math before they can use (and, most important, understand) the vocabulary and symbols of math. Too often we present children with numbers before they have had the opportunity to understand what those number symbols or words mean. For instance, sometimes a young child can count to 10, but he doesn't really understand what he is doing. He is just saying a series of memorized words.


Source: Mathematics Teaching in the Middle School, Dec2000, Vol. 6 Issue 4, p262, 4p, 2 charts 

Author(s): Milliken, Paul; Little, Catherine

In 490 B.C., the messenger Pheidippides ran twenty-six miles to Athens carrying the news of Greek victory at the battle of Marathon. He delivered the news and dropped dead from the effort. Today, we celebrate that famous run with one of the most demanding events in human athletics, the marathon. Like Pheidippides, the modern runner strives to complete the distance in as little time as possible. Unlike that early messenger, today's competitors undergo extensive training to ensure that they remain alive when they have finished the run. Kevin Smith uses mathematics to help runners prepare for marathons.

Marathon Dynamics, Inc., in Mississauga, Ontario, is Kevin Smith's company. He provides a variety of services for the running community, including hosting running clinics, conducting fitness and health presentations and seminars, managing and promoting local running events, coaching runners, and creating customized training plans for individual runners using software that he developed himself.

"Basically my days are filled with crunching numbers," Kevin says, "involving calculations of distances, times, paces, and heart rates." He works with runners at all levels, from "recreational joggers to competitive athletes." Each training plan is customized "in order for an individual to improve running performance." Kevin brings an essential "appreciation and understanding of the relationships" among all the variables in his number-crunching.

Kevin says that he has "had to, at one time or another, apply the skills, formulae, and thought processes learned in" a whole range of mathematics disciplines, including "calculus, probability, algebra, and tons of more basic percentage and exponential calculation, and unit conversions (miles to kilometers, miles per hour to kilometers per hour, or meters per second, and so on)." All this work, on top of the accounting, record keeping, and financial management of operating his own company, means that Kevin is always doing mathematics on the job.

Does that mean that Kevin studied mathematics in college to prepare for his career? No. "I had no idea," he says, "I would be using math in this way or to this extent. I created the job I do when we founded our company." The mathematics-related courses that have been most useful to him are the accounting and economics courses that he took as part of his business administration degree at the University of Western Ontario. "As co-manager of the business, my responsibilities include most of the financial management duties of any small business, but the entrepreneurial spirit and desire will only get one so far." He still has to do the bookkeeping.

"I suppose my attitude toward mathematics would have been a little different," he admits, "had I known how it would all end up." He always did well but looked on mathematics "as a chore. I was not a natural, so I had to work at it. If I had known how vitally essential a comfort level with numbers was going to be in my career, I might have had a little more pure motivation to excel in math."

Instead of a specific interest in mathematics, a technology connection spurred Kevin's career and got his company started. "I started to toy around with a simple Lotus spreadsheet idea I had six or seven years ago," he explains, "about a way to help runners of vastly different experience and ability" plan their training. The resulting software program helped runners calculate "how frequently, how much, and how fast to train." Kevin claims that he "had no idea it would turn into what our customized training software has become--a matrix of over sixty interrelated Microsoft Excel spreadsheets, each of which has hundreds of lines of code and formulae embedded in it. I created the job I do after university, so I had no preconceived notion of how math would be involved in my current career."

Mathematics can save your life only if you know how to apply it. The first marathoner, Pheidippides, did not know how to pace himself and died as a result. With help from Kevin Smith and some number-crunching through his customized-training-plan software, the famous messenger might have lived to deliver more news.

Teacher Notes

Begin work on the activity sheet on page 265 by having students measure their heart rates. In pairs, have them take each other's pulses by having one person count and the other time the beats per minute. Record the beats per minute for each student. Graph the results, and look for trends. Have the students exert themselves by running in place, for one minute, and measure the rates again. Compare the results. This exercise prepares the students for question 1.

For the other questions, make sure to review conversion strategies. For example, when converting from kilometers per hour to meters per second, students must change units of both distance and time.
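That double conversion can be sketched in a few lines of Python. This is our illustration, not part of the article; the function name `kmh_to_ms` is an assumption.

```python
def kmh_to_ms(kmh):
    # 1 km = 1 000 m and 1 hr. = 3 600 sec., so scale the distance up
    # by 1000 and the time down by 3600 -- in effect, divide by 3.6.
    return kmh * 1000 / 3600

# Example: the 8 km/hr. training pace from question 2 is about 2.22 m/sec.
```

Having students write out both factors (x 1000 for distance, / 3600 for time) before collapsing them to "divide by 3.6" reinforces that both units change.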

When calculating the amount of running done in one year for question 2, remember that every fifth day is a rest day.

See figure 1 for one student's solution to the activity sheet.

"Math at Work" explores how mathematics is used in the workplace. Each article will highlight a particular career and the mathematics specific to that discipline. Readers are encouraged to submit manuscripts for this department by sending them to "Math at Work," MTMS, NCTM, 1906 Association Drive, Reston, VA 20191-9988.

Calling All Teacher-Educators

You have probably spent much time and effort encouraging your student teachers to write about their thinking. The Editorial Panel of Mathematics Teaching in the Middle School invites you, a teacher-educator who specializes in middle-grades mathematics, to do the same and share your ideas with your colleagues by writing for the journal. Teacher-educators have a special role to play in helping to translate the theory of good practice into pedagogical ideas that teachers can employ in their classrooms. Furthermore, many teacher-educators read the journal to get new ideas for their teaching.

To find out more about writing for the journal, contact Kathleen Lay at klay@nctm.org and ask for the "MTMS Writer's Packet." If you have a manuscript ready to go, send it directly to Mathematics Teaching in the Middle School, NCTM, 1906 Association Drive, Reston, VA 20191-9988. All submissions must include five double-spaced copies of the manuscript.

Running by the Numbers Activity Sheet

NAME -----

1. An athlete's maximum exertion heart rate is calculated by subtracting his or her age from a fixed number: 220 for males and 226 for females. For example, a 24-year-old female runner has a maximum exertion heart rate of 202 beats per minute, or 226 - 24 = 202. The target performance heart rate is 80% of maximum. Calculate the target performance heart rate for the following runners:

a. 14-year-old female

b. 23-year-old male

c. 35-year-old female

d. 49-year-old male

e. yourself
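The rule in question 1 is simple enough to encode directly. A minimal Python sketch follows; the function name and the use of ordinary rounding (matching the rounding shown in the sample solution) are our assumptions.

```python
def target_heart_rate(age, sex):
    # Maximum exertion rate: 220 - age for males, 226 - age for females.
    base = 226 if sex == "female" else 220
    # Target performance rate is 80% of maximum, rounded to the nearest beat.
    return round(0.8 * (base - age))

# Example: a 14-year-old female has a target rate of 170 beats per minute.
```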

2. A runner follows the training schedule in the table below.

DAY 1     DAY 2      DAY 3      DAY 4     DAY 5

5 km     7.5 km      10 km      5 km      Rest

If the runner maintains an average rate of 8 km/hr., how much time does he spend training in one year?

3. A runner trained for three days in a row. On day 1, she ran 7.5 km in 37 minutes. On day 2, she ran 8.3 km in 41 minutes. On day 3, she ran 6.8 km in 32 minutes. What was her average pace expressed in km/hr. and in m/sec.?

4. The record for the Math-at-Work 5-km Mini-Marathon is 38 minutes and 12 seconds. What is the likely finishing time for the runner in question 3?

5. One mile is approximately 1.6 km. How many miles does the runner in question 3 run in training in one year?

Fig. 1 One student's solution to the activity sheet

1. a) Max.: 226 - 14 = 212

     Target: 0.8 x 212 = 169.6 ≈ 170

  b) Max.: 220 - 23 = 197

     Target: 0.8 x 197 = 157.6 ≈ 158

  c) Max.: 226 - 35 = 191

     Target: 0.8 x 191 = 152.8 ≈ 153

  d) Max.: 220 - 49 = 171

     Target: 0.8 x 171 = 136.8 ≈ 137

  e) Max.: 226 - 13 = 213

     Target: 0.8 x 213 = 170.4 ≈ 170

2. Total distance run: 5 km + 7.5 km + 10 km + 5 km = 27.5 km

Therefore 27.5 km/8 km/hr. = 3.44 hrs. per cycle

Number of 5-day cycles in a year: 365 days/5 days = 73 cycles

Therefore The total amount of time the runner spends on training in one year = 3.44 hrs. x 73 cycles = 251.12 hrs.

3. Total distance: 7.5 km + 8.3 km + 6.8 km = 22.6 km

  Total time: 37 min. + 41 min. + 32 min. = 110 min.

  110 min./60 min. = 1.83 hrs.

Therefore Average pace (km/hr.) = 22.6 km/1.83 hrs. = 12.35 km/hr.

Average pace (m/sec.): 12.35 km x 1 000 = 12 350 m; 1 hour = 3 600 sec.

Therefore Average pace (m/sec.) = 12 350 m/3 600 sec. = 3.43 m/sec.

4. Time = distance/speed = 5 km/12.35 km/hr. = 0.405 hrs. = 24.3 min. = 24 min. 18 sec.

5. Total distance: 7.5 km + 8.3 km + 6.8 km = 22.6 km

Therefore Total distance (miles) = 22.6 km/1.6 km = 14.125 miles

Number of 3-day cycles in a year: 365/3 = 121.67

Therefore Total miles run in training in one year: 14.125 miles x 121.67 = 1718.59 miles

