Industrial Revolution


Industrial Revolution, widespread replacement of manual labor by machines that began in Britain in the 18th century and is still continuing in some parts of the world. The Industrial Revolution was the result of many fundamental, interrelated changes that transformed agricultural economies into industrial ones. The most immediate changes were in the nature of production: what was produced, as well as where and how. Goods that had traditionally been made in the home or in small workshops began to be manufactured in the factory. Productivity and technical efficiency grew dramatically, in part through the systematic application of scientific and practical knowledge to the manufacturing process. Efficiency was also enhanced when large groups of business enterprises were located within a limited area. The Industrial Revolution led to the growth of cities as people moved from rural areas into urban communities in search of work.

The changes brought by the Industrial Revolution overturned not only traditional economies, but also whole societies. Economic changes caused far-reaching social changes, including the movement of people to cities, the availability of a greater variety of material goods, and new ways of doing business. The Industrial Revolution was the first step in modern economic growth and development. Economic development was combined with superior military technology to make the nations of Europe and their cultural offshoots, such as the United States, the most powerful in the world in the 18th and 19th centuries.

The Industrial Revolution began in Great Britain during the last half of the 18th century and spread through regions of Europe and to the United States during the following century. In the 20th century industrialization on a wide scale extended to parts of Asia and the Pacific Rim. Today mechanized production and modern economic growth continue to spread to new areas of the world, and much of humankind has yet to experience the changes typical of the Industrial Revolution.

The Industrial Revolution is called a revolution because it changed society both significantly and rapidly. Over the course of human history, there has been only one other group of changes as significant as the Industrial Revolution. This is what anthropologists call the Neolithic Revolution, which took place in the later part of the Stone Age. In the Neolithic Revolution, people moved from social systems based on hunting and gathering to much more complex communities that depended on agriculture and the domestication of animals. This led to the rise of permanent settlements and, eventually, urban civilizations. The Industrial Revolution brought a shift from the agricultural societies created during the Neolithic Revolution to modern industrial societies.

The social changes brought about by the Industrial Revolution were significant. As economic activities in many communities moved from agriculture to manufacturing, production shifted from its traditional locations in the home and the small workshop to factories. Large portions of the population relocated from the countryside to the towns and cities where manufacturing centers were found. The overall amount of goods and services produced expanded dramatically, and the proportion of capital invested per worker grew. New groups of investors, businesspeople, and managers took financial risks and reaped great rewards.

In the long run the Industrial Revolution has brought economic improvement for most people in industrialized societies. Many enjoy greater prosperity and improved health, especially those in the middle and the upper classes of society. There have been costs, however. In some cases, the lower classes of society have suffered economically. Industrialization has brought factory pollutants and greater land use, which have harmed the natural environment. In particular, the application of machinery and science to agriculture has led to greater land use and, therefore, extensive loss of habitat for animals and plants. In addition, drastic population growth following industrialization has contributed to the decline of natural habitats and resources. These factors, in turn, have caused many species to become extinct or endangered.



The Industrial Revolution in Great Britain

Ever since the Renaissance (14th century to 17th century), Europeans had been inventing and using ever more complex machinery. Particularly important were improvements in transportation, such as faster ships, and communication, especially printing. These improvements played a key role in the development of the Industrial Revolution by encouraging the movement of new ideas and mechanisms, as well as the people who knew how to build and run them.

Then, in the 18th century in Britain, new production methods were introduced in several key industries, dramatically altering how these industries functioned. These new methods included different machines, fresh sources of power and energy, and novel forms of organizing business and labor. For the first time technical and scientific knowledge was applied to business practices on a large scale. Humankind had begun to develop mass production. The result was an increase in material goods, usually selling for lower prices than before.

The Industrial Revolution began in Great Britain because social, political, and legal conditions there were particularly favorable to change. Property rights, such as those for patents on mechanical improvements, were well established. More importantly, the predictable, stable rule of law in Britain meant that monarchs and aristocrats were less likely to arbitrarily seize earnings or impose taxes than they were in many other countries. As a result, earnings were safer, and ambitious businesspeople could gain wealth, social prestige, and power more easily than could people on the European continent. These factors encouraged risk taking and investment in new business ventures, both crucial to economic growth.

In addition, Great Britain’s government pursued a relatively hands-off economic policy. This free-market approach was popularized by the Scottish philosopher and economist Adam Smith in his book The Wealth of Nations (1776). The hands-off policy permitted fresh methods and ideas to flourish with little interference or regulation.

Britain’s nurturing social and political setting encouraged the changes that began in a few trades to spread to others. Gradually the new ways of production transformed more and more parts of the British economy, although older methods continued in many industries. Several industries played key roles in Britain’s industrialization. Iron and steel manufacture, the production of steam engines, and textiles were all powerful influences, as was the rise of a machine-building sector able to spread mechanization to other parts of the economy.


Changes in Industry

Modern industry requires power to run its machinery. During the development of the Industrial Revolution in Britain, coal was the main source of power. Even before the 18th century, some British industries had begun using the country’s plentiful coal supply instead of wood, which was much scarcer. Coal was adopted by the brewing, metalworking, and glass and ceramics industries, demonstrating its potential for use in many industrial processes.


Iron and Coal

A major breakthrough in the use of coal occurred in 1709 at Coalbrookdale in the valley of the Severn River. There English industrialist Abraham Darby successfully used coke—a high-carbon, converted form of coal—to produce iron from iron ore. Using coke eliminated the need for charcoal, a more expensive, less efficient fuel. Metal makers thereafter discovered ways of using coal and coke to speed the production of raw iron, bar iron, and other metals.

The most important advance in iron production occurred in 1784, when Englishman Henry Cort introduced new techniques for puddling and rolling raw iron—processes that refine iron and shape it into the desired size and form. These advances in metalworking were an important part of industrialization. They enabled iron, which was relatively inexpensive and abundant, to be used in many new ways, such as building heavy machinery, a use for which its strength and durability made it well suited. As a result, iron came to be used in machinery throughout industry.

Iron was also vital to the development of railroads, which improved transportation. Better transportation made commerce easier, and along with the growth of commerce enabled economic growth to spread to additional regions. In this way, the changes of the Industrial Revolution reinforced each other, working together to transform the British economy.



The Steam Engine

If iron was the key metal of the Industrial Revolution, the steam engine was perhaps the most important machine technology. As with iron, inventions and improvements in the use of steam for power began prior to the 18th century. As early as 1698, English engineer Thomas Savery created a steam engine to pump water from mines. Thomas Newcomen, another English engineer, developed an improved version by 1712. Scottish inventor and mechanical engineer James Watt made the most significant improvements, allowing the steam engine to be used in many industrial settings, not just in mining. Early mills had run successfully with water power, but the steam engine meant that a factory could be located anywhere, not just close to water.

In 1775 Watt formed an engine-building and engineering partnership with manufacturer Matthew Boulton. This partnership became one of the most important businesses of the Industrial Revolution. Boulton & Watt served as a kind of creative technical center for much of the British economy. They solved technical problems and spread the solutions to other companies. Similar firms did the same thing in other industries and were especially important in the machine tool industry. This type of interaction between companies was important because it reduced the amount of research time and expense that each business had to spend working with its own resources. The technological advances of the Industrial Revolution happened more quickly because firms often shared information, which they then could use to create new techniques or products.

Like iron production, steam engines found many uses in a variety of other industries, including steamboats and railroads. Steam engines are another example of how some changes brought by industrialization led to even more changes in other areas.



Textiles

The industry most often associated with the Industrial Revolution is the textile industry. In earlier times, the spinning of yarn and the weaving of cloth occurred primarily in the home, with most of the work done by people working alone or with family members. This pattern lasted for many centuries. In 18th-century Great Britain a series of extraordinary innovations reduced and then replaced the human labor required to make cloth. Each advance created problems elsewhere in the production process that led to further improvements. Together these advances created a new system for supplying cloth and clothing.

The first important invention in textile production came in 1733. British inventor John Kay created a device known as the flying shuttle, which partially mechanized the process of weaving. By 1770 British inventor and industrialist James Hargreaves had invented the spinning jenny, a machine that spins a number of threads at once, and British inventor and cotton manufacturer Richard Arkwright had organized the first production using water-powered spinning. These developments permitted a single spinner to make numerous strands of yarn at the same time. By about 1779 British inventor Samuel Crompton introduced a machine called the mule, which further improved mechanized spinning by decreasing the danger that threads would break and by creating a finer thread.

Throughout the textile industry, specialized machines powered either by water or steam appeared. Row upon row of these innovative, highly productive machines filled large, new mills and factories. Soon Britain was supplying cloth to countries throughout the world. This industry seemed to many people to be the embodiment of an emerging, mechanized civilization.

The most important results of these changes were enormous increases in the output of goods per worker. A single spinner or weaver, for example, could now turn out many times the volume of yarn or cloth that earlier workers had produced. This marvel of rising productivity was the central economic achievement that made the Industrial Revolution such a milestone in human history.


Changes in Society

The Industrial Revolution also had considerable impact upon the nature of work. It significantly changed the daily lives of ordinary men, women, and children in the regions where it took root and grew.


Growth of Cities

One of the most obvious changes to people’s lives was that more people moved into the urban areas where factories were located. Many of the agricultural laborers who left villages were forced to move. Beginning in the early 18th century, more people in rural areas were competing for fewer jobs. The rural population had risen sharply as new sources of food became available, and death rates declined due to fewer plagues and wars. At the same time, many small farms disappeared. This was partly because new enclosure laws required farmers to put fences or hedges around their fields to prevent common grazing on the land. Some small farmers who could not afford to enclose their fields had to sell out to larger landholders and search for work elsewhere. These factors combined to provide a ready work force for the new industries.

New manufacturing towns and cities grew dramatically. Many of these cities were close to the coalfields that supplied fuel to the factories. Factories had to be close to sources of power because power could not be distributed very far. The names of British factory cities soon symbolized industrialization to the wider world: Liverpool, Birmingham, Leeds, Glasgow, Sheffield, and especially Manchester. In the early 1770s Manchester numbered only 25,000 inhabitants. By 1850, after it had become a center of cotton manufacturing, its population had grown to more than 350,000.

In pre-industrial England, more than three-quarters of the population lived in small villages. By the mid-19th century, however, the country had made history by becoming the first nation with half its population in cities. By 1850 millions of British people lived in crowded, grim industrial cities. Reformers began to speak of the mills and factories as dark, evil places.


Effects on Labor

The movement of people away from agriculture and into industrial cities brought great stresses to many people in the labor force. Women in households who had earned income from spinning found the new factories taking away their source of income. Traditional handloom weavers could no longer compete with the mechanized production of cloth. Skilled laborers sometimes lost their jobs as new machines replaced them.

In the factories, people had to work long hours under harsh conditions, often with few rewards. Factory owners and managers paid the minimum amount necessary for a work force, often recruiting women and children to tend the machines because they could be hired for very low wages. Soon critics attacked this exploitation, particularly the use of child labor.

The nature of work changed as a result of division of labor, an idea important to the Industrial Revolution that called for dividing the production process into basic, individual tasks. Each worker would then perform one task, rather than a single worker doing the entire job. Such division of labor greatly improved productivity, but many of the simplified factory jobs were repetitive and boring. Workers also had to labor for many hours, often more than 12 hours a day, sometimes more than 14, and people worked six days a week. Factory workers faced strict rules and close supervision by managers and overseers. The clock ruled life in the mills.

By about the 1820s, income levels for most workers began to improve, and people adjusted to the different circumstances and conditions. By that time, Britain had changed forever. The economy was expanding at a rate that was more than twice the pace at which it had grown before the Industrial Revolution. Although vast differences existed between the rich and the poor, most of the population enjoyed some of the fruits of economic growth. The widespread poverty and constant threat of mass starvation that had haunted the preindustrial age lessened in industrial Britain. Although the overall health and material conditions of the populace clearly improved, critics continued to point to urban crowding and the harsh working conditions for many in the mills.



The Industrial Revolution Spreads

The economic successes of the British soon led other nations to try to follow the same path. In northern Europe, mechanics and investors in France, Belgium, Holland, and some of the German states set out to imitate Britain’s successful example. In the young United States, Secretary of the Treasury Alexander Hamilton called for an Industrial Revolution in his Report on Manufactures (1791). Many Americans felt that the United States had to become economically strong in order to maintain its recently won independence from Great Britain. In cities up and down the Atlantic Coast, leading citizens organized associations devoted to the encouragement of manufactures.

The Industrial Revolution unfolded in the United States even more vigorously than it had in Great Britain. The young nation began as a weak, loose association of former colonies with a traditional economy. More than three-quarters of the labor force worked in agriculture in 1790. Americans soon enjoyed striking success in mechanization, however. This was clear in 1851 when producers from many nations gathered to display their industrial triumphs at the first World’s Fair, at the Crystal Palace in London. There, it was the work of Americans that attracted the most attention. Shortly after that, the British government dispatched a special committee to the United States to study the manufacturing accomplishments of its former colonies. By the end of the century, the United States was the world leader in manufacturing, unfolding what became known as the Second Industrial Revolution. The American economy had emerged as the largest and most productive on the globe.


American Advantages

The United States enjoyed many advantages that made it fertile ground for an Industrial Revolution. A rich, sparsely inhabited continent lay open to exploitation and development. It proved relatively easy for the United States government to buy or seize vast lands across North America from Native Americans, from European nations, and from Mexico. In addition, the American population was highly literate, and most felt that economic growth was desirable. With settlement stretched across the continent from the Atlantic Ocean to the Pacific Ocean, the United States enjoyed a huge internal market. Within its distant borders there was remarkably free movement of goods, people, capital, and ideas.

The young nation also inherited many advantages from Great Britain. The stable legal and political systems that had encouraged enterprise and rewarded initiative in Great Britain also did so, with minor variations, in the United States. No nation was more open to social mobility, at least for white male Protestants. Others—particularly African Americans, Native Americans, other minorities, and women—found the atmosphere much more difficult. In the context of the times, however, the United States was relatively open to change. It quickly adopted many of the technologies, forms of organization, and attitudes shaping the new industrial world, and then proceeded to generate its own advances.

One initial American advantage was the fact that the United States shared the language and much of the culture of Great Britain, the pioneering industrial nation. This helped Americans transfer technology to the United States. As descriptions of new machines and processes appeared in print, Americans read about them eagerly and tried their own versions of the inventions sweeping Britain.

Critical to furthering industrialization in the United States were machines and knowledgeable people. Although the British tried to prevent skilled mechanics from leaving Britain and advanced machines from being exported, those efforts mostly proved ineffective. Americans worked actively to encourage such transfers, even offering bounties (special monetary rewards) to encourage people with knowledge of the latest methods and devices to move to the United States.

The most dramatic early example of a successful technical transfer is the case of Samuel Slater. Slater was an important figure in a leading British textile firm who sailed to the United States masquerading as a farmer. He eventually moved to Rhode Island, where he worked with mechanics, machine builders, and merchants to create the first important textile mill in the United States. Slater had served his apprenticeship under Jedediah Strutt, a partner of Richard Arkwright, and Slater’s mill used Arkwright’s innovative system of mechanized spinning. The firm of Almy, Brown, and Slater inspired many imitators and gave birth to a vast textile industry in New England.

The lure of the open, growing United States was strong. Its opportunities attracted knowledgeable, ambitious individuals not only from Britain but from other European countries as well. In 1800, for example, a young Frenchman named Eleuthère Irénée du Pont de Nemours brought to the United States his knowledge of the latest French advances in chemistry and gunpowder making. In 1802 he founded what would become one of the largest and most successful American businesses, E. I. du Pont de Nemours and Company, better known simply as DuPont.


American Challenges

Soon the United States was pioneering on its own. Because local circumstances and conditions in the United States were somewhat different than those in Britain, industrialization also developed somewhat differently. Although the United States had many natural resources in abundance, some were more plentiful than others. The profusion of wood in North America, for example, led Americans to use that material much more than Europeans did. They burned wood widely as fuel and also made use of it in machinery and in construction. Taking advantage of the vast forest resources in their country, Americans built the world’s best woodworking machines.

Transportation and communication were special challenges in a nation that stretched across the North American continent. Economic growth depended on tying together the resources, markets, and people of this large area. Despite the general conviction that private enterprise was best, the government played an active role in uniting the country, particularly by building roads. From 1815 to 1860 state and local governments also provided almost three-quarters of the financing for canal construction and related improvements to waterways.

When the British began building railroads, Americans embraced this new technology eagerly, and substantial public money was invested in rail systems. By 1860 more than half the railroad tracks in the world were in the United States. The most critical 19th-century improvement in communication, the telegraph, was invented by American Samuel F. B. Morse. The telegraph allowed messages to be sent long distances almost instantly by using a code of electrical pulses passing over a wire. The railroad and the telegraph spread across North America and helped create a national market, which in turn encouraged additional improvements in transportation and communication.

Another challenge in the United States was a relative shortage of labor. Much more than in continental Europe or in Britain, labor was in chronically short supply in the United States. This led industrialists to develop machinery to replace human labor.


Changes in Industry

Americans soon demonstrated a great talent for mechanization. Famed American arms maker Samuel Colt summarized his fellow citizens’ faith in technology when he declared in 1851, “There is nothing that cannot be produced by machinery.”


Continuous-Process Manufacturing

An important American development was continuous-process manufacturing. In continuous-process manufacturing, large quantities of the same product, such as cigarettes or canned food, are made in a nonstop operation. The process runs continuously, except for repairs to or maintenance of the machinery used. In the late 18th century, inventor Oliver Evans of Delaware created a remarkable water-powered flour mill. In Evans’s mill, machinery elevated the grain to the top of the mill and then moved it mechanically through various processing steps, eventually producing flour at the bottom of the mill. The process greatly reduced the need for manual labor and cut milling costs dramatically. Mills modeled after Evans’s were built along the Delaware and Brandywine rivers and Chesapeake Bay, and by the end of the 18th century they were arguably the most productive in the world. Similar milling technology was also used to grind snuff and other tobacco products in the same region.

As the 19th century passed, Americans improved continuous-process technology and expanded its use. The basic principle of utilizing gravity-powered and mechanized systems to move and process materials proved applicable in many settings. The meatpacking industry in the Midwest employed a form of this technology, as did many industries using distilling and refining processes. Items made using continuous-process manufacturing included kerosene, gasoline, and other petroleum products, as well as many processed foods. Mechanized, continuous processing yielded uniform quantity production with a minimum need for human labor.


The American System

In a closely related development, by the mid-19th century American manufacturers had shaped a set of techniques later known as the American system of production. This system involved using special-purpose machines to produce large quantities of similar, sometimes interchangeable, parts that would then be assembled into a finished product. The American system extended the idea of division of labor from workers to specialized machines. Instead of a worker making a small part of a finished product, a machine made the part, speeding the process and allowing manufacturers to produce goods more quickly. This method also produced goods of much more uniform quality than hand labor could. The American system appeared first in New England in the manufacture of clocks, locks, axes, and shovels. Around the same time, the federal armories used an advanced version of this same system to produce large numbers of firearms, giving rise to the term armory practice.

Soon a group of knowledgeable mechanics and engineers spread the American system. Many industries began to use special-purpose machines to produce large quantities of similar or even interchangeable parts for assembly into finished goods. The American system was used by inventor and manufacturer Cyrus Hall McCormick to produce his innovative reapers; Samuel Colt used it to make revolver pistols; and inventor Isaac Merrit Singer produced his popular sewing machines using this system. These kinds of products won prizes and attracted much attention at the Crystal Palace exhibition of 1851.


The Second Industrial Revolution

As American manufacturing technology spread to new industries, it ushered in what many have called the Second Industrial Revolution. The first had come on a wave of new inventions in iron making, in textiles, in the centrally powered factory, and in new ways of organizing business and work. In the latter 19th century, a second wave of technical and organizational advances carried industrial society to new levels. While Great Britain had been the birthplace of the first revolution, the second occurred most powerfully in the United States.

With the second revolution came many new processes. Iron and steel manufacturing was transformed in the 1850s and 1860s by vastly more productive technologies, the Bessemer process and the open-hearth furnace. The Bessemer process, developed by British inventor Henry Bessemer, enabled steel to be produced more efficiently by using blasts of air to convert crude iron into steel. The open-hearth furnace, created by German-born British inventor William Siemens, allowed steelmakers to achieve temperatures high enough to burn away impurities in crude iron.

In addition, factories and their production output became much larger than they had been in the first stage of the Industrial Revolution. Some industries concentrated production in fewer but bigger and more productive facilities. In addition, some industries boosted production in existing (not necessarily larger) factories. This growth was enabled by a variety of factors, including technological and scientific progress; improved management; and expanding markets due to larger populations, rising incomes, and better transportation and communications.

American industrialist Andrew Carnegie built a giant iron and steel empire using huge new plants. John D. Rockefeller, another American industrialist, did the same in petroleum refining. Soon there were enormous advances in science-based industries—for example, chemicals, electrical power, and electrical machinery. Just as in the first revolution, these changes prompted further innovations, which led to further economic growth.

It was in the automobile industry that continuous-process methods and the American system combined to greatest effect. In 1903 American industrialist Henry Ford founded the Ford Motor Company. His production innovation was the moving assembly line, which brought together many mass-produced parts to create automobiles. Ford’s moving assembly line gave the world the fullest expression yet of the Second Industrial Revolution, and his production triumphs in the second decade of the 20th century signaled the crest of the new industrial age.


Organization and Work

Just as important as advances in manufacturing technology was a wave of changes in how business was structured and work was organized. Beginning with the large railroad companies, business leaders learned how to operate and coordinate many different economic activities across broad geographic areas. During the first phase of the Industrial Revolution, many factories had grown into large organizations, but even by 1875 few firms coordinated production and marketing across many business units. Leaders such as Carnegie and Rockefeller changed this, and firms grew much larger in numerous industries, giving birth to the modern corporation.

Within the business unit, Americans pioneered novel ways of organizing work. Engineers studied and modified production, seeking the most efficient ways to lay out a factory, move materials, route jobs, and control work through precise scheduling. Industrial engineer Frederick W. Taylor and his followers sought both efficiency and contented workers. They believed that they could achieve those results through precise measurement and analysis of each aspect of a job. Taylor’s The Principles of Scientific Management (1911) became the most influential book of the Second Industrial Revolution. By the early 20th century, Ford’s mass production techniques and Taylor’s scientific management principles had come to symbolize America’s place as the leading industrial nation.


Changes in Agriculture

As it had done in Britain, industrialization brought deep and often distressing shifts to American society. The influence of rural life declined, and the relative economic importance of agriculture dwindled. Although the amount of land under cultivation and the number of people earning a living from agriculture expanded, the growth of commerce, manufacturing, and the service industries steadily eclipsed farming’s significance. The proportion of the work force dependent on agriculture shrank constantly from the time of the first federal census in 1790. From that time until the end of the 19th century, farm workers dropped from about 75 percent of the work force to about 40 percent.

New technology was introduced in agriculture. The scarcity of labor and the growth of markets for agricultural products encouraged the introduction of machinery to the farms. Machinery increased productivity so that fewer hands could produce more food per acre. New plows, seed drills, cultivators, mowers, and threshers, as well as the reaper, all appeared by 1860. After that, better harvesters and binding machines came into use, as did the harvester-threshers known as combines. Farmers also used limited steam power in the late 19th century, and by about 1905 they began using gasoline-powered tractors. At about the same time, Americans began to apply science systematically to agriculture, such as by using genetics as a basis for plant breeding. These techniques, plus fertilizers and pesticides, helped to increase farm productivity.


Changes in Society

As in Britain, the Industrial Revolution in the United States led to major social changes. Urban population grew, rural population declined, and the nature of labor changed dramatically.


Growth of Cities

As a result of the shift in economic importance from agriculture to manufacturing, American cities grew both in number and in population. From 1860 to 1900 the number of urban areas in the United States expanded fivefold. Even more striking was the explosion in the growth of big cities. In 1860 there were only 9 American cities with more than 100,000 inhabitants; by 1900 there were 38. Like the British critics of the preceding century, many Americans viewed these industrial and commercial centers as dark and dirty places crowded with exploited workers. But whatever the drawbacks of city life, urban growth in the United States was unstoppable, fueled both by the movement of rural Americans and a swelling tide of immigrants from Europe. In 1790 only about 5 percent of the American population lived in cities; today more than 75 percent does. This long-term trend is characteristic of societies experiencing industrialization and is evident today in regions of Asia and Latin America that are now undergoing an industrial revolution.


Effects on Labor

Division of Labor in Industry

Division of labor is a basic tenet of industrialization. In division of labor, each worker is assigned a different task, or step, in the manufacturing process, and as a result, total production increases. For example, one person performing all five steps in the manufacture of a product might make one unit in a day, while five workers, each specializing in one of the five steps, can make 10 units in the same amount of time.
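The arithmetic above can be sketched as a tiny throughput model (a hypothetical illustration; the assumption that specialization doubles each worker's effective speed is taken from the example's numbers, not stated as a general law):

```python
STEPS = 5

def generalist_output(workers: int = 1, units_per_worker: float = 1.0) -> float:
    """Units per day when each worker performs all five steps alone."""
    return workers * units_per_worker

def specialist_output(workers: int = STEPS, units_per_worker: float = 1.0,
                      speedup: float = 2.0) -> float:
    """Units per day when each worker repeats a single step; repetition is
    assumed to double effective speed, matching the example's figures."""
    return workers * units_per_worker * speedup

print(generalist_output())   # 1.0 unit per day
print(specialist_output())   # 10.0 units per day
```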

Industrialization brought to the United States conflicts and stresses similar to the ones encountered in Britain and in Europe. Those who had a stake in the traditional economy lost ground as mechanized production replaced household manufacturing. Often, skilled workers found their income and their status under attack from the new machines and the relentless division of labor. Businesses had always enjoyed considerable power in their relationships with the labor force, but the balance tipped even more in their favor as firms grew larger.

In order to counter the power of business, workers tried to form trade unions to represent them and bargain for rights. Initially they had only limited success. Occasional strikes, sometimes violent, appeared as signs of underlying tensions. Until the Great Depression of the 1930s, skilled craft workers were almost the only groups able to sustain unions. The most successful of these unions were those in the American Federation of Labor. They did not seek fundamental social or economic change, such as socialists advocated; instead they accepted industrial society and concentrated on improving the wages and working conditions of their members.

Eventually the United States digested the tensions and dislocations caused by the coming of industry and the growth of cities. The government began to enact regulations and antitrust laws to counter the worst excesses of big business. The Sherman Antitrust Act of 1890 was created to prevent corporate trusts, monopoly enterprises formed to reduce competition and allow essentially a single business firm to control the price of a product. Laws such as the Fair Labor Standards Act, enacted in 1938, mandated worker protections, including the standard 40-hour workweek and overtime pay beyond it. Above all, the rising incomes and high rates of economic growth proved calming. Material progress convinced most Americans that industrialization had been a positive development, although the challenge of balancing business growth and worker rights remains an issue to this day.



After the first appearance of industrialization in Britain, many other nations eagerly pursued similar changes. In the 19th century the Industrial Revolution spread not only to the United States, but also to Germany, France, Belgium, and much of the rest of western Europe. Often, skilled British workers and knowledgeable entrepreneurs moved to other countries and taught the manufacturing techniques they had learned in Britain.

Change happened somewhat differently in each setting because of varying resources, political conditions, and social and economic circumstances. In France, industrial development was somewhat delayed by political turmoil and a lack of coal, but the central government played a more active role in development than Britain’s had. Both countries created railroad networks, for example, but the British did so entirely through private companies, while the French central government funded much of its country’s railways. Craft production, in which people make decorative or functional items by hand, also remained a more significant element in the French economy than it did in Britain. In some industries, such as furniture manufacturing, the extent of mechanization was not as great as it had been in Great Britain.

In Germany the central government’s role was also greater than it had been in Great Britain. This was partly because the German government wanted to hasten the process and catch up with British industrialization. Germany used its rich iron and coal resources to develop heavy industry, such as iron and steel manufacture. It also proved to be an environment that encouraged big businesses and cooperation among large firms. The German banking sector, for example, was dominated by a few large banks that coordinated efforts to increase industry.

In Russia, the government made repeated efforts to enable industrialization, sometimes hiring foreigners to build and operate whole factories. On the whole, however, industrialization spread more slowly there, and the Russian economy remained overwhelmingly agricultural for a long time. Even in largely industrialized areas, such as western Europe and the United States, some areas lagged behind in industrial development. Southern Italy, Spain, and the American South remained largely agrarian until much later than their neighbors. In Asia, industrialization varied, although as a whole it came much later than Western European development.

In Japan, the first industrial Asian nation, the central government made industrialization a national goal during the late 19th century. Industrialization in some areas of China began in the early 20th century and increased near the end of the century. Other Asian and Pacific Rim countries, such as South Korea and Taiwan, began to industrialize after the 1960s.

In Southeast Asia, sub-Saharan Africa, India, and much of Latin America—areas that were colonies of Western nations, or that were dominated by other nations for long periods—industrialization was much more delayed than in many other areas. The legacies of colonialism made widespread change difficult because the society and economy of colonies were heavily controlled by and dependent on the parent country.

Although different cultures produced distinctive variations of an industrial revolution, the similarities are striking. Mechanization and urbanization were central to each area in which the Industrial Revolution succeeded, as were accompanying tensions and disruptions. In most societies, the truly revolutionary changes came during the first 75 to 100 years after the process of industrialization began. After that, factory production dominated manufacturing, and most people moved to cities.



The modern, industrial societies created by the Industrial Revolution have come at some cost. The nature of work became worse for many people, and industrialization placed great pressures on traditional family structures as work moved outside the home. The economic and social distances between groups within industrial societies are often very wide, as is the disparity between rich industrial nations and poorer neighboring countries. The natural environment has also suffered from the effects of the Industrial Revolution. Pollution, deforestation, and the destruction of animal and plant habitats continue to increase as industrialization spreads.

Perhaps the greatest benefits of industrialization are increased material well-being and improved healthcare for many people in industrial societies. Modern industrial life also provides a constantly changing flood of new goods and services, giving consumers more choices. With both its negative aspects and its benefits, the Industrial Revolution has been one of the most influential and far-reaching movements in human history.




Recycling, collection, processing, and reuse of materials that would otherwise be thrown away. Materials ranging from precious metals to broken glass, from old newspapers to plastic spoons, can be recycled. The recycling process reclaims the original material and uses it in new products.

In general, using recycled materials to make new products costs less and requires less energy than using new materials. Recycling can also reduce pollution, either by reducing the demand for high-pollution alternatives or by minimizing the amount of pollution produced during the manufacturing process. Recycling decreases the amount of land needed for trash dumps by reducing the volume of discarded waste.

Recycling can be done internally (within a company) or externally (after a product is sold and used). In the paper industry, for example, internal recycling occurs when leftover stock and trimmings are salvaged to help make more new product. Since the recovered material never left the manufacturing plant, the final product is said to contain preconsumer waste. External recycling occurs when materials used by the customer are returned for processing into new products. Materials ready to be recycled in this manner, such as empty beverage containers, are called postconsumer waste.



Just about any material can be recycled. On an industrial scale, the most commonly recycled materials are those that are used in large quantities—metals such as steel and aluminum, plastics, paper, glass, and certain chemicals.


Steel

There are two methods of making steel using recycled material: the basic oxygen furnace (BOF) method and the electric arc furnace (EAF) method. The BOF method involves mixing molten scrap steel in a furnace with new steel. About 28 percent of the new product is recycled steel. Steel made by the BOF method typically is used to make sheet-steel products like cans, automobiles, and appliances. The EAF method normally uses 100 percent recycled steel. Scrap steel is placed in a furnace and melted by electricity that arcs between two carbon electrodes. Limestone and other materials are added to the molten steel to remove impurities. Steel produced by the EAF method usually is formed into beams, reinforcing bars, and thick plate.

Approximately 68 percent of all steel is recycled, making it one of the world’s most recycled materials. In 1994, 37 billion steel cans, weighing 2,408,478 metric tons (2,654,892 U.S. tons), were used in the United States, of which 53 percent were recycled. In 1995, more than 60 million metric tons (70 million U.S. tons) of scrap steel were recycled in the United States.


Aluminum

Recycling aluminum in the United States provides a stable, domestic aluminum supply amounting to approximately one-third of the industry’s requirement. In contrast, most of the ore required to produce new aluminum must be imported from Jamaica, Australia, Surinam, Guyana, and Guinea. About 2 kg (about 4 lb) of ore, a mixture of aluminum oxides called bauxite, are needed to make 0.5 kg (1 lb) of aluminum.

The U.S. aluminum industry has recognized the advantage of a domestic aluminum supply and has established systems for collection, transportation, and processing. For this reason, aluminum cans almost always produce a profit in community recycling programs. A number of states require deposits for beverage containers and have established redemption centers at supermarkets. The overall recycling rate of all forms of aluminum is about 35 percent.

Cans brought to collection centers are crushed, baled, and shipped to regional mills or reclamation plants. The cans are then shredded to reduce volume and heated to remove coatings and moisture. Next, they are put into a furnace, melted, and formed into ingots, or bars, weighing about 13,600 kg (30,000 lb) or more. The ingots go to another mill to be rolled into sheets. The sheets are sent to a container plant and cut into disks from which new cans are formed. The cans are printed with the beverage makers’ logos and are shipped (with tops separate) to the filling plant.

About 100 billion aluminum beverage cans are used each year in the United States and about 65 percent of these are then recycled. The average aluminum can in the United States contains 40 percent postconsumer recycled aluminum. About 97 percent of all soft drink cans and 99 percent of all beer cans are made of aluminum.


Plastics

Plastics are more difficult to recycle than metal, paper, or glass. One problem is that containers alone may be made from any of seven categories of plastic, and for effective recycling the different types cannot be mixed. Most states require that plastic containers carry identification codes so they can be more easily identified and separated. The code assigns a particular number to each of the seven plastics used in packaging. The number 1 refers to polyethylene terephthalate (PET) and the number 2 refers to high-density polyethylene (HDPE). PET can be made into carpet, or fiberfill for ski jackets and clothing. HDPE can be recycled into construction fencing, landfill liners, and a variety of other products. Plastics coded with the number 6 are polystyrene (PS), which can be recycled into cafeteria trays, combs, and other items.
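The sorting scheme can be sketched as a small lookup table (codes 1, 2, and 6 are named in the text; 3, 4, 5, and 7 follow the standard resin numbering and are supplied here as background, not from the source):

```python
# Resin identification codes used to keep plastic types separate.
RESIN_CODES = {
    1: "PET (polyethylene terephthalate)",
    2: "HDPE (high-density polyethylene)",
    3: "PVC (polyvinyl chloride)",
    4: "LDPE (low-density polyethylene)",
    5: "PP (polypropylene)",
    6: "PS (polystyrene)",
    7: "Other",
}

def sort_by_resin(containers):
    """Group (code, label) pairs into per-resin bins so types are not mixed."""
    bins = {}
    for code, label in containers:
        bins.setdefault(code, []).append(label)
    return bins

bins = sort_by_resin([(1, "soda bottle"), (2, "milk jug"), (1, "water bottle")])
print(bins[1])   # ['soda bottle', 'water bottle']
```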

The recycling process for plastic normally involves cleaning it, shredding it into flakes, and then melting the flakes into pellets. The pellets are then melted and formed into a final product. Some products work best with only a small percentage of recycled content. Other products, such as HDPE plastic milk cases, can be made successfully with 100 percent recycled content. The plastic container industry has concentrated on weight reduction and source reduction. For example, the one-gallon HDPE milk container that weighed about 120 g (about 4.2 oz) in the 1960s weighed just 65 g (about 2.3 oz) in 1996.

In the United States, the overall recycling of plastic was under 4.7 percent in 1994, with the recycling rate of plastic containers at about 19 percent. Most discarded plastic is in the form of plastic containers. Plastics made up about 9 percent of the waste stream by weight in 1995.


Paper and Paper Products

Paper products that can be recycled include cardboard containers, wrapping paper, and office paper. The most commonly recycled paper product is newsprint.

In newspaper recycling, old newspapers are collected and searched for contaminants such as plastic bags and aluminum foil. The paper goes to a processing plant where it is mixed with hot water and turned into pulp in a machine that works much like a big kitchen blender. The pulp is screened and filtered to remove smaller contaminants. The pulp then goes to a large vat where the ink separates from the paper fibers and floats to the surface. The ink is skimmed off, dried and reused as ink or burned as boiler fuel. The cleaned pulp is mixed with new wood fibers to be made into paper again.

Paper and paper products such as corrugated board constitute about 40 percent of the discards in the United States, making paper the most plentiful single item in landfills. Experts estimate the average office worker generates about 7 kg (about 15 lb) of wastepaper (about 1,500 sheets) per month. Every ton of paper that is recycled saves about 1.4 cu m (about 50 cu ft) of landfill space. One ton of recycled paper also saves 17 pulpwood trees (trees used to produce paper).
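The ratios quoted above combine into a quick back-of-the-envelope estimate (a sketch; it treats the quoted “ton” as a metric ton for simplicity, and the office size is hypothetical):

```python
WASTE_KG_PER_WORKER_MONTH = 7      # from the estimate above
LANDFILL_M3_PER_TON = 1.4          # landfill space saved per ton recycled
TREES_PER_TON = 17                 # pulpwood trees saved per ton recycled

def office_waste_tons(workers: int, months: int = 12) -> float:
    """Tons of wastepaper an office generates at 7 kg per worker per month."""
    return workers * months * WASTE_KG_PER_WORKER_MONTH / 1000

tons = office_waste_tons(100)                  # a hypothetical 100-person office
print(round(tons, 1))                          # 8.4 tons per year
print(round(tons * LANDFILL_M3_PER_TON, 1))    # 11.8 cu m of landfill saved
print(round(tons * TREES_PER_TON))             # 143 trees saved
```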


Glass

Scrap glass taken from the glass manufacturing process, called cullet, has been internally recycled for years. The scrap glass is economical to use as a raw material because it melts at lower temperatures than other raw materials, thus saving fuel and operating costs.

Glass that is to be recycled must be relatively free from impurities and sorted by color. Glass containers are the most commonly recycled form of glass, and their colors are flint (clear), amber (brown), and green. Other glass products, such as window glass, pottery, and cooking utensils, are considered contaminants because their compositions differ from that of container glass. The recycled glass is melted in a furnace and formed into new products.

Glass containers make up 90 percent of the total glass used in the United States. The 1994 recycling rate for glass was about 33 percent. Other uses for recycled glass include glass art and decorative tiles. Cullet mixed with asphalt forms a paving material called glassphalt.


Chemicals and Hazardous Waste

Household hazardous wastes include drain cleaners, oven cleaners, window cleaners, disinfectants, motor oil, paints, paint thinners, and pesticides. Most municipalities ban hazardous waste from the regular trash. Periodically, citizens are alerted that they can take their hazardous waste to a collection point where trained workers sort it, recycle what they can, and package the remainder in special leak-proof containers called lab packs, for safe disposal. Typical materials recycled from the collection drives are motor oil, paint, antifreeze, and tires.

Business and industry have made much progress in reducing both the hazardous waste they generate and its toxicity. Although large quantities of chemical solvents are used in cleaning processes, technology has been developed to clean and reuse solvents that used to be discarded. Even the vapors evaporated from the process are recovered and put back into the recycled solvent. Some processes that formerly used solvents no longer require them.


Nuclear Waste

Certain types of nuclear waste can be recycled, while other types are considered too dangerous to recycle. Low-level wastes include radioactive material from research activities, medical wastes, and contaminated machinery from nuclear reactors. Nickel is the major metal of construction in the nuclear power field and much of it is recycled after surface contamination has been removed.

High-level wastes come from the reprocessing of spent fuel (partially depleted reactor fuel) and from the processing of nuclear weapons. These wastes emit gamma radiation, which can cause birth defects, disease, and death. High-level nuclear waste is so toxic it is not normally recycled. Instead, it is fused into inert glass tubes encased in stainless steel cylinders, which are then stored underground.

Spent fuel can be reprocessed and recycled into new fuel elements, although fuel reprocessing was banned in the United States in 1977 and has never been resumed for legal, political, and economic reasons. However, spent fuel is being reprocessed in other countries such as Japan, Russia, and France. Spent fuel elements in the United States are kept in storage pools at each reactor site.



Rare materials, such as gold and silver, are recycled because acquiring new supplies is expensive. Other materials may not be as expensive to replace, but they are recycled to conserve energy, reduce pollution, conserve land, and to save money.


Resource Conservation

Recycling conserves natural resources by reducing the need for new material. Some natural resources are renewable, meaning they can be replaced, and some are not. Paper, corrugated board, and other paper products come from renewable timber sources. Trees harvested to make those products can be replaced by growing more trees. Iron and aluminum come from nonrenewable ore deposits. Once a deposit is mined, it cannot be replaced.


Energy Conservation

Recycling saves energy by reducing the need to process new material, which usually requires more energy than the recycling process. The amount of energy saved in recycling one aluminum can is equivalent to the energy in the gasoline that would fill half of that same can. To make an aluminum can from recycled metal takes only 5 percent of the total energy needed to produce the same can from unrecycled materials, a 95 percent energy savings. Recycled paper and paperboard require 75 percent less energy to produce than new products. Significant energy savings result from the recycling of steel and glass as well.
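As a sketch of the arithmetic (only the percentages come from the text; the 100 MJ batch figure is a made-up placeholder):

```python
# Fraction of primary-production energy saved by using recycled stock,
# from the figures above.
ENERGY_SAVINGS = {"aluminum": 0.95, "paper": 0.75}

def recycled_energy(material: str, primary_energy_mj: float) -> float:
    """Energy needed via recycling, given the energy the same product
    would require from virgin material."""
    return primary_energy_mj * (1 - ENERGY_SAVINGS[material])

print(round(recycled_energy("aluminum", 100.0), 1))  # 5.0 MJ instead of 100
print(round(recycled_energy("paper", 100.0), 1))     # 25.0 MJ instead of 100
```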


Pollution Reduction

Recycling reduces pollution because recycling a product creates less pollution than producing a new one. For every ton of newspaper recycled, 7 fewer kg (16 lb) of air pollutants are pumped into the atmosphere. Recycling can also reduce pollution by substituting safer materials for those whose manufacture pollutes. Some countries still use chlorofluorocarbons (CFCs) to manufacture foam products such as cups and plates. Many scientists suspect that CFCs harm the atmosphere’s protective layer of ozone. Using recycled plastic for those products instead eliminates the creation of harmful CFCs.


Land Conservation

Recycling saves valuable landfill space, land that must be set aside for dumping trash, construction debris, and yard waste (see Solid Waste Disposal: Landfill). In the United States, each person on average discards almost a ton of municipal solid waste (MSW) per year. MSW is raw, untreated garbage of the kind discarded by homes and small businesses. Waste from industry and agriculture normally is not part of MSW, but construction and demolition wastes are. The United States has the highest MSW discard level of any country in the world.

Landfills fill up quickly and acceptable sites for new ones are difficult to find because of objections by neighbors to noise and smells, and the hazard of leaks into underground water supplies. The two major ways to reduce the need for new landfills are to generate less initial waste and to recycle products that would normally be considered waste.

In 1994 about 6.8 million metric tons (7.5 million U.S. tons) of food and yard debris were composted in the United States, accounting for about one-sixth of the overall 23.6 percent recycling rate. The combined effort of reducing waste and recycling resulted in 41 million fewer metric tons (45 million U.S. tons) of material going to landfills.

Solid waste can also be burned instead of buried in the ground. Typically, waste-to-energy (WTE) facilities burn trash to heat water for steam-turbine electrical generators. This WTE recycling keeps another 16 percent of municipal solid waste out of the landfills.


Economic Savings

Recycling in the short term is not always profitable or even a break-even financial operation. Most experts contend, however, that the economic consequences of recycling are positive in the long term. Recycling will save money if potential landfill sites are put to more productive uses and if the number of pollution-related illnesses declines as a result.



People have recycled materials throughout history. Metal tools and weapons have been melted, reformed, and reused since they came into use thousands of years ago. The iron, steel, and paper industries have almost always used recycled materials. Recycling rates were modest in the United States up through the 1960s, although rates increased during World War II (1939-1945). Since the 1960s, recycling has steadily increased. Annual recycling in the United States rose from 5.35 million metric tons (5.9 million U.S. tons) in 1960 to 44.7 million metric tons (49.3 million U.S. tons) in 1994. In 1930 about 7 percent of municipal solid waste was recycled. By 1994 that amount had climbed to 23.6 percent. Experts predict the MSW recycling rate will reach 30 percent by the year 2000.

European countries have a long history of recycling and, in some cases, stiff requirements. In 1991 the German parliament approved legislation setting recycling targets of 80 to 90 percent for packaging materials and banned the sale of products from companies that do not cooperate. France has set specific recycling goals. Other countries with significant overall recycling rates include Spain at 29 percent, Switzerland at 28 percent, and Japan at 23 percent.


Steam, water in vapor state, used in the generation of power and on a large scale in many industrial processes. The techniques of generating and using steam, therefore, are important components of engineering technology. The generation of electricity is largely accomplished by first generating steam, whether the heat is produced by burning coal or gas or by the nuclear fission of uranium (see Nuclear Energy; Steam Engine; Turbine). Steam also is still much in use for space heating purposes (see Heating, Ventilating, and Air Conditioning), and it propels most of the world's naval vessels and commercial ships (see Ships and Shipbuilding).

The boiling point of water at sea-level atmospheric pressure (760 torr or 14.7 lb/sq in) is about 100° C (212° F). At this temperature, the addition of 970.3 Btu of heat will convert 0.454 kg (1 lb) of water to 0.454 kg of steam at the same temperature. For water under pressure, the boiling point rises with increasing pressure up to 218 atmospheres (165,000 torr or 3,200 lb/sq in). At that pressure, water boils at a temperature of 374° C (705° F), its critical point. Beyond the critical pressure and temperature there is no distinction between liquid water and steam.
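The latent-heat figure above translates into a simple calculation (a sketch; the 1 Btu/(lb·°F) specific heat of liquid water is a standard value but is an assumption not stated in the text, as is the 70° F starting temperature):

```python
LATENT_HEAT_BTU_PER_LB = 970.3     # from the figure quoted above
SPECIFIC_HEAT_BTU_PER_LB_F = 1.0   # liquid water, assumed (standard value)
BOILING_POINT_F = 212.0            # at sea-level atmospheric pressure

def heat_to_steam_btu(pounds: float, start_temp_f: float = 70.0) -> float:
    """Btu to bring water from start_temp_f to the boil and vaporize it."""
    sensible = pounds * SPECIFIC_HEAT_BTU_PER_LB_F * (BOILING_POINT_F - start_temp_f)
    latent = pounds * LATENT_HEAT_BTU_PER_LB
    return sensible + latent

print(heat_to_steam_btu(1.0))   # 142 Btu of heating plus 970.3 Btu of vaporization
```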

Pure steam is a dry and invisible vapor. In many cases, however, when water is boiling, a quantity of small droplets of water is taken up with the steam, and the resulting mixture is visible as a white vapor. A similar effect occurs when dry steam is exhausted into the comparatively cool atmosphere. Some of the steam cools and condenses, forming the familiar white vapor seen when a kettle boils on a stove. Such steam is said to be wet.

Steam that is heated to the exact boiling point corresponding to the existing pressure is called saturated steam. Heating steam beyond this temperature produces so-called superheated steam. Superheating also occurs if saturated steam is compressed or if saturated steam is throttled by passing the steam through a valve from a high-pressure vessel to a low-pressure vessel. Throttling causes the temperature of the steam to drop somewhat, but the temperature of the throttled steam is still higher than that of saturated steam at the corresponding pressure. Steam in its superheated state is generally used in modern power generation systems.

Energy Supply, World



Energy Supply, World, combined resources by which the nations of the world attempt to meet their energy needs. Energy is the basis of industrial civilization; without energy, modern life would cease to exist. During the 1970s the world began a painful adjustment to the vulnerability of energy supplies. In the long run, conserving energy resources may provide the time needed to develop new sources of energy, such as hydrogen fuel cells, or to further develop alternative energy sources, such as solar energy and wind energy. While this development occurs, however, the world will continue to be vulnerable to disruptions in the supply of oil, which, after World War II (1939-1945), became the most favored energy source.



Wood was the first and, for most of human history, the major source of energy. It was readily available, because extensive forests grew in many parts of the world and the amount of wood needed for heating and cooking was relatively modest. Certain other energy sources, found only in localized areas, were also used in ancient times: asphalt, coal, and peat from surface deposits and oil from seepages of underground deposits.

This situation changed when wood began to be used during the Middle Ages to make charcoal. The charcoal was heated with metal ore to break up chemical compounds and free the metal. As forests were cut and wood supplies dwindled at the onset of the Industrial Revolution in the mid-18th century, charcoal was replaced by coke (produced from coal) in the reduction of ores. Coal, which also began to be used to drive steam engines, became the dominant energy source as the Industrial Revolution proceeded.


Growth of Petroleum Use

Although for centuries petroleum (also known as crude oil) had been used in small quantities for purposes as diverse as medicine and ship caulking, the modern petroleum era began when a commercial well was brought into production in Pennsylvania in 1859. The oil industry in the United States expanded rapidly as refineries sprang up to make oil products from crude oil. The oil companies soon began exporting their principal product, kerosene—used for lighting—to all areas of the world. The development of the internal-combustion engine and the automobile at the end of the 19th century created a vast new market for another major product, gasoline. A third major product, heavy oil, began to replace coal in some energy markets after World War II.

The major oil companies, which are based principally in the United States, initially found large oil supplies in the United States. As a result, oil companies from other countries—especially Britain, the Netherlands, and France—began to search for oil in many parts of the world, especially the Middle East. The British brought the first field there (in Iran) into production just before World War I (1914-1918). During World War I, the U.S. oil industry produced two-thirds of the world’s oil supply from domestic sources and imported another one-sixth from Mexico. At the end of the war and before the discovery of the productive East Texas fields in 1930, however, the United States, with its reserves strained by the war, became a net oil importer for a few years.

During the next three decades, with occasional federal support, the U.S. oil companies were enormously successful in expanding in the rest of the world. By 1955 the five major U.S. oil companies produced two-thirds of the oil for the world oil market (not including North America and the Soviet bloc). Two British-based companies produced almost one-third of the world’s oil supply, and the French produced a mere one-fiftieth. The next 15 years were a period of serenity for energy supplies. The seven major U.S. and British oil companies provided the world with increasing quantities of cheap oil. The world price was about a dollar a barrel, and during this time the United States was largely self-sufficient, with its imports limited by a quota.


Formation of OPEC

Two series of events coincided to change this secure supply of cheap oil into an insecure supply of expensive oil. In 1960, enraged by unilateral cuts in oil prices by the seven big oil companies, the governments of the major oil-exporting countries formed the Organization of Petroleum Exporting Countries, or OPEC. OPEC’s goal was to try to prevent further cuts in the price that the member countries—Venezuela and four countries around the Persian Gulf—received for oil. They succeeded, but for a decade they were unable to raise prices. In the meantime, increasing oil consumption throughout the world, especially in Europe and Japan, where oil displaced coal as a primary source of energy, caused an enormous expansion in the demand for oil products.


The Energy Crisis

The year 1973 brought an end to the era of secure, cheap oil. In October, as a result of the Arab-Israeli War, the Arab oil-producing countries cut back oil production and embargoed oil shipments to the United States and the Netherlands. Although the Arab cutbacks represented a loss of less than 7 percent in world supply, they created panic on the part of oil companies, consumers, oil traders, and some governments. Wild bidding for crude oil ensued when a few producing nations began to auction off some of their oil. This bidding encouraged the OPEC nations, which now numbered 13, to raise the price of all their crude oil to a level as high as eight times that of a few years earlier. The world oil scene gradually calmed, as a worldwide recession brought on in part by the higher oil prices trimmed the demand for oil. In the meantime, most OPEC governments took over ownership of the oil fields in their countries.

In 1978 a second oil crisis began when, as a result of the revolution that eventually drove the Shah of Iran from his throne, Iranian oil production and exports dropped precipitously. Because Iran had been a major exporter, consumers again panicked. A replay of 1973 events, complete with wild bidding, again forced up oil prices during 1979. The outbreak of war between Iran and Iraq in 1980 gave a further boost to oil prices. By the end of 1980 the price of crude oil stood at 19 times what it had been just ten years earlier.

The very high oil prices again contributed to a worldwide recession and gave energy conservation a big push. As oil demand slackened and supplies increased, the world oil market slumped. Significant increases in non-OPEC oil supplies, such as those in the North Sea, Mexico, Brazil, Egypt, China, and India, pushed oil prices even lower. Production in the Soviet Union reached 11.42 million barrels per day by 1989, accounting for 19.2 percent of world production in that year.

Despite the low world oil prices that have prevailed since 1986, concern over disruption has continued to be a major focus of energy policy in the industrialized countries. The short-term increases in prices following Iraq’s invasion of Kuwait in 1990 reinforced this concern. Owing to its vast reserves, the Middle East will continue to be the major source of oil for the foreseeable future. However, new discoveries in the Caspian Sea region suggest that countries such as Kazakhstan may become major sources of petroleum in the 21st century.


Current Status

In the 1990s, oil production by non-OPEC countries remained strong and production by OPEC countries rebounded. The result at the end of the 20th century was a world oil surplus and prices (when adjusted for inflation) that were lower than in 1972.

Experts are uncertain about future oil supplies and prices. Low prices have spurred greater oil consumption, and experts question how long world petroleum reserves can keep pace with increased demand. Many of the world’s leading petroleum geologists believe the world oil supply will peak around 80 million barrels per day between 2010 and 2020. (In 1998 world consumption was approximately 70 million barrels per day.) On the other hand, many economists believe that even modestly higher oil prices might lead to greater supply, since the oil companies would then have the economic incentive to exploit less accessible oil deposits.
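The tension between these projections can be made concrete with a back-of-envelope calculation. The Python sketch below compounds the 1998 consumption figure of roughly 70 million barrels per day forward until it reaches the projected 80-million-barrel peak; the 1 percent annual growth rate is a hypothetical illustration, not a figure from the text.

```python
# Back-of-envelope check on the figures above: world consumption was roughly
# 70 million barrels per day in 1998, and many geologists expect supply to
# peak around 80 million barrels per day. Compounding demand forward shows
# how quickly the gap closes. The 1 percent annual growth rate is a
# hypothetical illustration, not a figure from the text.

def year_demand_reaches(start_year, start_demand, peak_demand, annual_growth):
    """Return the first year in which demand meets or exceeds peak_demand."""
    year, demand = start_year, start_demand
    while demand < peak_demand:
        demand *= 1 + annual_growth
        year += 1
    return year

print(year_demand_reaches(1998, 70.0, 80.0, 0.01))  # -> 2012
```

Under that assumed rate, demand would meet the projected peak around 2012, inside the 2010-2020 window the geologists describe; a faster growth rate would close the gap sooner.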

Natural gas may be increasingly used in place of oil for applications such as power generation and transportation. One reason is that world reserves of natural gas have doubled since 1976, in part because of the discovery of major deposits of natural gas in Russia and in the Middle East. New facilities and pipelines are being constructed to help process and transport this natural gas from production wells to consumers.



Petroleum (crude oil) and natural gas are found in commercial quantities in sedimentary basins in more than 50 countries in all parts of the world. The largest deposits are in the Middle East, which contains more than half the known oil reserves and almost one-third of the known natural-gas reserves. The United States contains only about 2 percent of the known oil reserves and 3 percent of the known natural-gas reserves.



Geologists and other scientists have developed techniques that indicate the possibility of oil or gas being found deep in the ground. These techniques include taking aerial photographs of special surface features, sending shock waves through the earth and reflecting them back into instruments, and measuring the earth’s gravity and magnetic field with sensitive meters. Nevertheless, the only method by which oil or gas can be found is by drilling a hole into the reservoir. In some cases oil companies spend many millions of dollars drilling in promising areas, only to find dry holes. For a long time, most wells were drilled on land, but after World War II drilling commenced in shallow water from platforms supported by legs that rested on the sea bottom. Later, floating platforms were developed that could drill at water depths of 1,000 m (3,300 ft) or more. Large oil and gas fields have been found offshore: in the United States, mainly off the Gulf Coast; in Europe, primarily in the North Sea; in Russia, in the Barents Sea and the Kara Sea; and off Newfoundland and Brazil. Most major finds in the future may be offshore.



As crude oil or natural gas is produced from an oil or gas field, the pressure in the reservoir that forces the material to the surface gradually declines. Eventually, the pressure will decline so much that the remaining oil or gas will not migrate through the porous rock to the well. When this point is reached, most of the gas in a gas field will have been produced, but less than one-third of the oil will have been extracted. Part of the remaining oil can be recovered by using water or carbon dioxide gas to push the oil to the well, but even then, one-fourth to one-half of the oil is usually left in the reservoir. In an effort to extract this remaining oil, oil companies have begun to use chemicals to push the oil to the well, or to use fire or steam in the reservoir to make the oil flow more easily. New techniques that allow operators to drill horizontally, as well as vertically, into very deep structures have dramatically reduced the cost of finding natural gas and oil supplies.
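The recovery fractions quoted above can be tied together in a small worked example. This Python sketch uses hypothetical round numbers (a 1-million-barrel field, 30 percent primary recovery) together with the text's one-fourth-to-one-half range for oil left behind even after enhanced recovery.

```python
# Illustrative arithmetic for the recovery figures above, using hypothetical
# round numbers: primary production recovers about 30 percent of the oil in
# place, and even enhanced recovery leaves one-fourth to one-half of the
# original oil behind.

def recovery_summary(oil_in_place, primary_pct, left_behind_pct):
    """Return barrels recovered by primary production, by enhanced
    recovery, and in total, given whole-number percentages."""
    primary = oil_in_place * primary_pct // 100
    total = oil_in_place * (100 - left_behind_pct) // 100
    return primary, total - primary, total

# A hypothetical field with 1 million barrels originally in place:
print(recovery_summary(1_000_000, 30, 25))  # (300000, 450000, 750000)
print(recovery_summary(1_000_000, 30, 50))  # (300000, 200000, 500000)
```

Even in the better case, a quarter of the oil in place never reaches the surface, which is why techniques such as horizontal drilling matter so much to field economics.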

Crude oil is transported to refineries by pipelines, barges, or giant oceangoing tankers. Refineries contain a series of processing units that separate the different constituents of the crude oil by heating them to different temperatures, chemically modifying them, and then blending them to make final products. These final products are principally gasoline, kerosene, diesel oil, jet fuel, home heating oil, heavy fuel oil, lubricants, and feedstocks, or starting materials, for petrochemicals.

Natural gas is transported, usually by pipelines, to customers who burn it for fuel or, in some cases, make petrochemicals from chemicals extracted, or “stripped,” from it. Natural gas can be liquefied at very low temperatures and transported in special ships. This method is much more costly than transporting oil by tanker. Oil and natural gas compete in a number of markets, especially in generating heat for homes, offices, factories, and industrial processes.


Pollution Problems

In its early days, the oil industry generated considerable environmental pollution. Through the years, however, under the dual influences of improved technology and more stringent regulations, it has become much cleaner. The effluents from refineries have decreased greatly and, although well blowouts still occur, new technology has tended to make them relatively rare. The policing of the oceans, on the other hand, is much more difficult. Oceangoing ships are still a major source of oil spills. In 1990 the Congress of the United States passed legislation requiring tankers to be double hulled by the end of the decade.

Another source of pollution connected with the oil industry is the sulfur in crude oil. Regulations of national and local governments restrict the amount of sulfur dioxide that can be discharged by factories and utilities burning fuel oil. Because removing sulfur is expensive, however, regulations still allow some sulfur dioxide to be discharged into the air.

Many scientists believe that another potential environmental problem from refining and burning large amounts of oil and other fossil fuels (such as coal and natural gas) occurs when carbon dioxide (a by-product of the burning of fossil fuels), methane (which exists in natural gas and is also a by-product of refining petroleum), and other by-product gases accumulate in the atmosphere. These gases are known as greenhouse gases, because they trap some of the energy from the Sun that penetrates Earth’s atmosphere. This energy, trapped in the form of heat, maintains Earth at a temperature that is hospitable to life. Certain amounts of greenhouse gases occur naturally in the atmosphere. However, the immense quantities of petroleum, coal, and other fossil fuels burned during the world’s rapid industrialization over the last 200 years are a contributing source of higher levels of carbon dioxide in the atmosphere. During that time period, these levels have increased by about 28 percent. This increase in atmospheric carbon dioxide, coupled with the continuing loss of the world’s forests (which absorb carbon dioxide), has led many scientists to predict a rise in global temperature. This increase in global temperature might disrupt weather patterns, disrupt ocean currents, lead to more violent storms, and create other environmental problems.

In 1992 representatives of over 150 countries convened in Rio de Janeiro, Brazil, and agreed on the need to reduce the world’s emissions of greenhouse gases. In 1997 world delegations again convened, this time in Kyōto, Japan. During the Kyōto meeting, representatives of 160 nations signed an agreement known as the Kyōto Protocol, which would require 38 industrialized nations to limit emissions of greenhouse gases to levels that are an average of 5 percent below the emission levels of 1990.
In order to reduce their fossil fuel emissions to achieve these levels, the industrialized nations would have to shift their energy mix toward energy sources that do not produce as much carbon dioxide, such as natural gas, or to alternative energy sources, such as hydroelectric energy, solar energy, wind energy, or nuclear energy. While the governments of some industrialized nations have ratified the Kyōto Protocol, others have not, including that of the United States.



Oil shale, heavy oil deposits, and tar sands are the most prevalent forms of petroleum found in the world. Reserves of these sources are many times more abundant than the world’s total known reserves of crude oil. Because of the high cost of converting shale oil and tar sands into usable petroleum products, however, only a small percentage of the available material is processed commercially. An industry to make oil products from tar sands has been started in Canada, and Venezuela is looking at the prospects of developing the vast reserves of tar sands in its Orinoco River basin. Nevertheless, the quantity of oil products produced from these two raw materials is small compared with the total production of conventional crude oil. Until world petroleum prices increase, the quantity of oil produced from oil shale and tar sands will likely remain small relative to the production of conventional crude oil.


Coal

Coal is a general term for a wide variety of solid materials that are high in carbon content. Most coal is burned by electric utility companies to produce steam to turn their generators. Some coal is used in factories to provide heat for buildings and industrial processes. A special, high-quality coal is turned into metallurgical coke for use in making steel.



The world’s coal reserves are vast. The amount of coal (as measured by energy content) that is technically and economically recoverable under present conditions is five times as large as the reserves of crude oil. Just four regions contain three-fourths of the world’s recoverable coal reserves: the United States, 24 percent; the countries of the former Soviet Union, 24 percent; China, 11 percent; and Western Europe, 10 percent.


Current Trends

In industrialized countries, the greater convenience and lower costs of oil and gas in the earlier 20th century virtually forced coal out of the market for heating homes and offices and driving locomotives. Oil and gas also ate heavily into the industrial market for coal. Only an expanding utility market enabled coal output in the United States, for example, to remain relatively constant between 1948 and 1973. Even in the utility market, as oil and gas captured a greater share, coal’s contribution to the total energy picture dropped dramatically—in the United States, for instance, from about one-half to less than one-fifth. The dramatic jumps in oil prices after 1973, however, gave coal a major cost advantage for utilities and large industrial customers, and coal began to recapture some of its lost markets. In contrast to the industrialized countries, developing countries that have large coal reserves (such as China and India) continue to use coal for industrial and heating purposes.

The average price of coal has remained virtually unchanged since the early 1980s and is forecast to decline in the early part of the 21st century. However, in industrialized countries the need to comply with stricter environmental regulations has made burning coal more costly.


Pollution Problems

Despite coal’s relative cheapness and huge reserves, the growth in the use of coal since 1973 has been much less than expected, because coal is associated with many more environmental problems than is oil. Underground mining can result in black lung disease for miners, the sinking of the land over mines, and the drainage of acid into water tables. Surface mining requires careful reclamation, or the unrestored land will remain scarred and unproductive. In addition, the burning of coal emits sulfur dioxide, particulates, nitrogen oxides, and other impurities. Acid rain—rainfall and other forms of precipitation with a relatively high acidity that is damaging lakes and some forests in many regions—is believed to be caused in part by such emissions (see Air Pollution).

The U.S. Clean Air Act of 1970 (revised in 1977 and 1990) provides the federal legal basis for controlling air pollution. This legislation has significantly reduced emissions of sulfur oxides—known as acid gases. For example, the Clean Air Act requires facilities such as coal-burning power plants to burn low-sulfur coal. In the 1990s concern over the possible warming of the planet as a result of the greenhouse effect caused many governments to consider policies to reduce the carbon dioxide emissions produced by burning coal, oil, and natural gas. During the world’s rapid industrialization through the 19th and 20th centuries, levels of carbon dioxide in the atmosphere increased approximately 28 percent from pre-industrial levels.

Solving these problems is costly, and who should pay is a matter of controversy. As a result, coal consumption may continue to grow more slowly than would otherwise be expected. The vast coal reserves, the improved technologies to reduce pollution, and the further development of coal gasification (see Gases, Fuel) still indicate, however, that the market for coal will increase in coming years.


Synthetic Fuels

Synthetic fuels do not occur in nature but are made from natural materials. Gasohol, for example, is a mixture of gasoline and alcohol made from sugars produced by living plants. Although making various types of fuel from coal is possible, the large-scale production of fuel from coal will likely be limited by high costs and pollution problems, some of which are not yet known. The manufacture of alcohol fuels in large quantities will likely be restricted to regions, such as parts of Brazil, where a combination of low-cost labor and land, plus a long growing season, makes it economical. Thus, synthetic fuels are unlikely to make an important contribution to the world’s energy supply anytime soon.


Nuclear Energy

Nuclear energy is generated by the splitting, or fissioning, of atoms of uranium or heavier elements. The fission process releases heat, which is used to produce steam to drive a turbine to generate electricity. The operation of a nuclear reactor and the related electricity-generating equipment is only one part of an interconnected set of activities. The production of a reliable supply of electricity from nuclear fission requires mining, milling, and transporting uranium; enriching uranium (increasing the percentage of the uranium isotope U-235) and packing it in appropriate form; building and maintaining the reactor and associated generating equipment; and treating and disposing of spent fuel. These activities require extremely sophisticated and interactive industrial processes and many specialized skills.



Britain took an early lead in developing nuclear power. By the mid-1950s, several nuclear reactors were producing electricity in that country. The first nuclear reactor to be connected to an electricity distribution network in the United States began operation in 1957 at Shippingport, Pennsylvania. Six years later, the first order was placed for a commercial nuclear power plant to be built without a direct subsidy from the federal government. This order marked the beginning of an attempt to convert rapidly the world’s electricity-generating systems from reliance on fossil fuels to reliance on nuclear energy. By 1970, 90 nuclear power plants were operating in 15 countries. In 1980, 253 nuclear power plants were operating in 22 countries. However, the attempt to move from fossil fuels to nuclear energy faltered because of rapidly increasing costs, regulatory delays, declining demand for electricity, and a heightened concern for safety.


Safety Problems

Questions about the safety and economy of nuclear power created perhaps the most emotional battle fought over energy. As the battle heated during the late 1970s, nuclear advocates argued that no realistic alternative existed to increased reliance on nuclear power. They recognized that some problems remain but maintained that solutions would be found. Nuclear opponents, on the other hand, emphasized a number of unanswered questions about the environment: What are the effects of low-level radiation over long periods? What is the likelihood of a major accident at a nuclear power plant? What would be the consequences of such an accident? How can nuclear power’s waste products, which will remain dangerous for centuries, be permanently isolated from the environment? These safety questions helped cause changes in specifications for and delays in the construction of nuclear power plants, further driving up costs. They also helped create a second controversy: Is electricity from nuclear power plants less costly, equally costly, or more costly than electricity from coal-fired plants? Despite rapidly escalating oil and gas prices in the late 1970s and early 1980s, these political and economic problems caused an effective moratorium in the United States on new orders for nuclear power plants. This moratorium took effect even before the 1979 near meltdown (melting of the nuclear fuel rods) at the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania, and the 1986 partial meltdown at the Chernobyl’ plant north of Kyiv in Ukraine (see Chernobyl’ Accident). The latter accident caused some fatalities and cases of radiation sickness, and it released a cloud of radioactivity that traveled widely across the northern hemisphere.


Current Status

In 1998 a total of 437 nuclear plants operated worldwide. Another 35 reactors were under construction. Eighteen countries generate at least 20 percent of their electricity from nuclear power. The largest nuclear power industries are located in the United States (107 reactors), France (59), Japan (54), Britain (35), Russia (29), and Germany (20). In the United States, no new reactors have been ordered for more than 20 years. Public opposition, high construction costs, strict building and operating regulations, and high costs for waste disposal make nuclear power plants much more expensive to build and operate than plants that burn fossil fuels.

In some industrialized countries, the electric power industry is being restructured to break up monopolies (the provision of a commodity or service by a single seller or producer) at the generation level. Because this trend is pressuring nuclear plant owners to cut operating expenses and become more competitive, the nuclear power industry in the United States and other western countries may continue to decline if existing nuclear power plants are unable to adapt to changing market conditions.

Asia is widely viewed as the only likely growth area for nuclear power in the near future. Japan, South Korea, Taiwan, and China all had plants under construction at the end of the 20th century. Conversely, a number of European nations were rethinking their commitments to nuclear power.


Sweden

Sweden’s political parties have committed to phasing out nuclear power by 2010, after Swedish citizens voted in 1980 against future development of this energy source. However, industry is challenging the policy in court. In addition, critics argue that Sweden cannot fulfill its commitment to reducing emissions of greenhouse gases without relying on nuclear power.


France

France generates 80 percent of its electricity from nuclear power. However, it has canceled several planned reactors and may replace aging nuclear plants with fossil-fuel plants for environmental reasons. As a result, the government-owned electricity utility, Electricité de France, plans to diversify the country’s electricity-generating sources.


Germany

The German government announced in 1998 a plan to phase out nuclear power. However, as in Sweden, nuclear plant owners may take the government to court to seek compensation for plants shut down before the end of their operating lives.


Japan

In Japan, several accidents at nuclear facilities in the mid-1990s have undercut public support for nuclear power. Japan’s growing stockpile of plutonium and its shipments of spent nuclear fuel to Europe have drawn international criticism.


China

China, which currently operates only three nuclear power plants, has plans to expand its nuclear capabilities. However, whether China will be able to obtain sufficient financing or whether it can develop the necessary skilled work force to expand is uncertain.


Eastern Europe

A number of eastern European countries—including Russia, Ukraine, Bulgaria, the Czech Republic, Hungary, Lithuania, and Slovakia—generate electricity from Soviet-designed nuclear reactors that have various safety flaws. Some of these reactors have the same design as the Chernobyl reactor that exploded in 1986. The United States and other western countries are working to address these design problems and to improve operations, maintenance, and training at these plants.


Solar Energy

Solar energy does not refer to a single energy technology but rather covers a diverse set of renewable energy technologies that are powered by the Sun’s heat. Some solar energy technologies, such as heating with solar panels, utilize sunlight directly. Other types of solar energy, such as hydroelectric energy and fuels from biomass (wood, crop residues, and dung), rely on the Sun’s ability to evaporate water and grow plant material, respectively. The common feature of solar energy technologies is that, unlike oil, gas, coal, and present forms of nuclear power, solar energy is inexhaustible. Solar energy can be divided into three main groups—heating and cooling applications, electricity generation, and fuels from biomass.


Heating and Cooling

The Sun has been used for heating for centuries. The Mesa Verde cliff dwellings in Colorado were constructed with rock projections that provide shade from the high (and hot) summer Sun but allow the rays of the lower winter Sun to penetrate. Today a design with few or no moving parts that takes advantage of the Sun is called passive solar heating. Beginning in the late 1970s, architects increasingly became familiar with passive solar techniques. In the future, more and more new buildings will be designed to capture the Sun’s winter rays and keep out the summer rays.

Active solar heating and solar hot-water heating are variations on one theme, differing principally in cost and scale. A typical active solar-heating unit consists of tubes installed in panels that are mounted on a roof. Water (or sometimes another fluid) flowing through the tubes is heated by the Sun and is then used as a source of hot water and heat for the building. Although the number of active solar-heating installations has grown rapidly since the 1970s, the industry has encountered simple installation and maintenance problems, involving such commonplace occurrences as water leakage and air blockage in the tubes. Solar cooling requires a higher technology installation in which a fluid is cooled by being heated to an intermediate temperature so that it can be used to drive a refrigeration cycle. To date, relatively few commercial installations have been made.


Generation of Electricity

Electricity can be generated by a variety of technologies that ultimately depend on the effects of solar radiation. Windmills and waterfalls (themselves very old sources of mechanical energy) can be used to turn turbines to generate electricity. The energies of wind and falling water are considered forms of solar energy, because the Sun’s heating power creates wind and replenishes the water in rivers and streams. Most existing windmill installations are relatively small, containing ten or more windmills in a grid configuration that takes advantage of wind shifts. In contrast, most electricity from hydroelectric installations comes from giant dams. Many sites suitable for large dams have already been tapped, especially in the industrialized nations. However, during the 1970s small dams used years earlier for mechanical energy were retrofitted to generate electricity.

Large-scale hydroelectric projects are still being pursued in many developing countries. The simplest form of solar-powered electricity generation is the use of an array of collectors that heat water to produce steam to turn a turbine. Several of these facilities are in existence.

Other sources of Sun-derived electricity involve high-technology options that remain unproven commercially on a large scale. Photovoltaic cells (see Photoelectric Effect; Solar Energy), which convert sunlight directly into electricity, are currently being used in remote locations to power orbiting space satellites, gates at unattended railroad crossings, and irrigation pumps. Progress is needed to lower costs before widespread use of photovoltaic cells is possible. The commercial development of still other methods seems far in the future. Ocean thermal energy conversion (OTEC) would generate electricity on offshore platforms by exploiting the temperature difference between warm surface water and cold seawater pumped up from great depths to drive a turbine. Also still highly speculative is the notion of using space satellites to beam electricity via microwaves down to Earth.


Fuels from Biomass

Fuels from biomass encompass several different forms, including alcohol fuels (mentioned earlier), dung, and wood. Wood and dung are still major fuels in some developing countries, and high oil prices have caused a resurgence of interest in wood in industrialized countries. Researchers are giving increasing attention to the development of so-called energy crops (perennial grasses and trees grown on agricultural land). There is some concern, however, that heavy reliance on agriculture for energy could drive up prices of both food and land.


Current Status

The total amount of solar energy now being used may never be accurately estimated, because some sources are not recorded. In the early 1980s, however, two main sources of solar energy, hydroelectric energy and biomass, contributed more than twice as much as nuclear energy to the world energy supply. Nevertheless, these two sources are limited by the availability of dam sites and the availability of land to grow trees and other plant materials, so the future development of solar energy will depend on a broad range of technological advances.

The potential of solar energy, with the exception of hydroelectricity, will remain underutilized well beyond the year 2000, because solar energy is still much more expensive than energy derived from fossil fuels. The long-term outlook for solar energy depends heavily on whether the prices of fossil fuels increase and whether environmental regulations become stricter. For example, stricter environmental controls on burning fossil fuels may increase coal and oil prices, making solar energy a less expensive energy source in comparison.


Geothermal Energy

Geothermal energy, studied in the science known as geothermics, is based on the fact that the earth grows hotter the deeper one drills below the surface. Water and steam circulating through deep hot rocks, if brought to the surface, can be used to drive a turbine to produce electricity or can be piped through buildings as heat. Some geothermal energy systems use naturally occurring supplies of geothermal water and steam, whereas other systems pump water down to the deep hot rocks. Although theoretically limitless, in most habitable areas of the world this subterranean energy source lies so deep that drilling holes to tap it is very expensive.


Energy Conservation

In addition to the development of alternative energy sources, energy supplies can be extended by conservation, the planned management of currently available resources. Three types of possible energy conservation practices may be described. The first type is curtailment, that is, doing without—for example, closing factories to reduce the amount of power consumed or cutting back on travel to reduce the amount of gasoline burned. The second type is overhaul, that is, changing the way people live and the way goods and services are produced—for example, slowing further suburbanization of society, using less energy-intensive materials in production processes, and decreasing the amount of energy consumed by certain products (such as automobiles). The third type involves the more efficient use of energy, that is, adjusting to higher energy costs—for example, investing in cars that go farther per unit of fuel, capturing waste heat in factories and reusing it, and insulating houses. This third option requires the least drastic changes in lifestyle, so governments and societies most commonly adopt it over the other two.

By 1980 many people had come to recognize that increased energy efficiency could help the world energy balance in the short and middle term, and that productive conservation should be considered as no less an energy alternative than the energy sources themselves. Substantial energy savings began to occur in the United States in the 1970s, when, for example, the federal government imposed a nationwide automobile efficiency standard and offered tax deductions for insulating houses and installing solar energy panels. Substantial additional energy savings from conservation measures appear possible without dramatically affecting the way people live.

A number of obstacles stand in the way, however. One major roadblock to productive conservation is its highly fragmented and unglamorous character; it requires hundreds of millions of people to do mundane things such as turning off lights and keeping tires properly inflated. Another barrier has been the price of energy. When adjusted for inflation, the cost of gasoline in the United States was lower in 1998 than it was in 1972. Low energy prices make it difficult to convince people to invest in energy efficiency. From 1973 to the mid-1980s, when oil prices increased in the United States, energy consumption per person dropped about 14 percent, in large part due to conservation measures. However, because oil has become cheaper during the 1990s, the U.S. Energy Department predicts that by the year 2000 energy use in the United States will increase to within 2 percent of 1973 levels. Over time, improvements in energy efficiency more than pay for themselves. However, they require large capital investments, which are not attractive when energy prices are low. Major areas for such improvements are described below.



Transportation uses 25 percent of the total energy consumed in the United States but accounts for 66 percent of the oil used there. Cars built in other countries have long tended to be more efficient than American cars, partly because of the pressures of heavy taxes on gasoline. In 1975 the U.S. Congress passed a law that mandated doubling the fuel efficiency of new cars by 1985. This law, coupled with gasoline shortages in 1974 and 1979 and substantially higher gasoline prices (especially since 1979), caused the average efficiency of all U.S. cars to improve by about 40 percent between 1975 and 1990. However, much of this improvement has been offset by dramatic increases in the number of cars on the road and by the growth in sales of sport utility vehicles and light trucks (which are not covered by federal efficiency standards). By 1996 the number of automobiles used worldwide had grown to 652 million vehicles. This number is expected to increase to nearly 1 billion by 2018. Experts predict that unless more efficient technologies are developed, this growth will raise demand for gasoline by over 20 million barrels per day. Automobile manufacturers have the technical capability today to build cars with a much higher fuel efficiency than that mandated by Congress. Mass production of cars with this efficiency would require vast capital investments, however. New engine technologies that rely on electric batteries or highly efficient fuel cells, as well as engines that run on natural gas, may play a much greater role in the early 21st century. Increases in the prices of gasoline and parking have encouraged two other modes of transportation: ride sharing (either van or car pools) and public transportation. These methods can be highly efficient, but the sprawling character of many U.S. cities can make their use difficult.
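The effect of an efficiency gain like the 40 percent improvement mentioned above can be made concrete with a little arithmetic. The sketch below uses illustrative figures (12,000 miles driven per year and a 15 mpg baseline) that are assumptions, not numbers from the text:

```python
# Illustrative arithmetic (assumed figures, not from the text): how a 40
# percent improvement in fuel efficiency translates into gasoline saved.

def annual_fuel_gallons(miles_per_year, miles_per_gallon):
    """Gallons of gasoline burned per year at a given fuel efficiency."""
    return miles_per_year / miles_per_gallon

base = annual_fuel_gallons(12000, 15.0)            # 800.0 gallons at 15 mpg
improved = annual_fuel_gallons(12000, 15.0 * 1.4)  # 40 percent better efficiency
print(round(base - improved, 1))  # gallons saved per car per year: 228.6
```

As the sketch suggests, the savings per car are substantial, which is why growth in the number of vehicles can nonetheless offset them in aggregate.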



Profit-conscious business managers increasingly emphasize the modification of products and manufacturing processes in order to save energy. The industrial sector, in fact, has recorded more significant improvements in efficiency than either the residential or the transportation sector. Improvements in manufacturing can be classified into three broad, somewhat overlapping, categories: improved housekeeping—doing routine maintenance on furnaces and using only necessary lighting; recovery of waste—recovering heat and recycling waste by-products; and technological innovation—redesigning products and processes to embody more efficient technologies.



In the 1950s and 1960s efficient energy use was often neglected in constructing buildings and houses, but the high energy prices of the 1970s changed that. Some office buildings built since 1980 use only a fifth of the energy used in buildings constructed just ten years earlier. Techniques to save energy include designing and siting buildings to use passive solar heat, using computers to monitor and regulate the use of electricity, and investing in more efficient lighting and in improved heating and cooling systems. A life-cycle approach, which takes into account the total costs over the entire life of the building rather than merely the initial construction cost or sales price, is encouraging greater efficiency. Also, the retrofitting of old buildings, in which new components and equipment are used in existing structures, has been successful.

Chemistry, History of



Chemistry, History of, history of the study of the composition, structure, and properties of material substances, of the interactions between substances, and of the effects on substances of the addition or removal of energy in any of its several forms. From the earliest recorded times, humans have observed chemical changes and have speculated as to their causes. By following the history of these observations and speculations, the gradual evolution of the ideas and concepts that have led to the modern science of chemistry can be traced.



The first known chemical processes were carried out by the artisans of Mesopotamia, Egypt, and China. At first the smiths of these lands worked with native metals such as gold or copper, which sometimes occur in nature in a pure state, but they quickly learned how to smelt metallic ores (primarily metallic oxides and sulfides) by heating them with wood or charcoal to obtain the metals. The progressive use of copper, bronze, and iron gave rise to the names that have been applied to the corresponding ages by archaeologists. A primitive chemical technology also arose in these cultures as dyers discovered methods of setting dyes on different types of cloth, and as potters learned how to prepare glazes, and, later, to make glass.

Most of these craftspeople were employed in temples and palaces, making luxury goods for priests and nobles. In the temples, the priests especially had time to speculate on the origin of the changes they saw in the world about them. Their theories often involved magic, but they also developed astronomical, mathematical, and cosmological ideas, which they used in attempts to explain some of the changes that are now considered chemical.



The first culture to consider these ideas scientifically was that of the Greeks. From the time of Thales, about 600 bc, Greek philosophers were making logical speculations about the physical world rather than relying on myth to explain phenomena. Thales himself assumed that all matter was derived from water, which could solidify to earth or evaporate to air. His successors expanded this theory into the idea that four elements composed the world: earth, water, air, and fire. Democritus thought that these elements were composed of atoms, minute particles moving in a vacuum. Others, especially Aristotle, believed that the elements formed a continuum of mass and therefore a vacuum could not exist. The atomic idea quickly lost ground among the Greeks, but it was never entirely forgotten. When it was revived during the Renaissance, it formed the basis of modern atomic theory (see Atom).

Aristotle became the most influential of the Greek philosophers, and his ideas dominated science for nearly two millennia after his death in 323 bc. He believed that four qualities were found in nature: heat, cold, moisture, and dryness. The four elements were each composed of pairs of these qualities; for example, fire was hot and dry, water was cold and moist, air was hot and moist, and earth was cold and dry. These elements with their qualities combined in various proportions to form the components of the earthly planet. Because it was possible for the amounts of each quality in an element to be changed, the elements could be changed into one another; thus, it was thought possible also to change the material substances that were built up from the elements—lead into gold, for example.



Aristotle's theory was accepted by the practical artisans, especially at Alexandria, Egypt, which after 300 bc became the intellectual center of the ancient world. They thought that metals in the earth sought to become more and more perfect and thus gradually changed into gold. It seemed to them that they should be able to carry out the same process more rapidly in their own workshops and so transmute common metals into gold artificially. Beginning about ad 100 this idea dominated the minds of the philosophers as well as the metalworkers, and a large number of treatises were written on the art of transmutation, which became known as alchemy. Although no one ever succeeded in making gold, a number of chemical processes were discovered in the search for the perfection of metals.

At almost the same time, and probably independently, a similar alchemy arose in China. Here, also, the aim was to make gold, although not because of the monetary value of the metal. The Chinese believed that gold was a medicine that could confer long life or even immortality on anyone who consumed it. As did the Egyptians, the Chinese gained practical chemical knowledge from incorrect theories.


Dispersal of Greek Thought

After the decline of the Roman Empire, Greek writings were less openly studied in western Europe, and even in the eastern Mediterranean they were largely neglected. In the 6th century, however, a sect of Christians known as the Nestorians, whose language was Syriac, spread their influence throughout Asia Minor. They established a university at Edessa in Mesopotamia and translated a large number of Greek philosophical and medical writings into Syriac for use among scholars.

In the 7th and 8th centuries Arab conquerors spread Islamic culture over much of Asia Minor, North Africa, and Spain. The caliphs at Baghdād became active patrons of science and learning. The Syriac translations of Greek texts were translated again, this time into Arabic, and along with the rest of Greek learning, the ideas and practice of alchemy once again flourished.

The Arabic alchemists were also in contact with China in the East, thus receiving the concept of gold as a medicine, as well as the Greek idea of gold as a perfect metal. A specific agent, the philosopher's stone, was thought to stimulate transmutation, and this became the object of the alchemists' search. The alchemists now had an added incentive to study chemical processes, for they might lead not only to wealth but also to health. The study of chemicals and chemical apparatus made steady progress. Such important reagents as the caustic alkalis (see Alkali Metals) and ammonium salts (see Ammonia) were discovered, and distillation apparatus was steadily improved. An early realization of the need for more quantitative methods also appeared in some Arabic recipes, where specific instructions were given regarding the amounts of reagents to be employed.


The Late Middle Ages

A great intellectual reawakening began in western Europe in the 11th century. This was stimulated in part by the cultural exchanges between Arabs and Western scholars in Sicily and Spain. Schools of translators were established, and their translations transmitted Arabic philosophical and scientific ideas to European scholars. Thus, knowledge of Greek science, passed through the intermediate languages of Syriac and Arabic, was disseminated in the scholarly tongue of Latin and so eventually came to all parts of Europe. Many of the manuscripts most eagerly read were those concerning alchemy.

These manuscripts were of two types: Some were almost purely practical, and some attempted to apply theories of the nature of matter to alchemical problems. Among the practical subjects discussed was distillation. The manufacture of glass had been greatly improved, particularly in Venice, and it now became possible to construct even better distillation apparatus than the Arabs had made and to condense the more volatile products of distillation. Among the important products obtained in this way were alcohol and the mineral acids: nitric, aqua regia (a mixture of nitric and hydrochloric), sulfuric, and hydrochloric. Many new reactions could be carried out using these powerful reagents. Word of the Chinese discovery of nitrates and the manufacture of gunpowder also came to the West through the Arabs. The Chinese at first used gunpowder for fireworks, but in the West it quickly became a major part of warfare. An effective chemical technology existed in Europe by the end of the 13th century.

The second type of alchemical manuscript transmitted by the Arabs was concerned with theory. Many of these writings reveal a mystical character that contributed little to the advancement of chemistry, but others sought to explain transmutation in physical terms. The Arabs had based their theories of matter on Aristotle's ideas, but their thinking tended to be more specific than his. This was especially true of their ideas concerning the composition of metals. They believed that metals consisted of sulfur and mercury—not the familiar substances with which they were perfectly well acquainted, but rather the “principle” of mercury, which conferred the property of fluidity on metals, and the “principle” of sulfur, which made substances combustible and caused metals to corrode. Chemical reactions were explained in terms of changes in the amounts of these principles in material substances.


The Renaissance

During the 13th and 14th centuries the influence of Aristotle on all branches of scientific thought began to weaken. Actual observation of the behavior of matter cast doubt on the relatively simple explanations Aristotle had given; such doubts spread rapidly after the invention around 1450 of printing with movable type. After 1500 printed alchemical works appeared in increasing numbers, as did works devoted to technology. The result of this increasing knowledge became apparent in the 16th century.


The Rise of Quantitative Methods

Among the influential books that appeared at this time were practical works on mining and metallurgy. These treatises devoted much space to assaying ores for their content of valuable metals, work that required the use of the laboratory balance, or scale, and the development of quantitative methods (see Chemical Analysis). Workers in other fields, especially medicine, began to recognize the need for greater precision. Physicians, some of whom were alchemists, needed to know the exact weight or volume of the doses they administered. Thus, they used chemical methods for preparing medicines.

These methods were combined and forcefully promoted by the eccentric Swiss physician Theophrastus von Hohenheim, generally called Paracelsus. He grew up in a mining region and became familiar with the properties of metals and their compounds, which he believed were superior to the herbal remedies used by orthodox physicians. He spent most of his life in violent quarrels with the medical establishment of the day, and in the process he founded the science of iatrochemistry (the use of chemical medicines), the forerunner of pharmacology. He and his followers discovered many new compounds and chemical reactions. He modified the old sulfur-mercury theory of the composition of metals by adding a third component, salt, the earthy part of all substances. He declared that when wood burns “that which burns is sulfur, that which vaporizes is mercury, and that which turns to ashes is salt.” As with the sulfur-mercury theory, these were principles and not the material substances. His emphasis on combustible sulfur was important for the later development of chemistry. The iatrochemists who followed Paracelsus modified some of his wilder ideas and collected his and their own recipes for preparing chemical remedies. Finally, at the end of the 16th century, Andreas Libavius published his Alchemia, which organized the knowledge of the iatrochemists and is frequently called the first textbook of chemistry.

In the first half of the 17th century a few men began to study chemical reactions experimentally, not because they were useful in other disciplines, but rather for their own sake. Jan Baptista van Helmont, a physician who left medical practice to devote himself to the study of chemistry, used the balance in an important experiment to show that a definite quantity of sand could be fused with excess alkali to form water glass, and that when this product was treated with acid, it regenerated the original amount of sand (silica). Thus were laid the foundations of the law of conservation of mass. Van Helmont also showed that in a number of reactions an aerial fluid was liberated. He called this substance “gas.” A new class of substances with its own physical properties was shown to exist.


Revival of Atomic Theory

Boyle’s Law

Boyle’s law, developed by English scientist Robert Boyle, states that for a gas at a constant temperature, the pressure of the gas times its volume is equal to a constant. This relationship means that pressure increases as volume decreases, and vice versa. In this graph, the product of pressure and volume anywhere along one of the lines of constant temperature should be the same.

In the 17th century experimenters discovered how to create a vacuum, something that Aristotle had declared impossible. This called attention to the ancient theory of Democritus, who had assumed that his atoms moved in a void. The French philosopher and mathematician René Descartes and his followers developed a mechanical view of matter in which the size, shape, and motion of minute particles explained all observed phenomena. Most natural philosophers and iatrochemists at this time assumed that gases had no chemical properties, hence their attention was centered on the physical behavior of gases. A kinetic-molecular theory of gases began to develop. Notable in this direction were the experiments of Robert Boyle, the English physicist and chemist whose studies of the “spring of the air” (elasticity) led to the formulation of what became known as Boyle's law, a generalization of the inverse relation between pressure and volume of a gas (see Gases).
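The inverse pressure-volume relation in Boyle's law lends itself to a short sketch. The pressures and volumes below are illustrative values, not experimental data:

```python
# A minimal sketch of Boyle's law: P * V = k for a fixed amount of gas at
# constant temperature.  The pressures and volumes are illustrative values.

def boyle_volume(p1, v1, p2):
    """New volume after a gas at constant temperature goes from
    pressure p1 (and volume v1) to pressure p2."""
    return p1 * v1 / p2

# Doubling the pressure halves the volume: the product P*V is unchanged.
v2 = boyle_volume(100.0, 2.0, 200.0)  # kPa and litres -> 1.0 litre
assert abs(100.0 * 2.0 - 200.0 * v2) < 1e-9
print(v2)  # 1.0
```

Each isotherm on a pressure-volume graph corresponds to one value of the constant P·V, which is why the product stays fixed along a line of constant temperature.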



While natural philosophers were thus speculating on mathematical laws, early chemists in their laboratories were attempting to use chemical theories to explain the very real chemical reactions they were observing. The iatrochemists paid particular attention to sulfur and the theories of Paracelsus. In the second half of the 17th century, the German physician, economist, and chemist Johann Joachim Becher built a system of chemistry around this principle. He noted that when organic matter burned, a volatile material seemed to leave the burning substance. His disciple, Georg Ernst Stahl, made this the central point of a theory that survived in chemical circles for nearly a century.

Stahl assumed that when anything burned, its combustible part was given off to the air. This part he called phlogiston, from the Greek word for “flammable.” The rusting of metals was analogous to combustion and therefore also involved loss of phlogiston. Plants absorbed the phlogiston from the air and thus were rich in it. Heating the calx, or oxides, of metals with charcoal restored phlogiston to them. It followed from this that the calx was an element, and the metal a compound. This theory is almost exactly the reverse of the modern concept of oxidation-reduction (see Chemical Reaction), but it involves the cyclic transfer of a substance—even if in the wrong direction—and some observed phenomena could be explained by it. However, recent studies of chemical literature of the period show that the phlogiston explanation had only minor influence among chemists until it was attacked by the wealthy amateur French chemist Antoine Laurent Lavoisier in the last quarter of the 18th century.


The 18th Century

At about the same time, another observation led to advances in the understanding of chemistry. As more and more chemicals were studied, chemists saw that certain substances combined more easily with, or had a greater affinity for, a given chemical than did others. Elaborate tables were drawn up showing relative affinities when different chemicals were brought together. Use of these tables made it possible to predict many chemical reactions before testing them in the laboratory.

All these advances led in the 18th century to the discovery of new metals and their compounds and reactions. Qualitative and quantitative analytical methods began to be developed, and the science of analytical chemistry was born. Nonetheless, as long as the part played by gases was believed to be only physical, the full scope of chemistry could not be recognized.

The chemical study of gases, generally called “airs,” became important after the British physiologist Stephen Hales developed the pneumatic trough, which collected over water the gases released when solids were heated in a closed system and allowed their volumes to be measured. The pneumatic trough became a valuable device for the collection and study of gases uncontaminated by ordinary air. The study of gases advanced rapidly and led to a new level of understanding of various gases.

The initial understanding of the role of gases in chemistry occurred in Edinburgh in 1756, when the British chemist Joseph Black published his studies on the reactions of magnesium and calcium carbonates (see Carbonates). When these compounds were heated, they gave off a gas and left a residue of what Black called calcined magnesia, or lime (the oxides). The latter reacted with “alkali” (sodium carbonate) to regenerate the original salts. Thus, the gas carbon dioxide, which Black called fixed air, took part in chemical reactions (was “fixed,” as he said). The idea that a gas could not enter a chemical reaction was overthrown, and soon a number of new gases were recognized as being distinct substances.

The British physicist Henry Cavendish isolated “flammable air” (hydrogen) in the next decade. He also introduced the use of mercury instead of water as the confining liquid over which gases were collected, making it possible to collect water-soluble gases. This variant was used extensively by the British chemist and theologian Joseph Priestley, who collected and studied almost a dozen new gases. Priestley's most important discovery was oxygen, and he quickly realized that this gas was the component of ordinary air that was responsible for combustion and made animal respiration possible. However, he reasoned that combustible substances burned more energetically in this gas, and metals formed calxes more readily, since it was devoid of phlogiston. Hence, the gas accepted the phlogiston present in the combustible substance or the metal more readily than ordinary air, which was already partially filled with phlogiston. He named this new gas “dephlogisticated air” and defended that belief to the end of his life.

Meanwhile chemistry had been making rapid progress in France, particularly in the laboratory of Lavoisier. He was troubled by the fact that metals gained weight when heated in the air when presumably they were losing phlogiston.

In 1774 Priestley visited France and told Lavoisier about his discovery of dephlogisticated air. Lavoisier quickly saw the significance of this substance, and the way was opened for the chemical revolution that established modern chemistry. He used the name “oxygen,” meaning acid former.


The Birth of Modern Chemistry

Lavoisier showed by a series of brilliant experiments that air contains 20 percent oxygen and that combustion is due to the combination of a combustible substance with oxygen. When carbon is burned, fixed air (carbon dioxide) is produced. Phlogiston therefore does not exist. The phlogiston theory was soon replaced by the view that oxygen from the air combines with the components of the combustible substance to form oxides of the component elements. Lavoisier used the laboratory balance to give quantitative support to his work. He defined elements as substances that could not be decomposed by chemical means and firmly established the law of the conservation of mass. He replaced the old system of chemical names (which was still based on alchemical usage) with the rational chemical nomenclature used today, and he helped to found the first chemical journal. After his death on the guillotine in 1794, his colleagues continued his work in establishing modern chemistry. A little later the Swedish chemist Jöns Jakob Berzelius proposed symbolizing atoms of the elements by the initial letters or pairs of letters from their names.



By the beginning of the 19th century the precision of analytical chemistry had improved to such an extent that chemists were able to show that the simple compounds with which they worked contained fixed and unvarying amounts of their constituent elements. In certain cases, however, more than one compound could be formed between the same elements. At the same time the French chemist and physicist Joseph Gay-Lussac showed that the volume ratios of reacting gases were small whole numbers (which implies the interaction of discrete particles, later shown to be atoms). A major step in explaining these facts was the chemical atomic theory of the English scientist John Dalton in 1803.

Dalton assumed that when two elements combined, the resulting compound contained one atom of each. In his system, water could be given a formula corresponding to HO. He arbitrarily assigned to hydrogen the atomic weight of 1 and could then calculate the relative atomic weight of oxygen. Applying this principle to other compounds, he calculated the atomic weights of other elements and drew up a table of the relative atomic weights of all the then known elements. His theory contained many errors, but the idea was correct, and a precise quantitative value could then be assigned to the mass of each atom.
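Dalton's calculation of oxygen's atomic weight can be illustrated numerically. The sketch below uses modern round numbers for water's composition (roughly 1 part hydrogen to 8 parts oxygen by mass); Dalton's own measurements were less precise:

```python
# Water is roughly 1 part hydrogen to 8 parts oxygen by mass (modern round
# numbers; Dalton's own figures were less accurate).
HYDROGEN_PARTS = 1.0
OXYGEN_PARTS = 8.0

# Dalton's assumption: water is "HO", one atom of each element, with
# hydrogen arbitrarily assigned atomic weight 1.  The mass ratio then
# gives oxygen an atomic weight of 8.
dalton_oxygen = OXYGEN_PARTS / HYDROGEN_PARTS  # 8.0

# With the correct formula H2O, the hydrogen mass is shared by two atoms,
# so the same mass ratio gives oxygen its modern atomic weight of 16.
modern_oxygen = OXYGEN_PARTS / (HYDROGEN_PARTS / 2.0)  # 16.0

print(dalton_oxygen, modern_oxygen)  # 8.0 16.0
```

The arithmetic shows how a wrong formula produced a wrong atomic weight even though the underlying idea of relative atomic masses was sound.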


Molecular Theory

The major weaknesses in Dalton's theory were that he did not account for the law of multiple proportions and made no distinction between atoms and molecules. Thus, he could not distinguish between the possible formulas for water HO and H2O2, nor could he explain why the density of water vapor, with its assumed formula HO, was less than that of oxygen, assumed to have the formula O. The solution to these problems was found in 1811 by the Italian physicist Amedeo Avogadro. He suggested that the numbers of particles in equal volumes of gases at the same temperature and pressure were equal and that a distinction existed between molecules and atoms. When oxygen combined with hydrogen, a double atom of oxygen (a molecule in our terms) was split, each oxygen atom then combining with two hydrogen atoms, giving the molecular formula of H2O for water and O2 and H2 for molecules of oxygen and hydrogen.

Unfortunately, Avogadro's ideas were overlooked for nearly 50 years, and during this time great confusion prevailed among chemists in their calculations. It was not until 1860 that the Italian chemist Stanislao Cannizzaro reintroduced Avogadro's hypotheses. By this time chemists had found it more convenient to take the atomic weight of oxygen, 16, as the standard to which to relate the atomic weights of all the other elements instead of taking the value 1 for hydrogen, as Dalton had done. The molecular weight of oxygen, 32, was then used universally and, expressed in grams, was called the gram molecular weight of oxygen, or more simply, 1 mole of oxygen. Chemical calculations were standardized, and fixed formulas written.
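The gram-molecular-weight convention described above amounts to a simple unit conversion; a brief sketch (the code and its quantities are illustrative, not part of the original text):

```python
# Gram molecular weight: the molecular weight of O2 (32, on the old
# oxygen-16 standard) expressed in grams is 1 mole of oxygen.
O2_MOLECULAR_WEIGHT = 32.0  # grams per mole

def moles_of_o2(grams):
    """Number of moles in a given mass of molecular oxygen."""
    return grams / O2_MOLECULAR_WEIGHT

print(moles_of_o2(32.0))  # 1.0
print(moles_of_o2(64.0))  # 2.0
```

Standardizing on a single reference weight in this way is what made chemical calculations and formulas uniform across laboratories.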

The old problem of the nature of chemical affinity remained unsolved. For a time it appeared that the answer might lie in the newly discovered field of electrochemistry. The discovery in 1800 of the voltaic pile, the first true battery, gave chemists a new tool, which led to the discovery of such metals as sodium and potassium. It seemed to Berzelius that positive and negative electrostatic forces might hold elements together; at first his theories were generally accepted. As chemists prepared and studied more new compounds and reactions in which electrical forces did not seem to be involved (the nonpolar compounds), the problem of affinity was shelved for a time.


New Fields of Chemistry

The most striking advances in chemistry in the 19th century were in the field of organic chemistry (see Chemistry, Organic). The structural theory, which gave a picture of how atoms were actually put together, was nonmathematical, but employed a logic of its own. It made possible the prediction and preparation of many new compounds, including a large number of important dyes, drugs, and explosives that gave rise to great chemical industries, especially in Germany.

At the same time, other branches of chemistry made their appearance. Stimulated by the advances in physics then being made, some chemists sought to apply mathematical methods to their science. Studies of reaction rates led to the development of kinetic theories that had value both for industry and for pure science. The recognition that heat was due to motion on the atomic scale, a kinetic phenomenon, led to the abandonment of the idea that heat was a specific substance (termed caloric) and initiated the study of chemical thermodynamics (see Thermodynamics). Continuation of electrochemical studies led the Swedish chemist Svante August Arrhenius to postulate the dissociation of salts in solution to form ions carrying electrical charges. Studies of the emission and absorption spectra of elements and compounds became important to both chemists and physicists (see Spectroscopy; Spectrum). In addition, fundamental research in colloid chemistry and photochemistry was begun. By the end of the 19th century, studies of this type were combined into the field known as physical chemistry (see Chemistry, Physical).

Inorganic chemistry also required organization. The number of new elements being discovered continued to grow, but no method of classification had been developed that could bring order to their reactions. The independent development of the periodic law by the Russian chemist Dmitry Ivanovich Mendeleyev in 1869 and the German chemist Julius Lothar Meyer in 1870 eliminated this confusion and indicated where new elements would be found and what their properties would be (see Elements, Chemical; Periodic Law).

At the end of the 19th century chemistry, like physics, seemed to have reached a stage in which no striking new fields remained to be developed. This view changed completely with the discovery of radioactivity. Chemical methods were used in isolating new elements such as radium, in the separation of the new class of substances known as isotopes, and in the synthesis and isolation of the new transuranium elements. The new picture of the actual structure of atoms obtained by physicists solved the old problem of chemical affinity and explained the relation between polar and nonpolar compounds. See Nuclear Chemistry.

The other major advance for chemistry in the 20th century was the foundation of biochemistry. This began with the simple analysis of body fluids; methods were then rapidly developed for determining the nature and function of the most complex cell constituents. By midcentury biochemists had unraveled the genetic code and explained the function of the gene, the basis of all life; the field had grown so vast that its study had become a new science, molecular biology. See also Genetics.


Recent Research in Chemistry

Recent advances in biotechnology and materials science are helping to define the frontiers of chemical research. In biotechnology, sophisticated analytical instruments have made it possible to initiate an international effort to sequence the human genome. Success in this project will likely completely change the nature of such fields as molecular biology and medicine. Materials science, an interdisciplinary combination of physics, chemistry, and engineering, is guiding the design of advanced materials and devices. A recent example is the discovery of high-temperature superconductors, ceramic compounds that lose their resistance to the flow of electricity at temperatures above 77 K (-196°C/-321°F; see Superconductivity). Characterization of surfaces is being advanced by the invention of the scanning tunneling microscope, which can provide images of certain surfaces with atomic-scale resolution. See Microscope; Superconductivity.

Even in conventional fields of chemical research, new, more powerful analytical tools are providing unprecedented detail about chemicals and their reactions. For example, laser techniques are providing snapshots of gas-phase chemical reactions on the femtosecond (a millionth of a billionth of a second) time scale. A new form of carbon, called buckminsterfullerene, has been isolated from the soot produced by graphite electrodes; it has the shape of a soccer ball and the chemical formula C60. This compound and its chemistry have been characterized with astonishing rapidity using the vast array of analytical techniques currently available. Certain alkali metal salts of this compound have even been found to be superconducting.


The Chemical Industry

The growth of the chemical industry and the training of professional chemists have an interestingly intertwined history. Until about 150 years ago chemists were not trained professionally. Chemistry was advanced by the work of those who were interested in the subject but who made no systematic effort to train new workers in the field. Physicians and wealthy amateurs often hired assistants, only some of whom continued their masters' work.

Early in the 19th century, however, this haphazard system of chemical education changed. Many provincial universities were established in Germany, a country with a long tradition of research. A research center in chemistry was set up at Giessen by the German chemist Justus Liebig. This first teaching laboratory became so successful that it drew students from all over the world; other German universities soon followed.

A large group of young chemists was thus trained just at the time when chemical industries were beginning to exploit new discoveries. This exploitation had its start during the Industrial Revolution; the Leblanc process for the production of soda, for example—one of the first large-scale production processes—was developed in France in 1791 and was commercialized in England beginning in 1823. The laboratories of such growing industries were able to employ the newly trained chemistry students and also to use university professors as consultants. This interplay between the universities and the chemical industry benefited both of them, and the accompanying rapid growth of the organic chemical industry toward the end of the 19th century created the great German dye and pharmaceutical trusts that gave Germany scientific predominance in the field until World War I.

After the war, the German system was introduced into all the industrial nations of the world, and chemistry and chemical industries progressed even more rapidly. Among some of the more recent industrial developments, increasing use has been made of enzymatic reaction processes (see Enzyme), mainly because of the low costs and high yields that can be achieved. Industries are at present studying production methods using genetically altered microorganisms for industrial purposes (see Genetic Engineering).


Chemistry and Society

Chemistry has had an enormous influence on human life. In earlier periods chemical techniques were used to isolate useful natural products and to find new ways to employ them. In the 19th century techniques were developed for synthesizing completely new substances that were better than the natural ones or that could replace them more cheaply. As the complexity of synthesized compounds increased, wholly new materials with novel uses began to appear. Plastics and new textiles were developed, and new drugs conquered whole classes of disease. At the same time, what had been entirely separate sciences began to be drawn together. Physicists, biologists, and geologists had developed their own techniques and ways of looking at the world, but now it became evident that each science, in its own way, was the study of matter and its changes. Chemistry lay at the base of each of them. The resulting formation of such interscientific disciplines as geochemistry and biochemistry has stimulated all of the parent sciences.

The progress of science in recent years has been spectacular, although the benefits of this progress have not been without some corresponding liabilities. The most obvious dangers come from radioactive materials, with their potential for producing cancers in exposed individuals and mutations in their children. It has also become apparent that the accumulation in plant and animal cells of pesticides once thought harmless, or of by-products from manufacturing processes, often has damaging effects. These dangerous materials have been manufactured in enormous amounts and dispersed widely, and it has become the task of chemistry to discover the means by which these substances can be rendered harmless. This is one of the greatest challenges science will have to meet.




Robot, computer-controlled machine that is programmed to move, manipulate objects, and accomplish work while interacting with its environment. Robots are able to perform repetitive tasks more quickly, cheaply, and accurately than humans. The term robot originates from the Czech word robota, meaning “compulsory labor.” It was first used in the 1921 play R.U.R. (Rossum's Universal Robots) by the Czech novelist and playwright Karel Capek. The word robot has since been used to refer to a machine that performs work to assist people or that does work humans find difficult or undesirable.



The concept of automated machines dates to antiquity with myths of mechanical beings brought to life. Automata, or manlike machines, also appeared in the clockwork figures of medieval churches, and 18th-century watchmakers were famous for their clever mechanical creatures.

Feedback (self-correcting) control mechanisms were used in some of the earliest robots and are still in use today. An example of feedback control is a watering trough that uses a float to sense the water level. When the water falls past a certain level, the float drops, opens a valve, and releases more water into the trough. As the water rises, so does the float. When the float reaches a certain height, the valve is closed and the water is shut off.
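The float-valve loop above can be sketched as a simple on/off (bang-bang) controller. The thresholds and flow rates below are illustrative assumptions, not values from the text:

```python
def simulate_trough(level, steps, low=40.0, high=50.0,
                    inflow=3.0, outflow=1.0):
    """Simulate the water level under simple float-valve feedback."""
    valve_open = False
    for _ in range(steps):
        if level < low:          # the float drops past the set level...
            valve_open = True    # ...and opens the valve
        elif level >= high:      # the float rises to the shutoff height...
            valve_open = False   # ...and closes the valve
        level += (inflow if valve_open else 0.0) - outflow
    return level

# Regardless of the starting level, feedback keeps the water
# confined near the band between the two thresholds.
final = simulate_trough(level=45.0, steps=200)
```

The same self-correcting principle (measure, compare with a set point, act to reduce the difference) underlies the Watt governor described next.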

The first true feedback controller was the Watt governor, invented in 1788 by the Scottish engineer James Watt. This device featured two metal balls connected to the drive shaft of a steam engine and also coupled to a valve that regulated the flow of steam. As the engine speed increased, the balls swung out due to centrifugal force, closing the valve. The flow of steam to the engine was decreased, thus regulating the speed.

Feedback control, the development of specialized tools, and the division of work into smaller tasks that could be performed by either workers or machines were essential ingredients in the automation of factories in the 18th century. As technology improved, specialized machines were developed for tasks such as placing caps on bottles or pouring liquid rubber into tire molds. These machines, however, had none of the versatility of the human arm; they could not reach for objects and place them in a desired location.

The development of the multijointed artificial arm, or manipulator, led to the modern robot. A primitive arm that could be programmed to perform specific tasks was developed by the American inventor George Devol, Jr., in 1954. In 1975 the American mechanical engineer Victor Scheinman, while a graduate student at Stanford University in California, developed a truly flexible multipurpose manipulator known as the Programmable Universal Manipulation Arm (PUMA). PUMA was capable of moving an object and placing it with any orientation in a desired location within its reach. The basic multijointed concept of the PUMA is the template for most contemporary robots.



The inspiration for the design of a robot manipulator is the human arm, but with some differences. For example, a robot arm can extend by telescoping—that is, by sliding cylindrical sections one over another to lengthen the arm. Robot arms also can be constructed so that they bend like an elephant trunk. Grippers, or end effectors, are designed to mimic the function and structure of the human hand. Many robots are equipped with special-purpose grippers to grasp particular devices such as a rack of test tubes or an arc welder.

The joints of a robotic arm are usually driven by electric motors. In most robots, the gripper is moved from one position to another, changing its orientation. A computer calculates the joint angles needed to move the gripper to the desired position in a process known as inverse kinematics.
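As an illustration of inverse kinematics, here is a minimal sketch for a hypothetical two-link planar arm; the link lengths and target point are arbitrary assumptions. Given a target position for the gripper, the law of cosines gives the elbow angle, and the shoulder angle follows from the direction of the target:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return (shoulder, elbow) joint angles in radians for a
    two-link planar arm whose gripper should reach (x, y)."""
    r2 = x * x + y * y
    # Law of cosines: r^2 = l1^2 + l2^2 + 2*l1*l2*cos(elbow)
    c = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c)                      # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Forward-kinematics check: the computed angles place the
# gripper back at the requested target (l1 = l2 = 1 here).
s, e = two_link_ik(1.2, 0.8)
gx = math.cos(s) + math.cos(s + e)
gy = math.sin(s) + math.sin(s + e)
```

Real manipulators have more joints and therefore many solutions for a given gripper position, but the principle of working backward from the desired position to the joint angles is the same.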

Some multijointed arms are equipped with servo, or feedback, controllers that receive input from a computer. Each joint in the arm has a device to measure its angle and send that value to the controller. If the actual angle of the arm does not equal the computed angle for the desired position, the servo controller moves the joint until the arm's angle matches the computed angle. Controllers and associated computers also must process sensor information collected from cameras that locate objects to be grasped or from touch sensors on grippers that regulate the grasping force.
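The servo cycle described above (measure the joint angle, compare it with the computed angle, move the joint to reduce the difference) can be sketched as a proportional feedback loop. The gain and cycle count below are illustrative assumptions:

```python
def servo_to_angle(measured, target, gain=0.2, cycles=50):
    """Drive a measured joint angle toward the computed target
    angle with proportional feedback corrections."""
    for _ in range(cycles):
        error = target - measured      # angle sensor vs. computed angle
        measured += gain * error       # actuator applies a correction
    return measured

# After enough cycles the error shrinks geometrically and the
# measured angle converges to the target.
angle = servo_to_angle(measured=0.0, target=90.0)
```

Industrial servo controllers typically add integral and derivative terms (PID control) to remove residual error and damp overshoot, but the proportional correction is the core of the feedback idea.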

Any robot designed to move in an unstructured or unknown environment will require multiple sensors and controls, such as ultrasonic or infrared sensors, to avoid obstacles. Robots, such as the National Aeronautics and Space Administration (NASA) planetary rovers, require a multitude of sensors and powerful onboard computers to process the complex information that allows them mobility. This is particularly true for robots designed to work in close proximity with human beings, such as robots that assist persons with disabilities and robots that deliver meals in a hospital. Safety must be integral to the design of human service robots.



In 1995 about 700,000 robots were operating in the industrialized world. Over 500,000 were used in Japan, about 120,000 in Western Europe, and about 60,000 in the United States. Many robot applications are for tasks that are either dangerous or unpleasant for human beings. In medical laboratories, robots handle potentially hazardous materials, such as blood or urine samples. In other cases, robots are used in repetitive, monotonous tasks in which human performance might degrade over time. Robots can perform these repetitive, high-precision operations 24 hours a day without fatigue. A major user of robots is the automobile industry. General Motors Corporation uses approximately 16,000 robots for tasks such as spot welding, painting, machine loading, parts transfer, and assembly. Assembly is one of the fastest growing industrial applications of robotics. It requires higher precision than welding or painting and depends on low-cost sensor systems and powerful inexpensive computers. Robots are used in electronic assembly where they mount microchips on circuit boards.

Activities in environments that pose great danger to humans, such as locating sunken ships, cleaning up nuclear waste, prospecting for underwater mineral deposits, and exploring active volcanoes, are ideally suited to robots. Similarly, robots can explore distant planets. NASA's Galileo, an unpiloted space probe, arrived at Jupiter in 1995 and performed tasks such as determining the chemical content of the Jovian atmosphere.

Robots are being used to assist surgeons in installing artificial hips, and very high-precision robots can assist surgeons with delicate operations on the human eye. Research in telesurgery uses robots under the remote control of expert surgeons; such systems may one day perform operations on distant battlefields.



Robotic manipulators create manufactured products that are of higher quality and lower cost. But robots can cause the loss of unskilled jobs, particularly on assembly lines in factories. New jobs are created in software and sensor development, in robot installation and maintenance, and in the conversion of old factories and the design of new ones. These new jobs, however, require higher levels of skill and training. Technologically oriented societies must face the task of retraining workers who lose jobs to automation, providing them with new skills so that they can be employable in the industries of the 21st century.



Automated machines will increasingly assist humans in the manufacture of new products, the maintenance of the world's infrastructure, and the care of homes and businesses. Robots will be able to make new highways, construct steel frameworks of buildings, clean underground pipelines, and mow lawns. Prototypes of systems to perform all of these tasks already exist.

One important trend is the development of microelectromechanical systems, ranging in size from centimeters to millimeters. These tiny robots may be used to move through blood vessels to deliver medicine or clean arterial blockages. They also may work inside large machines to diagnose impending mechanical problems.

Perhaps the most dramatic changes in future robots will arise from their increasing ability to reason. The field of artificial intelligence is moving rapidly from university laboratories to practical application in industry, and machines are being developed that can perform cognitive tasks, such as strategic planning and learning from experience. Increasingly, diagnosis of failures in aircraft or satellites, the management of a battlefield, or the control of a large factory will be performed by intelligent computers.

Factory System



Factory System, working arrangement whereby a number of persons cooperate to produce articles of consumption. Today the term factory generally refers to a large establishment employing many people involved in mass production of industrial or consumer goods. Some form of the factory system, however, has existed since ancient times.



Pottery works have been uncovered in ancient Greece and Rome. In various parts of the Roman Empire factories manufactured glassware and bronze ware and other similar articles for export as well as for domestic consumption. In the Middle Ages, large silk factories were operated in the Syrian cities of Antakya and Tyre; and in Europe, during the late medieval period, textile factories were established in several countries, notably in Italy, Flanders (now Belgium), France, and England.

During the Renaissance, the advance of science, contact with the New World, and the development of new trade routes to the Far East stimulated commercial activity and the demand for manufactured goods and thereby promoted industrialization. In western Europe and particularly in England, during the 16th and 17th centuries, many factories were created to produce such goods as paper, firearms, gunpowder, cast iron, glass, items of clothing, beer, and soap. Although heavy machinery, operated by water power in some places, was used in a few establishments, the industrial processes were generally carried on by means of hand labor and simple tools. In contrast to modern mechanized plants with assembly lines, the factories were merely large workshops where each laborer functioned independently. Nor were factories the most usual place of production; although some workers used their employer's tools and worked on the premises, most manufacturing was done under the domestic, or putting-out, system, by which workers received the raw materials, worked in their own homes, returned the finished articles, and were paid for their labor.



The factory system, which eventually replaced the domestic system and became the characteristic method of production in modern economies, began to develop in the late 18th century, when a series of inventions transformed the British textile industry and marked the beginning of the Industrial Revolution. Among the most important of these inventions were the flying shuttle patented (1733) by John Kay, the spinning jenny (1764) of James Hargreaves, the water frame for spinning (1769) of Sir Richard Arkwright, the spinning mule (1779) of Samuel Crompton, and the power loom (1785) of Edmund Cartwright. These inventions mechanized many of the hand processes involved in spinning and weaving, making it possible to produce textiles much more quickly and cheaply. Many of the new machines were too large and costly to be used at home, however, and it became necessary to move production into factories.

One of the major technological breakthroughs early in the Industrial Revolution was the invention of a practical steam engine. When textile factories first became mechanized, only water power was available to operate the machinery; the factory owner was forced to locate the establishment near a water supply, sometimes in an isolated and inconvenient area far from a labor supply. After 1785, when a steam engine was first installed in a cotton factory, steam began to replace water as power for the new machinery. Manufacturers could build factories closer to a labor supply and to markets for the goods produced. The development of the steam locomotive and steamship in the early 19th century made it possible to ship factory-built products to distant markets more rapidly and economically, thus encouraging industrialization.

The Arkwright method of spinning was introduced into the U.S. in 1790 by Samuel Slater, a former apprentice in a British mechanized textile factory who started a factory in Pawtucket, Rhode Island. From that time on, mechanized textile factories sprang up throughout New England. In 1814, at a cotton mill established by the American industrialist Francis Cabot Lowell in Waltham, Massachusetts, all the steps of an industrial process were, for the first time, combined under one roof; here, cotton entered the factory as raw fiber and emerged as finished goods ready for sale.



Textiles, particularly cotton goods, were the major factory-made products during the early 19th century. Meanwhile, new machinery and techniques were being invented that made it possible to extend the factory system to other industries. The American inventor Eli Whitney, who stimulated textile manufacturing in the U.S. by inventing the cotton gin in 1793, made an equally important, if not greater, contribution to the factory system by developing the idea of using interchangeable parts in making firearms. Interchangeable parts, with which Whitney began experimenting in 1798, eventually made it possible to produce firearms by assembly line techniques, rather than custom work, and to repair them quickly with premade parts. The idea of interchangeable parts was applied to the manufacture of timepieces from about 1820 on. Then, in the 1850s, at Waltham, Massachusetts, automatic machinery was used for the first time to make watches by a consecutive process in a single factory. Thus, by the middle of the 19th century, American factories had begun to develop the outstanding feature of the modern factory system: mass production of standardized articles.

The garment industries were revolutionized by the sewing machine, patented in 1846 by the American inventor Elias Howe, and underwent a tremendous expansion during the 1860s. Spurred by the urgent demand for uniforms during the American Civil War, clothing manufacturers developed standardized sizes, a prerequisite for mass production of ready-made garments. At the same time, the military demand for shoes stimulated the creation of shoe-sewing machinery to mass-produce footwear.



As the 20th century began, the factory system of production prevailed throughout the United States and most of Western Europe. It reached its greatest European development in Germany, England, the Netherlands, and Belgium, which became, to a great extent, importers of food and raw materials and exporters of factory-made commodities. In 1913 Henry Ford, the pioneer automobile manufacturer, made an immense contribution to the expansion of the factory system in the U.S. when he introduced assembly line techniques to automobile production in the Ford Motor plant. In time the factory system spread to the Orient, where cheap labor attracted capital from the industrialized countries of the West. Japan, which had begun to industrialize in the late 19th century, rapidly became the foremost industrial power of Asia and a serious competitor of the Western nations.

The general trend of development of the factory system has been toward larger establishments with greater capital investment per worker. In the U.S. the number of manufacturing establishments actually declined from about 500,000 in 1899 to about 325,000 in the early 1980s, but the number of workers employed increased greatly, as did the value added to the economy by manufacture. By the mid-1980s, however, many factories felt the impact of serious problems in manufacturing industries, especially in the production of textiles, steel, automobiles, machine tools, and electrical equipment. Of major concern was the proliferation of cheap foreign imports. Cuts in these industries have led to relocation of businesses and factory closings, with accompanying loss of jobs and even economic devastation in some regions.

Other important trends have been the rise to leadership positions of professional managers who treat factory organization and operation as a science, and the development and use of increasingly sophisticated equipment in modern factory operation. Some machines, aided by computers, semiconductors, and other technological innovations of the mid-20th century, are so nearly self-regulating that an entire factory may be kept running by a few people operating sets of controls. This method of production, called automation, has brought many economic changes, which eventually may be as basic as those resulting from the Industrial Revolution.



The introduction of the factory system had a profound effect on social relationships and living conditions. In earlier times the feudal lord and the guildmaster both had been expected to take some responsibility for the welfare of the serfs, apprentices, and journeymen who worked under them (see Feudalism; Guild). By contrast, the factory owners were considered to have discharged their obligations to employees with the payment of wages; thus, most owners took an impersonal attitude toward those who worked in their factories. This was in part because no particular strength or skill was required to operate many of the new factory machines. The owners of the early factories often were more interested in hiring a worker cheaply than in any other qualification. Thus they employed many women and children, who could be hired for lower wages than men. These low-paid employees had to work for as long as 16 hours a day; they were subjected to pressure, and even physical punishment, in an effort to make them speed up production. Since neither the machines nor the methods of work were designed for safety, many fatal and maiming accidents resulted. In 1802 the exploitation of pauper children led to the first factory legislation in England. That law, which limited a child's workday to 12 hours, and other legislation that followed were not strictly enforced.

The workers in the early mill towns were not in a position to act in their own interest against the factory owners. The first cotton mills were located in small villages where all the shops and inhabitants depended on a single factory for their livelihood. Few dared to challenge the will of the person who owned such a factory and controlled the lives of the workers both on and off the job. The long hours of work and low wages kept a laborer from leaving the community or being otherwise exposed to outside influences. Later, when factories were located in larger cities, the disadvantages of the mill town gave way to such urban evils as overcrowded sweatshops and slums. In addition, the phenomenon of the business cycle began to manifest itself, subjecting industrial laborers to the frequent threat of unemployment.



By the early 19th century the condition of workers under the factory system had aroused concern. One who called for reform was Robert Owen, a British self-made capitalist and cotton mill owner, who tried to set an example by transforming a squalid Scottish mill town called New Lanark into a model industrial community between 1815 and 1828. At New Lanark, wages were higher and hours shorter, young children were kept out of the factory and sent to school, and employee housing was superior by the standards of the day; yet the mill operated at a substantial profit. In Owen's day modern trade unions were beginning to develop in the British Isles, and he sought to organize them into a national movement. His aim was to improve working conditions as well as effect basic social and economic reforms. In his concern for the increasing differences between capital and labor, Owen was joined by such economic theorists as the Frenchmen Charles Fourier, Claude Henri de Saint-Simon, and Pierre Joseph Proudhon and the Germans Karl Marx and Friedrich Engels, each of whom analyzed the processes of modern industrial society and proposed social and industrial reforms.

In time, organized protest forced owners to correct some of the worst abuses. Workers agitated for and obtained the right to vote, and they established political parties and labor unions. The unions, after a considerable struggle and frequent setbacks, won important concessions from management and government, including the right to organize workers in factories and to represent them in negotiations (see Trade Union; Trade Unions in the United States). Furthermore, issues and problems germane to the factory system came to figure prominently in the formulation of modern political and economic theory (see Labor Relations). In the Soviet Union, the factory became a social and political, as well as an industrial, unit (see Socialism; Union of Soviet Socialist Republics).

One of the important and often overlooked consequences of the factory system was its promotion of the emancipation of women. The factory created wage-earning opportunities for women, enabling them to become economically independent. Thus, industrialization began to change the family relationship and the status of women. See Women, Employment of.



The inspection of factories by state agencies began in England in the early 19th century in response to public protest against the working conditions for women and child laborers. Later, wherever the factory system spread, governments eventually adopted regulations against unhealthful and dangerous conditions. Thus, a factory code became standard in every industrialized country. These codes provided for restrictions on child labor and hours of work, regulation of sanitary conditions, installation of safety devices and the enforcement of safety standards, medical supervision, adequate ventilation, the elimination of sweatshops, and the establishment of minimum wages. One important regulating agency was the International Association of Factory Inspection, established in 1886 by Canada and 14 states of the U.S. The International Labor Organization, acting in cooperation first with the League of Nations and later with the United Nations, correlated the regulation of factory conditions throughout the world.

In the U.S., the federal government is responsible for regulating working conditions in factories and most other places of employment. Prior to the early 1970s, each of the states regulated the inspection of factories within its own borders. In 1970 the Occupational Safety and Health Administration (OSHA) was established as an agency of the U.S. Department of Labor. OSHA gradually took over the regulation of health and safety standards in the workplace. Although some states still maintain their own inspection plans, all are monitored by OSHA to ensure that stringent standards are maintained. A citation is issued for each violation, and a fine may be imposed for a serious infraction. Factory inspection also includes examination of payrolls and employment records. Any establishment covered by the Fair Labor Standards Act is subject to review by the Wage and Hour Division of the Labor Department to ascertain whether employers are complying with regulations.




Manufacturing, producing goods that are necessary for modern life from raw materials. The word manufacture comes from the Latin manus (hand) and facere (to make). Originally manufacturing was accomplished by hand, but most of today's modern manufacturing operations are highly mechanized and automated.

There are three main processes involved in virtually all manufacturing: assembly, extraction, and alteration. Assembly is the combination of parts to make a product. For example, an airplane is assembled when the manufacturer puts together the engines, wings, and fuselage. Extraction is the process of removing one or more components from raw materials, such as obtaining gasoline from crude oil. Alteration is modifying or molding raw materials into a final product—for example, sawing trees into lumber.

Science and engineering are required to develop new products and to create new manufacturing methods, but other factors are involved in the manufacturing process as well. Legal requirements, such as obtaining operating permits and meeting industrial safety standards, must be met. Economic considerations, such as competition, worldwide markets, and tariffs, control to some degree what prices are set for manufactured goods and what inventories are needed.



Manufacturing has existed as long as civilizations have required goods: bricks to build the Mesopotamian city of Erech (Uruk), clay pots to store grain in ancient Greece, or bronze weapons for the Roman Empire. In the Middle Ages, silk factories operated in Syria, and textile mills were established in Italy, Belgium, France, and England. New routes discovered from Europe to the Far East and to the New World during the Renaissance (14th century to 17th century) stimulated demand for manufactured goods to trade. Factories were built to produce gunpowder, clothing, cast iron, and paper. The manufacturing of these goods was primarily done by hand labor, simple tools, and, rarely, by machines powered by water.


Industrial Revolution

The Industrial Revolution began in England in the middle of the 18th century when the first modern factories appeared, primarily for the production of textiles. Machines, to varying degrees, began to replace the workforce in these modern factories. The cotton gin, created by the American inventor Eli Whitney in 1793, mechanically removed cotton fibers from the seed and increased production. In 1801 Joseph Jacquard, a French inventor, created a loom that used cards with punched holes to automate the placement of threads in the weaving process. The development of the steam engine as a reliable power source by Thomas Newcomen, James Watt, and Richard Trevithick in England, and in America by Oliver Evans, enabled factories to be built away from water sources that had previously been needed to power machines. From the 1790s to the 1830s, more than 100,000 power looms and 9 million spindles were put into service in England and Scotland (see Factory System; Industrial Revolution).


Mass Production

In addition to inventing the cotton gin, Eli Whitney made another contribution to the factory system in 1798 by proposing the idea of interchangeable parts. Interchangeable parts make it possible to produce goods quickly because repairs and assembly can be done with previously manufactured, standard parts rather than with costly custom-made ones. This idea led to the development of the assembly line, where a product is manufactured in discrete stages. When one stage is complete, the product is passed to another station where the next stage of production is accomplished. In 1913 the American industrialist Henry Ford and his colleagues first introduced a conveyor belt to an assembly line for flywheel magnetos, a type of simple electric generator, more than tripling production. The assembly line driven by a conveyor belt was then implemented to manufacture the automobile body and motors.


Labor Movement

Labor unions, associations of workers whose goal is to improve their economic conditions, originated in the craft guilds of 16th-century Europe. The modern labor movement, however, did not start until the late 19th century, when reliable railroad systems were developed. Railroads brought materials from diverse locations for final manufacturing and assembly and created a large demand for industrial labor. Labor unions gained enormous strength after World War II (1939-1945), when the United States had both high inflation and a huge population of factory workers. This combination drove labor unions to negotiate for better contracts and wages, and they achieved significant influence in industry. Today fewer manufacturing jobs and the trend for factories to relocate to foreign countries have combined to diminish the strength of organized labor (see Trade Unions).


Military Operations and Manufacturing

When the United States joined the Allies against Hitler in World War II, the country was in its 11th year of economic depression, 17 percent of the workforce was unemployed, and manufacturers were unprepared to mobilize for wartime production. President Franklin Delano Roosevelt succeeded in motivating the industrial complex to invest in new manufacturing facilities through a combination of generous business contracts, tax laws, and patriotism. By 1943 manufacturing capacity had increased dramatically: 10,000 military airplanes were produced a month, and it took only 69 days to build a warship. When World War II ended, the United States was the leading producer of manufactured goods. After the war, part of this vast military manufacturing capacity was converted to create consumer items such as automobiles, furniture, and televisions.

The development of the Cold War between Communist and non-Communist powers was accompanied by a buildup of manufactured weapons such as fighter airplanes and bombers, submarines, missiles, and nuclear weapons. The shift to a military manufacturing base accelerated the development of space science and advanced electronics, particularly integrated circuitry, which would eventually become the processing engine for the modern personal computer. Computers, in turn, have helped increase the productivity of modern manufacturing plants because they enable automated design, production, and record keeping (see Computer-Aided Design/Computer-Aided Manufacture).


Types of Manufacturing

Manufacturing processes can produce either durable or nondurable goods. Durable goods are products that exist for long periods of time without significant deterioration, such as automobiles, airplanes, and refrigerators. Nondurable goods are items that have a comparatively limited life span, such as clothing, food, and paper.


Iron and Steel Manufacture

Iron manufacturing originated about 3500 years ago when iron ore was accidentally heated in the presence of charcoal. The oxygen-laden ore was reduced to a product similar to modern wrought iron.

Today, iron is made from ore in blast furnaces. Oxygen and other elements are removed when the ore is mixed with coke (a material that contains mostly carbon) and limestone and is then blasted by hot air. The gases formed by the burning materials combine with the oxygen in the ore and reduce the ore to iron. This molten iron still contains many impurities, however. Steel is manufactured by first removing these impurities and then adding elements, predominantly carbon, in a controlled manner. Strong steels contain up to 2 percent carbon. The steel is then shaped into bars, plates, sheets, and such structural components as girders (see Iron and Steel Manufacture).


Textile Manufacturing

Raw fibers of cotton, wool, or synthetic materials such as nylon and polyester go through a complex series of processes to form fabrics for apparel, home furnishings, and biomedical, recreation, and aerospace products. In most cases, loose tufts of fiber are straightened and formed into thick, ropelike strands called slivers, which are then thinned for spinning. In the spinning process, the fibers are twisted to add strength. Synthetic fibers are generally made in a continuous string, but sometimes they go through a texturing process to give them a natural appearance. These twisted fibers, known as yarns, are then woven or knitted into fabrics. Weaving is a process that interlaces two sets of yarns, the warp and filling, in a variety of patterns that impart design and different physical characteristics. Knitting is a technique that loops yarns together to form fabric. The fabrics are then dyed, and finishes applied (see Textiles).


Lumber Industry

The lumber industry converts trees into construction materials or the precursor material for pulp and paper. Trees are harvested, debarked, then sawed into usable shapes such as boards and slabs. The lumber is graded for use and quality and then dried in large kilns, or ovens. Lumber is manufactured into boards, plywood, composition board, or paneling. Pulp wood for paper is sent directly to the manufacturer without sawing or drying (see Lumber Industry).


Automobile Manufacturing

The automobile was the first major manufactured item built by a mass production system using cost-effective assembly line techniques. Today, before an automobile reaches its final assembly point, subsystems, such as the engine, transmission, electrical components, and chassis, are fabricated from raw materials in other specialized facilities. The metallic automobile body parts are stamped and welded together by robots into a unibody, or one-piece, construction. This body is then dipped in a succession of chemical baths that rustproof and provide undercoat and paint treatments. During the final assembly, conveyor systems direct all of the components to stations along the production route. The engine, transmission, fuel tank, radiator, electrical systems, body panels and doors, suspension system, tires, and interior accessories are fastened to the chassis. Rigid quality-control standards at every step ensure that the completed vehicle is safe and built to specifications (see Automobile Industry).


Aerospace Industry

The aerospace industry manufactures airplanes, rockets, and missiles, among other technologies. The first airplanes were constructed from wood and fabrics; modern airplanes are built from aluminum alloys, titanium, plastics, and advanced textile-reinforced composite materials. As in automobile manufacturing, components such as engines and landing gear are manufactured in separate facilities and then assembled with the wings, rudders, and fuselage to produce the finished airplane. Final assembly is conducted on an assembly line, where the partially manufactured airplane is moved from station to station.

Rockets are built on an individual basis. Rocket casings are created by winding high-strength carbon fibers and epoxy resins onto a cylindrical shape. The epoxy hardens and encapsulates the fibers to produce a strong, lightweight material. Solid rocket fuel is put into the body of the rocket. Thrust nozzles and exit cones are then added along with electronic guidance systems and payloads.


Petrochemical Industry

Petrochemicals are manufactured from naturally occurring crude oils and gases. Once removed from the earth, the crude oil is refined into gasoline, heating oil, kerosene, plastics, textile fibers, coatings, adhesives, drugs, pesticides, and fertilizers. Crude oil contains thousands of natural organic chemicals. These are separated by distilling, or boiling off, the compounds at different temperatures. Gases such as methane, ethane, and propane are also released. Methane, when combined with nitrogen and pressurized and heated, yields ammonia, an important ingredient in fertilizers. Simple plastic materials, such as polyethylene and polypropylene, are manufactured by first heating ethane and propane gases and then rapidly cooling them to alter their chemical structure (see Petroleum).



Manufacturing systems today are designed to recycle many of their components. For example, in the automotive industry, excess steel and aluminum can become scrap stock for new metal, rubber tires can be chopped and mixed with asphalt for new roadways, and engine starters can be remanufactured and sold again. Recycling for newer materials, such as composites (combinations of materials designed with superior physical and mechanical properties), has yet to be developed, however.

Emission control will be a critical issue for future manufacturers. Smoke scrubbers must remove dangerous gases and particulates from industrial plant discharges, and manufacturing facilities that dump chemicals into rivers must develop methods of eliminating or reusing these waste products.

The economically advantageous automated factory has become the norm. Most automobile engines are manufactured using robotic tools and handling systems that deliver the engine to various machining sites. Computers with sophisticated inventory tracking programs make it possible for items to be assembled and delivered at the manufacturing facility only as they are needed. In demand-activated manufacturing, when an item is sold, a computer schedules the manufacture of a replacement for the unit sent to the customer.
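The demand-activated loop just described can be sketched in a few lines of Python. This is a minimal illustration only; the `Inventory` class and its method names are invented for the sketch and are not drawn from any real manufacturing system.

```python
# Minimal sketch of demand-activated (pull) manufacturing: each sale
# immediately schedules the build of a replacement unit. All names
# here are illustrative, not from any real inventory system.
from collections import deque

class Inventory:
    def __init__(self, item, stock):
        self.item = item
        self.stock = stock
        self.build_queue = deque()  # pending replacement orders

    def sell(self, quantity=1):
        """Record a sale and schedule one replacement per unit sold."""
        self.stock -= quantity
        for _ in range(quantity):
            self.build_queue.append(self.item)

    def run_factory(self):
        """Work off the build queue, restoring stock to its old level."""
        while self.build_queue:
            self.build_queue.popleft()
            self.stock += 1

engines = Inventory("engine", stock=10)
engines.sell(3)        # three units shipped; three builds scheduled
engines.run_factory()  # replacements manufactured as needed
print(engines.stock)   # back to 10
```

The point of the pattern is that production is triggered by actual sales rather than by a forecast, so stock on hand stays near a fixed target.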

Engineers use computers to help them design new products efficiently. The Boeing 777 jet, for example, was developed in record time by having its entire design and manufacturing systems created on a computer database rather than using traditional blueprints.




Aviation, term applied to the science and practice of flight in heavier-than-air craft, including airplanes, gliders, helicopters, ornithopters, convertiplanes, and VTOL (vertical takeoff and landing) and STOL (short takeoff and landing) craft (see Airplane; Glider; Helicopter). These are distinguished from lighter-than-air craft, which include balloons (free, usually spherical; and captive, usually elongated), and dirigible airships (see Airship; Balloon).

Operational aviation is grouped broadly into three classes: military aviation, commercial aviation, and general aviation. Military aviation includes all forms of flying by the armed forces—strategic, tactical, and logistical. Commercial aviation embraces primarily the operation of scheduled and charter airlines. General aviation embraces all other forms of flying such as instructional flying, crop dusting by air, flying for sport, private flying, and transportation in business-owned airplanes, usually known as executive aircraft.



Centuries of dreaming, study, speculation, and experimentation preceded the first successful flight. The ancient legends contain numerous references to the possibility of movement through the air. Philosophers believed that it could be accomplished by imitating the wing motions of birds, and by using smoke or other lighter-than-air media. The first form of aircraft made was the kite, about the 5th century BC. In the 13th century, the English monk Roger Bacon conducted studies that led him to the conclusion that air could support a craft in the same manner that water supports boats. At the beginning of the 16th century, Leonardo da Vinci gathered data on the flight of birds and anticipated developments that subsequently became practical. Among his important contributions to the development of aviation were his invention of the airscrew, or propeller, and the parachute. He conceived three different types of heavier-than-air craft: an ornithopter, a machine with mechanical wings designed to flap like those of a bird; a helicopter, designed to rise by the revolving of a rotor on a vertical axis; and a glider, consisting of a wing fixed to a frame on which a person might coast on the air. Leonardo's concepts involved the use of human muscular power, quite inadequate to produce flight with the craft that he pictured. Nevertheless, he was important because he was the first to make scientific proposals.



The practical development of aviation took various paths during the 19th century. The British aeronautical engineer and inventor Sir George Cayley was a farsighted theorist who proved his ideas with experiments involving kites and controlled and human-carrying gliders. He designed a combined helicopter and horizontally propelled aircraft and deserves to be called the father of aviation. The British scientist Francis Herbert Wenham used a wind tunnel in his studies and foresaw the use of multiple wings placed one above the other. He was also a founding member of the Royal Aeronautical Society of Great Britain. Makers and fliers of models included the British inventors John Stringfellow and William Samuel Henson, who collaborated in the early 1840s to produce the model of an airliner. Stringfellow's improved 1848 model, powered with a steam engine and launched from a wire, demonstrated lift but failed to climb. The French inventor Alphonse Penaud produced a hand-launched model powered with rubber bands that flew about 35 m (about 115 ft) in 1871. Another French inventor, Victor Tatin, powered his model plane with compressed air. Tethered to a central pole, it was pulled by two traction propellers; rising with its four-wheeled chassis, it made short, low-altitude flights.

The British-born Australian inventor Lawrence Hargrave produced a rigid-winged model, propelled by flapping blades that were operated by a compressed-air motor. It flew 95 m (312 ft) in 1891. In 1896 the American astronomer Samuel Pierpont Langley produced steam-powered, tandem-monoplane models with wingspans of 4.6 m (15 ft). They repeatedly flew 915 to 1220 m (3000 to 4000 ft) for about 1.5 min, climbing in large circles. Then, with power exhausted, they descended slowly to alight on the waters of the Potomac River.

Numerous efforts to imitate the flight of birds were also made with experiments involving muscle-powered paddles or flappers, but none proved successful. These included the early attempts of the Austrian Jacob Degen, who carried out various experiments from 1806 to 1813; the Belgian Vincent DeGroof, who crashed to his death in 1874; and the American R. J. Spaulding, who actually received a patent for his idea of muscle-powered flight in 1889.

More successful were the attempts of aeronauts who advanced the art through their study of gliding and contributed extensively to the design of wings. They included the Frenchman Jean Marie Le Bris, who tested a glider with movable wings; the American John Joseph Montgomery; and the renowned Otto Lilienthal of Germany. Lilienthal, whose experiments with aircraft included kites and ornithopters, attained his greatest success with glider flights in 1894-96. In 1896, however, he met his death when his glider went out of control and crashed. Percy S. Pilcher, of Scotland, who had attained remarkable success with his gliders, had a fatal fall in 1899. The American engineer Octave Chanute had limited success with multiplane gliders in 1896-1902. Chanute's most notable contribution to flight was his compilation of developments, Progress in Flying Machines (1894).

Additional information on aerodynamics and on flight stability was gained by a number of experiments with kites. The American inventor James Means published his results in the Aeronautical Annuals of 1895, 1896, and 1897. Lawrence Hargrave invented the box kite in 1893 and Alexander Graham Bell developed huge human-carrying tetrahedral-celled kites between 1895 and 1910.

Powered experiments with full-scale models were conducted by various investigators between 1890 and 1901. Most important were the attempts of Langley, who tested and flew an unmanned quarter-sized model in 1901 and 1903 before testing a full-scale model of his machine, which he called the aerodrome. This model was the first gasoline-engine-powered heavier-than-air craft to fly. His full-scale machine was completed in 1903 and tested twice, but each launching ended in a mishap. The German aviator Karl Jatho also tested a full-scale powered craft in 1903 but without success.

Advances through the 19th century laid the foundation for the eventual successful flight by the Wright brothers in 1903, but the major developments were the result of the efforts of Chanute, Lilienthal, and Langley after 1885. A sound basis in experimental aerodynamics had been established, although the stability and control required for sustained flight had not been acquired. More important, successful powered flight needed the light gasoline engine to replace the heavy steam engine.



On December 17, 1903, near Kitty Hawk, North Carolina, the brothers Wilbur and Orville Wright made the world's first successful flights in a heavier-than-air craft under power and control. The airplane had been designed, constructed, and flown by them, each brother making two flights that day. The longest, by Wilbur, extended to a distance of 260 m (852 ft) in 59 sec. The next year, continuing the development of their design and improving their skill as pilots, the brothers made 105 flights, the longest lasting more than 5 min. The following year, their best flight was 38.9 km (24.2 mi) in 38 min 3 sec. All these flights were in open country, the longest involving numerous turns, usually returning to near the starting point.

Not until 1906 did anyone else fly in an airplane. In that year short hops were made by a Romanian, Trajan Vuia, living in Paris, and by Jacob Christian Ellehammer, in Denmark. The first officially witnessed flight in Europe was made in France, by Alberto Santos-Dumont, of Brazil. His longest flight, on November 12, 1906, covered a distance of about 220 m (722 ft) in 21.2 sec. The airplane, the 14-bis, was of his own design, made by the Voisin firm in Paris, and powered with a Levavasseur 40-horsepower Antoinette engine. The airplane resembled a large box kite, with a smaller box at the front end of a long, cloth-covered frame. The engine and propeller were at the rear, and the pilot stood in a basket just forward of the main rear wing. Not until near the end of 1907 did anyone in Europe fly for 1 min; Henri Farman did so in an airplane built by Voisin.

In great contrast were the flights of the Wright brothers. Orville, in the U.S., demonstrated a Flyer for the Army Signal Corps at Fort Myer, Virginia, beginning September 3, 1908. On September 9 he completed the world's first flight of more than one hour and, also for the first time, carried a passenger, Lieutenant Frank P. Lahm, for a 6-min 24-sec flight. These demonstrations were interrupted on September 17, when the airplane crashed, injuring Orville and his passenger, Lieutenant Thomas E. Selfridge, who died hours later from a concussion. Selfridge was the first person to be fatally injured in a powered airplane. Wilbur, meanwhile, had gone to France in August 1908, and on December 31 of that year completed a flight of over 2 hours and 20 minutes, demonstrating total control of his Flyer, turning gracefully, and climbing or descending at will. Recovered from his injuries, and with Wilbur's assistance, Orville resumed demonstrations for the Signal Corps in the following July and met their requirements by the end of the month. The airplane was purchased on August 2, becoming the first successful military airplane. It remained in active service for about two years and was then retired to the Smithsonian Institution, Washington, D.C., where it is displayed today.

Prominent among American designers, makers, and pilots of airplanes was Glenn Hammond Curtiss, of Hammondsport, New York. He first made a solo flight on June 28, 1907, in a dirigible airship built by Thomas Baldwin. It was powered with a Curtiss engine, modified from those used on Curtiss motorcycles. In the following May, Curtiss flew alone in an airplane designed and built by a group known as the Aerial Experiment Association, organized by Alexander Graham Bell. Curtiss was one of the five members. In their third airplane, the June Bug, Curtiss, on July 4, 1908, covered a distance of 1552 m (5090 ft) in 1 min 42.5 sec, winning the first American award, the Scientific American Trophy, given for an airplane flight. At Reims, France, on August 28, 1909, Curtiss won the first international speed event, at about 75.6 km/h (47 mph). On May 29, 1910, he won the New York World prize of $10,000 for the first flight from Albany, New York, to New York City. In August of that year he flew along the shore of Lake Erie, from Cleveland, Ohio, to Sandusky, Ohio, and back. In January 1911 he became the first American to develop and fly a seaplane. The first successful seaplane had been made and flown by Henri Fabre, of France, on March 28, 1910.

The pioneer airplane flight across the English Channel, from Calais, France, to Dover, England, a distance of about 37 km (about 23 mi) in 35.5 min, was made July 25, 1909, by the French engineer Louis Blériot, in a monoplane that he had designed and built.

During the period before World War I the design of both the airplane and the engine showed considerable improvement. Pusher biplanes—two-winged airplanes with the engine and propeller behind the wing—were succeeded by tractor biplanes, with the propeller in front of the wing. Only a few types of monoplanes were used. Huge biplane bombers with two, three, or four engines were introduced by both contending forces in World War I. In Europe, the rotary engine was favored at first, but was succeeded by radial-type engines. In Britain and the U.S., water-cooled engines of the V type predominated.

The first transportation of mail by airplane to be officially approved by the U.S. Post Office Department began on September 23, 1911, at the Nassau Boulevard air meet, Long Island, New York. The pilot was Earle Ovington, who carried the mail bag on his knees, flying about 8 km (5 mi) to Mineola, Long Island, where he tossed the bag overboard, to be picked up and carried to the post office. The service was continued for only a week (see Airmail).

In 1911 the first transcontinental flight across the United States, from New York City to Long Beach, California, was completed by the American aviator Calbraith P. Rodgers. He left Sheepshead Bay, in Brooklyn, New York, on September 17, 1911, using a Wright machine, and landed at his goal on December 10, 1911, 84 days later. His actual flying time was 3 days, 10 hr, and 14 min.



During World War I both airplanes and lighter-than-air craft were used by the belligerents. The urgent necessities of war provided the impetus for designers to construct special planes for reconnaissance, attack, pursuit, bombing, and other highly specialized military purposes.

Because of the pressure of war, more pilots were trained and more planes built during the 4 years of conflict than in the 13 years since the first flight.

Many of the surplus military planes released after the war were acquired and operated by wartime-trained aviators, who “barnstormed” from place to place, using such fields as were available. Their operations included practically any flying activity that would provide an income, including carrying passengers, aerial photography, advertising (usually by writing names of products on their airplanes), flight instruction, air racing, and exhibitions of stunt flying.

Notable flights following World War I included a nonstop flight of 1170 km (727 mi) from Chicago to New York City in 1919 by Captain E. F. White of the U.S. Army. In 1920 Major Quintin Brand and Captain Pierre Van Ryneveld, of England, flew from Cairo to Cape Town, South Africa. In the same year, five U.S. Army Air Service planes, each carrying a pilot and a copilot-mechanic, with Captain St. Clair Streett in command, flew from New York City to Nome, Alaska, and returned. In other army exploits, Lieutenant James Harold Doolittle, in 1922, made a one-stop flight from Jacksonville, Florida, to San Diego, California; Lieutenant Oakley Kelly and Lieutenant John A. Macready made the first nonstop transcontinental flight, May 2-3, 1923, from Roosevelt Field, Long Island, to Rockwell Field, San Diego, California; and the first flight completely around the world was made from April 6 to September 28, 1924. Four Liberty-engined Douglas Cruisers, each with two men, left Seattle, Washington, and two returned. One plane had been lost in Alaska, the other in the North Sea; there were no fatalities.

Transoceanic flying began with the flight of the NC-4, the initials denoting Navy-Curtiss. This huge flying boat flew from Rockaway Beach, Long Island, to Plymouth, England, with intermediate stops including Newfoundland, the Azores, and Lisbon, Portugal; the elapsed time was from May 8 to May 31, 1919. The first nonstop transatlantic flight was made by the British aviators John William Alcock and Arthur Whitten Brown. They flew from St. John's, Newfoundland, to Clifden, Ireland, June 14-15, 1919, in a little over 16 hours. The fliers won the London Daily Mail prize of $50,000.

The first nonstop solo crossing of the Atlantic Ocean was the flight of the American aviator Charles A. Lindbergh from New York City to Paris, a distance of 5810 km (3610 mi) covered in 33.5 hr on May 20-21, 1927. On June 28-29 of the same year Lieutenant Lester J. Maitland and Lieutenant Albert F. Hegenberger (1895-1983) of the U.S. Army made a nonstop flight from California to Hawaii, a distance of 3860 km (2400 mi) in 26 hr. Between August 27 and September 14 two other Americans, William S. Brock and Edward F. Schlee, flew from Newfoundland to Japan, a trip of 19,800 km (12,300 mi).

The first nonstop westward flight by an airplane over the Atlantic was on April 12-13, 1928, by Captain Herman Köhl and Baron Guenther von Hünefeld, Germans, and Captain James Fitzmaurice, an Irishman. They flew from Dublin, Ireland, to Greenly Island, Labrador, a distance of 3564 km (2215 mi). Between May 31 and June 9, 1928, Sir Charles Kingsford Smith and Charles T. P. Ulm, Australian fliers, with Harry W. Lyon and James Warner, Americans, flew the Southern Cross from Oakland, California, to Sydney, Australia, 11,910 km (7400 mi) with stops at Hawaii, the Fiji Islands, and Brisbane, Australia. Three American fliers, Amelia Earhart with pilots Wilmer Stultz and Louis Gordon, crossed the Atlantic from Trepassey Bay, Newfoundland, to Burry Port, Wales, on June 17-18; and from July 3 to 5 Captain Arturo Ferrarin and Major Carlo P. Del Prete, Italian army pilots, made a nonstop flight of 7186 km (4466 mi) across the Atlantic from Rome to Point Genipabu, Brazil.

In 1920 airlines were established for mail and passenger service between Key West, Florida, and Havana, Cuba, and between Seattle, Washington, and Vancouver, British Columbia. In 1921 scheduled transcontinental airmail service between New York City and San Francisco was inaugurated by the U.S. Post Office Department. Congress passed the Kelly Air Mail Act in 1925, authorizing the Post Office Department to contract with air-transport operators for the transportation of U.S. mail. Fourteen domestic airmail lines were established in 1926. Lines were also established and extended between the U.S. and Central and South America and between the United States and Canada.

Between 1930 and 1940, commercial air transportation was greatly expanded, and frequent long-distance and transoceanic flights were undertaken. The transcontinental nonstop flight record was reduced by American aviators flying small planes and, subsequently, transport planes. In 1930 Roscoe Turner flew from New York City to Los Angeles in 18 hr 43 min; Frank Hawks flew from Los Angeles to New York City in 12 hr 25 min. In 1937 Howard Hughes flew from Burbank, California, to Newark, New Jersey, in 7 hr 28 min. In 1939 Ben Kelsey flew from Marsh Field, California, to Mitchell Field, New York, in 7 hr 45 min.



Most of the major countries of the world developed commercial air transportation in varying degrees, with the U.S. gradually gaining ascendancy. On the foundations of the U.S. air-transport industry were built the military-transport commands that played a decisive role in winning World War II.

Largest of all international airlines in operation when World War II began was Pan American Airways, which, with its subsidiaries and affiliated companies, served 47 countries and colonies on 82,000 route miles, linking all continents and spanning most oceans.

The demands of World War II greatly accelerated the further development of aircraft. Important advances were achieved in the development of planes for bombing and combat and for the transportation of parachute troops and of tanks and other heavy equipment. Aircraft became a decisive factor in warfare.

Small aircraft production expanded rapidly. Under the Civilian Pilot Training program of the Civil Aeronautics Administration, private operators expanded their facilities and gave training to thousands of students, who subsequently became the backbone of the army, navy, and marine-air arms. Types of aircraft designed for personal use found extensive military use throughout the world. Large contracts for light planes were awarded by the U.S. Army and Navy in 1941.

During 1941 American military aircraft were in action on all fronts. The number of persons employed in the aviation industry totaled 450,000, compared to about 193,000 employed before World War II. About 3,375,000 passengers, about 1 million more than in 1940, were carried by 18 U.S. airlines. Mail and express loads increased by about 30 percent.

Toward the end of the war, airplane production attained an all-time high, air warfare increased in intensity and extent, and domestic airlines established new passenger- and cargo-carrying records. In the U.S., the number of planes produced in 1944 totaled 97,694, with an average weight of approximately 4770 kg (about 10,500 lb). An outstanding development in the same year was the appearance in air combat of German jet-engined and rocket-propelled fighter planes.



In 1945, U.S. military-aircraft production was sharply curtailed, but civilian-aircraft orders increased considerably. By the end of the year, U.S. manufacturers held orders for 40,000 planes, in contrast to the former production record for civilian use of 6844 planes in 1941. Again the domestic and international airlines of the U.S. broke all records, with all categories of traffic showing substantial gains over 1944. Both passenger fares and basic freight rates were reduced. International commercial services were resumed in 1945.

The experience gained in the production of military aircraft during the war was utilized in civil-aircraft production following the close of hostilities. Larger, faster aircraft, with such improvements as pressurized cabins, were made available to the airlines. Improved airports, more efficient weather forecasting, additional aids to navigation (see Air Traffic Control), and public demand for air transportation all aided in the postwar boom in airline passenger travel and freight transportation.

Experimentation with new aerodynamic designs, new metals, new power plants, and electronic inventions resulted in the development of high-speed turbojet planes designed for transoceanic flights, supersonic aircraft, experimental rocket planes, STOL craft, and the space shuttle (see Airplane; Jet Propulsion; Space Exploration).

In December 1986 the ultralight experimental aircraft Voyager successfully completed the first nonstop around-the-world flight without refueling. Voyager was designed by Burt Rutan in an unorthodox H shape with outrigger booms and rudders. The aircraft had two engines: one engine in front for takeoffs, landings, and maneuvering; the other in back for in-flight power. Composed mostly of lightweight plastic composite materials, the plane weighed only 4420 kg (9750 lb) at takeoff—with 4500 liters (1200 gallons) of fuel in its 17 fuel tanks—and 840 kg (1858 lb) on landing. Pilots Dick Rutan, Burt's brother, and Jeana Yeager flew 40,254 km (25,012 mi) in 9 days, 3 min, 44 sec at an average speed of 186.3 km/h (115.8 mph), establishing a distance and endurance record. The previous distance record of 20,169 km (12,532 mi) was set in 1962.
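The Voyager record figures quoted above are internally consistent, which a few lines of arithmetic confirm. The helper function below is purely illustrative and not part of any aviation library.

```python
# Sanity-check the Voyager figures quoted above: 40,254 km flown in
# 9 days, 3 minutes, 44 seconds at an average of 186.3 km/h.
def average_speed_kmh(distance_km, days=0, hours=0, minutes=0, seconds=0):
    """Average speed for a flight of the given distance and duration."""
    total_hours = days * 24 + hours + minutes / 60 + seconds / 3600
    return distance_km / total_hours

speed = average_speed_kmh(40254, days=9, minutes=3, seconds=44)
print(round(speed, 1))  # 186.3, matching the km/h figure in the text
```

Running the same duration against the distance in miles (25,012 mi) likewise reproduces the quoted 115.8 mph.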

In 1967 the Federal Aviation Administration (FAA) replaced the Federal Aviation Agency, which had been created in 1958. The FAA classified the air transportation industry in the U.S. as commercial air carriers, regionals and commuters, helicopters, and all-cargo carriers. Nonscheduled air carriers are in a separate classification. The scheduled airlines maintain a trade association known as the Air Transport Association of America. See Air Transport Industry; Transportation, Department of.

After World War II a marked increase in the use of company-owned airplanes for the transportation of executives took place. In fact, by the early 1980s such craft made up well over 90 percent of all aircraft active in the U.S. General trends in the U.S. air transport industry in the 1980s included airline deregulation (begun in 1978), mergers of airlines, and fluctuating air fares and “price wars.” Three major U.S. airlines ceased operations in 1991: Pan American and Eastern, both of which had been flying since 1928, and a relative newcomer, Midway, founded in 1979.

Conferences on the problems of international flight were held as early as 1889, but it was not until 1947 that an organization was established to handle the problems of large-scale international air travel: the International Civil Aviation Organization (ICAO), an affiliate of the United Nations, with headquarters in Montréal. Working in close cooperation with ICAO is the International Air Transport Association (IATA), which also has its headquarters in Montréal and comprises about 100 airlines that seek jointly to solve mutual problems. Another such organization is the Fédération Aéronautique Internationale (FAI).

Aerospace Industry



Aerospace Industry, complex of manufacturing firms that produce vehicles for flight—from balloons, gliders, and airplanes to jumbo jets, guided missiles, and the space shuttle. The industry also encompasses producers of everything from seat belts to jet engines and missile guidance systems. The term aerospace is a contraction of the words aeronautics (the science of flight within Earth’s atmosphere) and space flight. It came into use during the 1950s when many companies that had previously specialized in aeronautical products began to manufacture equipment for space flight.

The aerospace industry traces its origins to the Wright brothers’ historic first flights in a heavier-than-air machine at Kitty Hawk, North Carolina, on December 17, 1903. Until World War I (1914-1918), airplane construction largely remained in the hands of industry pioneers, who built each wood-framed plane by hand. Wartime military needs drove improvements in aircraft design. By the 1930s all-metal planes featuring retractable landing gear and high-performance engines were commonly used to deliver airmail and carry civilian passengers in Europe and the United States. During World War II (1939-1945) the industry made further strides with the introduction of massive production facilities that turned out tens of thousands of airplanes. World War II research and development resulted in radar, electronic controls, jet aircraft with gas turbine engines, and combat rockets.

Postwar tension between the Union of Soviet Socialist Republics (USSR) and the United States drove aerospace technologies to new highs as the two countries raced to establish a presence in space. By the start of the Apollo Program in 1961, development and construction of space flight vehicles and supporting systems occupied a major portion of the American and Soviet aerospace industries. At the close of the 20th century, aerospace firms around the world produced rockets and artificial satellites. Originally developed for national space exploration and military purposes, these spacecraft found peacetime uses in telecommunications, navigation, and meteorology.



More than 40 countries have industries engaged in some form of aerospace production. The largest, the American aerospace industry, employs approximately 900,000 people. American manufacturer The Boeing Company leads the world in production of commercial airplanes and military aircraft. Other major U.S. aerospace manufacturers include the Lockheed Martin Corporation, the world’s largest producer of military aircraft and equipment, and the Raytheon Company, a global leader in air traffic control systems and a major supplier of aircraft, weapons systems, and electronic equipment to the U.S. government.

The European aerospace industry employs about 420,000 people, with workers from the United Kingdom, France, and Germany accounting for more than two-thirds of these employees. Airbus, headquartered in Toulouse, France, is the world’s second largest manufacturer of commercial aircraft. European Aeronautic Defense and Space Company (EADS) owns 80 percent of Airbus, and Britain’s BAE Systems PLC (formerly British Aerospace) owns the other 20 percent.

Canada ranks among the top six aerospace producers in the world. The Canadian industry employs 59,000 people and is a global leader in production of commercial helicopters and business aircraft. Canadian aerospace manufacturer Bombardier ranks third in the production of nonmilitary aircraft and leads the world in the production of business jets and regional jet airliners.



Products of the aerospace industry fall into four general categories. The largest product category, aircraft, encompasses aircraft produced for military purposes, passenger and cargo transport, and general aviation (business jets, recreational airplanes, traffic helicopters, and all other aircraft). This category also includes aircraft engines. The wide variety of missiles produced for military use makes up another product category. Space vehicles, such as the space shuttle and artificial satellites, and rockets to launch them into space, comprise their own category. The final category is made up of the thousands of different pieces of equipment and equipment systems—both those on board flight vehicles and those on the ground—that make flying a relatively safe and comfortable endeavor.


Aircraft and Jet Engines

Sales of aircraft, including their engines and parts, total more than the sales of all other aerospace products combined. The production of military aircraft and accessories has traditionally dominated the field of aircraft production. In the late 20th century, however, the demand for commercial jets increased around the world while global defense spending declined.


Military Aircraft

Aerospace firms produce a broad variety of military aircraft, including fighter jets, bombers, attack aircraft, troop transports, and helicopters. Each type of craft is designed for a specific purpose. Fighter jets engage enemy aircraft, attack targets on or below the Earth’s surface, and perform reconnaissance missions. Bombers specialize in striking at distant surface targets. Attack aircraft carry lighter bombs than bombers and hit surface targets at closer range. Helicopters are used in rescue work, to transport troops and supplies, and less frequently, on attack missions. The Boeing Company, Lockheed Martin Corporation, and Northrop Grumman Corporation are among the largest builders of military aircraft in the world.


Commercial Aircraft

Aerospace products in the commercial aircraft category include jet airplanes used by commercial airlines. Jet airliners generally fall under one of two classifications, depending on the number of aisles in the main passenger cabin. In narrow-body jets, a single aisle divides the cabin into two banks of seats. In wide-body jets, twin aisles separate the cabin into three banks of seats. The first of the wide-body jets, the Boeing 747, entered service in 1970. This massive jetliner is capable of transporting more than 400 passengers. Today, a variety of wide-body jets are produced by Boeing and Airbus. Airbus has launched production of a "superjumbo" jet, the A380, with seating for 555 passengers on two decks. It is scheduled to begin service in 2006.

Narrow-body jets seat fewer passengers. Boeing and Airbus build large narrow-body jets that carry between 100 and 200 passengers. For commuter flights, airlines use smaller jets, called regional jets, some seating as few as six passengers. The majority of these planes are built by Canadian airplane manufacturer Bombardier and Brazilian manufacturer Empresa Brasileira de Aeronautica (Embraer).


Aircraft for General Aviation

Aerospace manufacturers produce more than 30 types of general aviation aircraft, a category that encompasses corporate aircraft, recreational airplanes, planes used to spray agricultural crops, and helicopters for police, ambulance, and patrol service. Corporate aircraft are usually powered by jet engines and carry up to 40 passengers. Major manufacturers in the corporate jet market include the Cessna Aircraft Company, Gulfstream Aerospace Corporation, and Raytheon in the United States, Bombardier in Canada, and Dassault Aviation in France. Recreational pilots commonly fly single-seat or twin-seat planes designed and manufactured by several companies, including Cessna and The New Piper (formerly Piper Aircraft Corporation).


Jet Engines

Other aerospace firms specialize in designing and building the engines that power aircraft. The three most common types of jet engines are the turbojet, the turboprop, and the turbofan (see Jet Propulsion). In turbojet engines, air entering the engine is compressed and directed into a combustion chamber, where it is mixed with fuel vapor and burned; the hot exhaust spins a turbine that drives the compressor before rushing out the rear of the engine to produce thrust. In turboprop engines, the turbine drives a propeller mounted in front of the engine, and the propeller generates most of the thrust. Turbofans produce thrust by combining air passing through the engine core, hot engine exhaust, and air accelerated by a large fan.
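The thrust all three engine types generate can be approximated with the basic momentum relation F = ṁ(v_exhaust − v_flight). The sketch below is a deliberately simplified model with illustrative numbers of my own; real engine performance also involves pressure thrust, fuel mass flow, and bypass ratios:

```python
def net_thrust(mass_flow_kg_s, exhaust_velocity_m_s, flight_velocity_m_s):
    """Idealized net thrust: F = m_dot * (v_exhaust - v_flight).

    Ignores pressure thrust, added fuel mass, and bypass effects;
    the inputs below are illustrative, not real engine data.
    """
    return mass_flow_kg_s * (exhaust_velocity_m_s - flight_velocity_m_s)

# Hypothetical turbojet: 100 kg/s of air, 600 m/s exhaust, 250 m/s cruise
print(net_thrust(100, 600, 250))  # 35000 N, i.e. 35 kN of thrust
```

The same relation shows why turbofans are efficient: accelerating a large mass flow by a small velocity change costs less energy than accelerating a small mass flow by a large one, for the same thrust.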

Production of large jet engines for airliners is dominated by American jet engine manufacturers General Electric Company and Pratt & Whitney, and Rolls-Royce of Britain. These companies also produce engines for jet fighters, bombers, and transports. Several manufacturers produce smaller gas turbines for corporate jets and helicopters. AlliedSignal Engines, part of Honeywell International in the United States, supplies a wide range of engines for regional airliners, corporate jets, helicopters, and military aircraft.



Aerospace firms design and build a wide variety of missiles for military use. These range in size from large guided missiles that carry nuclear warheads to small portable rockets carried and launched by foot soldiers. Modern missiles incorporate their own propulsion systems and sophisticated guidance systems.


Surface-Fired Missiles

Surface-fired missiles launch from the ground or the sea. There are two chief types of surface-fired missiles: those fired at targets on Earth’s surface or in its oceans, and those fired at targets in the air. The largest surface-to-surface missiles are intercontinental ballistic missiles (ICBMs), which are capable of carrying nuclear warheads to targets as far as 15,000 km (9,200 mi) away. Soldiers use smaller surface-to-surface missiles against enemy tanks or troops. Still other missiles dive deep into the ocean to search out and destroy enemy submarines. Surface-to-air missiles are used against airborne targets, such as airplanes or other missiles. This category includes the U.S. Army’s Patriot missile system, a large missile and launcher that intercepts and destroys enemy missiles before they strike. The Patriot missile system was developed for the U.S. military by Raytheon and Lockheed Martin. Patriots are also used by Germany, Israel, Japan, and a number of other countries.


Air-Launched Missiles

Air-launched missiles are launched from fighter aircraft. Missiles in this category tend to be short-range. Air-to-air missiles, such as the U.S. Sidewinder missile built by Raytheon and other companies, usually rely on infrared heat-seeking devices to track their targets. These sophisticated missiles follow and destroy enemy aircraft and can change course when their targets do. Air-to-surface missiles commonly incorporate global positioning and inertial guidance systems, or miniature television homing systems.


Spacecraft and Launch Vehicles

Aerospace contractors design and build spacecraft for military and commercial purposes, and for use in space exploration. Products in this category include unmanned spacecraft, such as satellites and space probes, and piloted spacecraft. Other aerospace contractors design and build the rockets used to propel spacecraft out of Earth’s atmosphere and into space.



Telecommunications companies contract with aerospace manufacturers to design and build communications satellites. These Earth-orbiting satellites relay radio signals for cellular telephone calls, television broadcasts, and a number of other wireless communications. Military networks of defense-system satellites detect missile and satellite launches in other countries. Surveillance satellites provide a way to monitor activity in other countries, making it possible to detect terrorist actions or other illegal activities. The U.S. military also maintains 24 satellites as part of the global positioning system (GPS), an electronic satellite navigation system. Research satellites gather scientific information. The National Aeronautics and Space Administration (NASA) uses research satellites to observe Earth, other planets and their moons, comets, stars, and galaxies. The Hubble Space Telescope orbits about 610 km (about 380 mi) above Earth’s surface, photographing objects as far as 15 billion light-years away.

The largest manufacturers of satellites include the American companies Hughes Space and Communications Company, Lockheed Martin, and Loral Space & Communications, and the French conglomerate Alcatel. These and other satellite manufacturers develop, build, and sometimes operate satellites for private companies, the military, and governments.


Space Shuttle

The space shuttle is the only piloted spacecraft produced in the United States. It consists of three main components: an orbiter, propulsion systems—two solid rocket boosters and three main engines—and an external fuel tank. Shuttle orbiters are reusable, designed to withstand 100 missions or more each. Many different aerospace contractors contribute to the shuttle’s design, construction, and maintenance. NASA and the United Space Alliance, a partnership between Boeing and Lockheed Martin, oversee shuttle design and construction.


Launch Vehicles

Some aerospace companies design and build launch vehicles—rockets that propel spacecraft out of Earth’s atmosphere and into space. To escape Earth’s atmosphere, launch vehicles must reach velocities of about 30,000 km/h (about 18,500 mph). To achieve this speed and power, aerospace firms build rockets composed of two or more stages, stacked one atop another. The largest manufacturers of launch vehicles include Lockheed Martin, which makes several versions of its Atlas and Titan rockets, and French rocket manufacturer Arianespace, which builds the Ariane launch vehicle. Boeing also manufactures rockets for use as launch vehicles. Rockets from Boeing’s Delta family, for example, launched all the GPS satellites.
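The velocity figure above can be cross-checked against the speed required for a low circular orbit, v = sqrt(GM/r). The calculation below is my own, using standard values for Earth's gravitational parameter and radius:

```python
import math

GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_orbit_speed_kmh(altitude_m):
    """Speed for a circular orbit at the given altitude: v = sqrt(GM/r)."""
    r = R_EARTH + altitude_m
    return math.sqrt(GM_EARTH / r) * 3.6  # convert m/s to km/h

# At a 200-km low Earth orbit:
print(round(circular_orbit_speed_kmh(200e3)))  # roughly 28,000 km/h
```

This agrees with the rough 30,000 km/h figure in the text; escaping Earth's gravity entirely requires still more, about 40,000 km/h.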


Flight Equipment and Navigational Aids

The fourth and final category encompasses the thousands of different pieces of equipment and equipment systems found on flight vehicles and ground-based flight support facilities. Some firms specialize in flight and engine controls for various flight vehicles. The space shuttle orbiter has more than 2,000 different controls and displays in the crew compartment. Other firms design and build instruments for flight navigation and radar systems, landing gear, flight data recorders, and cabin-pressure control systems. Still others manufacture seats, lights, kitchen equipment, and waste management systems. Companies that specialize in missile technology build state-of-the-art guidance systems, such as infrared heat-seeking devices and computer navigational systems.

Aerospace firms also produce ground-based navigational systems that support flight vehicles. These range from the radar, radio, and computers used in air traffic control at airports to the state-of-the-art command and control systems that track and operate spacecraft millions of miles from Earth. Others produce sophisticated remote controls that enable engineers on the ground to change a spacecraft’s course or to operate telescopes or cameras.



The area of research and development constitutes one of the largest expenditures of the aerospace industry. Development of a new flight vehicle might take a decade or more and involve thousands of people. Such an endeavor requires significant advances in equipment and systems—in some cases it calls for entirely new inventions—and several billion dollars. Because the cost of developing new flight vehicles is so high, most large aerospace companies devote their research and development resources to improving existing products. They may redesign aircraft components to make them lighter and more fuel efficient, for example, or redesign wings or body surfaces to make the craft travel faster (see Aerodynamics).

Much of the design process takes place on supercomputers capable of performing billions of operations per second. Computer-aided design enables engineers to test thousands of design parameters, such as the shape or angle of wings. The designer uses a computer to create a model of the flight vehicle’s basic structure, or airframe, and then to simulate flight in various atmospheric conditions (see Computer-Aided Design/Computer-Aided Manufacturing). In addition to the shape and size of the airframe, engineers must also consider thousands of details. For example, they must consider the weight and placement of the engines, how and where fuel will be stored, the type and layout of instruments in the cockpit, and details of the passenger compartment, such as the number of seats and their dimensions. In designing commercial airplanes, engineers must also plan for entertainment systems, food storage and preparation, and the location and number of lavatories.

After preliminary computer designs are in place, engineers build a scale model of the aircraft and subject it to a series of tests in a wind tunnel. Wind tunnels simulate the conditions encountered by the flight vehicle as it moves through the air. Many research facilities have their own wind tunnels. Manufacturers also have access to government-funded wind tunnels, such as NASA’s Ames Research Center tunnel at Moffett Field, California. This massive wind tunnel can accommodate a full-size aircraft with a wingspan of 22 m (72 ft). Observations made during wind tunnel testing confirm or invalidate design assumptions tested on the computer. Engineers use the results of the wind tunnel tests to refine design as necessary.

Once the design has been finalized, engineers build one or more full-size prototypes of the flight vehicle and subject them to a barrage of additional tests. Engineers confirm that the structure can withstand the thundering vibrations and heat produced by the jet engines. They use machines to bend, twist, and push the aircraft to verify that it can withstand the stresses it will likely encounter during flight. Engineers also confirm that flight instruments will withstand the pressure and sub-zero temperatures of high altitudes. The engines, landing gear, navigational systems, and other aircraft equipment undergo equally rigorous testing. Finally, pilots take a prototype for a test flight to verify the results of earlier exercises.



The manufacturing process is usually coordinated by a prime contractor that manages a number of subcontractors specializing in particular components of the flight vehicle. Subcontractors build and test their products in their own facilities, then deliver them to the prime contractor’s facility to be integrated into the flight vehicle. The prime contractor oversees the assembly of the flight vehicle, ensures that the project meets schedule and budget requirements, and assumes ultimate responsibility for the safety of the aircraft.

Modern aircraft are often built from parts that come from all over the world. For example, the McDonnell Douglas MD-11 commercial jet, which entered production in the early 1990s, incorporated parts from Italy, Spain, Japan, Brazil, Canada, the United States, and Britain. The exterior panels of the plane’s main body, or fuselage, were produced by the Italian company Aeritalia, which also supplied the plane’s vertical stabilizer and other parts. The Spanish firm CASA made landing-gear doors and the horizontal stabilizer. Japanese companies supplied certain tail parts and movable flaps on the wings called ailerons. Additional ailerons came from Brazil, the nose gear originated in Britain, Canadian firms delivered major wing assemblies, and the engines were built in the United States and Britain. The plane came together at the plant of the prime contractor, McDonnell Douglas, in California.



The earliest aviators made their own wood-framed airplanes by hand. Orville and Wilbur Wright completed their historic 1903 flight in a machine of their own design. While the Wright brothers quietly worked to perfect and patent their flying machine, Brazilian inventor Alberto Santos-Dumont designed and flew a biplane in Paris in 1906. In the following years, fledgling aviation further captured the attention of the public. Wilbur Wright made a triumphal airplane tour of Europe in the summer of 1908. In July 1909 French aviator Louis Blériot flew a plane of his own design across the English Channel, completing a highly symbolic journey in the history of flight.


The First Airplane Manufacturers

The success of the Wright brothers, Santos-Dumont, and other pioneering aviators created a small demand for flying machines on both sides of the Atlantic Ocean. In Paris, France, the Voisin brothers, who had helped Santos-Dumont build his biplane in 1906, set up the first facility to build airplanes for sale. In the earliest airplane shops, a small number of workers built airplanes from wood and bamboo frameworks covered with fabric. They used modified engines from automobiles and motorcycles or lightweight boat engines to power the planes. New ideas were tested by simple trial and error: workers built the planes and saw whether they flew.

By 1909 the Voisin brothers had gained a reputation for building reliable airplanes. That year, several competitors arrived with Voisin machines at an aerial exhibition and flying meet held at Rheims, France. Publicity from the exhibition at Rheims brought orders for about 20 more machines by the end of the year.


World War I

In the years leading up to World War I, militaries on both sides of the Atlantic Ocean grew to appreciate the role airplanes could play in warfare. While Wilbur Wright toured Europe to attract the interest of the public, Orville Wright demonstrated their invention before officers of the U.S. Army. Blériot’s successful crossing of the English Channel convinced European militaries of their need for airplanes.

The military saw uses for airplanes in aerial scouting missions and to carry small bombs that were dropped by hand (see Air Warfare). The Nieuport firm, founded in France in 1909, responded to this demand by producing monoplanes for the French army and for military services in Italy, Britain, Russia, and Sweden. Blériot and a number of other manufacturers followed suit, and by the start of World War I in the summer of 1914, Germany, France, Britain, and Russia each had 200 to 300 military planes plus several airships. American manufacturers lagged behind their European counterparts. In 1912 U.S. firms produced just 39 airplanes. In 1915, as the war raged across Europe, the United States Congress formed the National Advisory Committee for Aeronautics (NACA) to fund research and development in the flight industry. Despite this effort, when the United States entered the war in 1917, it had only 16 airplane-building companies, and only 6 of them had built as many as ten airplanes.

The rate of airplane manufacture in Europe and the United States skyrocketed during the war. Britain turned out more than 55,000 airplanes from 1914 to 1918, and Germany produced 40,000 airplanes during the same period. The fledgling American industry also rallied behind the war effort, turning out 14,000 planes in 1918 alone. By the end of the war, the American aerospace industry had grown to 200,000 workers.


Innovation Between the Wars

In the years following World War I, the frenzied pace of airplane production slowed, and the aircraft industry turned its attention to improvements to aircraft design. American and British firms, encouraged by NACA in the United States and the Royal Aircraft Establishment in Britain, investigated a broad range of design innovations. Progressive techniques of design, engineering, and construction also came from graduates of newly established professional aeronautical engineering schools, first introduced during the 1920s. These innovation efforts resulted in dramatic changes to aircraft. Wooden airframes gave way to lightweight metal structures, while improvements in engine technology and fuels yielded greater speed and engine reliability.

These and other advances opened up new uses for airplanes. In 1921 the U.S. Post Office began regular transcontinental airmail service between New York City and San Francisco, California. Boeing developed its first commercial aircraft, the Model 40, in 1927 after winning a contract to fly mail for the U.S. Postal Service between Chicago, Illinois, and San Francisco.

In 1933 Boeing introduced the twin-engine Model 247 airplane, an all-metal, low-wing monoplane with retractable landing gear and room for ten passengers. The Model 247 revolutionized commercial aircraft design but was soon displaced by the larger, faster DC-3 designed and built by the Douglas Aircraft Company. The DC-3 carried 21 passengers and could travel across the country in less than 24 hours, though it had to stop many times for fuel. The DC-3 quickly came to dominate commercial aviation in the late 1930s and helped establish the United States as the leading producer of global airline equipment.


World War II

In 1939 World War II broke out in Europe. Airplane manufacturers in Britain and France, already overburdened with orders for military aircraft, placed massive orders for planes and equipment with American manufacturers. In response, the American aeronautics industry significantly expanded its production capabilities. By the time the United States entered the war in December 1941, the nation’s aerospace industry was prepared to meet the increased demand for aircraft and produced more than 300,000 aircraft before the war was over.

During the war the geographic centers of U.S. aircraft production, traditionally concentrated on the coasts, became more diversified. Wartime planners moved production inland to improve security against foreign attack and to satisfy the skyrocketing demand for workers. In Wichita, Kansas, formerly the center of the light plane industry, manufacturers produced thousands of training aircraft and larger combat planes. New facilities in Atlanta, Georgia, built B-29 bombers, and new plants in the Dallas-Fort Worth region of Texas turned out B-24 Liberator bombers, P-51 Mustang fighters, and AT-6 trainers.

World War II military research also produced technological innovations that forever changed aviation. Rocket scientists in Germany developed missile prototypes that later served as the foundation for space exploration. The most important of these prototypes was the world’s first large-scale rocket, the A-4 (later renamed the V-2).

Wartime efforts also resulted in the use of jet propulsion in military aircraft. In the late 1930s British aeronautical engineer Frank Whittle made the first successful tests of the turbojet engine. The Germans, French, and Italians made subsequent improvements to jet engine design during the war. The British shared their engine technology with the United States, and by the end of World War II in 1945, Germany, Britain, and the United States had built jet-powered fighter planes.

After the war, most airplane manufacturers shifted their efforts back to passenger airplanes. They incorporated technology developed for troop transports during the war, such as pressurized cabins. This innovation enabled pilots to fly at higher altitudes, above turbulent weather, increasing passenger comfort. Lockheed began commercial production of the Constellation, one of the first commercial airplanes with a pressurized cabin. The Constellation joined the Douglas DC-3 and the newer DC-6 in transcontinental and transatlantic service. Together these large, comfortable airliners posed a significant threat to railway travel and ocean liners as the principal modes of long-distance transportation.


The Cold War

Following World War II, the United States and the Union of Soviet Socialist Republics (USSR) engaged in a long struggle that came to be known as the Cold War. The defense budgets of both countries escalated during this period as each tried to stay ahead of the other’s military technology. Assisted by NACA research and generous federal funding for aeronautical research and development, American firms such as General Electric and Pratt & Whitney developed powerful jet engines. These new engines powered subsequent generations of military aircraft, such as the North American F-86 Sabre fighter and the Boeing B-47 Stratojet bomber. American manufacturers reaped additional profits during the Cold War by selling helicopters, fighters, and transport aircraft to friendly foreign powers.

In 1957 the USSR put Sputnik, the world’s first artificial satellite, into orbit. In response, the United States revamped its aerospace efforts. In 1958 it restructured NACA and dubbed the new organization the National Aeronautics and Space Administration (NASA). NASA devoted all of its resources to catching up with—and beating—the Soviet space program. The United States also announced its intention to be the first nation to put a human on the moon. This led to the Apollo program, a multibillion-dollar space exploration effort that eventually sent 12 American astronauts to the surface of the moon.


Rise of Commercial Air Travel

British aerospace engineers revolutionized the air transport industry when they incorporated the jet engine, previously used only in military aircraft, into a commercial plane. The de Havilland Comet, introduced in 1952, was celebrated as the first commercial airplane powered by jet engines. Unforeseen structural weaknesses in the Comet caused a series of crashes, two of them fatal. The Comet was grounded for investigation for several years, giving American manufacturers the opportunity to catch up to their British counterparts. In the late 1950s Boeing and Douglas introduced the jet-powered 707 and DC-8. Pan American World Airways inaugurated Boeing 707 jet service in October 1958, and air travel changed dramatically almost overnight. Transatlantic jet service enabled travelers to fly from New York City to London, England, in less than eight hours, half the time a propeller airplane took to fly that distance. Boeing’s 707 carried 112 passengers at high speed and quickly completed the displacement of ocean liners and railroads as the principal form of long-distance transportation.
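The halving of the crossing time follows directly from cruise speeds. The distance and speeds below are approximate assumptions of mine (not figures from the source), and the model ignores winds, climb, and taxi time:

```python
def flight_time_h(distance_km, cruise_speed_kmh):
    """Idealized block time: distance divided by cruise speed."""
    return distance_km / cruise_speed_kmh

NYC_LONDON_KM = 5570  # approximate great-circle distance (assumed)

jet = flight_time_h(NYC_LONDON_KM, 850)    # assumed jet cruise speed
prop = flight_time_h(NYC_LONDON_KM, 480)   # assumed piston-airliner speed

print(round(jet, 1), round(prop, 1))  # jet well under 8 h, props nearly double
```

Even this crude estimate reproduces the pattern in the text: a jet crossing in well under eight hours against a propeller crossing taking roughly twice as long.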

In 1970 Boeing introduced the extremely successful 747, a huge, wide-body airliner. The giant aircraft, nicknamed the “jumbo jet,” could carry more than 400 people and several hundred tons of cargo. Douglas and Lockheed soon turned out their own versions of the jumbo jet, the DC-10 and the L-1011.


Globalization and Mergers

The Cold War, the space race, and advances in civil aeronautics made the aerospace industry one of the United States’ largest employers and one of the strongest and most robust industries of any kind in the world. By the late 1960s European aerospace industries were seeking ways to reduce their dependence on American manufacturers.

In an effort to usurp American leadership in the production of civil airliners, Britain and France joined forces to develop the Concorde supersonic transport, the first commercial jet to fly faster than the speed of sound (see Aerodynamics: Supersonics). The Concorde, first unveiled in 1967 and in passenger service from 1976, set the stage for other multinational European efforts to build and sell airplanes in competition with the big American aerospace companies. In 1970 French, German, British, and Spanish aerospace companies collaborated to form Airbus Industrie (now Airbus). The Airbus A-300 airplane, introduced four years later, inaugurated a family of air transports that by the early 2000s ranked second only to Boeing in worldwide sales. Additional European programs evolved as multinational groups formed to develop fighters, attack aircraft, and helicopters.

The 1991 collapse of the USSR and the ensuing end of the Cold War brought fundamental changes to the global aerospace industrial community. Former Soviet aerospace agencies reorganized as private entities that often collaborated with Asian, European, and American firms—strategic partnering that put them in better positions to obtain contracts. This strategy touched off a wave of mergers in the American aerospace industry. Martin Marietta acquired the aerospace division of General Electric Company in 1992, then merged with the aerospace giant Lockheed two years later. In 1997 Boeing acquired longtime rival McDonnell Douglas Corporation, and in 2000 it acquired Hughes Electronics Corporation’s space and communications division, the world’s leading manufacturer of communications satellites. Several European firms announced their intention to combine forces to challenge the newly formed American aerospace giants. In 1999 the French, German, and Spanish partners in the Airbus consortium merged to form the European Aeronautic Defence and Space Company (EADS), and by 2001 Airbus was a single centralized company.




Airplane, engine-driven vehicle that can fly through the air supported by the action of air against its wings. Airplanes are heavier than air, in contrast to vehicles such as balloons and airships, which are lighter than air. Airplanes also differ from other heavier-than-air craft, such as helicopters, because they have rigid wings; control surfaces, movable parts of the wings and tail, which make it possible to guide their flight; and power plants, or special engines that permit level or climbing flight.

Modern airplanes range from ultralight aircraft that weigh no more than 46 kg (100 lb) and carry a single pilot, to great jumbo jets that can carry several hundred people and many tons of cargo and that weigh nearly 454 metric tons at takeoff.

Airplanes are adapted to specialized uses. Today there are land planes (aircraft that take off from and land on the ground), seaplanes (aircraft that take off from and land on water), amphibians (aircraft that can operate on both land and sea), and airplanes that can leave the ground using the jet thrust of their engines or rotors (rotating wings) and then switch to wing-borne flight.



An airplane flies because its wings create lift, the upward force on the plane, as they interact with the flow of air around them. The wings alter the direction of the flow of air as it passes. The exact shape of the surface of a wing is critical to its ability to generate lift. The speed of the airflow and the angle at which the wing meets the oncoming airstream also contribute to the amount of lift generated.

An airplane’s wings push down on the air flowing past them, and in reaction, the air pushes up on the wings. When an airplane is level or rising, the front edges of its wings ride higher than the rear edges. The angle the wings make with the horizontal is called the angle of attack. As the wings move through the air, this angle causes them to push air flowing under them downward. Air flowing over the top of the wing is also deflected downward as it follows the specially designed shape of the wing. A steeper angle of attack will cause the wings to push more air downward. The third law of motion formulated by English physicist Isaac Newton states that every action produces an equal and opposite reaction (see Mechanics: The Third Law). In this case, the wings pushing air downward is the action, and the air pushing the wings upward is the reaction. This causes lift, the upward force on the plane.

Lift is also often explained using Bernoulli’s principle, which states that, under certain circumstances, a faster moving fluid (such as air) will have a lower pressure than a slower moving fluid. The air on the top of an airplane wing moves faster and is at a lower pressure than the air underneath the wing, and the lift generated by the wing can be modeled using equations derived from Bernoulli’s principle.
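The dependence of lift on airspeed, wing area, and wing shape described above is commonly summarized by the standard lift equation, L = ½ρV²S·C_L. The sketch below is illustrative only; the lift coefficient and wing area used are assumed example values, not figures from the text.

```python
# Illustrative lift calculation using the standard lift equation:
#   L = 0.5 * rho * V^2 * S * C_L
# The lift coefficient and wing area below are assumed example values.

def lift_newtons(rho, v, wing_area, c_l):
    """Lift force in newtons.

    rho       -- air density, kg/m^3 (about 1.225 at sea level)
    v         -- airspeed, m/s
    wing_area -- wing planform area, m^2
    c_l       -- dimensionless lift coefficient, which depends on the
                 wing's shape and its angle of attack
    """
    return 0.5 * rho * v ** 2 * wing_area * c_l

# Doubling airspeed quadruples lift, all else being equal:
slow = lift_newtons(1.225, 50.0, 16.0, 0.5)
fast = lift_newtons(1.225, 100.0, 16.0, 0.5)
print(fast / slow)  # 4.0
```

This is why the speed of the airflow matters as much as the wing's shape: lift grows with the square of airspeed.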

Lift is one of the four primary forces acting upon an airplane. The others are weight, thrust, and drag. Weight is the force that offsets lift, because it acts in the opposite direction. The weight of the airplane must be overcome by the lift produced by the wings. If an airplane weighs 4.5 metric tons, then the lift produced by its wings must be greater than 4.5 metric tons in order for the airplane to leave the ground. Designing a wing that is powerful enough to lift an airplane off the ground, and yet efficient enough to fly at high speeds over extremely long distances, is one of the marvels of modern aircraft technology.

Thrust is the force that propels an airplane forward through the air. It is provided by the airplane’s propulsion system: a propeller, a jet engine, or a combination of the two.

A fourth force acting on all airplanes is drag. Drag is created because any object moving through a fluid, such as an airplane through air, produces friction as it interacts with that fluid and because it must move the fluid out of its way to do its work. A high-lift wing surface, for example, may create a great deal of lift for an airplane, but because of its large size, it is also creating a significant amount of drag. That is why high-speed fighters and missiles have such thin wings—they need to minimize drag created by lift. Conversely, a crop duster, which flies at relatively slow speeds, may have a big, thick wing because high lift is more important than the amount of drag associated with it. Drag is also minimized by designing sleek, aerodynamic airplanes, with shapes that slip easily through the air.

Managing the balance between these four forces is the challenge of flight. When thrust is greater than drag, an airplane will accelerate. When lift is greater than weight, it will climb. Using various control surfaces and propulsion systems, a pilot can manipulate the balance of the four forces to change the direction or speed. A pilot can reduce thrust in order to slow down or descend. The pilot can lower the landing gear into the airstream and deploy the landing flaps on the wings to increase drag, which has the same effect as reducing thrust. The pilot can add thrust either to speed up or climb. Or, by retracting the landing gear and flaps, and thereby reducing drag, the pilot can accelerate or climb.



In addition to balancing lift, weight, thrust, and drag, modern airplanes have to contend with another phenomenon. The sound barrier is not a physical barrier but a speed at which the behavior of the airflow around an airplane changes dramatically. Fighter pilots in World War II (1939-1945) first ran up against this so-called barrier in high-speed dives during air combat. In some cases, pilots lost control of the aircraft as shock waves built up on control surfaces, effectively locking the controls and leaving the crews helpless. After World War II, designers tackled the realm of supersonic flight, primarily for military airplanes, but with commercial applications as well.

Supersonic flight is defined as flight at a speed greater than that of the local speed of sound. At sea level, sound travels through air at approximately 1,220 km/h (760 mph). At the speed of sound, a shock wave consisting of highly compressed air forms at the nose of the plane. This shock wave moves back at a sharp angle as the speed increases.

Supersonic flight was first achieved in 1947 by the Bell X-1 rocket plane, flown by Air Force test pilot Chuck Yeager. Speeds at or near supersonic flight are measured in units called Mach numbers, which represent the ratio of the speed of the airplane to the speed of sound in the air through which it moves. An airplane traveling at less than Mach 1 is traveling below the speed of sound (subsonic); at Mach 1, an airplane is traveling at the speed of sound (transonic); at Mach 2, it is traveling at twice the speed of sound. Speeds of Mach 1 to 5 are referred to as supersonic; speeds of Mach 5 and above are called hypersonic. Designers in Europe and the United States developed succeeding generations of military aircraft, culminating in the 1960s and 1970s with Mach 3+ speedsters such as the Soviet MiG-25 Foxbat interceptor, the XB-70 Valkyrie bomber, and the SR-71 spy plane.
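Because the speed of sound depends on air temperature, the same true airspeed corresponds to different Mach numbers at different altitudes. The sketch below uses the standard relation a = √(γRT) for air; the temperature value and the regime boundaries follow the figures quoted in the paragraphs above.

```python
import math

# Mach number is the ratio of airspeed to the local speed of sound,
# which for air can be approximated as a = sqrt(gamma * R * T).
GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air, J/(kg*K)

def speed_of_sound(temp_k):
    """Local speed of sound in m/s at absolute temperature temp_k."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def flight_regime(mach):
    """Classify speed using the boundaries given in the text."""
    if mach < 1.0:
        return "subsonic"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"

# At sea level (about 288 K) the speed of sound is roughly 340 m/s,
# i.e. about 1,225 km/h -- matching the figure quoted earlier.
a = speed_of_sound(288.15)
print(round(a * 3.6))      # ~1225 (km/h)
print(flight_regime(2.0))  # supersonic
```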

The shock wave created by an airplane moving at supersonic and hypersonic speeds represents a rather abrupt change in air pressure and is perceived on the ground as a sonic boom, the exact nature of which varies depending upon how far away the aircraft is and the distance of the observer from the flight path. Sonic booms at low altitudes over populated areas are generally considered a significant problem and have prevented most supersonic airplanes from efficiently utilizing overland routes. For example, the Anglo-French Concorde, a commercial supersonic aircraft, is generally limited to over-water routes, or to those over sparsely populated regions of the world. Designers today believe they can help lessen the impact of sonic booms created by supersonic airliners but probably cannot eliminate them.

One of the most difficult practical barriers to supersonic flight is the fact that high-speed flight produces heat through friction. At such high speeds, enormous temperatures are reached at the surface of the craft. In fact, today’s Concorde must fly a flight profile dictated by temperature requirements; if the aircraft moves too fast, then the temperature rises above safe limits for the aluminum structure of the airplane. Titanium and other relatively exotic, and expensive, metals are more heat-resistant, but harder to manufacture and maintain. Airplane designers have concluded that a speed of Mach 2.7 is about the limit for conventional, relatively inexpensive materials and fuels. Above that speed, an airplane would need to be constructed of more temperature-resistant materials, and would most likely have to find a way to cool its fuel.
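The heating problem can be estimated with the stagnation-temperature relation for air, T₀ = T(1 + 0.2M²), which gives the temperature the air reaches when brought to rest against the airframe. This is an idealized upper bound, not a design method from the text, and the ambient temperature used is an assumed high-altitude value.

```python
# Rough estimate of aerodynamic heating using the stagnation-temperature
# relation for air (gamma = 1.4):  T0 = T * (1 + 0.2 * M^2).
# Idealized upper bound; the 217 K ambient temperature is an assumed
# typical value for cruise altitude.

def stagnation_temp_k(ambient_k, mach):
    return ambient_k * (1.0 + 0.2 * mach ** 2)

# Temperature rises steeply with Mach number:
for mach in (2.0, 2.7, 3.0):
    t0 = stagnation_temp_k(217.0, mach)
    print(mach, round(t0 - 273.15), "deg C")
```

The steep rise between Mach 2 and Mach 3 illustrates why, as the text notes, speeds beyond about Mach 2.7 push past what conventional aluminum structures and fuels can tolerate.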



Airplanes generally share the same basic configuration—each usually has a fuselage, wings, tail, landing gear, and a set of specialized control surfaces mounted on the wings and tail.


Fuselage
The fuselage is the main cabin, or body of the airplane. Generally the fuselage has a cockpit section at the front end, where the pilot controls the airplane, and a cabin section. The cabin section may be designed to carry passengers, cargo, or both. In a military fighter plane, the fuselage may house the engines, fuel, electronics, and some weapons. In some of the sleekest of gliders and ultralight airplanes, the fuselage may be nothing more than a minimal structure connecting the wings, tail, cockpit, and engines.


Wings
All airplanes, by definition, have wings. Some are nearly all wing with a very small cockpit. Others have minimal wings, or wings that seem to be merely extensions of a blended, aerodynamic fuselage, such as the space shuttle.

Before the 20th century, wings were made of wooden ribs and spars (or beams), covered with fabric that was sewn tightly and varnished to be extremely stiff. A conventional wing has one or more spars that run from one end of the wing to the other. Perpendicular to the spar are a series of ribs, which run from the front, or leading edge, to the rear, or trailing edge, of the wing. These are carefully constructed to shape the wing in a manner that determines its lifting properties. Wood and fabric wings often used spruce for the structure, because of that material’s relatively light weight and high strength, and linen for the cloth covering.

Early airplanes were usually biplanes—craft with two wings, usually one mounted about 1.5 m (about 5 ft) above the other. Aircraft pioneers found they could build such wings relatively easily and brace them together using wires to connect the upper and lower wing to create a strong structure with substantial lift. In pushing the many cables, wood, and fabric through the air, these designs created a great deal of drag, so aircraft engineers eventually pursued the monoplane, or single-wing airplane. A monoplane’s single wing gives it great advantages in speed, simplicity, and visibility for the pilot.

After World War I (1914-1918), designers began moving toward wings made of steel and aluminum, and, combined with new construction techniques, these materials enabled the development of modern all-metal wings capable not only of developing lift but of housing landing gear, weapons, and fuel.

Over the years, many airplane designers have postulated that the ideal airplane would, in fact, be nothing but wing. Flying wings, as they are called, were first developed in the 1930s and 1940s. American aerospace manufacturer Northrop Grumman Corporation’s flying wing, the B-2 bomber, or stealth bomber, developed in the 1980s, has been a great success as a flying machine, benefiting from modern computer-aided design (CAD), advanced materials, and computerized flight controls. Popular magazines routinely show artists’ concepts of flying-wing airliners, but airline and airport managers have been unable to integrate these unusual shapes into conventional airline and airport facilities.


Tail Assembly

Most airplanes, except for flying wings, have a tail assembly attached to the rear of the fuselage, consisting of vertical and horizontal stabilizers, which look like small wings; a rudder; and elevators. The components of the tail assembly are collectively referred to as the empennage.

The stabilizers serve to help keep the airplane stable while in flight. The rudder is at the trailing edge of the vertical stabilizer and is used by the airplane to help control turns. An airplane actually turns by banking, or moving, its wings laterally, but the rudder helps keep the turn coordinated by serving much like a boat’s rudder to move the nose of the airplane left or right. Moving an airplane’s nose left or right is known as a yaw motion. Rudder motion is usually controlled by two pedals on the floor of the cockpit, which are pushed by the pilot.

Elevators are control surfaces at the trailing edge of horizontal stabilizers. The elevators control the up-and-down motion, or pitch, of the airplane’s nose. Moving the elevators up into the airstream will cause the tail to go down and the nose to pitch up. A pilot controls pitch by moving a control column or stick.


Landing Gear

All airplanes must have some type of landing gear. Modern aircraft employ brakes, wheels, and tires designed specifically for the demands of flight. Tires must be capable of going from a standstill to nearly 322 km/h (200 mph) at landing, as well as carrying nearly 454 metric tons. Brakes, often incorporating special heat-resistant materials, must be able to handle emergencies, such as a 400-metric-ton airliner aborting a takeoff at the last possible moment. Antiskid braking systems, common on automobiles today, were originally developed for aircraft and are used to gain maximum possible braking power on wet or icy runways.
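The scale of the braking job can be appreciated from the kinetic energy the brakes must absorb, ½mv². The sketch below uses the text's 400-metric-ton airliner; the takeoff speed of 280 km/h (about 78 m/s) is an assumed illustrative figure.

```python
# Kinetic energy the brakes must dissipate in a rejected takeoff:
#   KE = 0.5 * m * v^2
# Mass is from the text; the 78 m/s (about 280 km/h) speed is an
# assumed illustrative takeoff speed.

def kinetic_energy_mj(mass_kg, speed_m_s):
    """Kinetic energy in megajoules."""
    return 0.5 * mass_kg * speed_m_s ** 2 / 1e6

print(round(kinetic_energy_mj(400_000, 78.0)), "MJ")  # ~1217 MJ
```

Over a thousand megajoules must be turned into heat in a few seconds, which is why aircraft brakes use special heat-resistant materials.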

Larger and more-complex aircraft typically have retractable landing gear—so called because they can be pulled up into the wing or fuselage after takeoff. Having retractable gear greatly reduces the drag generated by the wheel structures that would otherwise hang out in the airstream.


Control Components

An airplane is capable of three types of motion that revolve around three separate axes. The plane may fly steadily in one direction and at one altitude—or it may turn, climb, or descend. An airplane may roll, banking its wings either left or right, about the longitudinal axis, which runs the length of the craft. The airplane may yaw its nose either left or right about the vertical axis, which runs straight down through the middle of the airplane. Finally, a plane may pitch its nose up or down, moving about its lateral axis, which may be thought of as a straight line running from wingtip to wingtip.

An airplane relies on the movement of air across its wings for lift, and it makes use of this same airflow to move in any way about the three axes. To do so, the pilot will manipulate controls in the cockpit that direct control surfaces on the wings and tail to move into the airstream. The airplane will yaw, pitch, or roll, depending on which control surfaces or combination of surfaces are moved, or deflected, by the pilot.

In order to bank and begin a turn, a conventional airplane will deflect control surfaces on the trailing edge of the wings known as ailerons. In order to bank left, the left aileron is lifted up into the airstream over the left wing, creating a small amount of drag and decreasing the lift produced by that wing. At the same time, the right aileron is pushed down into the airstream, thereby increasing slightly the lift produced by the right wing. The right wing then comes up, the left wing goes down, and the airplane banks to the left. To bank to the right, the ailerons are moved in exactly the opposite fashion.

In order to yaw, or turn the airplane’s nose left or right, the pilot must press upon rudder pedals on the floor of the cockpit. Push down on the left pedal, and the rudder at the trailing edge of the vertical stabilizer moves to the left. As in a boat, the left rudder moves the nose of the plane to the left. A push on the right pedal causes the airplane to yaw to the right.

In order to pitch the nose up or down, the pilot usually pulls or pushes on a control wheel or stick, thereby moving the elevators at the trailing edge of the horizontal stabilizer. Pulling back on the wheel deflects the elevators upward into the airstream, pushing the tail down and the nose up. Pushing forward on the wheel causes the elevators to drop down, lifting the tail and forcing the nose down.
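The three primary controls described above can be summarized in one table. The structure below is purely a hypothetical illustration, pairing each motion with its control surface and the axis of rotation named earlier.

```python
# Summary of the primary flight controls described in the text, as a
# simple lookup table (a hypothetical structure, for illustration).
PRIMARY_CONTROLS = {
    "roll":  {"surface": "ailerons",
              "axis": "longitudinal",
              "input": "move the wheel or stick left or right"},
    "pitch": {"surface": "elevators",
              "axis": "lateral",
              "input": "push or pull the wheel or stick"},
    "yaw":   {"surface": "rudder",
              "axis": "vertical",
              "input": "press the rudder pedals"},
}

for motion, info in PRIMARY_CONTROLS.items():
    print(f"{motion}: {info['surface']} "
          f"(rotation about the {info['axis']} axis)")
```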

Airplanes that are more complex also have a set of secondary control surfaces that may include devices such as flaps, slats, trim tabs, spoilers, and speed brakes. Flaps and slats are generally used during takeoff and landing to increase the amount of lift produced by the wing at low speeds. Flaps usually droop down from the trailing edge of the wing, although some jets have leading-edge flaps as well. On some airplanes, they also can be extended back beyond the normal trailing edge of the wing to increase the surface area of the wing as well as change its shape. Leading-edge slats usually extend from the front of the wing at low speeds to change the way the air flows over the wing, thereby increasing lift. Flaps also often serve to increase drag and slow the approach of a landing airplane.

Trim tabs are miniature control surfaces incorporated into larger control surfaces. For example, an aileron tab acts like a miniature aileron within the larger aileron. These kinds of controls are used to adjust more precisely the flight path of an airplane that may be slightly out of balance or alignment. Elevator trim tabs are usually used to help set the pitch attitude (the angle of the airplane in relation to the Earth) of an airplane for a given speed through the air. On some airplanes, the entire horizontal stabilizer moves in small increments to serve the same function as a trim tab.



Airplane pilots rely on a set of instruments in the cockpit to monitor airplane systems, to control the flight of the aircraft, and to navigate.

Systems instruments will tell a pilot about the condition of the airplane’s engines and electrical, hydraulic, and fuel systems. Piston-engine instruments monitor engine and exhaust-gas temperatures, and oil pressures and temperatures. Jet-engine instruments measure the rotational speeds of the rotating blades in the turbines, as well as gas temperatures and fuel flow.

Flight instruments are those used to tell a pilot the course, speed, altitude, and attitude of the airplane. They may include an airspeed indicator, an artificial horizon, an altimeter, and a compass. These instruments have many variations, depending on the complexity and performance of the airplane. For example, high-speed jet aircraft have airspeed indicators that may show speeds both in knots (nautical miles per hour, each slightly faster than the statute miles per hour used with ground vehicles) and in Mach number. The artificial horizon indicates whether the airplane is banking, climbing, or diving in relation to the Earth. An airplane with its nose up may or may not be climbing, depending on its airspeed and momentum.

General-aviation (private aircraft), military, and commercial airplanes also have instruments that aid in navigation. The compass is the simplest of these, but many airplanes now employ satellite navigation systems and computers to navigate from any point on the globe to another without any help from the ground. The Global Positioning System (GPS), developed for the United States military but now used by many civilian pilots, provides an airplane with its position to within a few meters. Many airplanes still employ radio receivers that tune to a ground-based radio-beacon system in order to navigate cross-country. Specially equipped airplanes can use ultraprecise radio beacons and receivers, known as Instrument Landing Systems (ILS) and Microwave Landing Systems (MLS), combined with special cockpit displays, to land during conditions of poor visibility.



Airplanes use either piston or turbine (rotating blades) engines to provide propulsion. In smaller airplanes, a conventional gas-powered piston engine turns a propeller, which either pulls or pushes an airplane through the air. In larger airplanes, a turbine engine either turns a propeller through a gearbox, or uses its jet thrust directly to move an airplane through the air. In either case, the engine must provide enough power to move the weight of the airplane forward through the airstream.

The earliest powered airplanes relied on crude gasoline engines. These piston engines are examples of internal-combustion engines. Aircraft designers throughout the 20th century constantly pushed their engineering colleagues for engines with more power, lighter weight, and greater reliability. Piston engines, however, are still relatively complicated pieces of machinery, with many precision-machined parts moving through large ranges and in complex motions. Although enormously improved over the past 90 years of flight and still suitable for many smaller general-aviation aircraft, they fall short of the higher performance possible with modern jet propulsion and required for commercial and military aviation.

The turbine or jet engine operates on the principle of Newton’s third law of motion, which states that for every action, there is an opposite but equal reaction. A jet sucks air into the front, squeezes the air by pulling it through a series of spinning compressors, mixes it with fuel, and ignites the mixture, which then burns and expands with great force rearward through the exhaust nozzle. The rearward force is balanced by an equal force that pushes the jet engine, and the airplane attached to it, forward. A rocket engine operates on the same principle, except that, in order to operate in the airless vacuum of space, the rocket must carry its own oxygen supply, in the form of solid propellant or liquid oxidizer, for combustion.
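The action-reaction principle can be quantified: in a simplified momentum analysis, net thrust is roughly the mass flow of air times the change in its velocity. The sketch below ignores the added fuel mass and the nozzle pressure term, and the example numbers are assumptions for illustration.

```python
# Simplified jet thrust from momentum change, ignoring the added fuel
# mass and the nozzle-exit pressure term:
#   F = m_dot * (v_exhaust - v_flight)
# The example numbers are assumed for illustration only.

def net_thrust_n(mass_flow_kg_s, v_exhaust_m_s, v_flight_m_s):
    """Net thrust in newtons from the change in air momentum."""
    return mass_flow_kg_s * (v_exhaust_m_s - v_flight_m_s)

# 100 kg/s of air accelerated from 250 m/s (flight speed) to
# 600 m/s (exhaust speed):
print(net_thrust_n(100.0, 600.0, 250.0))  # 35000.0 N (35 kN)
```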

There are several different types of jet engines. The simplest is the ramjet, which takes advantage of high speed to ram or force the air into the engine, eliminating the need for the spinning compressor section. This elegant simplicity is offset by the need to boost a ramjet to several hundred miles an hour before ram-air compression is sufficient to operate the engine.

The turbojet is based on the jet-propulsion system of the ramjet, but with the addition of a compressor section, a combustion chamber, a turbine to take some power out of the exhaust and spin the compressor, and an exhaust nozzle. In a turbojet, all of the air taken into the compressor at the front of the engine is sent through the core of the engine, burned, and released. Thrust from the engine is derived purely from the acceleration of the released exhaust gases out the rear.

A modern derivative known as the turbofan, or fan-jet, adds a large fan in front of the compressor section. This fan pulls an enormous amount of air into the engine case, only a relatively small fraction of which is sent through the core for combustion. The rest runs along the outside of the core case and inside the engine casing. This fan flow is mixed with the hot jet exhaust at the rear of the engine, where it cools and quiets the exhaust noise. In addition, this high-volume mass of air, accelerated rearward by the fan, produces a great deal of thrust by itself, even though it is never burned, acting much like a propeller.
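The split between the air ducted around the core and the air burned inside it is usually expressed as the bypass ratio. The numbers below are illustrative assumptions, not figures for any particular engine.

```python
# A turbofan's bypass ratio: the mass flow of air ducted around the
# core divided by the mass flow sent through it. Example values are
# illustrative, not specific to any real engine.

def bypass_ratio(bypass_flow_kg_s, core_flow_kg_s):
    return bypass_flow_kg_s / core_flow_kg_s

# A high-bypass engine moves far more air around the core than
# through it:
print(bypass_ratio(500.0, 100.0))  # 5.0
```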

In fact, some smaller jet engines are used to turn propellers. Known as turboprops, these engines produce most of their thrust through the propeller, which is usually driven by the jet engine through a set of gears. As a power source for a propeller, a turbine engine is extremely efficient, and many smaller airliners in the 19- to 70-passenger-capacity range use turboprops. They are particularly efficient at lower altitudes and medium speeds up to 640 km/h (400 mph).



There are a wide variety of types of airplanes. Land planes, carrier-based airplanes, seaplanes, amphibians, vertical takeoff and landing (VTOL), short takeoff and landing (STOL), and space shuttles all take advantage of the same basic technology, but their capabilities and uses make them seem only distantly related.


Land Planes

Land planes are designed to operate from a hard surface, typically a paved runway. Some land planes are specially equipped to operate from grass or other unfinished surfaces. A land plane usually has wheels to taxi, take off, and land, although some specialized aircraft operating in the Arctic or Antarctic regions have skis in place of wheels. The wheels are sometimes referred to as the undercarriage, although they are often called, together with the associated brakes, the landing gear. Landing gear may be fixed, as in some general-aviation airplanes, or retractable, usually into the fuselage or wings, as in more-sophisticated airplanes in general and commercial aviation.


Carrier-Based Aircraft

Carrier-based airplanes are a specially modified type of land plane designed for takeoff from and landing aboard naval aircraft carriers. Carrier airplanes have a strengthened structure, including their landing gear, to handle the stresses of catapult-assisted takeoff, in which the craft is launched by a steam-driven catapult; and arrested landings, made by using a hook attached to the underside of the aircraft’s tail to catch one of four wires strung across the flight deck of the carrier.


Seaplanes
Seaplanes, sometimes called floatplanes or pontoon planes, are often ordinary land planes modified with floats instead of wheels so they can operate from water. A number of seaplanes have been designed from scratch to operate only from water bases. Such seaplanes have fuselages that resemble and perform like ship hulls. Known as flying boats, they may have small floats attached to their outer wing panels to help steady them at low speeds on the water, but the weight of the airplane is borne by the floating hull.


Amphibians
Amphibians, like their animal namesakes, operate from both water and land bases. In many cases, an amphibian is a true seaplane, with a boat hull and the addition of specially designed landing gear that can be extended to allow the airplane to taxi right out of the water onto land. Historically, some flying boats were fitted with so-called beaching gear, a system of cradles on wheels positioned under the floating aircraft, which then allowed the aircraft to be rolled onto land.


Vertical Takeoff and Landing Airplanes

Vertical Takeoff and Landing (VTOL) airplanes typically use the jet thrust from their engines, pointed down at the Earth, to take off and land straight up and down. After taking off, a VTOL airplane usually transitions to wing-borne flight in order to cover a longer distance or carry a significant load. A helicopter is a type of VTOL aircraft, but there are very few VTOL airplanes. One unique type of VTOL aircraft is the tilt-rotor, which has large, propeller-like rotating wings or rotors driven by jet engines at the wingtips. For takeoff and landing, the engines and rotors are positioned vertically, much like a helicopter. After takeoff, however, the engine/rotor combination tilts forward, and the wing takes on the load of the craft.

The most prominent example of a true VTOL airplane flying today is the AV-8B Harrier II, a military attack plane that uses rotating nozzles attached to its jet engine to direct the engine exhaust in the appropriate direction. Flown in the United States by the Marine Corps, as well as in Spain, Italy, India, and the United Kingdom, where it was originally developed, the Harrier can take off vertically from smaller ships, or it can be flown to operating areas near the ground troops it supports in its ground-attack role.


Short Takeoff and Landing Airplanes

Short Takeoff and Landing (STOL) airplanes are designed to be able to function on relatively short runways. Their designs usually employ wings and high-lift devices on the wings optimized for best performance during takeoff and landing, as distinguished from an airplane that has a wing optimized for high-speed cruise at high altitude. STOL airplanes are usually cargo airplanes, although some serve in a passenger-carrying capacity as well.


Space Shuttle

The space shuttle, flown by the National Aeronautics and Space Administration (NASA), is an aircraft unlike any other because it flies as a fixed-wing airplane within the atmosphere and as a spacecraft outside Earth’s atmosphere. When the space shuttle takes off, it flies like a rocket with wings, relying on the 3,175 metric tons of thrust generated by its solid-fuel rocket boosters and liquid-fueled main engines to power its way up, through, and out of the atmosphere. During landing, the shuttle becomes the world’s most sophisticated glider, landing without propulsion.



Airplanes can be grouped into a handful of major classes, such as commercial, military, and general-aviation airplanes, all of which fall under different government-mandated certification and operating rules.


Commercial Airplanes

Commercial aircraft are those used for profit making, usually by carrying cargo or passengers for hire (see Air Transport Industry). They are strictly regulated—in the United States, by the Federal Aviation Administration (FAA); in Canada, by Transport Canada; and in other countries, by other national aviation authorities.

Modern large commercial-airplane manufacturers—such as The Boeing Company in the United States and Airbus in Europe—offer a wide variety of aircraft with different capabilities. Today’s jet airliners carry anywhere from 100 passengers to more than 500 over short and long distances.

Since 1976 the British-French Concorde supersonic transport (SST) has carried passengers at twice the speed of sound. The Concorde flies for British Airways and Air France, flag carriers of the two nations that funded its development during the late 1960s and 1970s. The United States had an SST program, but it was ended because of budget and environmental concerns in 1971.


Military Airplanes

Military aircraft are usually grouped into four categories: combat, cargo, training, and observation. Combat airplanes are generally either fighters or bombers, although some airplanes have both capabilities. Fighters are designed to engage in air combat with other airplanes, in either defensive or offensive situations. Since the 1950s many fighters have been capable of Mach 2+ flight (a Mach number represents the ratio of the speed of an airplane to the speed of sound as it travels through air). Some fighters have a ground-attack role as well and are designed to carry both air-to-air weapons, such as missiles, and air-to-ground weapons, such as bombs. Fighters include aircraft such as the Panavia Tornado, the Boeing F-15 Eagle, the Lockheed-Martin F-16 Falcon, the MiG-29 Fulcrum, and the Su-27 Flanker.
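The Mach ratio described in the parenthesis above can be computed directly. The sketch below uses the standard ideal-gas relation for the speed of sound and a standard sea-level temperature; these values and the function names are illustrative conventions, not taken from this article:

```python
import math

def speed_of_sound(temp_kelvin):
    """Speed of sound in air (m/s), from the ideal-gas relation a = sqrt(gamma * R * T)."""
    gamma = 1.4   # ratio of specific heats for air
    R = 287.0     # specific gas constant for air, J/(kg*K)
    return math.sqrt(gamma * R * temp_kelvin)

def mach_number(airspeed_ms, temp_kelvin=288.15):
    """Mach number: the airplane's speed divided by the local speed of sound."""
    return airspeed_ms / speed_of_sound(temp_kelvin)

# At sea level (15 degrees C) the speed of sound is about 340 m/s,
# so a fighter flying at 680 m/s is at roughly Mach 2.
print(round(speed_of_sound(288.15), 1))
print(round(mach_number(680.0), 2))
```

Note that the speed of sound falls with temperature, so at high, cold altitudes the same true airspeed corresponds to a higher Mach number.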

Bombers are designed to carry large air-to-ground-weapons loads and either penetrate or avoid enemy air defenses in order to deliver those weapons. Some well-known bombers include the Boeing B-52, the Boeing B-1, and the Northrop-Grumman B-2 stealth bomber. Bombers such as the B-1 are designed to fly fast at low altitudes, following the terrain, in order to fly under enemy radar defenses, while others, such as the B-2, may use sophisticated radar-defeating technologies to fly virtually unobserved.

Today’s military cargo airplanes are capable of carrying enormous tanks, armored personnel carriers, artillery pieces, and even smaller aircraft. Cargo planes such as the giant Lockheed C-5B and Boeing C-17 were designed expressly for such roles. Some cargo planes can serve a dual role as aerial gas stations, refueling different types of military airplanes while in flight. Such tankers include the Boeing KC-135 and KC-10.

All military pilots go through rigorous training and education programs using military training airplanes to prepare them to fly the high-performance aircraft of the armed forces. They typically begin their flight training in relatively simple propeller airplanes and move on to basic jets before specializing in a career path involving fighters, bombers, or transports. Some military trainers include the T-34 Mentor, the T-37 and T-38, and the Boeing T-45 Goshawk.

A final category of military airplane is the observation, or reconnaissance, aircraft. With the advent of the Lockheed U-2 spy plane in the 1950s, observation airplanes were developed solely for highly specialized missions. The ultimate spy plane is Lockheed’s SR-71, a two-seat airplane that uses specialized engines and fuel to reach altitudes greater than 25,000 m (80,000 ft) and speeds well over Mach 3.


General-Aviation Aircraft

General-aviation aircraft are certified for and intended primarily for noncommercial or private operations.

Pleasure aircraft range from simple single-seat, ultralight airplanes to sleek twin turboprops capable of carrying eight people. Business aircraft transport business executives to appointments. Most business airplanes require more reliable performance, greater range, and all-weather capability.

Another class of general-aviation airplane is the agricultural airplane. Large farms require efficient ways to spread fertilizer and insecticides over a wide area. Crop dusters, a highly specialized type of airplane, are rugged, highly maneuverable, and capable of hauling several hundred pounds of chemicals. They can be seen swooping low over farm fields. Not intended for serious cross-country navigation, crop dusters lack sophisticated navigation aids and complex systems.



Before the end of the 18th century, few people had applied themselves to the study of flight. One of the few was Leonardo da Vinci, in the 15th century. Leonardo was preoccupied chiefly with bird flight and with flapping-wing machines, called ornithopters. His aeronautical work lay unknown until late in the 19th century, when it could furnish little of technical value to experimenters but was a source of inspiration to aspiring engineers. Apart from Leonardo’s efforts, three devices important to aviation had been invented in Europe in the Middle Ages and had reached a high stage of development by Leonardo’s time—the windmill, an early propeller; the kite, an early airplane wing; and the model helicopter.


The First Airplanes

Between 1799 and 1809 English baronet Sir George Cayley created the concept of the modern airplane. Cayley abandoned the ornithopter tradition, in which both lift and thrust are provided by the wings, and designed airplanes with rigid wings to provide lift, and with separate propelling devices to provide thrust. Through his published works, Cayley laid the foundations of aerodynamics. He demonstrated, both with models and with full-size gliders, the use of the inclined plane to provide lift, pitch, and roll stability; flight control by means of a single rudder-elevator unit mounted on a universal joint; streamlining; and other devices and practices. In 1853, in his third full-size machine, Cayley sent his unwilling coachman on the first gliding flight in history.

In 1843 British inventor William Samuel Henson published his patented design for an Aerial Steam Carriage. Henson’s design did more than any other to establish the form of the modern airplane—a fixed-wing monoplane with propellers, fuselage, and wheeled landing gear, and with flight control by means of rear elevator and rudder. Steam-powered models made by Henson in 1847 were promising but unsuccessful.

In 1890 French engineer Clément Ader built a steam-powered airplane and made the first actual flight of a piloted, heavier-than-air craft. However, the flight was not sustained, and the airplane skimmed the ground over a distance of 50 m (160 ft). Inventors continued to pursue the dream of sustained flight. Between 1891 and 1896 German aeronautical engineer Otto Lilienthal made thousands of successful flights in hang gliders of his own design. Lilienthal hung in a frame between the wings and controlled his gliders entirely by swinging his torso and legs in the direction he wished to go. While successful as gliders, his designs lacked a control system and a reliable method for powering the craft. He was killed in a gliding accident in 1896.

American inventor Samuel Pierpont Langley had been working for several years on flying machines. Langley began experimenting in 1892 with a steam-powered, unpiloted aircraft, and in 1896 made the first sustained flight of any mechanically propelled heavier-than-air craft. Launched by catapult from a houseboat on the Potomac River near Quantico, Virginia, the unpiloted Aerodrome, as Langley called it, suffered from design faults. The Aerodrome never successfully carried a person, and its failure cost Langley the place in history later claimed by the Wright brothers.


The First Airplane Flight

American aviators Orville Wright and Wilbur Wright of Dayton, Ohio, are considered the fathers of the first successful piloted heavier-than-air flying machine. Through the disciplines of sound scientific research and engineering, the Wright brothers put together the combination of critical characteristics that other designs of the day lacked—a relatively lightweight (337 kg/750 lb), powerful engine; a reliable transmission and efficient propellers; an effective system for controlling the aircraft; and a wing and structure that were both strong and lightweight.

At Kitty Hawk, North Carolina, on December 17, 1903, Orville Wright made the first successful flight of a piloted, heavier-than-air, self-propelled craft, called the Flyer. That first flight traveled a distance of about 37 m (120 ft). The distance was less than the wingspan of many modern airliners, but it represented the beginning of a new age in technology and human achievement. The brothers’ fourth and final flight of the day lasted 59 seconds and covered 260 m (852 ft). The third Flyer, which the Wrights constructed in 1905, was the world’s first fully practical airplane. It could bank, turn, circle, make figure eights, and remain in the air for as long as the fuel lasted, up to half an hour on occasion.


Early Military and Public Interest

The airplane, like many other milestone inventions throughout history, was not immediately recognized for its potential. During the very early 1900s, prior to World War I (1914-1918), the airplane was relegated mostly to the county-fair circuit, where daredevil pilots drew large crowds but few investors. One exception was the United States War Department, which had long been using balloons to observe the battlefield and expressed an interest in heavier-than-air craft as early as 1898. In 1908 the Wrights demonstrated their airplane to the U.S. Army’s Signal Corps at Fort Myer, Virginia. In September of that year, while circling the field at Fort Myer, Orville crashed while carrying an army observer, Lieutenant Thomas Selfridge. Selfridge died from his injuries and became the first fatality from the crash of a powered airplane.

On July 25, 1909, French engineer Louis Blériot crossed the English Channel in a Blériot XI, a monoplane of his own design. Blériot’s channel crossing made clear to the world the airplane’s wartime potential, and this potential was further demonstrated in 1910 and 1911, when American pilot Eugene Ely took off from and landed on warships. In 1911 the U.S. Army used a Wright brothers’ biplane to make the first live bomb test from an airplane. That same year, the airplane was used in its first wartime operation when an Italian captain flew over and observed Turkish positions during the Italo-Turkish War of 1911 to 1912. Also in 1911, American inventor and aviator Glenn Curtiss introduced the first practical seaplane. This was a biplane with a large float beneath the center of the lower wing and two smaller floats beneath the tips of the lower wing.

The year 1913 became known as the “glorious year of flying.” Aerobatics, or acrobatic flying, was introduced, and upside-down flying, loops, and other stunts proved the maneuverability of airplanes. Long-distance flights made in 1913 included a 4,000-km (2,500-mi) flight from France to Egypt, with many stops, and the first nonstop flight across the Mediterranean Sea, from France to Tunisia. In Britain, a modified Farnborough B.E. 2 proved itself to be the first naturally stable airplane in the world. The B.E. 2c version of this airplane was so successful that nearly 2,000 were subsequently built.


Planes of World War I

During World War I, the development of the airplane accelerated dramatically. European designers such as Louis Blériot and Dutch-American engineer Anthony Herman Fokker exploited basic concepts created by the Wrights and developed ever faster, more capable, and deadlier combat airplanes. Fokker’s biplanes, such as the D-VII and D-VIII flown by German pilots, were considered superior to their Allied competition. In 1915 Fokker mounted a machine gun with a timing gear so that the gun could fire between the blades of the rotating propeller. The resulting Fokker Eindecker monoplane fighter was, for a time, the most successful fighter in the skies.

The concentrated research and development made necessary by wartime pressures produced great progress in airplane design and construction. During World War I, outstanding early British fighters included the Sopwith Pup (1916) and the Sopwith Camel (1917), which flew as high as 5,800 m (19,000 ft) and had a top speed of 190 km/h (120 mph). Notable French fighters included the Spad (1916) and the Nieuport 28 (1918). By the end of World War I in 1918, both warring sides had fighters that could fly at altitudes of 7,600 m (25,000 ft) and speeds up to 250 km/h (155 mph).


Development of Commercial Aviation

Commercial aviation began in January 1914, just 10 years after the Wrights pioneered the skies. The first regularly scheduled passenger line in the world operated between Saint Petersburg, Florida, and nearby Tampa. Commercial aviation developed slowly during the next 30 years, driven by the two world wars and the airmail service demands of the U.S. Post Office.

In the early 1920s the air-cooled engine was perfected, along with its streamlined cowling, or engine casing. Light and powerful, these engines gave strong competition to the older, liquid-cooled engines. In the mid-1920s light airplanes were produced in great numbers, and club and private pleasure flying became popular. The inexpensive DeHavilland Moth biplane, introduced in 1925, put flying within the financial reach of many enthusiasts. The Moth could travel at 145 km/h (90 mph) and was light, strong, and easy to handle.

Instrument flying became practical in 1929, when the American inventor Elmer Sperry perfected the artificial horizon and directional gyro. On September 24, 1929, James Doolittle, an American pilot and army officer, proved the value of Sperry’s instruments by taking off, flying over a predetermined course, and landing, all without visual reference to the Earth.

Introduced in 1933, Boeing’s Model 247 was considered the first truly modern airliner. It was an all-metal, low-wing monoplane, with retractable landing gear, an insulated cabin, and room for ten passengers. An order from United Air Lines for 60 planes of this type tied up Boeing’s production line and led indirectly to the development of perhaps the most successful propeller airliner in history, the Douglas DC-3. Trans World Airlines, not willing to wait for Boeing to finish the order from United, approached airplane manufacturer Donald Douglas in Long Beach, California, for an alternative, which became, in quick succession, the DC-1, the DC-2, and the DC-3.

The DC-3 carried 21 passengers, used powerful, 1,000-horsepower engines, and could travel across the country in less than 24 hours of travel time, although it had to stop many times for fuel. The DC-3 quickly came to dominate commercial aviation in the late 1930s, and some DC-3s are still in service today.

Boeing provided the next major breakthrough with its Model 307 Stratoliner, a pressurized derivative of the famous B-17 bomber, entering service in 1940. With its regulated cabin air pressure, the Stratoliner could carry 33 passengers at altitudes up to 6,100 m (20,000 ft) and at speeds of 322 km/h (200 mph).


Aircraft Developments of World War II

It was not until after World War II (1939-1945), when comfortable, pressurized air transports became available in large numbers, that the airline industry really prospered. When the United States entered World War II in 1941, there were fewer than 300 planes in airline service. Airplane production concentrated mainly on fighters and bombers, and reached a rate of nearly 50,000 a year by the end of the war. A large number of sophisticated new transports, used in wartime for troop and cargo carriage, became available to commercial operators after the war ended. Pressurized propeller planes such as the Douglas DC-6 and Lockheed Constellation, early versions of which carried troops and VIPs during the war, now carried paying passengers on transcontinental and transatlantic flights.

Wartime technology efforts also brought to aviation critical new developments, such as the jet engine. Jet transportation in the commercial-aviation arena arrived in 1952 with Britain’s DeHavilland Comet, an 885-km/h (550-mph), four-engine jet. The Comet quickly suffered two fatal crashes due to structural problems and was grounded. This complication gave American manufacturers Boeing and Douglas time to bring the 707 and DC-8 to the market. Pan American World Airways inaugurated Boeing 707 jet service in October of 1958, and air travel changed dramatically almost overnight. Transatlantic jet service enabled travelers to fly from New York City to London, England, in less than eight hours, half the propeller-airplane time. Boeing’s new 707 carried 112 passengers at high speed and quickly brought an end to the propeller era for large commercial airplanes.

After the big, four-engine 707s and DC-8s had established themselves, airlines clamored for smaller, shorter-range jets, and Boeing and Douglas delivered. Douglas produced the DC-9 and Boeing both the 737 and the trijet 727.


The Jumbo Jet Era

The next frontier, pioneered in the late 1960s, was the age of the jumbo jet. Boeing, McDonnell Douglas, and Lockheed all produced wide-body airliners, sometimes called jumbo jets. Boeing developed and still builds the 747. McDonnell Douglas built a somewhat smaller, three-engine jet called the DC-10, produced later in an updated version known as the MD-11. Lockheed built the L-1011 Tristar, a trijet that competed with the DC-10. The L-1011 is no longer in production, and Lockheed-Martin does not build commercial airliners anymore.

In the 1980s McDonnell Douglas introduced the twin-engine MD-80 family, and Boeing brought online the narrow-body 757 and wide-body 767 twin jets. Airbus had developed the A300 wide-body twin during the 1970s. During the 1980s and 1990s Airbus expanded its family of aircraft by introducing the slightly smaller A310 twin jet and the narrow-body A320 twin, a unique, so-called fly-by-wire aircraft with sidestick controllers for the pilots rather than conventional control columns and wheels. Airbus also introduced the larger A330 twin and the A340, a four-engine airplane for longer routes, on which passenger loads are somewhat lighter. In 2000 the company launched production of the A380, a superjumbo jet that will seat 555 passengers on two decks, both of which extend the entire length of the fuselage. Scheduled to enter service in 2006, the jet will be the world’s largest passenger airliner.

Boeing introduced the 777, a wide-body jumbo jet that can hold up to 400 passengers, in 1995. In 1997 Boeing acquired longtime rival McDonnell Douglas, and a year later the company announced its intention to halt production of the passenger workhorses MD-11, MD-80, and MD-90. The company ceded the superjumbo jet market to Airbus and instead focused its efforts on developing a midsize passenger airplane, called the Sonic Cruiser, that would travel at 95 percent of the speed of sound or faster, significantly reducing flight times on transcontinental and transoceanic trips.




Engineering, term applied to the profession in which a knowledge of the mathematical and natural sciences, gained by study, experience, and practice, is applied to the efficient use of the materials and forces of nature. The term engineer properly denotes a person who has received professional training in pure and applied science, but is often loosely used to describe the operator of an engine, as in the terms locomotive engineer, marine engineer, or stationary engineer. In modern terminology these latter occupations are known as crafts or trades. Between the professional engineer and the craftsperson or tradesperson, however, are those individuals known as subprofessionals or paraprofessionals, who apply scientific and engineering skills to technical problems; typical of these are engineering aides, technicians, inspectors, draftsmen, and the like.

Before the middle of the 18th century, large-scale construction work was usually placed in the hands of military engineers. Military engineering involved such work as the preparation of topographical maps; the location, design, and construction of roads and bridges; and the building of forts and docks (see Military Engineering below). In the 18th century, however, the term civil engineering came into use to describe engineering work that was performed by civilians for nonmilitary purposes. With the increasing use of machinery in the 19th century, mechanical engineering was recognized as a separate branch of engineering, and later mining engineering was similarly recognized.

The technical advances of the 19th century greatly broadened the field of engineering and introduced a large number of engineering specialties, and the rapidly changing demands of the socioeconomic environment in the 20th century have widened the scope even further.



The main branches of engineering are discussed below in alphabetical order. The engineer who works in any of these fields usually requires a basic knowledge of the other engineering fields, because most engineering problems are complex and interrelated. Thus a chemical engineer designing a plant for the electrolytic refining of metal ores must deal with the design of structures, machinery, and electrical devices, as well as with purely chemical problems.

Besides the principal branches discussed below, engineering includes many more specialties than can be described here, such as acoustical engineering (see Acoustics), architectural engineering (see Architecture: Construction), automotive engineering, ceramic engineering, transportation engineering, and textile engineering.


Aeronautical and Aerospace Engineering

Aeronautics deals with the whole field of design, manufacture, maintenance, testing, and use of aircraft for both civilian and military purposes. It involves the knowledge of aerodynamics, structural design, propulsion engines, navigation, communication, and other related areas. See Airplane; Aviation.

Aerospace engineering is closely allied to aeronautics, but is concerned with the flight of vehicles in space, beyond the earth's atmosphere, and includes the study and development of rocket engines, artificial satellites, and spacecraft for the exploration of outer space. See Space Exploration.


Chemical Engineering

This branch of engineering is concerned with the design, construction, and management of factories in which the essential processes consist of chemical reactions. Because of the diversity of the materials dealt with, the practice, for more than 50 years, has been to analyze chemical engineering problems in terms of fundamental unit operations or unit processes such as the grinding or pulverizing of solids. It is the task of the chemical engineer to select and specify the design that will best meet the particular requirements of production and the most appropriate equipment for the new applications.

With the advance of technology, the number of unit operations increases, but of continuing importance are distillation, crystallization, dissolution, filtration, and extraction. In each unit operation, engineers are concerned with four fundamentals: (1) the conservation of matter; (2) the conservation of energy; (3) the principles of chemical equilibrium; (4) the principles of chemical reactivity. In addition, chemical engineers must organize the unit operations in their correct sequence, and they must consider the economic cost of the overall process. Because a continuous, or assembly-line, operation is more economical than a batch process, and is frequently amenable to automatic control, chemical engineers were among the first to incorporate automatic controls into their designs.
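The first of those fundamentals, conservation of matter, amounts to careful bookkeeping across a unit operation: at steady state, total mass in equals total mass out, and the same holds for each component separately. A minimal sketch of such a balance for a mixing unit (the stream values are hypothetical, chosen only for illustration):

```python
def mix_streams(streams):
    """Steady-state mass balance for a mixing unit.

    Each stream is a (mass_flow_kg_per_s, solute_mass_fraction) pair.
    Conservation of matter: total mass in = total mass out, and the
    solute mass must balance on its own as well.
    Returns (outlet_flow, outlet_solute_fraction).
    """
    total_flow = sum(flow for flow, _ in streams)
    solute_flow = sum(flow * frac for flow, frac in streams)
    return total_flow, solute_flow / total_flow

# Hypothetical example: blend 3 kg/s of a 10% solution with 1 kg/s of a 50% solution.
flow, frac = mix_streams([(3.0, 0.10), (1.0, 0.50)])
print(flow, frac)  # 4.0 kg/s out, at a 20% solute fraction
```

The same accounting pattern, extended with energy terms, underlies the energy balances used in distillation and the other unit operations named above.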


Civil Engineering

Civil engineering is perhaps the broadest of the engineering fields, for it deals with the creation, improvement, and protection of the communal environment, providing facilities for living, industry, and transportation, including large buildings, roads, bridges, canals, railroad lines, airports, water-supply systems, dams, irrigation, harbors, docks, aqueducts, tunnels, and other engineered constructions. The civil engineer must have a thorough knowledge of all types of surveying; of the properties and mechanics of construction materials; of the mechanics of structures and soils; and of hydraulics and fluid mechanics. Among the important subdivisions of the field are construction engineering, irrigation engineering, transportation engineering, soils and foundation engineering, geodetic engineering, hydraulic engineering, and coastal and ocean engineering.


Electrical and Electronics Engineering

Electrical and electronics engineering is the largest and most diverse field of engineering. It is concerned with the development and design, application, and manufacture of systems and devices that use electric power and signals. Among the most important subjects in the field in the late 1980s are electric power and machinery, electronic circuits, control systems, computer design, superconductors, solid-state electronics, medical imaging systems, robotics, lasers, radar, consumer electronics, and fiber optics.

Despite its diversity, electrical engineering can be divided into four main branches: electric power and machinery, electronics, communications and control, and computers.


Electric Power and Machinery

The field of electric power is concerned with the design and operation of systems for generating, transmitting, and distributing electric power. Engineers in this field have brought about several important developments since the late 1970s. One of these is the ability to transmit power at extremely high voltages in both the direct current (DC) and alternating current (AC) modes, reducing power losses proportionately. Another is the real-time control of power generation, transmission, and distribution, using computers to analyze the data fed back from the power system to a central station and thereby optimizing the efficiency of the system while it is in operation.
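The benefit of extremely high transmission voltages follows from simple circuit arithmetic: for a fixed power P carried over a line of resistance R, the current is I = P/V, so the resistive loss I²R falls with the square of the voltage. A minimal sketch (the line and load values are hypothetical, chosen only to show the scaling):

```python
def line_loss(power_w, voltage_v, resistance_ohm):
    """Resistive loss in a transmission line: I^2 * R, with I = P / V."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

# Hypothetical 100 MW load over a 10-ohm line: raising the transmission
# voltage tenfold cuts the resistive loss by a factor of one hundred.
low = line_loss(100e6, 110e3, 10.0)    # about 8.3 MW lost at 110 kV
high = line_loss(100e6, 1100e3, 10.0)  # about 83 kW lost at 1,100 kV
print(low / high)  # 100.0
```

This square-law saving is why long-distance lines, AC or DC, run at the highest voltages insulation and equipment allow.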

A significant advance in the engineering of electric machinery has been the introduction of electronic controls that enable AC motors to run at variable speeds by adjusting the frequency of the current fed into them. DC motors have also been made to run more efficiently this way. See also Electric Motors and Generators; Electric Power Systems.
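The speed adjustment described above rests on the standard synchronous-speed relation for AC machines: rpm = 120 × f / p, where f is the supply frequency in hertz and p the number of poles. A minimal sketch (the motor values are illustrative):

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Synchronous speed of an AC machine in revolutions per minute."""
    return 120.0 * frequency_hz / poles

# A 4-pole motor on 60 Hz mains turns at 1800 rpm; an electronic drive
# feeding the same motor 30 Hz halves its speed to 900 rpm.
print(synchronous_speed_rpm(60, 4))  # 1800.0
print(synchronous_speed_rpm(30, 4))  # 900.0
```

A variable-frequency drive exploits exactly this relation, synthesizing an adjustable-frequency waveform so the motor speed tracks the demand rather than the fixed mains frequency.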



Electronics

Electronic engineering deals with the research, design, integration, and application of circuits and devices used in the transmission and processing of information. Information is now generated, transmitted, received, and stored electronically on a scale unprecedented in history, and there is every indication that the explosive rate of growth in this field will continue unabated.

Electronic engineers design circuits to perform specific tasks, such as amplifying electronic signals, adding binary numbers, and demodulating radio signals to recover the information they carry. Circuits are also used to generate waveforms useful for synchronization and timing, as in television, and for correcting errors in digital information, as in telecommunications. See also Electronics.

Prior to the 1960s, circuits consisted of separate electronic devices—resistors, capacitors, inductors, and vacuum tubes—assembled on a chassis and connected by wires to form a bulky package. Since then, there has been a revolutionary trend toward integrating electronic devices on a single tiny chip of silicon or some other semiconductive material. The complex task of manufacturing these chips uses the most advanced technology, including computers, electron-beam lithography, micro-manipulators, ion-beam implantation, and ultraclean environments. Much of the research in electronics is directed toward creating even smaller chips, faster switching of components, and three-dimensional integrated circuits.


Communications and Control

Engineers in this field are concerned with all aspects of electrical communications, from fundamental questions such as “What is information?” to the highly practical, such as design of telephone systems. In designing communication systems, engineers rely heavily on various branches of advanced mathematics, such as Fourier analysis, linear systems theory, linear algebra, complex variables, differential equations, and probability theory. See also Mathematics; Matrix Theory and Linear Algebra; Probability.

Engineers work on control systems ranging from the everyday, such as the passenger-actuated controls that run an elevator, to the exotic, such as systems for keeping spacecraft on course. Control systems are used extensively in aircraft and ships, in military fire-control systems, in power transmission and distribution, in automated manufacturing, and in robotics.

Engineers have been working to bring about two revolutionary changes in the field of communications and control: Digital systems are replacing analog ones at the same time that fiber optics are superseding copper cables. Digital systems offer far greater immunity to electrical noise. Fiber optics are likewise immune to interference; they also have tremendous carrying capacity, and are extremely light and inexpensive to manufacture.



Computers

Virtually unknown just a few decades ago, computer engineering is now among the most rapidly growing fields. The electronics of computers involve engineers in the design and manufacture of memory systems, central processing units, and peripheral devices (see Computer). Foremost among the avenues now being pursued are the design of Very Large Scale Integration (VLSI) circuits and new computer architectures. The field of computer science is closely related to computer engineering; however, the task of making computers more “intelligent” (artificial intelligence), through creation of sophisticated programs, development of higher-level machine languages, or other means, is generally regarded as being in the realm of computer science.

One current trend in computer engineering is microminiaturization. Using VLSI, engineers continue to work to squeeze greater and greater numbers of circuit elements onto smaller and smaller chips. Another trend is toward increasing the speed of computer operations through use of parallel processors, superconducting materials, and the like.


Geological and Mining Engineering

This branch of engineering includes activities related to the discovery and exploration of mineral deposits and the financing, construction, development, operation, recovery, processing, purification, and marketing of crude minerals and mineral products. The mining engineer is trained in historical geology, mineralogy, paleontology, and geophysics, and employs such tools as the seismograph and the magnetometer for the location of ore or petroleum deposits beneath the surface of the earth (see Petroleum; Seismology). The surveying and drawing of geological maps and sections is an important part of the work of the engineering geologist, who is also responsible for determining whether the geological structure of a given location is suitable for the building of such large structures as dams.


Industrial or Management Engineering

This field pertains to the efficient use of machinery, labor, and raw materials in industrial production. It is particularly important from the viewpoint of costs and economics of production, safety of human operators, and the most advantageous deployment of automatic machinery.


Mechanical Engineering

Engineers in this field design, test, build, and operate machinery of all types; they also work on a variety of manufactured goods and certain kinds of structures. The field is divided into (1) machinery, mechanisms, materials, hydraulics, and pneumatics; and (2) heat as applied to engines, work and energy, heating, ventilating, and air conditioning. The mechanical engineer, therefore, must be trained in mechanics, hydraulics, and thermodynamics and must be fully grounded in such subjects as metallurgy and machine design. Some mechanical engineers specialize in particular types of machines such as pumps or steam turbines. A mechanical engineer designs not only the machines that make products but the products themselves, and must design for both economy and efficiency. A typical example of the complexity of modern mechanical engineering is the design of an automobile, which entails not only the design of the engine that drives the car but also all its attendant accessories such as the steering and braking systems, the lighting system, the gearing by which the engine's power is delivered to the wheels, the controls, and the body, including such details as the door latches and the type of seat upholstery.


Military Engineering

This branch is concerned with the application of the engineering sciences to military purposes. It is generally divided into permanent land defense (see Fortification and Siege Warfare) and field engineering. In war, army engineer battalions have been used to construct ports, harbors, depots, and airfields. In the U.S., military engineers also construct some public works, national monuments, and dams (see Army Corps of Engineers).

Military engineering has become an increasingly specialized science, resulting in separate engineering subdisciplines such as ordnance, which applies mechanical engineering to the development of guns and chemical engineering to the development of propellants, and the Signal Corps, which applies electrical engineering to all problems of telegraph, telephone, radio, and other communication.


Naval or Marine Engineering

Engineers who have the overall responsibility for designing and supervising construction of ships are called naval architects. The ships they design range in size from ocean-going supertankers as much as 1300 feet long to small tugboats that operate in rivers and bays. Regardless of size, ships must be designed and built so that they are safe, stable, strong, and fast enough to perform the type of work intended for them. To accomplish this, a naval architect must be familiar with the variety of techniques of modern shipbuilding, and must have a thorough grounding in applied sciences, such as fluid mechanics, that bear directly on how ships move through water.

Marine engineering is a specialized branch of mechanical engineering devoted to the design and operation of systems, both mechanical and electrical, needed to propel a ship. In helping the naval architect design ships, the marine engineer must choose a propulsion unit, such as a diesel engine or geared steam turbine, that provides enough power to move the ship at the speed required. In doing so, the engineer must take into consideration how much the engine and fuel bunkers will weigh and how much space they will occupy, as well as the projected costs of fuel and maintenance. See also Ships and Shipbuilding.


Nuclear Engineering

This branch of engineering is concerned with the design and construction of nuclear reactors and devices, and the manner in which nuclear fission may find practical applications, such as the production of commercial power from the energy generated by nuclear reactions and the use of nuclear reactors for propulsion and of nuclear radiation to induce chemical and biological changes. In addition to designing nuclear reactors to yield specified amounts of power, nuclear engineers develop the special materials necessary to withstand the high temperatures and concentrated bombardment of nuclear particles that accompany nuclear fission and fusion. Nuclear engineers also develop methods to shield people from the harmful radiation produced by nuclear reactions and to ensure safe storage and disposal of fissionable materials. See Nuclear Energy.


Safety Engineering

This field of engineering has as its object the prevention of accidents. In recent years safety engineering has become a specialty adopted by individuals trained in other branches of engineering. Safety engineers develop methods and procedures to safeguard workers in hazardous occupations. They also assist in designing machinery, factories, ships, and roads, suggesting alterations and improvements to reduce the likelihood of accidents. In the design of machinery, for example, the safety engineer seeks to cover all moving parts or keep them from accidental contact with the operator, to put cutoff switches within reach of the operator, and to eliminate dangerous projecting parts. In designing roads the safety engineer seeks to avoid such hazards as sharp turns and blind intersections, known to result in traffic accidents. Many large industrial and construction firms, and insurance companies engaged in the field of workers' compensation, today maintain safety engineering departments. See Industrial Safety; National Safety Council.


Sanitary Engineering

This is a branch of civil engineering, but because of its great importance for a healthy environment, especially in dense urban-population areas, it has acquired the importance of a specialized field. It chiefly deals with problems involving water supply, treatment, and distribution; disposal of community wastes and reclamation of useful components of such wastes; control of pollution of surface waterways, groundwaters, and soils; milk and food sanitation; housing and institutional sanitation; rural and recreational-site sanitation; insect and vermin control; control of atmospheric pollution; industrial hygiene, including control of light, noise, vibration, and toxic materials in work areas; and other fields concerned with the control of environmental factors affecting health. The methods used for supplying communities with pure water and for the disposal of sewage and other wastes are described separately. See Plumbing; Sewage Disposal; Solid Waste Disposal; Water Pollution; Water Supply and Waterworks.



Scientific methods of engineering are applied in several fields not connected directly to manufacture and construction. Modern engineering is characterized by the broad application of what is known as systems engineering principles. The systems approach is a methodology of decision-making in design, operation, or construction that adopts (1) the formal process included in what is known as the scientific method; (2) an interdisciplinary, or team, approach, using specialists from not only the various engineering disciplines, but from legal, social, aesthetic, and behavioral fields as well; (3) a formal sequence of procedure employing the principles of operations research.

In effect, therefore, transportation engineering in its broadest sense includes not only design of the transportation system and building of its lines and rolling stock, but also determination of the traffic requirements of the route followed. It is also concerned with setting up efficient and safe schedules, and the interaction of the system with the community and the environment. Engineers in industry work not only with machines but also with people, to determine, for example, how machines can be operated most efficiently by the workers. A small change in the location of the controls of a machine or of its position with relation to other machines or equipment, or a change in the muscular movements of the operator, often results in greatly increased production. This type of engineering work is called time-study engineering.

A related field of engineering, human-factors engineering, also known as ergonomics, received wide attention in the late 1970s and the '80s when the safety of nuclear reactors was questioned following serious accidents that were caused by operator errors, design failures, and malfunctioning equipment. Human-factors engineering seeks to establish criteria for the efficient, human-centered design of, among other things, the large, complicated control panels that monitor and govern nuclear reactor operations.

Among various recent trends in the engineering profession, licensing and computerization are the most widespread. Today, many engineers, like doctors and lawyers, are licensed by the state. Approvals by professionally licensed engineers are required for construction of public and commercial structures, especially installations where public and worker safety is a consideration. The trend in modern engineering offices is overwhelmingly toward computerization. Computers are increasingly used for solving complex problems as well as for handling, storing, and generating the enormous volume of data modern engineers must work with.

The National Academy of Engineering, founded in 1964 as a private organization, sponsors engineering programs aimed at meeting national needs, encourages new research, and is concerned with the relationship of engineering to society.

Defense Systems



Defense Systems, combination of electronic warning networks and military strategies designed to protect a country from a strategic missile or bomber attack. Defense systems use radar and satellite detection systems to monitor a nation’s airspace, providing data that would allow defense forces to detect an attack and coordinate a response. Several large countries, including the United States, also maintain an arsenal of offensive nuclear weapons as a deterrent to a nuclear attack.



Modern defense systems originated during World War II (1939-1945) in response to the advent of long-range bomber aircraft. Radar stations in Great Britain were installed to detect approaching German bombers and give British fighter aircraft time to intercept the enemy. Before World War II, most nations had focused their national defenses on assaults from land or sea.

After World War II, the United States enjoyed a brief period of military superiority as the sole possessor of nuclear weapons, but the detonation of the first Soviet atomic bomb in 1949 brought a new military threat. The United States began to focus its defenses on early detection of long-range bombers, to give U.S. fighter aircraft enough time to respond to a large-scale attack.

The ballistic missile threat was the most important development in defense systems. When the first German V-2 ballistic missiles struck England on September 8, 1944, a new day in warfare dawned. The V-2 traveled at supersonic speeds and was impossible to intercept. After World War II an immediate missile race began between the United States and the Union of Soviet Socialist Republics (USSR). The goal was to build upon German technology and create a long-range intercontinental ballistic missile, or ICBM, that could deliver a nuclear warhead.



By 1958, both the United States and the USSR had successfully tested ICBMs and immediately began to improve them. As a result, both nations became extremely vulnerable to attack. The amount of warning that existing national radar systems could give for an incoming bomber attack had been measured in hours, but an ICBM could be launched from a base in the USSR and strike the United States within 30 minutes. There were no technical means to stop a missile once launched, so national leaders turned to the idea of deterrence.
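The contrast between hours of bomber warning and minutes of missile warning can be checked with back-of-envelope arithmetic. The figures below are illustrative assumptions for a sketch (a nominal 8,000 km great-circle distance, a subsonic cruise speed, and an average midcourse missile speed), not official performance data:

```python
# Rough warning-time comparison: subsonic bomber vs. ICBM.
# All numbers are assumed round figures for illustration only.

DISTANCE_KM = 8000            # assumed USSR-to-US great-circle distance
bomber_speed_kmh = 850        # assumed subsonic bomber cruise speed
icbm_speed_kms = 7.0          # assumed average midcourse speed of an ICBM

bomber_hours = DISTANCE_KM / bomber_speed_kmh        # time = distance / speed
icbm_minutes = DISTANCE_KM / icbm_speed_kms / 60     # seconds -> minutes

print(f"Bomber: ~{bomber_hours:.1f} hours")    # on the order of 9-10 hours
print(f"ICBM:   ~{icbm_minutes:.0f} minutes")  # on the order of 20 minutes
```

Even with generous rounding, the missile compresses the defender's decision window by a factor of roughly 25 to 30, which is why deterrence replaced interception as the core strategy.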

Deterrence uses the threat of an offensive attack as a defense—or deterrent—against such an attack. The USSR, with its initial lead in rocket and missile technology, adopted a so-called first-strike strategy. The Soviet leaders recognized that an exchange of nuclear missiles would be so devastating to both countries that the USSR had to launch its missiles first, and in such numbers that a crippled United States would not be able to mount a significant retaliatory strike. The United States publicly said it would never undertake a first strike, deciding instead to develop a second-strike capability of such magnitude that no Soviet first strike would avoid retaliation. This strategy became known as mutually assured destruction, which had the appropriate acronym MAD. The arms buildup between the United States and the USSR, and the tensions surrounding the buildup, became known as the Cold War (because no direct combat took place). Although the world came close to nuclear war on several occasions (see Cuban Missile Crisis), the USSR never dared to launch a first strike, so the United States never had to retaliate.


Defense Systems of Other Countries

Although the Cold War ended in the early 1990s, major military powers continue to employ some version of offensive deterrent and defensive warning capability. Shortly after World War II, political and military alliances were created to offer mutual defense. The United States, Britain, France, and several other countries formed the North Atlantic Treaty Organization (NATO), while the USSR and its satellite countries responded with the Warsaw Pact. Practically all countries monitor their own airspace, but for strategic defense the members of these alliances generally looked to either the United States or the USSR for protection.



Several countries, including the United States, Russia, Britain, France, and China, maintain a force of offensive nuclear weapons to deter a nuclear attack. The offensive capability of the United States rests on what is known as the Nuclear Triad, composed of strategic bombers, land-based ICBMs, and submarine-launched ballistic missiles. It was devised so that if any one of the three “legs” were destroyed by an attack, the other two could still function. The nuclear powers of the world maintain some or all of these forces.



The United States had initially (from 1945 through about 1960) depended upon the bomber aircraft of the Strategic Air Command (SAC) to deter an attack from the USSR. In the early years of SAC, these aircraft included the Boeing B-50 and the Consolidated B-36. Later aircraft, such as the Boeing B-47 Stratojet and B-52 Stratofortress jet bombers, were faster and could carry larger payloads. The United States currently maintains B-52, Rockwell B-1B, and Northrop Grumman B-2 bombers capable of being armed with nuclear weapons as part of its strategic force.



The USSR began an intensive ICBM development program after World War II, and the United States responded in kind. While the Soviet bomber fleet never approached that of the United States in size or capability, the Soviet ICBM fleet was truly formidable. The USSR developed greater numbers of ICBMs than the United States, and these had larger warheads, greater range, and superior accuracy compared with U.S. weapons. The USSR also was successful in hardening (or making resistant to a nuclear attack) its silo launch facilities to a far greater degree than the United States was able to do.



A similar process followed for the submarine-launched ballistic missile (SLBM), when in the late 1950s the USSR built several submarines able to carry the SS-N-4 Sark missile. In 1960 the United States sent the USS George Washington on patrol, carrying Polaris SLBMs. As technology improved, the SLBM assumed greater importance. A ballistic missile submarine is difficult to detect, can remain on duty for weeks at a time without surfacing, and can fire its missiles from beneath the water’s surface.


Coordination and Command

The U.S. Strategic Command monitors defense information from various sources and would coordinate a military response to a nuclear attack. The Strategic Air Command (SAC) was for many years the primary deterrent force. It has been replaced in part by the Air Combat Command. For many years as much as 50 percent of the SAC bomber fleet was on airborne alert, armed with nuclear weapons, and able to attack immediately upon notice.

In the event of an attack, U.S. Strategic Command would collect data and present recommendations to the U.S. president and senior advisers (referred to as the National Command Authority). Only the president can make the decision to use nuclear weapons, even in response to an attack. The plan a president would use to respond to an attack is called the Single Integrated Operational Plan, or SIOP. The SIOP consists of several planned responses to various nuclear scenarios. If the president were to decide to use nuclear weapons, several procedures and code phrases would be used to verify the president's authority. Once these procedures were completed, the military would be authorized to use nuclear weapons. Numerous precautions exist in this process to prevent accidental or unauthorized use of nuclear weapons.

The president and the rest of the National Command Authority would possibly give orders from a modified Boeing 747 called a National Airborne Operations Center (NAOC). By being airborne, command authority is less vulnerable to a ground attack. These airplanes are outfitted with advanced communications equipment so the president can stay in contact with U.S. Strategic Command at all times. U.S. Strategic Command also has a number of airborne command centers that can coordinate military forces in the event that ground centers have been destroyed or damaged.



The consequences of a nuclear exchange would be devastating, with casualties estimated to be in the hundreds of millions on both sides and massive damage to the environment. Both the USSR and the United States were aware of the catastrophic scale of a nuclear exchange, and both built elaborate defensive systems to detect an incoming nuclear attack.


Radar Networks

From 1949 (when the USSR developed nuclear weapons) to 1959 (when ICBMs became operational), the main strategic threat was bombers. To provide advance warning, several radar posts were built across Canada by joint cooperation between Canada and the United States. The first series of linked radar stations was called the Pinetree Line, established in 1954. Two more lines, the Mid-Canada Line and the Distant Early Warning Line (or DEW Line) were created for more complete radar coverage. The DEW Line, comprising 60 radar sites along the 70th parallel, became operational in 1957.

To warn against ICBMs, the Ballistic Missile Early Warning System (BMEWS) was introduced in 1962. It consists of sophisticated radar sites in Greenland, Alaska, and England. These sites could detect, track, and predict impact points of both intercontinental ballistic missiles and smaller intermediate-range ballistic missiles (IRBMs) launched from within the USSR. A typical site has four giant scanner search radars, each 50.3 m (165 ft) high and 122 m (400 ft) long, and one tracking radar, a 25.6 m (84 ft) antenna in a 42.6 m (140 ft) diameter housing. The purpose of the BMEWS is to provide sufficient warning time for U.S. bombers to get airborne and ICBM forces to prepare for a counterstrike.

BMEWS is backed up by the Perimeter Acquisition Radar Attack Characterization System (PARCS). Operating in the U.S. interior, PARCS can detect air traffic over Canada. Four other radar sites monitor the Atlantic and Pacific oceans for possible submarine attacks. These various stations are connected to the North American Aerospace Defense Command (NORAD), to U.S. Strategic Command headquarters, the Pentagon, and to the Royal Canadian Air Force fighter command.



NORAD was activated in 1957 to provide an integrated command for the air defense of the United States and Canada, and to process the information gathered from various radar sites. The reality of ICBMs required the establishment of a detection and tracking system, and the housing of NORAD in a bombproof site located within the interior of Cheyenne Mountain near Colorado Springs, Colorado. With its increased responsibility, NORAD equipment was expanded to include Airborne Warning and Control System (AWACS) aircraft, Over-the-Horizon (OTH) radar that warns against low-altitude cruise missiles, and a network of satellites. The DEW Line was replaced with a superior system called the North Warning System, and the Joint Surveillance System (JSS), operated by the U.S. Air Force and the Federal Aviation Administration, provides additional air traffic coverage. NORAD monitors all of these early warning systems, processes the information, and then relays it to U.S. Strategic Command.


Soviet Air Defense

The USSR built an even more extensive integrated air defense system, covering the country with radar systems, surface-to-air missile sites, and large numbers of interceptors (fast military aircraft designed to destroy attacking airplanes). The USSR built a huge infrastructure of civil and military defense systems, including deep underground blast shelters for the country’s leaders and key industries. Russia continues to maintain this network. The United States has abandoned its rather primitive civil defense efforts of the 1950s and has not replaced them with any other system.


Antiballistic Missile Systems

Active defense systems have been proposed that would use advanced missiles to track and shoot down incoming ICBMs. These are known as antiballistic missile (ABM) systems. The most famous of the antiballistic missile systems was the Strategic Defense Initiative (SDI) proposed by former U.S. president Ronald Reagan in 1983. SDI would have used a combination of laser-equipped satellites and other space-based weapons to destroy ballistic missiles after their launch. Research began on SDI, but the program was eventually cancelled because of its high cost and the easing of global tensions.

The 1972 Antiballistic Missile (ABM) Treaty signed by the United States and the USSR limits the implementation of antiballistic missile systems. Russia has one system in place around Moscow. The United States had a system in North Dakota, but closed it down due to cost and reliability issues. See also Strategic Arms Limitation Talks.

The Patriot is a missile designed to destroy smaller ballistic missiles. Technology of this type continues to be used as the basis of research to counter ICBMs as well as short-range ballistic missiles, like the Scud missile used by Iraq in the 1991 Persian Gulf War. The United States also indirectly defends against some missiles through the antisubmarine warfare combination of radar, aircraft, missiles, attack submarines, and surface ships that track Russian ballistic missile submarines. While none of these weapons have the capability to intercept an enemy missile once launched, they can track and destroy the submarine itself.



With the end of the Cold War between the former Soviet Union and the United States, the threat of an all-out nuclear attack has diminished. It is unlikely that Russia would undertake a massive first strike against the United States, and both countries have significantly reduced their nuclear forces. Still, the threat of nuclear war and the spread of nuclear weapons remain, as evidenced by the nuclear tests of India and Pakistan in 1998. Five other nations admit to having nuclear weapons (their estimated stockpiles are indicated in parentheses): China (434), France (482), Russia (13,200), the United Kingdom (200), and the United States (15,500). Israel is known to have the capability to deploy nuclear weapons, and still other countries, including Iran, Iraq, Libya, and North Korea, are known to have nuclear weapons programs. See also Arms Control, International; Air Warfare.

Space Exploration



Space Exploration, quest to use space travel to discover the nature of the universe beyond Earth. Since ancient times, people have dreamed of leaving their home planet and exploring other worlds. In the latter half of the 20th century, that dream became reality. The space age began with the launch of the first artificial satellites in 1957. A human first went into space in 1961. Since then, astronauts and cosmonauts have ventured into space for ever greater lengths of time, even living aboard orbiting space stations for months on end. Two dozen people have circled the Moon or walked on its surface. At the same time, robotic explorers have journeyed where humans could not go, visiting all but one of the solar system’s major worlds. Unpiloted spacecraft have also visited a host of minor bodies such as moons, comets, and asteroids. These explorations have sparked the advance of new technologies, from rockets to communications equipment to computers. Spacecraft studies have yielded a bounty of scientific discoveries about the solar system, the Milky Way Galaxy, and the universe. And they have given humanity a new perspective on Earth and its neighbors in space.

The first challenge of space exploration was developing rockets powerful enough and reliable enough to boost a satellite into orbit. These boosters needed more than brute force, however; they also needed guidance systems to steer them on the proper flight paths to reach their desired orbits. The next challenge was building the satellites themselves. The satellites needed electronic components that were lightweight, yet durable enough to withstand the acceleration and vibration of launch. Creating these components required the world’s aerospace engineering facilities to adopt new standards of reliability in manufacturing and testing. On Earth, engineers also had to build tracking stations to maintain radio communications with these artificial “moons” as they circled the planet.

Beginning in the early 1960s, humans launched probes to explore other planets. The distances traveled by these robotic space travelers required travel times measured in months or years. These spacecraft had to be especially reliable to continue functioning for a decade or more. They also had to withstand such hazards as the radiation belts surrounding Jupiter, particles orbiting in the rings of Saturn, and greater extremes in temperature than are faced by spacecraft in the vicinity of Earth. Despite their great scientific returns, these missions often came with high price tags. Today the world’s space agencies, such as the United States National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA), strive to conduct robotic missions more cheaply and efficiently.

It was inevitable that humans would follow their unpiloted creations into space. Piloted spaceflight introduced a whole new set of difficulties, many of them concerned with keeping people alive in the hostile environment of space. In addition to the vacuum of space, which requires any piloted spacecraft to carry its own atmosphere, there are other deadly hazards: solar and cosmic radiation, micrometeorites (small bits of rock and dust) that might puncture a spacecraft hull or an astronaut’s pressure suit, and extremes of temperature ranging from frigid darkness to broiling sunlight. It was not enough simply to keep people alive in space—astronauts needed to have a means of accomplishing useful work while they were there. It was necessary to develop tools and techniques for space navigation, and for conducting scientific observations and experiments. Astronauts would have to be protected when they ventured outside the safety of their pressurized spacecraft to work in the vacuum. Missions and hardware would have to be carefully designed to help ensure the safety of space crews in any foreseeable emergency, from liftoff to landing.

The challenges of conducting piloted spaceflights were great enough for missions that orbited Earth. They became even more daunting for the Apollo missions, which sent astronauts to the Moon. The achievement of sending astronauts to the lunar surface and back represents a summit of human spaceflight.

After the Apollo program, the emphasis in piloted missions shifted to long-duration spaceflight, as pioneered aboard Soviet and U.S. space stations. The development of reusable spacecraft became another goal, giving rise to the U.S. space shuttle fleet. Today, efforts focus on keeping people healthy during space missions lasting a year or more—the duration needed to reach nearby planets—and in lowering the cost of sending satellites into orbit.



The desire to explore the heavens is probably as old as humankind, but in the strictest sense, the history of space exploration begins very recently, with the launch of the first artificial satellite, Sputnik 1, which the Soviets sent into orbit in 1957. Soviet cosmonaut Yuri Gagarin became the first human in space just a few years later, in 1961. The decades from the 1950s to the 1990s have been full of new “firsts,” new records, and advances in technology.


First Forays into Space

Although artificial satellites and piloted spacecraft are achievements of the later 20th century, the technology and principles of space travel stretch back hundreds of years, to the invention of rockets in the 11th century and the formulation of the laws of motion in the 17th century. The power of rockets to lift objects into space is described by a law of motion that was formulated by English scientist Sir Isaac Newton in the 1680s. Newton’s third law of motion states that every action causes an equal and opposite reaction. As predicted by Newton’s law, the rearward rush of gases expelled by the rocket’s engine causes the rocket to be propelled forward. It took nine centuries from the invention of rockets and almost three centuries from the formulation of Newton’s third law for humans to send an object into space. In space, the motions of satellites and interplanetary spacecraft are described by the laws of planetary motion formulated by German astronomer Johannes Kepler, also in the 17th century. For example, one of Kepler’s laws states that the closer a satellite is to Earth, the faster it orbits.
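Kepler's relation between altitude and speed can be illustrated with a short calculation. This is a generic sketch using the standard circular-orbit formula v = sqrt(mu/r), where mu is Earth's gravitational parameter and r the orbital radius; the two altitudes are arbitrary examples chosen for contrast:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# A low orbit is faster than a high one, as Kepler's laws predict.
v_low = circular_orbit_speed(300e3)     # low Earth orbit, ~7.7 km/s
v_geo = circular_orbit_speed(35_786e3)  # geostationary altitude, ~3.1 km/s
print(f"300 km: {v_low/1000:.2f} km/s, GEO: {v_geo/1000:.2f} km/s")
```

A satellite 300 km up circles Earth in about 90 minutes, while one at geostationary altitude takes a full day, which is exactly the faster-when-closer behavior described above.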


Rockets and Rocket Builders

Rockets made their first recorded appearance as weapons in 12th-century China, but they probably originated in the 11th century. Fueled by gunpowder, they were launched against enemy troops. In the centuries that followed, these solid-fuel rockets became part of the arsenals of Europe. In 1814, during an attack on New Orleans, Louisiana, the British fired rockets—with little effect—at American troops.

In Russia, nearly a century later, a lone schoolteacher named Konstantin Tsiolkovsky envisioned how to use rockets to voyage into space. In a series of detailed treatises, including “The Exploration of Cosmic Space With Reactive Devices” (1903), Tsiolkovsky explained how a multi-stage, liquid-fuel rocket could propel humans to the Moon.
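The central result of Tsiolkovsky's 1903 treatise is the ideal rocket equation, which also explains why he proposed multiple stages. The sketch below uses illustrative assumed numbers (a 3 km/s exhaust velocity and a 10:1 fueled-to-empty mass ratio per stage), not figures from any real vehicle:

```python
import math

def delta_v(v_exhaust: float, m_initial: float, m_final: float) -> float:
    """Tsiolkovsky's rocket equation: dv = v_e * ln(m0 / m1)."""
    return v_exhaust * math.log(m_initial / m_final)

V_E = 3000.0  # assumed exhaust velocity, m/s

# One stage with a 10:1 ratio of fueled mass to empty mass:
single_stage = delta_v(V_E, 10.0, 1.0)      # roughly 6.9 km/s

# Two such stages fired in sequence: each contributes its own delta-v,
# because the empty first stage is discarded before the second ignites.
two_stage = 2 * delta_v(V_E, 10.0, 1.0)     # roughly 13.8 km/s

print(f"single: {single_stage:.0f} m/s, two-stage: {two_stage:.0f} m/s")
```

Reaching low Earth orbit requires on the order of 9 km/s of delta-v once gravity and drag losses are included, so under these assumptions a single stage falls short while two stages comfortably exceed it; this is the logarithm's penalty that staging circumvents.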

Tsiolkovsky did not have the means to build real liquid-fuel rockets. Robert Goddard, a physics professor in Worcester, Massachusetts, took up that effort. In 1926 he succeeded in building and launching the world’s first liquid-fuel rocket, which soared briefly above a field near his home. Beginning in 1940, after moving to Roswell, New Mexico, Goddard built a series of larger liquid-fuel rockets that flew as high as 90 m (300 ft). Meanwhile, beginning in 1936 at the California Institute of Technology, other experimenters made advances in solid-fuel rockets. During World War II (1939-1945), engineers developed solid-fuel rockets that could be attached to an airplane to provide a boost during takeoff.

The greatest strides in rocketry during the first half of the 20th century occurred in Germany. There, mathematician and physicist Hermann Oberth and architect Walter Hohmann theorized about rocketry and interplanetary travel in the 1920s. During World War II, Nazi Germany undertook the first large-scale rocket development program, headed by a young engineer named Wernher von Braun. Von Braun’s team created the V-2, a rocket that burned an alcohol-water mixture with liquid oxygen to produce 250,000 newtons (56,000 lb) of thrust. The Germans launched thousands of V-2s carrying explosives against targets in Britain and The Netherlands. While they did not prove to be an effective weapon, V-2s did become the first human-made objects to reach altitudes above 80 km (50 mi)—the height at which outer space is considered to begin—before falling back to Earth. The V-2 inaugurated the era of modern rocketry.


Early Artificial Satellites

During the years following World War II, the United States and the Union of Soviet Socialist Republics (USSR) engaged in efforts to construct intercontinental ballistic missiles (ICBMs) capable of traveling thousands of miles armed with a nuclear warhead. In August 1957 Soviet engineers, led by rocket pioneer Sergei Korolyev, were the first to succeed with the launch of their R-7 rocket, which stood almost 30 m (100 ft) tall and produced 3.8 million newtons (880,000 lb) of thrust at liftoff. Although its primary purpose was for use as a weapon, Korolyev and his team adapted the R-7 into a satellite launcher. On October 4, 1957, they launched the world’s first artificial satellite, called Sputnik (“fellow traveler”). Although it was only a simple 58-cm (23-in) aluminum sphere containing a pair of radio transmitters, Sputnik’s successful orbits around Earth marked a huge step in technology and ushered in the space age. On November 3, 1957, the Soviets launched Sputnik 2, which weighed 508 kg (1,121 lb) and contained the first space traveler—a dog named Laika. Laika survived for several days aboard Sputnik 2, but rising temperatures within the satellite caused her to die of heat exhaustion before her air supply ran out.

News of the first Sputnik intensified efforts to launch a satellite in the United States. The initial U.S. satellite launch attempt on December 6, 1957, failed disastrously when the Vanguard launch rocket exploded moments after liftoff. Success came on January 31, 1958, with the launch of the satellite Explorer 1. Instruments aboard Explorer 1 made the first detection of the Van Allen belts, which are bands of trapped radiation surrounding Earth (see Radiation Belts). This launch also represented a success for Wernher von Braun, who had been brought to the United States with many of his engineers after World War II. Von Braun’s team had created the Jupiter C (an upgraded version of their Redstone missile), which launched Explorer 1.

The satellites that followed Sputnik and Explorer into Earth orbit provided scientists and engineers with a variety of new knowledge. For example, scientists who tracked radio signals from the U.S. satellite Vanguard 1, launched in March 1958, determined that Earth is slightly flattened at the poles. In August 1959 Explorer 6 sent back the first photo of Earth from orbit. Even as these satellites revealed new details about our own planet, efforts were underway to reach our nearest neighbor in space, the Moon.


Unpiloted Lunar Missions

Early in 1958 the United States and the USSR were both working hard to be the first to send a satellite to the Moon. Initial attempts by both sides failed. On October 11, 1958, the United States launched Pioneer 1 on a mission to orbit the Moon. It did not attain enough speed to escape Earth’s gravity, but it climbed to an altitude of more than 110,000 km (more than 70,000 mi). In early December 1958 Pioneer 3 also failed to leave high Earth orbit. It did, however, discover a second Van Allen belt of radiation surrounding Earth.

On January 2, 1959, after two earlier failed missions, the USSR launched Luna 1, which was intended to hit the Moon. Although it missed its target, Luna 1 did become the first artificial object to escape Earth orbit. On September 14, 1959, Luna 2 became the first artificial object to strike the Moon, impacting east of the Mare Serenitatis (Sea of Serenity). In October 1959, Luna 3 flew around the Moon and radioed the first pictures of the far side of the Moon, which is not visible from Earth.

In the United States, efforts to reach the Moon did not resume until 1962, with a series of probes called Ranger. The early Rangers were designed to eject an instrument capsule onto the Moon’s surface just before the main spacecraft crashed into the Moon. These missions were plagued by failures—only Ranger 4 struck the Moon, and the spacecraft had already ceased functioning by that time. Rangers 6 through 9 were similar to the early Rangers, but did not have instrument packages. They carried television cameras designed to send back pictures of the Moon before the spacecraft crashed. On July 31, 1964, Ranger 7 succeeded in sending back the first high-resolution images of the Moon before crashing, as planned, into the surface. Rangers 8 and 9 repeated the feat in 1965.

By then, the United States had embarked on the Apollo program to land humans on the Moon (see the Piloted Spaceflight section of this article for a discussion of the Apollo program). With an Apollo landing in mind, the next series of U.S. lunar probes, named Surveyor, was designed to “soft-land” (that is, land without crashing) on the lunar surface and send back pictures and other data to aid Apollo planners. As it turned out, the Soviets made their own soft landing first, with Luna 9, on February 3, 1966. Luna 9 radioed the first pictures of a dusty moonscape from the lunar surface. Surveyor 1 successfully reached the surface on June 2, 1966. Six more Surveyor missions followed; all but two were successful. The Surveyors sent back thousands of pictures of the lunar surface. Two of the probes were equipped with a mechanical claw, remotely operated from Earth, that enabled scientists to investigate the consistency of the lunar soil.

At the same time, the United States launched the Lunar Orbiter probes, which began circling the Moon to map its surface in unprecedented detail. Lunar Orbiter 1 began taking pictures on August 18, 1966. Four more Lunar Orbiters continued the mapping program, which gave scientists thousands of high-resolution photographs covering nearly all of the Moon.

Beginning in 1968 the USSR sent a series of unpiloted Zond probes—actually a lunar version of their piloted Soyuz spacecraft—around the Moon. These flights, initially designed as preparation for planned piloted missions that would orbit the Moon, returned high-quality photographs of the Moon and Earth. Two of the Zonds carried biological payloads with turtles, plants, and other living things.

Although both the United States and the USSR were achieving successes with their unpiloted lunar missions, the Americans were pulling steadily ahead in their piloted program. As their piloted lunar program began to lag, the Soviets made plans for robotic landers that would gather a sample of lunar soil and carry it to Earth. Although this did not occur in time to upstage the Apollo landings as the Soviets had hoped, Luna 16 did carry out a sample return in September 1970, returning to Earth with 100 g (4 oz) of rock and soil from the Moon’s Mare Fecunditatis (Sea of Fertility). In November 1970 Luna 17 landed with a remote-controlled rover called Lunokhod 1. The first wheeled vehicle on the Moon, Lunokhod 1 traveled 10.5 km (6.4 mi) across the Sinus Iridum (Bay of Rainbows) during ten months of operations, sending back pictures and other data. Only three more lunar probes followed. Luna 20 returned samples in February 1972. Lunokhod 2, carried aboard the Luna 21 lander, reached the Moon in January 1973. Then, in August 1976 Luna 24 ended the first era of lunar exploration.

Exploration of the Moon resumed in February 1994 with the U.S. probe called Clementine, which circled the Moon for three months. In addition to surveying the Moon with high-resolution cameras, Clementine gathered the first comprehensive data on lunar topography using a laser altimeter. Clementine’s laser altimeter bounced laser pulses off the Moon’s surface, measuring the time they took to return in order to determine the height of surface features.
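The altimetry measurement described above is simple time-of-flight arithmetic: a light pulse covers the spacecraft-to-surface distance twice, so the one-way range is half the round-trip time multiplied by the speed of light. A minimal sketch (the pulse timing below is illustrative, not actual Clementine data):

```python
# Laser altimetry: one-way range from the round-trip travel time of a pulse.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_round_trip_s: float) -> float:
    """One-way distance in meters: the pulse covers the path twice."""
    return C * t_round_trip_s / 2.0

# Illustrative: a pulse returning after ~3.0 ms implies the reflecting
# surface is about 450 km below the spacecraft.
d = range_from_round_trip(3.0e-3)
print(round(d / 1000))  # prints 450 (km)
```

Subtracting this measured range from the spacecraft's known orbital altitude gives the relative height of the surface feature below.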

In January 1998 NASA’s Lunar Prospector probe began circling the Moon in an orbit over the Moon’s north and south poles. Its sensors conducted a survey of the Moon’s composition. In March 1998 the spacecraft found tentative evidence of water in the form of ice mixed with lunar soil at the Moon’s poles. Lunar Prospector also investigated the Moon’s gravitational and magnetic fields. Controllers intentionally crashed the probe into the Moon in July 1999, hoping to see signs of water in the plume of debris raised by the impact. Measurements taken by instruments around Earth found no evidence of water after the crash, but neither did they rule out its presence.


Scientific Satellites

Years before the launch of the first artificial satellites, scientists anticipated the value of putting telescopes and other scientific instruments in orbit around Earth. Orbiting satellites can view large areas of Earth or can provide views of space unobstructed by Earth’s atmosphere.


Earth-Observing Satellites

One main advantage of putting scientific instruments into space is the ability to look down at Earth. Viewing large areas of the planet allows meteorologists, scientists who research Earth’s weather and climate, to study large-scale weather patterns (see Meteorology). More detailed views aid cartographers, or mapmakers, in mapping regions that would otherwise be inaccessible to people. Researchers who study Earth’s land masses and oceans also benefit from having an orbital vantage point.

Beginning in 1960 with the launch of U.S. Tiros I, weather satellites have sent back television images of parts of the planet. The first satellite that could observe most of Earth, NASA’s Earth Resources Technology Satellite 1 (ERTS 1, later renamed Landsat 1), was launched in 1972. Landsat 1 had a polar orbit, circling Earth by passing over the north and south poles. Because the planet rotated beneath Landsat’s orbit, the satellite could view almost any location on Earth once every 18 days. Landsat 1 was equipped with cameras that recorded images not just of visible light but of other wavelengths in the electromagnetic spectrum (see Electromagnetic Radiation). These cameras provided a wealth of useful data. For example, images made in infrared light let researchers discriminate between healthy crops and diseased ones. Six additional Landsats were launched between 1975 and 1999.
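The crop-health application mentioned above rests on a physical fact: healthy vegetation reflects near-infrared light strongly and red light weakly, while stressed vegetation does the opposite. A standard way to quantify this with Landsat-type multispectral data is the normalized difference vegetation index (NDVI); the reflectance values below are invented for illustration, not actual Landsat measurements:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation yields values approaching +1; bare soil or
    stressed crops yield values closer to 0."""
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)   # strong NIR reflectance, little red
stressed = ndvi(nir=0.30, red=0.20)  # weaker NIR, more red reflected
print(healthy > stressed)  # prints True
```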

The success of the Landsat satellites encouraged other nations to place Earth-monitoring satellites in orbit. France launched a series of satellites called SPOT beginning in 1986, and Japan launched the MOS-IA (Marine Observation System) in 1987. The Indian Remote Sensing satellite, IRS-IA, began operating in 1988. An international team of scientists and engineers launched the Terra satellite in December 1999. The satellite carries five instruments for observing Earth and monitoring the health of the planet. NASA, a member organization of the team, released the first images taken by the satellite in April 2000.


Astronomical Satellites

Astronomical objects such as stars emit radiation, or radiating energy, in the form of visible light and many other types of electromagnetic radiation. Different wavelengths of radiation provide astronomers with different kinds of information about the universe. Infrared radiation, with longer wavelengths than visible light, can reveal the presence of interstellar dust clouds or other objects that are not hot enough to emit visible light. X rays, a high-energy form of radiation with shorter wavelengths than visible light, can indicate extremely high temperatures caused by violent collisions or other events. Earth orbit, above the atmosphere, has proved to be an excellent vantage point for astronomers. This is because Earth’s atmosphere absorbs high-energy radiation, such as ultraviolet rays, X rays, and gamma rays. While such absorption shields the surface of Earth and allows life to exist on the planet, it also hides many celestial objects from ground-based telescopes. In the early 1960s, rockets equipped with scientific instruments (called sounding rockets) provided brief observations of space beyond our atmosphere, but orbiting satellites have offered far more extensive coverage.
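The link between wavelength and energy in the paragraph above is the Planck relation, E = hc/λ: the shorter the wavelength, the more energy each photon carries. A quick check with rough representative wavelengths (chosen for illustration):

```python
H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s
EV = 1.602e-19 # joules per electron volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in electron volts for a given wavelength."""
    return H * C / wavelength_m / EV

infrared = photon_energy_ev(10e-6)   # ~10 micrometer infrared: ~0.12 eV
visible  = photon_energy_ev(500e-9)  # ~500 nm visible light:   ~2.5 eV
xray     = photon_energy_ev(1e-9)    # ~1 nm X ray:             ~1240 eV
print(infrared < visible < xray)  # prints True
```

This is why X-ray photons can flag violent, high-temperature events while infrared photons reveal cool dust too faint to glow in visible light.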

Britain launched the first astronomical satellite, Ariel 1, in 1962 to study cosmic rays and ultraviolet and X-ray radiation from the Sun. In 1968 NASA launched the first Orbiting Astronomical Observatory, OAO 1, equipped with an ultraviolet telescope. Uhuru, a U.S. satellite designed for X-ray observations, was launched in 1970. Copernicus, officially designated OAO 3, was launched in 1972 to detect cosmic X-ray and ultraviolet radiation. In 1978 NASA’s Einstein Observatory, officially designated High-Energy Astrophysical Observatory 2 (HEAO 2), reached orbit, becoming the first X-ray telescope that could provide images comparable in detail to those provided by visible-light telescopes. The Infrared Astronomical Satellite (IRAS), launched in 1983, was a cooperative effort by the United States, The Netherlands, and Britain. IRAS provided the first map of the universe in infrared wavelengths and was one of the most successful astronomical satellites. The Cosmic Background Explorer (COBE) was launched in 1989 by NASA and discovered further evidence for the big bang, the theoretical explosion at the beginning of the universe.

The Hubble Space Telescope was launched into orbit from the U.S. space shuttle in 1990, equipped with a 2.4-m (94-in) telescope and a variety of high-resolution sensors produced by the United States and European countries. Flaws in Hubble’s mirror were corrected by shuttle astronauts in 1993, enabling Hubble to provide astronomers with spectacularly detailed images of the heavens. NASA launched the Chandra X-Ray Observatory in 1999. Chandra is named after American astrophysicist Subrahmanyan Chandrasekhar and has eight times the resolution of any previous X-ray telescope.


Other Satellites

In addition to observing Earth and the heavens from space, satellites have had a variety of other uses. Corona, the first U.S. spy satellite program, began in 1958. The first Corona satellite reached orbit in 1960 and provided photographs of Soviet missile bases. In the decades that followed, spy satellites, such as the U.S. Keyhole series, became more sophisticated. Details of these systems remain classified, but it has been reported that they can attain enough resolution to detect an object the size of a car license plate from an altitude of 160 km (100 mi) or more.

Other U.S. military satellites have included the Defense Support Program (DSP) for the detection of ballistic missile launches and nuclear weapons tests. The Defense Meteorological Support Program (DMSP) satellites have provided weather data. And the Defense Satellite Communications System (DSCS) has provided secure transmission of voice and data. White Cloud is the name of a U.S. Navy surveillance satellite designed to intercept enemy communications.

Satellites are becoming increasingly valuable for navigation. The Global Positioning System (GPS) was originally developed for military use. Navstar GPS satellites have been launched since 1978 to form a constellation; each satellite orbits Earth every 12 hours and continuously emits navigation signals. Military pilots and navigators use GPS signals to calculate their precise location, altitude, and velocity, as well as the current time. The GPS signals are remarkably accurate: time can be determined to within a millionth of a second, velocity to within a fraction of a kilometer per hour, and location to within a few meters. In addition to their military uses, slightly lower resolution versions of GPS receivers have been developed for civilian use in aircraft, ships, and land vehicles. Hikers, campers, and explorers carry handheld GPS receivers, and some private passenger automobiles now come equipped with a GPS system.
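Position fixing from GPS-style signals boils down to converting signal travel times into ranges from satellites at known positions, then intersecting those ranges. A toy two-dimensional analogue of that geometry (real receivers work in three dimensions and must also solve for their own clock error; all positions here are made up for illustration):

```python
C = 299_792_458.0  # signal propagation speed, m/s

def position_2d(sats, ranges):
    """Locate a receiver from three known satellite positions and ranges.
    Subtracting the circle equations pairwise cancels the x^2 + y^2 terms,
    leaving a 2x2 linear system solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Made-up geometry: ranges are recovered from signal travel times,
# exactly as a receiver converts time delays into distances.
sats = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
travel_times = [d / C for d in (5**0.5, 13**0.5, 2**0.5)]  # receiver at (1, 2)
ranges = [C * t for t in travel_times]
print(position_2d(sats, ranges))  # approximately (1.0, 2.0)
```

The timing requirement quoted in the text follows directly: an error of one millionth of a second in a travel-time measurement corresponds to roughly 300 m of range error, which is why GPS satellites carry atomic clocks.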


Planetary Studies

Even as the United States and the USSR raced to explore the Moon, both countries were also readying missions to travel farther afield. Earth’s closest neighbors, Venus and Mars, became the first planets to be visited by spacecraft in the mid-1960s. By the close of the 20th century, spacecraft had visited every planet in the solar system, except for the outermost planet—tiny, frigid Pluto.


Mercury
Only one spacecraft has visited the solar system’s innermost planet, Mercury. The U.S. probe Mariner 10 flew past Mercury on March 29, 1974, and sent back close-up pictures of a heavily cratered world resembling Earth’s Moon. Mariner 10’s flyby also helped scientists refine measurements of the planet’s size and density. It revealed that Mercury has a weak magnetic field but lacks an atmosphere. After the first flyby, Mariner 10’s orbit brought it past Mercury for two more encounters, in September 1974 and March 1975, which added to the craft’s harvest of data. In its three flybys, Mariner 10 photographed 57 percent of the planet’s surface.


Venus
The U.S. Mariner 2 probe became the first successful interplanetary spacecraft when it flew past Venus on December 14, 1962. Mariner 2 carried no cameras, but it did send back valuable data regarding conditions beneath Venus’s thick, cloudy atmosphere. From measurements by Mariner 2’s sensors, scientists estimated the surface temperature to be 400°C (800°F—hot enough to melt lead), dispelling any notions that Venus might be very similar to Earth.

In 1973 NASA launched Mariner 10 toward a double encounter with Venus and Mercury. As it flew past Venus on February 5, 1974, Mariner 10’s cameras took the first close-up images of Venus’s clouds, including views in ultraviolet light that recorded distinct patterns in the circulation of Venus’s atmosphere.

The USSR explored Venus with their Venera series of probes. Venera 7 made the first successful planetary landing on December 15, 1970, and radioed 23 minutes of data from the Venusian surface, indicating a temperature of nearly 480°C (900°F) and an atmospheric pressure 90 times that on Earth. More Venera successes followed, and on October 22, 1975, Venera 9 landed and sent back black and white images of a rock-strewn plain—the first pictures of a planetary surface beyond Earth. Venera 10 sent back its own surface pictures three days later.

Beginning in 1978, a series of spacecraft examined Venus from orbit around the planet. These probes were equipped with radar that pierced the dense, cloudy atmosphere that hides Venus’s surface, giving scientists a comprehensive, detailed look at the terrain beneath. The first of this series, the U.S. Pioneer Venus Orbiter (see Pioneer (spacecraft)), arrived in December 1978 and operated for almost 14 years. The spacecraft’s radar data were compiled into images that showed 93 percent of the planet’s large-scale topographic features.

The Soviet Venera 15 and 16 orbiters reached Venus in October 1983, each equipped with radar systems that produced high-resolution images. In eight months of mapping operations, the two spacecraft mapped much of Venus’s northern hemisphere, sending back images of mountains, plains, craters, and what appeared to be volcanoes.

After being released from the space shuttle Atlantis, NASA’s radar-equipped Magellan orbiter traveled through space and reached Venus in August 1990. During the next four years Magellan mapped Venus at very high resolution, providing detailed images of volcanoes and lava flows, craters, fractures, mountains, and other features. Magellan showed scientists that the surface of Venus is extremely well preserved and relatively young. It also revealed a history of planetwide volcanic activity that may be continuing today.


Mars
On July 14, 1965, the U.S. Mariner 4 flew past Mars and took pictures of a small portion of its surface, giving scientists their first close-up look at the red planet. To the disappointment of some who expected a more Earthlike world, Mariner’s pictures showed cratered terrain resembling the Moon’s surface. In August 1969 Mariner 6 and 7 sent back more detailed views of craters and the planet’s icy polar caps. On the whole, these pictures seemed to confirm the impression of a moonlike Mars.

NASA’s Mariner 9 went into orbit around Mars in November 1971, providing scientists with the first close-up views of the entire planet. Mariner 9’s pictures revealed giant volcanoes up to five times as high as Mount Everest, a system of canyons that would stretch the length of the continental United States, and—most intriguing of all—winding channels that resemble dry river valleys of Earth. Scientists realized that Mars’s evolution had been more complex and fascinating than they had suspected and that the planet was moonlike in some ways, but surprisingly Earthlike in others.

The USSR’s Mars probes were stymied by technical malfunctions. In November 1971 the Mars 2 spacecraft (see Mars (space program)) went into orbit around the planet and released a landing capsule that crashed without returning any data; in doing so, the capsule became the first artificial object to reach the Martian surface. In December 1971 a lander released by the Mars 3 orbiter reached the surface successfully. However, it fell silent after sending back only 20 seconds of video signals, which contained no usable data. In 1973 two more landing missions also failed. In 1988 the USSR made two unsuccessful attempts to explore the Martian moon Phobos. Contact with the spacecraft Phobos 1 (see Phobos (space program)) was lost due to an error by mission controllers when the spacecraft was on its way to Mars. Phobos 2 reached Martian orbit in January 1989 and sent back images of the planet, but failed before its planned rendezvous with Phobos.

The U.S. Viking probes made the first successful Mars landings in 1976. Two Viking spacecraft, each consisting of an orbiter and lander, left Earth in August and September 1975. Viking 1 went into orbit around Mars in June 1976, and after a lengthy search for a relatively smooth landing site, the Viking 1 lander touched down safely on Mars’s Chryse Planitia (Plain of Gold) on July 20, 1976. The Viking 2 lander reached Mars’s Utopia Planitia (Utopia Plain) on September 3, 1976. Each lander sent back close-up pictures of a dusty surface littered with rocks, under a surprisingly bright sky (due to sunlight reflecting off of airborne dust). The landers also recorded changes in atmospheric conditions at the surface. They searched, without success, for conclusive evidence of microbial life. The landers continued to send back data for several years, while the orbiters took thousands of high-resolution photographs of the planet.

On July 4, 1996, 20 years after Viking 1 arrived, NASA’s Mars Pathfinder spacecraft landed in Mars's Ares Vallis (Mars Valley). Pathfinder used a new landing system featuring pressurized airbags to cushion its impact. The next day, Pathfinder released a 10-kg (22-lb) rover called Sojourner, which became the first wheeled vehicle to operate on another planetary surface. While Pathfinder sent back images, atmospheric measurements, and other data, Sojourner examined rocks and soil with a camera and an Alpha Proton X-ray Spectrometer, which provided data on chemical compositions by measuring how radiation bounced back from rocks and dust. The mission ended when the spacecraft ceased responding to commands from Earth in October 1997.

NASA’s Mars Global Surveyor went into orbit around Mars in September 1997. Designed as a replacement for NASA’s Mars Observer probe, which failed before reaching Mars in 1993, Mars Global Surveyor is equipped with a high-resolution camera and instruments to study the planet’s atmosphere, topography and gravity, surface composition, and magnetic field. A problem with an unstable solar panel delayed the start of its primary mission, mapping the entire planet, for about a year, although the spacecraft began relaying high-resolution images of select areas in early 1998. Its mapping operation, slated to last for one Martian year (about two Earth years), began in March 1999. Unlike previous Mars probes, Mars Global Surveyor adjusted its orbit using a technique called aerobraking, which relies on friction with the planet’s upper atmosphere, rather than rocket engines, to slow the spacecraft and bring it into a proper mapping orbit.
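Aerobraking trades rocket propellant for atmospheric drag: each pass through the thin upper atmosphere removes a little orbital energy, gradually shrinking the orbit. A rough per-pass estimate using the standard drag equation (every parameter value below is invented for illustration; these are not Mars Global Surveyor figures):

```python
def drag_delta_v(rho, v, cd, area, mass, duration):
    """Approximate velocity lost in one atmospheric pass, treating
    density and speed as constant while the craft is in the atmosphere.
    Drag deceleration: a = 0.5 * rho * v^2 * Cd * A / m."""
    decel = 0.5 * rho * v**2 * cd * area / mass  # m/s^2
    return decel * duration

# Invented values: very thin upper-atmosphere density, ~3.4 km/s orbital
# speed, drag coefficient 2.0, 17 m^2 of panel area, a 700-kg spacecraft,
# and ~200 s spent in the densest part of each pass.
dv = drag_delta_v(rho=1e-8, v=3400.0, cd=2.0, area=17.0, mass=700.0,
                  duration=200.0)
print(round(dv, 2))  # prints 0.56 (m/s shaved off per pass)
```

Even a fraction of a meter per second per pass adds up over hundreds of orbits, which is why the technique takes months but needs almost no fuel.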

Mars Pathfinder and Mars Global Surveyor were part of a series of spacecraft that NASA plans to send to Mars about every 18 months. The next two spacecraft in the series, Mars Climate Orbiter and Mars Polar Lander, began their journeys to Mars in December 1998 and January 1999, respectively. Both probes reached Mars in late 1999, but Mars Climate Orbiter crashed into the planet due to a navigational error, and software defects led to the crash landing of Mars Polar Lander. Japan launched the spacecraft Nozomi (Japanese for “hope”), destined for Mars, on July 4, 1998. Nozomi contains equipment developed by scientists from around the world, including Canadian space scientists. This is the first time Canada has participated in a mission to another planet. Nozomi is scheduled to reach Mars in 2003.


The Outer Planets

Pioneer Space Probe

The Pioneer series of U.S. space probes was equipped with cameras and instruments to detect subatomic particles, meteorites, and electric and magnetic fields in the solar system and interstellar space.

The giant gaseous world Jupiter, the solar system’s largest planet, had its first visit from a spacecraft—Pioneer 10—on December 1, 1973. Pioneer 10 flew past Jupiter 21 months after launch and sent back images of the planet’s turbulent, multicolored atmosphere. Pioneer 10 also investigated Jupiter’s intense magnetic field, and the associated belts of trapped radiation. Acting like a slingshot, Jupiter’s powerful gravitational pull accelerated the spacecraft onto a new path that sent it out of the solar system. Pioneer 10 traveled beyond the orbit of Pluto in 1983.
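The slingshot effect described above can be understood through an elastic-collision analogy: in the planet's own reference frame the flyby merely turns the spacecraft's velocity around, but viewed from the Sun the craft departs carrying extra speed borrowed from the planet's orbital motion. An idealized one-dimensional, head-on sketch (real assists involve deflection angles and hyperbolic trajectories, so the actual gain is smaller):

```python
def slingshot_speed(v_craft, v_planet):
    """Idealized head-on gravity assist in one dimension.
    In the planet's frame the spacecraft arrives at v_craft + v_planet
    and leaves at the same speed, direction reversed; transforming back
    to the Sun's frame gives v_craft + 2 * v_planet."""
    return v_craft + 2.0 * v_planet

# A probe meeting a planet head-on at 10 km/s, with the planet
# orbiting the Sun at 13 km/s:
print(slingshot_speed(10.0, 13.0))  # prints 36.0 (km/s outbound)
```

The planet loses a correspondingly tiny amount of orbital momentum, imperceptible given its enormous mass.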

Pioneer 11 made its own inspection of Jupiter, passing the planet on December 1, 1974. Like its predecessor, Pioneer 11 got a gravitational assist from Jupiter. In this case, the spacecraft was sent toward Saturn. Pioneer 11 reached this ringed giant on September 1, 1979, before heading out of the solar system. NASA maintained periodic contact with Pioneer 11 until November 1995, when the probe’s power supply was almost exhausted.

In 1977 the twin Voyager 1 and 2 probes (see Voyager) were launched on the most ambitious space exploration missions yet attempted: a grand tour of the outer solar system. Voyager 1 reached Jupiter in March 1979 and sent back thousands of detailed images of the planet’s cloud-swirled atmosphere and its family of moons. Other sensors probed the planet’s atmosphere and its magnetic field. Voyager discovered that Jupiter is encircled by a tenuous ring of dust, and found three previously unknown moons. The most surprising discovery of the Voyager probes was that the Jovian moon Io is covered with active volcanoes spewing sulfur compounds into space. Io was the first world other than Earth found to be geologically active.

Voyager 1 continued on to a rendezvous with Saturn in November 1980. Its images detailed a variety of complex and sometimes bizarre phenomena within the planet’s rings. It also photographed the Saturnian moons, including planet-sized Titan. Voyager 1 found Titan’s surface obscured by a thick, opaque atmosphere of hydrocarbon smog.

Voyager 2 made its own flybys of Jupiter in July 1979 and of Saturn in August 1981. It continued outward to make the first spacecraft visits to Uranus in January 1986 and Neptune in August 1989. Like Pioneer 10 and 11, the Voyagers are now headed for interstellar space. On February 17, 1998, Voyager 1 became the most distant human-made object, reaching a distance of 10.5 billion km (6.5 billion mi) from Earth. Scientists hope it will continue sending back data well into the 21st century.

NASA’s Galileo orbiter reached Jupiter in December 1995. The spacecraft deployed a probe that entered Jupiter’s atmosphere on December 7, 1995, radioing data for 57 minutes before succumbing to intense pressures. The probe sent back the first measurements of the composition and structure of Jupiter’s atmosphere from within the atmosphere. The Galileo spacecraft then began a long-term mission to study Jupiter’s atmosphere, magnetosphere, and moons from an orbit around the planet. NASA extended the spacecraft’s mission through the year 2003. The extended mission included measurements taken simultaneously by the Galileo orbiter and by a new spacecraft, Cassini, which visited Jupiter on its way to Saturn.

Galileo Orbiter and Probe

The Galileo spacecraft, launched in 1989 with the ultimate destination of Jupiter, carried a number of scientific instruments on board to study the solar system while en route to Jupiter, including a radiometer and ultraviolet, extreme ultraviolet, and near-infrared spectrometers, which record light outside the visible range. Upon arrival at Jupiter in 1995, Galileo released a probe that plunged into the planet’s fiery atmosphere, transmitting vital scientific data before it was destroyed.

NASA’s Cassini spacecraft set out toward Saturn and Saturn’s moon Titan in October 1997. Cassini reached Jupiter at the end of the year 2000 and is scheduled to reach Saturn in 2004. After reaching Saturn, it should release a probe into Titan’s atmosphere.


Other Solar System Missions

Aside from the planets and their moons, space missions have focused on a variety of other solar system objects. The Sun, whose energy affects all other bodies in the solar system, has been the focus of many missions. Between and beyond the orbits of the planets, innumerable smaller bodies—asteroids and comets—also orbit the Sun. All of these celestial objects hold mysteries, and spacecraft have been launched to unlock their secrets.

A number of the earliest satellites were launched to study the Sun. Most of these were Earth-orbiting satellites. The Soviet satellite Sputnik 2, launched in 1957 to become the second satellite in space, carried instruments to detect ultraviolet and X-ray radiation from the Sun. Several of the satellites in the U.S. Pioneer series of the late 1950s through the 1970s gathered data on the Sun and its effects on the interplanetary environment. A series of Earth-orbiting U.S. satellites, known as the Orbiting Solar Observatories (OSO), studied the Sun’s ultraviolet, X-ray, and gamma-ray radiation through an entire cycle of rising and falling solar activity from 1962 to 1978. Helios 2, a solar probe created by the United States and West Germany, was launched into a solar orbit in 1976 and ventured within 43 million km (27 million mi) of the Sun. The U.S. Solar Maximum Mission spacecraft was designed to monitor solar flares and other solar activity during the period when sunspots were especially frequent. After suffering mechanical problems, in 1984 it became the first satellite to be repaired by astronauts aboard the space shuttle. The satellite Yohkoh, a joint effort of Japan, the United States, and Britain, was launched in 1991 to study high-energy radiation from solar flares. The Ulysses mission was created by NASA and the European Space Agency. Launched in 1990, the spacecraft used a gravitational assist from the planet Jupiter to fly over the poles of the Sun. The European Space Agency launched the Solar and Heliospheric Observatory (SOHO) in 1995 to study the Sun’s internal structure, as well as its outer atmosphere (the corona), and the solar wind, the stream of subatomic particles emitted by the Sun.

Asteroids are chunks of rock that vary in size from dust grains to tiny worlds, the largest of which is more than a quarter the size of Earth’s Moon. These rocky bodies, composed of debris left over from the formation of the solar system, are among the latest solar system objects to be visited by spacecraft. The first such encounter was made by the Galileo spacecraft, which passed through the solar system’s main asteroid belt on its way to Jupiter. Galileo flew within 1,600 km (1,000 mi) of the asteroid Gaspra on October 29, 1991. Galileo’s images clearly showed Gaspra's irregular shape and a surface covered with impact craters. On August 28, 1993, Galileo passed close by the asteroid 243 Ida and discovered that it is orbited by another, smaller asteroid, subsequently named Dactyl. Ida is the first asteroid known to possess its own moon. On June 27, 1997, the Near-Earth Asteroid Rendezvous (NEAR) spacecraft flew past asteroid 253 Mathilde. NEAR reached the asteroid 433 Eros and became the first spacecraft to orbit an asteroid in February 2000. The United States launched the spacecraft Deep Space 1 (DS1) in 1998 to prepare for 21st-century missions within the solar system and beyond. In July 1999 DS1 flew by the small asteroid 9969 Braille and discovered that it is composed of the same type of material as the much larger asteroid 4 Vesta. Braille may be a broken piece of Vesta, or it may have simply formed at the same time and place as Vesta in the early solar system.

Comets are icy wanderers that populate the solar system’s outermost reaches. These “dirty snowballs” are chunks of frozen gases and dust. When a comet ventures into the inner solar system, some of its ices evaporate. The comet forms tails of dust and ionized gas, and many have been spectacular sights. Because they may contain the raw materials that formed the solar system, comets hold special fascination for astronomers. Although several comets have been observed by a variety of space-borne instruments, few have been visited by spacecraft. The most famous comet of all, Halley’s Comet, made its most recent passage through the inner solar system in 1986. In March 1986 five separate spacecraft flew past Halley, including the USSR’s Vega 1 and Vega 2 probes, the Giotto spacecraft of the European Space Agency, and Japan’s Sakigake and Suisei probes. These encounters produced valuable data on the composition of the comet’s gas and dust tails and its solid nucleus. Vega 1 and 2 returned the first close-up views ever taken of a comet’s nucleus, followed by more detailed images from Giotto. Giotto went on to make a close passage to Comet P/Grigg-Skjellerup on July 10, 1992.


Piloted Spaceflight

Piloted spaceflight presents even greater challenges than unpiloted missions. Nonetheless, the United States and the USSR made piloted flights the focus of their Cold War space race, knowing that astronauts and cosmonauts put a face on space exploration, enhancing its impact on the general public. The history of piloted spaceflight started with relatively simple missions, based in part on the technology developed for early unpiloted spacecraft. Longer and more complicated missions followed, crowned by the ambitious and successful U.S. Apollo missions to the Moon. Since the Apollo program, piloted spaceflight has focused on extended missions aboard spacecraft in Earth orbit. These missions have placed an emphasis on scientific experimentation and work in space.


Vostok and Mercury

At the beginning of the 1960s, the United States and the USSR were competing to put the first human in space. The Soviets achieved that milestone on April 12, 1961, when a 27-year-old pilot named Yuri Gagarin made a single orbit of Earth in a spacecraft called Vostok (East). Gagarin’s Vostok was launched by an R-7 booster, the same kind of rocket that had launched Sputnik. Although the Soviets portrayed Gagarin’s 108-minute flight as flawless, historians have since learned that Vostok experienced a malfunction that caused it to tumble during the minutes before its reentry into the atmosphere. However, Gagarin parachuted to the ground unharmed after ejecting from the descending Vostok.

On May 5, 1961, the United States entered the era of piloted spaceflight with the mission of Alan Shepard. Shepard was launched by a Redstone booster on a 15-minute “hop” in a Mercury spacecraft named Freedom 7. Shepard’s flight purposely did not attain the necessary velocity to go into orbit. In February 1962, John Glenn became the first American to orbit Earth, logging five hours in space. His Mercury spacecraft, called Friendship 7, had been borne aloft by a powerful Atlas booster rocket. After his historic mission, the charismatic Glenn was celebrated as a national hero.

The Soviets followed Gagarin’s flight with five more Vostok missions, including a flight of almost five days by Valery Bykovsky and the first spaceflight by a woman, Valentina Tereshkova, both in June 1963. By contrast, the longest of the six piloted Mercury flights was the 34-hour mission flown by Gordon Cooper in May 1963.

By today’s standards, Vostok and Mercury were simple spacecraft, though they were considered advanced at the time. Both were designed for the basic mission of keeping a single pilot alive in the vacuum of space and providing a safe means of return to Earth. Both were equipped with small thrusters that allowed the pilot to change the craft’s orientation in space. There was no provision, however, for altering the craft's orbit—that capability would have to wait for the next generation of spacecraft. Compared to Mercury, Vostok was both roomier and more massive, weighing 2,500 kg (5,500 lb)—a reflection of the greater lifting power of the R-7 compared with the U.S. Redstone and Atlas rockets.


Voskhod and Gemini

Gemini Spacecraft

Ten piloted Gemini spacecraft were launched between March 1965 and November 1966. Unlike earlier American spacecraft, Gemini capsules were designed to carry two astronauts. Before returning to Earth, the crew jettisoned the equipment and retrorocket sections of the spacecraft. The reentry module then descended by parachute to a splashdown at sea.

In May 1961—just weeks after Shepard had become the first American in space—President John F. Kennedy challenged the nation with this ambitious goal: to land a man on the Moon and return him safely to Earth by the end of the decade. With a total cost estimated at $25 billion in 1960s dollars, the Apollo program became a massive effort utilizing the combined energies of 400,000 people at NASA, other government and academic facilities, and aerospace contractors.

NASA realized, however, that it would not be possible to jump directly from the simple Mercury flights in Earth orbit to a lunar voyage. The agency needed an interim program to work out the unknowns of lunar flight. This became the Gemini program, a series of two-astronaut missions that took place in 1965 and 1966.

The Gemini missions were intended to develop and test the building blocks of a lunar flight. For instance, Gemini astronauts had to maneuver and dock two orbiting spacecraft, since astronauts would need to execute such a maneuver before and after landing on the Moon. Gemini included long-duration spaceflights of a week or more—the amount of time necessary for a lunar landing flight—as well as spacewalks that demonstrated the ability of an astronaut to perform useful work in the vacuum of space, and controlled reentry into Earth’s atmosphere. The Gemini spacecraft had less than twice the crew space of Mercury, but it was far more capable. Gemini crews could change their orbits, and even use a rudimentary onboard computer to help control their craft. Gemini was also the first spacecraft to utilize fuel cells, devices that generated electrical power by combining hydrogen and oxygen.

At the same time, the USSR was preparing a new generation of spacecraft for its own Moon program. The Soviets staged a series of intermediate flights in a craft designated Voskhod (Sunrise). Described as a new spacecraft, Voskhod was actually a converted Vostok. In October 1964 Voskhod 1 carried three cosmonauts—the first multiperson space crew—into orbit for a day-long mission. By replacing the Vostok ejection seat with a set of crew couches, designers had made room for three cosmonauts to fly, without space suits, in a craft originally designed for one.

In March 1965, just weeks before Gemini’s first piloted mission, Voskhod 2 carried two space-suited cosmonauts aloft. One of them, Alexei Leonov, became the first human to walk in space, remaining outside the craft for about ten minutes. In the vacuum of space Leonov’s suit ballooned dangerously, making it difficult for him to reenter the spacecraft. Voskhod 2 proved to be the last of the series. Further Voskhod flights had been planned, but they were canceled so that Soviet planners and engineers could concentrate on getting to the Moon.

Ten piloted Gemini missions took place in 1965 and 1966, accomplishing all of the program’s objectives. In March 1965 Gus Grissom and John Young made Gemini's piloted debut and became the first astronauts to alter their spacecraft's orbit. In June, Gemini 4’s Ed White became the first American to walk in space. Gemini 5’s Gordon Cooper and Pete Conrad captured the space endurance record with an eight-day mission. Gemini 7’s Frank Borman and Jim Lovell stretched the record to 14 days in December 1965. During their flight they were visited by Gemini 6’s Wally Schirra and Tom Stafford in the world’s first space rendezvous. Neil Armstrong and Dave Scott succeeded in making the first space docking by mating Gemini 8 to an unpiloted Agena rocket in March 1966, but their flight was cut short by a nearly disastrous episode with a malfunctioning thruster. On Gemini 11 in September 1966, Pete Conrad and Dick Gordon reached a record altitude of 1,370 km (850 mi). The final mission of the series, Gemini 12 in November 1966, saw Buzz Aldrin make a record five hours of spacewalks. At the conclusion of the Gemini program, the United States held a clear lead in the race to the Moon.


Soyuz and Early Apollo

By 1967 the United States and the USSR were each preparing to test the spacecraft they planned to use for lunar missions. The Soviets had created Soyuz (Union), an Earth-orbiting version of the craft they hoped would fly cosmonauts to and from the Moon. They were also at work on a Soyuz derivative for flights into lunar orbit, and a lunar lander that would ferry a single cosmonaut from lunar orbit to the Moon’s surface and back. Two parallel Soviet Moon programs were proceeding—one to send cosmonauts around the Moon in a loop that would form a figure-8, the other to make the lunar landing.

Apollo Command and Service Module

Astronauts used the command and service modules of the Apollo spacecraft to orbit Earth, travel to the Moon, and return to Earth. The command module housed the astronauts during liftoff and reentry into Earth's atmosphere. The service module carried consumable supplies such as fuel, food, and water, and was detached from the command module before the astronauts reentered the atmosphere.

Meanwhile, the United States continued work on its Apollo spacecraft. Apollo featured a cone-shaped command module designed to transport a three-man crew to the Moon and back. The command module was attached to a cylindrical service module that provided propulsion, electrical power, and other essentials. Attached to the other end of the service module was a spidery lunar module. The lunar module contained its own rocket engines to allow two astronauts to descend from lunar orbit to the Moon’s surface and then lift off back into lunar orbit. The lunar module consisted of two separate sections: a descent stage and an ascent stage. The descent stage housed a rocket engine for the trip down to the Moon. The descent stage fit underneath the ascent stage, which included the crew cabin and a rocket for returning to lunar orbit. The astronauts rode to the surface of the Moon in the ascent stage with the descent stage attached. The descent stage remained on the lunar surface when the astronauts fired the ascent rocket to return to orbit around the Moon.

The year 1967 brought tragedy to both U.S. and Soviet Moon programs. In January, the crew of the first piloted Apollo mission, Gus Grissom, Ed White, and Roger Chaffee, were killed when a flash fire swept through the cabin of their sealed Apollo command module during a pre-flight practice countdown. Subsequent investigation determined that frayed wiring probably provided a spark, and the high-pressure, all-oxygen atmosphere and flammable materials in the spacecraft created the devastating inferno. In April, the Soviets launched their new generation spacecraft, Soyuz 1, with Vladimir Komarov aboard. Consisting of three modules, only one of which was designed to return to Earth, Soyuz could carry a maximum of three cosmonauts. After a day in space Komarov was forced to end the flight because of problems orienting the craft. After reentering the atmosphere the Soyuz’s parachute failed to deploy properly, and Komarov was killed when the spacecraft struck the ground.

By the end of 1967 NASA had achieved a welcome success for Apollo with the first test launch of the giant Saturn V Moon rocket, designed by a team headed by von Braun. Measuring 111 m (363 ft) in length (including the Apollo spacecraft), the three-stage Saturn V was the most powerful rocket ever successfully flown. Its five first-stage engines produced a combined thrust of 33 million newtons (7.5 million lb). The first Saturn V test flight, designated Apollo 4, took place in November 1967 and propelled an unpiloted Apollo command and service module to an altitude of 18,000 km (11,000 mi) before the spacecraft returned to Earth.

In October 1968 a redesigned, fireproof command module made its piloted debut as Wally Schirra, Donn Eisele, and Walt Cunningham reached Earth orbit in Apollo 7. During the 11-day test flight, the command and service modules checked out perfectly. Apollo 7’s success paved the way for NASA to send the crew of Apollo 8, Frank Borman, Jim Lovell, and Bill Anders, on the first voyage to the Moon. Borman’s crew became the first men to ride the Saturn V booster on December 21, 1968. About two hours after launch, the Saturn’s third stage engine reignited to send Apollo 8 speeding moonward at 40,000 km/h (25,000 mph). Some 66 hours later, on December 24, 1968, they reached the Moon and fired Apollo 8’s main rocket engine to go into lunar orbit. They spent the next 20 hours circling the Moon ten times, taking photographs, making navigation sightings on lunar landmarks, and beaming live television pictures back to Earth. Just after midnight on December 25, the astronauts fired the service module’s main rocket engine to blast out of lunar orbit and onto a course for Earth. After a fiery reentry, the heat-shielded command module splashed down in the Pacific Ocean on December 27.

The Soviets, meanwhile, flew a successful piloted Soyuz mission in October 1968. Soyuz 3 carried cosmonaut Georgi Beregovoi in orbit around Earth for four days. The USSR also sent two Zond craft—modified Soyuz spacecraft specially designed for circumlunar missions—on unpiloted flights around the Moon and back to Earth. A pair of cosmonauts prepared for their own mission around the Moon in early December 1968, just ahead of Apollo 8. But concern over problems on the unpiloted Zond flights caused Soviet mission planners to postpone the attempt, and the flight never took place. Apollo 8 was not only a triumph for NASA—it also proved to be the decisive event in the Moon race.


Humans on the Moon

Having sent astronauts into lunar orbit and back to Earth, NASA faced even more daunting hurdles to achieve Kennedy’s challenge for a Moon landing before the end of the 1960s. Apollo 9 in March 1969 tested the entire Apollo spacecraft, including the lunar module, in Earth orbit. In May 1969, Apollo 10 carried out a dress rehearsal of the landing mission, with the command and service modules and lunar module in lunar orbit. With these crucial milestones accomplished, the way was clear to attempt the lunar landing itself. On July 16, 1969, the crew of Apollo 11—Neil Armstrong, Mike Collins, and Buzz Aldrin—headed for the Moon to attempt the lunar landing.

On July 20, while in lunar orbit, Armstrong and Aldrin passed through a connecting tunnel from the command module, Columbia, to the attached lunar module, named Eagle. They then undocked, leaving Collins in orbit, alone in Columbia, 111 km (69 mi) above the Moon. After shifting the low point of their orbit to 15,000 m (50,000 ft), Armstrong and Aldrin fired Eagle’s descent rocket to slow the craft into its final descent to the Moon’s Mare Tranquillitatis (Sea of Tranquillity). An overloaded onboard computer threatened to abort the landing, but swift action by experts in mission control allowed the men to continue. Armstrong was forced to take over manual control when he realized that Eagle was heading for a football-field-size crater ringed with boulders. He brought Eagle to a safe touchdown with less than a minute’s worth of fuel remaining before a mandatory abort. “Houston,” Armstrong radioed, “Tranquillity Base here. The Eagle has landed.”

Hours later, Armstrong and Aldrin were sealed inside their space suits, ready to begin history’s first moonwalk. At 10:56 pm Eastern Daylight Time, Armstrong stood on Eagle’s footpad and placed his left boot on the powdery lunar surface—the first human footstep on another world. Armstrong’s famous first words on the Moon were, “That’s one small step for man, one giant leap for mankind.” (He had intended to say “That’s one small step for a man, one giant leap for mankind,” and that is how the quote is worded in many accounts of the event.) Aldrin followed Armstrong to the surface 40 minutes later. During the moonwalk, which lasted about two and a half hours, the men collected rocks, took photographs, planted the American flag, and deployed a pair of scientific experiments. Their landing site, a cratered plain strewn with rocks, proved to have “a stark beauty all its own,” in Armstrong’s words. Aldrin called the appearance of the lunar surface “magnificent desolation.”

Inside Eagle once more, Armstrong and Aldrin tried unsuccessfully to get a good night’s sleep. On July 21, after a total of 21½ hours on the Moon, they fired Eagle’s ascent engine and rejoined Collins in lunar orbit. On July 24, after a flawless mission, Armstrong, Aldrin, and Collins returned to Earth, carrying 22 kg (48 lb) of lunar rock and soil. Kennedy’s challenge had been met with months to spare, and NASA had shown that humans were capable of leaving their home world and traveling to another.

Six more lunar landing attempts followed Apollo 11. All but one of these missions were successful. In November 1969 Pete Conrad and Alan Bean made history’s first pinpoint landing on the Moon, touching down less than 200 m (600 ft) from the robotic Surveyor 3 probe, which had been on the Moon since April 1967. In their 31½ hours on the Moon, Conrad and Bean made two moonwalks and collected 34 kg (76 lb) of samples.

In April 1970 Apollo 13 almost ended tragically when an oxygen tank inside the service module exploded. The spacecraft was 300,000 km (200,000 mi) from Earth. The accident left the command and service modules without propulsion or electrical power. Astronauts Jim Lovell, Jack Swigert, and Fred Haise struggled to return to Earth using their attached lunar module as a lifeboat, while experts in mission control worked out emergency procedures to bring the men home. Although the mission failed in its objective to land in the Moon’s Fra Mauro highlands, Apollo 13 was an extraordinary demonstration of the Apollo team’s ability to solve problems during a spaceflight. The mission’s goals were achieved in February 1971 by Apollo 14 astronauts Alan Shepard, Stu Roosa, and Ed Mitchell.

Lunar exploration entered a more ambitious phase with Apollo 15 in July 1971, when Dave Scott and Jim Irwin landed at the base of the Moon’s Apennine mountains. Their lunar module had been upgraded to allow a stay of nearly three days on the lunar surface. Improved space suits allowed the men to take three moonwalks, the longest of which lasted more than seven hours. They also brought along a battery-powered car called the Lunar Rover. With the rover, the astronauts ranged for miles across the landscape, even driving partway up the side of a lunar mountain. They picked up some of the oldest rocks ever found on the Moon, including one fragment that proved to be 4.5 billion years old, almost the calculated age of the Moon itself.

Two more lunar landings followed before budget cuts ended the Apollo program. The final team of lunar explorers were Apollo 17’s Gene Cernan, a former Navy fighter pilot, and Harrison “Jack” Schmitt, a geologist-astronaut who became the first scientist to reach the Moon. They explored the Moon’s Taurus-Littrow valley while crewmate Ron Evans orbited overhead. During three days on the Moon, Cernan and Schmitt collected 110 kg (243 lb) of samples, including an orange soil that gave new clues to the Moon’s ancient volcanic activity.

While the Apollo program racked up successes, the Soviet lunar program was plagued by setbacks. The Soviets built a Moon rocket of their own, the giant N-1 booster, which was designed to produce 44 million newtons (10 million lb) of thrust at liftoff. In four separate test launches between 1969 and 1972, the N-1 exploded within seconds or minutes after liftoff. Combined with the U.S. Apollo successes, the N-1 failures ended hopes of a Soviet piloted lunar landing.


Salyut Space Stations

Even before the first human spaceflights, planners in the United States and the USSR envisioned space stations in orbit around Earth. The Soviets stepped up their efforts toward this goal when it became clear they would not win the Moon race. In April 1971 they succeeded in launching the first space station, Salyut 1 (see Salyut). The name Salyut, which means “salute,” was meant as a tribute to cosmonaut Yuri Gagarin, the first person in space. Gagarin had been killed in the crash of a jet fighter during a routine training flight in 1968. Salyut consisted of a single module weighing 19 metric tons that offered 100 cu m (3,500 cu ft) of living space. Cosmonauts traveled between Earth and the Salyut stations in Soyuz spacecraft. In June 1971 cosmonauts Georgi Dobrovolski, Vladislav Volkov, and Viktor Patsayev occupied Salyut for 23 days, setting a new record for the longest human spaceflight. Tragically, the three men died when their Soyuz ferry craft developed a leak before they reentered the atmosphere. The leak allowed the oxygen in the cabin to escape, suffocating the cosmonauts. The Soyuz returned to Earth under automatic control.

Six more Salyut stations reached orbit between 1974 and 1982. Two of these, Salyuts 3 and 5, were military stations equipped with high-resolution cameras to gather military information from orbit. Salyuts 6 and 7 served as orbital homes to cosmonauts during record-breaking space marathons. In 1980 Salyut 6 cosmonauts Leonid Popov and Valery Ryumin logged a record 185 days in space. (Remarkably, Ryumin had spent 175 days aboard Salyut 6 during the previous year.) The longest mission to Salyut 7 was also a record-breaker, lasting 237 days—nearly eight months—in space. In 1985 Salyut 7’s electrical system failed, forcing a team of cosmonauts to stage a repair mission to bring the stricken station back to life. In mid-1986, after two more crews had visited the station, Salyut 7 was abandoned for good.

The Salyut cosmonauts pushed the frontiers of long-duration spaceflight, often with considerable difficulty. In addition to the medical effects of long-term exposure to weightlessness—including muscle atrophy, loss of bone minerals, and cardiovascular weakening—long-duration spaceflight brought the psychological stresses of boredom and isolation, occasionally relieved by visits from new teams of cosmonauts. Supplies and gifts brought up by unpiloted versions of the Soyuz spacecraft, called Progress freighters, also provided novelty and relief. The Salyut marathons paved the way for even longer stays aboard the space station Mir.


Skylab Space Station

Skylab, the first U.S. space station, utilized hardware originally created for the Apollo program. The main component, called the orbital workshop, was constructed inside the third stage of a Saturn V booster. It contained living and working space for three astronauts. Attached to the orbital workshop were the Apollo telescope mount (ATM), a collection of instruments to study the Sun from space; an airlock module to enable two of the astronauts to make spacewalks while the third remained inside; and a multiple docking adaptor (MDA) for use by the Apollo spacecraft that would ferry the crew to and from orbit. Altogether, Skylab weighed 91 metric tons and offered 210 cu m (7,400 cu ft) of habitable space.

In 1973 and 1974 the Skylab space station supported three crews of three astronauts each for periods of up to 84 days. Command and service modules like the ones used for the Apollo program carried astronauts to Skylab and docked with the station. Skylab was more hospitable than previous spacecraft—it was as large as a small two-bedroom house, contained extensive sanitary facilities, and maintained a nearly constant interior temperature. Skylab astronauts were able to perform many scientific experiments in this environment. Many medical and biological experiments on the effects of weightlessness took place in the orbital laboratory, and astronauts studied Earth and the Sun with the station’s telescopes and infrared spectrometer. Solar panels provided the electricity needed to run the station.

Skylab’s mission almost ended with its launch in May 1973. A design flaw caused the station’s meteoroid shield to be torn off during launch, severing one of two winglike solar panels that were to convert sunlight to electricity for the space station. Mission controllers quickly went to work on a rescue plan that could be carried out by the first team of Skylab astronauts—Pete Conrad, Joe Kerwin, and Paul Weitz. After reaching the station in late May aboard an Apollo spacecraft, Conrad’s crew installed a sunshield to cool the soaring temperatures inside the station. In a spacewalk repair effort, Conrad and Kerwin restored the necessary electric power by freeing the remaining solar wing, which had failed to deploy properly. The astronauts also conducted medical tests, made observations of the Sun and Earth, and performed a variety of experiments. Their 28-day mission broke the endurance record set by the Salyut 1 crew two years before. Two more teams of astronauts reached Skylab in 1973, logging 56 and 84 days in space, respectively. The three Skylab missions gave U.S. researchers valuable information on human response to long-duration spaceflight.

Skylab was not designed to be resupplied, and by the late 1970s its orbit had decayed badly. Friction with gas molecules in the outer atmosphere had caused the spacecraft to lose altitude and speed, and controllers calculated that it would fall out of orbit by the end of the decade. Tentative plans to use the space shuttle to boost the station into a stable orbit did not come to pass—the shuttle was still in development when Skylab met its fiery end, breaking up during reentry in July 1979. Debris from Skylab landed in the Indian Ocean and in remote areas of Australia.


Mir Space Station

In 1986 the USSR launched the core of the first space station to be composed of distinct units, or modules. This modular space station was named Mir (Peace). Over the next ten years additional modules were launched and added to the station. The first of these, called Kvant, contained telescopes for astronomical observations and reached the station in April 1987. Another module, called Kristall, was devoted to experiments in processing materials in zero gravity. In 1996 Priroda, the last module, was added, bringing Mir’s total habitable volume to about 380 cubic meters (about 13,600 cubic feet).

Cosmonauts lived aboard Mir even longer than their Salyut predecessors lived in space. In 1987 and 1988 Mir cosmonauts Vladimir Titov and Musa Manarov achieved the first yearlong mission. In 1995 physician-cosmonaut Valeriy Polyakov completed a record 14 months aboard the station. Such long-duration missions helped researchers understand the problems posed by lengthy stays in space—information vital to planning for piloted interplanetary voyages.

Beginning in 1995 Mir was the scene of joint U.S.-Russian missions. (Russia took over the Soviet space program after the collapse of the USSR in 1991.) The joint missions paved the way for the International Space Station (ISS; discussed below). U.S. space shuttles docked with Mir nine times, and seven U.S. astronauts lived aboard Mir for extended periods. One of them, Shannon Lucid, set the U.S. spaceflight endurance record of 188 days in 1996.

By 1997 the 11-year-old Mir was experiencing a series of calamities that included computer failures, an onboard fire, and a collision with an unpiloted Progress spacecraft during a rendezvous exercise. Subsequent repair missions returned the station to a relatively normal level of functioning. The Russian Space Agency planned to abandon Mir and cause it to reenter Earth’s atmosphere in the summer of 2000, but the station was temporarily rescued by a private company called Mircorp. Mircorp planned to turn the station into a commercial venture. The company funded a mission in April 2000 that sent two cosmonauts to Mir to make repairs and conduct experiments, but it could not attract enough investors to keep Mir in orbit. Russian ground controllers sent the station plunging into a remote area of the South Pacific Ocean in March 2001.


International Space Station

One of NASA’s most cherished goals was to build a permanent, Earth-orbiting space station. Although it received approval from President Ronald Reagan in 1984, the space station project (designated Space Station Freedom) faced huge political and budgetary hurdles. In 1993, after several redesign efforts by NASA, the station was reshaped into an international venture and redesignated the International Space Station (ISS). In addition to the United States, many other nations have joined the project. Russia, Japan, Canada, and the European Space Agency have produced hardware for the station.

Launch of the first ISS element, a Russian-built module called Zarya, occurred in November 1998. Zarya provides the power and propulsion needed during the ISS’s assembly. Once the ISS is complete, Zarya will be used mostly for storage. The Unity module, built by the United States, was launched in December 1998. Unity acts as a passage from Zarya to other parts of the station. The first habitable part of the ISS—the Russian-made Zvezda service module—was launched in July 2000, and the first long-term crew arrived in November 2000. Planned for completion in 2006, the ISS is designed to be continuously occupied by up to seven crew members. It is envisioned as a world-class research facility, where scientists can study Earth and the heavens, as well as explore the medical effects of long-duration spaceflight, the behavior of materials in a weightless environment, and the practicality of space manufacturing techniques.


Space Shuttles

Even before the Apollo Moon landings, NASA’s long-term plans included a reusable space shuttle to ferry astronauts and cargo to and from an Earth-orbiting space station. Agency planners had hoped to pursue both the station and the shuttle during the 1970s, but in 1972 Congress approved funding only for the shuttle. With the orbiting space station on hold, NASA had to reevaluate the role of the shuttle. The agency came to envision the shuttle both as a “space truck” that could deploy and retrieve satellites and as a platform for scientific observations and experiments in space.

Space-Shuttle Orbiter

The space shuttle is the first reusable space vehicle, designed to perform up to 100 missions with only minor maintenance between flights. The shuttle orbiter resembles an airplane in appearance, but it performs quite differently. The shuttle leaves Earth vertically, attached to an external fuel tank and a pair of booster rockets for the first stage of its ascent. The orbiter’s main engines provide part of the thrust needed to reach orbit, while the boosters supply the rest. After the mission is completed, the orbiter returns through the atmosphere in a horizontal attitude like an airplane, gliding with no engine power to a landing on a conventional runway.

The space shuttle consists of three main components: an orbiter, an external fuel tank, and two solid rocket boosters. The winged orbiter contains the crew cabin, three liquid-fuel rocket engines for use during launch, and a cargo bay 18 m (60 ft) long. Overall, the orbiter is the size of a medium-sized passenger jet. It is controlled by five onboard computers and is covered with thousands of heat-resistant silica tiles that protect it during the fiery reentry into Earth’s atmosphere. Following reentry the orbiter becomes an unpowered glider, and the shuttle’s commander steers it to a landing on a runway. A total of six shuttle orbiters were built. The first, named Enterprise, never flew in space but was used for a series of approach and landing tests in 1977.

The shuttle’s other two components help the shuttle reach orbit. The external tank, which is the size of a grain silo, is attached to the orbiter during launch and provides fuel for its engines. The tank is discarded once the shuttle reaches orbit. The paired giant solid rocket boosters, attached to the external tank, provide additional thrust during the first two minutes of launch. After that, they fall away and are recovered in the ocean to be refurbished and reused.

On April 12, 1981—exactly 20 years after Gagarin’s pioneering flight as the first human in space—the orbiter Columbia flew a near-perfect maiden voyage. Veteran astronaut John Young and first-time astronaut Robert Crippen piloted Columbia on the two-day mission, ending with a flawless landing on a dry-lakebed runway at California’s Edwards Air Force Base. Three more qualifying flights followed, and in July 1982 the shuttle was declared operational. Over the next three and a half years, 20 more shuttle missions, with crews of up to eight astronauts, racked up a string of accomplishments. Shuttle astronauts deployed and retrieved satellites using the orbiter’s remote manipulator arm. In spacewalks, astronauts repaired ailing satellites; they also tested the Manned Maneuvering Unit, a self-contained flying machine with thrusters that used compressed nitrogen. They conducted a variety of scientific and medical research missions in a module called Spacelab, carried in the orbiter’s cargo bay.

NASA had hoped that the reusability of the shuttle would make getting into space less expensive. The space agency expected that private companies would pay to have their satellites launched from the shuttle, which would provide a cost-effective alternative to launching by a conventional, “throwaway” rocket. However, the costs of developing and operating the shuttle proved enormous, and NASA found it was still a long way from reducing the cost of reaching Earth orbit. To offset these costs, the agency pushed for more frequent launches; by 1986 it hoped to reach a rate of 24 missions per year.

Then, on January 28, 1986, disaster struck. The shuttle Challenger exploded 73 seconds after liftoff, killing its seven-member crew, which included schoolteacher Christa McAuliffe (see Challenger Disaster). The tragedy shocked the nation and brought the shuttle program to a halt while a presidential commission tried to determine what had gone wrong. The Challenger disaster was traced to a faulty seal in one of the solid rocket boosters, and to flawed decision making by NASA and some of the contractors who manufactured shuttle components. After several safety modifications were made, shuttle flights resumed in 1988.

Soviet officials viewed the U.S. program with some trepidation, fearing that the shuttle would be used for military offensives against the USSR. Partly in response, they built a heavy-lift booster called Energia, and a space shuttle called Buran (snowstorm). The Buran/Energia combination made only a single unpiloted, orbital test flight in November 1988. Unlike its U.S. counterpart, the Soviet shuttle could be flown remotely by ground controllers. Buran was far from ready to support piloted flight, and economic problems caused by the collapse of the USSR in 1991 ended the Buran program prematurely.

Beginning in 1995, the shuttle flew a series of missions to the Russian space station Mir. In 1998 the shuttle began taking crews into orbit to assemble the International Space Station. The shuttle program’s 100th mission is slated to take place early in 2001, and shuttle orbiters are expected to keep flying during the first decades of the 21st century.



Space is a harsh environment for humans and human-made machines. Radiation from the Sun and other cosmic sources can weaken material and harm the human body. In the vacuum of space, objects become boiling hot when exposed to the Sun and freezing cold when in the shadow of Earth or some other body. Scientists, engineers, and designers must make spacecraft that can withstand these extreme conditions and more.


General Principles of Spacecraft Design

The challenges that spacecraft designers face are daunting. Each component of a spacecraft must be durable enough to withstand the vibrations of launch, and reliable enough to function in space on time spans ranging from days to years. At the same time, the spacecraft must also be as lightweight as possible to reduce the amount of fuel required to boost it into space. Materials such as Mylar (a metal-coated plastic) and graphite epoxy (a construction material that is strong but lightweight) have helped designers and manufacturers meet the requirements of durability, reliability, and lightness. Spacecraft designers also conserve space and weight by using miniaturized electronic components; in fact, the space program has fueled many advances in the field of miniaturization.

Since the early 1990s, budgetary restrictions have motivated NASA to plan projects that are better, faster, and cheaper. In this approach, missions built around a single large, complex, and expensive spacecraft are replaced with more limited missions using smaller, less expensive craft. Although this new approach was successful with spacecraft such as the Mars Pathfinder lander and Mars Global Surveyor, budgetary constraints may have contributed to the loss of two other Mars spacecraft, Mars Climate Orbiter and Mars Polar Lander, in 1999. The approach is also difficult to apply to piloted spacecraft, in which the overriding concern is crew safety. However, engineers are always looking for new technologies to make spacecraft lighter and less expensive.


Getting into Space

One of the most difficult parts of any space voyage is the launch. During launch, the craft must attain sufficient speed and altitude to reach Earth orbit or to leave Earth’s gravity entirely and embark on a path between planets. Scientists sometimes find it helpful to think of Earth’s gravitational field as a deep well, with sides that are steepest near the planet’s surface. The task of the launch vehicle or booster rocket is to climb out of this well.

Although some launch vehicles consist of just a single rocket, many are composed of a series of individual rockets, or stages, stacked atop one another. Such multistage launch vehicles are used especially for heavier payloads. With a multistage rocket, each stage fires for a period of time and then falls away when its fuel supply is used up. This lightens the load carried by the remaining stages. In some liquid-fuel boosters, strap-on solid-fuel rockets are used to provide extra thrust during the initial portion of ascent. For example, the Titan III booster has two liquid-fuel core stages and two strap-on solid-fuel motors. The largest example of a successful multistage booster was the Saturn V Moon rocket, which had three liquid-fuel stages and, with the Apollo spacecraft on top, measured 111 m (363 ft) in length.

Despite their utility, most multistage boosters are not reusable, which makes them expensive. Cost-conscious engineers have focused on creating a single-stage-to-orbit (SSTO) vehicle. In an SSTO, the entire spacecraft and booster would be integrated into one fully reusable unit. If successful, this approach would reduce the costs of reaching Earth orbit. However, the technical challenge is enormous: A full 89 percent of an SSTO’s total weight must be reserved for fuel, a much higher proportion than any previous launch vehicle. The payload, the crew, and the weight of the vehicle itself must make up only 11 percent of the SSTO’s total weight.
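The 89 percent figure follows from the Tsiolkovsky rocket equation, which relates the speed a rocket can gain to the fraction of its mass that is propellant. A brief Python sketch, using an assumed delta-v of 9,400 m/s to reach orbit (including gravity and drag losses) and an assumed exhaust velocity typical of hydrogen-oxygen engines:

```python
import math

DELTA_V = 9400.0     # m/s needed to reach low Earth orbit, losses included (assumed)
V_EXHAUST = 4400.0   # m/s, roughly a hydrogen/oxygen engine (assumed)

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf)
mass_ratio = math.exp(DELTA_V / V_EXHAUST)    # liftoff mass / empty mass
fuel_fraction = 1.0 - 1.0 / mass_ratio        # share of liftoff mass that is propellant

print(f"mass ratio:    {mass_ratio:.2f}")
print(f"fuel fraction: {fuel_fraction:.0%}")
```

With these assumptions, propellant must make up roughly 88 percent of liftoff mass, in line with the figure quoted above; a multistage rocket eases the problem by shedding empty tankage along the way.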


Navigation in Space

Spaceflight requires very detailed planning and measurement to get a spacecraft into place or to send it on its proper path. Some of the Apollo spacecraft were able to travel from Earth to the Moon (a distance of almost 390,000 km, or almost 240,000 mi) and land on the lunar surface within a few dozen meters (several dozen feet) of their target. Careful planning allowed the Mars Pathfinder spacecraft to fly from Earth to Mars, traveling more than 500 million km (300 million mi), and land just 19 km (12 mi) from the center of its target area.


Flight Paths

To launch a spacecraft into orbit around Earth, a booster rocket must do two things. First it must raise the spacecraft above the atmosphere—roughly 160 km (100 mi) or more. Second it must accelerate the spacecraft until its forward speed—that is, its speed parallel to Earth’s surface—is at least 28,200 km/h (17,500 mph). This is the speed, called orbital velocity, at which the momentum of the spacecraft is strong enough to counteract the force of gravity. Gravity and the spacecraft’s momentum balance so that the spacecraft does not fall straight down or move straight ahead—instead it follows a curved path that mimics the curve of the planet itself. The spacecraft is still falling, as any object does when it is released in a gravitational field. But instead of falling toward Earth, it falls around it. See Orbit.
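The 28,200 km/h figure can be recovered by setting gravity equal to the centripetal force needed for a circular path. A quick check in Python, using standard rounded values for Earth's gravitational parameter and radius:

```python
import math

GM_EARTH = 3.986e14      # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
altitude = 160e3         # orbital altitude from the text, m

# For a circular orbit, gravity supplies exactly the centripetal force:
# GM*m/r^2 = m*v^2/r, so v = sqrt(GM / r).
r = R_EARTH + altitude
v = math.sqrt(GM_EARTH / r)                       # m/s
print(f"orbital velocity: {v * 3.6:,.0f} km/h")   # ~28,100 km/h
```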

Galileo Orbiter Trajectory

The Galileo spacecraft used the gravity of Earth and Venus to accelerate and build up enough speed to reach its destination of Jupiter. Launched in 1989, Galileo finally reached orbit around Jupiter in 1995, where it released a probe to study Jupiter’s atmosphere. Despite the failure of its main antenna to open completely, limiting the speed at which information is transmitted back to Earth, Galileo has provided much new information about the Jovian system. Galileo's initial mission ended in 1997, but the spacecraft is continuing to study Jupiter and its moons on an extended mission.

Using its own thrusters, a spacecraft can raise or lower its orbit by adding or removing energy, respectively. To add energy, the spacecraft orients itself and fires its thrusters so that it accelerates in its direction of flight. To subtract energy, the craft fires its engines against the direction of flight. Any change in the height of a spacecraft’s orbit also produces a change in its speed and vice versa. The craft moves more slowly in a higher orbit than it does in a lower one. By firing its rockets perpendicular to the plane of its orbit, the craft can change the orientation of its orbit in space.
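The rule that a craft moves more slowly in a higher orbit can be verified with the circular-orbit speed formula v = sqrt(GM/r). A short Python illustration, comparing two example altitudes (the 300 km and 1,000 km values are arbitrary choices):

```python
import math

GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2
R = 6.371e6              # Earth radius, m

def circular_speed(altitude_m):
    """Speed of a circular orbit: v = sqrt(GM / r)."""
    return math.sqrt(GM / (R + altitude_m))

low = circular_speed(300e3)
high = circular_speed(1000e3)
print(f"300 km orbit:  {low/1000:.2f} km/s")
print(f"1000 km orbit: {high/1000:.2f} km/s")   # slower, as the text says
```

The result may seem paradoxical: adding energy raises the orbit yet leaves the craft moving more slowly, because the energy goes into climbing against gravity rather than into speed.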

To travel from one planet to another, a spacecraft must follow a precise path, or trajectory, through space. The amount of energy that a spacecraft’s launch rocket and onboard thrusters must provide varies with the type of trajectory. The trajectory that requires the least amount of energy is called a Hohmann transfer. A Hohmann transfer follows the shape of an ellipse, or a flattened circle, whose sides just touch the orbits of the two planets.
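For an Earth-to-Mars Hohmann transfer, the vis-viva equation gives the extra speed needed at departure, and Kepler's third law gives the flight time as half the transfer ellipse's period. A simplified Python estimate, treating both planetary orbits as circular and ignoring each planet's own gravity:

```python
import math

GM_SUN = 1.327e20                  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11                      # astronomical unit, m
r1, r2 = 1.0 * AU, 1.524 * AU      # Earth and Mars orbital radii (circular approximation)

a = (r1 + r2) / 2                  # semi-major axis of the transfer ellipse
# vis-viva: speed on the transfer ellipse at departure vs Earth's circular speed
v_circ = math.sqrt(GM_SUN / r1)
v_transfer = math.sqrt(GM_SUN * (2 / r1 - 1 / a))
delta_v = v_transfer - v_circ                  # extra speed the booster must supply

t_flight = math.pi * math.sqrt(a**3 / GM_SUN)  # half the ellipse's orbital period
print(f"departure delta-v: {delta_v/1000:.1f} km/s")
print(f"flight time: {t_flight/86400:.0f} days")
```

These idealized numbers, about 2.9 km/s of extra heliocentric speed and a cruise of roughly 257 days, are broadly consistent with the several-month flights of actual Mars missions.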

The trajectory must also take into account the motion of the planets around the Sun. For example, a probe traveling from Earth to Mars must aim for where Mars will be at the time of the spacecraft’s arrival, not where Mars is at the time of launch.

In many interplanetary missions, a spacecraft flies past a third planet and uses the planet’s gravitational field to bend the craft’s trajectory and accelerate it toward its target planet. This is known as a gravitational slingshot maneuver. The first spacecraft to use this technique was the Mariner 10 probe (see Mariner), which flew past Venus on its way to Mercury in 1974.


Navigation and Guidance

Most spacecraft depend on a combination of internal automatic systems and commands from ground controllers to keep on the correct path. Normally, ground controllers can communicate with a spacecraft only when it is within sight of an Earth-based receiving station. This poses problems for spacecraft in low Earth orbit—that is, within 2,000 km (1,200 mi) of the planet’s surface—as such craft are only within sight of a relatively small portion of the globe at any given moment. One way around this restriction is to place special satellites in orbit to act as relays between the orbiting spacecraft and ground stations, allowing continuous communications. NASA has done this for the U.S. space shuttle with the Tracking and Data Relay Satellite System (TDRSS).

At an altitude of about 35,800 km (about 22,200 mi), a satellite’s motion exactly matches the speed of Earth’s rotation. As a result, the satellite appears to hover over a specific spot on Earth’s surface. This so-called stationary, or geosynchronous, orbit is ideal for communications satellites, whose job is to relay information between widely separated points on the globe.
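The 35,800 km altitude follows from Kepler's third law: the period of a circular orbit is T = 2π·sqrt(r³/GM), which can be solved for r when T equals one sidereal day. In Python:

```python
import math

GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # m
T = 86164.0              # sidereal day, s (one Earth rotation relative to the stars)

# Kepler's third law for a circular orbit, T = 2*pi*sqrt(r^3 / GM),
# solved for the orbital radius r.
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"geosynchronous altitude: {altitude_km:,.0f} km")   # ~35,800 km
```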

Spacecraft on interplanetary trajectories may travel millions or even billions of kilometers from Earth. In these cases their radio signals are so weak that giant receiving stations are necessary to detect them. The largest stations have antenna dishes in excess of 70 m (230 ft) across. NASA and the Jet Propulsion Laboratory operate the Deep Space Network, a system of three tracking stations with several antennas each. The stations are in California, Spain, and Australia, providing continuous contact with distant spacecraft as Earth spins on its axis.

Much of the work of ground controllers involves monitoring a spacecraft’s health and flight path. Using a process called telemetry, a spacecraft can transmit data about the functioning of its internal components. In addition, engineers can use a spacecraft’s radio signals to assess its flight path. This is possible because of the Doppler effect. Because of the Doppler effect, a spacecraft’s motion causes tiny shifts in the frequency of its radio signals—just as the motion of a passing car causes the apparent pitch of its horn to go up as the car approaches an observer and down as the car moves away. By analyzing Doppler shifts in a spacecraft’s radio signals, controllers can determine the craft’s speed and direction. Over time, controllers can combine the Doppler shift data with data on the spacecraft’s position in the sky to produce an accurate picture of the craft’s path through space.
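The Doppler relationship the controllers exploit is simple at non-relativistic speeds: the fractional frequency shift equals the line-of-sight speed divided by the speed of light. A minimal Python sketch (the 8.4 GHz carrier frequency is an assumption chosen for illustration):

```python
F_CARRIER = 8.4e9        # Hz, an X-band downlink frequency (assumed for illustration)
C = 2.998e8              # speed of light, m/s

def radial_velocity(measured_shift_hz):
    """Classical Doppler: delta_f / f = v / c, solved for line-of-sight speed v."""
    return measured_shift_hz * C / F_CARRIER

# A 28 kHz shift on an 8.4 GHz carrier corresponds to roughly 1 km/s:
v = radial_velocity(28e3)
print(f"line-of-sight speed: {v:.0f} m/s")
```

Because kilohertz-level shifts are easy to measure against a stable reference, even small velocity changes far from Earth show up clearly in the radio signal.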

The guidance system helps control the craft’s orientation in space and its flight path. In the early days of spaceflight, guidance was accomplished by means of radio signals from Earth. The Mercury spacecraft and its Atlas booster utilized such radio guidance signals broadcast from ground stations. During launch, for example, the Atlas received steering commands that it used to adjust the direction of its engines. However, Mercury flight controllers found that radio guidance was limited in accuracy because atmospheric interference weakened the signals.

Beginning with Gemini, engineers used a system called inertial guidance to stabilize rockets and spacecraft. This system takes advantage of the tendency of a spinning gyroscope to remain in the same orientation. A gyroscope mounted on a set of gimbals, or a mechanism that allows it to move freely, can maintain its orientation even if the spacecraft’s orientation changes. An inertial guidance system contains several gyroscopes, each oriented along a different axis. When the spacecraft rotates along one or more of its axes, measuring devices tell how far it has turned from the gyroscopes’ own orientations. In this way, the gyroscopes provide a constant reference by which to judge the craft’s orientation in space. Signals from the guidance system are fed into the spacecraft’s onboard computer, which uses this information to control the craft’s maneuvers.

The Global Positioning System (GPS) satellites, which enable ships, airplanes, and even hikers to determine their positions with extreme accuracy, can play a similar role for spacecraft. The space shuttle Atlantis was equipped with GPS receivers during an upgrade in late 1998.



Once in orbit, a spacecraft relies on its own rocket engines to change its orientation (or attitude) in space, the shape or orientation of its orbit, and its altitude. Of these three tasks, changes in orientation require the least energy. Relatively small rockets called thrusters control a spacecraft’s attitude. In a massive spacecraft, the attitude control thrusters may be full-fledged liquid-fuel rockets. Smaller spacecraft often use jets of compressed gas. Depending on which combination of thrusters is fired, the spacecraft turns on one or more of its three principal axes: roll, pitch, and yaw. Roll is a spacecraft’s rotation around its longitudinal axis, the horizontal axis that runs from front to rear. (In the case of the space shuttle orbiter, a roll maneuver resembles the motion of an airplane dipping its wing.) Pitch is rotation around the craft’s lateral axis, the horizontal axis that runs from side to side. (On the shuttle, a pitch maneuver resembles an airplane raising or lowering its nose.) Yaw is a spacecraft’s rotation around a vertical axis. (A space shuttle executing a yaw maneuver would appear to be sitting on a plane that is turning to the left or right.) A change in attitude might be required to point a scientific instrument at a particular target, to prepare a spacecraft for an upcoming maneuver in space, or to line the craft up for docking with another spacecraft.

When an orbiting spacecraft needs to drop out of orbit and descend to the surface, it must slow down to a speed less than orbital velocity. The craft slows down by using retrorockets in a process called a deorbit maneuver. On early piloted spacecraft, retrorockets used solid fuel because solid-fuel rockets were generally more reliable than liquid-fuel rockets. Vehicles such as the Apollo spacecraft and the space shuttle have used liquid-fuel retrorockets. In the deorbit maneuver, the retrorocket acts as a brake by firing into the line of flight. The duration of the firing is carefully controlled, because it will affect the path that the spacecraft takes into the atmosphere. The same technique has been used by Apollo lunar modules and by unpiloted planetary landers to leave orbit and head for a planet’s surface.


Power Supply

Spacecraft have used a variety of technologies to provide electrical power for running onboard systems. Engineers have used batteries and solar panels since the early days of space exploration. Often, spacecraft use a combination of the two: Solar panels provide power while the spacecraft is in sunlight, and batteries take over during orbital night. The solar panels also recharge the batteries, so the craft has an ongoing source of power. However, solar panels are impractical for many interplanetary spacecraft, which may travel vast distances from the Sun. Many of these craft have relied on radioisotope thermoelectric generators (RTGs), which generate power from the heat released by decaying radioactive isotopes and have lifetimes measured in years or even decades. The twin Voyager spacecraft, which explored the outer solar system, used generators such as these. RTGs are controversial because they carry radioactive substances. The radioactivity poses no danger once the spacecraft reaches space, but some people worry that an accident during launch or during an unplanned reentry into Earth’s atmosphere could release harmful radiation into the atmosphere. Concerned groups protested the 1997 launch of the Cassini spacecraft, which carried its radioactive material in explosion-proof graphite containers.


Effects of Space Travel on Humans

Space is a hostile environment for humans. Piloted spacecraft must supply oxygen, food, and water for their occupants. For longer flights, a spacecraft must provide a way to dispose of or recycle wastes. For very long flights, spacecraft will eventually have to become almost totally self-sufficient. To keep a crew healthy, a spacecraft must also provide far more than these core physical needs. Exercise equipment, comfortable sleeping and recreation areas, and well-designed work areas are some of the amenities that soften spaceflight’s effects on humans.


Crew Support

The effort to save weight is so inherent to spacecraft design that it even affects the food supply. Much of the food eaten by astronauts is dehydrated to save both weight and space. In space, astronauts use a device like a water gun to rehydrate these items. Many food items are also carried in conventional form, ranging from bread to candy to fruit.

On many spacecraft, including the U.S. space shuttle, drinkable water is produced by fuel cells that also provide electrical power. The reaction between hydrogen and oxygen that creates electricity produces water as a byproduct. A small supply of water for emergency use is also carried in onboard storage tanks.

For very long-duration missions aboard space stations, water is recycled. Drinkable water can be extracted from a combination of waste water, urine, and moisture from the cabin atmosphere. This kind of system was used on the Mir space station and is used on the International Space Station. See also Space Station.

Perhaps the question most frequently asked of astronauts is, “How do you go to the bathroom in space?” The answer has changed over the years. On early missions such as Mercury, Gemini, and Apollo, the bathroom facilities were relatively crude. For urine collection, the astronauts, all of whom were men, used a hose with a condom-like fitting at one end. Urine was then dumped overboard. Feces were collected in plastic bags and brought back to Earth for medical analyses. The Skylab space station featured a toilet that used forced air for suction. Mir used similar toilets, with special fittings for men and women, as does the space shuttle.

Skylab was also the first spacecraft to offer astronauts the chance to bathe in space, by means of a collapsible shower. To prevent globs of water from escaping and floating around inside the cabin, the astronaut sealed the shower once inside. The astronaut used a handheld nozzle to dispense water and a small vacuum to remove it. On the space shuttle astronauts and cosmonauts have had to make do with sponge baths. The International Space Station has a shower in its habitation module.

Most piloted spacecraft have carried oxygen in onboard tanks in liquid form at cryogenic (super-cold) temperatures to save space. A given mass of liquid oxygen occupies only about 1/800 the volume of the same gas at everyday temperatures. The Russian Mir space station used an additional source of oxygen: Special generators aboard Mir separated water into oxygen and hydrogen, and the hydrogen was vented overboard.
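The factor of about 800 can be checked with the ideal gas law, comparing the density of oxygen gas with that of the liquid (the liquid-oxygen density and the choice of 0 deg C for "everyday" conditions are assumed, typical handbook values):

```python
# Ideal-gas check of the ~800x figure: density of O2 gas at 0 deg C and 1 atm
# versus liquid oxygen at its boiling point (assumed tabulated values).
P = 101325.0       # Pa, one atmosphere
M = 0.032          # kg/mol, molar mass of O2
R = 8.314          # J/(mol*K), gas constant
T = 273.15         # K

rho_gas = P * M / (R * T)     # ideal gas law: rho = P*M / (R*T), ~1.4 kg/m^3
rho_liquid = 1141.0           # kg/m^3, liquid oxygen (assumed)
print(f"volume ratio: {rho_liquid / rho_gas:.0f}")   # ~800
```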

On Mercury, Gemini, and Apollo, the cabin atmosphere was pure oxygen at about 0.3 kg/sq cm (about 5 lb/sq in). On the space shuttle a mixture of oxygen and nitrogen provides a pressure of 1.01 kg/sq cm (14.5 lb/sq in), slightly less than atmospheric pressure on Earth at sea level. Shuttle astronauts who go on spacewalks must pre-breathe pure oxygen to purge nitrogen from their bloodstream. Because the shuttle space suit operates at a lower pressure (0.30 kg/sq cm, or 4.3 lb/sq in) than the cabin, sudden decompression could otherwise cause nitrogen bubbles to form in blood and tissues, a painful and potentially lethal condition called decompression sickness, or the bends. The International Space Station has an oxygen-nitrogen atmosphere at a pressure similar to that in the shuttle.

In the past, astronauts on missions of a few days or less have often worked long hours. Some found that their need for sleep was reduced because of the minimal exertion required to move around in microgravity. However, the intense concentration required to complete busy flight plans can be tiring. On longer missions, proper rest is essential to the crew’s performance. Even on the Moon, astronauts on extended exploration missions—with surface stay times of three days—knew that they could not afford to go without a good night’s sleep. Redesigned space suits, which were easier to take off and put on, and hammocks that were strung across the lunar module cabin helped the Moon explorers get their rest.

On the Skylab space station, each astronaut had a small sleeping compartment with a sleeping restraint attached to the wall. On Mir, cosmonauts and astronauts sometimes took their sleeping bags and moved them to favorite locations inside one module or another. The International Space Station, like Skylab, has private sleeping quarters, and these will be expanded in the future to accommodate a greater number of people.

Recreation is also essential on long missions, and it takes many forms. Weightlessness provides an ongoing source of fascination and enjoyment, offering the opportunity for acrobatics, experimentation, and games. Looking out the window is perhaps the most popular pastime for astronauts orbiting Earth, providing ever-changing vistas of their home planet. On some flights, astronauts and cosmonauts read books, play musical instruments, watch videos, and engage in two-way conversations with family members on the ground.


Work in Space

Humans face many challenges when working in space. These challenges include communicating with Earth and other spacecraft, creating suitable environments for scientific experiments and other tasks, moving around in the microgravity of space, and working within cumbersome spacesuits.

Spacecraft in orbit around Earth cannot communicate continuously with the ground unless special relay satellites provide a link between the spacecraft and ground receiving stations. This problem disappears when astronauts leave Earth orbit. As Apollo astronauts traveled to the Moon, they were in constant touch with mission control. However, when they entered lunar orbit, communications were interrupted whenever the spacecraft flew over the far side of the Moon, because the Moon stood between the spacecraft and Earth. Lunar landing sites were on the near side of the Moon, so Earth was always overhead and the astronauts could maintain continuous contact with mission control. For astronauts who venture to other planets, the primary difficulty in communications will be one of distance. For example, radio signals from Mars will take as long as 20 minutes to reach Earth, making ordinary conversations impossible. For this reason, planetary explorers will have to be able to solve many problems on their own, without help from mission control.
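The 20-minute figure is simply the light travel time over the Earth-Mars distance, which swings between roughly 56 million and 400 million km as the two planets orbit the Sun. A quick Python check:

```python
C = 2.998e8    # speed of light, m/s

# One-way radio delay at the closest and farthest Earth-Mars distances.
for d_km in (56e6, 400e6):
    one_way = d_km * 1000 / C / 60    # minutes
    print(f"{d_km/1e6:.0f} million km -> {one_way:.1f} min one way")
```

Even at closest approach a question-and-answer exchange takes over six minutes round trip, which is why Mars crews would need to work autonomously.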

The design of spacecraft interiors has changed as more powerful booster rockets have become available. Powerful boosters allow bigger spacecraft with roomier cabins. In Mercury and Gemini, for example, astronauts could not even stretch their legs completely. Their cockpits resembled those of jet fighters. The Apollo command module offered a bit of room in which to move around, and included a lower equipment bay with navigation equipment, a food pantry, and storage areas. The Soviet Vostoks had enough room for their sole occupant to float around, and Soyuz includes both a fairly cramped reentry module and a roomier orbital module. The orbital module is jettisoned prior to the cosmonauts’ return to Earth. The space shuttle has two floors—a flight deck with seats, controls, and windows and a middeck with storage lockers and space to perform experiments.

For the Skylab space station, designers had the luxury of creating several different kinds of environments for different purposes. For example, Skylab had its own wardroom, bathroom, and sleeping quarters. Designers have tried several different approaches to work spaces on spacecraft. Most rooms on Skylab were designed like rooms on Earth with a definite floor and ceiling. However, Skylab’s multiple docking adaptor had instrument panels on each wall, and each had its own frame of reference. Thanks to weightlessness, this was not a problem: Astronauts reported that they were able to shift their own sense of up and down to match their surroundings. When necessary, ceiling became floor and vice versa. On Salyut and Mir, the ceilings and floors were painted different colors to aid cosmonauts in orienting themselves. Because simulators on Earth were given the same color scheme, the cosmonauts were accustomed to it when they lifted off.

To help astronauts anchor themselves while they work in weightlessness, designers have equipped spacecraft with a variety of devices, including handholds, harnesses, and foot restraints. Foot restraints have taken a number of forms. Skylab crews used special shoes that could lock into a grid-like floor. Apollo astronauts used shoes equipped with strips of Velcro that stuck to Velcro strips on the capsule floor. Space shuttle astronauts have even used strips of tape on the floor as temporary foot restraints.

Astronauts and cosmonauts who perform spacewalks use a variety of devices to aid in mobility and in anchoring the body in weightlessness. Any surface along which astronauts move is fitted with handholds, which the astronauts use to pull themselves along. Foot restraints allow astronauts to remain anchored in one spot, something that is often essential for tasks requiring the use of both hands. During many spacewalks, astronauts use tethers to keep themselves from drifting away from the spacecraft. Sometimes, however, astronauts fly freely as they work by wearing backpacks with thrusters to control their direction and movement.

Astronauts who have conducted spacewalks report that the most difficult tasks are those that involve using their gloved hands to grip or manipulate tools and other gear. Because the suit—including its gloves—is pressurized, closing the hand around an object requires constant effort, like squeezing a tennis ball. After a few hours of this work, forearms and hands become fatigued. The astronauts must also keep careful track of tools and parts to prevent them from floating away. In general, designers of space hardware strive to make any kind of assembly or repair work in space as simple as possible.



Space exploration requires more than just science—it requires an enormous amount of money. The amount of money that a country is willing to invest in space exploration depends on the political climate of the time. During the Cold War, a period of tense relations between the United States and the USSR, both countries poured huge amounts of money into their space programs, because many of the political and public opinion battles were being fought over superiority in space. After the Cold War, space exploration budgets in both countries shrank dramatically.


The Space Race and the Cold War

Space exploration became possible at the height of the Cold War, and superpower competition between the United States and the USSR gave a boost to space programs in both nations. Indeed, the primary impact of Sputnik was political—in the United States Sputnik triggered nationwide concern about Soviet technological prowess. When the USSR succeeded in putting the first human into space, it only added to the disappointment and shame felt by many Americans, and especially by President Kennedy. Against this background, Alan Shepard’s Mercury flight on May 5, 1961, was a welcome cause for celebration. Twenty days later Kennedy told Congress, “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.” This was the genesis of the Apollo program. Although there were other motivations for going to the Moon—scientific exploration among them—Cold War geopolitics was the main push behind the Moon race. Cold War competition also affected the unpiloted space programs of the United States and USSR.


The Moon Race

During the piloted programs of the Moon race, the pressure of competition caused Soviet leaders to order a number of “space spectaculars,” as much for their propaganda value as for their contributions. Each Voskhod flight entailed significant risks to the cosmonauts—the Voskhod 1 crew flew without space suits, while Voskhod 2’s Alexei Leonov was almost unable to reenter his craft following his historic spacewalk. But the space spectacular the Soviets wanted most of all—a piloted mission around the Moon in time for the 50th anniversary of the Russian revolution—never came to pass. By December 1968, when the Apollo 8 astronauts flew around the Moon, it was clear that victory in the Moon race had gone to the United States.

The achievement of Kennedy’s goal, with the Apollo 11 lunar landing mission, signaled a new era in space exploration in the United States—but not as NASA had hoped. Instead of accepting NASA’s proposals for a suite of ambitious post-Apollo space programs, Congress backed off on space funding, with the space shuttle as the only major space program to gain approval. In time it became clear that the lavish space budgets of the 1960s had been a product of a unique time in history, in which space was the most visible arena for superpower competition.


After the Moon

Tensions between the superpowers eased somewhat in the early 1970s, and the United States and USSR joined forces for the Apollo-Soyuz mission in 1975. Nevertheless, Cold War suspicions continued to influence space planners in both nations in the 1970s and 1980s. Both sides continued to spend enormous sums on missiles and nuclear warheads. Missiles of the Cold War arms race were designed to fly between continents on a path that took them briefly into space during their journeys. In the United States, a great deal of research went into a space-based antimissile system called the Strategic Defense Initiative (known to the public as Star Wars), which was never built. The stockpiling of missiles was eventually slowed by the Strategic Arms Limitation Talks (SALT) treaties.

In the USSR, concerns over possible offensive uses of the U.S. space shuttle helped prompt the development of the heavy-lift launcher Energia and the space shuttle Buran. Economic hardships, however, forced the suspension of both programs. The economy worsened after the collapse of the USSR in 1991, threatening the now-Russian space program with extinction.


After the Cold War

In 1993 the U.S. government redefined NASA’s plans for an international space station to include Russia as a partner, a development that would not have been possible before the end of the Cold War. An era of renewed cooperation in space between Russia and the United States followed, highlighted by flights of cosmonauts on the space shuttle and astronauts on the Mir space station.

Meanwhile, other nations have staged their own programs of unpiloted and piloted space missions. Many have been conducted by the European Space Agency (ESA), formed in 1975, whose 13 member nations include France, Italy, Germany, and the United Kingdom. European astronauts visited Mir and have flown on shuttle missions. Since the late 1970s, a series of European rockets called Ariane have launched a significant percentage of commercial satellites. ESA’s activities in planetary exploration have included probes such as Huygens, which is scheduled to land on Saturn’s moon Titan in 2004 as part of NASA’s Cassini mission.

China, Japan, and India have each developed satellite launchers. None have created rockets powerful enough to put piloted spacecraft into orbit. However, Japan has joined Canada, Russia, and the ESA in contributing hardware and experiments to the International Space Station.


The High Cost of Space Exploration

One aspect of space exploration that has changed little over time is its cost. To some extent the ability to carry out a vigorous space program is a measure of a nation’s economic vitality. For example, Russia has had difficulties staying on schedule with its contributions to the International Space Station—a reflection of the unstable Russian economy.

Cost has always been a central factor in the political standing of space programs. The enormous expense of the Apollo Moon program (roughly $100 billion in 1990s dollars) prompted critics to say that the program could have been carried out far more cheaply by robotic missions. While that claim is oversimplified—no robot has yet equaled the performance of a skilled observer—it reveals how vulnerable space programs are to budget cuts. The reusable space shuttle failed to significantly lower the cost of placing satellites in low Earth orbit, as compared with throwaway launchers like the Saturn V and the Titan III. Cost, not scientific potential, is usually the most significant factor for a nation in deciding whether to adopt a major space program. In the United States budgetary process, space funding must compete in a very visible way with expenditures for social programs and other concerns. Taking inflation into account, Congress has steadily trimmed NASA’s allotments, forcing the agency to reduce its number of employees to pre-Apollo levels by the year 2000.

In response to the high cost of space access, the late 1990s saw renewed efforts to develop a single-stage, reusable space vehicle. The situation also strengthened arguments that in the future, the most expensive space programs should be carried out by a consortium of nations. Most scientists envision a program for sending humans to Mars as an international one, primarily as a cost-sharing measure. Still, the mix of scientific, political, and other motivations has yet to bring about such a venture, and it may be years or even decades before international piloted interplanetary voyages become reality.



The Future of Space Exploration

The future of space exploration depends on many things. It depends on how technology evolves, how political forces shape rivalries and partnerships between nations, and how important the public feels space exploration is. The near future will see the continuation of human spaceflight in Earth orbit and unpiloted spaceflight within the solar system. Piloted spaceflight to other planets, or even back to the Moon, still seems far away. Any flight to other solar systems is even more distant, but a huge advance in space technology could propel space exploration into realms currently explored only by science fiction.


Piloted Spaceflight

The 1968 film 2001: A Space Odyssey depicted commercial shuttles flying to and from a giant wheel-shaped space station in orbit around Earth, bases on the Moon, and a piloted mission to Jupiter. The real space activities of 2001 will not match this cinematic vision, but the 21st century will see a continuation of efforts to transform humanity into a spacefaring species.

The International Space Station was scheduled to become operational in the first years of the new century. NASA plans to operate the space shuttle fleet at least through the year 2012 before phasing in a replacement—possibly a single-stage-to-orbit (SSTO) vehicle. However, some experts predict that the SSTO is too difficult a goal to be achieved that soon, and that a different kind of second-generation shuttle would be necessary—perhaps a two-stage, reusable vehicle much like the current shuttle. In a two-stage launcher, neither stage is required to do all the work of getting into orbit. This results in less stringent specifications on weight and performance than are necessary for an SSTO.
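The advantage of splitting the work between two stages follows from the Tsiolkovsky rocket equation, which relates the velocity change a rocket can achieve to the fraction of its mass that must be propellant. The sketch below illustrates the point; the delta-v and engine figures are illustrative assumptions, not values from this article.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """Propellant mass fraction needed for a given delta-v (Tsiolkovsky rocket equation)."""
    mass_ratio = math.exp(delta_v / (isp * G0))  # initial mass / final mass
    return 1.0 - 1.0 / mass_ratio

# Assumed figures: ~9,400 m/s of delta-v to reach low Earth orbit,
# and a specific impulse of 450 s for a hydrogen/oxygen engine.
DELTA_V_LEO = 9_400.0
ISP = 450.0

ssto = propellant_fraction(DELTA_V_LEO, ISP)           # one stage does all the work
per_stage = propellant_fraction(DELTA_V_LEO / 2, ISP)  # each of two stages does half

print(f"SSTO propellant fraction:      {ssto:.1%}")
print(f"Two-stage, per-stage fraction: {per_stage:.1%}")
```

Under these assumptions a single stage must be roughly 88 percent propellant by mass, while each stage of a two-stage vehicle needs only about 65 percent, which is why the structural and engine requirements for a two-stage shuttle are so much less demanding.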

Perhaps the most difficult problem space planners face is how to finance a vigorous program of piloted space exploration, in Earth orbit and beyond. In 2001 no single government or international consortium had plans to send people back to the Moon, much less to Mars. Such missions are unlikely to happen until the perceived value exceeds their cost.

Some observers, such as Apollo 11 astronaut Buzz Aldrin, believe the solution may lie in space tourism. By conducting a lottery for tickets on Earth-orbit “vacations,” a nonprofit corporation could generate revenue to finance space tourism activities. In addition, the vehicles developed to carry passengers might find later use as transports to the Moon and Mars. Several organizations are pushing for the development of commercial piloted spaceflight. In 1996 the U.S. X-Prize Foundation announced that it would award $10 million to the first private team to build and fly a reusable spacecraft capable of carrying three individuals to a height of at least 100 km (62 mi). By 2000, 16 teams had registered for the competition, with estimates of first flights in 2001.

One belief shared by Aldrin and a number of other space exploration experts is that future lunar and Martian expeditions should not be Apollo-style visits, but rather should be aimed at creating permanent settlements. The residents of such outposts would have to “live off the land,” obtaining necessities such as oxygen and water from the harsh environment. On the Moon, pioneers could obtain oxygen by heating lunar soil. In 1998 the Lunar Prospector discovered evidence of significant deposits of ice—a valuable resource for settlers—mixed with soil at the lunar poles. On Mars, oxygen could be extracted from the atmosphere and water could come from buried deposits of ice.
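One frequently studied way to "heat lunar soil" for oxygen is hydrogen reduction of the mineral ilmenite (FeTiO3), which releases one oxygen atom per formula unit as water that can then be electrolyzed. The sketch below works out the yield; the reaction choice and the assumed ilmenite content of the soil are illustrative assumptions, not figures from this article.

```python
# Molar masses in g/mol
M_FE, M_TI, M_O = 55.85, 47.87, 16.00
M_ILMENITE = M_FE + M_TI + 3 * M_O  # FeTiO3, about 151.7 g/mol

# Hydrogen reduction: FeTiO3 + H2 -> Fe + TiO2 + H2O.
# Each formula unit surrenders one oxygen atom, recovered by electrolyzing the water.
oxygen_yield = M_O / M_ILMENITE  # mass fraction of ilmenite recoverable as oxygen

ILMENITE_IN_REGOLITH = 0.10  # assumed ilmenite fraction in mare soil (illustrative)

regolith_per_kg_o2 = 1.0 / (oxygen_yield * ILMENITE_IN_REGOLITH)
print(f"Oxygen yield from pure ilmenite: {oxygen_yield:.1%}")
print(f"Regolith processed per kg of O2: {regolith_per_kg_o2:.0f} kg")
```

With these assumptions, settlers would need to process on the order of 95 kg of soil for each kilogram of oxygen, which suggests why such outposts would depend on substantial mining and processing equipment.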

The future of piloted lunar and planetary exploration remains largely unknown. Most space exploration scientists believe that people will be on the Moon and Mars by the middle of the 21st century, but how they get there—and the nature of their visits—is a subject of continuing debate. Clearly, key advances will need to be made in lowering the cost of getting people off Earth, the first step in any human voyage to other worlds.


Unpiloted Spaceflight

The space agencies of the world planned a wide array of robotic missions for the final years of the 20th century and the opening decade of the 21st century. NASA’s Mission to Planet Earth (MTPE) Enterprise is designed to study Earth as a global system, and to document the effects of natural changes and human activity on the environment. The Earth Observing System (EOS) spacecraft form the cornerstone of the MTPE effort. Terra, the first EOS spacecraft, was launched in December 1999. It began providing scientists with data and images in April 2000.

Mars will be visited by a succession of landers and orbiters as part of NASA’s Discovery Program, of which the Mars Pathfinder lander was a part. The program suffered setbacks in 1999 that jeopardized NASA’s goal of retrieving a sample of Martian rocks and soil in 2003 and bringing it to Earth. Although NASA planned future missions to Mars, the missions may face delays as engineers work to ensure they do not lose more spacecraft to human error or inadequate testing.

The Discovery program also includes the Near Earth Asteroid Rendezvous mission (NEAR). This spacecraft entered orbit around the asteroid Eros in 2000. In 2004 a spacecraft called Stardust, launched on February 7, 1999, is scheduled to fly past Comet Wild 2 (pronounced Vilt 2) and gather samples of the comet’s dust to bring back to Earth (see Comet).

Jupiter’s moon Europa is also likely to receive increased scrutiny, because of strong evidence for a liquid-water ocean beneath its icy crust. Among the missions being studied is a lander to drill through the ice and explore this suspected ocean. As with Mars, scientists are especially eager to find any evidence of past or present life on Europa. Such investigations will be difficult, but the discovery of any form of life beyond Earth would undoubtedly spur further explorations.

Saturn will be visited by the Cassini orbiter in the summer of 2004. The spacecraft is to deploy a probe called Huygens that will enter the atmosphere of Saturn’s largest moon, Titan, in December 2004. During its trip to the surface, Huygens will analyze the cloudy atmosphere, which is rich in organic molecules.

NASA is also considering orbiters to survey Mercury, Uranus, and Neptune. Pluto, the only planet that has never been visited by a spacecraft, is the target for a proposed Pluto Express mission. A pair of lightweight probes would be launched at high speed, reaching Pluto and its moon Charon as early as 2010.

NASA’s New Millennium program is aimed at creating new technologies for space exploration and swiftly incorporating them into spacecraft. In its first mission, the Deep Space 1 spacecraft used solar-electric propulsion to fly by an asteroid in July 1999 and was scheduled to visit Comet Borrelly in 2001.

NASA also plans a number of orbiting telescopes, such as the Chandra X-Ray Observatory, an X-ray astronomy telescope launched from the space shuttle in 1999. Another program, called Origins, is designed to use ground-based and space-borne telescopes to search for Earthlike planets orbiting other stars.


International Cooperation

Space exploration experts have long hoped that as international tensions have eased, an increasing number of space activities could be undertaken on an international, cooperative basis. One example is the International Space Station, whose crew transportation system may rely on Russian space hardware such as the Soyuz spacecraft. In 1998, however, partners such as Japan and the European Space Agency (ESA) began to reassess their commitments to space exploration because of economic uncertainty.

In addition to the economic savings that could result from nations pooling their resources to explore space, the new perspective gained by space voyages could be an important benefit to international relations. The Apollo astronauts have said that the greatest discovery of their voyages to the Moon was the view of Earth as a precious island of life in the void. Ultimately that awareness could help to improve our lives on Earth.

