Architecture (computer science)

Architecture (computer science), a general term referring to the structure of all or part of a computer system. The term also covers the design of system software, such as the operating system (the program that controls the computer), as well as referring to the combination of hardware and basic software that links the machines on a computer network. Computer architecture refers to an entire structure and to the details needed to make it functional. Thus, computer architecture covers computer systems, microprocessors, circuits, and system programs. Typically the term does not refer to application programs, such as spreadsheets or word processing, which are required to perform a task but not to make the system run.



In designing a computer system, architects consider five major elements that make up the system's hardware: the arithmetic/logic unit, control unit, memory, input, and output. The arithmetic/logic unit performs arithmetic and compares numerical values. The control unit directs the operation of the computer by taking the user instructions and transforming them into electrical signals that the computer's circuitry can understand. The combination of the arithmetic/logic unit and the control unit is called the central processing unit (CPU). The memory stores instructions and data. The input and output sections allow the computer to receive and send data, respectively.

Different hardware architectures are required because of the specialized needs of systems and users. One user may need a system to display graphics extremely fast, while another system may have to be optimized for searching a database or conserving battery power in a laptop computer.

In addition to the hardware design, the architects must consider what software programs will operate the system. Software, such as programming languages and operating systems, makes the details of the hardware architecture invisible to the user. For example, computers that use the C programming language or a UNIX operating system may appear the same from the user's viewpoint, although they use different hardware architectures.



When a computer carries out an instruction, it proceeds through five steps. First, the control unit retrieves the instruction from memory—for example, an instruction to add two numbers. Second, the control unit decodes the instruction into electronic signals that control the computer. Third, the control unit fetches the data (the two numbers). Fourth, the arithmetic/logic unit performs the specific operation (the addition of the two numbers). Fifth, the control unit saves the result (the sum of the two numbers).
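The five-step cycle above can be sketched in a few lines of Python. The instruction format, addresses, and values here are invented purely for illustration; a real control unit works on binary encodings, not tuples.

```python
# A toy illustration of the five-step instruction cycle described above.
# The instruction format and memory layout are invented for this sketch.

memory = {
    0: ("ADD", 100, 101, 102),  # instruction: add contents of 100 and 101, store at 102
    100: 7,                     # first operand
    101: 35,                    # second operand
}

def run(pc):
    instr = memory[pc]                 # 1. retrieve the instruction from memory
    op, src1, src2, dest = instr       # 2. decode it into its fields
    a, b = memory[src1], memory[src2]  # 3. fetch the data
    if op == "ADD":
        result = a + b                 # 4. the arithmetic/logic unit operates
    memory[dest] = result              # 5. save the result
    return result

print(run(0))  # → 42
```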

Early computers used only simple instructions because the cost of electronics capable of carrying out complex instructions was high. As this cost decreased in the 1960s, more complicated instructions became possible. Complex instructions (single instructions that specify multiple operations) can save time because they make it unnecessary for the computer to retrieve additional instructions. For example, if seven operations are combined in one instruction, then six of the steps that fetch instructions are eliminated and the computer spends less time processing that operation. Computers that combine several instructions into a single operation are called complex instruction set computers (CISC).

However, most programs consist mostly of simple instructions and use complex instructions only rarely. When these simple instructions are run on CISC architectures, they slow down processing because each instruction, whether simple or complex, takes longer to decode in a CISC design. An alternative strategy is to return to designs that use only simple, single-operation instruction sets and make the most frequently used operations faster in order to increase overall performance. Computers that follow this design are called reduced instruction set computers (RISC).
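The trade-off described in the last two paragraphs can be made concrete with a toy cycle-count model. All the costs below are invented for illustration, not measured from any real processor: the CISC design pays a higher decode cost per instruction, while the RISC design pays a fetch and decode for every simple operation.

```python
# Invented cycle counts illustrating the CISC/RISC trade-off discussed above.

def total_cycles(program, decode_cost, fetch_cost):
    # Each instruction pays one fetch plus one decode, then its own work,
    # where "work" is the number of operations the instruction performs.
    return sum(fetch_cost + decode_cost + work for work in program)

# Seven operations fused into one complex instruction vs. seven simple ones.
cisc_compound = total_cycles([7],     decode_cost=4, fetch_cost=1)  # 12 cycles
risc_simple   = total_cycles([1] * 7, decode_cost=1, fetch_cost=1)  # 21 cycles
cisc_simple   = total_cycles([1] * 7, decode_cost=4, fetch_cost=1)  # 42 cycles

print(cisc_compound, risc_simple, cisc_simple)  # 12 21 42
```

With these made-up numbers, the complex instruction wins when seven operations genuinely fuse into one (12 vs. 21 cycles), but a typical program of independent simple instructions runs much faster on the fast-decoding RISC design (21 vs. 42 cycles).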

RISC designs are especially fast at the numerical computations required in science, graphics, and engineering applications. CISC designs are commonly used for nonnumerical computations because they provide special instruction sets for handling character data, such as text in a word processing program. Specialized CISC architectures, called digital signal processors, exist to accelerate processing of digitized audio and video signals.



The CPU of a computer is connected to memory and to the outside world by means of either an open or a closed architecture. An open architecture can be expanded after the system has been built, usually by adding extra circuitry, such as a new microprocessor computer chip connected to the main system. The specifications of the circuitry are made public, allowing other companies to manufacture these expansion products.

Closed architectures are usually employed in specialized computers that will not require expansion—for example, computers that control microwave ovens. Some computer manufacturers have used closed architectures so that their customers can purchase expansion circuitry only from them. This allows the manufacturer to charge more and reduces the options for the consumer.



Computers communicate with other computers via networks. The simplest network is a direct connection between two computers. However, computers can also be connected over large networks, allowing users to exchange data, communicate via electronic mail, and share resources such as printers.

Computers can be connected in several ways. In a ring configuration, data are transmitted along the ring and each computer in the ring examines this data to determine if it is the intended recipient. If the data are not intended for a particular computer, the computer passes the data to the next computer in the ring. This process is repeated until the data arrive at their intended destination. A ring network allows multiple messages to be carried simultaneously, but since each message is checked by each computer, data transmission is slowed.

In a bus configuration, computers are connected through a single set of wires, called a bus. One computer sends data to another by broadcasting the address of the receiver and the data over the bus. All the computers in the network look at the address simultaneously, and the intended recipient accepts the data. A bus network, unlike a ring network, allows data to be sent directly from one computer to another. However, only one computer at a time can transmit data. The others must wait to send their messages.

In a star configuration, computers are linked to a central computer called a hub. A computer sends the address of the receiver and the data to the hub, which then links the sending and receiving computers directly. A star network allows multiple messages to be sent simultaneously, but it is more costly because it uses an additional computer, the hub, to direct the data.
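The three topologies can be compared by counting how many machines handle a message on its way to the recipient. This sketch abstracts away everything else about the network; the machine counts and addresses are arbitrary.

```python
# How many machines handle a message in each topology described above
# (the sender itself is not counted).

def ring_hops(n, src, dst):
    # Data travels around the ring one machine at a time until it
    # reaches its destination.
    return (dst - src) % n

def bus_hops(src, dst):
    # A broadcast on the shared bus reaches the recipient directly,
    # but only one sender may transmit at a time.
    return 1

def star_hops(src, dst):
    # The message goes to the hub, which links sender and receiver.
    return 2

print(ring_hops(8, 2, 6), bus_hops(2, 6), star_hops(2, 6))  # 4 1 2
```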



One problem in computer architecture is caused by the difference between the speed of the CPU and the speed at which memory supplies instructions and data. Modern CPUs can process instructions in 3 nanoseconds (3 billionths of a second). A typical memory access, however, takes 100 nanoseconds and each instruction may require multiple accesses. To compensate for this disparity, new computer chips have been designed that contain small memories, called caches, located near the CPU. Because of their proximity to the CPU and their small size, caches can supply instructions and data faster than normal memory. Cache memory stores the most frequently used instructions and data and can greatly increase efficiency.

Although a larger cache memory can hold more data, it also becomes slower. To compensate, computer architects employ designs with multiple caches. The design places the smallest and fastest cache nearest the CPU and locates a second larger and slower cache farther away. This arrangement allows the CPU to operate on the most frequently accessed instructions and data at top speed and to slow down only slightly when accessing the secondary cache. Using separate caches for instructions and data also allows the CPU to retrieve an instruction and data simultaneously.
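A toy model of such a two-level cache shows the payoff. The latencies below are illustrative placeholders in the spirit of the numbers above, and the model ignores cache size limits and eviction entirely.

```python
# A toy two-level cache model for the access-time gap discussed above.
# Latencies (in nanoseconds) are illustrative, not measurements, and
# the caches here never fill up or evict anything.

L1 = {"latency": 3, "contents": set()}    # small, fast, nearest the CPU
L2 = {"latency": 20, "contents": set()}   # larger, slower, farther away
MEMORY_LATENCY = 100

def access(addr):
    """Return the cost of one access and promote the address into both caches."""
    if addr in L1["contents"]:
        cost = L1["latency"]
    elif addr in L2["contents"]:
        cost = L2["latency"]
    else:
        cost = MEMORY_LATENCY
    L1["contents"].add(addr)
    L2["contents"].add(addr)
    return cost

# Repeatedly touching the same addresses shows why caches pay off.
cold = sum(access(a) for a in range(4))  # 4 memory accesses: 400 ns
warm = sum(access(a) for a in range(4))  # 4 L1 hits: 12 ns
print(cold, warm)  # 400 12
```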

Another strategy to increase speed and efficiency is the use of multiple arithmetic/logic units for simultaneous operations, called superscalar execution. In this design, instructions are acquired in groups. The control unit examines each group to see if it contains instructions that can be performed together. Some designs execute as many as six operations simultaneously. It is rare, however, to have this many instructions run together, so on average the CPU does not achieve a six-fold increase in performance.

Multiple computers are sometimes combined into single systems called parallel processors. When a machine has more than one thousand arithmetic/logic units, it is said to be massively parallel. Such machines are used primarily for numerically intensive scientific and engineering computation. Parallel machines containing as many as sixteen thousand computers have been constructed.

Computer Science



Computer Science, study of the theory, experimentation, and engineering that form the basis for the design and use of computers—devices that automatically process information. Computer science traces its roots to work done by English mathematician Charles Babbage, who first proposed a programmable mechanical calculator in 1837. Until the advent of electronic digital computers in the 1940s, computer science was not generally distinguished as being separate from mathematics and engineering. Since then it has sprouted numerous branches of research that are unique to the discipline.



Early work in the field of computer science during the late 1940s and early 1950s focused on automating the process of making calculations for use in science and engineering. Scientists and engineers developed theoretical models of computation that enabled them to analyze how efficient different approaches were in performing various calculations. Computer science overlapped considerably during this time with the branch of mathematics known as numerical analysis, which examines the accuracy and precision of calculations.

As the use of computers expanded between the 1950s and the 1970s, the focus of computer science broadened to include simplifying the use of computers through programming languages (artificial languages used to program computers) and operating systems (computer programs that provide a useful interface between a computer and a user). During this time, computer scientists were also experimenting with new applications and computer designs, creating the first computer networks, and exploring relationships between computation and thought.

In the 1970s, computer chip manufacturers began to mass produce microprocessors—the electronic circuitry that serves as the main information processing center in a computer. This new technology revolutionized the computer industry by dramatically reducing the cost of building computers and greatly increasing their processing speed. The microprocessor made possible the advent of the personal computer, which resulted in an explosion in the use of computer applications. Between the early 1970s and 1980s, computer science rapidly expanded in an effort to develop new applications for personal computers and to drive the technological advances in the computing industry. Much of the earlier research that had been done began to reach the public through personal computers, which derived most of their early software from existing concepts and systems.

Computer scientists continue to expand the frontiers of computer and information systems by pioneering the designs of more complex, reliable, and powerful computers; enabling networks of computers to efficiently exchange vast amounts of information; and seeking ways to make computers behave intelligently. As computers become an increasingly integral part of modern society, computer scientists strive to solve new problems and invent better methods of solving current problems.

The goals of computer science range from finding ways to better educate people in the use of existing computers to highly speculative research into technologies and approaches that may not be viable for decades. Underlying all of these specific goals is the desire to better the human condition today and in the future through the improved use of information.



Computer science is a combination of theory, engineering, and experimentation. In some cases, a computer scientist develops a theory, then engineers a combination of computer hardware and software based on that theory, and experimentally tests it. An example of such a theory-driven approach is the development of new software engineering tools that are then evaluated in actual use. In other cases, experimentation may result in new theory, such as the discovery that an artificial neural network exhibits behavior similar to neurons in the brain, leading to a new theory in neurophysiology.

It might seem that the predictable nature of computers makes experimentation unnecessary because the outcome of experiments should be known in advance. But when computer systems and their interactions with the natural world become sufficiently complex, unforeseen behaviors can result. Experimentation and the traditional scientific method are thus key parts of computer science.



Computer science can be divided into four main fields: software development, computer architecture (hardware), human-computer interfacing (the design of the most efficient ways for humans to use computers), and artificial intelligence (the attempt to make computers behave intelligently). Software development is concerned with creating computer programs that perform efficiently. Computer architecture is concerned with developing optimal hardware for specific computational needs. The areas of artificial intelligence (AI) and human-computer interfacing often involve the development of both software and hardware to solve specific problems.


Software Development

In developing computer software, computer scientists and engineers study various areas and techniques of software design, such as the best types of programming languages and algorithms (see below) to use in specific programs, how to efficiently store and retrieve information, and the computational limits of certain software-computer combinations. Software designers must consider many factors when developing a program. Often, program performance in one area must be sacrificed for the sake of the general performance of the software. For instance, since computers have only a limited amount of memory, software designers must limit the number of features they include in a program so that it will not require more memory than the system it is designed for can supply.

Software engineering is an area of software development in which computer scientists and engineers study methods and tools that facilitate the efficient development of correct, reliable, and robust computer programs. Research in this branch of computer science considers all the phases of the software life cycle, which begins with a formal problem specification, and progresses to the design of a solution, its implementation as a program, testing of the program, and program maintenance. Software engineers develop software tools and collections of tools called programming environments to improve the development process. For example, tools can help to manage the many components of a large program that is being written by a team of programmers.

Algorithms and data structures are the building blocks of computer programs. An algorithm is a precise step-by-step procedure for solving a problem within a finite time and using a finite amount of memory. Common algorithms include searching a collection of data, sorting data, and numerical operations such as matrix multiplication. Data structures are patterns for organizing information, and often represent relationships between data values. Some common data structures are called lists, arrays, records, stacks, queues, and trees.
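As a small illustration of these building blocks, here is one classic algorithm (binary search of a sorted array) and one classic data structure (a stack) in plain Python.

```python
# One searching algorithm and one data structure from the families
# named above, written out in plain Python.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    Finishes in a finite number of steps: the search range halves
    on every iteration.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

stack = []         # a Python list used as a stack (last in, first out)
stack.append("a")  # push
stack.append("b")  # push
top = stack.pop()  # pop returns the most recently pushed value

print(binary_search([2, 3, 5, 7, 11], 7), top)  # 3 b
```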

Computer scientists continue to develop new algorithms and data structures to solve new problems and improve the efficiency of existing programs. One area of theoretical research is called algorithmic complexity. Computer scientists in this field seek to develop techniques for determining the inherent efficiency of algorithms with respect to one another. Another area of theoretical research called computability theory seeks to identify the inherent limits of computation.

Software engineers use programming languages to communicate algorithms to a computer. Natural languages such as English are ambiguous—meaning that their grammatical structure and vocabulary can be interpreted in multiple ways—so they are not suited for programming. Instead, simple and unambiguous artificial languages are used. Computer scientists study ways of making programming languages more expressive, thereby simplifying programming and reducing errors. A program written in a programming language must be translated into machine language (the actual instructions that the computer follows). Computer scientists also develop better translation algorithms that produce more efficient machine language programs.

Databases and information retrieval are related fields of research. A database is an organized collection of information stored in a computer, such as a company’s customer account data. Computer scientists attempt to make it easier for users to access databases, prevent access by unauthorized users, and improve access speed. They are also interested in developing techniques to compress the data, so that more can be stored in the same amount of memory. Databases are sometimes distributed over multiple computers that update the data simultaneously, which can lead to inconsistency in the stored information. To address this problem, computer scientists also study ways of preventing inconsistency without reducing access speed.

Information retrieval is concerned with locating data in collections that are not clearly organized, such as a file of newspaper articles. Computer scientists develop algorithms for creating indexes of the data. Once the information is indexed, techniques developed for databases can be used to organize it. Data mining is a closely related field in which a large body of information is analyzed to identify patterns. For example, mining the sales records from a grocery store could identify shopping patterns to help guide the store in stocking its shelves more effectively.
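A minimal inverted index, the basic structure behind such indexing, can be built over a toy collection like this (the "articles" below are invented for the example).

```python
# A minimal inverted index of the kind used in information retrieval:
# it maps each word to the set of documents containing that word.

articles = {
    1: "parallel computers share memory",
    2: "memory caches speed up computers",
    3: "robots use sensors",
}

index = {}
for doc_id, text in articles.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

# Once the collection is indexed, a lookup is a dictionary access
# instead of a scan through every article.
print(sorted(index["memory"]))     # [1, 2]
print(sorted(index["computers"]))  # [1, 2]
```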

Operating systems are programs that control the overall functioning of a computer. They provide the user interface, place programs into the computer’s memory and cause it to execute them, control the computer’s input and output devices, manage the computer’s resources such as its disk space, protect the computer from unauthorized use, and keep stored data secure. Computer scientists are interested in making operating systems easier to use, more secure, and more efficient by developing new user interface designs, designing new mechanisms that allow data to be shared while preventing access to sensitive data, and developing algorithms that make more effective use of the computer’s time and memory.

The study of numerical computation involves the development of algorithms for calculations, often on large sets of data or with high precision. Because many of these computations may take days or months to execute, computer scientists are interested in making the calculations as efficient as possible. They also explore ways to increase the numerical precision of computations, which can have such effects as improving the accuracy of a weather forecast. The goals of improving efficiency and precision often conflict, with greater efficiency being obtained at the cost of precision and vice versa.

Symbolic computation involves programs that manipulate nonnumeric symbols, such as characters, words, drawings, algebraic expressions, encrypted data (data coded to prevent unauthorized access), and the parts of data structures that represent relationships between values. One unifying property of symbolic programs is that they often lack the regular patterns of processing found in many numerical computations. Such irregularities present computer scientists with special challenges in creating theoretical models of a program’s efficiency, in translating it into an efficient machine language program, and in specifying and testing its correct behavior.


Computer Architecture

Computer architecture is the design and analysis of new computer systems. Computer architects study ways of improving computers by increasing their speed, storage capacity, and reliability, and by reducing their cost and power consumption. Computer architects develop both software and hardware models to analyze the performance of existing and proposed computer designs, then use this analysis to guide development of new computers. They are often involved with the engineering of a new computer because the accuracy of their models depends on the design of the computer’s circuitry. Many computer architects are interested in developing computers that are specialized for particular applications such as image processing, signal processing, or the control of mechanical systems. The optimization of computer architecture to specific tasks often yields higher performance, lower cost, or both.


Artificial Intelligence

Artificial intelligence (AI) research seeks to enable computers and machines to mimic human intelligence and sensory processing ability, and models human behavior with computers to improve our understanding of intelligence. The many branches of AI research include machine learning, inference, cognition, knowledge representation, problem solving, case-based reasoning, natural language understanding, speech recognition, computer vision, and artificial neural networks.

A key technique developed in the study of artificial intelligence is to specify a problem as a set of states, some of which are solutions, and then search for solution states. For example, in chess, each move creates a new state. If a computer searched the states resulting from all possible sequences of moves, it could identify those that win the game. However, the number of states associated with many problems (such as the possible number of moves needed to win a chess game) is so vast that exhaustively searching them is impractical. The search process can be improved through the use of heuristics—rules that are specific to a given problem and can therefore help guide the search. For example, a chess heuristic might indicate that when a move results in checkmate, there is no point in examining alternate moves.
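A miniature version of this state-space search can be written with integers as states and two possible moves; the problem itself is invented for illustration. The "never overshoot the goal" rule plays the role of a problem-specific heuristic, pruning states that cannot lead to a solution.

```python
# A small state-space search in the style described above: states are
# integers, the moves are "add 1" and "double", and we search for a
# shortest sequence of moves from a start state to a goal state.

from collections import deque

def solve(start, goal):
    frontier = deque([(start, [])])  # breadth-first: shortest paths first
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:            # a solution state: stop searching
            return path
        for label, nxt in (("add 1", state + 1), ("double", state * 2)):
            # Heuristic: a state past the goal can never reach it,
            # so there is no point in exploring it further.
            if nxt <= goal and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [label]))
    return None

print(solve(2, 11))  # a shortest move sequence from 2 to 11
```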



Another area of computer science that has found wide practical use is robotics—the design and development of computer controlled mechanical devices. Robots range in complexity from toys to automated factory assembly lines, and relieve humans from tedious, repetitive, or dangerous tasks. Robots are also employed where requirements of speed, precision, consistency, or cleanliness exceed what humans can accomplish. Roboticists—scientists involved in the field of robotics—study the many aspects of controlling robots. These aspects include modeling the robot’s physical properties, modeling its environment, planning its actions, directing its mechanisms efficiently, using sensors to provide feedback to the controlling program, and ensuring the safety of its behavior. They also study ways of simplifying the creation of control programs. One area of research seeks to provide robots with more of the dexterity and adaptability of humans, and is closely associated with AI.


Human-Computer Interfacing

Human-computer interfaces provide the means for people to use computers. An example of a human-computer interface is the keyboard, which lets humans enter commands into a computer and enter text into a specific application. The diversity of research into human-computer interfacing corresponds to the diversity of computer users and applications. However, a unifying theme is the development of better interfaces and experimental evaluation of their effectiveness. Examples include improving computer access for people with disabilities, simplifying program use, developing three-dimensional input and output devices for virtual reality, improving handwriting and speech recognition, and developing heads-up displays for aircraft instruments in which critical information such as speed, altitude, and heading are displayed on a screen in front of the pilot’s window. One area of research, called visualization, is concerned with graphically presenting large amounts of data so that people can comprehend its key properties.



Because computer science grew out of mathematics and electrical engineering, it retains many close connections to those disciplines. Theoretical computer science draws many of its approaches from mathematics and logic. Research in numerical computation overlaps with mathematics research in numerical analysis. Computer architects work closely with the electrical engineers who design the circuits of a computer.

Beyond these historical connections, there are strong ties between AI research and psychology, neurophysiology, and linguistics. Human-computer interface research also has connections with psychology. Roboticists work with both mechanical engineers and physiologists in designing new robots.

Computer science also has indirect relationships with virtually all disciplines that use computers. Applications developed in other fields often involve collaboration with computer scientists, who contribute their knowledge of algorithms, data structures, software engineering, and existing technology. In return, the computer scientists have the opportunity to observe novel applications of computers, from which they gain a deeper insight into their use. These relationships make computer science a highly interdisciplinary field of study.

Parallel Processing



Parallel Processing, computer technique in which multiple operations are carried out simultaneously. Parallelism reduces computational time. For this reason, it is used for many computationally intensive applications such as predicting economic trends or generating visual special effects for feature films.

Two common ways that parallel processing is accomplished are through multiprocessing or instruction-level parallelism. Multiprocessing links several processors—computers or microprocessors (the electronic circuits that provide the computational power and control of computers)—together to solve a single problem. Instruction-level parallelism uses a single computer processor that executes multiple instructions simultaneously.

If a problem is divided evenly into ten independent parts that are solved simultaneously on ten computers, then the solution requires one tenth of the time it would take on a single nonparallel computer where each part is solved in sequential order. Many large problems are easily divisible for parallel processing; however, some problems are difficult to divide because their parts are interdependent, requiring the results from another part of the problem before they can be solved.

Portions of a problem that cannot be calculated in parallel are called serial. These serial portions determine the computation time for a problem. For example, suppose a problem has nine million computations that can be done in parallel and one million computations that must be done serially. Theoretically, nine million computers could perform nine-tenths of the total computation simultaneously, leaving one-tenth of the total problem to be computed serially. Therefore, the total execution time is only one-tenth of what it would be on a single nonparallel computer, despite the additional nine million processors.
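This reasoning is a form of Amdahl's law: the serial fraction of a problem caps the speedup no matter how many processors are added. The function below is a standard formulation of that law, not something taken from this article, and it can be used to check the example's arithmetic.

```python
# Amdahl's law: speedup from parallelism is limited by the fraction of
# the work that must be done serially.

def speedup(serial_fraction, processors):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Nine million parallel computations, one million serial (serial fraction 0.1):
print(round(speedup(0.1, 9_000_000), 3))  # just under 10: at best ten times faster
print(round(speedup(0.1, 10), 3))         # 5.263 with only ten processors
```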



In 1966 American electrical engineer Michael Flynn distinguished four classes of processor architecture (the design of how processors manipulate data and instructions). Data can be sent either to a computer's processor one at a time, in a single data stream, or several pieces of data can be sent at the same time, in multiple data streams. Similarly, instructions can be carried out either one at a time, in a single instruction stream, or several instructions can be carried out simultaneously, in multiple instruction streams.

Serial computers have a Single Instruction stream, Single Data stream (SISD) architecture. One piece of data is sent to one processor. For example, if 100 numbers had to be multiplied by the number 3, each number would be sent to the processor, multiplied, and the result stored; then the next number would be sent and calculated, until all 100 results were calculated. Applications that are suited for SISD architectures include those that require complex interdependent decisions, such as word processing.

A Multiple Instruction stream, Single Data stream (MISD) processor replicates a stream of data and sends it to multiple processors, each of which then executes a separate program. For example, the contents of a database could be sent simultaneously to several processors, each of which would search for a different value. Problems well-suited to MISD parallel processing include computer vision systems that extract multiple features, such as vegetation, geological features, or manufactured objects, from a single satellite image.

A Single Instruction stream, Multiple Data stream (SIMD) architecture has multiple processing elements that carry out the same instruction on separate data. For example, a SIMD machine with 100 processing elements can simultaneously multiply 100 numbers each by the number 3. SIMD processors are programmed much like SISD processors, but their operations occur on arrays of data instead of individual values. SIMD processors are therefore also known as array processors. Examples of applications that use SIMD architecture are image-enhancement processing and radar processing for air-traffic control.
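In code, the SIMD idea corresponds to applying one operation across a whole array of data. Plain Python stands in for the hardware here, so the comprehension below runs sequentially; on a real SIMD machine the 100 multiplications would happen simultaneously.

```python
# The SIMD example above as an array operation: one instruction
# ("multiply by 3") applied to 100 data values.

data = list(range(100))         # 100 numbers, one per processing element
result = [x * 3 for x in data]  # one operation over the whole array

print(result[:5])  # [0, 3, 6, 9, 12]
```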

A Multiple Instruction stream, Multiple Data stream (MIMD) processor has separate instructions for each stream of data. This architecture is the most flexible, but it is also the most difficult to program because it requires additional instructions to coordinate the actions of the processors. It also can simulate any of the other architectures but with less efficiency. MIMD designs are used on complex simulations, such as projecting city growth and development patterns, and in some artificial-intelligence programs.


Parallel Communication

Another factor in parallel-processing architecture is how processors communicate with each other. One approach is to let processors share a single memory and communicate by reading each other's data. This is called shared memory. In this architecture, all the data can be accessed by any processor, but care must be taken to prevent the linked processors from inadvertently overwriting each other's results.

An alternative method is to connect the processors and allow them to send messages to each other. This technique is known as message passing or distributed memory. Data are divided and stored in the memories of different processors. This makes it difficult to share information because the processors are not connected to the same memory, but it is also safer because the results cannot be overwritten.

In shared memory systems, as the number of processors increases, access to the single memory becomes difficult, and a bottleneck forms. To address this limitation, and the problem of isolated memory in distributed memory systems, distributed memory processors also can be constructed with circuitry that allows different processors to access each other's memory. This hybrid approach, known as distributed shared memory, reduces both the memory bottleneck of shared memory systems and the data-isolation problem of distributed memory systems.
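
The two communication styles can be contrasted in a small Python sketch using threads. The worker functions, data chunks, and result structures are illustrative; the point is that shared-memory workers write into one common structure guarded against overwrites, while message-passing workers keep their data private and communicate only by sending messages.

```python
import threading
import queue

# Shared-memory style: workers write into one shared structure,
# using a lock so they do not inadvertently overwrite each other.
shared_results = {}
lock = threading.Lock()

def shared_worker(worker_id, data):
    total = sum(data)
    with lock:                      # serialize access to the shared memory
        shared_results[worker_id] = total

# Message-passing style: each worker keeps its data private and
# communicates only by placing a message on a queue.
messages = queue.Queue()

def message_worker(worker_id, data):
    messages.put((worker_id, sum(data)))

chunks = [[1, 2], [3, 4], [5, 6]]
threads = [threading.Thread(target=shared_worker, args=(i, c))
           for i, c in enumerate(chunks)]
threads += [threading.Thread(target=message_worker, args=(i, c))
            for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_results)                         # results written into shared memory
received = dict(messages.get() for _ in chunks)
print(received)                               # results received as messages
```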



Parallel processing is more costly than serial computing because multiple processors are expensive and the speedup in computation is rarely proportional to the number of additional processors.

MIMD processors require complex programming to coordinate their actions. Finding MIMD programming errors is also complicated by time-dependent interactions between processors. For example, one processor might request a result from a second processor's memory before the second processor has produced the result and stored it there. The resulting error, a form of race condition, is difficult to identify.

Programs written for one parallel architecture seldom run efficiently on another. As a result, to use one program on two different parallel processors often involves a costly and time-consuming rewrite of that program.



When a parallel processor performs more than 1000 operations at a time, it is said to be massively parallel. In most cases, problems that are suited to massive parallelism involve large amounts of data, such as in weather forecasting, simulating the properties of hypothetical pharmaceuticals, and code breaking. Massively parallel processors today are large and expensive, but technology soon will permit a SIMD processor with 1024 processing elements to reside on a single integrated circuit.

Researchers are finding that the apparently serial portions of some problems can also be processed in parallel, but on a different architecture. For example, 90 percent of a problem may be suited to SIMD, leaving 10 percent that appears to be serial but merely requires MIMD processing. To accommodate this finding, two approaches are being explored: heterogeneous parallelism combines multiple parallel architectures, and configurable computers can change their architecture to suit each part of the problem.

In 1996 International Business Machines Corporation (IBM) challenged Garry Kasparov, the reigning world chess champion, to a chess match with a supercomputer called Deep Blue. The computer utilized 256 microprocessors in a parallel architecture to compute more than 100 million chess positions per second. Kasparov won the match with three wins, two draws, and one loss. Deep Blue was the first computer to win a game against a world champion with regulation time controls. Some experts predict these types of parallel processing machines will eventually surpass human chess playing ability, and some speculate that massive calculating power will one day substitute for intelligence. Deep Blue serves as a prototype for future computers that will be required to solve complex problems.

World Wide Web



World Wide Web (WWW), computer-based network of information resources that a user can move through by using links from one document to another. The information on the World Wide Web is spread over computers all over the world. The World Wide Web is often referred to simply as “the Web.”

Internet Topology

The Internet and the Web are each a series of interconnected computer networks. Personal computers or workstations are connected to a Local Area Network (LAN) either by a dial-up connection through a modem and standard phone line, or by being directly wired into the LAN. Other modes of data transmission that allow for connection to a network include T-1 connections and dedicated lines. Bridges link separate networks to each other, while hubs connect multiple computers or network segments within a network. Routers transmit data through networks and determine the best path of transmission.

The Web has become a very popular resource since it first became possible to view images and other multimedia on the Internet, a worldwide network of computers, in 1993. The Web offers a place where companies, institutions, and individuals can display information about their products, research, or their lives. Anyone with access to a computer connected to the Web can view most of that information. A small percentage of information on the Web is only accessible to subscribers or other authorized users. The Web has become a forum for many groups and a marketplace for many companies. Museums, libraries, government agencies, and schools make the Web a valuable learning and research tool by posting data and research. The Web also carries information in a wide spectrum of formats. Users can read text, view pictures, listen to sounds, and even explore interactive virtual environments on the Web.



Like all computer networks, the Web connects two types of computers, clients and servers, using a standard set of rules for communication between the computers. The server computers store the information resources that make up the Web, and Web users use client computers to access the resources. A computer-based network may be a public network—such as the worldwide Internet—or a private network, such as a company’s intranet. The Web is part of the Internet. The Internet also encompasses other methods of linking computers, such as Telnet, File Transfer Protocol, and Gopher, but the Web has quickly become the most widely used part of the Internet. It differs from the other parts of the Internet in the rules that computers use to talk to each other and in the accessibility of information other than text. It is much more difficult to view pictures or other multimedia files with methods other than the Web.

Enabling client computers to display Web pages with pictures and other media was made possible by the introduction of a type of software called a browser. Each Web document contains coded information about what is on the page, how the page should look, and to which other sites the document links. The browser on the client’s computer reads this information and uses it to display the page on the client’s screen. Almost every Web page or Web document includes links, called hyperlinks, to other Web sites. Hyperlinks are a defining feature of the Web—they allow users to travel between Web documents without following a specific order or hierarchy.



When users want to access the Web, they use the Web browser on their client computer to connect to a Web server. Client computers connect to the Web in one of two ways. Client computers with dedicated access to the Web connect directly to the Web through a router (a piece of computer hardware that determines the best way to connect client and server computers) or by being part of a larger network with a direct connection to the Web. Client computers with dial-up access to the Web connect to the Web through a modem, a hardware device that translates information from the computer into signals that can travel over telephone lines. Some modems send signals over cable television lines or special high-capacity telephone lines such as Integrated Services Digital Network (ISDN) or Asymmetric Digital Subscriber Line (ADSL) lines. The client computer and the Web server use a set of rules for passing information back and forth. The Web browser knows another set of rules with which it can open and display information that reaches the client computer.

Web servers hold Web documents and the media associated with them. They can be ordinary personal computers, powerful mainframe computers, or anywhere in the range between the two. Client computers access information from Web servers, and any computer that a person uses to access the Web is a client, so a client could be any type of computer. The set of rules that clients and servers use to talk to each other is called a protocol. The Web, like all parts of the Internet, uses the protocol suite called TCP/IP (Transmission Control Protocol/Internet Protocol). However, each part of the Internet—such as the Web, gopher systems, and File Transfer Protocol (FTP) systems—uses a slightly different system on top of TCP/IP to transfer files between clients and servers.
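
The Web's own protocol, HTTP (Hypertext Transfer Protocol), runs on top of TCP/IP. A minimal sketch of the text a browser sends to a server is shown below; the host name and path are illustrative, and the message is only composed, not actually transmitted.

```python
# Sketch of the raw HTTP request a Web browser (client) composes and
# sends to a Web server over a TCP/IP connection. The host and path
# here are illustrative; no network connection is made.

def build_http_get(host, path):
    """Compose an HTTP/1.0-style GET request as raw protocol text."""
    return (
        f"GET {path} HTTP/1.0\r\n"   # method, resource, protocol version
        f"Host: {host}\r\n"          # which server the request is addressed to
        "\r\n"                       # blank line ends the header section
    )

request = build_http_get("encarta.msn.com", "/")
print(request)
```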

The address of a Web document helps the client computer find and connect to the server that holds the page. The address of a Web page is called a Uniform Resource Locator (URL). A URL is a compound code that tells the client’s browser three things: the rules the client should use to reach the site, the Internet address that uniquely designates the server, and the location within the server’s file system for a given item. An example of a URL is http://encarta.msn.com/. The first part of the URL, http://, shows that the site is on the World Wide Web. Most browsers are also capable of retrieving files with formats from other parts of the Internet, such as gopher and FTP. Other Internet formats use different codes in the first part of their URLs—for example, gopher uses gopher:// and FTP uses ftp://. The next part of the URL, encarta.msn.com, gives the name, or unique Internet address, of the server on which the Web site is stored. Some URLs specify certain directories or files, such as http://encarta.msn.com/explore/default.asp—explore is the name of the directory in which the file default.asp is found.
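
The three parts of a URL can be separated programmatically. The sketch below uses Python's standard library to split the example URL from the text into its scheme, server address, and file location.

```python
from urllib.parse import urlparse

# Splitting the example URL from the text into the three parts a
# browser uses: the protocol scheme (the rules for reaching the site),
# the server's Internet address, and the location within the server's
# file system.
url = "http://encarta.msn.com/explore/default.asp"
parts = urlparse(url)

print(parts.scheme)   # "http"                 -> rules used to reach the site
print(parts.netloc)   # "encarta.msn.com"      -> the server's address
print(parts.path)     # "/explore/default.asp" -> directory and file on the server
```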

The Web holds information in many forms, including text, graphical images, and digital media files of any type, including video, audio, and virtual reality files. Some elements of Web pages are actually small software programs in their own right. These objects, called applets (from “small application,” another name for a computer program), follow a set of instructions written by the person who programmed the applet. Applets allow users to play games on the Web, search databases, perform virtual scientific experiments, and carry out many other actions.

The codes that tell the browser on the client computer how to display a Web document correspond to a set of rules called Hypertext Markup Language (HTML). Each Web document is written as plain text, and the instructions that tell the client computer how to present the document are contained within the document itself, encoded using special symbols called HTML tags. The browser knows how to interpret the HTML tags, so the document appears on the user’s screen as the document designer intended. In addition to HTML, some types of objects on the Web use their own coding. Applets, for example, are small computer programs that are written in computer programming languages such as Visual Basic and Java.
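
The way a browser reads HTML tags can be sketched with Python's standard-library HTML parser. The page below is a hypothetical fragment; the parser records which tags appear and collects the visible text, much as a browser does before rendering.

```python
from html.parser import HTMLParser

# Sketch of how a browser interprets HTML tags: this parser walks a
# hypothetical page, recording each tag it encounters and collecting
# the plain text between tags.

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []    # tags seen, e.g. "a" marks a hyperlink
        self.text = []    # visible text between the tags

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

page = ('<html><body><h1>Welcome</h1>'
        '<a href="http://example.com">A hyperlink</a></body></html>')
parser = TagCollector()
parser.feed(page)
print(parser.tags)   # ['html', 'body', 'h1', 'a']
print(parser.text)   # ['Welcome', 'A hyperlink']
```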

Client-server communication, URLs, and HTML allow Web sites to incorporate hyperlinks, which users can use to navigate through the Web. Hyperlinks are often phrases in the text of the Web document that link to another Web document by providing the document’s URL when the user clicks their mouse on the phrase. The client’s browser usually differentiates between hyperlinks and ordinary text by making the hyperlinks a different color or by underlining the hyperlinks. Hyperlinks allow users to jump between diverse pages on the Web in no particular order. This method of accessing information is called associative access, and scientists believe it bears a striking resemblance to the way the human brain accesses stored information. Hyperlinks make referencing information on the Web faster and easier than using most traditional printed documents.



Even though the World Wide Web is only a part of the Internet, surveys have shown that over 75 percent of Internet use is on the Web. That percentage is likely to grow in the future.

One of the most remarkable aspects of the World Wide Web is its users. They are a cross section of society. Users include students who need to find materials for a term paper, physicians who need to find out about the latest medical research, and college applicants investigating campuses or even filling out application and financial aid forms online. Other users include investors who can look up the trading history of a company’s stock and evaluate data on various commodities and mutual funds. All of this information is readily available on the Web. Users can often find graphs of a company’s financial information that show the information in several different ways.

Travelers investigating a possible trip can take virtual tours, check on airline schedules and fares, and even book a flight on the Web. Many destinations—including parks, cities, resorts, and hotels—have their own Web sites with guides and local maps. Major delivery companies also have Web sites from which customers can track their shipments, finding out where their packages are or when they were delivered.

Government agencies have Web sites where they post regulations, procedures, newsletters, and tax forms. Many elected officials—including almost all members of the United States Congress—have Web sites, where they express their views, list their achievements, and invite input from the voters. The Web also contains directories of e-mail and postal mail addresses and phone numbers.

Many merchants and publishers now do business on the Web. Web users can shop at Web sites of major bookstores, clothing sellers, and other retailers. Many major newspapers have special Web editions that are issued even more frequently than daily. The major broadcast networks use the Web to provide supplementary materials for radio and television shows, especially documentaries. Electronic journals in almost every scholarly field are now on the Web. Most museums now offer the Web user a virtual tour of their exhibits and holdings. These businesses and institutions usually use their Web sites to complement the non-Web parts of the operations. Some receive extra revenues from selling advertising space on their Web sites. Some businesses, especially publishers, provide limited information to ordinary Web users, but offer much more to users who buy a subscription.



The World Wide Web was developed by British physicist and computer scientist Timothy Berners-Lee as a project within the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Berners-Lee first began working with hypertext in the early 1980s. He proposed the Web project in 1989, and his implementation became operational at CERN in 1990; it quickly spread to universities in the rest of the world through the high-energy physics community of scholars. Groups at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign also researched and developed Web technology. They developed the first major browser, named Mosaic, in 1993. Mosaic was the first browser to come in several different versions, each of which was designed to run on a different operating system. Operating systems are the basic software that control computers.

The architecture of the Web is amazingly straightforward. For the user, the Web is attractive to use because it is built upon a graphical user interface (GUI), a method of displaying information and controls with pictures. The Web also works on diverse types of computing equipment because it is made up of a small set of programs. This small set makes it relatively simple for programmers to write software that can translate information on the Web into a form that corresponds to a particular operating system. The Web’s methods of storing information associatively, retrieving documents with hypertext links, and naming Web sites with URLs make it a smooth extension of the rest of the Internet. This allows easy access to information between different parts of the Internet.



People continue to extend and improve on World Wide Web technology. Computer scientists predict that users will likely see at least five new ways in which the Web will be extended: new ways of searching the Web, new ways of restricting access to intellectual property, more integration of entire databases into the Web, more access to software libraries, and more and more electronic commerce.

HTML will probably continue to go through new forms with extended capabilities for formatting Web pages. Other complementary programming and coding systems such as Visual Basic scripting, Virtual Reality Modeling Language (VRML), ActiveX programming, and JavaScript will probably continue to gain larger roles in the Web. This will result in more powerful Web pages, capable of bringing information to users in more engaging and exciting ways.

On the hardware side, faster connections to the Web will allow users to download more information, making it practical to include more information and more complicated multimedia elements on each Web page. Software, telephone, and cable companies are planning partnerships that will allow information from the Web to travel into homes along improved telephone lines and coaxial cable such as that used for cable television. New kinds of computers, specifically designed for use with the Web, may become increasingly popular. These computers are less expensive than ordinary computers because they have fewer features, retaining only those required by the Web. Some computers even use ordinary television sets, instead of special computer monitors, to display content from the Web.

Neural Network



Neural Network, in computer science, highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain. Neural networks address problems that are often difficult for traditional computers to solve, such as speech and pattern recognition. They also provide some insight into the way the human brain works. One of the most significant strengths of neural networks is their ability to learn from a limited set of examples.

Neural networks were initially studied by computer and cognitive scientists in the late 1950s and early 1960s in an attempt to model sensory perception in biological organisms. Neural networks have been applied to many problems since they were first introduced, including pattern recognition, handwritten character recognition, speech recognition, financial and economic modeling, and next-generation computing models.

Artificial Neural Network

The neural networks that are increasingly being used in computing mimic those found in the nervous systems of vertebrates. The main characteristic of a biological neural network, top, is that each neuron, or nerve cell, receives signals from many other neurons through its branching dendrites. The neuron produces an output signal that depends on the values of all the input signals and passes this output on to many other neurons along a branching fiber called an axon. In an artificial neural network, bottom, input signals, such as signals from a television camera’s image, fall on a layer of input nodes, or computing units. Each of these nodes is linked to several other “hidden” nodes between the input and output nodes of the network. There may be several layers of hidden nodes, though for simplicity only one is shown here. Each hidden node performs a calculation on the signals reaching it and sends a corresponding output signal to other nodes. The final output is a highly processed version of the input.



Neural networks fall into two categories: artificial neural networks and biological neural networks. Artificial neural networks are modeled on the structure and functioning of biological neural networks. The most familiar biological neural network is the human brain. The human brain is composed of approximately 100 billion nerve cells called neurons that are massively interconnected. Typical neurons in the human brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections. The extensive number of neurons and their high degree of interconnectedness are part of the reason that the brains of living creatures are capable of making a vast number of calculations in a short amount of time.



Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure is immensely complex. Neurons have three main parts: a central cell body, called the soma, and two different types of branched, treelike structures that extend from the soma, called dendrites and axons. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.

Artificial neurons, like their biological counterparts, have simple structures and are designed to mimic the function of biological neurons. The main body of an artificial neuron is called a node or unit. Artificial neurons may be physically connected to one another by wires that mimic the connections between biological neurons, if, for instance, the neurons are simple integrated circuits. However, neural networks are usually simulated on traditional computers, in which case the connections between processing nodes are not physical but are instead virtual.

Artificial neurons may be either discrete or continuous. Discrete neurons send an output signal of 1 if the sum of received signals is above a certain critical value called a threshold value; otherwise they send an output signal of 0. Continuous neurons are not restricted to sending output values of only 1s and 0s; instead they send an output value between 0 and 1 depending on the total amount of input that they receive—the stronger the received signal, the stronger the signal sent out from the node, and vice versa. Continuous neurons are the most commonly used in actual artificial neural networks.
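
The two neuron types can be sketched directly in Python. The threshold value and input signals below are illustrative; the continuous neuron uses the logistic (sigmoid) function, one common choice for mapping any summed input to a value between 0 and 1.

```python
import math

# Sketch of the two artificial neuron types described above.
# The threshold and the input values are illustrative choices.

def discrete_neuron(inputs, threshold=0.0):
    """Fire (output 1) only if the summed input exceeds the threshold."""
    return 1 if sum(inputs) > threshold else 0

def continuous_neuron(inputs):
    """Output a value between 0 and 1 that grows with the summed input."""
    return 1.0 / (1.0 + math.exp(-sum(inputs)))   # logistic (sigmoid) function

print(discrete_neuron([0.5, 0.7, -0.3]))    # 1: sum 0.9 is above the threshold
print(discrete_neuron([-1.0, 0.2]))         # 0: sum is below the threshold
print(round(continuous_neuron([0.9]), 3))   # 0.711: a graded value between 0 and 1
```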


Artificial Neural Network Architecture

The architecture of a neural network is the specific arrangement and connections of the neurons that make up the network. One of the most common neural network architectures has three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, the final layer of the network. Sophisticated neural networks may have several hidden layers, feedback loops, and time-delay elements, which are designed to make the network as efficient as possible in discriminating relevant features or patterns from the input layer.



Neural networks differ greatly from traditional computers (for example, personal computers, workstations, and mainframes) in both form and function. While neural networks use a large number of simple processors to do their calculations, traditional computers generally use one or a few extremely complex processing units. Neural networks also do not have a centrally located memory, nor are they programmed with a sequence of instructions, as are all traditional computers.

 The information processing of a neural network is distributed throughout the network in the form of its processors and connections, while the memory is distributed in the form of the weights given to the various connections. The distribution of both processing capability and memory means that damage to part of the network does not necessarily result in processing dysfunction or information loss. This ability of neural networks to withstand limited damage and continue to function well is one of their greatest strengths.

Neural networks also differ greatly from traditional computers in the way they are programmed. Rather than using programs that are written as a series of instructions, as do all traditional computers, neural networks are “taught” with a limited set of training examples. The network is then able to “learn” from the initial examples to respond to information sets that it has never encountered before. The resulting values of the connection weights can be thought of as a “program.”

Neural networks are usually simulated on traditional computers. The advantage of this approach is that computers can easily be reprogrammed to change the architecture or learning rule of the simulated neural network. Since the computation in a neural network is massively parallel, the processing speed of a simulated neural network can be increased by using massively parallel computers—computers that link together hundreds or thousands of CPUs in parallel to achieve very high processing speeds.



In all biological neural networks the connections between particular dendrites and axons may be reinforced or discouraged. For example, connections may become reinforced as more signals are sent down them, and may be discouraged when signals are infrequently sent down them. The reinforcement of certain neural pathways, or dendrite-axon connections, results in a higher likelihood that a signal will be transmitted along that path, further reinforcing the pathway. Paths between neurons that are rarely used slowly atrophy, or decay, making it less likely that signals will be transmitted along them.

The role of connection strengths between neurons in the brain is crucial; scientists believe they determine, to a great extent, the way in which the brain processes the information it takes in through the senses. Neuroscientists studying the structure and function of the brain believe that various patterns of neurons firing can be associated with specific memories. In this theory, the strength of the connections between the relevant neurons determines the strength of the memory. Important information that needs to be remembered may cause the brain to constantly reinforce the pathways between the neurons that form the memory, while relatively unimportant information will not receive the same degree of reinforcement.


Connection Weights

To mimic the way in which biological neurons reinforce certain axon-dendrite pathways, the connections between artificial neurons in a neural network are given adjustable connection weights, or measures of importance. When signals are received and processed by a node, they are multiplied by a weight, added up, and then transformed by a nonlinear function. The effect of the nonlinear function is to cause the sum of the input signals to approach some value, usually +1 or 0. If the signals entering the node add up to a positive number, the node sends an output signal that approaches +1 out along all of its connections, while if the signals add up to a negative value, the node sends a signal that approaches 0. This is similar to a simplified model of how a biological neuron functions—the larger the input signal, the larger the output signal.
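
The node computation just described can be sketched as follows. The signals and connection weights are illustrative values; the logistic function plays the role of the nonlinear transformation, driving the output toward +1 for strongly positive weighted sums and toward 0 for strongly negative ones.

```python
import math

# Sketch of a single node's computation: multiply each incoming signal
# by its connection weight, sum the products, and pass the sum through
# a nonlinear "squashing" function. Weights and signals are illustrative.

def node_output(signals, weights):
    weighted_sum = sum(s * w for s, w in zip(signals, weights))
    return 1.0 / (1.0 + math.exp(-weighted_sum))   # logistic squashing function

strong = node_output([1.0, 1.0, 1.0], [2.0, 1.5, 1.0])    # large positive sum
weak = node_output([1.0, 1.0, 1.0], [-2.0, -1.5, -1.0])   # large negative sum
print(round(strong, 3))   # close to 1
print(round(weak, 3))     # close to 0
```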


Training Sets

Computer scientists teach neural networks by presenting them with desired input-output training sets. The input-output training sets are related patterns of data. For instance, a sample training set might consist of ten different photographs for each of ten different faces. The photographs would then be digitally entered into the input layer of the network. The desired output would be for the network to signal a different neuron in the output layer for each face. Beginning with equal, or random, connection weights between the neurons, the photographs are digitally entered into the input layer of the neural network and an output signal is computed and compared to the target output. Small adjustments are then made to the connection weights to reduce the difference between the actual output and the target output. The input-output set is again presented to the network and further adjustments are made to the connection weights, because the first few times that the input is entered, the network will usually choose the incorrect output neuron. After repeating the weight-adjustment process many times for all input-output patterns in the training set, the network learns to respond in the desired manner.

A neural network is said to have learned when it can correctly perform the tasks for which it has been trained. Neural networks are able to extract the important features and patterns of a class of training examples and generalize from these to correctly process new input data that they have not encountered before. For a neural network trained to recognize a series of photographs, generalization would be demonstrated if a new photograph presented to the network resulted in the correct output neuron being signaled.

A number of different neural network learning rules, or algorithms, exist and use various techniques to process information. Common arrangements use some sort of system to adjust the connection weights between the neurons automatically. The most widely used scheme for adjusting the connection weights is called error back-propagation, developed independently by American computer scientists Paul Werbos (in 1974), David Parker (in 1984/1985), and David Rumelhart, Ronald Williams, and others (in 1985). The back-propagation learning scheme compares a neural network’s calculated output to a target output and calculates an error adjustment for each of the nodes in the network. The neural network adjusts the connection weights according to the error values assigned to each node, beginning with the connections between the last hidden layer and the output layer. After the network has made adjustments to this set of connections, it calculates error values for the next previous layer and makes adjustments. The back-propagation algorithm continues in this way, adjusting all of the connection weights between the hidden layers until it reaches the input layer. At this point it is ready to calculate another output.
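
The back-propagation cycle described above can be sketched for a tiny network with two inputs, two hidden nodes, and one output node. The training task (the logical AND function), the learning rate, and the epoch count are all illustrative choices, not part of the original schemes; the structure of the code follows the sequence in the text: compute an output, compute an error adjustment at the output node, propagate it back to the hidden layer, and adjust each layer's connection weights.

```python
import math
import random

# Toy sketch of error back-propagation: a 2-input, 2-hidden-node,
# 1-output network trained on the logical AND function.
random.seed(0)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

# Connection weights, initialized to small random values.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # 2 inputs + bias
w_output = [random.uniform(-1, 1) for _ in range(3)]                      # 2 hidden + bias

def forward(x):
    """Compute hidden-layer and output-layer signals for input x."""
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sig(w_output[0] * h[0] + w_output[1] * h[1] + w_output[2])
    return h, o

def train_step(x, target, lr=0.5):
    h, o = forward(x)
    # Error adjustment at the output node.
    delta_o = (target - o) * o * (1 - o)
    # Propagate the error back to each hidden node.
    delta_h = [delta_o * w_output[i] * h[i] * (1 - h[i]) for i in range(2)]
    # Adjust the last layer of connections first, then the previous layer.
    for i in range(2):
        w_output[i] += lr * delta_o * h[i]
    w_output[2] += lr * delta_o
    for i in range(2):
        w_hidden[i][0] += lr * delta_h[i] * x[0]
        w_hidden[i][1] += lr * delta_h[i] * x[1]
        w_hidden[i][2] += lr * delta_h[i]

patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
for _ in range(5000):
    for x, t in patterns:
        train_step(x, t)

for x, t in patterns:
    print(x, round(forward(x)[1]))   # learned outputs match the AND targets
```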



Neural networks have been applied to many tasks that are easy for humans to accomplish, but difficult for traditional computers. Because neural networks mimic the brain, they have shown much promise in so-called sensory processing tasks such as speech recognition, pattern recognition, and the transcription of hand-written text. In some settings, neural networks can perform as well as humans. Neural-network-based backgammon software, for example, rivals the best human players.

While traditional computers still outperform neural networks in most situations, neural networks are superior in recognizing patterns in extremely large data sets. Furthermore, because neural networks have the ability to learn from a set of examples and generalize this knowledge to new situations, they are excellent for work requiring adaptive control systems. For this reason, the United States National Aeronautics and Space Administration (NASA) has extensively studied neural networks to determine whether they might serve to control future robots sent to explore planetary bodies in our solar system. In this application, robots could be sent to other planets, such as Mars, to carry out significant and detailed exploration autonomously.

An important advantage that neural networks have over traditional computer systems is that they can sustain damage and still function properly. This design characteristic of neural networks makes them very attractive candidates for future aircraft control systems, especially in high performance military jets. Another potential use of neural networks for civilian and military use is in pattern recognition software for radar, sonar, and other remote-sensing devices.


Motherboard



Motherboard, in computer science, the main circuit board in a computer. The most important computer chips and other electronic components that give function to a computer are located on the motherboard. The motherboard is a printed circuit board that connects the various elements on it through the use of traces, or electrical pathways. The motherboard is indispensable to the computer and provides the main computing capability.

Personal computers normally have one central processing unit (CPU), or microprocessor, which is located with other chips on the motherboard. The manufacturer and model of the CPU chip carried by the motherboard is a key criterion for designating the speed and other capabilities of the computer. The CPU in many personal computers is not permanently attached to the motherboard, but is instead plugged into a socket so that it may be removed and upgraded.

Motherboards also contain important computing components, such as the basic input/output system (BIOS), which contains the basic set of instructions required to control the computer when it is first turned on; different types of memory chips such as random access memory (RAM) and cache memory; mouse, keyboard, and monitor control circuitry; and logic chips that control various parts of the computer’s function. Having as many of the key components of the computer as possible on the motherboard improves the speed and operation of the computer.

Users may expand their computer’s capability by inserting an expansion board into special expansion slots on the motherboard. Expansion slots are standard on nearly all personal computers, and the boards inserted into them can provide faster speed, better graphics capabilities, communication with other computers, and audio and video capabilities. Expansion slots come in either half or full size and can transfer 8 or 16 bits (the smallest units of information that a computer can process) at a time, respectively.

The pathways that carry data on the motherboard are called buses. The amount of data that can be transmitted at one time between a device, such as a printer or monitor, and the CPU affects the speed at which programs run. For this reason, buses are designed to carry as much data as possible. To work properly, expansion boards must conform to bus and interface standards such as Integrated Drive Electronics (IDE), Extended Industry Standard Architecture (EISA), or the Small Computer System Interface (SCSI).

Central Processing Unit



Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. At a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.




CPU Function

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers, where they may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing them in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.


Branching Instructions

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
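
The fetch-decode-execute cycle and the branch instructions described above can be sketched with a toy simulator. The instruction set below is hypothetical, invented purely for illustration; a real CPU decodes binary opcodes rather than Python tuples, but the roles of the program counter, the ALU, and the flag are the same.

```python
# Toy CPU: a program counter steps through instructions, an ALU-style
# subtraction sets a flag, and a conditional branch tests that flag.
def run(program, registers):
    pc = 0                       # program counter: index of the next instruction
    negative_flag = False        # flag set by the most recent arithmetic result
    while pc < len(program):
        op, *args = program[pc]  # fetch and decode
        pc += 1
        if op == "LOAD":         # LOAD reg, value
            registers[args[0]] = args[1]
        elif op == "SUB":        # SUB dst, src: dst = dst - src (ALU operation)
            registers[args[0]] -= registers[args[1]]
            negative_flag = registers[args[0]] < 0
        elif op == "JNEG":       # conditional branch: taken only if flag is set
            if negative_flag:
                pc = args[0]
        elif op == "JMP":        # unconditional branch
            pc = args[0]
    return registers

# Subtract 3 from r0 repeatedly, leaving the loop once the result goes negative.
regs = run([
    ("LOAD", "r0", 10),
    ("LOAD", "r1", 3),
    ("SUB", "r0", "r1"),  # instruction 2: the top of the loop
    ("JNEG", 5),          # exit once r0 < 0
    ("JMP", 2),           # otherwise jump back to the SUB
], {})
print(regs["r0"])         # -2
```
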


Clock Pulses

The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock rates are measured in hertz, or number of pulses per second. For instance, a 100-megahertz (100-MHz) processor has 100 million clock pulses passing through it per second. The clock rate is one measure of the speed of a processor.
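
The arithmetic behind these figures is simple: the clock period is the reciprocal of the clock rate.

```python
# A 100-MHz clock delivers 100 million pulses per second, so each clock
# period lasts 1/100,000,000 of a second, i.e. 10 nanoseconds.
clock_rate_hz = 100_000_000        # 100 MHz
period_ns = 1 / clock_rate_hz * 1e9
print(period_ns)                   # 10.0 nanoseconds between pulses
```
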


Fixed-Point and Floating-Point Numbers

Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel’s Pentium chip.
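
The trade-off between the two number formats can be illustrated in Python. The fixed-point scheme below (amounts stored as whole cents) is just one common convention chosen for illustration, not a description of any particular CPU.

```python
# Fixed-point: keep exactly two digits after the decimal point by storing
# values as integer cents, so every operation is fast integer arithmetic.
price_cents = 1999            # represents 19.99
tax_cents = 160               # represents 1.60
total_cents = price_cents + tax_cents
print(total_cents / 100)      # 21.59

# Floating-point: scientific notation (a decimal number times a power of ten)
# covers an enormous range, at the cost of more complex arithmetic.
avogadro = 6.022e23           # 6.022 x 10^23
electron_mass_kg = 9.109e-31  # 9.109 x 10^-31
print(avogadro * electron_mass_kg)
```
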




Early Computers

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was completed in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.


The Transistor

A solution to the problems posed by vacuum tubes came in 1947, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.


The Integrated Circuit

Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the 3 million to 5 million transistors per chip common on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 16-bit microprocessors are still common, today’s microprocessors are becoming increasingly sophisticated, with many 32-bit and even 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 500 MHz, or 500 million clock pulses per second.



The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.

Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.

Computer Memory



Computer Memory, device that stores data for use by a computer. Most memory devices represent data with the binary number system. In the binary number system, numbers are represented by sequences of the digits 0 and 1. In a computer, these numbers correspond to the on and off states of the computer’s electronic circuitry. Each binary digit is called a bit, which is the basic unit of memory in a computer. A group of eight bits is called a byte, and can represent decimal numbers ranging from 0 to 255. When each of these numbers is assigned to a letter, digit, or symbol in what is known as a character code, a byte can also represent a single character.
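
The byte-to-character correspondence described above can be seen directly in Python, whose ord and chr functions expose one widely used character code (ASCII, for values below 128).

```python
value = ord("A")              # the character code assigns 65 to 'A'
print(value)                  # 65
print(format(value, "08b"))   # the same value as eight bits: 01000001
print(chr(value))             # and back from number to character: A
```
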

Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes. The prefixes kilo-, mega-, and giga- are taken from the metric system and mean 1 thousand, 1 million, and 1 billion, respectively. Thus, a kilobyte is approximately 1000 (1 thousand) bytes, a megabyte is approximately 1,000,000 (1 million) bytes, and a gigabyte is approximately 1,000,000,000 (1 billion) bytes. The actual numerical values of these units are slightly different because they are derived from the binary number system. The precise number of bytes in a kilobyte is 2 raised to the 10th power, or 1,024. The precise number of bytes in a megabyte is 2 raised to the 20th power, and the precise number of bytes in a gigabyte is 2 raised to the 30th power.
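
The exact values can be computed directly:

```python
# Memory units are powers of two, which is why they are only approximately
# equal to their metric namesakes.
kilobyte = 2 ** 10   # 1,024 bytes
megabyte = 2 ** 20   # 1,048,576 bytes
gigabyte = 2 ** 30   # 1,073,741,824 bytes
print(kilobyte, megabyte, gigabyte)
```
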



Computer memory may be divided into internal memory and external memory. Internal memory is memory that can be accessed directly by the central processing unit (CPU)—the main electronic circuitry within a computer that processes information. Internal memory is contained on computer chips and uses electronic circuits to store information. External memory is memory that is accessed by the CPU via slower and more complex input and output operations. External memory uses some form of inexpensive mass-storage media such as magnetic or optical media. See also Information Storage and Retrieval.

Memory can further be distinguished as being random access memory (RAM), read-only memory (ROM), or sequential memory. Information stored in RAM can be accessed in any order, and may be erased or written over depending on the specific media involved. Information stored in ROM may also be random-access, in that it may be accessed in any order, but the information recorded on ROM is permanent and cannot be erased or written over. Sequential memory is a type of memory that must be accessed in a linear order, not randomly.


Internal RAM

Random access memory is the main memory used by the CPU as it processes information. The circuits used to construct this main internal RAM can be classified as either dynamic RAM (DRAM), or static RAM (SRAM). In DRAM, the circuit for a bit consists of one transistor, which acts as a switch, and one capacitor, a device that can store charge. The bit 1 is stored in DRAM by a charged capacitor, while the bit 0 is stored in DRAM as an uncharged capacitor. To store the binary number 1 in a DRAM bit location, the transistor at that location is turned on, meaning that the switch is closed, which allows current to flow into a capacitor and charge it up. The transistor is then turned off, meaning that the switch is opened, which keeps the capacitor charged. To store a 0, charge is drained from the capacitor while the transistor is on, and then the transistor is turned off. To read a value in a DRAM bit location, a detector circuit determines whether charge is present or absent on the relevant capacitor. Because capacitors are imperfect, charge slowly leaks out of them, which results in loss of the stored data. Thus, the computer must periodically read the data out of DRAM and rewrite it by putting more charge on the capacitors, a process known as refreshing memory.
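
The write, read, and refresh behavior described above can be sketched as a toy model of a single DRAM cell. The leak rate and detection threshold below are invented numbers chosen for illustration, not real device parameters.

```python
class ToyDRAMCell:
    """One DRAM bit modeled as a leaky capacitor (illustrative only)."""
    LEAK = 0.1         # fraction of charge lost per time step (made-up value)
    THRESHOLD = 0.5    # the detector reads charge above this as the bit 1

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        # Transistor on: charge the capacitor for a 1, drain it for a 0;
        # the transistor is then switched off to hold the state.
        self.charge = 1.0 if bit else 0.0

    def read(self):
        # Detector circuit: is charge present or absent?
        return 1 if self.charge > self.THRESHOLD else 0

    def tick(self):
        # Charge slowly leaks out of the imperfect capacitor.
        self.charge *= 1 - self.LEAK

    def refresh(self):
        # Read the weakening value and write it back at full strength.
        self.write(self.read())

cell = ToyDRAMCell()
cell.write(1)
for _ in range(5):
    cell.tick()        # charge has decayed, but is still above the threshold
cell.refresh()         # without periodic refreshes, the 1 would eventually fade
print(cell.read())     # 1
```
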

In SRAM, the circuit for a bit consists of multiple transistors that continuously refresh the stored values. The computer can access data in SRAM more quickly than DRAM, but the circuitry in SRAM draws more power. The circuitry for a SRAM bit is also larger, so a SRAM chip holds fewer bits than a DRAM chip of the same size. For this reason, SRAM is used when access speed is more important than large memory capacity or low power consumption.

The time it takes the CPU to read or write a bit to memory is particularly important to computer performance. This time is called access time. Current DRAM access times are between 60 and 80 nanoseconds (billionths of a second). SRAM access times are typically about one-quarter of DRAM’s.

The internal memory of a computer is divided into locations, each of which has a unique numerical address associated with it. In some computers an address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word. Computers also exist in which a word consists of two or eight bytes.
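
The relationship between word addresses and byte addresses is straightforward arithmetic; the sketch below assumes the four-byte words mentioned above.

```python
# With four-byte words, word address n covers byte addresses 4n through 4n + 3.
BYTES_PER_WORD = 4

def word_to_byte_range(word_address):
    start = word_address * BYTES_PER_WORD
    return start, start + BYTES_PER_WORD - 1

print(word_to_byte_range(0))   # (0, 3)
print(word_to_byte_range(5))   # (20, 23)
```
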

When a computer executes a read instruction, part of the instruction specifies which memory address to access. The address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits use the address to select the bits at the specified location in RAM. Their contents are sent back to the CPU over another set of wires called a data bus. Inside the CPU the data pass through circuits called the data path. In some CPUs the data path may directly perform arithmetic operations on data from memory, while in others the data must first go to high-speed memory devices within the CPU called registers.


Internal ROM

Read-only memory is the other type of internal memory. ROM memory is used to store the basic set of instructions, called the basic input-output system (BIOS), that the computer needs to run when it is first turned on. This information is permanently stored on computer chips in the form of hardwired electronic circuits.


External Memory

External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device uses materials and mechanisms similar to those used for audio tape, while optical storage devices use lasers to store and retrieve information from a plastic disc. Magneto-optical memory devices use a combination of optical storage and retrieval technology coupled with a magnetic medium.


Magnetic Media

Magnetic tape is one form of external computer memory, but instead of recording a continuous signal as with analog audio tape, distinct spots are either magnetized or demagnetized on the tape, corresponding to binary 1s and 0s. Computer systems using magnetic tape storage devices employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes, and helical-scan tapes (similar to video tape).

Another form of magnetic memory uses a spinning disk coated with magnetic material. As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the surface of the disk, reading and writing magnetic spots in concentric circles called tracks.

Magnetic disks are classified as either hard or floppy, depending on the flexibility of the material from which they are made. A floppy disk is made of flexible plastic with small pieces of a magnetic material embedded in its surface. The read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of a rigid metal, with the read-write head flying just above its surface on a cushion of air to prevent wear.


Optical Media

Optical external memory uses a laser to scan a spinning reflective disc in which the presence or absence of nonreflective pits indicates 1s or 0s. This is the same technology employed in the audio compact disc (CD). Because a disc’s contents are permanently stored on it when it is manufactured, this format is known as compact disc-read only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written permanently on it by a computer.


Magneto-Optical Media

Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of the disk heating it up slightly. This allows the magnetic write-head to change the physical orientation of small grains of magnetic material (actually tiny crystals) on the surface of the disk. These tiny crystals reflect light differently depending on their orientation. By aligning the crystals in one direction a 0 can be stored, while aligning the crystals in the opposite direction stores a 1. Another, separate, low-power laser is used to read data from the disk in a way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they can be read and written to. They are, however, considerably more expensive than CD-ROMs.



Since the inception of computer memory, the capacity of both internal and external memory devices has grown steadily at a rate that leads to a quadrupling in size every three years. Computer industry analysts expect this rapid rate of growth to continue unimpeded. Computer scientists consider multigigabyte memory chips and terabyte-sized disks real possibilities. Research is also leading to new optical storage technologies with 10 to 100 times the capacity of CD-ROMs produced in the 1990s.

Some computer scientists are concerned that memory chips are approaching a limit in the amount of data they can hold. However, it is expected that transistors can be made at least four times smaller before inherent limits of physics make further reductions difficult. Scientists also expect that the dimensions of memory chips will increase by a factor of four. Current memory chips use only a single layer of circuitry, but researchers are working on ways to stack multiple layers onto one chip. Once all of these approaches are exhausted, RAM memory may reach a limit. Researchers, however, are also exploring more exotic technologies with the potential to provide even more capacity.

Access times for internal memory decreased by a factor of four from 1986 to 1996, while processors became 500 times faster. The result was a growing gap in performance between the processor and its main RAM memory. Future computers will likely have advanced data transfer capabilities that enable the CPU to access more memory faster. While current memory chips contain megabytes of RAM, future chips will likely have gigabytes of RAM.



Early electronic computers in the late 1940s and early 1950s used cathode ray tubes (CRTs), similar to a computer display screen, to store data. The coating on a CRT remains lit for a short time after an electron beam strikes it. Thus, a pattern of dots could be written on the CRT, representing 1s and 0s, and then be read back for a short time before fading. Like DRAM, CRT storage had to be periodically refreshed to retain its contents. A typical CRT held 128 bytes, and the entire memory of such a computer was usually 4 kilobytes.

International Business Machines Corporation (IBM) developed magnetic core memory in the early 1950s. Magnetic core (often just called “core”) memory consisted of tiny rings of magnetic material woven into meshes of thin wires. When the computer sent a current through a pair of wires, the ring at their intersection would become magnetized either clockwise or counterclockwise (corresponding to a 0 or a 1), depending on the direction of the current. Computer manufacturers first used core memory in production computers in the 1960s, at about the same time that they began to replace vacuum tubes with transistors. Magnetic core memory was used through most of the 1960s and into the 1970s.

The next step in the development of computer memory came with the introduction of integrated circuits, which enabled multiple transistors to be placed on one chip. Computer scientists developed the first such memory when they constructed an experimental supercomputer called ILLIAC IV in the late 1960s. Integrated circuit memory quickly displaced core and has been the dominant technology for internal memory ever since.




Internet, computer-based global information system. The Internet is composed of many interconnected computer networks. Each network may link tens, hundreds, or even thousands of computers, enabling them to share information with one another and to share computational resources such as powerful supercomputers and databases of information. The Internet has made it possible for people all over the world to effectively and inexpensively communicate with one another. Unlike traditional broadcasting media, such as radio and television, the Internet does not have a centralized distribution system. Instead, an individual who has Internet access can communicate directly with anyone else on the Internet, make information available to others, find information provided by others, or sell products with a minimum overhead cost.

The Internet has brought new opportunities to government, business, and education. Governments use the Internet for internal communication, distribution of information, and automated tax processing. In addition to offering goods and services online to customers, businesses use the Internet to interact with other businesses. Many individuals use the Internet for shopping, paying bills, and online banking. Educational institutions use the Internet for research and to deliver courses to students at remote sites.

The Internet’s success arises from its flexibility. Instead of restricting component networks to a particular manufacturer or particular type, Internet technology allows interconnection of any kind of computer network. No network is too large or too small, too fast or too slow to be interconnected. Thus, the Internet includes inexpensive networks that can only connect a few computers within a single room as well as expensive networks that can span a continent and connect thousands of computers. See Local Area Network.

Internet service providers (ISPs) provide Internet access to customers for a monthly fee. A customer who subscribes to an ISP’s service uses the ISP’s network to access the Internet. Because ISPs offer their services to the general public, the networks they operate are known as public access networks. In the United States, as in many countries, ISPs are private companies; in countries where telephone service is a government-regulated monopoly, the government often controls ISPs.

An organization that has many computers usually owns and operates a private network, called an intranet, that connects all the computers within the organization. To provide Internet service, the organization connects its intranet to the Internet. Unlike public access networks, intranets are restricted to provide security. Only authorized computers at the organization can connect to the intranet, and the organization restricts communication between the intranet and the global Internet. The restrictions allow computers inside the organization to exchange information but keep the information confidential and protected from outsiders.

The Internet has grown tremendously since its inception, doubling in size every 9 to 14 months. In 1981 only 213 computers were connected to the Internet. By 2000 the number had grown to more than 100 million. The current number of people who use the Internet can only be estimated. One survey found that there were 61 million Internet users worldwide at the end of 1996, 148 million at the end of 1998, and 407 million by the end of 2000. Some analysts estimate that the number of users will double again by the end of 2002.



From its inception in the 1970s until the late 1980s the Internet was a U.S. government-funded communication and research tool restricted almost exclusively to academic and military uses. As government restrictions were lifted in the early 1990s, the Internet became commercial. In 1995 the World Wide Web (WWW) replaced file transfer as the application used for most Internet traffic. The difference between the Internet and the Web is similar to the distinction between a highway system and a package delivery service that uses the highways to move cargo from one city to another: The Internet is the highway system over which Web traffic and traffic from other applications move. The Web consists of programs running on many computers that allow a user to find and display multimedia documents (documents that contain a combination of text, photographs, graphics, audio, and video). Many analysts attribute the explosion in use and popularity of the Internet to the visual nature of Web documents. By the end of 2000, Web traffic dominated the Internet—more than 80 percent of all traffic on the Internet came from the Web.

Companies, individuals, and institutions use the Internet in many ways. Companies use the Internet for electronic commerce, also called e-commerce, including advertising, selling, buying, distributing products, and providing customer service. In addition, companies use the Internet for business-to-business transactions, such as exchanging financial information and accessing complex databases. Businesses and institutions use the Internet for voice and video conferencing and other forms of communication that enable people to telecommute (work away from the office using a computer). The use of electronic mail (e-mail) speeds communication between companies, among coworkers, and among other individuals. Media and entertainment companies use the Internet for online news and weather services and to broadcast audio and video, including live radio and television programs. Online chat allows people to carry on discussions using written text. Scientists and scholars use the Internet to communicate with colleagues, perform research, distribute lecture notes and course materials to students, and publish papers and articles. Individuals use the Internet for communication, entertainment, finding information, and buying and selling goods and services.




Internet Access

The term Internet access refers to the communication between a residence or a business and an ISP that connects to the Internet. Access falls into two broad categories: dedicated and dial-up. With dedicated access, a subscriber’s computer remains directly connected to the Internet at all times by a permanent, physical connection. Most large businesses have high-capacity dedicated connections; small businesses or individuals who desire dedicated access choose technologies such as digital subscriber line (DSL) or cable modems, which both use existing wiring to lower cost. A DSL sends data across the same wires that telephone service uses, and cable modems use the same wiring that cable television uses. In each case, the electronic devices that are used to send data over the wires employ separate frequencies or channels that do not interfere with other signals on the wires. Thus, a DSL Internet connection can send data over a pair of wires at the same time the wires are being used for a telephone call, and cable modems can send data over a cable at the same time the cable is being used to receive television signals. The user usually pays a fixed monthly fee for a dedicated connection. In exchange, the company providing the connection agrees to relay data between the user’s computer and the Internet.

Dial-up is the least expensive access technology, but it is also the least convenient. To use dial-up access, a subscriber must have a telephone modem, a device that connects a computer to the telephone system and is capable of converting data into sounds and sounds back into data. The user’s ISP provides software that controls the modem. To access the Internet, the user opens the software application, which causes the dial-up modem to place a telephone call to the ISP. A modem at the ISP answers the call, and the two modems use audible tones to send data in both directions. When one of the modems is given data to send, the modem converts the data from the digital values used by computers—numbers stored as a sequence of 1s and 0s—into tones. The receiving side converts the tones back into digital values. Unlike dedicated access technologies, a dial-up modem does not use separate frequencies, so the telephone line cannot be used for regular telephone calls at the same time a dial-up modem is sending data.
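
The tone scheme can be sketched as frequency-shift keying, in which each bit value is assigned its own audio frequency. The two frequencies below are those used by the historical Bell 103 standard for the originating modem; all of the framing and signal processing that real modems perform is omitted.

```python
# Frequency-shift keying: one tone frequency per bit value.
FREQ_0 = 1070  # hertz, representing a 0 bit
FREQ_1 = 1270  # hertz, representing a 1 bit

def modulate(bits):
    # Digital values in, tones (represented here by their frequencies) out.
    return [FREQ_1 if bit else FREQ_0 for bit in bits]

def demodulate(tones):
    # The receiving modem maps each tone back to a digital value.
    return [1 if freq == FREQ_1 else 0 for freq in tones]

tones = modulate([1, 0, 1, 1])
print(tones)                # [1270, 1070, 1270, 1270]
print(demodulate(tones))    # [1, 0, 1, 1]
```
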


How Information Travels Over the Internet

All information is transmitted across the Internet in small units of data called packets. Software on the sending computer divides a large document into many packets for transmission; software on the receiving computer regroups incoming packets into the original document. Similar to a postcard, each packet has two parts: a packet header specifying the computer to which the packet should be delivered, and a packet payload containing the data being sent. The header also specifies how the data in the packet should be combined with the data in other packets by recording which piece of a document is contained in the packet.
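The division and regrouping described above can be sketched in a few lines of Python. The header format here (a sequence number plus a total count) is invented for illustration and is far simpler than a real packet header.

```python
import random

PAYLOAD_SIZE = 4  # tiny payloads keep the example readable

def to_packets(document: bytes) -> list[dict]:
    """Divide a document into packets, each with a header and a payload."""
    chunks = [document[i:i + PAYLOAD_SIZE]
              for i in range(0, len(document), PAYLOAD_SIZE)]
    return [{"seq": n, "total": len(chunks), "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets: list[dict]) -> bytes:
    """Regroup packets into the original document, whatever order they arrived in."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = to_packets(b"Hello, Internet!")
random.shuffle(packets)  # packets may arrive in any order
assert reassemble(packets) == b"Hello, Internet!"
```

The sequence number in each header is what lets the receiving software put the pieces back in order, just as the paragraph describes.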

 A series of rules known as computer communication protocols specify how packet headers are formed and how packets are processed. The set of protocols used for the Internet are named TCP/IP after the two most important protocols in the set: the Transmission Control Protocol and the Internet Protocol. Hardware devices that connect networks in the Internet are called IP routers because they follow the IP protocol when forwarding packets. A router examines the header in each packet that arrives to determine the packet’s destination. The router either delivers the packet to the destination computer across a local network or forwards the packet to another router that is closer to the final destination. Thus, a packet travels from router to router as it passes through the Internet.
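A router’s forwarding decision can be modeled with Python’s standard ipaddress module. The table below is hypothetical; real routers hold many thousands of prefixes learned from routing protocols.

```python
import ipaddress

# Hypothetical forwarding table for one router: each entry maps a
# network prefix to a next hop. Prefixes and hop names are invented.
table = {
    ipaddress.ip_network("10.1.0.0/16"): "deliver on local network",
    ipaddress.ip_network("0.0.0.0/0"): "forward to router B",
}

def next_hop(destination: str) -> str:
    """Examine a packet's destination address and choose where to send it."""
    addr = ipaddress.ip_address(destination)
    # Pick the most specific (longest) matching prefix, as IP routers do.
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]

assert next_hop("10.1.2.3") == "deliver on local network"
assert next_hop("192.0.2.9") == "forward to router B"
```

Every router along the path repeats this same lookup, which is how a packet hops from router to router toward its destination.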

TCP/IP protocols enable the Internet to automatically detect and correct transmission problems. For example, if any network or device malfunctions, protocols detect the failure and automatically find an alternative path for packets to avoid the malfunction. Protocol software also ensures that data arrives complete and intact. If any packets are missing or damaged, protocol software on the receiving computer requests that the source resend them. Only when the data has arrived correctly does the protocol software make it available to the receiving application program, and therefore to the user.
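The receiver’s loss detection can be reduced to a simple check: compare the sequence numbers that arrived against the total promised in the headers, then ask the source to resend the gaps. This sketch models only that check, not the full TCP mechanism.

```python
def missing_packets(expected_total: int, received_seqs: set[int]) -> list[int]:
    """Return the sequence numbers the receiver must ask the source to resend."""
    return [seq for seq in range(expected_total) if seq not in received_seqs]

assert missing_packets(5, {0, 1, 3, 4}) == [2]  # packet 2 was lost in transit
assert missing_packets(3, {0, 1, 2}) == []      # all arrived; nothing to resend
```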


Network Names and Addresses

To be connected to the Internet, a computer must be assigned a unique number, known as its IP (Internet Protocol) address. Each packet sent over the Internet contains the IP address of the computer to which it is being sent. Intermediate routers use the address to determine how to forward the packet. Users almost never need to enter or view IP addresses directly. Instead, to make it easier for users, each computer is also assigned a domain name; protocol software automatically translates domain names into IP addresses. For example, the domain name encarta.msn.com specifies a computer owned by Microsoft (names ending in .com are assigned to computers owned by commercial companies); protocol software translates the name into the corresponding numeric IP address. See also Domain Name System.
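The translation step can be pictured as a lookup in a table mapping names to numbers (in practice, Python’s socket.gethostbyname performs a real DNS query). The table below is invented, and 192.0.2.44 comes from an address range reserved for documentation, not from any real Microsoft computer.

```python
# Toy resolver: a hypothetical name-to-address table standing in for
# the Domain Name System.
HOSTS = {
    "encarta.msn.com": "192.0.2.44",  # illustrative address only
}

def resolve(domain_name: str) -> str:
    """Translate a domain name into the IP address that packets actually carry."""
    return HOSTS[domain_name]

assert resolve("encarta.msn.com") == "192.0.2.44"
```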

Users encounter domain names when they use applications such as the World Wide Web. Each page of information on the Web is assigned a URL (Uniform Resource Locator) that includes the domain name of the computer on which the page is located. For example, a user can enter the URL

http://encarta.msn.com/category/physcience.asp

to specify a page in the domain encarta.msn.com. Other items in the URL give further details about the page. For example, the string http specifies that a browser should use the http protocol, one of many TCP/IP protocols, to fetch the item. The string category/physcience.asp specifies a particular document.
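Python’s standard urllib.parse module splits a URL into exactly the pieces described here: protocol, domain name, and document path.

```python
from urllib.parse import urlparse

parts = urlparse("http://encarta.msn.com/category/physcience.asp")

assert parts.scheme == "http"                    # which protocol to use
assert parts.netloc == "encarta.msn.com"         # the computer's domain name
assert parts.path == "/category/physcience.asp"  # which document to fetch
```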


Client/Server Architecture

Internet applications, such as the Web, are based on the concept of client/server architecture. In a client/server architecture, some application programs act as information providers (servers), while other application programs act as information receivers (clients). The client/server architecture is not one-to-one. That is, a single client can access many different servers, and a single server can be accessed by a number of different clients. Usually, a user runs a client application, such as a Web browser, that contacts one server at a time to obtain information. Because it only needs to access one server at a time, client software can run on almost any computer, including small handheld devices such as personal organizers and cellular telephones (these devices are sometimes called Web appliances). To supply information to others, a computer must run a server application. Although server software can run on any computer, most companies choose large, powerful computers to run server software because the company expects many clients to be in contact with its server at any given time. A faster computer enables the server program to return information with less delay.
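A minimal client/server exchange can be demonstrated on a single machine with Python’s socket module. The request and reply texts below are invented; a real Web server would speak the HTTP protocol.

```python
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Accept one client, read its request, and send back a reply."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"reply to: " + request)

# Server side: listen on a port the operating system picks for us.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
worker = threading.Thread(target=serve_once, args=(server,))
worker.start()

# Client side: contact the server and obtain information from it.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"GET page")
reply = client.recv(1024)
client.close()
worker.join()
server.close()

assert reply == b"reply to: GET page"
```

The roles are asymmetric, as the paragraph notes: the server waits for contact and supplies information, while the client initiates the connection to obtain it.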


Electronic Mail and News Groups

Electronic mail, or e-mail, is a widely used Internet application that enables individuals or groups of individuals to quickly exchange messages, even if the users are geographically separated by large distances. A user creates an e-mail message and specifies a recipient using an e-mail address, which is a string consisting of the recipient’s login name followed by an @ (at) sign and then a domain name. E-mail software transfers the message across the Internet to the recipient’s computer, where it is placed in the specified mailbox, a file on the hard drive. The recipient uses an e-mail application to view and reply to the message, as well as to save or delete it. Because e-mail is a convenient and inexpensive form of communication, it has dramatically improved personal and business communications.
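An address of this form can be taken apart with ordinary string handling. The address jsmith@example.com is made up, using the example.com domain reserved for documentation.

```python
def split_address(address: str) -> tuple[str, str]:
    """Split an e-mail address into the login name and the domain name."""
    login, _, domain = address.rpartition("@")
    return login, domain

assert split_address("jsmith@example.com") == ("jsmith", "example.com")
```

The domain name tells e-mail software which computer on the Internet holds the recipient’s mailbox; the login name identifies the mailbox on that computer.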

In its original form, e-mail could only be sent to recipients named by the sender, and only text messages could be sent. E-mail has since been extended in two ways, making it a much more powerful tool. First, software has been invented that can automatically propagate to multiple recipients a message sent to a single address. Known as a mail gateway or list server, such software allows individuals to join or leave a mail list at any time. It can be used to create lists of individuals who will receive announcements about a product or service, or to create online discussion groups. Of particular interest are Network News discussion groups (newsgroups), which were originally part of the Usenet network. Thousands of newsgroups exist, on an extremely wide range of subjects. Messages to a newsgroup are not sent directly to each user. Instead, an ordered list of the messages is disseminated to computers around the world that run news server software. Newsgroup application software allows a user to obtain a copy of selected articles from a local news server or to use e-mail to post a new message to the newsgroup. The system makes newsgroup discussions available worldwide.

E-mail software has also been extended to allow the transfer of nontext documents, such as graphics and other images, executable computer programs, and prerecorded audio. Such documents, appended to an e-mail message, are called attachments. The standard used for encoding attachments is known as Multipurpose Internet Mail Extensions (MIME). Because the Internet e-mail system only transfers printable text, MIME software encodes each document using printable letters and digits before sending it and then decodes the item when e-mail arrives. Most significantly, MIME allows a single message to contain multiple items, allowing a sender to include a cover letter that explains each of the attachments.
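The core of the encoding step can be shown with Python’s base64 module, one of the encodings MIME actually uses: arbitrary bytes become printable characters for transfer and are restored on arrival. A full MIME message adds headers and multipart structure not shown here.

```python
import base64

attachment = bytes([0, 255, 128, 7])    # nontext binary data
encoded = base64.b64encode(attachment)  # printable letters and digits only
assert encoded == b"AP+ABw=="
assert encoded.isascii()

decoded = base64.b64decode(encoded)     # receiving side restores the original
assert decoded == attachment
```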


Other Internet Applications

Although the World Wide Web is the most popular application, other Internet applications are widely used. For example, the Telnet application enables a user to interactively access a remote computer. Telnet gives the appearance that the user’s keyboard and screen are connected directly to the remote computer. For example, a businessperson who is visiting a location that has Internet access can use Telnet to contact their office computer. Doing so is faster and less expensive than using dial-up modems.

The Internet can also be used to transfer telephone calls using an application known as IP-telephony. This application requires a special phone that digitizes voice and sends it over the Internet to a second IP telephone. Another application, known as the File Transfer Protocol (FTP), is used to download files from an Internet site to a user’s computer. The FTP application is often automatically invoked when a user downloads an updated version of a piece of software. Applications such as FTP have been integrated with the World Wide Web, making them transparent so that they run automatically without requiring users to open them. When a Web browser encounters a URL that begins with ftp:// it automatically uses FTP to access the item.



Computers store all information as binary numbers. The binary number system uses two binary digits, 0 and 1, which are called bits. The amount of data that a computer network can transfer in a certain amount of time is called the bandwidth of the network and is measured in kilobits per second (kbps) or megabits per second (Mbps). A kilobit is 1 thousand bits; a megabit is 1 million bits. A dial-up telephone modem can transfer data at rates up to 56 kbps; DSL and cable modem connections are much faster and can transfer data at several Mbps. The Internet connections used by businesses often operate at 155 Mbps, and connections between routers in the heart of the Internet may operate at rates from 2,488 to 9,953 Mbps (9.953 gigabits per second). The terms wideband or broadband are used to characterize networks with high capacity and to distinguish them from narrowband networks, which have low capacity.
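The practical difference between these rates is easy to compute. The figures below assume a 1-megabyte file (8 million bits) and an illustrative 4-Mbps cable connection.

```python
def transfer_seconds(size_bits: int, rate_bits_per_second: int) -> float:
    """Time to move size_bits over a link with the given bandwidth."""
    return size_bits / rate_bits_per_second

one_megabyte = 8_000_000  # 1,000,000 bytes at 8 bits per byte

dialup = transfer_seconds(one_megabyte, 56_000)    # 56 kbps dial-up modem
cable = transfer_seconds(one_megabyte, 4_000_000)  # hypothetical 4 Mbps cable modem

assert round(dialup) == 143  # well over two minutes by dial-up
assert cable == 2.0          # two seconds by cable modem
```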



Research on dividing information into packets and switching them from computer to computer began in the 1960s. The U.S. Department of Defense Advanced Research Projects Agency (ARPA) funded a research project that created a packet switching network known as the ARPANET. ARPA also funded research projects that produced two satellite networks. In the 1970s ARPA was faced with a dilemma: Each of its networks had advantages for some situations, but each network was incompatible with the others. ARPA focused research on ways that networks could be interconnected, and the Internet was envisioned and created to be an interconnection of networks that use TCP/IP protocols. In the early 1980s a group of academic computer scientists formed the Computer Science NETwork, which used TCP/IP protocols. Other government agencies extended the role of TCP/IP by applying it to their networks: The Department of Energy’s Magnetic Fusion Energy Network (MFENet), the High Energy Physics NETwork (HEPNET), and the National Science Foundation NETwork (NSFNET).

In the 1980s, as large commercial companies began to use TCP/IP to build private internets, ARPA investigated transmission of multimedia—audio, video, and graphics—across the Internet. Other groups investigated hypertext and created tools such as Gopher that allowed users to browse menus, which are lists of possible options. In 1989 many of these technologies were combined to create the World Wide Web. Initially designed to aid communication among physicists who worked in widely separated locations, the Web became immensely popular and eventually replaced other tools. Also during the late 1980s, the U.S. government began to lift restrictions on who could use the Internet, and commercialization of the Internet began. In the early 1990s, with users no longer restricted to the scientific or military communities, the Internet quickly expanded to include universities, companies of all sizes, libraries, public and private schools, local and state governments, individuals, and families.



Several technical challenges must be overcome if the Internet is to continue growing at the current phenomenal rate. The primary challenge is to create enough capacity to accommodate increases in traffic. Internet traffic is increasing as more people become Internet users and existing users send ever greater amounts of data. If the volume of traffic increases faster than the capacity of the network increases, congestion will occur, similar to the congestion that occurs when too many cars attempt to use a highway. To avoid congestion, researchers have developed technologies such as Dense Wave Division Multiplexing (DWDM) that transfer more bits per second across an optical fiber. The speed of routers and other packet handling equipment must also increase to accommodate growth. In the short term, researchers are developing faster electronic processors; in the long term, new technologies will be required.

Another challenge involves IP addresses. Although the original protocol design provided addresses for up to 4.29 billion individual computers, the addresses have begun to run out because they were assigned in blocks. Researchers developed technologies such as Network Address Translation (NAT) to conserve addresses. NAT allows multiple computers at a residence to “share” a single Internet address. Engineers have also planned a next generation of IP, called IPv6, that will handle many more addresses than the current version.
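The numbers involved follow directly from the address sizes: the current IP addresses are 32 bits long, while IPv6 addresses are 128 bits long.

```python
ipv4_addresses = 2 ** 32   # about 4.29 billion, the figure cited above
ipv6_addresses = 2 ** 128  # vastly larger

assert ipv4_addresses == 4_294_967_296
# IPv6 multiplies the address space by 2**96.
assert ipv6_addresses // ipv4_addresses == 2 ** 96
```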

Short, easy-to-remember domain names are also in short supply. Many domain names that use the simple format http://www.[word].com, where [word] is a common noun or verb, are already in use. Currently, only a few endings are allowed, such as .com, .org, and .net. In the near future, additional endings will be allowed, such as .biz and .info. This will greatly expand the number of possible URLs.

Other important questions concerning Internet growth relate to government controls, especially taxation and censorship. Because the Internet has grown so rapidly, governments have had little time to pass laws that control its deployment and use, impose taxes on Internet commerce, or otherwise regulate content. Many Internet users in the United States view censorship laws as an infringement on their constitutional right to free speech. In 1996 the Congress of the United States passed the Communications Decency Act, which made it a crime to transmit indecent material over the Internet. The act resulted in an immediate outcry from users, industry experts, and civil liberties groups opposed to such censorship. In 1997 the Supreme Court of the United States declared the act unconstitutional because it violated First Amendment rights to free speech. Lawmakers responded in 1998 by passing a narrower antipornography bill, the Child Online Protection Act (COPA). COPA required commercial Web sites to ensure that children could not access material deemed harmful to minors. In 1999 a federal judge blocked COPA as well, ruling that it would dangerously restrict constitutionally protected free speech.

Increasing commercial use of the Internet has heightened security and privacy concerns. With a credit or debit card, an Internet user can order almost anything from an Internet site and have it delivered to their home or office. Companies doing business over the Internet need sophisticated security measures to protect credit card, bank account, and social security numbers from unauthorized access as they pass across the Internet. Any organization that connects its intranet to the global Internet must carefully control the access point to ensure that outsiders cannot disrupt the organization’s internal networks or gain unauthorized access to the organization’s computer systems and data. The questions of government control and Internet security will continue to be important as the Internet grows.

Microsoft Corporation



Microsoft Corporation, leading American computer software company. Microsoft develops and sells a wide variety of software products to businesses and consumers in more than 50 countries. The company’s Windows operating systems for personal computers are the most widely used operating systems in the world. Microsoft has its headquarters in Redmond, Washington.

Microsoft’s other well-known products include Word, a word processor; Excel, a spreadsheet program; Access, a database program; and PowerPoint, a program for making business presentations. These programs are sold separately and as part of Office, an integrated software suite. The company also makes BackOffice, an integrated set of server products for businesses. Microsoft’s Internet Explorer allows users to browse the World Wide Web. Among the company’s other products are reference applications; games; financial software; programming languages for software developers; input devices, such as pointing devices and keyboards; and computer-related books.

Microsoft operates The Microsoft Network (MSN), a collection of news, travel, financial, entertainment, and information Web sites. Microsoft and the National Broadcasting Company (NBC) jointly operate MSNBC, a 24-hour news, talk, and information cable-television channel and companion Web site.



Microsoft was founded in 1975 by William H. Gates III and Paul Allen. The pair had teamed up in high school through their hobby of programming on the original PDP-10 computer from the Digital Equipment Corporation. In 1975 Popular Electronics magazine featured a cover story about the Altair 8800, the first personal computer. The article inspired Gates and Allen to develop a version of the BASIC programming language for the Altair. They licensed the software to Micro Instrumentation and Telemetry Systems (MITS), the Altair’s manufacturer, and formed Microsoft (originally Micro-soft) in Albuquerque, New Mexico, to develop versions of BASIC for other computer companies. Microsoft’s early customers included fledgling hardware firms such as Apple Computer, maker of the Apple II computer; Commodore, maker of the PET computer; and Tandy Corporation, maker of the Radio Shack TRS-80 computer. In 1977 Microsoft shipped its second language product, Microsoft Fortran, and it soon released versions of BASIC for the 8080 and 8086 microprocessors.



In 1979 Gates and Allen moved the company to Bellevue, Washington, a suburb of their hometown of Seattle. (The company moved to its current headquarters in Redmond in 1986.) In 1980 International Business Machines Corporation (IBM) chose Microsoft to write the operating system for the IBM PC personal computer, to be introduced the following year. Under time pressure, Microsoft purchased 86-DOS (originally called QDOS for Quick and Dirty Operating System) from Seattle programmer Tim Paterson for $50,000, modified it, and renamed it MS-DOS (Microsoft Disk Operating System). As part of its contract with IBM, Microsoft was permitted to license the operating system to other companies. By 1984 Microsoft had licensed MS-DOS to 200 personal computer manufacturers, making MS-DOS the standard operating system for personal computers and driving Microsoft’s enormous growth in the 1980s. Allen left the company in 1983 but remained on its board of directors until 2000.



As sales of MS-DOS took off, Microsoft began to develop business applications for personal computers. In 1982 it released Multiplan, a spreadsheet program, and the following year it released a word-processing program, Microsoft Word. In 1984 Microsoft was one of the few established software companies to develop application software for the Macintosh, a personal computer developed by Apple Computer. Microsoft’s early support for the Macintosh resulted in tremendous success for its Macintosh application software, including Word, Excel, and Works (an integrated software suite). Multiplan for MS-DOS, however, faltered against the popular Lotus 1-2-3 spreadsheet program made by Lotus Development Corporation.



In 1985 Microsoft released Windows, an operating system that extended the features of MS-DOS and employed a graphical user interface. Windows 2.0, released in 1987, improved performance and offered a new visual appearance. In 1990 Microsoft released a more powerful version, Windows 3.0, which was followed by Windows 3.1 and 3.11. These versions, which came preinstalled on most new personal computers, rapidly became the most widely used operating systems. In 1990 Microsoft became the first personal-computer software company to record $1 billion in annual sales.

As Microsoft’s dominance grew in the market for personal-computer operating systems, the company was accused of monopolistic business practices. In 1990 the Federal Trade Commission (FTC) began investigating Microsoft for alleged anticompetitive practices, but it was unable to reach a decision and dropped the case. The United States Department of Justice continued the probe.

In 1991 Microsoft and IBM ended a decade of collaboration when they went separate ways on the next generation of operating systems for personal computers. IBM chose to pursue the OS/2 operating system (first released in 1987), which until then had been a joint venture with Microsoft. Microsoft chose to evolve its Windows operating system into increasingly powerful systems. In 1993 Apple lost a copyright-infringement lawsuit against Microsoft that claimed Windows illegally copied the design of the Macintosh’s graphical interface. The ruling was later upheld by an appellate court.

In 1993 Microsoft released Windows NT, an operating system for business environments. The following year the company and the Justice Department reached an agreement that called for Microsoft to change the way its operating system software was sold and licensed to computer manufacturers. In 1995 the company released Windows 95, which featured a simplified interface, multitasking, and other improvements. An estimated 7 million copies of Windows 95 were sold worldwide within seven weeks of its release.




Business Developments

In the mid-1990s Microsoft began to expand into the media, entertainment, and communications industries, launching The Microsoft Network in 1995 and MSNBC in 1996. Also in 1996 Microsoft introduced Windows CE, an operating system for handheld personal computers. In 1997 Microsoft paid $425 million to acquire WebTV Networks, a manufacturer of low-cost devices to connect televisions to the Internet. That same year Microsoft invested $1 billion in Comcast Corporation, a U.S. cable television operator, as part of an effort to expand the availability of high-speed connections to the Internet.

In June 1998 Microsoft released Windows 98, which featured integrated Internet capabilities. In the following month Gates appointed Steve Ballmer, executive vice president of Microsoft, as the company’s president, transferring to him supervision of most day-to-day business operations of the company. Gates retained the title of chairman and chief executive officer (CEO).

In 1999 Microsoft paid $5 billion to telecommunications company AT&T Corp. to use Microsoft’s Windows CE operating system in devices designed to provide consumers with integrated cable television, telephone, and high-speed Internet services. Also in 1999, the company released Windows 2000, the latest version of the Windows NT operating system. In January 2000 Gates transferred his title of CEO to Ballmer. Gates, in turn, took on the title of chief software architect to focus on the development of new products and technologies.


Legal Challenges

In late 1997 the Justice Department accused Microsoft of violating its 1994 agreement by requiring computer manufacturers that installed Windows 95 to also include Internet Explorer, Microsoft’s software for browsing the Internet. The government contended that Microsoft was illegally taking advantage of its power in the market for computer operating systems to gain control of the market for Internet browsers. In response, Microsoft argued that it should have the right to enhance the functionality of Windows by integrating Internet-related features into the operating system. Also in late 1997, computer company Sun Microsystems sued Microsoft, alleging that it had breached a contract for use of Sun’s Java universal programming language by introducing Windows-only enhancements. In November 1998 a federal district court ruled against Microsoft on an injunction filed by Sun earlier that year. The injunction forced Microsoft to revise its software to meet Sun’s Java compatibility standards. The two companies settled the case in 2001, with Microsoft agreeing to pay Sun $20 million for limited use of Java.

Microsoft temporarily settled with the Justice Department in its antitrust case in early 1998 by agreeing to allow personal computer manufacturers to offer a version of Windows 95 that did not include access to Internet Explorer. However, in May 1998 the Justice Department and 20 states filed broad antitrust suits charging Microsoft with engaging in anticompetitive conduct. The suits sought to force Microsoft to offer Windows without Internet Explorer or to include Navigator, a competing browser made by Netscape Communications Corporation. The suits also challenged some of the company’s contracts and pricing strategies.

The federal antitrust trial against Microsoft began in October 1998. Executives from Netscape, Sun, and several other computer software and hardware companies testified regarding their business deals with Microsoft. In November 1999 Judge Thomas Penfield Jackson issued his findings of fact in the antitrust case, in which he declared that Microsoft had a monopoly in the market for personal computer operating systems. In 2000 Jackson ruled that the company had violated antitrust laws by engaging in tactics that discouraged competition. He ordered Microsoft to be split into two companies: one for operating systems and another for all other businesses, including its Office software suite. He also imposed a number of interim restrictions on the company’s business practices. The judge put these penalties on hold while Microsoft appealed the decision.

In June 2001 an appeals court upheld Jackson’s findings that Microsoft had monopoly power and that the company used anticompetitive business practices to protect its Windows monopoly. However, the appeals court threw out the trial court’s ruling that Microsoft had illegally integrated Internet Explorer into Windows, returning the issue to a lower court for review under a different legal standard. The appeals court also reversed Jackson’s order to break up the company, in part because of the judge’s failure to hold a proper hearing on the remedy and in part because of comments he made to reporters outside the courtroom about the merits of the case. The court found that Jackson’s comments were improper because they created the appearance of bias, even though the court found no evidence of actual bias. The appeals court ordered that the case be assigned to a different judge to reconsider the remedy for Microsoft’s violations of antitrust law.


Browser, in computer science, a program that enables a computer to locate, download, and display documents containing text, sound, video, graphics, animation, and photographs located on computer networks. The act of viewing and moving about between documents on computer networks is called browsing. Users browse through documents on open, public-access networks called internets, or on closed networks called intranets. The largest open network is the Internet, a worldwide computer network that provides access to sites on the World Wide Web (WWW, the Web).

Browsers allow users to access Web information by locating documents on remote computers that function as Web servers. A browser downloads information over phone lines to a user’s computer through the user’s modem and then displays the information on the computer. Most browsers can display a variety of media that may be integrated into such a document, including text, graphics, animation, audio, and video. Examples of browsers are Netscape, Internet Explorer, and Mosaic.

Browsers can create the illusion of traveling to an actual location in virtual space (hyperspace) where the document being viewed exists. This virtual location in hyperspace is referred to as a node, or a Web site. The process of virtual travel between Web sites is called navigating.

Documents on networks are called hypertext if they contain text only, or hypermedia if they include graphics as well as text. Every hypertext or hypermedia document on an internet has a unique address called a uniform resource locator (URL). Hypertext documents usually contain references to other URLs that appear in bold, underlined, or colored text. The user can connect to the site indicated by the URL by clicking on it. This use of a URL within a Web site is known as a hyperlink. When the user clicks on a hyperlink, the browser moves to this next server and downloads and displays the document targeted by the link. Using this method, browsers can rapidly take users back and forth between different sites.

Common features found in browsers include the ability to automatically designate a Web site to which the browser opens with each use, the option to create directories of favorite or useful Web sites, access to search engines (programs that permit the use of key words to locate information on the Internet, an internet or an intranet), and the ability to screen out certain types of information by blocking access to certain categories of sites.

A browser’s performance depends upon the speed and efficiency of the user’s computer, the type of modem being used, and the bandwidth of the data-transmission medium (the amount of information that can be transmitted per second). Low bandwidth results in slow movement of data between source and recipient, leading to longer transmission times for documents. Browsers may also have difficulty reaching a site during times of heavy traffic on the network or because of high use of the site.

The most commonly used browsers for the Web are available for free or for a small charge and can be downloaded from the Internet. Browsers have become one of the most important tools—ranking with e-mail—for computer network users. They have provided tens of millions of people with a gateway to information and communication through the Internet.


Hypermedia, in computer science, the integration of graphics, sound, video, and animation into documents or files that are linked in an associative system of information storage and retrieval. Hypermedia files contain cross references called hyperlinks that connect to other files with related information, allowing users to easily move, or navigate, from one document to another through these associations.

Hypermedia is structured around the idea of offering a working and learning environment that parallels human thinking—that is, an environment that allows the user to make associations between topics rather than move sequentially from one to the next, as in an alphabetical list. Hypermedia topics are thus linked in a manner that allows the user to jump from subject to related subject in searching for information. For example, a hypermedia presentation on navigation might include links to such topics as astronomy, bird migration, geography, satellites, and radar. If the information is primarily in text form, the document or file is called hypertext. If video, music, animation, or other elements are included, the document is called a hypermedia document.

The World Wide Web (WWW) is a well-known hypermedia environment that users can access through the Internet. Other forms of hypermedia applications include CD-ROM encyclopedias and games. To view hypermedia documents, a user’s computer must have hardware and software that support multimedia. This usually consists of sound and video cards, speakers, and a graphical operating system such as Windows 95 or the Apple operating system. To view hypermedia documents on the Internet, a program called a browser is necessary. Browsers provide varying levels of support for the graphics, sound, and video available on the Internet. Examples of browsers are Netscape, Internet Explorer, and Mosaic.

A wide variety of computer programs are used to create different media applications on the WWW. To run these applications, a browser must be supplemented by programs called plug-ins. Plug-ins are tools that allow a computer to display or interpret the various file formats in which multimedia files exist. Plug-ins are usually available for free and can be downloaded to and stored on a computer’s hard drive.

The number of people experiencing hypermedia is growing rapidly as a result of increased exposure to the WWW, better computer and modem performance, and increases in data transmission rates.

Web Site

Web Site, in computer science, file of information located on a server connected to the World Wide Web (WWW). The WWW is a set of protocols and software that allows the global computer network called the Internet to display multimedia documents. Web sites may include text, photographs, illustrations, video, music, or computer programs. They also often include links to other sites in the form of hypertext, highlighted or colored text that the user can click on with the mouse, instructing the computer to jump to the new site.

Every web site has a specific address on the WWW, called a Uniform Resource Locator (URL). These addresses end in extensions that indicate the type of organization sponsoring the web site, for example, .gov for government agencies, .edu for academic institutions, and .com for commercial enterprises. The user’s computer must be connected to the Internet and have a special software program called a browser to retrieve and read information from a web site. Examples of browsers include Navigator from the Netscape Communications Corporation and Explorer from the Microsoft Corporation.

The content presented on a web site usually contains hypertext and icons, pictures that also serve as links to other sites. By clicking on the hypertext or icons with their mouse, users instruct their browser program to connect to the web site specified by the URL contained in the hypertext link. These links are embedded in the web site through the use of Hypertext Markup Language (HTML), a special language that encodes the links with the correct URL.
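As an illustrative sketch of how HTML encodes such a link, a hyperlink is written as an anchor tag whose href attribute holds the URL. The short Python function below assembles one (the URL and link text are invented for illustration):

```python
def make_link(url, text):
    """Return an HTML anchor tag encoding a hypertext link."""
    return '<a href="{}">{}</a>'.format(url, text)

# A hypothetical link to an encyclopedia entry on navigation.
print(make_link("http://www.example.com/navigation.html", "Navigation"))
# <a href="http://www.example.com/navigation.html">Navigation</a>
```

When a user clicks the rendered text "Navigation", the browser reads the URL out of the href attribute and retrieves that page.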

Web sites generally offer an appearance that resembles the graphical user interfaces (GUI) of Microsoft’s Windows operating system, Apple’s Macintosh operating system, and other graphics-based operating systems. They may include scroll bars, menus, buttons, icons, and toolbars, all of which can be activated by a mouse or other input device.

To find a web site, a user can consult an Internet reference guide or directory, or use one of the many freely available search engines, such as WebCrawler from America Online Incorporated. These engines are search and retrieval programs, of varying sophistication, that ask the user to fill out a form before executing a search of the WWW for the requested information. The user can also create a list of the URLs of frequently visited web sites. Such a list helps a user recall a URL and easily access the desired web site. Web sites are easily modified and updated, so the content of many sites changes frequently.

Computer Program



Computer Program, set of instructions that directs a computer to perform some processing function or combination of functions. For the instructions to be carried out, a computer must execute a program, that is, the computer reads the program, and then follows the steps encoded in the program in a precise order until completion. A program can be executed many different times, with each execution yielding a potentially different result depending upon the options and data that the user gives the computer.

Programs fall into two major classes: application programs and operating systems. An application program is one that carries out some function directly for a user, such as word processing or game-playing. An operating system is a program that manages the computer and the various resources and devices connected to it, such as RAM (random access memory), hard drives, monitors, keyboards, printers, and modems, so that they may be used by other programs. Examples of operating systems are DOS, Windows 95, OS/2, and UNIX.



Software designers create new programs by using special applications programs, often called utility programs or development programs. A programmer uses another type of program called a text editor to write the new program in a special notation called a programming language. With the text editor, the programmer creates a text file, which is an ordered list of instructions, also called the program source file. The individual instructions that make up the program source file are called source code. At this point, a special applications program translates the source code into machine language, or object code—a format that the operating system will recognize as a proper program and be able to execute.

Three types of applications programs translate from source code to object code: compilers, interpreters, and assemblers. The three operate differently and on different types of programming languages, but they serve the same purpose of translating from a programming language into machine language.

A compiler translates text files written in a high-level programming language—such as Fortran, C, or Pascal—from the source code to the object code all at once. This differs from the approach taken by interpreted languages such as BASIC, APL, and LISP, in which a program is translated into object code statement by statement as each instruction is executed. The advantage of interpreted languages is that they can begin executing the program immediately instead of having to wait for all of the source code to be compiled. Changes can also be made to the program fairly quickly without having to wait for it to be compiled again. The disadvantage of interpreted languages is that they are slow to execute, since the entire program must be translated one instruction at a time, each time the program is run. On the other hand, compiled languages are compiled only once and thus can be executed by the computer much more quickly than interpreted languages. For this reason, compiled languages are more common and are almost always used in professional and scientific applications.
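The translate-once-then-execute idea can be sketched with Python’s built-in compile and exec functions: the whole source text is translated a single time, and the translated form can then be run repeatedly without re-translation. This is a sketch of the principle, not a description of any particular compiler:

```python
source = """
total = 0
for n in range(1, 11):
    total += n
"""

# Translate the entire source once, as a compiler would...
code_object = compile(source, "<example>", "exec")

# ...then execute the translated form without translating again.
namespace = {}
exec(code_object, namespace)
print(namespace["total"])  # 55
```

An interpreter, by contrast, would translate and execute each of the statements in the source text one at a time, every time the program runs.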

Another type of translator is the assembler, which is used for programs or parts of programs written in assembly language. Assembly language is another programming language, but it is much more similar to machine language than other types of high-level languages. In assembly language, a single statement can usually be translated into a single instruction of machine language. Today, assembly language is rarely used to write an entire program, but is instead most often used when the programmer needs to directly control some aspect of the computer’s function.

Programs are often written as a set of smaller pieces, with each piece representing some aspect of the overall application program. After each piece has been compiled separately, a program called a linker combines all of the translated pieces into a single executable program.

Programs seldom work correctly the first time, so a program called a debugger is often used to help find problems called bugs. Debugging programs usually detect an event in the executing program and point the programmer back to the origin of the event in the program code.

Recent programming systems, such as Java, use a combination of approaches to create and execute programs. A compiler takes a Java source program and translates it into an intermediate form. Such intermediate programs are then transferred over the Internet into computers where an interpreter program then executes the intermediate form as an application program.



Most programs are built from just a few kinds of steps that are repeated many times in different contexts and in different combinations throughout the program. The most common step performs some computation, and then proceeds to the next step in the program, in the order specified by the programmer.

Programs often need to repeat a short series of steps many times, for instance in looking through a list of game scores and finding the highest score. Such repetitive sequences of code are called loops.
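The game-score example above can be sketched as a loop in Python (the list of scores is invented for illustration):

```python
scores = [120, 340, 95, 480, 220]  # hypothetical game scores

highest = scores[0]
for score in scores:        # the same steps repeat for each score
    if score > highest:     # keep the largest value seen so far
        highest = score

print(highest)  # 480
```

The body of the loop is written once, but the computer executes it once per score in the list.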

One of the capabilities that makes computers so useful is their ability to make conditional decisions and perform different instructions based on the values of data being processed. If-then-else statements implement this function by testing some piece of data and then selecting one of two sequences of instructions on the basis of the result. One of the instructions in these alternatives may be a goto statement that directs the computer to select its next instruction from a different part of the program. For example, a program might compare two numbers and branch to a different part of the program depending on the result of the comparison:

If x is greater than y
goto instruction #10
else continue

Programs often use a specific sequence of steps more than once. Such a sequence of steps can be grouped together into a subroutine, which can then be called, or accessed, as needed in different parts of the main program. Each time a subroutine is called, the computer remembers where it was in the program when the call was made, so that it can return there upon completion of the subroutine. Preceding each call, a program can specify that different data be used by the subroutine, allowing a very general piece of code to be written once and used in multiple ways.

Most programs use several varieties of subroutines. The most common of these are functions, procedures, library routines, system routines, and device drivers. Functions are short subroutines that compute some value, such as computations of angles, which the computer cannot compute with a single basic instruction. Procedures perform a more complex function, such as sorting a set of names. Library routines are subroutines that are written for use by many different programs. System routines are similar to library routines but are actually found in the operating system. They provide some service for the application programs, such as printing a line of text. Device drivers are system routines that are added to an operating system to allow the computer to communicate with a new device, such as a scanner, modem, or printer. Device drivers often have features that can be executed directly as applications programs. This allows the user to directly control the device, which is useful if, for instance, a color printer needs to be realigned to attain the best printing quality after changing an ink cartridge.



Modern computers usually store programs on some form of magnetic storage media that can be accessed randomly by the computer, such as the hard disk drive permanently located in the computer or a portable floppy disk. Additional information on such disks, stored in directories, indicates the names of the various programs on the disk, when they were written to the disk, and where each program begins on the disk media. When a user directs the computer to execute a particular application program, the operating system looks through these directories, locates the program, and reads a copy into RAM. The operating system then directs the CPU (central processing unit) to start executing the instructions at the beginning of the program. Instructions at the beginning of the program prepare the computer to process information by locating free memory locations in RAM to hold working data, retrieving copies of the standard options and defaults the user has indicated from a disk, and drawing initial displays on the monitor.

The application program requests a copy of any information the user enters by making a call to a system routine. The operating system converts any data so entered into a standard internal form. The application then uses this information to decide what to do next—for example, perform some desired processing function such as reformatting a page of text, or obtain some additional information from another file on a disk. In either case, calls to other system routines are used to actually carry out the display of the results or the accessing of the file from the disk.

When the application reaches completion or is prompted to quit, it makes further system calls to make sure that all data that needs to be saved has been written back to disk. It then makes a final system call to the operating system indicating that it is finished. The operating system then frees up the RAM and any devices that the application was using and awaits a command from the user to start another program.



People have been storing sequences of instructions in the form of a program for several centuries. Music boxes of the 18th century and player pianos of the late 19th and early 20th centuries played musical programs stored as series of metal pins, or holes in paper, with each line (of pins or holes) representing when a note was to be played, and the pin or hole indicating what note was to be played at that time. More elaborate control of physical devices became common in the early 1800s with French inventor Joseph Marie Jacquard’s invention of the punch-card controlled weaving loom. In the process of weaving a particular pattern, various parts of the loom had to be mechanically positioned. To automate this process, Jacquard used a single paper card to represent each positioning of the loom, with holes in the card to indicate which loom actions should be done. An entire tapestry could be encoded onto a deck of such cards, with the same deck yielding the same tapestry design each time it was used. Programs of over 24,000 cards were developed and used.

The world’s first programmable machine was designed—although never fully built—by the English mathematician and inventor Charles Babbage. This machine, called the Analytical Engine, used punch cards similar to those used in the Jacquard loom to select the specific arithmetic operation to apply at each step. Inserting a different set of cards changed the computations the machine performed. This machine had counterparts for almost everything found in modern computers, although it was mechanical rather than electrical. Construction of the Analytical Engine was never completed because the technology required to build it did not exist at the time.

The first card deck programs for the Analytical Engine were developed by British mathematician Countess Augusta Ada Lovelace, daughter of the poet Lord Byron. For this reason she is recognized as the world’s first programmer.

The modern concept of an internally stored computer program was first proposed by Hungarian-American mathematician John von Neumann in 1945. Von Neumann’s idea was to use the computer’s memory to store the program as well as the data. In this way, programs can be viewed as data and can be processed like data by other programs. This idea greatly simplifies the role of program storage and execution in computers.



The field of computer science has grown rapidly since the 1950s due to the increased use of computers. Computer programs have undergone many changes during this time in response to user need and advances in technology. Newer ideas in computing such as parallel computing, distributed computing, and artificial intelligence have radically altered the traditional concepts that once determined program form and function.

Computer scientists working in the field of parallel computing, in which multiple CPUs cooperate on the same problem at the same time, have introduced a number of new program models. In parallel computing, parts of a problem are worked on simultaneously by different processors, which speeds up the solution of the problem. Many challenges face scientists and engineers who design programs for parallel processing computers, because of the extreme complexity of the systems and the difficulty involved in making them operate as effectively as possible.

Another type of parallel computing called distributed computing uses CPUs from many interconnected computers to solve problems. Often the computers used to process information in a distributed computing application are connected over the Internet. Internet applications are becoming a particularly useful form of distributed computing, especially with programming languages such as Java. In such applications, a user logs onto a Web site and downloads a Java program onto their computer. When the Java program is run, it communicates with other programs at its home web site, and may also communicate with other programs running on different computers or web sites.

Research into artificial intelligence (AI) has led to several other new styles of programming. Logic programs, for example, do not consist of individual instructions for the computer to follow blindly, but instead consist of sets of rules: if x happens then do y. A special program called an inference engine uses these rules to “reason” its way to a conclusion when presented with a new problem. Applications of logic programs include automatic monitoring of complex systems, and proving mathematical theorems.
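A minimal sketch of the rule-based idea, with invented facts and rules: the loop below repeatedly applies "if x then y" rules to a set of known facts until no new conclusion can be derived, which is the forward-chaining strategy many inference engines use.

```python
# Hypothetical rules: if every premise holds, conclude the consequent.
rules = [
    ({"temperature_high", "pressure_rising"}, "open_valve"),
    ({"open_valve"}, "log_event"),
]
facts = {"temperature_high", "pressure_rising"}

# Forward chaining: apply rules until nothing new is derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['log_event', 'open_valve', 'pressure_rising', 'temperature_high']
```

Note how the second rule fires only because the first rule's conclusion became a new fact—the engine "reasons" in steps rather than following a fixed instruction sequence.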

A radically different approach to computing in which there is no program in the conventional sense is called a neural network. A neural network is a group of highly interconnected simple processing elements, designed to mimic the brain. Instead of having a program direct the information processing in the way that a traditional computer does, a neural network processes information depending upon the way that its processing elements are connected. Programming a neural network is accomplished by presenting it with known patterns of input and output data and adjusting the relative importance of the interconnections between the processing elements until the desired pattern matching is accomplished. Neural networks are usually simulated on traditional computers, but unlike traditional computer programs, neural networks are able to learn from their experience.
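The weight-adjustment idea can be sketched with a single artificial neuron trained on the logical AND function—a toy illustration of "adjusting the relative importance of the interconnections until the desired pattern matching is accomplished." The training data and learning rate are chosen only for illustration:

```python
# Known input/output patterns: the AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0   # connection weights, initially zero
rate = 0.1             # how strongly each error adjusts the weights

for _ in range(20):    # present the known patterns repeatedly
    for (x1, x2), target in samples:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output        # compare with the desired output
        w1 += rate * error * x1        # adjust the interconnection weights
        w2 += rate * error * x2
        bias += rate * error

# After training, the neuron reproduces the desired pattern.
for (x1, x2), target in samples:
    print((x1, x2), 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)
```

No sequence of instructions tells the neuron what AND means; the behavior emerges from the learned weights, which is what distinguishes this style from a conventional program.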


Debugger

Debugger, in computer science, a program designed to help in debugging another program by allowing the programmer to step through the program, examine data, and check conditions. There are two basic types of debuggers: machine-level and source-level. Machine-level debuggers display the actual machine instructions (disassembled into assembly language) and allow the programmer to look at registers and memory locations. Source-level debuggers let the programmer look at the original source code (C or Pascal, for example), examine variables and data structures by name, and so on.

Electronic Games



Electronic Games, software programs played for entertainment, challenge, or educational purposes. Electronic games are full of color, sound, realistic movement, and visual effects, and some even employ human actors. There are two broad classes of electronic games: video games, which are designed for specific video-game systems, handheld devices, and coin-operated arcade consoles; and computer games, which are played on personal computers.

Categories of electronic games include strategy games, sports games, adventure and exploration games, solitaire and multiplayer card games, puzzle games, fast-action arcade games, flying simulations, and versions of classic board games. Software programs that employ game-play elements to teach reading, writing, problem solving, and other basic skills are commonly referred to as edutainment.

Electronic games put to use a variety of skills. Many games, such as Tetris and Pac-Man, serve as tests of hand-eye coordination. In these games the challenge is to play as long as possible while the game gets faster or more complex. Other games, such as Super Mario Bros., are more sophisticated. They employ hand-eye coordination by challenging the player to react quickly to action on the screen, but they also test judgment and perseverance, sometimes by presenting puzzles that players must solve to move forward in the game. Strategy games ask players to make more complicated decisions that can influence the long-range course of the game. Electronic games can pit players against each other on the same terminal, across a local network, or via the Internet. Most games that require an opponent can also be played alone, however, with the computer taking on the role of opponent.



Video-game consoles, small handheld game devices, and coin-operated arcade games are special computers built exclusively for playing games. To control the games, players can use joysticks, trackballs, buttons, steering wheels (for car-racing games), light guns, or specially designed controllers that include a joystick, direction pad, and several buttons or triggers. Goggles and other kinds of virtual reality headgear can provide three-dimensional effects in specialized games. These games attempt to give the player the experience of actually being in a jungle, the cockpit of an airplane, or another setting or situation.

The first video games, which consisted of little more than a few electronic circuits in a simplified computer, appeared around 1970 as coin-operated cabinet games in taverns and pinball arcade parlors. In 1972 the Atari company introduced a game called Pong, based on table tennis. In Pong, a ball and paddles are represented by lights on the screen; the ball is set in motion, and by blocking it with the paddles, players knock it back and forth across the screen until someone misses. Pong soon became the first successful commercial video game. Arcade games have remained popular ever since.

Also in 1972 the Magnavox company introduced a home video-game machine called the Odyssey system. It used similar ball-and-paddle games in cartridge form, playable on a machine hooked up to a television. In 1977 Atari announced the release of its own home video-game machine, the Atari 2600. Many of the games played on the Atari system had originally been introduced as arcade games. The most famous included Space Invaders and Asteroids. In Space Invaders, a player has to shoot down ranks of aliens as they march down the screen. In Asteroids, a player needs to destroy asteroids before they crash into the player’s ship. The longer the player survives, the more difficult both games become. After Atari’s success with home versions of such games, other companies began to compete for shares of the fast-growing home video-game market. Major competitors included Coleco with its ColecoVision system, and Mattel with Intellivision. Some companies, particularly Activision, gained success solely by producing games for other companies’ video-game systems.

After several years of enormous growth, the home video-game business collapsed in 1983. The large number of games on offer confused consumers, and many video-game users were increasingly disappointed with the games they purchased. They soon stopped buying games altogether. Failing to see the danger the industry faced, the leading companies continued to spend millions of dollars for product development and advertising. Eventually, these companies ran out of money and left the video-game business.

Despite the decline in the home video-game business, the arcade segment of the industry continued to thrive. Pac-Man, which appeared in arcades in 1980, was one of the major sensations of the time. In this game, players maneuver a button-shaped character notched with a large mouth around a maze full of little dots. The goal is to gobble up all the dots without being touched by one of four enemies in hot pursuit. Another popular game was Frogger, in which players try to guide a frog safely across a series of obstacles, including a busy road.

In the mid-1980s, Nintendo, a Japanese company, introduced the Nintendo Entertainment System (NES). The NES touched off a new boom in home video games, due primarily to two game series: Super Mario Bros. and The Legend of Zelda. These and other games offered more advanced graphics and animation than earlier home video-game systems had, reengaging the interest of game players. Once again, other companies joined the growing home video-game market. One of the most successful was Sega, also headquartered in Japan. In the early 1990s, the rival video-game machines were Nintendo’s Super NES and Sega’s Genesis. These systems had impressive capabilities to produce realistic graphics, sound, and animation.

Throughout the 1990s Nintendo and Sega competed for dominance of the American home video-game market, and in 1995 another Japanese company, Sony, emerged as a strong competitor. Sega and Sony introduced new systems in 1995, the Sega Saturn and the Sony PlayStation. Both use games that come on CD-ROMs (compact discs). A year later, Nintendo met the challenge with the cartridge-based Nintendo 64 system, which had even greater processing power than its competitors, meaning that faster and more complex games could be created. In 1998 Sega withdrew the Saturn system from the U.S. market because of low sales.



While video-game systems are used solely for gaming, games are only one of the many uses for computers. In computer games, players can use a keyboard to type in commands or a mouse to move a cursor around the screen, and sometimes they use both. Many computer games also allow the use of a joystick or game controller.

Computer games were born in the mid-1970s, when computer scientists started to create text adventure games to be played over networks of computers at universities and research institutions. These games challenged players to reach a goal or perform a certain task, such as finding a magical jewel. At each stop along the way, the games described a situation and required the player to respond by typing in an action. Each action introduced a new situation to which the player had to react. One of the earliest such games was called Zork.

Beginning in the late 1970s, Zork and similar games were adapted for use on personal computers, which were just gaining popularity. As technology improved, programmers began to incorporate graphics into adventure games. Because relatively few people owned home computers, however, the market for computer games grew slowly until the mid-1980s. Then, more dynamic games such as Choplifter, a helicopter-adventure game produced by Broderbund, helped fuel rising sales of computers. In 1982 Microsoft Corporation released Flight Simulator, which allows players to mimic the experience of flying an airplane.

As the power of personal computers increased in the 1980s, more sophisticated games were developed. Some of the companies that produced the most popular games were Sierra On-Line, Electronic Arts, and Strategic Simulations, Inc. (SSI). A line of so-called Sim games, produced by Maxis, enabled players to create and manage cities (SimCity), biological systems (SimEarth), and other organizational structures. In the process, players learned about the relationships between the elements of the system. For example, in SimCity a player might increase the tax rate to raise money only to find that people move out of the city, thus decreasing the number of taxpayers and possibly offsetting the increase in revenue. An educational mystery game called Where in the World Is Carmen Sandiego?, by Broderbund, was introduced in the 1980s and was aimed at children. The game tests players’ reasoning ability and general knowledge by requiring them to track down an elusive master criminal by compiling clues found around the world.

Computer games continued to gain popularity in the 1990s, with the introduction of more powerful and versatile personal computers and the growing use of computers in schools and homes. With the development of CD-ROM technology, games also integrated more graphics, sounds, and videos, making them more engaging for consumers. The most successful games of the 1990s included Doom (by Id Software) and Myst (by Broderbund). Doom is a violent action game in which the player is a marine charged with fighting evil creatures. In Myst, a player wanders through a fictional island world, attempting to solve puzzles and figure out clues. The game’s appeal comes from the process of exploration and discovery.

Many of the most recent games employ live actors and some of the techniques of filmmaking. Such games are like interactive movies. With the growth of the Internet in the mid-1990s, multiplayer gaming also became popular. In games played over the Internet—such as Ultima Online by Electronic Arts—dozens, hundreds, or even thousands of people can play a game at the same time. Players wander through a fictional world meeting not only computer-generated characters but characters controlled by other players as well. By the end of the 1990s the future for new computer games seemed limitless.




Games

Games, activities or contests governed by sets of rules. People engage in games for recreation and to develop mental or physical skills.

Games come in many varieties. They may have any number of players and can be played competitively or cooperatively. They also may involve a wide range of equipment. Some games, such as chess, test players’ analytic skills. Other games, such as darts and electronic games, require hand-eye coordination. Some games are also considered sports, especially when they involve physical skill.



Games may be classified in several ways. These include the number of players required (as in solitaire games), the purpose of playing (as in gambling games), the object of the game (as in race games, to finish first), the people who play them (as in children’s games), or the place they are played (as in lawn games). Many games fall into more than one of these categories, so the most common way of classifying games is by the equipment that is required to play them.

Board games probably make up the largest category of games. They are usually played on a flat surface made of cardboard, wood, or other material. Players place the board on a table or on the floor, then sit around it to play. In most board games, pieces are placed on the board and moved around on it. Dice, cards, and other equipment can be used.

In strategy board games, pieces are placed or moved in order to capture other pieces (as in chess or checkers) or to achieve such goals as gaining territory, linking pieces to one another, or aligning pieces together. Other major groups of board games include race games (such as backgammon), word games (Scrabble), games of deduction (Clue), trivia games (Trivial Pursuit), party games (Pictionary), family games (Life), financial games (Monopoly), sports games (Strat-O-Matic Baseball), action games (Operation), and games of conflict (Risk).

Many games fall into more than one category. The board game Life, for example, has elements of race games, and Trivial Pursuit is often played at parties. Other types of board games include topical games, which can be based on currently popular movies, television programs, or books; and simulation games, which range from historical war games to civilization-building games.

Role-playing games, which can be played without boards or with playing fields drawn by hand on paper, are often considered a distinct game category. In these games, each player assumes the role of a character with particular strengths and weaknesses. Another player known as the gamemaster leads the character-players through adventures. The most famous role-playing game is Dungeons & Dragons (now called Advanced Dungeons & Dragons), which was invented in the 1970s.

Some games, such as billiards and table tennis, are played on larger surfaces than board games, typically tables with legs. These table games also require different kinds of equipment from board games. In billiards, players use a cue stick to knock balls into one another. Table tennis players use paddles to hit a light ball back and forth over a net strung across the table.

Card games require a deck of cards, and sometimes paper and pencil (or occasionally other equipment, such as a cribbage board) for keeping score. Many popular games, including poker, bridge, and rummy, call for a standard deck of 52 playing cards. Some card games, such as canasta, use more than one deck or a larger deck. And other games use a deck from which certain cards have been removed, or decks with cards designed specifically for the game.

The major kinds of card games include trick-taking games, in which players try to take (or avoid taking) specific cards; melding games, in which players try to form winning combinations with their cards; betting games, in which players wager on the outcome; and solitaire games, which are played alone. A new category, collectible card games, became an overnight sensation in 1993 with the publication of Magic: The Gathering. In Magic and similar games, players buy a starter set of cards that they use to compete against other players. They can supplement the starter kit with additional purchases of random assortments of cards.

Tile games can be similar to card games, but they use pieces made of harder materials, such as wood, plastic, or bone. Popular tile games include Mah Jongg and dominoes. Dice games involve throwing a set of dice in an attempt to achieve certain combinations or totals. Paper and pencil games use only paper and pencil. Two such games, tic-tac-toe and dots-and-boxes, are among the first games that many children learn. Target games, in which players aim at a target, are tests of hand-eye coordination. Examples of target games are marbles, horseshoe pitching, and bowling.

Electronic games (video games and computer games) grew in popularity in the late 20th century, as the power of computers increased. In most electronic games, players use a keyboard, joystick, or some other type of game controller. Video games are played on specially designed arcade machines, handheld devices, or systems that are hooked to television screens. Computer games are played on home computers. With electronic games, the computer itself can serve as the opponent, allowing people to play traditional games such as chess or bridge against the computer.



Games have been played for thousands of years and are common to all cultures. Throughout history and around the world, people have used sticks to draw simple game boards on the ground, making up rules that incorporate stones or other common objects as playing pieces. About 5000 years ago people began to make more permanent game boards from sun-dried mud or wood. One of the earliest games, called senet, was played in ancient Egypt. Like many early games, senet had religious significance. Pictures on the board squares represented different parts of the journey that the ancient Egyptians believed the soul made after death.

Some of the oldest board games may have evolved from methods of divination, or fortune-telling. The game of go, which many experts regard as the finest example of a pure strategy game, may have evolved from a method of divination practiced in China more than 3000 years ago, in which black and white pieces were cast onto a square board marked with symbols of various significance. Go also involves black and white pieces on a board, but players deliberately place them on intersections of lines while trying to surround more territory than the opponent.

Many modern games evolved over centuries. As games spread to different geographic regions, people experimented with rules, creating variants and often changing the original game forever. The name mancala applies to a group of ancient Egyptian mathematical games in which pebbles, seeds, or other objects are moved around pits scooped out of dirt or wood. As the game spread through Asia, Africa, and the Americas, players developed local variations that are still played today. Two such variations are sungka, from the Philippines, and mweso, from Uganda.

Chess, xiangqi (Chinese chess), and shogi (Japanese chess) are among the most widely played board games in the world. Although quite different, all three are believed to have evolved from a common ancestor—either a 6th-century game played in India or an earlier game played in China. Over the centuries, chess spread westward to the Middle East and into Europe, with rules changing frequently. The game also spread eastward to Korea and Japan, resulting in very different rule changes.

For most of human history, a game could not gain much popularity unless it was fairly easy for players to make their own equipment. The invention of printing (which occurred in the mid-1400s in the West) made this process easier, but it was not until the advances of the 18th-century Industrial Revolution that it became possible to mass-produce many new varieties of games. Twentieth-century technological advances such as the invention of plastic and the computer revolution led to the creation of more games, and more new kinds of games, than in all previous centuries combined.



In recent years improvements in CDs (compact discs) and in other aspects of computer technology have brought about entire new categories of games that grow more sophisticated each year. Computer adventure games, which as recently as the early 1980s consisted almost entirely of text, can now feature sophisticated graphics and movie-like animations using human actors.

In the 1990s the Internet opened up the possibility of playing games with people in all parts of the world. Internet clubs have sprung up for many kinds of games, and many of the newest computer games now come with user interfaces for online play.


E-Mail, in computer science, abbreviation of the term electronic mail, method of transmitting data or text files from one computer to another over an intranet or the Internet. E-mail enables computer users to send messages and data quickly through a local area network or beyond through a nationwide or worldwide communication network. E-mail came into widespread use in the 1990s and has become a major development in business and personal communications.

E-mail users create and send messages from individual computers using commercial e-mail programs or mail-user agents (MUAs). Most of these programs have a text editor for composing messages. The user sends a message to one or more recipients by specifying destination addresses. When a user sends an e-mail message to several recipients at once, it is sometimes called broadcasting.

The address of an e-mail message includes the source and destination of the message. Different addressing conventions are used depending upon the e-mail destination. An interoffice message distributed over an intranet, or internal computer network, may have a simple scheme, such as the employee’s name, for the e-mail address. E-mail messages sent outside of an intranet are addressed according to the following convention: the user’s name comes first, followed by the symbol @ and then the domain name, which identifies the institution or organization and ends with a suffix indicating either the type of organization or the country.

A typical e-mail address might be sally@abc.com. In this example sally is the user’s name, abc is the domain name—the specific company, organization, or institution that the e-mail message is sent to or from, and the suffix com indicates the type of organization that abc belongs to—com for commercial, org for organization, edu for educational, mil for military, and gov for governmental. An e-mail message that originates outside the United States or is sent from the United States to other countries has a supplementary suffix that indicates the country of origin or destination. Examples include uk for the United Kingdom, fr for France, and au for Australia.
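The address convention above can be sketched in code. This is a minimal illustration, not a full address validator; the helper name `parse_address` is invented, and `sally@abc.com` is the article's own example.

```python
# Minimal sketch: splitting an e-mail address into the three parts
# described above (user name, domain name, organization suffix).
# The helper name parse_address is hypothetical.

def parse_address(address):
    """Split user@domain.suffix into its three parts."""
    user, _, domain = address.partition("@")
    name, _, suffix = domain.rpartition(".")
    return user, name, suffix

print(parse_address("sally@abc.com"))  # ('sally', 'abc', 'com')
```

Real addresses can have multiple dots in the domain (such as a country suffix after the organization type), so production code would use a proper parsing library rather than simple string splitting.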

E-mail data travels from the sender’s computer to a network tool called a message transfer agent (MTA) that, depending on the address, either delivers the message within that network of computers or sends it to another MTA for distribution over the Internet. The data file is eventually delivered to the private mailbox of the recipient, who retrieves and reads it using an e-mail program or MUA. The recipient may delete the message, store it, reply to it, or forward it to others.

Modems are important devices that have allowed for the use of e-mail beyond local area networks. Modems convert a computer’s binary language into an analog signal and transmit the signal over ordinary telephone lines. Modems may be used to send e-mail messages to any destination in the world that has modems and computers able to receive messages.

E-mail messages display technical information called headers and footers above and below the main message body. In part, headers and footers record the sender’s and recipient’s names and e-mail addresses, the times and dates of message transmission and receipt, and the subject of the message.
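Header fields of the kind just described can be read programmatically; Python's standard `email` module parses a raw message into its headers and body. The message text below is invented for illustration.

```python
# Reading header fields (sender, recipient, subject, date) from a raw
# e-mail message using Python's standard email module. The message
# itself is an invented example.
from email import message_from_string

raw = """\
From: sally@abc.com
To: bob@xyz.org
Subject: Meeting
Date: Mon, 4 Jan 1999 09:30:00 -0500

See you at noon.
"""

msg = message_from_string(raw)
print(msg["From"])       # sally@abc.com
print(msg["Subject"])    # Meeting
print(msg.get_payload()) # the message body
```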

In addition to the plain text contained in the body of regular e-mail messages, an increasing number of e-mail programs allow the user to send separate files attached to e-mail transmissions. This allows the user to append large text- or graphics-based files to e-mail messages.

E-mail has had a great impact on the amount of information sent worldwide. It has become an important method of transmitting information previously relayed via regular mail, telephone, courier, fax, television, and radio.


Windows, in computer science, personal computer operating system sold by Microsoft Corporation that allows users to enter commands with a point-and-click device, such as a mouse, instead of a keyboard. An operating system is a set of programs that control the basic functions of a computer. The Windows operating system provides users with a graphical user interface (GUI), which allows them to manipulate small pictures, called icons, on the computer screen to issue commands. Windows is the most widely used operating system in the world. It is an extension of and replacement for Microsoft’s Disk Operating System (MS-DOS).

The Windows GUI is designed to be a natural, or intuitive, work environment for the user. With Windows, the user can move a cursor around on the computer screen with a mouse. By pointing the cursor at icons and clicking buttons on the mouse, the user can issue commands to the computer to perform an action, such as starting a program, accessing a data file, or copying a data file. Other commands can be reached through pull-down or click-on menu items. The computer displays the active area in which the user is working as a window on the computer screen. The currently active window may overlap with other previously active windows that remain open on the screen. This type of GUI is said to include WIMP features: windows, icons, menus, and pointing device (such as a mouse).

Computer scientists at the Xerox Corporation’s Palo Alto Research Center (PARC) invented the GUI concept in the early 1970s, but this innovation was not an immediate commercial success. In 1983 Apple Computer featured a GUI in its Lisa computer. This GUI was updated and improved in its Macintosh computer, introduced in 1984.

Microsoft began its development of a GUI in 1983 as an extension of its MS-DOS operating system. Microsoft’s Windows version 1.0 first appeared in 1985. In this version, the windows were tiled, or presented next to each other rather than overlapping. Windows version 2.0, introduced in 1987, was designed to resemble IBM’s OS/2 Presentation Manager, another GUI operating system. Windows version 2.0 included the overlapping window feature. The more powerful version 3.0 of Windows, introduced in 1990, and subsequent versions 3.1 and 3.11 rapidly made Windows the market leader in operating systems for personal computers, in part because it was prepackaged on new personal computers. It also became the favored platform for software development.

In 1993 Microsoft introduced Windows NT (New Technology). The Windows NT operating system offers 32-bit multitasking, which gives a computer the ability to run several programs simultaneously, or in parallel, at high speed. This operating system competes with IBM’s OS/2 as a platform for the intensive, high-end, networked computing environments found in many businesses.

In 1995 Microsoft released a new version of Windows for personal computers called Windows 95. Windows 95 had a sleeker and simpler GUI than previous versions. It also offered 32-bit processing, efficient multitasking, network connections, and Internet access. Windows 98, released in 1998, improved upon Windows 95.

In 1996 Microsoft debuted Windows CE, a scaled-down version of the Microsoft Windows platform designed for use with handheld personal computers. Windows 2000, released at the end of 1999, combined Windows NT technology with the Windows 98 graphical user interface.

Other popular operating systems include the Macintosh System (Mac OS) from Apple Computer, Inc., OS/2 Warp from IBM, and UNIX and its variations, such as Linux.

Operating System



Operating System (OS), in computer science, the basic software that controls a computer. The operating system has three major functions: It coordinates and manipulates computer hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc, and tape; and it manages hardware errors and the loss of data.



Operating systems control different computer processes, such as running a spreadsheet program or accessing information from the computer's memory. One important process is the interpretation of commands that allow the user to communicate with the computer. Some command interpreters are text oriented, requiring commands to be typed in. Other command interpreters are graphically oriented and let the user communicate by pointing and clicking on an icon, an on-screen picture that represents a specific command. Beginners generally find graphically oriented interpreters easier to use, but many experienced computer users prefer text-oriented command interpreters because they are more powerful.
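The text-oriented command interpreter described above can be sketched as a simple read-and-dispatch loop. The command names (`dir`, `del`) and the file list are invented for illustration; no real shell is being modeled.

```python
# A toy text-oriented command interpreter: it reads a typed command and
# dispatches to the matching action. The commands and files are
# illustrative, not from any real operating system.

files = {"letter.txt", "budget.xls"}

def interpret(command):
    verb, _, arg = command.strip().partition(" ")
    if verb == "dir":           # list the files
        return sorted(files)
    if verb == "del":           # delete the named file
        files.discard(arg)
        return sorted(files)
    return "unknown command"

print(interpret("dir"))              # ['budget.xls', 'letter.txt']
print(interpret("del budget.xls"))  # ['letter.txt']
```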

Operating systems are either single-tasking or multitasking. The more primitive single-tasking operating systems can run only one process at a time. For instance, when the computer is printing a document, it cannot start another process or respond to new commands until the printing is completed.

All modern operating systems are multitasking and can run several processes simultaneously. In most computers there is only one central processing unit (CPU; the computational and control unit of the computer), so a multitasking OS creates the illusion of several processes running simultaneously on the CPU. The most common mechanism used to create this illusion is time-slice multitasking, whereby each process is run individually for a fixed period of time. If the process is not completed within the allotted time, it is suspended and another process is run. This exchanging of processes is called context switching. The OS performs the “bookkeeping” that preserves the state of a suspended process. It also has a mechanism, called a scheduler, that determines which process will be run next. The scheduler runs short processes quickly to minimize perceptible delay. The processes appear to run simultaneously because the user's sense of time is much slower than the processing speed of the computer.
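Time-slice multitasking as described above can be simulated in a few lines: each process runs for a fixed slice, and an unfinished process is suspended (a context switch) and re-queued. This is a minimal round-robin sketch, not how any particular operating system's scheduler actually works.

```python
# Simulation of time-slice multitasking: each process runs for one
# fixed slice; unfinished processes are suspended and re-queued.
from collections import deque

def round_robin(processes, slice_len):
    """processes: list of (name, total_time). Returns completion order."""
    queue = deque(processes)
    finished = []
    while queue:
        name, remaining = queue.popleft()       # scheduler picks next process
        remaining -= min(slice_len, remaining)  # run for one time slice
        if remaining == 0:
            finished.append(name)
        else:
            queue.append((name, remaining))     # context switch: suspend it
    return finished

print(round_robin([("A", 3), ("B", 1), ("C", 2)], slice_len=1))
# ['B', 'C', 'A']
```

Short processes (B, then C) finish first, which matches the scheduler goal of minimizing perceptible delay.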

Operating systems can use virtual memory to run processes that require more main memory than is actually available. With this technique, space on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is more time-consuming than accessing main memory, however, so performance of the computer slows.
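The virtual-memory trade-off can be illustrated by counting how often a page must be fetched from disk. The frame count and the first-in-first-out eviction policy below are illustrative choices; real operating systems use more sophisticated replacement algorithms.

```python
# Sketch of the virtual-memory idea: main memory holds only a few
# pages, and touching a page not in memory (a "page fault") forces a
# slow fetch from disk, evicting another page. FIFO eviction is an
# illustrative choice.
from collections import deque

def access_pages(references, frames):
    memory = deque()   # pages currently in main memory
    faults = 0
    for page in references:
        if page not in memory:
            faults += 1              # page must be fetched from disk
            if len(memory) == frames:
                memory.popleft()     # evict the oldest page to disk
            memory.append(page)
    return faults

print(access_pages([1, 2, 3, 1, 4, 1], frames=3))
# 5 faults: page 1 must be re-fetched after being evicted
```

Each fault stands for a slow disk access, which is why performance drops when a program needs much more memory than is physically available.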



Operating systems commonly found on personal computers include UNIX, Macintosh OS, MS-DOS, OS/2, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among academic computer users. Its popularity is due in large part to the growth of the interconnected computer network known as the Internet, the software for which initially was designed for computers that ran UNIX. Variations of UNIX include SunOS (distributed by Sun Microsystems, Inc.), Xenix (distributed by Microsoft Corporation), and Linux (available for download free of charge and distributed commercially by companies such as Red Hat, Inc.). UNIX and its clones support multitasking and multiple users. Its file system provides a simple means of organizing disk files and lets users protect their files from other users. The commands in UNIX are not intuitive, however, and mastering the system is difficult.

DOS (Disk Operating System) and its successor, MS-DOS, are popular operating systems among users of personal computers. The file systems of DOS and MS-DOS are similar to that of UNIX, but they are single user and single-tasking because they were developed before personal computers became relatively powerful. A multitasking variation is OS/2, initially developed by Microsoft Corporation and International Business Machines (IBM).

Few computer users run MS-DOS or OS/2 directly. They prefer versions of UNIX or windowing systems with graphical interfaces, such as Windows or the Macintosh OS, which make computer technology more accessible. However, graphical systems generally have the disadvantage of requiring more hardware—such as faster CPUs, more memory, and higher-quality monitors—than do command-oriented operating systems.



Operating systems continue to evolve. A recently developed type of OS called a distributed operating system is designed for a connected, but independent, collection of computers that share resources such as hard drives. In a distributed OS, a process can run on any computer in the network (presumably a computer that is idle) to increase that process's performance. All basic OS functions—such as maintaining file systems, ensuring reasonable behavior, and recovering data in the event of a partial failure—become more complex in distributed systems.

Research is also being conducted that would replace the keyboard with a means of using voice or handwriting for input. Currently these types of input are imprecise because people pronounce and write words very differently, making it difficult for a computer to recognize the same input from different users. However, advances in this field have led to systems that can recognize a small number of words spoken by a variety of people. In addition, software has been developed that can be taught to recognize an individual's handwriting.

International Business Machines Corporation



International Business Machines Corporation (IBM), one of the world’s largest manufacturers of computers and a leading provider of computer-related products and services worldwide. IBM makes computer hardware, software, microprocessors, communications systems, servers, and workstations. Its products are used in business, government, science, defense, education, medicine, and space exploration. IBM has its headquarters in Armonk, New York.



The company was incorporated in 1911 as Computing-Tabulating-Recording Company in a merger of three smaller companies. After further acquisitions, it absorbed the International Business Machines Corporation in 1924 and assumed that company’s name. Thomas Watson, who had joined the company in 1914, built the floundering company into an industrial giant. IBM soon became the country’s largest manufacturer of time clocks and punch-card tabulators. It also developed and marketed the first electric typewriter.



IBM entered the market for digital computers in the early 1950s, after the introduction of the UNIVAC computer by rival Remington Rand in 1951. The development of IBM’s computer technology was largely funded by contracts with the U.S. government’s Atomic Energy Commission, and close parallels existed between products made for government use and those introduced by IBM into the public marketplace. In the 1960s IBM distinguished itself with two innovations: the concept of a family of computers (its System/360 family, introduced in 1964) in which the same software could be run across the entire family; and a corporate policy dictating that no customer would be allowed to fail in implementing an IBM system. This policy spawned enormous loyalty to “Big Blue,” as IBM came to be known.

IBM’s dominant position in the computer industry has led the U.S. Department of Justice to file several antitrust suits against the company. IBM lost an antitrust case in 1936, when the Supreme Court of the United States ruled that IBM and Remington Rand were unfairly controlling the punch-card market and illegally forcing customers to buy their products. In 1956 IBM settled another lawsuit filed by the Department of Justice. IBM agreed to sell its tabulating machines rather than just leasing them, to establish a competitive market for used machines. In 1982 the Justice Department abandoned a federal antitrust suit against IBM after 13 years of litigation.

From the 1960s until the 1980s IBM dominated the global market for mainframe computers, although in the 1980s IBM lost market share to other manufacturers in specialty areas such as high-performance computing. When minicomputers were introduced in the 1970s IBM viewed them as a threat to the mainframe market and failed to recognize their potential, opening the door for such competitors as Digital Equipment Corporation, Hewlett-Packard Company, and Data General.



In 1981 IBM introduced its first personal computer, the IBM PC, which was rapidly adopted in businesses and homes. The computer was based on the 8088 microprocessor made by Intel Corporation and the MS-DOS operating system made by Microsoft Corporation. The PC’s enormous success led to other models, including the XT and AT lines. Seeking to capture a share of the personal-computer market, other companies developed clones of the PC, known as IBM-compatibles, that could run the same software as the IBM PC. By the mid-1980s these clone computers far outsold IBM personal computers.

In the mid-1980s IBM collaborated with Microsoft to develop an operating system called OS/2 to replace the aging MS-DOS. OS/2 ran older applications written for MS-DOS and newer, OS/2-specific applications that could run concurrently with each other in a process called multitasking. IBM and Microsoft released the first version of OS/2 in 1987. In 1991 Microsoft and IBM ended their collaboration on OS/2. IBM released several new versions of the operating system throughout the 1990s, while Microsoft developed its Windows operating systems.

In the late 1980s IBM was the world’s largest producer of a full line of computers and a leading producer of office equipment, including typewriters and photocopiers. The company was also the largest manufacturer of integrated circuits. The sale of mainframe computers and related software and peripherals accounted for nearly half of IBM’s business and about 70 to 80 percent of its profits.



In the early 1990s, amid a recession in the U.S. economy, IBM reorganized itself into autonomous business units more closely aligned to the company’s markets. The company suffered record losses in 1992 and, for the first time in its history, IBM cut stock dividends (to less than half of their previous value). John F. Akers, chairman of IBM since 1985, resigned in early 1993. Louis V. Gerstner, Jr., was named chairman of the company later that year. In 1995, IBM paid $3.5 billion to acquire Lotus Development Corporation, a software company, expanding its presence in the software industry. Beginning in 1996, IBM began increasing its stock dividends as the company returned to profitability. In 1997 an IBM supercomputer known as Deep Blue defeated world chess champion Garry Kasparov in a six-game chess match. The victory was hailed as a milestone in the development of artificial intelligence.

In 1998 IBM built the world’s fastest supercomputer for the Department of Energy at the Lawrence Livermore National Laboratory. The computer is capable of 3.9 trillion calculations per second and was developed to simulate nuclear-weapons tests. In 1999 IBM announced a $16-billion, seven-year agreement to provide Dell Computer Corporation with storage, networking, and display peripherals, the largest agreement of its kind at the time. IBM also announced plans to develop products and provide support for Linux, a free version of the UNIX operating system.

Programming Language



Programming Language, in computer science, artificial language used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.

Application of Programming Languages

Programming languages allow people to communicate with computers. Once a job has been identified, the programmer must translate, or code, it into a list of instructions that the computer will understand. A computer program for a given task may be written in several different languages. Depending on the task, a programmer will generally pick the language that will involve the least complicated program. It may also be important to the programmer to pick a language that is flexible and widely compatible if the program will have a range of applications. A classic introductory task, for example, is a program that averages a list of numbers; it could be written in C, BASIC, or many other languages, and in every case the computer would ultimately process and execute it as machine-language instructions.
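The averaging task mentioned above can be sketched briefly. This version is in Python rather than the C or BASIC of the original illustration, but the structure is the same in any procedural language: accumulate a total, then divide by the count.

```python
# Averaging a list of numbers, the classic introductory programming task.
def average(numbers):
    total = 0
    for n in numbers:   # add up every number in the list
        total += n
    return total / len(numbers)

print(average([4, 8, 6, 2]))  # 5.0
```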

Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as Fortran and COBOL were written to solve certain general types of programming problems—Fortran for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that they may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used programming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL, and BASIC fall into this category.



Programming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-level languages are C, C++, PASCAL, and Fortran. Assembly languages are intermediate languages that are very close to machine language and do not have the level of linguistic sophistication exhibited by other high-level languages, but must still be translated into machine language.


Machine Languages

In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM), (2) a simple operation to perform, such as adding the two numbers together, (3) where in the main memory to put the result of this simple operation, and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
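The four parts of a machine instruction listed above can be made concrete with a toy simulator. The three-field instruction format, the opcodes, and the register names below are all invented for illustration; no real instruction set works exactly this way.

```python
# Toy illustration of the parts of a machine instruction: operand
# locations, an operation to perform, a place to put the result, and
# where to find the next instruction. The encoding is invented.
def run(program, registers):
    pc = 0                              # address of the next instruction
    while pc < len(program):
        op, src, dst = program[pc]
        if op == "LOAD":                # put a constant into register dst
            registers[dst] = src
        elif op == "ADD":               # add register src into register dst
            registers[dst] += registers[src]
        pc += 1                         # move on to the next instruction
    return registers

regs = run([("LOAD", 5, "A"), ("LOAD", 7, "B"), ("ADD", "A", "B")],
           {"A": 0, "B": 0})
print(regs)  # {'A': 5, 'B': 12}
```

A real machine encodes each of these fields as the sequences of 1s and 0s described in the text; the simulator just makes the fields readable.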


High-Level Languages

High-level languages are relatively sophisticated sets of statements utilizing words and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.


Assembly Language

Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine language instruction. An assembly language statement is composed with the aid of easy-to-remember commands. The command to add the contents of the storage register A to the contents of storage register B might be written ADD B,A in a typical assembly language statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.
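The roughly one-to-one translation from assembly statements to machine instructions can be sketched as a tiny assembler. The opcodes, register codes, and bit layout below are all invented for illustration, as is the ADD B,A example's encoding.

```python
# Sketch of translating an assembly statement such as "ADD B,A" into a
# machine instruction. The opcodes, register codes, and bit layout are
# invented for illustration.
OPCODES = {"ADD": "0001", "SUB": "0010"}
REGISTERS = {"A": "00", "B": "01"}

def assemble(statement):
    """Translate e.g. 'ADD B,A' into a string of machine-code bits."""
    mnemonic, operands = statement.split()
    dst, src = operands.split(",")
    return OPCODES[mnemonic] + REGISTERS[dst] + REGISTERS[src]

print(assemble("ADD B,A"))  # 00010100
```

Because each statement maps directly to one instruction, this translation is fast and simple, which is exactly why assembly is used when run time must be minimized.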



High-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as “procedure A.” If the same sequence of operations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just a mini-program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be used in different situations.
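A procedure as described above, in a short Python sketch; the procedure name and data are invented. The point is that the named block of statements is written once and referred to from anywhere it is needed.

```python
# A procedure: a named block of statements that can be invoked from
# elsewhere in the program instead of repeating its code.
def report(name, score):            # "procedure A" in the text's terms
    return f"{name}: {score} points"

# The same procedure reused in different places with different data:
lines = [report("Ann", 12), report("Bo", 7)]
print(lines)  # ['Ann: 12 points', 'Bo: 7 points']
```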

Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables—symbols for data that can be specified and changed by the user as the program is running—to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared, or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages.
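Treating procedures as data, as functional languages allow, can be shown in Python, which borrows this idea from functional languages. The function names below are invented.

```python
# A function passed to another function like any other value, the core
# idea the paragraph above describes.
def apply_twice(f, x):
    return f(f(x))      # the function f is itself an argument

def increment(n):
    return n + 1

print(apply_twice(increment, 3))  # 5
```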

Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects. Objects are further grouped into classes, which define the attributes objects must have. A simple example of a class is the class Book. Objects within this class might be Novel and Short Story. Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object’s methods. The method performs some action to the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks.
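The article's Book example can be written out as object-oriented code. In this sketch the data (a title) and the method that acts on it are grouped into the class Book, and Novel inherits that method through the class hierarchy.

```python
# Data and the methods that act on it grouped into classes, with
# inheritance: Novel inherits the describe method from Book.
class Book:
    def __init__(self, title):
        self.title = title       # data stored in the object

    def describe(self):          # a method: acts on the object's data
        return f"Book: {self.title}"

class Novel(Book):               # Novel inherits from the class Book
    pass

print(Novel("Moby-Dick").describe())  # Book: Moby-Dick
```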

Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example:

If the statement X is true, then the statement Y is false.

In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.
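The deduction process described above can be sketched in ordinary code. The following Python fragment (a toy illustration, not a real logic language) applies if-then rules to a set of facts until nothing new can be deduced:

```python
# Toy forward-chaining sketch: each rule says "if these facts hold, add this
# one." The fact and rule names are purely illustrative.

facts = {"X"}
rules = [({"X"}, "Y"), ({"Y"}, "Z")]   # if X then Y; if Y then Z

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# facts is now {"X", "Y", "Z"}: Z was deduced though it was never stated
```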



Programming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea—its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program).

Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable—that is, they point to another variable.

An expression is a piece of a statement that describes a series of computations to be performed on some of the program’s variables, such as X + Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived from some expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next.

Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit minitranslation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands.



Programming languages date back almost to the invention of the digital computer in the 1940s. The first assembly languages emerged in the early 1950s, as commercial computers were introduced. The first procedural languages were developed in the late 1950s to early 1960s: Fortran (FORmula TRANslation), created by a team led by John Backus, and then COBOL (COmmon Business Oriented Language), designed by a committee that drew heavily on Grace Hopper's earlier FLOW-MATIC language. The first functional language was LISP (LISt Processing), written by John McCarthy in the late 1950s. Although heavily updated, all three languages are still widely used today.

In the late 1960s, the first object-oriented languages, such as SIMULA, emerged. Logic languages became well known in the mid 1970s with the introduction of PROLOG, a language used to program artificial intelligence software. During the 1970s, procedural languages continued to develop with ALGOL, BASIC, PASCAL, C, and Ada. SMALLTALK was a highly influential object-oriented language that led to the merging of object-oriented and procedural languages in C++ and more recently in JAVA. Although pure logic languages have declined in popularity, variations have become vitally important in the form of relational languages for modern databases, such as SQL (Structured Query Language).


Fortran, in computer science, acronym for FORmula TRANslation. The first high-level computer language (developed 1954-1958 by John Backus) and the progenitor of many key high-level concepts, such as variables, expressions, statements, iterative and conditional statements, separately compiled subroutines, and formatted input/output. Fortran is a compiled, structured language. The name indicates its scientific and engineering roots; Fortran is still used heavily in those fields, although the language itself has been expanded and improved vastly over the last 35 years to become a language that is useful in any field.


COBOL, in computer science, acronym for COmmon Business-Oriented Language, a verbose, English-like programming language developed between 1959 and 1961. Its establishment as a required language by the U.S. Department of Defense, its emphasis on data structures, and its English-like syntax (compared to those of Fortran and ALGOL) led to its widespread acceptance and usage, especially in business applications. Programs written in COBOL, which is a compiled language, are split into four divisions: Identification, Environment, Data, and Procedure. The Identification division specifies the name of the program and contains any other documentation the programmer wants to add. The Environment division specifies the computer(s) being used and the files used in the program for input and output. The Data division describes the data used in the program. The Procedure division contains the procedures that dictate the actions of the program. See also Computer.

C (computer)

C (computer), in computer science, a programming language developed by Dennis Ritchie at Bell Laboratories in 1972; so named because its immediate predecessor was the B programming language. Although C is considered by many to be more a machine-independent assembly language than a high-level language, its close association with the UNIX operating system, its enormous popularity, and its standardization by the American National Standards Institute (ANSI) have made it perhaps the closest thing to a standard programming language in the microcomputer/workstation marketplace. C is a compiled language that contains a small set of built-in functions that are machine dependent. The rest of the C functions are machine independent and are contained in libraries that can be accessed from C programs. C programs are composed of one or more functions defined by the programmer; thus C is a structured programming language.

Pascal (computer)

Pascal (computer), a concise procedural computer programming language, designed 1967-71 by Niklaus Wirth. Pascal, a compiled, structured language, built upon ALGOL, simplifies syntax while adding data types and structures such as subranges, enumerated data types, files, records, and sets. Acceptance and use of Pascal exploded with Borland International's introduction in 1984 of Turbo Pascal, a high-speed, low-cost Pascal compiler for MS-DOS systems that has sold over a million copies in its various versions. Even so, Pascal appears to be losing ground to C as a standard development language on microcomputers.


BASIC, in computer science, acronym for Beginner's All-purpose Symbolic Instruction Code. A high-level programming language developed by John Kemeny and Thomas Kurtz at Dartmouth College in the mid-1960s. BASIC gained its enormous popularity mostly because of two implementations, Tiny BASIC and Microsoft BASIC, which made BASIC the first lingua franca of microcomputers. Other important implementations have been CBASIC (Compiled BASIC), Integer and Applesoft BASIC (for the Apple II), GW-BASIC (for the IBM PC), Turbo BASIC (from Borland), and Microsoft QuickBASIC. The language has changed over the years. Early versions are unstructured and interpreted. Later versions are structured and often compiled. BASIC is often taught to beginning programmers because it is easy to use and understand and because it contains the same major concepts as many other languages thought to be more difficult, such as Pascal and C.


C++, in computer science, an object-oriented version of the C programming language, developed by Bjarne Stroustrup in the early 1980s at Bell Laboratories and adopted by a number of vendors, including Apple Computer, Sun Microsystems, Borland International, and Microsoft Corporation.

Assembly Language

Assembly Language, in computer science, a type of low-level computer programming language in which each statement corresponds directly to a single machine instruction. Assembly languages are thus specific to a given processor. After writing an assembly language program, the programmer must use the assembler specific to the microprocessor to translate the assembly language into machine code. Assembly language provides precise control of the computer, but assembly language programs written for one type of computer must be rewritten to operate on another type. Assembly language might be used instead of a high-level language for any of three major reasons: speed, control, and preference. Programs written in assembly language usually run faster than those generated by a compiler; use of assembly language lets a programmer interact directly with the hardware (processor, memory, display, and input/output ports).


LISP, in computer science, acronym for List Processing. A list-oriented computer programming language developed in 1959-1960 by John McCarthy and used primarily to manipulate lists of data. LISP was a radical departure from the procedural languages (Fortran, ALGOL) then being developed; it is an interpreted language in which every expression is a list of calls to functions. LISP continues to be heavily used in research and academic circles and has long been considered the “standard” language for artificial-intelligence (AI) research, although Prolog has made inroads into that claim in recent years.


PROLOG, in computer science, an acronym for PROgramming in LOGic (from the French programmation en logique), a computer programming language important in the development of artificial intelligence software during the 1970s and 1980s. Unlike traditional programming languages, which process only numerical data and instructions, PROLOG processes symbols and relationships. It is designed to perform search functions that establish relationships within a program. This combination of symbolic processing and logic searching made PROLOG the preferred language during the mid-1980s for creating programs that mimic human behavior.

PROLOG was developed in the early 1970s at the University of Marseille in France by Alain Colmerauer. Colmerauer believed that traditional computer-programming languages, such as Fortran and COBOL, were inappropriate for representing human logic in a machine. Colmerauer's primary goal was to communicate with computers using conversational language instead of programmer's jargon. He concluded that strict symbolic logic was the appropriate bridge between human and machine.

A PROLOG program is made up of facts and rules that are usually limited to a single domain, such as marine life, accounting, or aircraft maintenance. Once a database is built for that domain, PROLOG searches the database and forms relationships between facts. PROLOG's functions are designed to prove that a proposition is either valid or invalid. This is done by applying logic to the available facts, such as “A hammerhead is a shark” and “Madeline likes all sharks.” Rules are built from facts: “Madeline likes X if X is a shark.” If the program's database identifies certain symbolic entities—such as hammerheads, makos, and great whites—as sharks, then PROLOG can use the rule to determine that Madeline likes hammerheads, makos, and great whites, even though that information was not specifically programmed into the database.
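The shark deduction can be mimicked in ordinary code as a fact base plus a rule (a sketch in Python, not PROLOG itself):

```python
# The shark example from the text: facts in a set, a rule as a function.
sharks = {"hammerhead", "mako", "great white"}   # facts: "X is a shark"

def likes(person, animal):
    # rule: Madeline likes X if X is a shark
    return person == "Madeline" and animal in sharks

print(likes("Madeline", "mako"))     # True, though never stated directly
print(likes("Madeline", "dolphin"))  # False: no fact or rule supports it
```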

This form of artificial intelligence is valuable in situations in which a fault or malfunction can be specifically identified—as in equipment maintenance and repair—and in cases in which the answers are of the YES/NO or TRUE/FALSE variety. Due to the rigidity of the applied logic, however, PROLOG has difficulty with imprecise data or fuzzy sets.

In the United States the artificial-intelligence community of the late 1970s ignored PROLOG in favor of a competing artificial-intelligence language, LISP, which was developed by John McCarthy at the Massachusetts Institute of Technology in Cambridge, Massachusetts. In Europe, however, PROLOG captured the interest of researchers and by the mid-1980s it became the preferred language for building expert systems. In 1981, when the Japanese government initiated a national project to develop commercial artificial intelligence, it adopted PROLOG as its standard programming language.

Many of the features that were once unique to PROLOG are now used in modern object-oriented programming, a programming technique that is becoming the standard for software development.

Object-Oriented Programming

Object-Oriented Programming (OOP), in computer science, type of high-level computer language that uses self-contained, modular instruction sets for defining and manipulating aspects of a computer program. These discrete, pre-defined instruction sets are called objects and may be used to define variables, data structures, and procedures for executing data operations. In OOP, objects have built-in rules for communicating with one another. They can also be manipulated or combined in various ways to modify existing programs and to create entirely new ones from pieces of other programs. See also Computer Program, Programming Language.

One especially powerful feature of OOP languages is a property known as inheritance. Inheritance allows an object to take on the characteristics and functions of other objects to which it is functionally connected. Programmers connect objects by grouping them together in different classes and by grouping the classes into hierarchies. These classes and hierarchies allow programmers to define the characteristics and functions of objects without needing to repeat source code, the coded instructions in a program. Thus, using OOP languages can greatly reduce the time it takes for a programmer to write an application, and also reduce the size of the program. OOP languages are flexible and adaptable, so programs or parts of programs can be used for more than one task. Programs written with OOP languages are generally shorter in length and contain fewer bugs, or mistakes, than those written with non-OOP languages.
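Inheritance as described can be sketched briefly (the class names are invented for illustration):

```python
# Sketch of inheritance: a subclass takes on the base class's functions
# without repeating its source code; only the differing behavior is written.

class Shape:
    def __init__(self, name):
        self.name = name

    def describe(self):             # inherited unchanged by every subclass
        return f"{self.name} with area {self.area():.1f}"

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self.side = side

    def area(self):                 # the only code Square must supply
        return self.side ** 2

print(Square(3).describe())         # square with area 9.0
```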

The first OOP language was Smalltalk, developed by Alan Kay at the Palo Alto Research Center of the Xerox Corporation in the early 1970s. By using objects, Smalltalk allowed programmers to focus on and specify the task to be performed from the top down, rather than laboring on detailed, ground-up procedures, which were embedded in the language structure. Smalltalk, however, has not found widespread use.

The most popular OOP language is C++, developed by Bjarne Stroustrup at Bell Laboratories in the early 1980s. In May 1995 Sun Microsystems, Inc. released Java, a new OOP language, which has drawn worldwide interest. In some ways Java represents a simplified version of C++, but it adds other features and capabilities as well, and is particularly well suited for writing interactive applications to be used on the World Wide Web.


Ada, in computer science, a procedural programming language designed under the direction of the U.S. Department of Defense (DOD) in the late 1970s and intended to be the primary language for DOD software development. Ada, named after (Augusta) Ada Byron, Countess of Lovelace, who was a pioneer in the field of computers, was derived from Pascal but has major semantic and syntactical extensions, including concurrent execution of tasks, overloading of operators, and modules.

Java (computer)

Java (computer), in computer science, object-oriented programming language introduced in 1995 by Sun Microsystems, Inc. Java facilitates the distribution of both data and small applications programs, called applets, over the Internet. Java applications do not interact directly with a computer’s central processing unit (CPU) or operating system and are therefore platform independent, meaning that they can run on any type of personal computer, workstation, or mainframe computer. This cross-platform capability, referred to as “write once, run everywhere,” has caught the attention of many software developers and users. With Java, software developers can write applications that will run on otherwise incompatible operating systems such as Windows, the Macintosh operating system, OS/2, or UNIX.

To use a Java applet on the World Wide Web (WWW)—the system of software and protocols that allows multimedia documents to be viewed on the Internet—a user must have a Java-compatible browser, such as Navigator from Netscape Communications Corporation, Internet Explorer from Microsoft Corporation, or HotJava from Sun Microsystems. A browser is a software program that allows the user to view text, photographs, graphics, illustrations, and animations on the WWW. Java applets achieve platform independence through the use of a virtual machine, a special program within the browser software that interprets the bytecode—the intermediate code into which the applet’s source program is compiled—for the computer’s CPU. The virtual machine is able to translate the platform-independent bytecode into the platform-dependent machine code that a specific computer’s CPU understands.

Applications written in Java are usually embedded in Web pages, or documents, and can be run by clicking on them with a mouse. When an applet is run from a Web page, a copy of the application program is sent to the user’s computer over the Internet and stored in the computer’s main memory. The advantage of this method is that once an applet has been downloaded, it can be interacted with in real time by the user. This is in contrast to other programming languages used to write Web documents and interactive programs, in which the document or program is run from the server computer. The problem with running software from a server is that it generally cannot be run in real time due to limitations in network or modem bandwidth—the amount of data that can be transmitted in a certain amount of time.

Java grew out of a research project at Sun Microsystems in the early 1990s that focused on controlling different consumer electronics devices using the same software. The original version of Java, called Oak, needed to be simple enough to function with the modest microprocessors found in such consumer devices. Following the introduction of the National Center for Supercomputing Applications’ (NCSA) Mosaic browser in 1993, Oak was recast by Sun Microsystems developers. In 1994 Sun Microsystems released a Java-compatible Internet browser, called HotJava, that was designed to read and execute Java applets on the WWW. Netscape Communications licensed Java from Sun Microsystems in November 1995, and its Navigator 3.0 browser supports Java applications. Microsoft also licensed Java, in 1996, for its Internet Explorer 3.0 browser. Microsoft developed a programming language, called Visual J++, to integrate Java, through its ActiveX technology, with its browser. Visual J++ is optimized for the Windows operating system. Various other WWW browsers are also capable of supporting Java applications and applets.

JavaSoft, a division of Sun Microsystems with responsibility for Java and its business development, has created JavaOS, a compact operating system for use on its own JavaStation network computers, now in development, as well as, possibly, in cellular telephones and pagers.

Structured Query Language

Structured Query Language (SQL), in computer science, a database sublanguage used in querying, updating, and managing relational databases. Derived from an IBM research project that created Structured English Query Language (SEQUEL) in the 1970s, SQL is an accepted standard in database products. Although it is not a programming language in the same sense as C or Pascal, SQL can either be used in formulating interactive queries or be embedded in an application as instructions for handling data. The SQL standard also contains components for defining, altering, controlling, and securing data. SQL is designed for both technical and nontechnical users.
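As a sketch of how SQL can be embedded in an application, the following uses Python's built-in sqlite3 module (the table and column names are invented for illustration):

```python
# Hedged sketch of embedded SQL: SQL statements carried inside a host
# program. SQL describes *what* data to fetch; the database decides how.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.execute("INSERT INTO books VALUES ('Moby-Dick', 1851), ('Dracula', 1897)")

rows = conn.execute("SELECT title FROM books WHERE year < 1890").fetchall()
print(rows)                         # [('Moby-Dick',)]
```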

Hypertext Markup Language (HTML)

Hypertext Markup Language (HTML) in computer science, the standard text-formatting language since 1989 for documents on the interconnected computing network known as the World Wide Web. HTML documents are text files that contain two parts: content that is meant to be rendered on a computer screen; and markup or tags, encoded information that directs the text format on the screen and is generally hidden from the user. HTML is a subset of a broader language called Standard Generalized Markup Language (SGML), which is a system for encoding and formatting documents, whether for output to a computer screen or to paper.

Some tags in an HTML document determine the way certain text, such as titles, will be formatted. Other tags cue the computer to respond to the user's actions on the keyboard or mouse. For instance, the user might click on an icon (a picture that represents a specific command), and that action might call another piece of software to display a graphic, play a recording, or run a short movie. Another important tag is a link, which may contain the Uniform Resource Locator (URL) of another document. The URL can be compared to an address where a particular document resides. The document may be stored on the same computer as the parent document or on any computer connected to the World Wide Web. The user can navigate from document to document simply by clicking on these links. HTML also includes markup for forms, which lets the user fill out information and electronically send, or e-mail, the data to the document author, initiate sophisticated searches of information on the Internet, or order goods and services.
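The two parts described—visible content and hidden tags—can be sketched in a minimal HTML document (the address in the link is a placeholder, not a real location):

```html
<!-- Minimal sketch: tags direct formatting; the text between them is the
     content the browser renders. -->
<html>
  <body>
    <h1>A Title Tagged for Formatting</h1>
    <p>Plain content, with a
       <a href="http://example.com/other.html">link to another document</a>.
    </p>
  </body>
</html>
```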

The software that permits the user to navigate the World Wide Web and view HTML-encoded documents is called a browser. It interprets the HTML tags in a document and formats the content for screen display. Since HTML is an accepted standard, anyone can build a browser without concerning themselves with what form various documents will assume, unlike documents produced by typical word processors, which must be translated into a different format if another word processing application is used. Most sites on the World Wide Web adhere to HTML standards and, because HTML is easy to use, the World Wide Web has grown rapidly. HTML continues to evolve, however, so browsers must be upgraded regularly to meet the revised standards.

Computer Graphics



Computer Graphics, two- and three-dimensional images created by computers that are used for scientific research, artistic pursuits, and in industries to design, test, and market products. Computer graphics have made computers easier to use. Graphical user interfaces (GUIs) and multimedia systems such as the World Wide Web, the system of interconnected worldwide computer resources, enable computer users to select pictures to execute orders, eliminating the need to memorize complex commands.



Before an image can be displayed on the screen, it must be created by a computer program in a special part of the computer's memory called the frame buffer. A graphical image is formed by dividing the computer's display screen into a grid of tiny dots called pixels; the frame buffer stores information about each pixel, such as its color. One method of producing an image in the frame buffer is to use a block of memory called a bitmap to store small, detailed figures such as a text character or an icon.


Color Representation

Computers store and manipulate colors by representing them as a combination of three numbers. For example, in the Red-Green-Blue (RGB) color system, the computer uses one number each to represent the red, green, and blue primary components of the color. Alternate schemes may represent other color properties such as the hue (the frequency of the light), saturation (the purity or intensity of the color), and value (the brightness).

If one byte of memory is used to store each color component in a three-color system, then over 16 million color combinations can be represented. But in the creation of a large image, allowing so many combinations can be very costly in terms of memory and processing time. An alternate method, color mapping, uses only one number per color combination, storing each number in a table of available colors like a painter's palette. The problem with color mapping is that the number of colors in the palette is usually too small to create realistically colored images. Choosing the colors that make the best image for the palette, called color quantization, becomes a very important part of the image-making process. Another method, called dithering, alternates the limited palette colors throughout the image—much like the patterns of dots in a newspaper comic strip—to give the appearance of more colors than are actually in the image.
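The arithmetic behind the figures above is easy to check; the following Python sketch also shows one common way of packing the three one-byte components into a single number (the packing order is a convention, not a requirement):

```python
# One byte per RGB component gives 256 levels each, hence the "over 16
# million combinations" in the text; three bytes often share one integer.

combinations = 256 ** 3             # 16,777,216 representable colors

def pack_rgb(r, g, b):
    # each component occupies its own byte within a single integer
    return (r << 16) | (g << 8) | b

orange = pack_rgb(255, 165, 0)      # 0xFFA500
```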


Aliasing and Anti-Aliasing

Bézier Curve: Computer Graphics Tool — a Bézier curve is used in computer graphics programs to illustrate numerous shapes. Graphic artists move the handles to create many different shapes, while the anchor points remain stationary.

Since a computer monitor is essentially a grid of colored squares arranged like a sheet of graph paper, diagonal lines tend to be displayed with a jagged “stair step” appearance. This effect, called aliasing, can be lessened by calculating how close each pixel is to the ideal line of the drawn image and then basing the pixel's color on its distance from this line. For example, if the pixel is directly on the line, it may be given the darkest color, and if it is only partially on the line, it may be given a lighter color. This process effectively smooths the line.
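The distance-based shading described above can be sketched as follows (a toy illustration for the ideal line y = x; real anti-aliasing filters are more elaborate):

```python
# Anti-aliasing sketch: a pixel's shade depends on its distance from the
# ideal line. For the line y = x, that distance is |y - x| / sqrt(2).
import math

def coverage(px, py, max_dist=1.0):
    dist = abs(py - px) / math.sqrt(2)      # distance from the line y = x
    return max(0.0, 1.0 - dist / max_dist)  # 1.0 = darkest, 0.0 = background

print(coverage(4, 4))       # 1.0 (pixel exactly on the line)
print(coverage(4, 5))       # about 0.29 (partially covered, drawn lighter)
```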


Image Processing

Image processing is among the most powerful and important tools in computer graphics. Its underlying techniques are used for many applications, such as detecting the edge of objects; enhancing images and removing noise in medical imaging; and blurring, sharpening, and brightening images in feature films and commercials.

Image warping lets the user manipulate and deform an image over time. The most popular use of image warping is morphing, in which one image deforms and dissolves into another. Morphing is different from similar processes, in which one image simply fades into another, because the actual structures of the original image change. To morph an image, the user specifies corresponding points on the original and final objects that the computer then distorts until one image becomes the other. These transformation points are usually either a grid overlaid on each object or a specific set of features, such as the nose, eyes, and ears of two faces to be morphed.



Many uses of computer graphics, such as computer animation, computer-aided design and manufacturing (CAD/CAM), video games, and scientific visualization of data such as magnetic resonance images of internal organs, require drawing three-dimensional (3D) objects on the computer screen. The drawing of 3D scenes, called rendering, is usually accomplished using a pipeline or assembly-line approach, in which several program instructions can, at any given time, be executed in various stages on different data.

This graphics pipeline is implemented either with special-purpose 3D graphics microprocessors (hardware) or with computer programs (software). Hardware rendering can be expensive, but it enables the user to draw up to 60 images per second and to make immediate changes to the image. Software renderers are very slow, requiring from a few hours to a full day to render a single image. However, computer animation almost always uses software renderers because they provide greater control of the images and potentially photo-realistic quality.



The first step in a rendering pipeline is the creation of 3D objects. The surface of an object, such as a sphere, is represented either as a series of curved surfaces or as polygons, usually triangles. The points on the surface of the object, called vertices, are represented in the computer by their spatial coordinates. Other characteristics of the model, such as the color of each vertex and the direction perpendicular to the surface at each vertex, called the normal, also must be specified. Since polygons do not create smooth surfaces, detailed models require an extremely large number of polygons to create an image that looks natural.

Another technique used to create smooth surfaces relies on a parametric surface, a two-dimensional (2D) surface existing in three dimensions. For example, a world globe can be considered a 2D surface with latitude and longitude coordinates representing it in three dimensions. More complex surfaces, such as knots, can be specified in a similar manner.



Once these models have been created, they are placed in a computer-generated background. For example, a rendered sphere might be set against a backdrop of clouds. User instructions specify the object's size and orientation. Then the colors, their locations, and the direction of light within the computer-generated scene, as well as the location and direction of the viewing angle of the scene, are selected.

At this point, the computer program generally breaks up complex geometric objects into simple “primitives,” such as triangles. Next, the renderer determines where each primitive will appear on the screen by using the information about the viewing position and the location of each object in the scene.


Lighting and Shading

Once a primitive has been located, it must be shaded. Shading information is calculated for each vertex based on the location and color of the light in the computer-generated scene, the orientation of each surface, the color and other surface properties of the object at that vertex, and possible atmospheric effects that surround the object, such as fog.

Graphics hardware most commonly uses Gouraud shading, which calculates the lighting at the vertices of the primitive and interpolates, or blends, the resulting colors across the surface to make the object appear more realistic. Phong shading instead interpolates the normal—the direction perpendicular to the surface at each vertex—across the surface and calculates the lighting at each pixel, which represents highlights more accurately. This provides a better approximation of the surface but requires more calculation.
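The interpolation at the heart of Gouraud shading can be sketched in a few lines (a simplified, one-dimensional span between two vertex colors):

```python
# Sketch of Gouraud-style blending: colors computed at the vertices are
# interpolated across the pixels between them.

def lerp(a, b, t):
    return a + (b - a) * t

def gouraud_span(c0, c1, steps):
    # blend each RGB channel from one vertex color to the other
    return [tuple(round(lerp(x, y, i / (steps - 1))) for x, y in zip(c0, c1))
            for i in range(steps)]

# dark red at one vertex, bright red at the other, five pixels between them
span = gouraud_span((100, 0, 0), (255, 0, 0), 5)
```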



Several techniques permit the artist to add realistic details to models with simple shapes. The most common method is texture mapping, which maps or applies an image to an object's surface like wallpaper. For example, a brick pattern could be applied to a rendered sphere. In this process only the object's shape, not features of the texture, such as the rectangular edges and grout lines of the brick, affect the way the object looks in lighting; the sphere still appears smooth. Another technique, called bump mapping, provides a more realistic view by creating highlights to make the surface appear more complex. In the example of the brick texture, bump mapping might provide shadowing in the grout lines and highlights upon some brick surfaces. Bump mapping does not affect the look of the image's silhouette, which remains the same as the basic shape of the model. Displacement mapping addresses this problem by physically offsetting the actual surface according to a displacement map. For example, the brick texture applied to the sphere would extend to the sphere's silhouette, giving it an uneven texture.



Once the shading process has produced a color for each pixel in a primitive, the final step in rendering is to write that color into the frame buffer. Frequently, a technique called Z buffering is used to determine which primitive is closest to the viewing location and angle of the scene, ensuring that objects hidden behind others will not be drawn. Finally, if the surface being drawn is semitransparent, the front object's color is blended with that of the object behind it.
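The Z-buffer test described above amounts to a depth comparison per pixel; a minimal one-row sketch:

```python
# Z-buffering sketch: a pixel is overwritten only when the new primitive is
# closer to the viewer than whatever was drawn there before.

WIDTH = 4
zbuffer = [float("inf")] * WIDTH        # depth of the nearest surface so far
framebuffer = ["background"] * WIDTH

def draw_pixel(x, depth, color):
    if depth < zbuffer[x]:              # closer than the current occupant?
        zbuffer[x] = depth
        framebuffer[x] = color

draw_pixel(1, depth=5.0, color="red")
draw_pixel(1, depth=9.0, color="blue")  # farther away: hidden, not drawn
```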


Physically Based Rendering

Because the rendering pipeline has little to do with the way light actually behaves in a scene, it does not work well with shadows and reflections. Another common rendering technique, ray tracing, calculates the path that light rays take through the scene, starting with the viewing angle and location and calculating back to the light source. Ray tracing provides more accurate shadows than other methods and also handles multiple reflections correctly. Although it takes a long time to render a scene using ray tracing, it can create stunning images.
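At the core of a ray tracer is an intersection test between each ray and each object in the scene. A minimal ray-sphere test using the standard quadratic-formula derivation (the coordinates below are chosen purely for illustration):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a unit-length ray to the nearest hit on a sphere,
    or None if the ray misses. Standard quadratic intersection: with a
    normalized direction the quadratic's leading coefficient is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                         # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0        # nearer of the two roots
    return t if t > 0 else None

# A ray from the eye straight down the z axis toward a sphere at z = 5.
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # → 4.0, the front surface of the sphere
```

A full ray tracer repeats this test for every object, then recursively casts shadow and reflection rays from the nearest hit point, which is what makes the technique slow but accurate.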

In spite of its generally accurate portrayal of shadows and reflections, ray tracing calculates only the main direction of reflection, while real surfaces scatter light in many directions. This scattered-light phenomenon can be modeled with global illumination, which uses the lighting of the image as a whole rather than calculating illumination on each individual primitive.

Many scientific applications of computer graphics require viewing 3D volumes of data on a 2D computer screen. This is accomplished through techniques that make the volume appear semitransparent and use ray tracing through the volume to illuminate it.

Floppy Disk

Floppy Disk, in computer science, a round, flat piece of Mylar coated with ferric oxide, a rustlike substance containing tiny particles capable of holding a magnetic field, and encased in a protective plastic cover, the disk jacket. Data is stored on a floppy disk by the disk drive's read/write head, which alters the magnetic orientation of the particles. Orientation in one direction represents binary 1; orientation in the other, binary 0. Typically, a floppy disk is 5.25 inches in diameter, with a large hole in the center that fits around the spindle in the disk drive. Depending on its capacity, such a disk can hold from a few hundred thousand to over one million bytes of data. A 3.5-inch disk encased in rigid plastic is usually called a microfloppy disk but can also be called a floppy disk. See also Computer.

Microfloppy Disk

Microfloppy Disk, in computer science, a 3.5-inch floppy disk of the type used with the Apple Macintosh and with the IBM PS/1 and PS/2 and some compatible computers. A microfloppy disk is a round piece of Mylar coated with ferric oxide and encased in a rigid plastic shell. On the Macintosh, a single-sided microfloppy disk can hold 400 kilobytes (KB); a double-sided (standard) disk can hold 800 KB; and a double-sided high-density disk can hold 1.44 megabytes (MB). On IBM and compatible machines with 3.5-inch disk drives, a microfloppy can hold either 720 KB or 1.44 MB of information.


Disk, in computer science, a round, flat piece of flexible plastic (floppy disk) or inflexible metal (hard disk) coated with a magnetic material that can be electrically influenced to hold information recorded in digital (binary) form. A disk is, in most computers, the primary means of storing data on a permanent or semipermanent basis. Because the magnetic coating of the disk must be protected from damage or contamination, a floppy (5.25-inch) disk or microfloppy (3.5-inch) disk is encased in a protective plastic jacket. A hard disk, which is very finely machined, is enclosed in a rigid case and can be exposed only in a dust-free environment. Standard floppy and microfloppy disk designs are being phased out in favor of compact discs (CD-ROMs), and new microfloppy designs with upward of one hundred times the storage capacity of their older counterparts. CD-ROMs, however, cannot have information recorded upon them, and recordable compact discs (CD-Rs) remain too expensive for general use. A hybrid of the optical storage techniques used in CD-ROMs and the magnetic storage capabilities of floppy disks, called a floptical or magneto-optical disk, provides large storage capacities and the option to read and write information upon them.

Hard Disk

Hard Disk, in computer science, one or more inflexible platters coated with material that allows the magnetic recording of computer data. A typical hard disk rotates at 3600 RPM (revolutions per minute), and the read/write heads ride over the surface of the disk on a cushion of air 10 to 25 millionths of an inch thick. A hard disk is sealed to prevent contaminants from interfering with the close head-to-disk tolerances. Hard disks provide faster access to data than floppy disks and are capable of storing much more information. Because platters are rigid, they can be stacked so that one hard-disk drive can access more than one platter. Most hard disks have from two to eight platters.

Hard Disk

Hard disks are used to record computer data magnetically. A hard disk drive consists of a stack of inflexible magnetic disks mounted on a motor. As the disks spin at high speeds, read/write heads at the end of a metal fork swing in and out to access sectors of the disks.

Write Protect

Write Protect, in computer science, to prevent the writing (recording) of information, usually on a disk. Write protection can be applied (not necessarily infallibly) either to a floppy disk or to an individual file on a floppy or hard disk. Covering the write-protect notch on a 5.25-inch floppy disk enables programs to read, but not record on, the disk. Moving the slide to open the “notch” on a 3.5-inch disk provides the same protection. Individual files can also be made “read-only” through software commands; read-only files, like protected disks, can be read but not written to.

Double-Density Disk

Double-Density Disk, in computer science, a disk created to hold data at twice the density (bits per inch) of a previous generation of disks. Early IBM PC floppy disks held 180 kilobytes (KB) of data. Double-density disks increased that capacity to 360 KB. Double-density disks use modified frequency modulation encoding for storing data.

Disk Drive

Disk Drive, in computer science, a device that reads or writes data, or both, on a disk medium. The disk medium may be either magnetic, as with floppy disks or hard disks; optical, as with CD-ROM (compact disc read-only memory) discs; or a combination of the two, as with magneto-optical disks. Nearly all computers come equipped with drives for these types of disks. The drives are usually inside the computer but may also be connected as external, or peripheral, devices.

The main components of a disk drive are the motor, which rotates the disk; the read-write mechanism; and the logic board, which receives commands from the operating system to place or retrieve information on the disk. To read or write information to a disk, drives use various methods. Floppy and hard drives use a small magnetic head to magnetize portions of the disk surface, CD-ROM and WORM (Write-Once-Read-Many) drives use lasers to read information, and magneto-optical drives use a combination of magnetic and optical techniques to store and retrieve information.

Floppy and hard disk drives store information on magnetic disks. In a floppy disk, the disk itself is a thin, flexible piece of plastic with tiny magnetic particles embedded in its surface; a hard disk uses rigid platters with a similar magnetic coating. To write data to the disk, the read/write head creates a small magnetic field that aligns the magnetic poles of the particles on the surface of the disk directly beneath the head. Particles aligned in one direction represent a 0, while particles aligned in the opposite direction represent a 1. To read data from a disk, the drive head scans the surface of the disk. The magnetic fields of the particles in the disk induce an alternating electric current in the read/write head, which is then translated into the series of 1s and 0s that the computer understands.

Unlike hard or floppy disks, most CD-ROM drives are unable to write data to the CD. Data is initially written to CD-ROM discs by burning microscopic pits into the disc's reflective surface with a laser. To read the information contained on the disc, the drive shines a low-power laser beam onto the surface. When the laser light hits flat spots on the reflective surface of the CD, it bounces back to a photodetector, which records the impulse as a 0. When the laser light hits pits in the surface, it does not reflect light back to the photodetector, and this absence of light corresponds to a 1. WORM drives, however, are able both to etch blank CDs and to read data from them.

Magneto-optical (MO) drives combine optical and magnetic technology to read from and write to disks that look like CD-ROMs housed in rigid, floppy-disk-style plastic cases. MO drives can rewrite MO disks without limitation, just as magnetic drives rewrite magnetic media. Although more expensive than standard magnetic or optical drives, MO drives combine speed, large capacity, and high durability of data. See also Computer Memory.


CD-ROM, in computer science, acronym for compact disc read-only memory, a rigid plastic disk that stores a large amount of data through the use of laser optics technology. Because they store data optically, CD-ROMs have a much higher memory capacity than computer disks that store data magnetically. However, CD-ROM drives, the devices used to access information on CD-ROMs, can only read information from the disc, not write to it.

The underside of the plastic CD-ROM disc is coated with a very thin layer of aluminum that reflects light. Data is written to the CD-ROM by burning microscopic pits into the reflective surface of the disc with a powerful laser. The data is in digital form, with pits representing a value of 1 and flat spots, called land, representing a value of 0. Once data is written to a CD-ROM, it cannot be erased or changed, which is why it is termed read-only memory. Data is read from a CD-ROM with a low-power laser contained in the drive that bounces light, usually infrared, off the reflective surface of the disc and back to a photodetector. The pits in the reflective layer of the disc scatter light, while the land portions of the disc reflect the laser light efficiently to the photodetector. The photodetector then converts these light and dark spots to electrical impulses corresponding to 1s and 0s. Electronics and software interpret this data and accurately access the information contained on the CD-ROM.

CD-ROMs can store large amounts of data and so are popular for storing databases and multimedia material. The most common format of CD-ROM holds approximately 630 megabytes. By comparison, a regular floppy disk holds approximately 1.44 megabytes.

CD-ROMs and Audio CDs are almost exactly alike in structure and data format. The difference between the two lies in the device used to read the data—either a CD-ROM player or a compact disc (CD) player. CD-ROM players are used almost exclusively as computer components or peripherals. They may be either internal (indicating they fit into a computer’s housing) or external (indicating they have their own housing and are connected to the computer via an external port).

Both types of players spin the discs to access data as they read the data with a laser device. A CD-ROM player spins the disc only to access a sector of data and copy it into main memory for use by the computer, while an audio CD player spins the disc continuously as the recording plays, feeding the signal directly to an audio amplifier.

The most important distinguishing feature among CD-ROM players is their speed, which indicates how fast they can read data from the disc. A single-speed (1X) CD-ROM player reads 150,000 bytes of data per second. Double-speed (2X), triple-speed (3X), quadruple-speed (4X), six-times-speed (6X), and eight-times-speed (8X) CD-ROM players are also widely available.
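The quoted speeds are simple multiples of the single-speed rate, so the transfer rate of any drive follows from one multiplication (using the 150,000 bytes-per-second base figure given above):

```python
BASE = 150_000  # bytes per second for a single-speed (1X) CD-ROM player

for speed in (1, 2, 4, 6, 8):
    rate = BASE * speed
    print(f"{speed}X: {rate:,} bytes/s")  # e.g. 8X: 1,200,000 bytes/s
```

An 8X drive thus moves 1.2 million bytes per second, eight times the original audio-oriented rate.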

Other important characteristics of CD-ROM players are seek time and data transfer rate. The seek time (also called the access time) measures how long it takes for the laser to access a particular segment of data. A typical CD-ROM takes about a third of a second to access data, as compared to a typical hard drive, which takes about 10 milliseconds (thousandths of a second) to access data. The data transfer rate measures how quickly data is transferred from the disk media to the computer’s main memory.

The computer industry also manufactures blank, writeable compact discs, called WORMs (Write-Once-Read-Many), that users can record data onto for one-time, permanent storage using personal WORM recording units. Another technology that allows the user to write to a compact disc is the magneto-optical (MO) disk, which combines magnetic and optical data storage. Users can record, erase, and save data to these disks any number of times using special MO drives.

Information Storage and Retrieval



Information Storage and Retrieval, in computer science, term used to describe the organization, storage, location, and retrieval of encoded information in computer systems. Important factors in storing and retrieving information are the type of media, or storage device, used to store information; the media’s storage capacity; the speed of access and information transfer to and from the storage media; the number of times new information can be written to the media; and how the media interacts with the computer.



Information storage can be classified as being either permanent, semipermanent, or temporary. Information can also be classified as having been stored to or retrieved from primary or secondary memory. Primary memory, also known as main memory, is the computer’s main random access memory (RAM). All information that is processed by the computer must first pass through main memory. Secondary memory is any form of memory other than the main computer memory, including the hard disk, floppy disks, CD-ROMs, and magnetic tape.


Permanent Storage

Information is stored permanently on storage media that is written to only once, such as ROM (read-only memory) chips and CD-ROMs (compact disc read-only memory). Permanent storage media is used for archiving information or, in the case of ROM chips, for storing basic information that the computer needs to function that cannot be overwritten.


Semipermanent Storage

Semipermanent information storage is also often used for archival purposes, but the media used can be overwritten. A common example of a semipermanent storage medium is a floppy disk. The magnetic material that serves as the storage medium in a floppy disk can be written to many times when a removable tab is in place. Once this tab is removed, the disk is protected from further overwriting; the write protection may be bypassed by placing a piece of tape over the hole left by the removal of the tab.


Temporary Storage

Temporary information storage is used as intermediate storage between permanent or semipermanent storage and a computer’s central processing unit (CPU). Temporary storage is in the form of memory chips called RAM. Information is stored in RAM while it is being used by the CPU; it is then returned to a more permanent form of memory. RAM chips are known as volatile memory because they must have power supplied to them continuously or they lose the contents of their memory.



All information processed by a computer can be expressed as numbers in binary notation. These binary numbers are strings of bits, or 1s and 0s, that are grouped together in sequences of eight called bytes. The two values a bit can have are physically stored in a storage media in a variety of ways. For instance, they can be represented by the presence or absence of electrical charge on a pair of closely spaced conductors called a capacitor, by the direction of magnetization on the surface of a magnetic material, or by the presence or absence of a hole in a thin material, such as a plastic disk.
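As a concrete example of this binary encoding, a short Python session shows how a single character reduces to the eight bits that a storage medium must physically record, and how reading those bits back recovers the value:

```python
# The letter "A" has character code 65; its byte is the eight bits
# a disk particle orientation, capacitor charge, or pit/land pattern
# must represent.
value = ord("A")             # 65
bits = format(value, "08b")  # one byte, most-significant bit first
print(bits)                  # → 01000001

# Reversing the process (what a read head and its electronics do)
# recovers the original value:
print(int(bits, 2))          # → 65
```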

In computer systems, the CPU processes information and maintains it temporarily in RAM in the form of strings of bits called files. When this temporary information needs to be stored permanently or semipermanently, the CPU finds unused space on the storage media—generally a hard drive, floppy disk, or magnetic tape. It then instructs a device capable of making changes to the media (called a read/write head) to start transmitting bits. The read/write head and its associated electronics convert each bit to the equivalent physical value on the media. The recording mechanism then moves to the next location on the media capable of storing a bit.

When the CPU needs to access some piece of stored information, the process is reversed. The CPU determines where on the physical media the appropriate file is stored, directs the read/write head to position itself at that location on the media, and then directs it to read the information stored there.


Storage Media

Information is stored on many different types of media, the most common being floppy disks, hard drives, CD-ROMs, and magnetic tape. Floppy disks are most often used to store material that is not accessed frequently or to back up files contained on a computer’s hard drive. Hard drives are most often used to store information, such as application programs, that is frequently accessed by the user. CD-ROMs are a type of storage medium that is capable of being written to only once but read many times, hence the name read-only memory. CD-ROMs are useful for storing a large amount of information that does not need to be changed or updated by the user. They are usually purchased with information already written to them, although special types of CD drives, called WORM (write once, read many) drives, allow the user to write data to a blank CD once, after which it is a CD-ROM. Magnetic tape is most commonly used in situations where large databases of information need to be stored.


Floppy Disks

A floppy disk is a thin piece of magnetizable material inside a protective envelope. The size of the disk is usually given as the diameter of the magnetic media, with the two most common sizes being 5.25 inches and 3.5 inches. Although both sizes are called floppies, the name actually comes from the 5.25-inch size, in which both the envelope and the disk itself are thin enough to bend easily.

Both sizes of floppies are removable disks—that is, they must be inserted into a compatible disk drive in order to be read from or written to. This drive is usually internal to, or part of, a computer. Inside the drive, a motor spins the disk inside its envelope and a read/write head moves over the surface of the disk on the end of an arm called an actuator. The head in the floppy drive is much like that in a tape recorder. To record information, the head magnetizes a small area on the surface of the disk in a certain direction. To read information stored on the disk, the disk controller—circuitry that controls the disk drive—directs the actuator to the location of the information on the disk. The head then senses the direction of magnetization of a small area on the disk and translates this into a signal that gets stored in RAM until the CPU retrieves it. Most floppy drives today are double sided, with one head on each side of the disk. This doubles the storage capacity of the disk, allowing it to be written to on either side.

Information is organized on the disk by dividing the disk into tracks and sectors. Tracks are concentric circular regions on the surface of the disk; sectors are pie-shaped wedges that intersect each of the tracks, further dividing them. Before a floppy disk can be used, the computer must format it by placing special information on the disk that enables the computer to find each track and sector.
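Formatted capacity follows directly from this track-and-sector geometry. A quick check using the figures for a standard double-sided, high-density 3.5-inch floppy (80 tracks per side, 18 sectors per track, 512 bytes per sector; these numbers are standard for that format but are not stated above):

```python
# Capacity = sides x tracks per side x sectors per track x bytes per sector.
sides, tracks_per_side, sectors_per_track, bytes_per_sector = 2, 80, 18, 512

capacity = sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(capacity)         # → 1474560 bytes
print(capacity / 1024)  # → 1440.0 KB, marketed as "1.44 MB"
```

The same arithmetic, with far larger track and sector counts, gives the capacity of a hard disk.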


Hard Drives

Hard drives consist of rigid circular platters of magnetizable material sealed in a metal box with associated read/write heads. They are usually internal to a computer. Most hard drives have multiple platters stacked on top of one another, each with its own read/write heads. The media in a hard drive is generally not removable from the drive assembly, although external hard drives do exist with removable hard disks. The read/write heads in a hard drive are precisely aligned with the surfaces of the hard disks, allowing thousands of tracks and dozens of sectors per track. The combination of more heads and more tracks allows hard drives to store more data and to transfer data at a higher rate than floppy disks.

Accessing information on a hard disk involves moving the heads to the right track and then waiting for the correct sector to revolve underneath the heads. Seek time is the average time required to move the heads from one track to some other desired track on the disk. The time needed to move from one track to a neighboring track is often in the 1 millisecond (one-thousandth of a second) range, and the average seek time to reach arbitrary tracks anywhere on the disk is in the 6 to 15 millisecond range. Rotational latency is the average time required for the correct sector to come under the heads once they are positioned on the correct track. This time depends on how fast the disk is revolving. Today, many drives run at 60 to 120 revolutions per second or faster, yielding average rotational latencies of a few milliseconds.
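The rotational-latency figures above follow from a simple average: once the heads reach the correct track, the desired sector is, on average, half a revolution away. A quick calculation for the rotation speeds mentioned:

```python
# Average rotational latency = time for half a revolution.
for rps in (60, 120):  # revolutions per second, as cited above
    latency_ms = 0.5 / rps * 1000
    print(f"{rps} rev/s -> average rotational latency {latency_ms:.2f} ms")
```

At 60 revolutions per second the average latency is about 8.3 milliseconds, and at 120 about 4.2, which is the "few milliseconds" range cited above.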

If a file requires more than one sector for storage, the positions of the sectors on the individual tracks can greatly affect the average access time. Typically, it takes the disk controller a small amount of time to finish reading a sector. If the next sector to be read is the neighboring sector on the track, the electronics may not have enough time to get ready to read it before it rotates under the read/write head. If this is the case, the drive must wait until the sector comes all the way around again. This access time can be reduced by interleaving, or alternately placing, the sectors on the tracks so that sequential sectors for the same file are separated from each other by one or two sectors. When information is distributed optimally, the device controller is ready to start reading just as the appropriate sector comes under the read/write heads.
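The interleaved layout can be sketched as a small mapping from logical to physical sector order. This is a toy model; real controllers chose the interleave factor to match their recovery time:

```python
def interleave(sector_count, factor):
    """Physical layout of logical sectors for a given interleave factor.
    factor=1 places sectors consecutively; factor=2 leaves one sector
    between consecutive logical sectors, giving the controller time to
    finish processing one sector before the next arrives."""
    layout = [None] * sector_count
    pos = 0
    for logical in range(sector_count):
        while layout[pos] is not None:       # slot already taken, slide on
            pos = (pos + 1) % sector_count
        layout[pos] = logical
        pos = (pos + factor) % sector_count  # skip ahead by the factor
    return layout

print(interleave(8, 1))  # → [0, 1, 2, 3, 4, 5, 6, 7]
print(interleave(8, 2))  # → [0, 4, 1, 5, 2, 6, 3, 7]
```

With a factor of 2, logical sector 1 sits two physical slots after sector 0, so the controller is ready just as it rotates under the heads instead of missing it by a fraction of a revolution.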

After many files have been written to and erased from a disk, fragmentation can occur. Fragmentation happens when pieces of single files are inefficiently distributed in many locations on a disk. The result is an increase in the average file access time. This problem can be fixed by running a defragmentation program, which goes through the drive track by track and rearranges the sectors for each file so that they can be accessed more quickly.

Unlike floppy drives, in which the read/write heads actually touch the surface of the material, the heads in most hard disks float slightly off the surface. When the heads accidentally touch the media, either because the drive is dropped or bumped hard or because of an electrical malfunction, the surface becomes scratched. Any data stored where the head has touched the disk is lost. This is called a head crash. To help reduce the possibility of a head crash, most disk controllers park the heads over an unused track on the disk when the drive is not being used by the CPU.



While magnetic material is the dominant medium for read/write information storage (files that are read from and written to frequently), other media have become popular for more permanent storage applications. One of the most common alternative storage media is the CD-ROM. CD-ROMs are plastic discs on which individual bits are stored as pits burned onto the surface of the disc by high-powered lasers. The surface of the disc is then covered with a layer of reflecting material such as aluminum. The computer uses a CD-ROM drive to access information on the CD-ROM. The drive may be external to, or part of, the computer. A light-sensitive instrument in the drive reads the disc by measuring the amount of light reflected back from a smaller laser positioned over the spinning disc. Such discs can hold large amounts of information but can only be written to once. The drives capable of writing to CD-ROMs are called write-once, read-many (WORM) drives. Due to their inexpensive production costs, CD-ROMs are widely used today for storing music, video, and application programs.


Magnetic Tape

Magnetic tape has served as a very efficient and reliable information storage medium since the early 1950s. Most magnetic tape is made of Mylar, a type of strong plastic, into which metallic particles have been embedded. A read/write head identical to those used for audio tape reads and writes binary information to the tape. Reel-to-reel magnetic tape is commonly used to store information for large mainframe computers or supercomputers. High-density cassette tapes, resembling audio cassette tapes, are used to store information for personal computers and mainframes.

Magnetic tape storage has the advantage of being able to hold enormous amounts of data; for this reason it is used to store information on the largest computer systems. However, magnetic tape has two major shortcomings: It has a very slow data access time when compared to other forms of storage media, and access to information on magnetic tape is sequential. In sequential data storage, data are stored with the first bit at the beginning of the tape and the last bit at the end of the tape, in a linear fashion. To access a random bit of information, the tape drive has to forward or reverse through the tape until it finds the location of the bit. The bits closest to the location of the read/write head can be accessed relatively quickly, but bits far away may take considerable time to access. RAM, on the other hand, is random access, meaning that it can locate any one bit as easily as any other.
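The contrast between sequential and random access can be sketched with a toy timing model. The speed and latency constants below are illustrative, not measurements of any real drive:

```python
# Tape must physically pass every intervening bit; RAM reaches any
# address in (roughly) constant time regardless of position.
TAPE_SPEED = 100_000  # bits scanned per second (illustrative figure)

def tape_access_time(head_pos, target):
    """Seconds to wind the tape from the head's position to the target bit."""
    return abs(target - head_pos) / TAPE_SPEED

def ram_access_time(_address):
    """Constant, position-independent access (illustrative latency)."""
    return 1e-7

print(tape_access_time(0, 50))          # nearby bit: fast
print(tape_access_time(0, 10_000_000))  # distant bit: 100 seconds of winding
print(ram_access_time(10_000_000))      # same as any other address
```

The linear cost of tape access is why tape suits bulk archival storage read front to back, while RAM and disks serve workloads that jump between locations.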


Other Types of Storage Media

Variations in hard and floppy disk drive technology are used in read-mostly drives, in which the same drive media may be written to multiple times, although at much slower rates than data can be read. In magneto-optical (MO) drives, a strong laser heats up and re-orients metallic crystals in the surface of the MO disk, effectively erasing any information stored on the disk. To write to the MO disk, an electromagnetic head similar to that in a floppy drive polarizes, or orients, the magnetic crystals in one of two directions while the laser is on, thus storing information in a binary form. To read the disk, a light-sensitive instrument reads the light from a separate, lower-power laser that reflects light from the crystals. The crystals polarize the reflected light in one of two directions depending on which way they point.

Another type of storage medium, called flash memory, traps small amounts of electric charge in “wells” on the surface of a chip. Side effects of this trapped charge, such as the electric field it creates, are later used to read the stored value. To rewrite to flash memory, the charges in the wells must first be drained. Such drives are useful for storing information that changes infrequently.



The earliest mechanical information storage devices were music boxes of the 18th century that encoded sequences of notes as pins on a revolving drum. In the early 1800s Joseph Marie Jacquard used paper cards with information recorded as holes punched in them to control weaving looms. This idea of punched-card storage was later adopted by British mathematician and inventor Charles Babbage in his design for the Analytical Engine, an early programmable computing machine. The holes in each card allowed an arm to pass through and activate a mechanism on the other side. In the census of 1890, American inventor Herman Hollerith used punched cards to hold data. These cards were then read by machines in which rows of electrical contacts sensed when a hole was present. In the 1940s the first electronic computers used punched cards and rolls of paper tape with punched holes for storing both programs and data. Before magnetic media became popular, various memory devices such as cathode-ray tubes and mercury delay lines were used to store information. The first use of magnetic memory devices came in the late 1940s in the form of magnetic tapes and drums, and then magnetic cores, in which small doughnuts of magnetic material each stored one bit of information.

The first hard disks appeared in the mid-1950s, with platters as large as two feet in diameter. In the early 1970s, the International Business Machines Corporation (IBM) invented the floppy disk as a mechanism for loading microprograms into its mainframe computers. CD-ROMs were introduced in the early 1980s to store music in digital form for high-fidelity playback. These technologies underwent explosive growth with their use in mass-market personal computer systems.



Although magnetic and CD-ROM technologies continue to increase in storage density, a variety of new technologies are emerging. Redundant Arrays of Independent Disks (RAIDs) are storage systems that look like one device but are actually composed of multiple hard disks. These systems provide more storage and also read data simultaneously from many drives. The result is a faster rate of data transfer to the CPU, which is important for many very high speed computer applications, especially those involving large databases of information.
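The striping idea behind RAID can be sketched as a round-robin distribution of data chunks across disks. This corresponds to RAID 0 (striping with no redundancy) and is a sketch, not a real controller:

```python
def stripe(data, num_disks, chunk=4):
    """Distribute a byte string across disks in round-robin chunks.
    Because consecutive chunks land on different disks, all disks can
    read their portions simultaneously, multiplying the transfer rate."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % num_disks] += data[i:i + chunk]
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", 4)
for n, d in enumerate(disks):
    print(f"disk {n}: {bytes(d)}")  # each disk holds one 4-byte chunk
```

Production RAID levels add parity or mirroring on top of this layout so that the failure of one drive does not lose data.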

Several experimental technologies offer the potential for storage densities that are thousands or millions of times better than is possible today. Some approaches use individual molecules, sometimes at superconducting temperatures, to trap very small magnetic fields or electrical charges for data storage. In other technologies, large two-dimensional data sets such as pictures are stored as holograms in cubes of material. Individual bits are not stored at any one location, but instead are spread out over a much larger area and mixed in with other bits. Loss of information from any one spot thus does not cause the irreplaceable loss of any one bit of information.

Mainframe Computer

Mainframe Computer, a large, powerful computer designed for the most intensive computational tasks. Mainframe computers are often shared by multiple users connected to the computer via terminals. The most powerful mainframes, called supercomputers, perform highly complex and time-consuming computations and are used heavily in both pure and applied research by scientists, large businesses, and the military.

Data Processing



Data Processing, in computer science, the analysis and organization of data by the repeated use of one or more computer programs. Data processing is used extensively in business, engineering, and science and to an increasing extent in nearly all areas in which computers are used. Businesses use data processing for such tasks as payroll preparation, accounting, record keeping, inventory control, sales analysis, and the processing of bank and credit card account statements. Engineers and scientists use data processing for a wide variety of applications, including the processing of seismic data for oil and mineral exploration, the analysis of new product designs, the processing of satellite imagery, and the analysis of data from scientific experiments.

Data processing is divided into two kinds of processing: database processing and transaction processing. A database is a collection of common records that can be searched, accessed, and modified, such as bank account records, school transcripts, and income tax data. In database processing, a computerized database is used as the central source of reference data for the computations. Transaction processing refers to interaction between two computers in which one computer initiates a transaction and another computer provides the first with the data or computation required for that function.

Most modern data processing uses one or more databases at one or more central sites. Transaction processing is used to access and update the databases when users need to immediately view or add information; other data processing programs are used at regular intervals to provide summary reports of activity and database status. Examples of systems that involve all of these functions are automated teller machines, credit sales terminals, and airline reservation systems.



The data-processing cycle represents the chain of processing events in most data-processing applications. It consists of data recording, transmission, reporting, storage, and retrieval. The original data is first recorded in a form readable by a computer. This can be accomplished in several ways: by manually entering information into some form of computer memory using a keyboard, by using a sensor to transfer data onto a magnetic tape or floppy disk, by filling in ovals on a computer-readable paper form, or by swiping a credit card through a reader. The data are then transmitted to a computer that performs the data-processing functions. This step may involve physically moving the recorded data to the computer or transmitting it electronically over telephone lines or the Internet.

Once the data reach the computer, the computer processes them. The operations the computer performs can include accessing and updating a database and creating or modifying statistical information. After processing the data, the computer reports summary results to the program’s operator.

As the computer processes the data, it stores both the modifications and the original data. This storage can be both in the original data-entry form and in carefully controlled computer data forms such as magnetic tape. Data are often stored in more than one place for both legal and practical reasons. Computer systems can malfunction and lose all stored data, and the original data may be needed to recreate the database as it existed before the crash.

The final step in the data-processing cycle is the retrieval of stored information at a later time. This is usually done to access records contained in a database, to apply new data-processing functions to the data, or—in the event that some part of the data has been lost—to recreate portions of a database. Examples of data retrieval in the data-processing cycle include the analysis of store sales receipts to reveal new customer spending patterns and the application of new processing techniques to seismic data to locate oil or mineral fields that were previously overlooked.
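The five stages of the data-processing cycle described above can be sketched as a toy pipeline. Every function name here is illustrative only; the dictionary stands in for tape or disk storage.

```python
# A toy walk through the data-processing cycle: record, transmit,
# process, store, retrieve.
stored = {}  # stands in for magnetic-tape or disk storage

def record(raw):               # 1. data recording (e.g. keyed input)
    return [int(x) for x in raw.split(",")]

def transmit(data):            # 2. transmission (a no-op stand-in)
    return list(data)

def process(data):             # 3. processing: summarize activity
    return {"count": len(data), "total": sum(data)}

def store(key, original, summary):   # 4. storage (original kept too,
    stored[key] = {"original": original, "summary": summary}  # for recovery)

def retrieve(key):             # 5. retrieval for later analysis
    return stored[key]

data = transmit(record("10,20,30"))
store("day1", data, process(data))
print(retrieve("day1")["summary"])  # {'count': 3, 'total': 60}
```

Note that `store` keeps the original data alongside the summary, mirroring the point above that originals are retained so a database can be recreated after a crash.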



To a large extent, data processing has been the driving force behind the creation and growth of the computer industry. In fact, it predates electronic computers by almost 60 years. The need to collect and analyze census data became such an overwhelming task for the United States government that in 1890 the U.S. Census Bureau contracted American engineer and inventor Herman Hollerith to build a special purpose data-processing system. With this system, census takers recorded data by punching holes in a paper card the size of a dollar bill. These cards were then forwarded to a census office, where mechanical card readers were used to read the holes in each card and mechanical adding machines were used to tabulate the results. In 1896 Hollerith founded the Tabulating Machine Company, which later merged with several other companies and eventually became International Business Machines Corporation (IBM).

During World War II (1939-1945) scientists developed a variety of computers designed for specific data-processing functions. The Harvard Mark I computer was built from a combination of mechanical and electrical devices and was used to perform calculations for the U.S. Navy. Another computer, the British-built Colossus, was an all-electronic computing machine designed to break German coded messages. It enabled the British to crack German codes quickly and efficiently.

The role of the electronic computer in data processing began in 1946 with the invention of the ENIAC, the first all-electronic computer. The U.S. armed services used the ENIAC to tabulate the paths of artillery shells and missiles. In 1950 Remington Rand Corporation introduced the first nonmilitary electronic programmable computer for data processing. This computer, called the UNIVAC, was initially sold to the U.S. Census Bureau in 1951; several others were eventually sold to other government agencies.

With the purchase of a UNIVAC computer in 1954, General Electric Company became the first private firm to own a computer, soon followed by Du Pont Company, Metropolitan Life, and United States Steel Corporation. All of these companies used the UNIVAC for commercial data-processing applications. The primary advantages of this machine were its programmability, its high-speed arithmetic capabilities, and its ability to store and process large business files on multiple magnetic tapes. The UNIVAC gained national attention in 1952, when the Columbia Broadcasting System (CBS) used a UNIVAC during a live television broadcast to predict the outcome of the presidential election. Based upon less than 10 percent of the election returns, the computer correctly predicted a landslide victory for Dwight D. Eisenhower over his challenger, Adlai E. Stevenson.

In 1953, IBM produced the first of its computers, the IBM 701—a machine designed to be mass-produced and easily installed in a customer’s building. The success of the 701 led IBM to manufacture many other machines for commercial data processing. The sales of IBM’s 650 computer were a particularly good indicator of how rapidly the business world accepted electronic data processing. Initial sales forecasts were extremely low because the machine was thought to be too expensive, but over 1,800 were eventually made and sold.

In the 1950s and early 1960s data processing was essentially split into two distinct areas, business data processing and scientific data processing, with different computers designed for each. In an attempt to keep data processing as similar to standard accounting as possible, business computers had arithmetic circuits that did computations on strings of decimal digits (numbers with digits that range from 0 to 9). Computers used for scientific data processing sacrificed the easy-to-use decimal number system for the more efficient binary number system in their arithmetic circuitry.
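The two arithmetic styles can be contrasted in a short sketch. The digit-list representation below is an illustration of the business machines' digit-by-digit decimal arithmetic, not any particular machine's encoding; the binary style is simply native integer arithmetic.

```python
def decimal_digit_add(a_digits, b_digits):
    """Add two numbers held as lists of decimal digits, least
    significant digit first -- the digit-by-digit style used by
    early business computers."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        da = a_digits[i] if i < len(a_digits) else 0
        db = b_digits[i] if i < len(b_digits) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 475 + 238, digits stored least significant first:
print(decimal_digit_add([5, 7, 4], [8, 3, 2]))  # [3, 1, 7]  i.e. 713

# The scientific machines' binary style, as native integers:
print(0b111011011 + 0b011101110)  # 475 + 238 = 713
```

The decimal style maps directly onto ledger-book arithmetic, which is why it suited accounting; the binary style wastes no circuitry on base-10 bookkeeping, which is why it was more efficient.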

The need for separate business and scientific computing systems changed with the introduction of the IBM System/360 family of machines in 1964. These machines could all run the same data-processing programs, but at different speeds. They could also perform either the digit-by-digit math favored by business or the binary notation favored for scientific applications. Several models had special modes in which they could execute programs from earlier IBM computers, especially the popular IBM 1401. From that time on, almost all commercial computers were general-purpose.

One notable exception to the trend of general-purpose computers and programming languages is the supercomputer. Supercomputers are computers designed for high-speed precision scientific computations. However, supercomputers are sometimes used for data processing that is not scientific. In these cases, they must be built so that they are flexible enough to allow other types of computations.

The division between business and scientific data processing also influenced the development of programming languages in which application programs were written. Two such languages that are still popular today are COBOL (COmmon Business Oriented Language) and FORTRAN (FORmula TRANslation). Both of these programming languages were developed in the late 1950s and early 1960s, with COBOL becoming the programming language of choice for business data processing and FORTRAN for scientific processing. In the 1970s other languages such as C were developed. These languages reflected the general-purpose nature of modern computers and allowed extremely efficient programs to be developed for almost any data-processing application. One of the most popular languages currently used in data-processing applications is an extension of C called C++. C++ was developed in the 1980s and is an object-oriented language, a type of language that gives programmers more flexibility in developing sophisticated applications than other types of programming languages.

Development of computer science

Computer science as an independent discipline dates to only about 1960, although the electronic digital computer that is the object of its study was invented some two decades earlier. The roots of computer science lie primarily in the related fields of electrical engineering and mathematics. Electrical engineering provides the basics of circuit design—namely, the idea that electrical impulses input to a circuit can be combined to produce arbitrary outputs. The invention of the transistor and the miniaturization of circuits, along with the invention of electronic, magnetic, and optical media for the storage of information, resulted from advances in electrical engineering and physics. Mathematics is the source of one of the key concepts in the development of the computer—the idea that all information can be represented as sequences of zeros and ones. In the binary number system, numbers are represented by a sequence of the binary digits 0 and 1 in the same way that numbers in the familiar decimal system are represented using the digits 0 through 9. The relative ease with which two states (e.g., high and low voltage) can be realized in electrical and electronic devices led naturally to the binary digit, or bit, becoming the basic unit of data storage and transmission in a computer system.

The Boolean algebra developed in the 19th century supplied a formalism for designing a circuit with binary input values of 0s and 1s (false or true, respectively, in the terminology of logic) to yield any desired combination of 0s and 1s as output. Theoretical work on computability, which began in the 1930s, provided the needed extension to the design of whole machines; a milestone was the 1936 specification of the conceptual Turing machine (a theoretical device that manipulates an infinite string of 0s and 1s) by the British mathematician Alan Turing and his proof of the model's computational power. Another breakthrough was the concept of the stored-program computer, usually credited to the Hungarian-American mathematician John von Neumann. This idea—that instructions as well as data should be stored in the computer's memory for fast access and execution—was critical to the development of the modern computer. Previous thinking was limited to the calculator approach, in which instructions are entered one at a time.
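Boolean algebra's role in circuit design can be shown with a classic small example: a half adder, which combines two binary inputs to produce a sum bit and a carry bit using only the logical operations XOR and AND.

```python
def half_adder(a, b):
    """Combine two 1-bit inputs into a (sum, carry) pair, the way a
    physical circuit combines voltages through logic gates."""
    total = a ^ b   # XOR: sum bit
    carry = a & b   # AND: carry bit
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))  # (sum, carry)
```

Chaining such adders bit by bit yields arithmetic on numbers of any width, which is the sense in which Boolean formulas over 0s and 1s extend to the design of whole machines.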

The needs of users and their applications provided the main driving force in the early days of computer science, as they still do to a great extent today. The difficulty of writing programs in the machine language of 0s and 1s led first to the development of assembly language, which allows programmers to use mnemonics for instructions (e.g., ADD) and symbols for variables (e.g., X). Such programs are then translated by a program known as an assembler into the binary encoding used by the computer. Other pieces of system software known as linking loaders combine pieces of assembled code and load them into the machine's main memory unit, where they are then ready for execution. The concept of linking separate pieces of code was important, since it allowed “libraries” of programs to be built up to carry out common tasks—a first step toward the increasingly emphasized notion of software reuse. Assembly language was found to be sufficiently inconvenient that higher-level languages (closer to natural languages) were invented in the 1950s for easier, faster programming; along with them came the need for compilers, programs that translate high-level language programs into machine code. As programming languages became more powerful and abstract, building efficient compilers that create high-quality code in terms of execution speed and storage consumption became an interesting computer science problem in itself.
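The assembler's job described above, translating mnemonics such as ADD and symbols such as X into binary encodings, can be sketched in miniature. The instruction set, opcodes, and symbol addresses below are entirely invented for illustration.

```python
# A toy assembler: a 4-bit opcode followed by a 4-bit address.
# Opcode values and symbol addresses are hypothetical.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}
SYMBOLS = {"X": 0b1010, "Y": 0b1011}

def assemble(line):
    """Translate one 'MNEMONIC OPERAND' line into an 8-bit word."""
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 4) | SYMBOLS[operand]

program = ["LOAD X", "ADD Y", "STORE X"]
for line in program:
    print(f"{line:10s} -> {assemble(line):08b}")
```

A real assembler also resolves symbol addresses itself and handles many instruction formats, but the essential step is this table-driven translation from mnemonic text to binary machine code.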

Increasing use of computers in the early 1960s provided the impetus for the development of operating systems, which consist of system-resident software that automatically handles input and output and the execution of jobs. The historical development of operating systems is summarized below under that topic.

Throughout the history of computers, the machines have been utilized in two major applications: (1) computational support of scientific and engineering disciplines and (2) data processing for business needs. The demand for better computational techniques led to a resurgence of interest in numerical methods and their analysis, an area of mathematics that can be traced to the methods devised several centuries ago by physicists for the hand computations they made to validate their theories. Improved methods of computation had the obvious potential to revolutionize how business is conducted, and in pursuit of these business applications new information systems were developed in the 1950s that consisted of files of records stored on magnetic tape. The invention of magnetic-disk storage, which allows rapid access to an arbitrary record on the disk, led not only to more cleverly designed file systems but also, in the 1960s and '70s, to the concept of the database and the development of the sophisticated database management systems now commonly in use.

Data structures, and the development of optimal algorithms for inserting, deleting, and locating data, have constituted major areas of theoretical computer science since its beginnings because of the heavy use of such structures by virtually all computer software—notably compilers, operating systems, and file systems. Another goal of computer science is the creation of machines capable of carrying out tasks that are typically thought of as requiring human intelligence.
Artificial intelligence, as this goal is known, actually predates the first electronic computers in the 1940s, although the term was not coined until 1956.

Computer graphics was introduced in the early 1950s with the display of data or crude images on paper plots and cathode-ray tube (CRT) screens. Expensive hardware and the limited availability of software kept the field from growing until the early 1980s, when the computer memory required for bit-map graphics became affordable. (A bit map is a binary representation in main memory of the rectangular array of points [pixels, or picture elements] on the screen. Because the first bit-map displays used one binary bit per pixel, they were capable of displaying only one of two colours, commonly black and green or black and amber. Later computers, with more memory, assigned more binary bits per pixel to obtain more colours.) Bit-map technology, together with high-resolution display screens and the development of graphics standards that make software less machine-dependent, has led to the explosive growth of the field.

Software engineering arose as a distinct area of study in the late 1970s as part of an attempt to introduce discipline and structure into the software design and development process. For a thorough discussion of the development of computing, see computers, history of.


Architecture deals with both the design of computer components (hardware) and the creation of operating systems (software) to control the computer. Although designing and building computers is often considered the province of computer engineering, in practice there exists considerable overlap with computer science.

Computer graphics

The use of computers to produce pictorial images. The images produced can be printed documents or animated motion pictures, but the term computer graphics refers particularly to images displayed on a video display screen, or display monitor. These screens can display graphic as well as alphanumeric data. A computer-graphics system basically consists of a computer to store and manipulate images, a display screen, various input and output devices, and a graphics software package—i.e., a program that enables a computer to process graphic images by means of mathematical language. These programs enable the computer to draw, colour, shade, and manipulate the images held in its memory.

A computer displays images on the phosphor-coated surface of a graphics display screen by means of an electron beam that sweeps the screen many times each second. Those portions of the screen energized by the beam emit light, and changes in the intensity of the beam determine their brightness and hue. The brightness of the resulting image fades quickly, however, and must be continuously “refreshed” by the beam, typically 30 times per second.

Graphics software programs enable a user to draw, colour, shade, and manipulate an image on a display screen with commands input by a keyboard. A picture can be drawn or redrawn onto the screen with the use of a mouse, a pressure-sensitive tablet, or a light pen. Preexisting images on paper can be scanned into the computer through the use of scanners, digitizers, pattern-recognition devices, or digital cameras. Frames of images on videotape also can be entered into a computer. Various output devices have been developed as well; special programs send digital data from the computer's memory to an imagesetter or film recorder, which prints the image on paper or on photographic film. The computer can also generate hard copy by means of plotters and laser or dot-matrix printers.

Pictures are stored and processed in a computer's memory by either of two methods: raster graphics and vector graphics. Raster-type graphics maintain an image as a matrix of independently controlled dots, while vector graphics maintain it as a collection of points, lines, and arcs. Raster graphics are now the dominant computer graphics technology.

In raster graphics, the computer's memory stores an image as a matrix, or grid, of individual dots, or pixels (picture elements). Each pixel is encoded in the computer's memory as one or several bits—i.e., binary digits represented by 0 or 1. A 1-bit pixel can represent either black or white, while a 4-bit pixel can represent any of 16 different colours or shades of gray. The constituent bits that encode a picture in the computer's memory are called a bit map. Computers need large processing and memory capacities to translate the enormous amounts of information contained in a picture into the digital code of a bit map, and graphics software programs use special algorithms (computational processes) to perform these procedures.
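The relationship between bits per pixel, colour depth, and memory cost follows directly from the description above: n bits per pixel give 2^n colours, and the bit map's size is the pixel count times the depth. The 640 × 480 screen size below is just a familiar illustrative resolution.

```python
def bitmap_bytes(width, height, bits_per_pixel):
    """Memory needed to hold a bit map of the given size and depth."""
    return width * height * bits_per_pixel // 8

print(2 ** 1)                      # 2 colours with 1 bit per pixel
print(2 ** 4)                      # 16 colours with 4 bits per pixel
print(bitmap_bytes(640, 480, 1))   # 38400 bytes for a 1-bit screen
print(bitmap_bytes(640, 480, 8))   # 307200 bytes at 8 bits per pixel
```

The eightfold jump from the first figure to the last shows why bit-map graphics had to wait for affordable memory.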

In raster graphics, the thousands of tiny pixels that make up an individual image are projected onto a display screen as illuminated dots that from a distance appear as a contiguous image. The picture frame consists of hundreds of tiny horizontal rows, each of which contains hundreds of pixels. An electron beam creates the grid of pixels by tracing each horizontal line from left to right, one pixel at a time, from the top line to the bottom line. Raster graphics create uniform coloured areas and distinct patterns and allow precise manipulation because their constituent images can be altered one dot at a time. Their main disadvantage is that the images are subtly staircased—i.e., diagonal lines and edges appear jagged and less distinct when viewed from a very short distance. A corollary of television technology, raster graphics emerged in the early 1970s and had largely displaced vector systems by the '90s.

In vector graphics, images are made up of a series of lines, each of which is stored in the computer's memory as a vector—i.e., as two points on an x-y matrix. On a vector-type display screen, an electron beam sweeps back and forth between the points designated by the computer and the paths so energized emit light, thereby creating lines; solid shapes are created by grouping lines closely enough to form a contiguous image. Vector-graphics technology was developed in the mid-1960s and widely used until it was supplanted by raster graphics. Its application is now largely restricted to highly linear work in computer-aided design and architectural drafting, and even this is performed on raster-type screens with the vectors converted into dots.
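The final point above, vectors converted into dots for display on raster screens, is the conversion sketched below. A vector stores a line as two endpoints; rasterizing expands it into the individual pixels. This is a simplified interpolation, not the Bresenham algorithm real rasterizers typically use.

```python
def rasterize(p0, p1):
    """Turn a vector (two distinct endpoints) into a set of pixels
    by stepping along the line and rounding to the nearest dot."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0))  # assumes p0 != p1
    return {(x0 + round(i * (x1 - x0) / steps),
             y0 + round(i * (y1 - y0) / steps)) for i in range(steps + 1)}

vector_form = ((0, 0), (3, 3))        # vector storage: just 2 points
pixel_form = rasterize(*vector_form)  # raster storage: every dot
print(sorted(pixel_form))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

The size difference between the two representations also shows why vector storage suited the limited memories of the 1960s, while raster storage had to wait for cheaper memory.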

Computer graphics have found widespread use in printing, product design and manufacturing, scientific research, and entertainment since the 1960s. In the business office, computers routinely create graphs and tables to illustrate text information. Computer-aided design systems have replaced drafting boards in the design of a vast array of products ranging from buildings to automotive bodies and aircraft hulls to electrical and electronic devices. Computers are also often used to test various mechanical, electrical, or thermal properties of the component under design. Scientists use computers to simulate the behaviour of complicated natural systems in animated motion-picture sequences. These pictorial visualizations can afford a clearer understanding of the multiple forces or variables at work in such phenomena as nuclear and chemical reactions, large-scale gravitational interactions, hydraulic flow, load deformation, and physiological systems. Computer graphics are nowhere so visible as in the entertainment industry, which uses them to create the interactive animations of video games and the special effects in motion pictures. Computers have also come into increasing use in commercial illustration and in the digitalization of images for use in CD-ROM products, online services, and other electronic media.

Living in cyberspace

Ever smaller computers

Embedded systems

One can look at the development of the electronic computer as occurring in waves. The first large wave was the mainframe era, when many people had to share single machines. (The mainframe era is covered in the section The age of Big Iron.) In this view, the minicomputer era can be seen as a mere eddy in the larger wave, a development that allowed a favoured few to have greater contact with the big machines. Overall, the age of mainframes could be characterized by the expression “Many persons, one computer.”

The second wave of computing history was brought on by the personal computer, which in turn was made possible by the invention of the microprocessor. (This era is described in the section The personal computer revolution.) The impact of personal computers has been far greater than that of mainframes and minicomputers: their processing power has overtaken that of the minicomputers, and networks of personal computers working together to solve problems can be the equal of the fastest supercomputers. The era of the personal computer can be described as the age of “One person, one computer.”

Since the introduction of the first personal computer, the semiconductor business has grown into a $120 billion worldwide industry. However, this phenomenon is only partly ascribable to the general-purpose microprocessor, which accounts for about $23 billion in annual sales. The greatest growth in the semiconductor industry has occurred in the manufacture of special-purpose processors, controllers, and digital signal processors. These computer chips are increasingly being included, or embedded, in a vast array of consumer devices, including pagers, mobile telephones, automobiles, televisions, digital cameras, kitchen appliances, video games, and toys. While the Intel Corporation may be safely said to dominate the worldwide microprocessor business, it has been outpaced in this rapidly growing multibillion-dollar industry by companies such as Motorola, Inc.; Hitachi, Ltd.; Texas Instruments; Packard Bell NEC, Inc.; and Lucent Technologies Inc. This ongoing third wave may be characterized as “One person, many computers.”

Handheld computers

The origins of handheld computers go back to the 1960s, when Alan Kay, a researcher at Xerox's Palo Alto Research Center, promoted the vision of a small, powerful notebook-style computer that he called the Dynabook. Kay never actually built a Dynabook (the technology had yet to be invented), but his vision helped to catalyze the research that would eventually make his dream feasible.

It happened by small steps. The popularity of the personal computer and the ongoing miniaturization of the semiconductor circuitry and other devices first led to the development of somewhat smaller, portable—or, as they were sometimes called, luggable—computer systems. The first of these, the Osborne 1, designed by Lee Felsenstein, an electronics engineer active in the Homebrew Computer Club in San Francisco, was sold in 1981. Soon, most PC manufacturers had portable models. At first these “portables” looked like sewing machines and weighed in excess of 20 pounds (9 kg). Gradually they became smaller laptop-, notebook-, and then sub-notebook-sized and came with more powerful processors. These devices allowed people to use computers not only in the office or at home but while they were traveling—on airplanes, in waiting rooms, or even at the beach.

As the size of computers continued to shrink and microprocessors became more and more powerful, researchers and entrepreneurs explored new possibilities in mobile computing. In the late 1980s and early '90s, several companies came out with handheld computers, called personal digital assistants. PDAs typically replaced the cathode-ray tube screen with a more compact liquid-crystal display, and they either had a miniature keyboard or replaced the keyboard with a stylus and handwriting-recognition software that allowed the user to write directly on the screen. Like the first personal computers, PDAs were built without a clear idea of what people would do with them. In fact, people did not do much at all with the early models. To some extent, the early PDAs, made by Go Corporation and Apple Computer, Inc., were technologically premature; with their unreliable handwriting recognition, they offered little advantage over paper-and-pencil planning books.

The potential of this new kind of device was finally realized with the release in March 1997 of Palm Computing, Inc.'s Palm Pilot, which was about the size of a deck of playing cards and sold for around $400—approximately the same price as the MITS Altair, the first personal computer sold as a kit in 1974. The Pilot did not try to replace the computer but made it possible to organize and carry information with an electronic calendar, telephone number and address list, memo pad, and expense-tracking software and to synchronize that data with a PC. The device included an electronic cradle to connect to a PC and pass information back and forth. It also featured a data-entry system, called “Graffiti,” which involved writing with a stylus using a slightly altered alphabet that the device recognized. Its success encouraged numerous software companies to develop applications for it. In 1998 this market heated up further with the entry of several established consumer electronics firms using Microsoft's Windows CE operating system (a stripped-down version of the Windows system) to sell handheld computer devices and wireless telephones that could connect to PCs.

These small devices also often possessed a communications component and benefited from the sudden popularization of the Internet and the World Wide Web (discussed in the next section, The Internet).

Magnetic recording

Method of preserving sounds, pictures, and data in the form of electrical signals through the selective magnetization of portions of a magnetic material. The principle of magnetic recording was first demonstrated by the Danish engineer Valdemar Poulsen in 1900, when he introduced a machine called the telegraphone that recorded speech magnetically on steel wire.

In the years following Poulsen's invention, devices using a wide variety of magnetic recording mediums have been developed by researchers in Germany, Great Britain, and the United States. Principal among them are magnetic tape and disk recorders, which are used not only to reproduce audio and video signals but also to store computer data and measurements from instruments employed in scientific and medical research. Other significant magnetic recording devices include magnetic drum, core, and bubble units designed specifically to provide auxiliary data storage for computer systems.

Magnetic tape devices. Magnetic tape provides a compact, economical means of preserving and reproducing varied forms of information. Recordings on tape can be played back immediately and are easily erased, permitting the tape to be reused many times without a loss in quality of recording. For these reasons, tape is the most widely used of the various magnetic recording mediums. It consists of a narrow plastic ribbon coated with fine particles of iron oxide or other readily magnetizable material. In recording on tape, an electrical signal passes through a recording head as the tape is drawn past, leaving a magnetic imprint on the tape's surface. When the recorded tape is drawn past the playback or reproducing head, a signal is induced that is the equivalent of the recorded signal. This signal is amplified to the intensity appropriate to the output equipment.

Tape speeds for sound recording vary from less than 2 inches (5 centimetres) per second to as much as 15 in. (37.5 cm) per second. Video signals occupy a much wider bandwidth than do audio signals and require a much higher relative speed between the tape and the head. Data recording requires even greater speeds. The tape transport of a data-storage unit of a high-performance digital computer, for example, must be able to move the tape past the head at a rate of 200 in. (500 cm) per second.

Magnetic tape was initially designed for sound recording. German engineers developed an audio tape recording machine called the Magnetophon during World War II. U.S. and British researchers adapted the basic design of this device to create a magnetic tape recorder capable of high-quality sound reproduction in the late 1940s. Within a decade magnetic tape supplanted phonograph records for radio music programming. Prerecorded tapes in the form of cartridges and cassettes for sound systems in homes and automobiles were in widespread use by the late 1960s.

Related to the audio cassette recorder is a magnetic tape recording system that serves as a telephone answering device. Messages or instructions prerecorded on tape are reproduced automatically when a telephone user's number is dialed. The answering device then actuates the recording head, which records any messages that the caller wishes to leave.

In 1956 Charles P. Ginsburg and Ray Dolby of Ampex Corporation, a U.S. electronics firm, developed the first practical videotape recorder. Their machine revolutionized television broadcasting; recorded shows virtually replaced live telecasts with a few exceptions, such as coverage of sports events. Almost all programs are videotaped during their original telecasts, and individual broadcasters then rerun the shows at times most suitable for their own viewers. An increasing number of videotape recorders are used for recording television broadcasts received in private homes. Many such units can produce home movies if connected to an accessory video camera. Commercially produced video cassettes of popular motion pictures also can be played on these recorders. See also videotape recorder.

Magnetic tape was introduced as a data-storage medium in 1951, when it was used in the auxiliary memory of UNIVAC I, the first digital computer produced for commercial use. For about the next 10 years nearly all computers employed magnetic tape storage units. By the 1960s, however, magnetic disk and magnetic drum auxiliary memories began replacing the tape units in large-scale scientific and business data-processing systems that require extremely fast retrieval of stored information and programs. Magnetic tape devices, particularly those using cassettes, continue to be employed as a principal form of auxiliary memory in general-purpose minicomputers and microcomputers because of their low cost and great storage capacity. About 48,000 bits of information can be stored on one inch of tape.
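The figures above, about 48,000 bits per inch of tape moved past the head at up to 200 inches per second, imply the throughput and reel capacities below. The 2,400-foot reel length is an assumed example of a common full-size reel, not a figure from the text.

```python
# Back-of-envelope arithmetic from the densities and speeds above.
BITS_PER_INCH = 48_000
INCHES_PER_SECOND = 200

bits_per_second = BITS_PER_INCH * INCHES_PER_SECOND
print(bits_per_second)       # 9600000 bits/s
print(bits_per_second // 8)  # 1200000 bytes/s, i.e. ~1.2 MB/s

# Capacity of an assumed 2,400-foot reel at that density:
reel_inches = 2_400 * 12
print(BITS_PER_INCH * reel_inches // 8)  # 172800000 bytes, ~173 MB
```

These numbers make the trade-off concrete: enormous capacity for the era at very low cost, but only when read straight through from end to end.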

Magnetic tape recorders have also been widely used to record measurements directly from laboratory instruments and detection devices carried aboard planetary probes. The readings are converted into electrical signals and recorded on tape, which can be played back by researchers for detailed analysis and comparison.

Magnetic disk devices.

Magnetic disks are flat circular plates of metal or plastic, coated on both sides with iron oxide. Input signals, which may be audio, video, or data, are recorded on the surface of a disk as magnetic patterns or spots in concentric tracks by a recording head while the disk is rotated by a drive unit. The heads, which are also used to read the magnetic impressions on the disk, can be positioned anywhere on the disk with great precision. For computer data-storage applications, a stack of as many as 20 disks (called a disk pack) is mounted on the vertical spindle of a drive unit. The drive unit is equipped with multiple reading/writing heads.

These features give magnetic disk devices an advantage over tape recorders. A disk unit has the ability to read any given segment of an audio or video recording or block of data without having to pass over a major portion of its content sequentially; locating desired information on tape may take many minutes. In a magnetic disk unit, direct access to a precise track on a specific disk reduces retrieval time to a fraction of a second.
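The access-time contrast described above can be sketched numerically. Every cost figure below is an invented illustration, not a measured value:

```python
# Illustrative sketch (not a hardware model): contrast sequential
# search on a tape-like medium with direct access on a disk-like one.
# Block counts and per-operation costs are invented for the example.

MS_PER_BLOCK_PASS = 20     # assumed time for tape to pass one block
MS_PER_DISK_ACCESS = 80    # assumed head positioning + rotation time

def tape_access_ms(target_block: int) -> int:
    # Tape must pass every block before the target sequentially.
    return target_block * MS_PER_BLOCK_PASS

def disk_access_ms(target_block: int) -> int:
    # Disk heads move directly to the track holding the block.
    return MS_PER_DISK_ACCESS

target = 9_000
print(f"tape: {tape_access_ms(target) / 60_000:.0f} min, "
      f"disk: {disk_access_ms(target)} ms")  # tape: 3 min, disk: 80 ms
```

The linear-versus-constant cost is the whole story: tape access time grows with the distance to the data, while disk access time does not.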

Magnetic disk technology was applied to data storage in 1962. The random accessibility of data stored in disk units made these devices particularly suitable for use as auxiliary memories in high-speed computer systems. Small, flexible plastic disks called floppy disks were developed during the 1970s. Although floppy disks cannot store as much information as conventional disks or retrieve data as rapidly, they are adequate for applications such as those involving minicomputers and microcomputers where low cost and ease of use are of primary importance.

Magnetic disk recording has various other uses. Office dictating machines and transcribing units utilize the process for storing spoken messages for later use. Magnetic disk technology has also facilitated and improved a method known as “instant replay” that is widely used in live telecasts, especially of sports events. This method involves the immediate re-showing of, for example, a crucial play in a football game during a live-action broadcast. Videotape recorders were initially used for instant replay, but they proved too cumbersome. In 1967 Ampex developed a special videodisk machine that made it possible to locate and replay a desired action in less than four seconds.

Other magnetic recording devices.

Such magnetic recording mediums as drums and ferrite cores have been used for data storage since the early 1950s. A more recent development is the magnetic bubble memory, devised at Bell Telephone Laboratories in the late 1960s and brought into commercial use during the 1970s.

Auxiliary computer memories using a magnetic drum operate somewhat like tape and disk units. They store data in the form of magnetized spots in adjacent circular tracks on the surface of a metal cylinder. A single drum may carry from one to 200 tracks. Data are recorded and read by heads positioned near the surface of the drum as the drum rotates at about 3,000 revolutions per minute. Drums provide rapid, random access to stored information. They are able to retrieve information faster than tape and disk units, but cannot store as much data as either of them.
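The 3,000-rpm figure implies an average rotational delay: on average the desired spot is half a revolution away from the read/write head. A quick check:

```python
# Average rotational latency of a drum memory spinning at the
# ~3,000 rpm figure given above: the desired spot is, on average,
# half a revolution away from the fixed read/write head.

RPM = 3_000
ms_per_revolution = 60_000 / RPM          # 20 ms per full turn
avg_latency_ms = ms_per_revolution / 2    # expect half a turn of waiting

print(f"{avg_latency_ms:.0f} ms average latency")  # 10 ms
```

A 10-millisecond average wait explains why drums counted as "rapid, random access" beside tape units that could take minutes.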

Core memories use hundreds of thousands of magnetizable ferrite cores that resemble tiny doughnuts. Through each of the cores run two or more wires, which carry electrical currents that magnetize the cores in either a clockwise or counterclockwise direction. Cores magnetized in one direction are said to represent 0, and those in the opposite direction to represent 1. The 0 and 1 correspond to the digits of the binary system, the basis for digital computer operations. Data are stored by magnetizing an array of cores in a particular combination of 0s and 1s. Core storage units allow extremely fast, random access to stored information. Unlike other magnetic memory devices, which must wait for tape reels to unwind or drums to rotate, a core unit retrieves data simply by sending electrical pulses to the specific array of cores holding the desired information. The pulses reverse the direction of magnetization in the cores, which induces output signals corresponding to the stored data.
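Because the read pulse reverses the cores' magnetization, reading is destructive and the controller must rewrite the value afterwards. A toy model of that cycle — the class and method names are invented for illustration:

```python
# Toy model of a magnetic-core plane: each core holds one bit as a
# magnetization direction. Reading is destructive (the pulse flips
# the core, and the flip or its absence is the output signal), so
# the controller rewrites the bit after every read. Names here are
# invented for illustration, not drawn from any real controller.

class CorePlane:
    def __init__(self, size: int):
        self.cores = [0] * size        # 0/1 = the two magnetization directions

    def write(self, addr: int, bit: int) -> None:
        self.cores[addr] = bit

    def read(self, addr: int) -> int:
        bit = self.cores[addr]
        self.cores[addr] = 0           # the read pulse resets the core
        if bit:
            self.write(addr, 1)        # controller restores the stored value
        return bit

plane = CorePlane(4096)
plane.write(42, 1)
print(plane.read(42), plane.read(42))  # 1 1 -- value survives thanks to the rewrite
```

The rewrite step is why core cycle times were quoted as read-plus-restore: every access costs two magnetization operations, not one.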

The magnetic bubble memory is more economical to operate than mechanical tape, disk, or drum units and is considerably more compact. The device consists of a chip of synthetic garnet about the size of a matchbook. It stores data in tiny cylindrically shaped magnetic domains called bubbles that appear and disappear under the control of an electromagnetic field. The presence and absence of the bubbles represent information in binary form in much the same way as do the two states of magnetic cores. As each tiny garnet chip accommodates hundreds of thousands of binary digits, enormous amounts of data can be stored in a memory unit composed of a small stack of these chips.

Operating system

Software that controls the many different operations of a computer and directs and coordinates its processing of programs. An operating system is a remarkably complex set of instructions that schedules the series of jobs (user applications) to be performed by the computer and allocates them to the computer's various hardware systems, such as the central processing unit, main memory, and peripheral systems. The operating system directs the central processor in the loading, storage, and execution of programs and in such particular tasks as accessing files, operating software applications, controlling monitors and memory storage devices, and interpreting keyboard commands. When a computer is executing several jobs simultaneously, the operating system allocates the computer's time and resources in the most efficient manner, rapidly switching the processor among jobs and prioritizing some over others in a process called time-sharing. An operating system also governs a computer's interactions with other computers in a network.
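The time-sharing idea can be sketched as a minimal round-robin scheduler: each job gets one fixed slice of processor time in turn, and unfinished jobs rejoin the back of the queue. Job names and the quantum below are invented for the example:

```python
# Minimal round-robin time-sharing sketch: the "processor" is given
# to each job for one fixed quantum in turn; jobs with work left
# rejoin the back of the queue. A real operating system adds
# priorities, I/O blocking, and preemption on top of this idea.

from collections import deque

def run_round_robin(jobs: dict[str, int], quantum: int) -> list[str]:
    """jobs maps a job name to its remaining work units; returns the run order."""
    queue = deque(jobs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                   # give the job one time slice
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not finished: requeue at the back
    return order

print(run_round_robin({"editor": 2, "compiler": 5, "printer": 1}, quantum=2))
# ['editor', 'compiler', 'printer', 'compiler', 'compiler']
```

Interleaving slices this way is what lets one processor appear to run several jobs "simultaneously."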

The English mathematician Charles Babbage conceived the first set of operating instructions for a digital computer in the design of his “analytical engine” (1834), which was never built. The first operational stored-program computer was completed in 1949 at the University of Cambridge. The operating systems that came into wide use between 1950 and 1980 were developed mostly by private companies to operate proprietary mainframe computers and applications. The most popular of these systems—which are used to run mainframes built by the industry leader, International Business Machines Corporation (IBM)—include MVS, DOS/VSE, and VM.

In addition to proprietary systems, open, or portable, operating systems have been developed to run computers built by many different manufacturers. Open operating systems rose to prominence during the 1980s and are now widely used to run personal computers (PCs) and workstations (high-performance PCs). The dominant operating system is the disk operating system (DOS) developed by Microsoft Corporation. Also popular is Microsoft's Windows, originally an adjunct to DOS that provides a graphical user interface; its successor, Windows NT, is a complete operating system in its own right.

Besides mainframe and PC-type operating systems, network operating systems have been developed that allow PCs and workstations to share peripheral devices and communicate with a mainframe computer or a server (i.e., a device that stores information and assists in the operation of a network of computers). Network operating systems usually act as an additional layer above a primary operating system, such as DOS. Dominant in this category are UNIX, an operating system developed at Bell Laboratories that is widely used on networked workstations (its rights were held for a time by Novell Inc.), and Novell's NetWare, the leading network operating system.


