Development of computers
First of all, we should mention that computer science became an independent discipline only around 1960, yet it has proved to be one of the most promising and profitable intellectual investments. Given its great importance and its influence on people's everyday life, we consider this branch of human knowledge well worth covering in this paper. We will try to present the basic ideas of the field and to trace the development of computer technologies from their beginnings until now, paying special attention to the steps these technologies took along the way.
The roots of computer science go back to the related fields of electrical engineering and mathematics. The invention of the transistor and the miniaturization of circuits, along with the invention of electronic, magnetic, and optical media for storing information, resulted from great progress in electrical engineering and physics. Most important here, mathematics is the source of one of the key concepts in the development of the computer: the binary number system. Boolean algebra, developed in the 19th century, supplied a formalism for designing circuits with input values of 0 and 1 (true and false in the terminology of logic), and in 1936 Alan Turing built on such formal ideas when he specified the so-called Turing machine, a theoretical device that manipulates an infinite string of 0s and 1s.
Satisfying the needs of users and their applications led to the development of assembly language, which in turn led to the creation and development of operating systems, and then to a resurgence of interest in numerical methods, a field where mathematics had already been showing its best for centuries.
Although the first machines appeared in the 1940s, the term “artificial intelligence” was not coined until 1956. Artificial intelligence, the ability of computers to carry out tasks that are typically thought of as requiring human intelligence, pushed scientists to work harder at software engineering, which arose as a distinct area of study in the late 1970s.
The PC industry began with the introduction of the Apple II by Apple Computer, Inc. in 1977. But the world's dominant computer maker, the IBM Corporation, entered the market in 1981, and thanks to IBM's large sales organization and reliable hardware, its machines soon became the world's most popular personal computers.
The memory capacity of personal computers increased from 64 kilobytes in the late 1970s to 100 megabytes by the early 1990s and is still growing very fast. That is why the trend of the 1990s in the computer industry has been toward multimedia formats: the market for conventional types of computer, those that have computation and data processing as their major function, has begun to become saturated. Multimedia computers are those that can process graphics, sound, video, and animation in addition to traditional data processing. Nowadays more than 100,000,000 computers and 500,000 workstations are in use throughout the world, and because of such high-volume production, computers tend to be inexpensive, so that almost everyone can afford such a machine. This paper is therefore intended to help those who want to become better acquainted with the world of computers.
Computer science as an independent discipline dates to only about 1960, although the electronic digital computer that is the object of its study was invented some two decades earlier.
The roots of computer science lie primarily in the related fields of electrical engineering and mathematics. Electrical engineering provides the basics of circuit design--namely, the idea that electrical impulses input to a circuit can be combined to produce arbitrary outputs.
The invention of the transistor and the miniaturization of circuits, along with the invention of electronic, magnetic, and optical media for the storage of information, resulted from advances in electrical engineering and physics. Mathematics is the source of one of the key concepts in the development of the computer--the idea that all information can be represented as sequences of zeros and ones. In the binary number system, numbers are represented by a sequence of the binary digits 0 and 1 in the same way that numbers in the familiar decimal system are represented using the digits 0 through 9. The relative ease with which two states (e.g., high and low voltage) can be realized in electrical and electronic devices led naturally to the binary digit, or bit, becoming the basic unit of data storage and transmission in a computer system.
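As a minimal sketch of this idea, the standard repeated-division-by-two procedure that yields a number's binary digits can be written in a few lines (Python is used here purely for illustration):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary digit string."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder is the next binary digit
        n //= 2                  # shift attention to the remaining digits
    return "".join(reversed(bits))

print(to_binary(9))    # 1001: 1*8 + 0*4 + 0*2 + 1*1
print(to_binary(255))  # 11111111: eight 1-bits, the maximum value of one byte
```

The same sequence of 0s and 1s can then stand for a number, a character code, or a pattern of high and low voltages, which is exactly why the bit became the basic unit of storage and transmission.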
The Boolean algebra developed in the 19th century supplied a formalism for designing a circuit with input values of 0's and 1's (false or true, respectively, in the terminology of logic) to yield any desired combination of 0's and 1's as output. Theoretical work on computability, which began in the 1930s, provided the needed extension to the design of whole machines; a milestone was the 1936 specification of the conceptual Turing machine (a theoretical device that manipulates an infinite string of 0's and 1's) by the British mathematician Alan Turing and his proof of the model's computational power. Another breakthrough was the concept of the stored-program computer, usually credited to the German-American mathematician John von Neumann. This idea--that instructions as well as data should be stored in the computer's memory for fast access and execution--was critical to the development of the modern computer. Previous thinking was limited to the calculator approach, in which instructions are entered one at a time.
The needs of users and their applications provided the main driving force in the early days of computer science, as they still do to a great extent today. The difficulty of writing programs in the machine language of 0's and 1's led first to the development of assembly language, which allows programmers to use mnemonics for instructions (e.g., ADD) and symbols for variables (e.g., X). Such programs are then translated by a program known as an assembler into the 0/1 encoding used by the computer. Other pieces of system software known as linking loaders combine pieces of assembled code and load them into the machine's main memory unit, where they are then ready for execution. The concept of linking separate pieces of code was important, since it allowed "libraries" of programs to be built up to carry out common tasks--a first step toward the increasingly emphasized notion of software reuse. Assembly language was found to be sufficiently inconvenient that higher-level languages were invented in the 1950s for easier, faster programming; along with them came the need to compile the high-level language programs (i.e., translate them into machine code). As programming languages became more powerful and abstract, building efficient compilers that create high-quality code in terms of execution speed and storage consumption became an interesting computer science problem in itself.
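What an assembler does can be sketched as a table lookup from mnemonics to numeric opcodes. The three-instruction set below (LOAD, ADD, STORE) and its 8-bit encoding are invented purely for illustration; they do not correspond to any real machine:

```python
# Hypothetical opcode table: each mnemonic maps to a 4-bit operation code.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

def assemble(line: str) -> int:
    """Encode 'MNEMONIC operand' as one 8-bit word: 4-bit opcode, 4-bit operand."""
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 4) | int(operand)

program = ["LOAD 5", "ADD 3", "STORE 7"]
machine_code = [assemble(line) for line in program]
print([f"{word:08b}" for word in machine_code])
# ['00010101', '00100011', '00110111']
```

A real assembler also resolves symbolic variable names to memory addresses, but the core task, translating human-readable mnemonics into the machine's 0/1 encoding, is the same.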
Increasing use of computers in the early 1960s provided the impetus for the development of operating systems, which consist of system-resident software that automatically handles input and output and the execution of jobs. Throughout the history of computers, the machines have been utilized in two major applications: (1) computational support of scientific and engineering disciplines and (2) data processing for business needs.
The demand for better computational techniques led to a resurgence of interest in numerical methods and their analysis, an area of mathematics that can be traced to the methods devised several centuries ago by physicists for the hand computations they made to validate their theories. Business applications brought about in the 1950s the design of information systems consisting of files of records stored on magnetic tape. The invention of magnetic-disk storage, which allows rapid access to an arbitrary record on the disk, led not only to more cleverly designed file systems but also, in the 1960s and '70s, to the concept of the database and the development of the sophisticated database management systems now commonly in use.
Data structures--e.g., lists, arrays, and queues--and the development of optimal algorithms for inserting, deleting, and locating data in such structures have constituted major areas of theoretical computer science since its beginnings, because of the heavy use of such structures by virtually all computer software--notably compilers, operating systems, and file systems. Artificial intelligence, the ability of computers to carry out tasks that are typically thought of as requiring human intelligence, is a concept that originated with the first computers in the 1940s, although the term was not coined until 1956.
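The queue mentioned above can be illustrated with Python's standard library, whose deque type gives constant-time insertion at one end and removal at the other, exactly the operations whose efficiency theoretical computer science studies:

```python
from collections import deque

queue = deque()
queue.append("job1")  # insert at the tail
queue.append("job2")
queue.append("job3")

first = queue.popleft()  # remove from the head: first in, first out
print(first)        # job1
print(list(queue))  # ['job2', 'job3']
```

An operating system scheduling print jobs or a compiler processing a work list relies on precisely this first-in, first-out discipline.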
Computer graphics was introduced in the early 1950s with the display of data or crude images on hard-copy plotters and cathode-ray tube (CRT) screens. Expensive hardware and the limited availability of software kept the field from growing until the early 1980s, when bit-map graphics became affordable. (A bit map is a ones-and-zeros representation in main memory of the rectangular array of black and white points (pixels, or picture elements) on the screen. The sudden availability of inexpensive, large random-access memory made bit maps affordable.) Bit-map technology, together with high-resolution display screens and the development of graphics standards that make software machine-independent, has led to the explosive growth of the field. Software engineering arose as a distinct area of study in the late 1970s as part of an attempt to introduce discipline and structure into the software design and development process.
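A bit map of the kind described above can be sketched as a grid of 0s and 1s held in memory; the 5x5 letter "T" below is an invented example, with 1 standing for a black pixel and 0 for a white one:

```python
# One bit per pixel: 1 = black, 0 = white (a 5x5 letter "T").
bitmap = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# Rendering is just mapping each bit to a dark or light mark.
for row in bitmap:
    print("".join("#" if bit else "." for bit in row))
```

A real display simply does this mapping in hardware for every pixel on the screen, which is why large, cheap random-access memory was the enabling technology.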
A personal computer (PC) is a computer designed for use by only one person at a time. A personal computer is a type of microcomputer--i.e., a small digital computer that uses only one microprocessor. (A microprocessor is a semiconductor chip that contains all the arithmetic, logic, and control circuitry needed to perform the functions of a computer's central processing unit.) A typical personal computer assemblage consists of a central processing unit; primary, or main, memory in the form of semiconductor RAM chips; various input/output devices, including a display screen (cathode-ray tube), keyboard and mouse, modem, and printer; and secondary, or external, storage, usually in the form of hard magnetic disks, floppy disks, or CD-ROMs (compact disc read-only memory). Personal computers generally are low-cost machines that can perform most of the functions of larger computers but use software oriented toward easy, single-user applications.
Computers small and inexpensive enough to be purchased by individuals for use in their homes first became feasible in the 1970s, when large-scale integration made it possible to construct a sufficiently powerful microprocessor on a single semiconductor chip. A small firm named MITS made the first personal computer, the Altair. This computer, which used the Intel Corporation's 8080 microprocessor, was developed in 1974. Though the Altair was popular among computer hobbyists, its commercial appeal was limited, since purchasers had to assemble the machine from a kit.
The personal computer industry truly began in 1977, when Apple Computer, Inc., founded by Steven P. Jobs and Stephen G. Wozniak, introduced the Apple II, one of the first pre-assembled, mass-produced personal computers. Radio Shack and Commodore Business Machines also introduced personal computers that year. These machines used 8-bit microprocessors (which process information in groups of 8 bits, or binary digits, at a time) and possessed rather limited memory capacity--i.e., the ability to address a given quantity of data held in memory storage. But because personal computers were much less expensive than mainframes, they could be purchased by individuals, small and medium-sized businesses, and primary and secondary schools. The Apple II received a great boost in popularity when it became the host machine for VisiCalc, the first electronic spreadsheet (computerized accounting program). Other types of application software soon developed for personal computers.
The IBM Corporation, the world's dominant computer maker, did not enter the new market until 1981, when it introduced the IBM Personal Computer, or IBM PC. The IBM PC was only slightly faster than rival machines, but it had about 10 times their memory capacity, and it was backed by IBM's large sales organization. The IBM PC was also the host machine for 1-2-3, an extremely popular spreadsheet introduced by the Lotus Development Corporation in 1982. The IBM PC became the world's most popular personal computer, and both its microprocessor, the Intel 8088, and its operating system, which was adapted from the Microsoft Corporation's MS-DOS system, became industry standards. Rival machines that used Intel microprocessors and MS-DOS became known as "IBM compatibles" if they tried to compete with IBM on the basis of additional computing power or memory and "IBM clones" if they competed simply on the basis of low price.
In 1983 Apple introduced Lisa, a personal computer with a graphical user interface (GUI) to perform routine operations. A GUI is a display format that allows the user to select commands, call up files, start programs, and do other routine tasks by using a device called a mouse to point to pictorial symbols (icons) or lists of menu choices on the screen. This type of format had certain advantages over interfaces in which the user typed text- or character-based commands on a keyboard to perform routine tasks. A GUI's windows, pull-down menus, dialog boxes, and other controlling mechanisms could be used in new programs and applications in a standardized way, so that common tasks were always performed in the same manner. The Lisa's GUI became the basis of Apple's Macintosh personal computer, which was introduced in 1984 and proved extremely successful. The Macintosh was particularly useful for desktop publishing because it could lay out text and graphics on the display screen as they would appear on the printed page.
The Macintosh's graphical interface style was widely adopted by other manufacturers of personal computers and PC software. In 1985 the Microsoft Corporation introduced Microsoft Windows, a graphical user interface that gave MS-DOS-based computers many of the same capabilities as the Macintosh. Windows became the dominant operating environment for personal computers.
These advances in software and operating systems were matched by the development of microprocessors containing ever-greater numbers of circuits, with resulting increases in the processing speed and power of personal computers. The Intel 80386 32-bit microprocessor (introduced 1985) gave the Compaq Computer Corporation's Compaq 386 (introduced 1986) and IBM's PS/2 family of computers (introduced 1987) greater speed and memory capacity. Apple's Mac II computer family made equivalent advances with microprocessors made by the Motorola Corporation. The memory capacity of personal computers had increased from 64 kilobytes (64,000 characters) in the late 1970s to 100 megabytes (100 million characters) by the early '90s.
By 1990 some personal computers had become small enough to be completely portable; they included laptop computers, which could rest in one's lap; notebook computers, which were about the size of a notebook; and pocket, or palm-sized, computers, which could be held in one's hand. At the high end of the PC market, multimedia personal computers equipped with CD-ROM players and digital sound systems allowed users to handle animated images and sound (in addition to text and still images) that were stored on high-capacity CD-ROMs.
Personal computers were increasingly interconnected with each other and with larger computers in networks for the purpose of gathering, sending, and sharing information electronically. The uses of personal computers continued to multiply as the machines became more powerful and their application software proliferated. By 1995 about one-third of all households in the United States owned a personal computer.
The trend of the 1990s in the computer industry is toward multimedia formats, as the market for conventional types of computer--those that have computation and data processing as their major functions--has begun to become saturated. Multimedia computers are systems that can process graphics, sound, video, and animation in addition to traditional data processing. Videocassette recorders, televisions, telephones, and audiocassette players have recently undergone a change in technology from analog to digital formats.
Television images, for example, can be processed by computer programs once they have been converted to digital signals, while those in conventional analog signals cannot. In other words, digital video images can be zoomed up or down, reshaped, or rearranged by the appropriate software. Also, due to advances in video-signal compression technology, the memory space required for storing a video program has been greatly reduced. For example, a CD-ROM--i.e., a compact disc with a diameter of 12 centimetres (4 3/4 inches) that resembles an audio compact disc and from which data can be read by optical means--has a memory capacity of about 650 megabytes but can store only 30 seconds of a video segment without video compression. (A CD-ROM with a memory capacity of 650 megabytes corresponds to about 600 floppy disks.) Video programs can be compressed roughly 160 times by a video compression method called DVI (digital video interactive), for example, and with this technique a CD-ROM can store an approximately 70-minute-long video program.
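These figures can be checked with a little arithmetic: scaling the 30-second uncompressed capacity by the quoted 160x compression ratio gives a playing time of the same order as the one stated above.

```python
capacity_mb = 650          # CD-ROM capacity quoted in the text
uncompressed_seconds = 30  # uncompressed video that fits on one disc
compression_ratio = 160    # approximate DVI compression factor

uncompressed_rate = capacity_mb / uncompressed_seconds   # MB per second
compressed_seconds = uncompressed_seconds * compression_ratio

print(round(uncompressed_rate, 1))  # 21.7 MB/s of uncompressed video
print(compressed_seconds / 60)      # 80.0 minutes
```

The raw arithmetic gives about 80 minutes; the roughly 70-minute figure in the text is consistent once audio tracks and file-system overhead are budgeted for.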
Multimedia has important applications for consumer products and for business needs. Video scenes that are captured by camcorders can be combined with text, sound, and data and can be viewed on television sets in homes, schools, or offices. These multimedia presentations are becoming useful educational and communication tools. For example, there are available encyclopaedias that contain video programs depicting animal behaviours, geomorphic processes, and other natural phenomena. Automobile mechanics can watch videos that demonstrate how to repair new models. In business applications, documents can be annotated with voice or video. New consumer products can be more effectively marketed by demonstrating how they can be used. CD-ROMs of numerous other subjects have been recently published; all of them can be viewed on TV monitors using multimedia computers. These multimedia computer systems can, in turn, be incorporated into computer networks, enhancing the effectiveness of communication. Exchange of still images or video programs and oral discussions about illustrations are becoming economically feasible for the first time.
This multitude of new products and capabilities has been made possible by the tremendous progress of microprocessor technology. Because of the advances in this area, personal computers have become more powerful, smaller, and less expensive, which has enabled computer networks to proliferate. Many of the tasks that were traditionally performed by mainframes have been transferred to personal computers connected to communications networks. Although the mainframe continues to be produced and serves a useful purpose, it has been used more often as one of many different computers and peripheral devices connected to computer networks. In this new role, the function of mainframes as powerful processors of database systems is becoming important, and, as a result, massively parallel computers with hundreds or thousands of microprocessors are being produced as the high end of mainframes. In addition to being powerful, the microprocessors used for this purpose must be inexpensive, but low costs can be achieved only if they are mass-produced.
Throughout the world, more than 100,000,000 personal computers and 500,000 workstations are in use, whereas only several hundred supercomputers are in operation; the numbers of mainframes and minicomputers fall somewhere between those of supercomputers and workstations. Because of such high-volume production, microprocessors for personal computers or workstations tend to be inexpensive and are available for use in massively parallel computers as well.
Computers are generally classified into three types on the basis of how data are represented: analog, digital, and hybrid. These are described below, along with the reasons why digital computers are now far more extensively used than the other two types.
In a digital computer, information is processed by manipulating numbers represented as discrete digits. The abacus is the simplest and oldest form of digital computational device, although it can hardly be called a true computer. An abacus has several parallel rods, each of which is divided into two sections. The first section has a single bead (some abaci have two beads), representing zero if the bead is at the upper position along the rod and five if it is at the lower position. The second section has four beads (some abaci have five beads), representing one through four depending on how many beads are in the upper position along the rod. Each digit of a number is placed on a consecutive rod of the abacus. Calculation is done by following the rules for arithmetic operations that the human brain must remember. Addition, subtraction, multiplication, or division in each digit position is performed using the abacus simply as a register to store initial numbers and final results.
In an analog computer, calculations are performed on numbers in analog form, as continuously variable physical quantities, such as the length of a certain physical device. For example, the analog addition of two numbers, A and B, can be done by using two rules made of metal. Length A is marked on one rule and then, starting from that position, length B is measured with the second rule. The new position on the first rule represents A + B. Multiplication in analog form can be done by using logarithmic scales on the two rules instead of linear scales. In other words, A and B are marked at lengths log₁₀A and log₁₀B, respectively, on the upper and lower rules. Then, the length log₁₀A + log₁₀B = log₁₀AB on the upper rule is the logarithm of the product AB.
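The slide-rule principle described here is easy to reproduce numerically: adding the two log-scale lengths and reading the result back off the log scale recovers the product.

```python
import math

A, B = 4.0, 25.0
total_length = math.log10(A) + math.log10(B)  # add the two log-scale lengths
product = 10 ** total_length                  # read the answer back off the scale

print(round(product, 6))  # 100.0, i.e. A * B
```

On a physical slide rule the addition of lengths happens mechanically and the answer is read off to perhaps three significant figures, which is exactly the limited precision of analog computation discussed below.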
In the case of electronic computers, number representation and calculation can be done in digital or analog form. In electronic digital computers, numbers are represented in digital form (in discrete values of voltages or currents in electronic circuits). Calculation is performed using rules for addition, subtraction, multiplication, and division that are represented in digital form. In modern electronic digital computers, numbers are represented in binary form, a special case of the digital form, and computations are also carried out in binary because of the advantages provided by this format, as explained in the following paragraphs. In electronic analog computers, numbers are represented in analog form. Calculations performed by digital computers are different from those done by analog computers, in a manner akin to the differences between the calculating methods of the abacus and slide rule. Hybrid computers are a combination of analog and digital parts, so their operation incorporates both analog and digital computing technologies.
Binary numbers are impractical for daily use, but digital computers whose operations are based on binary form provide far greater precision and reliability than do analog computers. If greater precision is desired in a number represented in a digital computer, it can be expressed with more digits (28.8999 instead of 28.9).
Fluctuations in the signals of digital computers based on binary form are less problematic because each signal need represent only one of two widely separated values, so a slightly degraded signal can be restored to its correct value. Such correction is difficult for digital computers based on decimal (or other nonbinary) form and for analog computers, because it is difficult to determine which of the many possible values is the correct one. Because of this advantage, digital data in binary form can be processed repeatedly without mistakes, not only during calculation but also during storage into or retrieval from a magnetic disk or RAM, by restoring the original values each time.
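This restoration can be sketched as snapping each noisy signal value to the nearer of the two legal levels; the 0-volt and 5-volt levels below are illustrative assumptions, not figures taken from the text:

```python
# Two legal signal levels (illustrative values for a binary circuit).
LOW, HIGH = 0.0, 5.0

def restore(voltage: float) -> float:
    """Snap a noisy voltage to whichever of the two legal levels is closer."""
    return LOW if abs(voltage - LOW) < abs(voltage - HIGH) else HIGH

noisy = [0.3, 4.6, 0.9, 5.2, -0.1]
print([restore(v) for v in noisy])  # [0.0, 5.0, 0.0, 5.0, 0.0]
```

With ten decimal levels, or a continuum of analog levels, the same amount of noise could push a value past the midpoint between neighbouring levels, and the error would be unrecoverable.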
In the manufacturing process of an integrated-circuit chip, it is difficult to make all the numerous electronic components inside the chip precisely meet their respective specifications. This fact notwithstanding, an integrated circuit for a binary-based digital computer has a far greater number of electronic components that work correctly than one for an analog computer or a nonbinary-based digital computer, owing to the restorability of binary values, as explained previously. Thus, the rejection rate of manufactured integrated circuits for binary digital computers is much lower than that for other computer types.
Programs, which are needed for easy change of a sequence of calculations, are stored in binary form for the reasons explained previously. In early analog computers, programs were changed by manually rewiring connections of adder, multiplier, and other circuits. These tedious methods were later replaced by programs electronically stored in binary form, although calculations were still done by analog circuits. If computational tasks are not very complex, analog computers have the advantage of performing faster calculations than digital computers, although precision is sacrificed.
For these reasons, digital computers are now far more widely used than analog computers or hybrid computers, and all present-day digital computers are based on binary representation.
The present-day development of computer science, in our country and abroad, is as important a topic as the role computers play in our everyday life.
Computers are sometimes unjustifiably thought to demand deep technical knowledge or proficiency in mathematics and electronics. In actuality, computers, like any other discipline, admit different levels of expertise. At the least specialized level, computer literacy involves knowing how to turn on a computer, start and stop simple application programs, and save and print information. At higher levels, computer literacy becomes more detailed, involving the ability of “power users” to manipulate complex applications and to program in languages such as BASIC or C. At the highest levels, computer literacy leads to specialized and technical knowledge of such topics as electronics and assembly language.
During the last 20 years, computing power has doubled about every 18 months thanks to the creation of faster microprocessors, the incorporation of multiple-microprocessor designs, and the development of new storage technologies. Ongoing research is focused on creating computers that use light and biological molecules instead of, or in combination with, conventional electronic computer circuitry. These technological advances, coupled with new methods for interconnecting computers, such as the World Wide Web, will make PCs even more powerful and useful. That is why we believe this paper to be of use both to those who have just started getting acquainted with the PC and to those who are “power users” already, because there is never too much information.