A study guide for developing spoken English skills. Omsk: OmSTU Publishing House, 2009. UDC 004:811.111(075)


    PART II

    SUPPLEMENTARY MATERIAL
    Tasks for the texts:

    1. Read the texts and identify the main idea of each.

    2. Give the text a title if necessary.

    3. Write a logical outline of the text.

    4. Using the phrases given below, write a descriptive abstract of the text.


    (1) The text (article) entitled … deals with…

    studies…

    discusses…

    is devoted to…

    is about…

    (2) Firstly, the author speaks about…

    Secondly, he points out…

    shows…

    gives…

    mentions…

    Then he comments on…

    illustrates…

    (3) In conclusion, he summarises…

    notes…

    (4) To my mind, this text is of great use to…

    is important…

    In my opinion, this article is interesting…


    1. Discuss the content of the text with a partner.

    Words and expressions to help you when discussing the texts.

    1. I'd like to ask you – хотел бы спросить вас

    2. Let me ask you – позвольте спросить вас

    3. One more question – еще один вопрос

    4. And the last question – и последний вопрос

    5. I wonder why – интересно, почему

    6. I don’t think so – я так не думаю

    7. I disagree – я не согласен

    8. I object – я возражаю

    9. I reject – я отвергаю

    10. On the contrary – напротив

    11. I am afraid you are wrong – боюсь, что вы не правы

    12. Surely – несомненно

    13. Certainly – конечно

    14. I fully agree with – я полностью согласен с

    15. Exactly, Quite so – совершенно верно

    16. I think so too – я тоже так думаю


    TEXT I

    SIX COMPUTER GENERATIONS

    The first three generations of computers have traditionally been identified as those using vacuum tubes, transistors, and integrated circuits, respectively. The fourth generation was never so clearly delineated, but has generally been associated with the use of large scale integrated circuits that enabled the creation of microprocessor chips. The next major deviation in computer technology, therefore, could be considered (in 1980) to be the fifth generation.

    The development of the fifth generation of computer systems is characterized mainly by the acceptance of parallel processing. Until this time parallelism was limited to pipelining and vector processing, or at most to a few processors sharing jobs. The fifth generation saw the introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace – by 1990 it was possible to build chips with a million components – and semiconductor memories became standard on all computers.

    All of the mainstream commercial computers to date have followed very much in the footsteps of the original stored program computer, the EDVAC, attributed to John von Neumann. Thus, this conventional computer architecture is referred to as “von Neumann”. It has been generally accepted that the computers of the future would need to break away from this traditional, sequential kind of processing in order to achieve the kinds of speeds necessary to accommodate the applications expected to be wanted or required. It is expected that future computers will need to be more intelligent, providing natural language interfaces, able to “see” and “hear”, and having a large store of knowledge. The amount of computing power required to support these capabilities will naturally be immense.

    Other new developments were the widespread use of computer networks and the increasing use of single-user workstations. Prior to 1985, large-scale parallel processing was viewed as a research goal, but two systems introduced around this time are typical of the first commercial products to be based on parallel processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module (but each processor had its own local cache). The machine was designed to compete with the DEC VAX-780 as a general-purpose Unix system, with each processor working on a different user’s job. However, Sequent provided a library of subroutines that would allow programmers to write programs that would use more than one processor, and the machine was widely used to explore parallel algorithms and programming techniques.

    The Intel iPSC-1, nicknamed “the hypercube”, took a different approach. Instead of using one memory module, Intel connected each processor to its own memory and used a network interface to connect processors. This distributed memory architecture meant memory was no longer a bottleneck and large systems (using more processors) could be built. The largest iPSC-1 had 128 processors. Toward the end of this period a third type of parallel processor was introduced to the market. In this style of machine, known as a data-parallel or SIMD, there are several thousand very simple processors. All processors work under the direction of a single control unit.

    Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector processors introduced parallel models, but there were very few (two to eight) processors in these parallel machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN) technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing environment toward a distributed computing environment in which each user has their own workstation for relatively simple tasks (editing and compiling programs, reading mail).

    One of the most dramatic changes in the sixth generation will be the explosive growth of wide area networking. Network bandwidth has expanded tremendously in the last few years and will continue to improve for the next several years. T1 transmission rates are now standard for regional networks, and the national “backbone” that interconnects regional networks uses T3. Networking technology is becoming more widespread than its original strong base in universities and government laboratories as it is rapidly finding application in K-12 education, community networks and private industry. A little over a decade after the warning voiced in the Lax report, the future of a strong computational science infrastructure is bright. The federal commitment to high performance computing has been further strengthened with the passage of two particularly significant pieces of legislation: the High Performance Computing Act of 1991, which established the High Performance Computing and Communication Program (HPCCP), and Sen. Gore’s Information Infrastructure and Technology Act of 1992, which addresses a broad spectrum of issues ranging from high performance computing to expanded network access and the necessity to make leading-edge technologies available to educators from kindergarten through graduate school.
    Comprehension questions

        1. How many generations of computers do you know?

        2. What are the peculiarities of the computers of the fifth generation?

        3. What are the peculiarities of the computers of the sixth generation?

        4. What do you know about Networking technology?


    TEXT II

    PROGRAMMING LANGUAGE

    A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that specify the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

    The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as automated looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, and many more are being created every year.

    Function: A programming language is a language used to write computer programs, which involve a computer performing some kind of computation or algorithm and possibly controlling external devices such as printers, robots, and so on.

    Target: Programming languages differ from natural languages in that natural languages are used only for interaction between people, while programming languages also allow humans to communicate instructions to machines.

    Constructs: Programming languages may contain constructs for defining and manipulating data structures or controlling the flow of execution.

    Some authors restrict the term “programming language” to those languages that can express all possible algorithms; sometimes the term “computer language” is used for more limited artificial languages.

    Non-computational languages, such as markup languages like HTML or formal grammars like BNF, are usually not considered programming languages.

    Usage: A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data.

    Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the “commands” are simply programs, whose execution is chained together. When a language is used to give commands to a software application (such as a shell) it is called a scripting language.
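
    The idea that shell “commands” are simply small programs chained together can be sketched in a few lines of Python. The command names (`upper`, `count`) and the dispatcher are invented here purely for illustration:

```python
# A minimal sketch of a "scripting" pattern: each command line a user
# types is looked up and executed as a small program, and the results
# are chained together. Command names are invented for illustration.

def cmd_upper(text):
    """One "command": shift text to upper case."""
    return text.upper()

def cmd_count(text):
    """Another "command": count the words in the argument."""
    return str(len(text.split()))

COMMANDS = {"upper": cmd_upper, "count": cmd_count}

def run_script(lines):
    """Execute a sequence of 'command argument' lines in order."""
    output = []
    for line in lines:
        name, _, arg = line.partition(" ")
        output.append(COMMANDS[name](arg))
    return output

print(run_script(["upper hello world", "count one two three"]))
```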

    Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.

    Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.

    Elements: All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.

    A programming language’s surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.

    The syntax of a language describes the possible combinations of symbols that form a syntactically correct program.
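
    The claim that syntax defines which combinations of symbols form a correct program can be made concrete with a short sketch: Python's built-in `compile` function acts as the syntax checker here.

```python
# Using the language implementation itself to decide whether a string
# is a syntactically correct program.

def is_syntactically_correct(source):
    """Return True if `source` parses as a Python program."""
    try:
        compile(source, "<string>", "exec")
        return True
    except SyntaxError:
        return False

print(is_syntactically_correct("x = 1 + 2"))   # a valid combination of symbols
print(is_syntactically_correct("x = + = 2"))   # an invalid combination
```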

    Implementation:

    An implementation of a programming language provides a way to execute that program on one or more configurations of hardware, or via a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of the BASIC programming language compile and then execute the source a line at a time.

    One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode which are going to be used to machine code, for direct execution on the hardware.
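
    As a rough illustration (assuming the CPython implementation), the standard `dis` module lists the bytecode instructions that the virtual machine interprets; these are the blocks a just-in-time compiler would translate to machine code.

```python
import dis

def add(a, b):
    return a + b

# Each Instruction is one bytecode operation the virtual machine
# would interpret at run time.
opnames = [ins.opname for ins in dis.Bytecode(add)]
print(opnames)
```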

    Comprehension questions

    1. What is the function of a programming language?

    2. What does the term ‘a programming language’ mean?

    3. What are the elements of a programming language?

    4. What does an implementation of a programming language provide?


    TEXT III

    COMPUTER-AIDED DESIGN

    Computer-Aided Design (CAD) is the use of computer technology to aid in the design and particularly the drafting (technical drawing and engineering drawing) of a part or product, including entire buildings. It is both a visual (or drawing) and symbol-based method of communication whose conventions are particular to a specific technical field.

    Current Computer-Aided Design software packages range from 2D vector-based drafting systems to 3D solid and surface modelers. Modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. Some CAD software is capable of dynamic mathematical modeling, in which case it may be marketed as CADD – computer-aided design and drafting.
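
    The three-dimensional rotation mentioned above reduces to a small calculation. This is a minimal sketch of rotating a point about the z-axis, not the internals of any particular CAD package:

```python
import math

def rotate_z(point, angle_deg):
    """Rotate a 3D point about the z-axis by the given angle in degrees."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# Rotating the x-axis unit vector by 90 degrees carries it onto the y-axis.
print(rotate_z((1.0, 0.0, 0.0), 90))
```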

    Software technologies: Originally software for Computer-Aided Design systems was developed with computer languages such as Fortran, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modeler and freeform surface systems are built around a number of key C (programming language) modules with their own APIs.

    A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry and/or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.

    Today most Computer-Aided Design computers are Windows based PCs. Some CAD systems also run on one of the Unix operating systems and with Linux. Some CAD systems such as QCad, NX or CATIA V5 provide multiplatform support including Windows, Linux, UNIX and Mac OS X.

    Computer-Aided Design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question. There are several different types of CAD. Each of these different types of CAD systems requires the operator to think differently about how he or she will use them and to design virtual components in a different manner for each.

    The Effects of CAD

    Starting in the late 1980s, the development of readily affordable Computer-Aided Design programs that could be run on personal computers began a trend of massive downsizing in drafting departments in many small to mid-size companies. As a general rule, one CAD operator could readily replace at least three to five drafters using traditional methods.

    Comprehension questions

    1. What does CAD (Computer-Aided Design) mean?

    2. What are most CAD computers based on?

    3. What are the main types of CAD?

    4. What are the effects of CAD?


    TEXT IV

    DATABASE

    A database is a structured collection of records or data that is stored in a computer system. The structure is achieved by organizing the data according to a database model. The model in most common use today is the relational model. Other models such as the hierarchical model and the network model use a more explicit representation of relationships.

    Depending on the intended use, there are a number of database architectures in use. Many databases use a combination of strategies. On-line Transaction Processing systems (OLTP) often use a row-oriented datastore architecture, while data-warehouse and other retrieval-focused applications like Google’s Big Table, or bibliographic database (library catalogue) systems may use a Column-oriented DBMS architecture.

    There are also other types of database which cannot be classified as relational databases.

    Database management systems

    A computer database relies on software to organize the storage of data. This software is known as a database management system (DBMS). Database management systems are categorized according to the database model that they support. The model tends to determine the query languages that are available to access the database. A great deal of the internal engineering of a DBMS, however, is independent of the data model, and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.

    A relational Database Management System (RDBMS) implements the features of the relational model outlined above. In this context, Date’s “Information Principle” states: “the entire information content of the database is represented in one and only one way”.
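
    The relational model and its query language can be sketched with the `sqlite3` module from the Python standard library. The table and rows below are invented for illustration:

```python
import sqlite3

# All information is represented one way: as rows in tables,
# retrieved with a query language (SQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("EDVAC Report", 1945), ("Brief History", 1997)])
rows = conn.execute(
    "SELECT title FROM books WHERE year > 1950 ORDER BY title").fetchall()
print(rows)   # [('Brief History',)]
conn.close()
```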
    Database models

    Products offering a more general data model than the relational model are sometimes classified as post-relational. The data model in such products incorporates relations but is not constrained by the Information Principle, which requires that all information is represented by data values in relations.

    Object database models

    In recent years, the object-oriented paradigm has been applied to database technology, creating a new programming model known as object databases. These databases attempt to bring the database world and the application programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database and its representation in the application program.

    Database storage structures

    Relational database tables/indexes are typically stored in memory or on hard disk in one of many forms: ordered/unordered flat files, ISAM, heaps, hash buckets, or B+ trees.
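
    As a toy sketch, the “hash buckets” form can be shown in a few lines: each record is placed in a bucket chosen by hashing its key, so lookup does not require an ordered file. This illustrates the idea only; it is not a real storage engine, and the keys and records are invented:

```python
# Records are spread across a fixed number of buckets by key hash.
NUM_BUCKETS = 4
buckets = [[] for _ in range(NUM_BUCKETS)]

def bucket_of(key):
    return hash(key) % NUM_BUCKETS

def insert(key, record):
    buckets[bucket_of(key)].append((key, record))

def lookup(key):
    """Search only the one bucket the key hashes to."""
    for k, rec in buckets[bucket_of(key)]:
        if k == key:
            return rec
    return None

insert("isbn-1", "Database Systems")
insert("isbn-2", "Embedded C")
print(lookup("isbn-1"))   # Database Systems
```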

    Comprehension questions

    1. What is a database?

    2. What are the famous database architectures in use?

    3. What do you know about database management systems (DBMS)?

    4. What is a Relational Database Management System (RDBMS)?


    TEXT V

    EMBEDDED SYSTEMS

    An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can do many different tasks depending on programming. Embedded systems control many of the common devices in use today.

    Since the embedded system is dedicated to specific tasks, design engineers can optimize it, reducing the size and cost of the product, or increasing the reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.

    Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

    In general, “embedded system” is not an exactly defined term, as many systems have some element of programmability. For example, handheld computers share some elements with embedded systems – such as the operating systems and microprocessors which power them – but are not truly embedded systems, because they allow different applications to be loaded and peripherals to be connected.

    Debugging. Embedded Debugging may be performed at different levels, depending on the facilities available. From simplest to most sophisticated they can be roughly grouped into the following areas:

    Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic).

    External debugging using logging or serial port output to trace operation using either a monitor in flash or using a debug server like the Remedy Debugger which even works for heterogeneous multicore systems.

    An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or NEXUS interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.

    Because an embedded system is often composed of a wide variety of elements, the debugging strategy may vary. For instance, debugging a software- (and microprocessor-) centric embedded system is different from debugging an embedded system where most of the processing is performed by peripherals (DSP, FPGA, co-processor).
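
    The “external debugging using logging” level above can be illustrated with Python's standard `logging` module: the running program writes a trace of its operation that is inspected outside the program, much as an embedded system streams trace output over a serial port. The sensor function and its calibration factor are invented:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("device")

def read_sensor(raw):
    log.debug("raw reading: %r", raw)     # trace the input
    value = raw * 0.5                     # invented calibration step
    log.debug("calibrated: %r", value)    # trace the result
    return value

print(read_sensor(42))
```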

    Comprehension questions

    1. What is an embedded system?

    2. What kinds of embedded systems are there in use?

    3. What is an in-circuit debugger (ICD)?

    4. What do you know about the debugging strategy?


    TEXT VI

    COMPUTER NETWORKING

    Computer networking is the engineering discipline concerned with communication between computer systems or devices. Networking, routers, routing protocols, and networking over the public Internet have their specifications defined in documents called RFCs. Computer networking is sometimes considered a sub-discipline of telecommunications, computer science, information technology and/or computer engineering. Computer networks rely heavily upon the theoretical and practical application of these scientific and engineering disciplines.

    A computer network is any set of computers or devices connected to each other with the ability to exchange data. Examples of different networks are:

    Local area network (LAN), which is usually a small network constrained to a small geographic area; wide area network (WAN), which is usually a larger network that covers a large geographic area; wireless LANs and WANs (WLAN & WWAN), which are the wireless equivalent of the LAN and WAN.

    All networks are interconnected to allow communication with a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet).

    Network administrators see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called, in the TCP/IP architecture, subnets, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

    Informally, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering standpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol.
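
    The subnet and registered-address-space ideas can be sketched with Python's standard `ipaddress` module, using a reserved documentation address block:

```python
import ipaddress

# A /24 block split into four /26 subnets; reachability of an address
# is then a question of which subnet it falls into.
net = ipaddress.ip_network("192.0.2.0/24")
subnets = list(net.subnets(new_prefix=26))
host = ipaddress.ip_address("192.0.2.130")

print([str(s) for s in subnets])
print(host in subnets[2])   # True: 192.0.2.130 lies in 192.0.2.128/26
```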

    Networking is a complex part of computing that makes up most of the IT industry. Without networks, almost all communication in the world would cease to happen. It is because of networking that telephones, televisions, the Internet, etc. work.

    Local area network (LAN)

    A local area network is a network that spans a relatively small space and provides services to a small number of people.

    A peer-to-peer or client-server method of networking may be used. A peer-to-peer network is one where each client shares its resources with other workstations in the network. Examples of peer-to-peer networks are small office networks, where resource use is minimal, and home networks. A client-server network is one where every client is connected to the server and to each other. Client-server networks use servers in different capacities. These can be classified into two types: single-service servers, where the server performs one task, such as a file server or print server; while other servers can not only perform in the capacity of file servers and print servers, but also conduct calculations and use these to provide information to clients (Web/Intranet Server). Computers may be connected in many different ways, including Ethernet cables, wireless networks, or other types of wires such as power lines or phone lines.
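
    A minimal client-server sketch using Python's standard `socket` and `threading` modules: a server thread accepts one client on the local machine and echoes its message back, which is the essence of the client-server method described above.

```python
import socket
import threading

def serve_once(server_sock):
    """Accept one client and echo its message back."""
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))   # the client side
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)   # b'echo: hello'
```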
    Comprehension questions

    1. What is computer networking?

    2. What do computer networks rely upon?

    3. What are all networks interconnected for?

    4. Who are interconnected by Internet Service Providers (ISP)?

    5. What do you know about a local area network?


    TEXT VII

    PROGRAMMABLE LOGIC CONTROLLER

    A programmable logic controller (PLC) or programmable controller is a digital computer used for automation of electromechanical processes, such as control of machinery on factory assembly lines, amusement rides or lighting fixtures. PLCs are used in many industries and machines, such as packaging and semiconductor machines. Unlike general-purpose computers, the PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a bounded time; otherwise unintended operation will result.

    Features. The main difference from other computers is that PLCs are armored for severe conditions (such as dust, moisture, heat, cold) and have the facility for extensive input/output (I/O) arrangements. These connect the PLC to sensors and actuators. PLCs read limit switches, analog process variables (such as temperature and pressure), and the positions of complex positioning systems. Some use machine vision. On the actuator side, PLCs operate electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a computer network that plugs into the PLC.

    PLCs may need to interact with people for the purpose of configuration, alarm reporting or everyday control. A Human-Machine Interface (HMI) is employed for this purpose. HMIs are also referred to as MMIs (Man-Machine Interface) and GUIs (Graphical User Interface).

    PLCs have built-in communication ports, usually 9-pin RS-232, and optionally RS-485 and Ethernet. Modbus, BACnet or DF1 is usually included as one of the communication protocols. Other options include various fieldbuses such as DeviceNet or Profibus. Other communication protocols that may be used are listed in the List of automation protocols.

    Most modern PLCs can communicate over a network to some other system, such as a computer running a SCADA (Supervisory Control And Data Acquisition) system or a web browser.

    PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to co-ordinate over the communication link. These communication links are also often used for HMI devices such as keypads or PC-type workstations. Some of today’s PLCs can communicate over a wide range of media including RS-485, coaxial, and even Ethernet.
    Comprehension questions

    1. What is a programmable logic controller (PLC)?

    2. What is the main difference between PLCs and other computers?

    3. What is a Human-Machine Interface used for?

    4. Where are PLCs used?


    TEXT VIII

    SOFTWARE DEVELOPMENT PROCESS

    Iterative processes

    Iterative development prescribes the construction of initially small but ever larger portions of a software project to help all those involved to uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.

    Agile software development

    Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software.

    Interestingly, surveys have shown the potential for significant efficiency gains over the waterfall method. For example, a survey, published in August 2006 by VersionOne and Agile Alliance and based on polling more than 700 companies claims the following benefits for an Agile approach. The survey was repeated in August 2007 with about 1,700 respondents.

    XP: Extreme Programming

    Extreme Programming (XP) is the best-known iterative process. In XP, the phases are carried out in extremely small (or “continuous”) steps compared to the older, “batch” processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can’t think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature – merging design and code – is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of which is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.
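
    The test-first step of XP can be sketched directly: the automated test is written before the code it exercises, and the increment is complete when the test passes. The function under test (`word_count`) is invented for illustration:

```python
# Step 1: write the test first, as a concrete goal for development.
def test_word_count():
    assert word_count("to be or not to be") == 6
    assert word_count("") == 0

# Step 2: write only enough code to make the test pass.
def word_count(sentence):
    return len(sentence.split())

test_word_count()   # all tests pass, so this increment is complete
print("tests passed")
```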

    Maintenance

    After each step is finished, the process proceeds to the next step, just as builders don’t revise the foundation of a house after the framing has been erected.

    There is a misconception that the process has no provision for correcting errors in early steps (for example, in the requirements). In fact this is where the domain of requirements management comes in which includes change control.

    This approach is used in high-risk projects, particularly large defense contracts. The problems in waterfall do not arise from “immature engineering practices, particularly in requirements analysis and requirements management”. Studies of the failure rate of the DOD-STD-2167 specification, which enforced waterfall, have shown that the more closely a project follows its process, specifically in up-front requirements gathering, the more likely the project is to release features that are not used in their current form.

    Comprehension questions

    1. What is the agile software development process?

    2. What do you know about Extreme Programming (XP)?

    3. Who does the coding?

    4. What can you say about the maintenance?

    TEXT IX

    A BRIEF HISTORY OF THE INTERNET

    The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.

    The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "leiner@mcc.com" and "http://www.acm.org" trip lightly off the tongue of the random person on the street.

    This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet.

    In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

    The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects: technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition and community operations.
    Comprehension questions

    1. What is the Internet?

    2. How has the Internet revolutionized the computer and communications world?

    3. What are the four distinct aspects in the history of the Internet?

    4. What is called the National (or Global, or Galactic) Information Infrastructure?


    TEXT X

    ORIGINS OF THE INTERNET

    The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

    Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path toward computer networking. The other key step was to make the computers talk together. To explore this, in 1965, working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low-speed dial-up telephone line, creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit-switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.
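    The idea Kleinrock argued for (packets rather than dedicated circuits) can be illustrated with a toy Python sketch; this is purely illustrative, as real packet formats carry far more than a sequence number and a payload.

```python
# Toy sketch of the packet idea: a message is cut into fixed-size
# packets, each tagged with a sequence number, so the receiver can
# reassemble it even if the packets arrive out of order.

def packetize(message, size=8):
    # Split the message into (sequence number, payload) pairs.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Sort by sequence number and concatenate the payloads.
    return "".join(payload for _, payload in sorted(packets))

msg = "packets rather than circuits"
pkts = packetize(msg)
# Even if the packets arrive in reverse order, the message survives:
assert reassemble(list(reversed(pkts))) == msg
```

    Because each packet carries its own sequence number, no dedicated end-to-end circuit is needed: packets can travel independently and arrive out of order, and the receiver still reconstructs the message.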

    In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1963), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

    In August 1968, after Roberts and the DARPA-funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMPs). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMPs, with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

    Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of hostname-to-address mapping as well as a directory of the RFCs. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and the University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

    Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

    In October 1972 Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

    Comprehension questions

    1. What famous scientists dealing with the Internet do you know?

    2. What is the “ARPANET”?

    3. What was selected to be the first node on the ARPANET?

    4. What is the Network Control Protocol (NCP)?


    TEXT XI

    HISTORY OF THE FUTURE

    On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the Internet and intellectual property rights communities. RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that – (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.

    The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.

    One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide such new services as real time transport, in order to support, for example, audio and video streams. The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones) is making possible a new paradigm of nomadic computing and communications.

    This evolution will bring us new applications – Internet Telephone and, slightly further out, Internet Television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, from broadband residential access to satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.

    The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders – stakeholders now with an economic as well as an intellectual investment in the network. We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.
    Comprehension questions

    1. What does the term “Internet” mean?

    2. How has the Internet changed in the two decades since it came into existence?

    3. What does the Internet Telephone mean?

    4. What is the future of the Internet?


    Bibliography

    1. My Speciality: guidelines for developing spoken English skills for students of the Faculty of Automation / T.V. Akulinina, N.P. Andreeva, N.V. Arzumanyan, L.K. Kondratyukova, L.N. Lozhnikova. – Omsk: OmSTU Publishing House, 2002. – 32 pp.

    2. Annotating and Abstracting English Scientific and Technical Literature: a study guide / L.K. Kondratyukova, L.B. Tkacheva, T.V. Akulinina. – Omsk: OmSTU Publishing House, 2001. – 182 pp.

    3. Santiago Remacha Esteras. InfoTech. English for computer users. Fourth Edition. Teacher’s book. Cambridge University Press, 2008. – 161 pp.

    4. Santiago Remacha Esteras. InfoTech. English for computer users. Fourth Edition. Student’s book. Cambridge University Press, 2008. – 168 pp.

    5. Elena Marco Fabré, Santiago Remacha Esteras. Professional English in use ICT. Cambridge University Press, 2007. – 118 pp.

    6. David Bonamy. Technical English. Publishing House: Pearson/Longman, 2009. – 120 pp.

    7. Celia Bingham. Technical English I. Pearson Education Limited, 2008. – 141 pp.

    8. Eric H. Glendinning, John McEwan. Oxford English for Information Technology. Oxford University Press, 2008. – 224 pp.



    CONTENTS

    Foreword

    PART I

    Introduction

    Unit 1 Program, design and computer language

    Unit 2 Software Engineering

    Unit 3 Recent Developments in Information Technology

    Unit 4 The future of Information Technology

    Unit 5 People in computing

    PART II

    Supplementary Material

    Text I

    Text II

    Text III

    Text IV

    Text V

    Text VI

    Text VII

    Text VIII

    Text IX

    Text X

    Text XI

    Bibliography

    Editor – T.A. Moskvitina

    Computer layout – T.A. Burdel
    Publishing licence ID No. 06039 of 12.10.2001
    Consolidated thematic plan, 2009
    Signed for printing 28.10.2009. Offset paper. Format 60×84 1/16.

    Printed on a duplicator. Conventional printed sheets: 5.5. Publisher's sheets: 5.5.

    Print run: 100 copies. Order 657.

    _______________________________________________________________________________________________

    OmSTU Publishing House. 644050, Omsk, 11 Mira Ave.

    OmSTU print shop

