    Comprehension questions:

    1. What is an embedded system?

    2. What kinds of embedded systems are in use?

    3. What is an in-circuit debugger (ICD)?

    4. What do you know about the debugging strategy?



    TEXT VI

    COMPUTER NETWORKING

    Computer networking is the engineering discipline concerned with communication between computer systems or devices. Networking, routers, routing protocols, and networking over the public Internet have their specifications defined in documents called RFCs. Computer networking is sometimes considered a sub-discipline of telecommunications, computer science, information technology and/or computer engineering. Computer networks rely heavily upon the theoretical and practical application of these scientific and engineering disciplines.

    A computer network is any set of computers or devices connected to each other with the ability to exchange data. Examples of different networks are:

    Local area network (LAN), which is usually a small network constrained to a small geographic area; wide area network (WAN), which is usually a larger network that covers a large geographic area; and wireless LANs and WANs (WLAN and WWAN), which are the wireless equivalents of the LAN and WAN.

    All networks are interconnected to allow communication with a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet).

    Network administrators see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.
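
    To make the logical view concrete, here is a minimal sketch (not from the text) using Python's standard ipaddress module; the subnet and host addresses are illustrative assumptions. Two hosts in different buildings belong to the same subnet purely because of their addresses, regardless of the physical cabling.

        import ipaddress

        # One logical subnet that may span several buildings via VLANs.
        subnet = ipaddress.ip_network("192.168.10.0/24")   # hypothetical subnet

        host_a = ipaddress.ip_address("192.168.10.15")     # host in building A (assumed)
        host_b = ipaddress.ip_address("192.168.10.200")    # host in building B (assumed)

        print(host_a in subnet)   # True - same logical subnet
        print(host_b in subnet)   # True - physical location does not matter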

    Informally, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering standpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol.

    Networking is a complex part of computing that makes up most of the IT industry. Without networks, almost all communication in the world would cease. It is because of networking that telephones, televisions, the Internet, etc. work.

    Local area network (LAN)

    A local area network is a network that spans a relatively small space and provides services to a small number of people.

    A peer-to-peer or client-server method of networking may be used. A peer-to-peer network is one in which each client shares its resources with the other workstations in the network. Examples of peer-to-peer networks are small office networks, where resource use is minimal, and home networks. A client-server network is one in which every client is connected to the server and to each other. Client-server networks use servers in different capacities. These can be classified into two types: single-service servers, such as a print server or a file server, where the server performs one task; and servers that not only act as file and print servers but also carry out calculations and use them to provide information to clients (Web/intranet servers). Computers may be connected in many different ways, including Ethernet cables, wireless links, or other types of wires such as power lines or phone lines.
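
    As an illustration of the client-server method, here is a minimal sketch (not part of the text; the loopback address and port number are assumptions) in which one process plays the server, waiting for requests, while the other plays the client.

        import socket

        HOST, PORT = "127.0.0.1", 50007          # hypothetical server address and port

        def run_server():
            # The server waits for a client, reads its request and answers it.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.bind((HOST, PORT))
                srv.listen()
                conn, _ = srv.accept()
                with conn:
                    request = conn.recv(1024)
                    conn.sendall(b"reply to: " + request)

        def run_client():
            # The client connects to the server and sends a request.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect((HOST, PORT))
                cli.sendall(b"hello")
                print(cli.recv(1024))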

    Comprehension questions:

    1. What is computer networking?

    2. What do computer networks rely upon?

    3. What are all networks interconnected for?

    4. Who is interconnected by Internet Service Providers (ISPs)?

    5. What do you know about a local area network?


    TEXT VII

    PROGRAMMABLE LOGIC CONTROLLER

    A programmable logic controller (PLC) or programmable controller is a digital computer used for automation of electromechanical processes, such as control of machinery on factory assembly lines, amusement rides or lighting fixtures. PLCs are used in many industries and machines, such as packaging and semiconductor machines. Unlike general-purpose computers, the PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a bounded time; otherwise unintended operation will result.

    Features. The main difference from other computers is that PLCs are armored for severe conditions (such as dust, moisture, heat, cold) and have the facility for extensive input/output (I/O) arrangements. These connect the PLC to sensors and actuators. PLCs read limit switches, analog process variables (such as temperature and pressure), and the positions of complex positioning systems. Some use machine vision. On the actuator side, PLCs operate electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a computer network that plugs into the PLC.
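
    The input/output arrangement can be pictured as a repeating scan cycle: read all inputs, evaluate the control logic, write all outputs. The sketch below (not from the text) illustrates this in Python; read_input() and write_output() are hypothetical placeholders for real I/O modules.

        import time

        def read_input(channel):
            # Placeholder for a real digital input, e.g. a limit switch.
            return False

        def write_output(channel, value):
            # Placeholder for a real output, e.g. a motor relay.
            pass

        def scan_cycle():
            start_button = read_input(0)                  # 1. read the inputs
            limit_switch = read_input(1)
            motor_on = start_button and not limit_switch  # 2. evaluate the logic
            write_output(0, motor_on)                     # 3. update the outputs

        while True:          # the controller repeats the cycle forever
            scan_cycle()
            time.sleep(0.01)   # bounded cycle time, as a real-time system requires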

    PLCs may need to interact with people for the purpose of configuration, alarm reporting or everyday control. A Human-Machine Interface (HMI) is employed for this purpose. HMIs are also referred to as MMIs (Man-Machine Interface) and GUIs (Graphical User Interface).

    PLCs have built-in communications ports, usually 9-pin RS-232, and optionally RS-485 and Ethernet. Modbus, BACnet or DF1 is usually included as one of the communications protocols. Other options include various fieldbuses such as DeviceNet or Profibus. Other communications protocols that may be used are listed in the List of automation protocols.
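
    As an illustration of one such protocol, the sketch below (an assumption for illustration, not taken from the text) builds a Modbus TCP "Read Holding Registers" request by hand with Python's standard struct module; the register address, quantity and unit id are made-up example values.

        import struct

        def modbus_read_request(transaction_id, unit_id, start_addr, quantity):
            # MBAP header: transaction id, protocol id (always 0), length of the
            # remaining bytes, unit id; then the PDU: function code 0x03
            # (read holding registers), starting address and register count.
            length = 6   # unit id (1) + function code (1) + address (2) + quantity (2)
            return struct.pack(">HHHBBHH",
                               transaction_id, 0, length, unit_id,
                               0x03, start_addr, quantity)

        frame = modbus_read_request(1, 1, 0x0000, 2)   # hypothetical example values
        print(frame.hex())   # '000100000006010300000002'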

    Most modern PLCs can communicate over a network to some other system, such as a computer running a SCADA (Supervisory Control And Data Acquisition) system or a web browser.

    PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to co-ordinate over the communication link. These communication links are also often used for HMI devices such as keypads or PC-type workstations. Some of today's PLCs can communicate over a wide range of media, including RS-485, coaxial cable, and even Ethernet.

    Comprehension questions:

    1. What is a programmable logic controller (PLC)?

    2. What is the main difference between a PLC and other computers?

    3. What is a Human-Machine Interface used for?

    4. Where are PLCs used?


    TEXT VIII

    SOFTWARE DEVELOPMENT PROCESS

    Iterative processes

    Iterative development prescribes the construction of initially small but ever larger portions of a software project to help all those involved to uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.

    Agile software development

    Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software.

    Interestingly, surveys have shown the potential for significant efficiency gains over the waterfall method. For example, a survey published in August 2006 by VersionOne and Agile Alliance, based on polling more than 700 companies, claims the following benefits for an Agile approach. The survey was repeated in August 2007 with about 1,700 respondents.

    XP: Extreme Programming

    Extreme Programming (XP) is the best-known iterative process. In XP, the phases are carried out in extremely small (or “continuous”) steps compared to the older, “batch” processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can’t think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature – merging design and code – is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of whom is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.
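
    A minimal sketch of the test-first step described above, using Python's standard unittest module (the add() function is a made-up example, not from the text): the test is written first and defines the goal; the code is complete when the test passes.

        import unittest

        def add(a, b):
            # Written only after the test below existed, until the test passed.
            return a + b

        class TestAdd(unittest.TestCase):
            def test_add_two_numbers(self):
                self.assertEqual(add(2, 3), 5)   # the test states the goal first

        if __name__ == "__main__":
            unittest.main()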

    Maintenance

    After each step is finished, the process proceeds to the next step, just as builders don’t revise the foundation of a house after the framing has been erected.

    There is a misconception that the process has no provision for correcting errors in early steps (for example, in the requirements). In fact this is where the domain of requirements management comes in which includes change control.

    This approach is used in high-risk projects, particularly large defense contracts. The problems in waterfall do not arise from “immature engineering practices, particularly in requirements analysis and requirements management”. Studies of the failure rate of the DOD-STD-2167 specification, which enforced waterfall, have shown that the more closely a project follows its process, specifically in up-front requirements gathering, the more likely the project is to release features that are not used in their current form.

    Comprehension questions:

    1. What is an agile software development process?

    2. What do you know about Extreme Programming (XP)?

    3. Who does the coding?

    4. What can you say about the maintenance?


    TEXT IX

    A BRIEF HISTORY OF THE INTERNET

    The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.

    The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "leaner mcc.com" and "http://www.acm.org" trip lightly off the tongue of the random person on the street.

    This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any book store will find shelves of material written about the Internet.

    In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions such as scale, performance, and higher level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

    The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects: technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition and community operations.

    Comprehension questions:

    1. What is the Internet?

    2. How has the Internet revolutionized the computer and communications world?

    3. What are the four distinct aspects in the history of the Internet?

    4. What is called the National (or Global, or Galactic) Information Infrastructure?


    TEXT X

    ORIGINS OF THE INTERNET

    The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

    Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path toward computer networking. The other key step was to make the computers talk together. To explore this, in 1965, working with Thomas Merrill, Roberts connected the TX-2 computer in Massachusetts to the Q-32 in California with a low-speed dial-up telephone line, creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit-switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.

    In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1963), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

    In August 1968, after Roberts and the DARPA-funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMPs). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMPs, with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

    Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFCs. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

    Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

    In October 1972 Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.