Практикум составлен в соответствии с требованиями к коммуникационной подготовке выпускников инновационной образовательной программы


    Unit Nine
    Text. HISTORICAL PARALLEL COMPUTER ARCHITECTURES
    Key words: architecture, classification, model, data, instruction, memory

Industrial-strength applications have been driving supercomputer architectures for many years. Each year a new machine design appears on the horizon, and with the availability of that machine the door is open to applications scientists.

One of the hardest problems to overcome when getting started in a new field is the acronym terminology that prevails. Additionally, due to the rapid pace at which computer architecture is developing, newer architectures do not always fit well into categories that were invented before the architecture was. The most common classification scheme for parallel computers was devised by Flynn. His paper was written in 1969-70 and published in 1972. It is based on a four-letter mnemonic where the first two letters pertain to the instructions that act on the data and the second two letters refer to the data stream type.

    SISD: Single Instruction – Single Data. This is the traditional von Neumann model of computation where one set of instructions is executed sequentially on a stream of data.

SIMD: Single Instruction – Multiple Data. This classification includes some of the very early parallel computers, such as the Connection Machine, which was manufactured by a company called Thinking Machines. In this model, multiple individual processors or a single vector processor execute the same instruction set on different datasets.
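The SIMD idea can be sketched in plain Python (this is only an illustration of the model: real SIMD hardware applies one machine instruction to many operands simultaneously, which an interpreted loop does not):

```python
# Illustrative SIMD sketch: one operation ("instruction") is applied
# uniformly to every element of a data stream. The function name and
# data are invented for this example.

def simd_style(data, op):
    """Apply the same operation to each element of the data stream."""
    return [op(x) for x in data]

result = simd_style([1, 2, 3, 4], lambda x: x * 2)
```

The essential point is that there is no data-dependent control flow: every element goes through the same instruction.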

    MISD: Multiple Instruction – Single Data. This is not currently a commercially viable system for computing.

MIMD: Multiple Instruction – Multiple Data. This is the model of parallel computing that has dominated for most industrial applications. In this model, multiple processors can be executing different instructions on their own data and communicating as needed. There are two subclasses of MIMD machines: Shared Memory (SM) and Distributed Memory (DM).
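A minimal MIMD sketch, with Python threads standing in for processors (an illustration of the model, not a performance claim; the worker names are invented): each worker executes a different instruction stream on its own data and communicates results through a queue.

```python
import threading, queue

# Two workers, *different* instructions, *own* data, communicating
# via a shared queue -- the MIMD pattern in miniature.
results = queue.Queue()

def summer(data):          # worker 1: its own instructions and data
    results.put(("sum", sum(data)))

def maxer(data):           # worker 2: different instructions, different data
    results.put(("max", max(data)))

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=maxer, args=([7, 4, 9],))
t1.start(); t2.start()
t1.join(); t2.join()

collected = dict(results.get() for _ in range(2))
```

On a real MIMD-DM machine the queue would be replaced by explicit message passing between nodes.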

Since Flynn introduced the taxonomy, computers have been advancing at an enormous rate. Despite efforts by computer scientists to revise the terminology, it is still used in many application chapters. There is some disagreement as to whether these machines should be classified as SIMD or SISD. The argument for using the SIMD classification is that a single instruction operates on many data, and the arguments for the SISD classification are that the execution of the operations happens in a pipelined fashion and that modern microprocessors are starting to incorporate vector instructions.

The programs are written primarily for the MIMD-DM class of computers. We use the term MPP for this class of computers. When the projects were initiated in the mid-1990s, the MPP architecture was one of the most difficult of the current high-performance supercomputing architectures to design codes for. This was due not only to the purely distributed memory, which forced the programmer to totally rethink a problem’s implementation, but also to the initial lag in the development and availability of parallel tools and programming standards for MPP machines. Thus, it took a while before the MPP paradigm made its way into general industrial computing.
    Comprehensive text-related glossary


    shared memory

- разделяемая (совместно используемая) память

    acronym

    - акроним

    MPP (Massively Parallel Processor)

- массивно-параллельный процессор


    MISD

    - организация вычислительной системы с несколькими потоками команд и одним потоком данных

    SISD

    - архитектура с одним потоком команд и одним потоком данных

    SIMD

    - организация вычислительной системы с одним потоком команд и несколькими потоками данных

    MIMD

    - организация вычислительной системы с несколькими однородными или разнородными процессорами, каждый из которых выполняет свои команды над своими данными

    pertain

    - иметь отношение

    sequentially

    - последовательно

    incorporate

    - включать, вмещать

    lag

    - задержка, запаздывание

    initiate

    - положить начало

    Activities
    I. Find the equivalents to the words given in box A from box B and translate them



    A

    B

    to prevail, classification, the hardest, field, lag, to initiate, common, to pertain, to overcome, to incorporate, sequentially, to revise

    to include, the most difficult, to solve, sphere, in sequence, ordinary, taxonomy, to correspond, to start, to dominate, delay, to reconsider


    II. Say what is true and what is false. Correct the false sentences

    1. There are more than three subclasses of MIMD machines. 2. Flynn’s taxonomy is used for historical references. In other cases wherever possible, we avoid using it. 3. The term SMP originally referred not only to symmetric multiprocessors. 4. The most powerful systems in use by industry are the SMP and the MPP. 5. One way to avoid the scalability problems is to provide the symmetric access to the processors. 6. The clusters have a total number of processors, comparable to MPP machines.
    III. Match the beginning of a sentence with an ending to produce a statement that is correct according to the text


    1. Newer architectures do not fit well into categories

    2. We avoid using the Flynn classification except for historical references because

    3. The time required to access the memory

    4. To use a combination of the programming models is necessary

    5. The purpose of such architecture

    6. The argument for using the SIMD classification is

    a) should be the same for all processors

    b) to avoid some of the problems associated with interprocessor communication

    c) in order to use entire clusters effectively

    d) that they were invented before the architecture was

    e) that a single instruction operates on many data

    f) this point is controversial


    IV. There are 4 statements given below. Each one contains the main idea of 4 paragraphs. Show which paragraph each statement refers to

    1. The high-performance computers generally fall into one of three major classes, the SMP, the MPP, and the nonuniform memory access (NUMA) generalization of the SMP.

2. The coupling of SMP machines into clusters using very high-speed interconnects is one of the latest developments in high-performance computing.

    3. The most common classification scheme based on four-letter mnemonic for parallel computers was devised by Flynn.

    4. There are different arguments for using the SIMD or SISD classification.
    V. Write a summary of the text about parallel computing architecture (8-10 sentences) using your denotation graphs.
    VI. Read and translate the following text. Suggest the title
    Every program running on a computer, be it a service or an application, is a process. As long as a von Neumann architecture is used to build computers, only one process per CPU can be run at a time. Older microcomputer OSes such as MS-DOS did not attempt to bypass this limit, with the exception of interrupt processing, and only one process could be run under them.

Most operating systems enable the concurrent execution of many processes at once via multitasking, even with one CPU. The mechanism has been used in mainframes since the early 1960s, but in personal computers it became available only in the 1990s. Process management is an operating system’s way of dealing with running those multiple processes. On the most fundamental computers, multitasking is done by simply switching processes quickly. Depending on the operating system, as more processes run, either each time slice will become smaller or there will be a longer delay before each process is given a chance to run.
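The time-slice mechanism can be sketched as a toy round-robin scheduler (a simplified model, not how any particular OS implements scheduling; process names and work units are invented): each process runs for a fixed quantum, and unfinished processes return to the back of the ready queue.

```python
from collections import deque

# Toy round-robin scheduler. With more processes in the queue, each
# one waits longer between its slices -- the effect described above.

def round_robin(processes, quantum):
    """processes: dict name -> remaining work units. Returns finish order."""
    ready = deque(processes.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum            # the process runs for one quantum
        if remaining > 0:
            ready.append((name, remaining))   # back of the queue
        else:
            order.append(name)                # process finished
    return order

finish = round_robin({"A": 3, "B": 1, "C": 2}, quantum=1)
```

Here the shortest process ("B") finishes first even though "A" was queued before it, because no process can hold the CPU past its quantum.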

Process management involves computing and distributing CPU time as well as other resources. Most operating systems allow a process to be assigned a priority which affects its allocation of CPU time. Interactive operating systems also employ some level of feedback in which the task with which the user is working receives higher priority. Interrupt-driven processes will normally run at a very high priority. In many systems there is a background process, such as the System Idle Process in Windows, which will run when no other process is waiting for the CPU.
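A sketch of priority-based selection, assuming a simple numeric priority scheme (the process names here are invented): the scheduler always picks the runnable process with the highest priority, and an idle process runs only when nothing else is waiting.

```python
import heapq

# Pick the next process to run by priority; fall back to an idle
# process when the ready list is empty.

def pick_next(ready):
    """ready: list of (priority, name); higher number = higher priority."""
    if not ready:
        return "System Idle Process"
    # heapq is a min-heap, so negate priorities to pop the largest first.
    heap = [(-p, name) for p, name in ready]
    heapq.heapify(heap)
    return heapq.heappop(heap)[1]

chosen = pick_next([(5, "editor"), (10, "interrupt_handler"), (1, "backup")])
```

Consistent with the text, the interrupt-driven task wins over interactive and background work.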

    Comprehensive text-related glossary


    interrupt processing

    - обработка прерываний

    process management

    - управление процессами

    System Idle Process

- процесс бездействия системы

    priority

    - приоритет, очередность

    multitasking

- многозадачность, мультипрограммная работа

    level of feedback

    - уровень обратной связи


    VII. Write out from the text the underlined verbs and define their tense and voice. Make up your own sentences with some of them.
    VIII. Pair or group work.

a) Make up a number of questions on the text. Pass these questions to your neighbour or a neighbouring group to evaluate and comment on.

    b) Find key words of every paragraph. Try to make a denotation graph of the text using the key words. Exchange your graphs with other students to evaluate the results.
    IX. Make up a denotation graph using the text information. Compare your graph with the one of your neighbour. Discuss the differences in the graphs.

    ------------------------------------------------------------------------------------------

    Unit Ten
Text. CONTEMPORARY PARALLEL COMPUTING ARCHITECTURES
    Key words: classes, processor, access, memory, interconnection, cluster, node, advantage, multiprocessors
The high-performance computers available today generally fall into one of three major classes, the SMP, the MPP, and the Non-Uniform Memory Access (NUMA) generalization of the SMP. They are widely used by industry today. The term SMP originally referred exclusively to symmetric multiprocessors, referring to the uniform way in which the processors are connected to the shared memory. The major design issue in a symmetric multiprocessor is that all processors have equal access to the shared resources, namely, shared memory and input/output functionality. The time required to access the memory should be the same for all processors. In the SMP, the processors share a global area of RAM (memory).
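The SMP model can be sketched with threads sharing one global memory area (threads stand in for processors; the lock models serialized access to the shared resource and the counter is invented for illustration):

```python
import threading

# SMP sketch: several workers share one global memory area (a plain
# dict here) with equal access, guarded by a lock so that concurrent
# updates do not interfere with each other.

shared_memory = {"counter": 0}
lock = threading.Lock()

def worker(increments):
    for _ in range(increments):
        with lock:                      # equal, serialized access
            shared_memory["counter"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four workers see exactly the same memory, which is the defining property of the SMP class.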

    The purpose of such architecture is to avoid some of the problems associated with interprocessor communication. However, the shared RAM design of the SMP can lead to bottlenecks as more and more processors are added. One way to avoid the scalability problems is the reintroduction of NUMA into the system. It adds other levels of memory that can be accessed without using the bus-based network. This further generalization of the SMP architecture is an attempt to regain some of the scalability lost by offering shared memory.
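The NUMA idea above can be expressed as a toy cost model (the latency numbers and node names are invented purely for illustration): a processor pays less to reach memory in its own level than memory homed elsewhere.

```python
# Toy NUMA cost model: local memory accesses are cheap, accesses to
# memory homed on another node are more expensive.

LOCAL_COST, REMOTE_COST = 1, 4   # invented latency units

def access_cost(cpu_node, addresses, home_node_of):
    """Total cost for cpu_node to touch each address, given each
    address's home node."""
    return sum(LOCAL_COST if home_node_of[a] == cpu_node else REMOTE_COST
               for a in addresses)

home = {0: "node0", 1: "node0", 2: "node1"}
cost = access_cost("node0", [0, 1, 2], home)
```

The non-uniformity of access time is exactly what distinguishes NUMA from the symmetric case, where every access would cost the same.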

One of the latest developments in high-performance computing is the coupling of SMP machines into clusters using very high-speed interconnects. These clusters can have a total number of processors comparable to MPP machines. The clustering concept is not restricted to joining SMPs; many architectures are suitable for nodes of a cluster, and the nodes need not necessarily all be of the same type. This is often referred to as a convergence of architectures in the development of computing. To use entire clusters effectively, the developer must divide the problem into levels of parallelism and often needs to use a combination of programming models. One advantage of clusters is that they are designed to allow computer centers to add additional nodes as needed, given the economic driving force. As technology has advanced, it has improved communication speed and the ease with which these systems can be created.
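The two-level decomposition mentioned above can be sketched with nested thread pools (thread pools stand in for both levels here; a real cluster would typically combine message passing such as MPI across nodes with threads inside each node, so this is only an illustration of dividing a problem into levels of parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

# Outer level: the problem is divided among "nodes".
# Inner level: each node splits its chunk among local workers.

def node_work(chunk):
    # inner level: local workers within one "node"
    with ThreadPoolExecutor(max_workers=2) as inner:
        return sum(inner.map(lambda x: x * x, chunk))

data = list(range(8))
chunks = [data[:4], data[4:]]           # outer level: one chunk per node
with ThreadPoolExecutor(max_workers=2) as outer:
    total = sum(outer.map(node_work, chunks))
```

The point is the structure, not the arithmetic: the programmer must reason about both levels at once, which is why cluster programming often combines two models.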

    Activities
    I. Are the following sentences true or false? Correct the false ones

    1. The current classification of high-performance computers includes three major classes. 2. This taxonomy is rarely used in the industrial sphere. 3. The term SMP firstly referred exclusively to symmetric multiprocessors. 4. Some processors in a symmetric multiprocessor have equal access to the shared resources. 5. One of the latest developments in high-performance is the coupling of SMP machines into clusters through interconnection. 6. The developer should use a combination of the programming models to have entire clusters work effectively.
    II. Find the equivalents to the words given in box A from box B


    A

    B

    uniform, to connect, issue, resource, to avoid, to regain, latest, to restrict, to converge, advantage, to need, to drive

    to move, to require, benefit, to meet, to limit, recent, to get back, to escape, supply, result, to link, common



    III. Circle the correct item

    1. The standard configuration … one processor per node.

    a) include b) have c) has d) exclude

    2. MIMD… for most industrial applications.

    a) was dominated b) has dominated c) wasn’t dominated

    3. RAM design of the SMP … to bottlenecks as more and more processors are added.

    a) can lead b) lead c) must lead

    4. Each year a new machine design … on the horizon.

    a) appeared b) occur c) appears

    5. Some details of these various components of an MPP computer … in the following subsections.

    a) was discussed b) are discussed c) discuss

    6. Multiple individual processors … the same instruction set on different datasets.

    a) are executed b) performs c) execute
    IV. Use the prompts to ask and answer questions, as in the example:

    clusters/ have

    A: Can the clusters have a total number of processors?

    B: Yes, they can have a total number of processors and they are comparable to large MPP machines.

    1. clock periods/ require;

    2. scalability problems/ avoid;

    3. time/ require;

    4. term/ refer;

    5. generalization of the SMP architecture/ regain

    V. Fill in empty boxes and make a summary of the text using the graph

    Parallel Computing Architectures



    MPP
    Referred to

Symmetric multiprocessor

    Coupling into clusters
    Adds levels of memory

    Cache coherency


    VI. Translate the attribute constructions and make sentences using them:

supercomputer scalability problems, current high-performance computers, major design issue, input/output functionality, economic driving force, communication speed increase.
    VII. Read and translate the following text. Suggest its title
Current computer architectures arrange the computer’s memory in a hierarchical manner, starting from the fastest registers, then CPU cache, random access memory and disk storage. An operating system’s memory manager coordinates the use of these various types of memory by tracking which one is available, which is to be allocated or deallocated and how to move data between them. This activity, usually referred to as virtual memory management, increases the amount of memory available for each process by making the disk storage seem like main memory. There is a speed penalty associated with using disks or other slower storage as memory: if running processes require significantly more RAM than is available, the system may start thrashing.
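The mechanism behind thrashing can be sketched with a small page-replacement model (a textbook LRU simulation, not any particular OS's algorithm; the page numbers are invented): a small, fast "RAM" holds only a few pages, and every access to a page not in RAM is a page fault.

```python
from collections import OrderedDict

# LRU page-replacement sketch: when RAM is full, evict the least
# recently used page. If a process touches far more pages than fit
# in RAM, almost every access faults -- that is thrashing.

def count_faults(accesses, ram_slots):
    ram = OrderedDict()                 # page -> loaded (insertion-ordered)
    faults = 0
    for page in accesses:
        if page in ram:
            ram.move_to_end(page)       # mark as recently used
        else:
            faults += 1                 # page fault: fetch from "disk"
            if len(ram) >= ram_slots:
                ram.popitem(last=False) # evict least recently used page
            ram[page] = True
    return faults

faults = count_faults([1, 2, 3, 1, 4, 5], ram_slots=3)
```

With only three slots for five distinct pages, most accesses miss; shrink `ram_slots` or widen the access pattern and the fault rate climbs further.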

Another important part of memory management is managing virtual addresses. If multiple processes are in memory at once, they must be prevented from interfering with each other’s memory (unless there is an explicit request to utilize shared memory). This is achieved by having separate address spaces. Each process sees the whole virtual address space, typically from address 0 up to the maximum size of virtual memory, as uniquely assigned to it.
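Separate address spaces can be sketched as one page table per process (page size, frame numbers, and process names here are all invented for illustration): the same virtual address translates to different physical locations for different processes.

```python
PAGE_SIZE = 4096

# One page table per process: virtual page number -> physical frame.
page_tables = {
    "proc_a": {0: 7, 1: 3},
    "proc_b": {0: 2, 1: 9},
}

def translate(process, vaddr):
    """Translate a virtual address to a physical one via the
    process's own page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_tables[process][page]  # a missing page would model a fault
    return frame * PAGE_SIZE + offset

pa = translate("proc_a", 100)   # page 0 -> frame 7 for proc_a
pb = translate("proc_b", 100)   # same virtual address, different frame
```

Because each process owns its table, neither can form an address that lands in the other's frames, which is the isolation the text describes.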

It is also typical for operating systems to employ otherwise unused physical memory as a page cache; requests for data from a slower device can be retained in memory to improve performance. The operating system can also pre-load the in-memory cache with data that may be requested by the user in the near future; SuperFetch is an example of this.
    Comprehensive text-related glossary


    address space

    - адресное пространство

    page cache

    - кэш-память со страничной организацией

    memory management

    - управление памятью

    thrashing

    - перегрузка, «пробуксовка»

    pre-load

- загрузка при инициации; выполнять предзагрузку

    interfere with

    - вмешиваться


    VIII. Write out the underlined words and define their part of speech.
    IX. Ask your partner about: a) important parts of memory management;

    b) operating system’s memory manager; c) unused physical memory
    X. Group work. Try to identify the key ideas in each paragraph working with your neighbours. Report your key ideas to another group and give reasons for your choices.
    XI. Prepare a short report on the memory management using both the information from the text and some additional information on the problem known to you before reading the text. Make up your denotation graph, using the text information.


    REFERENCES
    1. Пешкова Н.П. Использование инновационных компонентов денотативной методики в обучении иноязычной речевой коммуникации (письменной и устной). – Уфа: УГАТУ, 2008. – 12 с.

    2. Alice E. Koniges. Industrial Strength Parallel Computing. – San Francisco, California: Morgan Kaufman Publishers, 2000. – 598 p.

3. Flynn, M.J. 1972. Some Computer Organizations and Their Effectiveness. IEEE Transactions on Computers, C-21, 948-960.

4. Hartmann S. “The World as a Process: Simulations in the Natural and Social Sciences” in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer, 1996, p. 77-100.

    5. http://www.numberwatch.co.uk/computer_modelling.htm

    6. http://images.yandex.ru/yandsearch?text

    7. http://www.computer-science.ru/docs/comp/eng/develop/databases

    8. http://ldp.dvo.ru/HOWTO/Parallel-Processing-HOWTO.html

    9. http://agent.csd.auth.gr/

    karatza/mathimata/parallel-uk.html
    DICTIONARIES

    1. Англо-русский политехнический словарь (100 000 слов и выражений). Под ред. М.В. Якимова. СПб.: Издательский дом «Литера», 2004. – 960 с.

    2. Современный англо-русский словарь компьютерных технологий (более 30 000 терминов). Под ред. доктора физ.-мат. наук Н.А. Голованова. М.: ЗАО «Новый издательский дом», 2004. – 528 с.

