Introduction

This practicum has been compiled in accordance with the requirements for the communicative training of graduates of the innovative educational programme. It has been developed on the basis of the denotative methodology of teaching various kinds of foreign-language communication and represents a modified, innovative variant of A.I. Novikov's method of denotative analysis of text semantics, developed within psycholinguistics, a priority line of research in Russian linguistics. The structure and content of the practicum are determined by the goal of building communicative competence.

The main part of the practicum consists of ten units, each containing specially selected authentic texts on the subject matter of the innovative educational programme, intended for classroom work and for students' independent study. The units contain no pre-text tasks; the teacher may choose the kind of text work at his or her own discretion. The tasks and exercises developed for each unit are aimed at activating the skills a modern specialist needs when working with specialised texts. The principles of processing text information are based on analysing the structure of text content using the technique of building denotation graphs. The post-text exercises are aimed at consolidating the material covered, mastering vocabulary and special computer-technology terms, and developing the skills of compressing and expanding information. The grammar exercises are intended to reinforce grammar rules on the material of texts in the students' speciality. The discussion tasks develop communicative skills as well as the ability to analyse and generalise the information obtained.

---------------------------------------------------------------------------------------

Unit One

Text. SUPERCOMPUTER: HISTORY

Key words: processing capacity, speed of calculation, supercomputer market, scalar processors, vector processor, parallel design

A supercomputer is a computer that is considered, or was considered at the time of its introduction, to be at the frontline in terms of processing capacity, particularly speed of calculation. The term "Super Computing" was first used by the New York World newspaper in 1929 to refer to large custom-built tabulators IBM made for Columbia University.

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). A little-remembered fact is that Cray himself never used the word "supercomputer"; he recognized only the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, who had purchased many of the 1980s companies to gain their experience, although Cray still specializes in building supercomputers. The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's normal computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies.
In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. (This is commonly and humorously referred to in the industry as the attack of the killer micros.) The New York Times, November 20, 1990, reported: "The long-running debate about how best to make computers thousands of times more powerful than they are today appears to be ending in a stunning consensus. The nation's leading computer designers have agreed that a technology considered the underdog as recently as two years ago has the most potential. The winning technology is known as massively parallel processing. For example, Cray and Convex announced that their companies are embarking on designs for massively parallel computers." Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Itanium, or x86-64, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects. According to Seymour Cray, the next qualitative step for supercomputing will be biological computing: using DNA and proteins as the computing elements, just as Nature does.
Activities

I. Complete the following sentences on the basis of the text information
1. A supercomputer is characterized as ... .
2. At first the term supercomputer referred to ... .
3. Modern supercomputers are typically ... .
4. The speed of CDC's first supercomputers was ... .
5. Most supercomputers in the 1970s and the mid-1980s used ... .
6. The 1990s saw the tendency of replacing vector processors with ... .
7. Modern supercomputers can be considered as ... .
8. The debate about ways of speeding up computers seems ... .
9. The technology considered as a ... turned out to be ... .

II. Match the beginning of a sentence with an ending to produce a statement that is correct according to the text
III. Questions for discussion:
1. What is the main feature that distinguishes supercomputers from ordinary computers?
2. Seymour Cray, a developer of supercomputers, didn't recognize the term supercomputer. Can you explain this?
3. What was the reason for the "supercomputer market crash" in the mid-1990s?
4. Do you agree that today's supercomputer tends to become tomorrow's normal computer? Why?

IV. Translate the sentences and write out the underlined word expressions. Find their equivalents in the text
1. A supercomputer was considered at the time of its introduction to be in the van in terms of processing capacity.
2. Large, made-to-order tabulators IBM produced for Columbia University.
3. He then assumed control of the supercomputer market with his new designs, being the leader in supercomputing for five years.
4. In the 1970s most supercomputers were devoted to running a vector processor.
5. The long-running debate about how best to make computers thousands of times more powerful than they are today appears to be ending in a staggering agreement.
6. A technology considered the loser as recently as two years ago has the most potential.
7. Cray and Convex announced that their companies are starting designs for massively parallel computers.

V. Group work
Each group of students should choose two words (word expressions) from the key words given before the text and prepare an explanation in the mother tongue. The other groups have to guess the word concerned.

VI. Make up a summary according to the graph, using the information from the text:
Supercomputer introduction -> Seymour Cray company -> Supercomputer market -> Vector processor system -> Parallel processing system

VII. Read and translate the text, suggest the title

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users. A particular class of problems, known as Grand Challenge problems, comprises problems whose full solution requires semi-infinite computing resources.

Relevant here is the distinction between capability computing and capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction, Multiple Data) processing instructions for general-purpose computers. Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers.
Indeed, some graphics cards have the computing power of several TeraFLOPS. The range of applications to which this power could be applied was limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, Graphics Processing Units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).

Comprehensive text-related glossary
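The vector/SIMD idea described in the text can be made concrete with a minimal C++ sketch (the function, names and sizes below are illustrative, not taken from the text): one instruction stream applies the same multiply-add to many independent data elements, which is exactly the loop shape a vectorizing compiler can map onto SIMD instructions.

#include <cstddef>
#include <cstdio>
#include <vector>

// One instruction stream, many independent data elements: each
// iteration is independent, so the loop can be executed several
// elements at a time by SIMD/vector hardware.
void scale_and_add(const std::vector<float>& a, const std::vector<float>& b,
                   std::vector<float>& out, float k)
{
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = k * a[i] + b[i];
}

int main()
{
    std::vector<float> a(8, 1.0f), b(8, 2.0f), out(8);
    scale_and_add(a, b, out, 3.0f);
    std::printf("%.1f\n", out[0]);  // prints 5.0
    return 0;
}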
VIII. Match the definitions with the terms given below in brackets:
a) computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals;
b) executing the same instruction on more than one set of data at the same time;
c) one trillion floating-point operations per second;
d) the unit holding all graphic data to be processed;
e) using the maximum computing power to solve a large problem in the shortest amount of time;
f) using efficient, cost-effective computing power to solve large problems or many small problems
(Capability computing; Capacity computing; Molecular modeling; SIMD; TeraFLOPS; Graphics Processing Unit)

IX. Using the text information, decide whether the following statements are: a) true; b) false; c) there is no information in the text
1. Today supercomputers are used to solve all highly calculation-intensive problems.
2. Grand Challenge problems are the problems of capability computing and capacity computing.
3. Vector processing techniques are not usually used in general-purpose computers.
4. General-purpose vector processors used in modern video game consoles are GPUs.
5. GPUs turned out to be more useful as general-purpose vector processors because of the growing complexity of video processing.

X. Match the parts of the sentences from the left and right columns
XI. Group work
1. Working in pairs, try to recall as many tasks solved by supercomputers as you can.
2. What is the difference between capacity computing and capability computing? Give some examples of problems to be solved by these systems.
3. What are the applications of vector processing techniques?

---------------------------------------------------------------------------------------

Unit Two

Text. SUPERCOMPUTER OPERATING SYSTEMS

Key words: hardware, conventional desktop software, processing and memory access speed, vectorizing and parallelizing compilers

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing. As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and using hardware to accelerate the remaining bottlenecks.

Supercomputers predominantly run some variant of Linux or UNIX. Linux has been the most popular operating system since 2004. Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that because these computers, often priced at millions of dollars, are sold to a very small market, their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.) Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as NVIDIA, who have been able to produce cheap, feature-rich, high-performance and innovative products due to the vast number of consumers driving their R&D.

Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux). For this reason future highest-performance systems are likely to have a UNIX flavor but with incompatible system-unique features.
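The text invokes Amdahl's law without stating it. The law says that if a fraction p of a program can be parallelized across n processors, the overall speedup is limited to 1 / ((1 - p) + p / n). A minimal C++ check with assumed numbers (p and n below are illustrative, not from the text):

#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallelizable fraction and n the processor count.
double amdahl_speedup(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main()
{
    // Even with 1000 CPUs, a program that is 95% parallel gains less
    // than a 20x speedup: the serial 5% dominates. This is why
    // supercomputer designs devote such effort to eliminating
    // software serialization.
    std::printf("%.1f\n", amdahl_speedup(0.95, 1000.0));  // ~19.6
    return 0;
}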
Comprehensive text-related glossary
Activities

I. Complete the sentences using the information from the text
1. The most widely used operating system in supercomputers is ... .
2. OS developers sacrificed highly developed user interfaces for ... .
3. Technology leaders such as Silicon Graphics failed in the competition with NVIDIA because ... .
4. Until the mid-1980s there existed a wide range of ... .
5. The number of operating systems used in the Cray-1 was ... .
6. The adoption of UNIX operating systems resulted in ... .
7. Supercomputers using custom CPUs gain their speed over conventional computers by ... .
8. The supercomputer memory hierarchy is designed to ... .

II. Find the equivalents to the words given in box A from box B
III. Answer the questions:
1. What kinds of operating systems are mentioned in the text?
2. How did the OS developers achieve the optimal utilization of the machine's hardware? What was the result of it?
3. Supercomputers have a moderate R&D budget, though they are often priced at millions of dollars. Can you explain why?
4. What was the tendency in operating system use until the mid-1980s?
5. Was the problem of incompatibility solved? How?

IV. Find all words with -ing forms in the text. What parts of speech are they? Make up six sentences using some of them.

V. Group work
Try to identify the key ideas in each paragraph, working with your neighbours. Report your key ideas to another group and give reasons for your choices.

VI. Read and translate the text, find the key words. Suggest the title

It has recently been shown that a new, very large-scale and high-performance simulation system can be built as a cluster of computers that are relatively slow and cheap. However, existing solutions in simulation do not permit the use of all the advantages of these new architectures. In other words, there is a need to create new, portable, flexible, powerful, well-scaled tools of simulation. The parallel simulation system was realized for the RM600-E30 supercomputer, an SMP system under Reliant UNIX. The source language is a C++-based, process-oriented, discrete simulation language. The system provides the following capabilities: interaction of processes with the help of message passing; building of hierarchical models; dynamical change of the structure of the model.

The goal of developing the simulation system was to obtain a very portable, high-performance system. This goal is achieved by using recent approaches in system design and recent portable and progressive techniques. These techniques are threads and MPI (Message Passing Interface). Presently, many operating systems support these techniques. This technology makes it possible to achieve portability of the simulation system across distinct parallel and distributed architectures. The performance is achieved by dividing the run-time system into a Communication Part and a Simulation Engine. The Communication Part provides for message passing between processes and also synchronizes execution of the model in model time. The method of synchronization can be set in the Communication Part. It is intended to realize a library of various methods of synchronization, both conservative and optimistic. It is also intended to provide the capability to choose the method that is best suited, in terms of performance, to a definite class of models or to individual models. Recently, one conservative method of synchronization has been realized in the simulation system. There are various realizations of the simulation system: a quasiparallel realization for Windows 95/98/NT and a distributed realization for the QNX operating system. The parallel simulation system was also realized on an MBC-1000M supercomputer. The parallel simulation system is intended for large-scale simulation of large systems.

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking.
Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular set of problems.

Comprehensive text-related glossary
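The simulation system in the text rests on "interaction of processes with the help of message passing" via MPI. The following is only a minimal, hedged C++ sketch of that general technique, not code from the system described (all names and values are illustrative): rank 0 sends a model-time value to rank 1. It requires an MPI implementation and at least two processes (compile with mpic++, run with mpirun -np 2).

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double model_time = 42.0;  // illustrative "model time" value
    if (rank == 0) {
        // Process 0 passes the timestamp to process 1 as a message.
        MPI_Send(&model_time, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&model_time, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("rank 1 received model time %.1f\n", model_time);
    }

    MPI_Finalize();
    return 0;
}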
VII. Match the beginning of a sentence with an ending to produce a statement that is correct according to the text
VIII. Find in the text the notions to which the following characteristics refer. Make up your own sentences with some of them:
1) slow and cheap;
2) new, portable, flexible, powerful, well-scaled;
3) brute-force;
4) new, very large-scale, high-performance;
5) process-oriented, discrete;
6) conservative and optimistic;
7) distinct parallel and distributed;
8) special-purpose;
9) quasiparallel and distributed.

IX. Match these topics with the five text paragraphs. One of the topics does not correspond to any paragraph. Correct it
2. The method of synchronization.
3. The problem of building a new, large-scale and high-performance simulation system.
4. A library of various methods of synchronization.
5. Special-purpose supercomputers.

X. Make a summary using the graph below and the text information:
Parallel Simulation System
- Capabilities: interaction of processes; building of hierarchical models; dynamical change of the structure of the model
- Language: C++-based, process-oriented, discrete simulation language
- Recent approaches in system design; Techniques: threads; MPI
- Systems: parallel simulation; SMP RM600-E30; quasiparallel
- Run-time system: Communication Part; Simulation Engine; synchronization
- Models

---------------------------------------------------------------------------------------

Unit Three

Text. MODERN SUPERCOMPUTER ARCHITECTURE

Key words: supercomputer design, top-level architecture, cluster of multiprocessors, SIMD processor, the fastest machine, parallelization

CPU architecture share of Top500 rankings between 1998 and 2007; the x86 family includes x86-64.

2005: Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and the design concepts that allowed past supercomputers to outperform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; many workloads requiring such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars.

November 2006: the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
- A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
- A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, where the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric Multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
- A SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance processor or a low-power processor.

November 2007: the fastest machine is Blue Gene/L. This machine is a cluster of 65,536 computers, each with two processors, each of which processes two data streams concurrently. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.
Additionally, many problems worked on by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously), and in particular for fairly coarse-grained parallelization, which limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.

Comprehensive text-related glossary
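The coarse-grained parallelization just described can be sketched in a few lines of standard C++ (a hypothetical toy workload, not from the text): the data are split into one contiguous chunk per worker, so the only communication between the processing units is the final reduction.

#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    std::vector<double> data(1'000'000, 1.0);  // illustrative workload
    const unsigned nworkers = 4;
    std::vector<double> partial(nworkers, 0.0);
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / nworkers;
    for (unsigned w = 0; w < nworkers; ++w) {
        // Capture w by value so each worker keeps its own index.
        workers.emplace_back([&, w] {
            // Each worker sums its own contiguous chunk independently.
            auto first = data.begin() + w * chunk;
            auto last  = (w + 1 == nworkers) ? data.end() : first + chunk;
            partial[w] = std::accumulate(first, last, 0.0);
        });
    }
    for (auto& t : workers) t.join();

    // The only inter-worker communication is this final reduction.
    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("%.0f\n", total);  // prints 1000000
    return 0;
}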
Activities

I. Say what is true and what is false. Correct the false sentences
1. The design concepts developed for supercomputers are now used in commodity computers.
2. Chip R&D is not profitable in terms of mass production.
3. The main difference between supercomputers in 2006 is the number of multiprocessors.
4. A computer cluster is a collection of computers running a single operating system.
5. The application-level software in supercomputers depends on the number of CPUs.
6. Each computer in a cluster runs under a different OS.

II. Write down the main characteristics of supercomputer architecture in 2005, 2006 and 2007.

III. Find the paragraphs that mention:
1) coarse-grained parallelization;
2) special-purpose vector processors;
3) the number of multiprocessors per cluster;
4) mass-produced chips;
5) the application-level software.
Make up short sentences using these word combinations.

IV. Questions for discussion:
1. The tasks that supercomputers solved 15 years ago can now be handled by ordinary commodity PCs. How can you explain this?
2. What does the architecture of a supercomputer include?
3. What are the main principles of supercomputer work?
4. Do you agree that challenge tasks can be solved only by supercomputers? Why?

V. Find in the text on modern computer architecture the words which can render the main text content. Try to make a denotation graph.

VI. Read and translate the text. Find in the text and write out the key words (i.e. the words which represent the main ideas of the text)

Supercomputer challenges

A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem. Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of the Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical. Supercomputers consume and produce massive amounts of data in a very short period of time. Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

The complex architecture of modern computers makes the development of effective programs a hard task. It is known that an average programmer does not achieve 20% of the peak performance of his computer even with sequential programs. Parallel programs show much worse figures because of the difficulties which come from interprocess communications. When trying to increase the performance of an application, the developer has to take into account general principles of effective programming and the particular architecture features of the target supercomputer. Also, it is important to make the right choice of tools and ways of parallel programming, and to pay attention to the characteristic properties of the system software. Usually, an application programmer is not able to reveal the bottlenecks of the target supercomputer, and does not know where the possibilities to increase performance are and how to use them to the best advantage. This is because the task requires testing of the supercomputer at different levels of its architecture and measurement of its properties.
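The "tens of nanoseconds" figure in the first paragraph above can be verified with back-of-the-envelope arithmetic: light travels roughly 0.3 m per nanosecond, so a hypothetical 10-metre signal path already costs about 33 ns one way, before any switching delays. A minimal C++ check (the distance is an assumed value, not from the text):

#include <cstdio>

int main()
{
    const double c = 2.998e8;        // speed of light in m/s
    const double distance_m = 10.0;  // assumed machine diameter
    double latency_ns = distance_m / c * 1e9;
    std::printf("%.1f ns\n", latency_ns);  // ~33.4 ns one way
    return 0;
}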
Most of the widely known test suites can be used either to estimate the performance of components of supercomputers (Flops, STREAM, NetTest, the Livermore loops, MPI tests, etc.) or to determine the performance of particular applications (Linpack, SPEC, NPB).

VII. Match the beginning of a sentence with an ending to produce a statement that is correct according to the text
VIII. Answer the questions:
1. What are the main problems of supercomputers mentioned in this text?
2. Seymour Cray's supercomputer designs have a cylindrical shape. What is the reason?
3. Can an average programmer use a supercomputer effectively? Give your reasons.
4. What should a user take into account when running a supercomputer?
5. How can we test supercomputers?

IX. Group work
Choose one important new word or phrase in the text, then prepare an explanation of this word or phrase. The other groups should guess the word or phrase concerned.

X. Using the information given in the text, complete the graph with the missing phrases, then make your own sentences
---------------------------------------------------------------------------------------

Unit Four

Text. PARALLEL PROCESSING

Key words: processor, speed, memory, input/output, distributed computing systems, intercommunication, parallelization, overhead

There are three primary limits to performance at the supercomputer level: individual processor speed, the overhead involved in making large numbers of processors work together on a single task, and the input/output speed between processors and between processors and memory. Input/output speed between the data-storage medium and memory is also a problem, but no more so than in any other kind of computer, and, since supercomputers all have amazingly high RAM capacities, this problem can be largely solved with the liberal application of large amounts of money.

The speed of individual processors is increasing all the time, but at a great cost in research and development, and the reality is that we are beginning to reach the limits of silicon-based processors. Seymour Cray showed that gallium arsenide technology could be made to work, but it is very difficult to work with, and very few companies know enough to make usable processors based on it. It was such a problem that Cray Computer was forced to acquire its own GaAs foundry so that it could do the work itself.

The solution the industry has been turning to, of course, is to add ever-larger numbers of processors to its systems, giving them their speed through parallel processing. However, parallelism brings with it the problems of high overhead and the difficulty of writing programs that can utilize multiple processors at once in an efficient manner. Both problems had existed before, as most supercomputers had from two to sixteen processors, but they were much easier to deal with on that level than on the level of complexity arising from the use of hundreds or even thousands of processors. If these machines were to be used the way mainframes had been used in the past, then relatively little work was needed, as a machine with hundreds of processors could handle hundreds of jobs at a time fairly efficiently. Distributed computing systems, however, are more efficient solutions to the problem of many users with many small tasks. Supercomputers, on the other hand, were designed, built, and bought to work on extremely large jobs that could be handled by no other type of computing system. So ways had to be found to make many processors work together as efficiently as possible.