
    Data collection module. This module periodically polls data sources to obtain current information. Data sources are described in a special language that also supports basic processing of the received data, i.e. bringing it into a form that the other modules can use.
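    As a rough illustration, here is a minimal Python sketch of such periodic polling; the source names, the interval, and the lambda "sensors" are placeholders, and a real system would wrap device drivers or network requests instead.

```python
import time
from typing import Any, Callable, Dict, List


def poll_sources(sources: Dict[str, Callable[[], Any]],
                 interval_s: float, cycles: int) -> List[dict]:
    """Periodically poll every registered source and collect raw readings."""
    collected = []
    for _ in range(cycles):
        stamp = time.time()
        for name, read in sources.items():
            # Bring each value to a common record format usable by the other modules.
            collected.append({"source": name, "timestamp": stamp, "value": read()})
        time.sleep(interval_s)
    return collected


# Simulated sources standing in for real sensors or network endpoints.
readings = poll_sources({"pressure": lambda: 101.3, "temperature": lambda: 21.5},
                        interval_s=0.1, cycles=3)
print(readings[:2])
```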

    Data analysis module. It performs the initial check of the data obtained by the data collection module for reliability and for outliers according to criteria specified in the program. In the simplest case, these criteria can be the maximum and minimum permissible values of the parameters.
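    A minimal sketch of such a check, assuming per-parameter minimum/maximum limits; the limits and sample values below are invented for illustration.

```python
def validate(readings, limits):
    """Split readings into valid values and outliers using per-parameter [min, max] limits."""
    valid, outliers = [], []
    for r in readings:
        low, high = limits.get(r["source"], (float("-inf"), float("inf")))
        (valid if low <= r["value"] <= high else outliers).append(r)
    return valid, outliers


sample = [{"source": "pressure", "value": 101.3},
          {"source": "pressure", "value": 250.0}]   # the second reading is implausible
valid, outliers = validate(sample, {"pressure": (80.0, 120.0)})
print(len(valid), len(outliers))                    # -> 1 1
```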

    Data classification module. It is needed to separate the resulting data into groups (Example 1: normal pressure, high pressure and critical pressure. Example 2: student, graduate student, employee). The features used for classification are specified in the module's configuration files using a special description language.
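    A possible way to express such class boundaries, shown here as a small Python sketch; the pressure thresholds are hypothetical and stand in for what the module's description language would specify.

```python
def classify(value, classes):
    """Return the name of the first class whose [low, high) range contains the value."""
    for name, (low, high) in classes:
        if low <= value < high:
            return name
    return "unclassified"


# Hypothetical pressure boundaries (kPa) mirroring Example 1 from the text.
pressure_classes = [("normal pressure", (0.0, 110.0)),
                    ("high pressure", (110.0, 140.0)),
                    ("critical pressure", (140.0, float("inf")))]
print(classify(101.3, pressure_classes))   # -> normal pressure
print(classify(155.0, pressure_classes))   # -> critical pressure
```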

    Database. Processed and classified data is transferred to DBMS storage. To keep the program universal, the MySQL DBMS is used. It also makes it possible to organize permanent backup of the stored information and continuous maintenance of the integrity of the database as a whole. In addition, the advantages of this DBMS include cross-platform support and the ability to organize clusters for storing large amounts of data.
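    Assuming the mysql-connector-python driver, placeholder credentials, and an invented table layout, storing a classified measurement could look roughly like this.

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder credentials; a real deployment would take them from configuration.
conn = mysql.connector.connect(host="localhost", user="collector",
                               password="secret", database="ssd")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        id          BIGINT AUTO_INCREMENT PRIMARY KEY,
        source      VARCHAR(64) NOT NULL,
        data_class  VARCHAR(32) NOT NULL,
        value       DOUBLE      NOT NULL,
        measured_at DATETIME    NOT NULL
    )
""")
cur.execute("INSERT INTO measurements (source, data_class, value, measured_at) "
            "VALUES (%s, %s, %s, NOW())",
            ("pressure", "normal pressure", 101.3))
conn.commit()
conn.close()
```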

    Analysis of data relevance. At the classification stage, each class is assigned a period during which the obtained data remains relevant. This module is a kind of filter that prevents obsolete data from entering current reports. However, when building a reliable development model it is necessary to take all data into account, including historical data, so the use of this module in the system is optional.
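    A minimal sketch of such a relevance filter; the per-class periods below are assumptions, since the text does not specify concrete values.

```python
from datetime import datetime, timedelta

# Hypothetical relevance periods assigned to classes at the classification stage.
RELEVANCE = {"normal pressure": timedelta(days=30),
             "critical pressure": timedelta(days=365)}


def still_relevant(record, now=None):
    """Keep a record only while its class-specific relevance period has not expired."""
    now = now or datetime.now()
    return now - record["measured_at"] <= RELEVANCE.get(record["data_class"], timedelta.max)


old = {"data_class": "normal pressure",
       "measured_at": datetime.now() - timedelta(days=45)}
print(still_relevant(old))   # -> False: too old to appear in a current report
```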

    Statistical analysis module. This is perhaps the most complex module of the system. It combines various functions implementing different methods of statistical analysis. At the output, the module provides data obtained with the selected method of statistical analysis, or models built for different classes of the variable according to the algorithm chosen by the user. A module for neural network analysis is also planned.

    User modules for compiling current reports and reports forecasting the dynamics of variable development are, in effect, the user interface for working with the program. The modules are Web-oriented, so they do not require installing additional software on the user's computer. Most of the data is presented to the user in graphical form, which greatly simplifies analysis and decision-making.

    The module that carries the main load is the data analysis module. Typical tasks that will be solved by this module are:

    1. Description of data (compact and informative presentation of the obtained data).

    2. Establishing matches between groups of data (e.g., matches by month or by source).

    3. Distinguishing between groups of data.

    A description of the data. The tasks solved by the program usually involve large sets of measured data (hundreds, sometimes thousands, of measurement results for individual characteristics), so the task of describing the available data compactly arises. For this, methods of descriptive statistics are used: describing the results with various aggregated indicators and graphs.

    In addition, some indicators of descriptive statistics are used in statistical criteria in determining the reliability of coincidences and/or differences in the characteristics of several groups of data.

    Indicators of descriptive statistics can be divided into several groups:

    • Position indicators describe the position of the experimental data on the numerical axis. Examples are the maximum and minimum sampling elements, the mean, the median, the mode, etc.;

    • Spread indicators describe the degree of dispersion of the data relative to its center (the mean). These include the sample variance, the difference between the minimum and maximum elements (the range, or sampling interval), etc.;

    • Asymmetry indicators: the position of the median relative to the mean, etc.;

    • graphs, diagrams, etc.

    These indicators are used for visual representation and primary ("visual") analysis of the results.
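    To make the groups of indicators concrete, here is a short Python sketch using the standard statistics module on an invented sample.

```python
import statistics

sample = [12.1, 12.4, 11.9, 12.4, 13.0, 12.2, 12.4, 11.8]

position = {"min": min(sample), "max": max(sample),
            "mean": statistics.mean(sample),
            "median": statistics.median(sample),
            "mode": statistics.mode(sample)}
spread = {"sample_variance": statistics.variance(sample),
          "range": max(sample) - min(sample)}
asymmetry = {"median_minus_mean": statistics.median(sample) - statistics.mean(sample)}

print(position)
print(spread)
print(asymmetry)
```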

    General approaches to determining the validity of coincidences and differences. As noted above, the typical task of data analysis in pedagogical research is to establish coincidences or differences in the characteristics of different groups of data. To do this, statistical hypotheses are formulated:

    • the hypothesis of no differences (the so-called null hypothesis);

    • the hypothesis of significant differences (the so-called alternative hypothesis).

    To decide which of the hypotheses (null or alternative) should be accepted, decision rules - statistical criteria - are used. That is, based on information about the observation results (characteristics of the members of the experimental and control groups), a number called the empirical value of the criterion is calculated. This number is compared with a known reference number (e.g., given in a table) called the critical value of the criterion.

    Critical values are usually given for several significance levels. The significance level is the probability of the error of rejecting the null hypothesis when it is true, that is, the probability that differences are considered significant when they are actually random.

    Typically, significance levels (denoted by α) of 0.05, 0.01, and 0.001 are used. If the empirical value of the criterion obtained by the researcher is less than or equal to the critical one, the null hypothesis is accepted: it is considered that, at the given significance level (that is, for the value of α for which the critical value of the criterion is calculated), the characteristics of the data groups coincide. Otherwise, if the empirical value of the criterion is strictly greater than the critical one, the null hypothesis is rejected and the alternative hypothesis is accepted: the characteristics of the data groups are considered different, with confidence in the differences of (1 − α).

    In other words, the smaller the empirical value of the criterion (the farther it lies to the left of the critical value), the greater the degree of coincidence between the characteristics of the compared objects. Conversely, the larger the empirical value of the criterion (the farther it lies to the right of the critical value), the more the characteristics of the compared objects differ.
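    As an illustration of this decision rule, here is a sketch using SciPy with a two-sample t-test as the criterion; the group values and the significance level are invented, and the specific criterion is my choice for the example, not one prescribed by the text.

```python
from scipy import stats

# Invented characteristics of a control and an experimental group.
control      = [72, 75, 71, 69, 74, 73, 70, 76]
experimental = [78, 80, 74, 79, 81, 77, 75, 82]
alpha = 0.05

# Empirical value of the criterion: two-sample t statistic (equal variances assumed).
t_emp, p_value = stats.ttest_ind(experimental, control)

# Critical value of the criterion for the same significance level (two-sided test).
df = len(control) + len(experimental) - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)

if abs(t_emp) <= t_crit:
    print(f"|t| = {abs(t_emp):.2f} <= {t_crit:.2f}: accept the null hypothesis")
else:
    print(f"|t| = {abs(t_emp):.2f} > {t_crit:.2f}: groups differ with confidence {1 - alpha}")
```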

    2. METHODS OF APPLICATION AND TYPES OF INFORMATION COLLECTION SYSTEMS IN COMPUTERS.

    A data collection system (SSD) is a complex of tools designed to work in conjunction with a personal computer or a specialized computer, carrying out automated collection of information about the values of physical parameters at given points of the research object from analog and/or digital signal sources, as well as primary processing, accumulation and transmission of data.

    Together with a personal computer equipped with specialized software, the data collection system forms an information and measuring system (IIS). An IIS is a multi-channel measuring instrument with extensive data processing and analysis capabilities. On the basis of an IIS, various automated control systems (ACS) can be built, including information and logical complexes (automated process control systems, APCS), information and computing complexes (automated systems for scientific research, ASNI), information and diagnostic complexes, and information control systems.

    Classification

    According to the method of interfacing with the computer, data collection systems can be divided into:

    1) data collection systems based on embedded data acquisition cards with a standard system interface (the PCI interface is the most common);

    2) data collection systems based on data acquisition modules with an external interface;

    3) data collection systems made in the form of crates;

    4) groups of digital measuring instruments or intelligent sensors, organized over GPIB, 1-Wire, CAN or HART interfaces.

    According to the method of obtaining information, data collection systems are divided into:

    1) scanning,

    2) multiplex (multiplexer, sometimes called "multi-point"),

    3) parallel,

    4) multiplied.

    The last type of data collection system is practically not used because of its exceptionally low performance. The only advantage of an SSD of this type - relative simplicity - is completely negated by modern integrated-circuit manufacturing technologies.

    Characteristics

    The scanning principle of building a data collection system is used where it is necessary to measure a field of distribution of parameters: thermal imagers, ultrasound machines and tomographs obtain their primary information from scanning-type data collection systems.

    Parallel data collection systems are systems based on so-called intelligent sensors (ID). Each ID is a single-channel SSD with a specialized interface. Historically, the first parallel data collection systems were systems in which each sensor had only its own ("personal") ADC, while data collection and processing were performed by a multiprocessor computer. Currently, the computing power of an "ordinary" computer is usually quite sufficient for collecting and processing measurement information. Parallel systems have not yet displaced multiplexer ones because of their hardware redundancy. However, in some cases the parallel principle is attractive: when inexpensive ready-made IDs and an inexpensive communication channel are available (for example, a system based on the 1-Wire interface), or when the number of channels is small (quad sigma-delta ADCs are produced), etc.

    A multiplex (multiplexer) data collection system has individual analog signal processing means for each measuring channel and an analog-to-digital conversion unit common to all channels (besides the ADC itself, it necessarily includes an "anti-aliasing" low-pass filter and a sample-and-hold device, and optionally a protection circuit and a sign-bit generation circuit). Multiplex data collection systems are currently the most common.

    A typical data acquisition system is multiplex and contains the following nodes: sensors, an analog switch, a measuring amplifier, an analog-to-digital converter, a data acquisition controller, and an interface module. Data collection systems are also often equipped with digital input/output lines and a digital-to-analog converter (DAC).
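    A toy software model of one multiplexed acquisition cycle may help picture how these nodes interact; the channel names, settling delay, and the simulated switch/ADC calls are all placeholders for real hardware drivers.

```python
import random
import time

CHANNELS = ["pressure", "temperature", "flow"]


def select_channel(channel):
    """Placeholder for the analog switch driver that routes one sensor to the shared path."""
    pass


def read_adc():
    """Placeholder for the shared ADC; returns a 12-bit code."""
    return random.randint(0, 4095)


def acquisition_cycle():
    frame = {}
    for channel in CHANNELS:
        select_channel(channel)
        time.sleep(0.001)          # settling time before the sample-and-hold captures the signal
        frame[channel] = read_adc()
    return frame


print(acquisition_cycle())
```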

     

    EXAMPLES of information sources used in automated data collection and processing:

    1. Sensors that record raw material costs, output and equipment downtime.

    2. Various measuring flow devices, e.g. fuel meters at automatic car filling stations.

    3. Modern electronic scales used by wholesale suppliers and packaging departments in large food chains.

    4. Automated time tracking systems based on smart cards.

    5. Banknote counters and electronic cash registers.

    6. Video cameras installed in cities – in addition to safety functions, they can also be used in data collection for further analysis of traffic flows.

    7. Primary data from paper media (documents, tables, graphs) is entered into automated data collection and processing systems directly from personal computers or with the help of scanners.

    Information from these and many other sources goes directly to the automated data collection and processing system. The received data is processed, including decoding where necessary, and converted into a form convenient for making management decisions.

    How do automated data collection and processing systems work?

    The input receives analog signals from sensors of physical quantities installed on various objects, including natural ones. A variety of devices in the complex amplify the signal, remove extraneous noise, filter it, and convert it to digital form. Then, already in digital form, the information enters the controller for initial processing.
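    As a rough sketch of what this front end does, the following Python function imitates amplification, smoothing, and quantization to an ADC code; the gain, reference voltage, and resolution are arbitrary illustrative values.

```python
def digitize(samples, gain=10.0, window=3, vref=5.0, bits=12):
    """Amplify, smooth with a moving average, then quantize to an ADC code."""
    amplified = [s * gain for s in samples]
    smoothed = []
    for i in range(len(amplified)):
        chunk = amplified[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    full_scale = (1 << bits) - 1
    return [min(full_scale, max(0, round(v / vref * full_scale))) for v in smoothed]


print(digitize([0.10, 0.12, 0.11, 0.35, 0.13]))   # invented sensor voltages, in volts
```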

    Various additional devices are also involved in data collection:

    Scanners collect printed information or graphic images. In production, these can be barcode scanners, as well as automatic pressure, temperature, and humidity sensors.

    Audio and video information is collected by voice recorders and video cameras, special devices for recording radio and television signals.

    Complexes of data collection and processing are used quite widely, in almost all areas of industry and agriculture, as well as in scientific research.

    Geological exploration in hard-to-reach places and the collection of information from artificial satellites of the earth, warnings about natural disasters and the calculation of the load on power grids during their design, the management of a large plant and the control of resource consumption in housing and communal services - all this would be impossible without information collection and processing complexes.

    Where are they used?

    Books with accounting entries, lists of suppliers and lists of products - all this remained in the last century, along with forgotten accounts and payroll statements. Most of today's generation of managers and business owners don't even know what these once necessary things look like.

    Business automation began with accounting. A couple of decades ago automation merely recorded accounting entries, which made it easier to compile the balance sheet; modern business automation platforms also help keep records of suppliers, buyers, groups of goods and individual items, and generate reports in any combination of data.

    Automatic generation of declarations, reports for state bodies and payroll records has greatly facilitated the work of accountants, reduced technical errors, and saved considerable sums by optimizing staff numbers and reducing fines.

    Modern platforms for business automation are also being introduced for management accounting. Many of them provide opportunities to manage the company's business processes, set tasks and monitor their implementation.

    The work of purchasing managers, who operate with data from hundreds of suppliers and tens of thousands of items of goods, is also impossible to do manually. Business automation platforms not only automate ordering goods at certain times from certain suppliers; they can also track price reductions and suggest replacing the supplier of a particular item to the company's benefit.

    The work of large medical centers, including private ones, is currently greatly facilitated by the introduction of platforms for business automation - medical information systems. These systems store all information about patients, appointments, drugs, operations. Doctors do not waste time manually filling out a large number of reports, searching for information about the patient and the treatment performed.

    In conclusion, it is now possible to automate processes even for microbusinesses and the self-employed: many banks, along with maintaining accounts, provide the necessary services for online accounting and automatic tax deductions.

     

    3. TECHNICAL PART OF SSD ANALYSIS

    You can imagine a processor that, instead of executing a set of instructions, is rebuilt for each program and turns the algorithm directly into hardware. That is how FPGAs work. FPGA stands for field-programmable gate array; in Russian they are called user-programmable gate arrays. More generally, such devices are referred to as programmable logic integrated circuits.

    With the help of an FPGA, you can literally design digital chips while sitting at home with an affordable debug board on the table and developer software costing a couple of kilobucks (there are also free options). It is worth stressing: design, not program, because the output is a physical digital circuit that performs a certain algorithm at the hardware level, not a program for a processor.

    It works something like this: there is a ready-made printed circuit board with a set of interfaces connected to the FPGA chip installed on the board, be it a high-end board for a data center or a debug board for training.

    Until we configure the FPGA, there is simply no logic inside the chip to process data from the interfaces, so obviously nothing will work. But as a result of the design process, firmware is created which, after being loaded into the FPGA, creates the digital circuit we need. For example, you can create a 100G Ethernet controller that receives and processes network packets.

    An important feature of the FPGA is its ability to be reconfigured. Today we need a 100G Ethernet controller, and tomorrow the same card can be used to implement four independent 25G Ethernet interfaces.

    There are two major manufacturers of FPGA chips: Xilinx and Intel, which control 58% and 42% of the market, respectively. The founders of Xilinx invented the first FPGA chip back in 1985. Intel entered the market only recently, in 2015, by absorbing Altera, which was founded around the same time as Xilinx. Xilinx and Altera technologies are similar in many ways, as are their development environments.

    FPGAs are widely used in various devices: consumer electronics, telecom equipment, accelerator boards for data centers, robotics of various kinds, as well as prototyping of ASIC chips. I will look at a couple of examples below. We will also consider the technology that makes hardware reconfiguration possible, get acquainted with the design process, and analyze a simple example of implementing a hardware counter in the Verilog language.