

  • Assessment of the rational directions of the forest industry development of the Russian Arctic border regions based on cluster analysis

    The purpose of this study is to present scenarios for the development of the forestry complex of the Republic of Karelia and the Murmansk region. Using factor analysis and cluster analysis, 27 central forest districts of the study region were divided into 9 clusters according to 20 indicators. The selected indicators reflected the characteristics of wood resources, natural production conditions and road infrastructure. Based on the cluster profiles, as well as on topographic, climatic, soil and vegetation maps, development scenarios for the forestry complex of the study region were drawn up for each of the resulting clusters. The results show that, moving from south to north, wood resources gradually become poorer. The efforts of the state and business should be aimed at resolving road infrastructure issues and at bringing deciduous, small-sized and energy wood into production circulation. Given the natural and production conditions, which are largely determined by moist forest soils and the extreme vulnerability of northern ecosystems, particular attention during logging must be paid to minimizing the negative impact of logging operations on the soil cover.

    Keywords: zoning, forest industry, factor analysis, cluster analysis, k-means cluster analysis, logging, forest management
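The k-means clustering step described in the abstract can be illustrated with a minimal stdlib sketch. The two-indicator district profiles below are invented for illustration and are not data from the study; the real work used 20 indicators and 27 districts.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Toy k-means: assign each point to its nearest centroid, then recompute."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster (keep it if the cluster is empty)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Hypothetical 2-indicator profiles (e.g. wood stock density, road density)
data = [(1.0, 1.1), (0.9, 1.0), (5.0, 5.2), (5.1, 4.9)]
centroids, clusters = kmeans(data, k=2)
```

In the study itself, each cluster's profile (centroid) is what the development scenarios were based on.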

  • Construction and evaluation of the effectiveness of a decision tree model for predicting student performance

    This work addresses the problem of increasing the effectiveness of educational activities by predicting student performance from external and internal factors. To this end, a model for predicting student performance was built using the Python programming language. The initial data for building the decision tree model were taken from the UCI Machine Learning Repository and pre-processed in the Deductor Studio Academic analytical platform. The results of the model are presented, and a study evaluating the effectiveness of the performance prediction was conducted.

    Keywords: forecasting, decision tree, student performance, influence of factors, effectiveness assessment
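The core operation of a decision tree, finding the split that best separates the classes, can be sketched in plain Python. The feature (study hours) and labels below are hypothetical, not the UCI data used in the article.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Threshold on one numeric feature that minimises the weighted
    Gini impurity of the two resulting child nodes."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical feature: weekly study hours; label: pass/fail
hours = [2, 3, 4, 8, 9, 10]
result = ["fail", "fail", "fail", "pass", "pass", "pass"]
threshold, impurity = best_split(hours, result)
```

A tree builder applies this recursively to each child node until the leaves are pure or a depth limit is reached.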

  • Applying DIANA hierarchical clustering to improve text classification quality

    The article presents ways to improve the accuracy of the classification of normative and reference information using hierarchical clustering algorithms.

    Keywords: machine learning, artificial neural network, convolutional neural network, normative reference information, hierarchical clustering, DIANA

  • On the development of an information system for processing the results of sports competitions on the 1C:Enterprise platform

    The article describes the results of developing an information system for processing the results of sports competitions on the 1C:Enterprise platform to provide informational support for the Don Family League sports social project, which involves entire families in sports and physical education. An object model of the configuration data is presented, which made it possible to structure the subject area, identifying the main application objects, their attributes and the relationships between them; this model was later used for algorithmization of the solution and software implementation of the toolset on the 1C:Enterprise platform. The software package was tested as part of informational support for the Don Family League project from July 2022 to July 2023 and demonstrated high efficiency.

    Keywords: All-Russian physical culture and sports complex «Ready for work and Defense», the Don Family League project, processing of results of sports competitions, rating of individual standings, rating of family standings, rating of team standings

  • Comparing the dependence of the effectiveness of neural networks for image resolution enhancement on format and size

    Roads have a huge impact on the life of a modern person. One of the key characteristics of a roadway is its quality. There are many systems for assessing the quality of the road surface. Such technologies work better with high-resolution images (HRI), because features are easier to identify on them. There are a number of ways to improve the resolution of photos, including neural networks. However, each neural network has its own characteristics; for some, it is quite problematic to work with photos of a large initial size. To understand how effective a particular neural network is, a comparative analysis is needed. In this study, the average time to obtain the HRI is taken as the main indicator of effectiveness. EDSR, ESPCN, ESRGAN, FSRCNN and LapSRN were selected as the neural networks; each increases the width and height of the image by 4 times (the number of pixels increases 16-fold). The source material is 5 photos in 5 different sizes (141x141, 200x200, 245x245, 283x283, 316x316) in png, jpg and bmp formats. ESPCN demonstrates the best performance according to the proposed methodology, and the FSRCNN network also shows good results. They are therefore preferable for the task of improving image resolution.

    Keywords: comparison, dependence, effectiveness, neural network, neuronet, resolution improvement, image, photo, format, size, road surface
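The study's effectiveness metric, average time to produce the upscaled image, can be sketched as a timing harness. The actual networks (EDSR, ESPCN, ESRGAN, FSRCNN, LapSRN) are not reproduced here; a nearest-neighbour 4x upscale stands in as a placeholder, and only a subset of the study's sizes is used.

```python
import time

def upscale_nn(img, factor=4):
    """Placeholder for a super-resolution model: nearest-neighbour 4x upscale.
    img is a list of rows of pixel values."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide] * factor)   # row objects shared; fine for timing
    return out

def avg_time(method, images, repeats=3):
    """Average wall-clock time of `method` over all test images."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        for img in images:
            method(img)
    return (time.perf_counter() - t0) / (repeats * len(images))

sizes = [141, 200, 245]                       # subset of the sizes in the study
images = [[[0] * n for _ in range(n)] for n in sizes]
t = avg_time(upscale_nn, images)
```

Swapping `upscale_nn` for each real model, and grouping the images by size and format, yields the comparison tables the abstract describes.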

  • Methodology of formation and determination of parameters of machine learning algorithms for classification of electronic documents according to the importance of information for officials of organizations

    The article considers a methodology for forming and determining the parameters of machine learning algorithms that classify electronic documents according to the importance of their information for officials of organizations. It differs from known approaches in the dynamic formation of the structure and number of machine learning algorithms, achieved by automatically determining the sets of structural divisions of the organization and the sets of keywords reflecting the tasks and functions of those divisions during automated analysis of the organization's regulations and the position descriptions of structural units, based on the theory of pattern recognition.

    Keywords: lemmatization, pattern recognition, machine learning algorithm, electronic document, vectorization, formalized documents

  • Ethical aspects of the use of artificial intelligence systems

    In modern society, problems related to the ethics of artificial intelligence (AI) arise ever more frequently. AI is used everywhere, and the lack of ethical standards and a code of conduct necessitates their creation to ensure the safety and comfort of users. The purpose of this work is to analyze approaches to the ethics of artificial intelligence and to identify parameters for evaluating those approaches, so as to create systems that meet ethical standards and user needs. Approaches to AI ethics are reviewed, evaluation parameters are identified, and the main characteristics of each parameter are described. The parameters described in this paper will help achieve better results when creating standards for the development of safer and more user-friendly systems.

    Keywords: Code, parameters, indicators, characteristics, ethics, artificial intelligence

  • Comparative analysis of the dependence of the effectiveness of image quality improvement approaches on format and size

    Road surface quality assessment is one of the most widespread tasks worldwide. Many systems exist to solve it, mainly ones that work with images of the roadway. They are based both on traditional methods (without machine learning) and on machine learning algorithms. There are a number of ways to increase the effectiveness of such systems, including improving image quality. However, each approach has its own characteristics; for example, some produce an improved version of the original photo faster. The analyzed image quality improvement methods are noise reduction, histogram equalization, sharpening and smoothing. The main indicator of effectiveness in this study is the average time to obtain an improved image. The source material is 10 different photos of the road surface in 5 sizes (447x447, 632x632, 775x775, 894x894, 1000x1000) in png, jpg and bmp formats. The best performance according to the proposed methodology was demonstrated by the histogram equalization approach, with sharpening showing a comparable result.

    Keywords: comparison, analysis, dependence, effectiveness, approach, quality improvement, image, photo, format, size, road surface
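The histogram equalization approach that won the comparison can be sketched in plain Python for a flat list of grayscale pixel values. The low-contrast patch below is illustrative, not one of the study's road photos (the sketch also assumes the image is not entirely one value, to keep the mapping's denominator nonzero).

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization for a flat list of grayscale values."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # stretch the CDF over the full intensity range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

dark = [50, 50, 51, 52, 52, 53]   # low-contrast patch
flat = equalize(dark)              # spread over the full 0..255 range
```

The transform is cheap (two linear passes), which is consistent with it topping a time-based effectiveness metric.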

  • Model of configuration of structural and functional characteristics of departmental information systems

    This paper considers the conditions and factors affecting the security of information systems functioning under network reconnaissance conditions. The developed model is based on the techniques that realize the dynamic change of domain names, network addresses and ports to the network devices of the information system and false network information objects functioning as part of them. The formalization of the research problem was carried out. The theoretical basis of the developed model is the theories of probability and random processes. The modeled target system is represented as a semi-Markov process identified by an oriented graph. The results of calculation of probabilistic-temporal characteristics of the target system depending on the actions of network reconnaissance are presented, which allow to determine the mode of adjustment of the developed protection measures and to evaluate the security of the target system under different conditions of its functioning.

    Keywords: departmental information system, network intelligence, structural and functional characterization, false network information object

  • Large data deduplication using databases

    Today, a huge amount of heterogeneous information passes through electronic computing systems. There is a critical need to analyze an endless stream of data with limited means, and this in turn requires structuring the information. One step in solving the data-ordering problem is deduplication. This article discusses a method of removing duplicates using databases and analyzes the results of testing with various database management systems and different sets of parameters.

    Keywords: deduplication, database, field, row, text data, artificial neural network, sets, query, software, unstructured data
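One way to delegate deduplication to a DBMS, as the article proposes, is a uniqueness constraint: the database silently drops duplicate inserts. A minimal sketch with Python's built-in sqlite3 (the article's exact schema and DBMS set are not specified here):

```python
import sqlite3

# In-memory database; a UNIQUE constraint lets the DBMS discard duplicates on insert
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (payload TEXT, UNIQUE(payload))")

rows = ["alpha", "beta", "alpha", "gamma", "beta"]
for r in rows:
    # OR IGNORE: rows violating the UNIQUE constraint are skipped, not errors
    con.execute("INSERT OR IGNORE INTO records VALUES (?)", [r])

unique = [r for (r,) in con.execute("SELECT payload FROM records ORDER BY payload")]
```

For bulk data the same effect is achieved with `SELECT DISTINCT` or `GROUP BY` over an already-loaded table; which is faster depends on the DBMS and its index parameters, which is precisely what the article's tests compare.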

  • Artificial intelligence: the danger of inflated expectations

    Currently, digitalization as a technological tool is penetrating the humanitarian sphere of knowledge, linking technocratic and humanitarian fields. An example is legal informatics, in which the conceptual apparatus of quite different, at first glance, areas of human knowledge are brought together. However, the desire to abstract (formalize) any knowledge is the most important task in the "convergence" of computer technologies and mathematical methods into a humanitarian sphere that is non-traditional for them. The paper discusses the problems generated by a superficial idea of artificial intelligence. A typical example is the attempt of some authors in jurisprudence to attribute an almost sacred meaning to computer technologies, often referred to as artificial intelligence by humanities scholars, and to endow them with legal personality.

    Keywords: artificial intelligence, deep learning, machine learning, hybrid intelligence, adaptive behavior, digital economy, digital law, legal personality of artificial intelligence

  • Implementation of a competition for regression models in assessing the amount of social and pension funding

    Social and pension provision are key processes in the activities of any state, and forecasting the corresponding expenses is among the most important problems in the economy. The task of evaluating the effectiveness of the pension fund has been solved by various methods, including regression analysis. The task is particularly difficult due to the large number of factors determining the activity of the pension fund, such as the number of recipients of old-age pensions, the number of policyholders, self-employed policyholders, recipients of benefits, insured persons and working pensioners. The main approach of the study is the implementation of a model competition. Those variants that violated the substantive meaning of the variables or did not fully reflect the behavior of the modeled process were excluded from the resulting set of alternative models. The final variant was selected using a multi-criteria selection method. It was revealed that the use of relative variables is important for high-quality modeling of the studied processes. The model shows that an increase in the ratio of the number of employers and self-employed to the number of insured persons leads to a decrease in the cost of financing social and pension provision. The model can be effectively used for short-term forecasting of the total annual volume of financing of a pension fund department under changing social and macroeconomic factors.

    Keywords: pension fund, regression model, model competition, adequacy criteria, forecasting

  • Using segment tree in PostgreSQL

    The article considers an approach to optimizing the speed of aggregate queries over a continuous range of rows of a PostgreSQL database table. A program module based on PostgreSQL Extensions is created, which builds a segment tree for a table and serves queries against it. Query speed increased by more than 80 times for a table of 100 million records compared to existing solutions.

    Keywords: PostgreSQL, segment tree, query, aggregation, optimization, PostgreSQL Extensions, asymptotics, index, build, get, insert
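The data structure behind the extension can be sketched independently of PostgreSQL. Below is a compact iterative segment tree for range-sum queries; the article's module implements the same idea inside the DBMS (in extension code, not Python), which is why aggregates over a row range take O(log n) instead of a full scan.

```python
class SegmentTree:
    """Array-backed segment tree with O(log n) range-sum queries."""

    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * self.n + list(values)
        for i in range(self.n - 1, 0, -1):       # build parents bottom-up
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):
        """Sum of values[lo:hi] (half-open interval)."""
        s, lo, hi = 0, lo + self.n, hi + self.n
        while lo < hi:
            if lo & 1:                            # lo is a right child: take it
                s += self.tree[lo]; lo += 1
            if hi & 1:                            # hi is a right bound: step left
                hi -= 1; s += self.tree[hi]
            lo //= 2; hi //= 2
        return s

st = SegmentTree([3, 1, 4, 1, 5, 9, 2, 6])
total = st.query(2, 6)    # 4 + 1 + 5 + 9
```

The same skeleton works for any associative aggregate (min, max, count) by replacing `+`.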

  • Modification of the algorithm for correcting errors that occur during the operation of the satellite authentication system

    Orthogonal frequency-division multiplexing (OFDM) has become the basis of most broadband systems. These methods have also found application in modern low-orbit satellite Internet systems (LOSIS). For example, the StarLink system uses OFDM transmission with a signal frame consisting of 52 channels. One way to increase the data rate in OFDM is to replace the Fourier transform (FT) with a faster orthogonal transform. The modified Haar wavelet transform (MWT) was chosen for this role. The Haar MWT reduces the number of arithmetic operations in the orthogonal signal transformation compared with the FT. The use of integer algebraic systems, such as Galois fields and modular residue class codes (MRCC), makes it possible to increase the speed of the computing device that performs the orthogonal transformations. Obviously, the transition to new algebraic systems should lead to changes in the structure of OFDM systems. The development of structural models of an OFDM transmission system using the Haar MWT in a Galois field and the MRCC is therefore an urgent task. The aim of the work is thus to develop structural models of wireless OFDM systems using a modified integer discrete Haar transform, which can reduce the execution time of the orthogonal signal transformation and, in turn, increase the data transfer rate in LOSIS.

    Keywords: orthogonal frequency multiplexing, modification of the Haar wavelet transform, structural models of execution of the Haar MWT, Galois field, modular residue class codes
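An integer Haar transform in the lifting (S-transform) form shows why this family is attractive for fast hardware: only additions, subtractions and shifts, and it is exactly invertible over the integers. This is a generic sketch of the classic S-transform, not the specific modification or Galois-field variant developed in the article.

```python
def haar_forward(x):
    """One level of the integer Haar (S) transform via lifting:
    detail d = a - b, approximation s = b + d // 2 (floor division).
    Exactly invertible over the integers; no multiplications needed."""
    s, d = [], []
    for a, b in zip(x[::2], x[1::2]):
        di = a - b
        d.append(di)
        s.append(b + di // 2)
    return s, d

def haar_inverse(s, d):
    """Undo one lifting level: b = s - d // 2, a = b + d."""
    x = []
    for si, di in zip(s, d):
        b = si - di // 2
        x += [b + di, b]
    return x

sig = [9, 7, 3, 5, 6, 10, 2, 6]          # illustrative integer samples
approx, detail = haar_forward(sig)
```

Applying `haar_forward` recursively to the approximation coefficients gives the full multilevel decomposition.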

  • Estimating the power consumption of wireless sensor network nodes

    The article proposes an algorithm for ensuring the minimum power consumption of end nodes in a wireless sensor network. A simulation model of the information exchange process in a wireless sensor network, developed in the Matlab/Simulink environment, is presented; it allows estimating the total power consumption of all end nodes transmitting messages over a given time interval.

    Keywords: wireless sensor network, LoRaWAN, Internet of Things, IoT, power consumption, simulation model, Simulink, signal attenuation, frame transmission

  • Analysis of floating point calculations on microcontrollers

    The article discusses methods for optimizing floating-point calculations on microcontroller devices. Hardware and software methods for accelerating calculations are considered, and the Karatsuba and Schönhage-Strassen multiplication algorithms are presented. A method for replacing floating-point calculations with integer calculations is proposed, and the use of fixed-point instead of floating-point arithmetic is described. The option of using hash memory and code optimization is also considered. The results of measurements on an AVR microcontroller are presented.

    Keywords: floating point calculations, fixed point calculations, microcontroller, AVR, ARM
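The fixed-point replacement described in the abstract can be sketched in a few lines: values are scaled by a power of two (here the Q8 format, chosen arbitrarily for illustration) so that multiplication becomes an integer multiply plus a shift, operations that are cheap on an FPU-less AVR.

```python
FRAC_BITS = 8            # Q8 format: value * 256 stored as an integer
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert a float to Q8 fixed point."""
    return round(x * SCALE)

def fixed_mul(a, b):
    """Multiply two Q8 numbers: the raw product has 2*FRAC_BITS
    fractional bits, so shift back down by FRAC_BITS."""
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(2.25)
product = fixed_mul(a, b) / SCALE   # back to float only for display: 3.375
```

Addition and subtraction need no shift at all; the trade-off is a fixed range and precision, which must be chosen per application.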

  • Preprocessing speech data to train a neural network

    This article analyzes data processing problems in training a neural network. The first stage of model training, feature extraction, is discussed in detail, focusing on the method of mel-frequency cepstral coefficients (MFCC). The spectrum of the voice signal was plotted. By multiplying the signal spectrum vectors by the window function, the signal energy falling into each analysis window was found, after which the mel-frequency cepstral coefficients were calculated. The mel scale is helpful in audio analysis tasks and is used when training neural networks on speech. The use of mel-cepstral coefficients significantly improved recognition quality because it exposed the most informative coefficients, which were then used as input to the neural network. The MFCC method made it possible to reduce the amount of input data for training, increase performance, and improve recognition accuracy.

    Keywords: machine learning, data preprocessing, audio analysis, mel-cepstral coefficients, feature extraction, voice signal spectrum, Fourier transform, Hann window, discrete cosine transform, short Fourier transform
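The mel scale underlying MFCC is a simple logarithmic warping of frequency. The sketch below shows the standard Hz-to-mel mapping and how the centre frequencies of a filter bank are placed uniformly in mel (hence nonuniformly in Hz); the filter count and frequency range here are arbitrary choices for illustration.

```python
import math

def hz_to_mel(f):
    """Standard mel-scale mapping used when placing MFCC filter banks."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Centre frequencies of a hypothetical 10-filter bank over 0 Hz .. 8 kHz,
# spaced uniformly on the mel scale
lo, hi, n = 0.0, 8000.0, 10
mels = [hz_to_mel(lo) + i * (hz_to_mel(hi) - hz_to_mel(lo)) / (n + 1)
        for i in range(1, n + 1)]
centres = [mel_to_hz(m) for m in mels]
```

Summing spectral energy in each such filter, taking logs, and applying a discrete cosine transform yields the MFCC vector the article feeds to the network.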

  • On existing methods for removing noise from an image

    This paper reviews existing classical and neural network methods for combating noise in computer vision systems. Although neural network classifiers demonstrate high accuracy, stability on noisy data has not been achieved. Methods for improving an image based on a bilateral filter, a histogram of oriented gradients, integration of filters with Retinex, a gamma-normal model, and a combination of a dark channel with various tools are considered, as well as changes to the architecture of convolutional neural networks by modifying or replacing their components, and the applicability of neural network ensembles.

    Keywords: image processing, image filtering, machine vision, pattern recognition

  • Information processing using a VGA adapter for an FPGA camera

    This article describes the first stage of research on developing an FPGA-based camera for vehicle identification tasks, which are widely used at automated weight and size control points. Since an FPGA is an alternative to conventional processors that can perform multiple tasks in parallel, an FPGA-equipped camera will be able to detect and identify vehicles at the same time. Thus, the camera will transmit not only the image but also the result of its processing to problem-oriented systems for control, decision-making and optimization of data flow processing; the server will then only need to confirm or reject the camera's results, which significantly reduces the image processing time across all automated weight and size control points. In the course of development, a simple VGA port board, a program that displays a static image on a monitor at 640x480 resolution, and a pixel counter program were implemented. The EP4CE6E22C8 is used as the FPGA; its capacity is more than sufficient to achieve the result.

    Keywords: system analysis methods, optimization, FPGA, VGA adapter, Verilog, recognition camera, board design, information processing, statistics

  • Investigation of the corrective capabilities of the noise-immune code of the system of residual classes

    This article explores the LTE-R group of standards, in which the OFDM system has a special place, and considers the possibility of developing new error-correcting coding methods for testing high-rate data transmission. The problems associated with transmitting large amounts of information at high train speeds and in a changing environment are considered, as well as the features of interference associated with railway infrastructure. Modular codes are used as the noise-immune codes; unlike positional BCH codes, they are arithmetic.

    Keywords: LTE-R standard, OFDM system, modular codes, noise immunity, error hit interval, BCH codes, error packet, error rate
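The arithmetic nature of modular (residue) codes comes from representing a number by its residues modulo pairwise coprime bases, so addition and multiplication act componentwise. A minimal encode/decode sketch via the Chinese Remainder Theorem (the moduli here are a toy choice, not those of the article; `pow(x, -1, m)` for the modular inverse needs Python 3.8+):

```python
from math import prod

MODULI = (3, 5, 7)                 # pairwise coprime; dynamic range 0..104

def to_rns(x):
    """Encode an integer as its residues modulo each base element."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Chinese Remainder Theorem reconstruction of the integer."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow with -1: modular inverse
    return x % M

code = to_rns(52)                  # residues (1, 2, 3)
```

Error correction in such codes works by adding redundant moduli: an inconsistent reconstruction pinpoints which residue channel was corrupted.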

  • Optimization based on mixing methods when solving multicriteria selection problems

    The hierarchy analysis method has long been described, studied and applied in practice. In order to reduce the subjectivity inherent in many decisions, we consider a variant of the hierarchy analysis method in which the assessment is carried out not by a single decision maker but by a group of independent experts. Thus, we propose a method for solving multicriteria optimization problems based on combining two methods: hierarchy analysis and expert assessment.

    Keywords: Optimality criteria, alternative, decision maker, optimization, method of expert assessments, method of hierarchy analysis, competence of experts, consistency of expert opinions

  • Improving efficiency of Dijkstra's algorithm using parallel computing technologies with OpenMP library

    The purpose of this study is to improve the efficiency of Dijkstra's algorithm by using the shared-memory model with the OpenMP library and parallel execution in the implementation of the algorithm. Dijkstra's algorithm is commonly used to find the shortest path between two nodes in a graph. However, its running time grows with the size of the graph, so parallel execution is a good way to address the time complexity problem. In this work, we propose a parallel computing method to improve the efficiency of Dijkstra's algorithm on large graphs. The method divides the array of paths in Dijkstra's algorithm among a specified number of processors for parallel execution. We provide an implementation of the parallelized algorithm and assess its performance on real datasets with different numbers of nodes. Our results show that the parallelized algorithm can significantly speed up computation compared to the sequential version, reducing execution time and improving CPU utilization, making it a useful choice for finding shortest paths in large graphs.

    Keywords: Dijkstra algorithm, graph, shortest paths, parallel computing, shared memory model, OpenMP library
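The sequential baseline being parallelized can be sketched compactly. The article's implementation uses C/C++ with OpenMP; this is a stdlib Python version of the same algorithm, shown only to make the structure concrete (the relaxation loop over neighbours is the part the OpenMP approach distributes across threads). The graph below is an arbitrary example.

```python
import heapq

def dijkstra(graph, src):
    """Sequential Dijkstra with a binary heap; graph: node -> [(nbr, weight)]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):     # relaxation step (parallelizable)
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("c", 10), ("d", 15)],
    "c": [("d", 11), ("f", 2)],
    "d": [("e", 6)],
    "f": [("e", 9)],
}
shortest = dijkstra(g, "a")["e"]   # a -> c -> f -> e: 9 + 2 + 9 = 20
```

In the OpenMP setting, each thread scans its slice of the distance array for the minimum and relaxes its slice of the edges, with a reduction to pick the global minimum per iteration.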

  • Modelling construction time by discrete Markov chains

    In practice, construction times are often estimated using deterministic methods, for example from a network schedule of the construction plan with deterministic durations for specific works. This approach does not reflect the probabilistic nature of risks and leads to a systematic underestimation of the time and, as a consequence, the cost of construction. The research proposes using a discrete inhomogeneous Markov chain to assess the risks of failing to complete construction on time. The states of the Markov process correspond to the stages of construction of the object. The probabilities of transitions from state to state are estimated from empirical data on previously implemented projects and/or expertly, taking into account the risks characterising construction conditions over time. The dynamic model of construction plan realisation makes it possible to determine such characteristics as: the probability of realising the construction plan within the established terms; the probability that the object will ever be completed; the time to reach the completion stage with a given degree of reliability; and the unconditional probabilities of the system states (construction stages) in a given period relative to the start of construction. The model has been tested. It allows estimating the completion time and assessing the risks of missing established deadlines under the planned construction conditions, taking the dynamics of risks into account.

    Keywords: construction time, risk assessment, markov model, discrete Markov chain, inhomogeneous random process
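The mechanics of such a chain can be illustrated with a small homogeneous example: stages as states, completion as an absorbing state, and repeated multiplication of the state distribution by the transition matrix. The stages and probabilities below are invented for illustration, and the article's model is inhomogeneous (a different matrix per period), which this sketch would accommodate by passing a different `P` at each step.

```python
# States: 0 = foundations, 1 = framing, 2 = finishing, 3 = completed (absorbing).
# Illustrative transition probabilities per planning period, not data from the study.
P = [
    [0.4, 0.6, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.3, 0.7],
    [0.0, 0.0, 0.0, 1.0],
]

def step(dist, P):
    """One period of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [1.0, 0.0, 0.0, 0.0]        # construction starts at stage 0
for _ in range(6):                  # six planning periods
    dist = step(dist, P)

p_done = dist[3]   # probability the project is complete within six periods
```

Running the loop for increasing horizons traces out the on-time completion probability curve the article uses for risk assessment.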

  • Modeling and program development for an intelligent system to support personnel management decisions in the electric power industry

    In the modern economy, where optimal personnel decisions are very important for any organization, especially in the dynamically developing electric power industry, developing an intelligent system to support personnel decisions in this industry is a relevant task. This paper analyzes existing tools for selecting candidates for vacant positions, including managerial positions and vacancies in the electric power industry. Based on this analysis and earlier research, a competency profile of electric power industry managers is formed. The software product was developed using various programming languages in the Visual Studio environment. The program implements a dynamic and interactive managerial decision-making process, in which users face different scenarios that assess the formed competencies, with a detailed report on their skills as output, giving employers an objective assessment of a candidate's potential for a vacant managerial position.

    Keywords: electric power industry, competences, personnel, optimal personnel management decisions, intellectual system, personnel management, competence assessment, software product

  • Simulation of the design activity diversification of innovative enterprise

    It is estimated that more than 9% of the Russian population is hearing impaired, and the development of dactyl (fingerspelling) recognition systems is becoming critical to facilitating their social communication. The introduction of such systems will improve communication for these people, providing them with equal opportunities and a better quality of life. The research focused on studying the characters of the dactyl alphabet, developing a labeled dataset and training a neural network for gesture recognition. The aim of the work is to create tools capable of recognizing the signs of the Russian dactyl alphabet. Computer vision methods were applied within this research. Gesture recognition consists of the following steps: first, the camera captures the video stream and the hand images are preprocessed; then a pre-trained neural network analyzes these images and extracts important features; next, gesture classification takes place, where the model determines whether the sign corresponds to a certain letter of the alphabet; finally, the recognition result is interpreted as the symbol associated with the gesture. During the research, the signs of the dactyl alphabet and the interaction features of people with auditory impairment were studied, and a dataset of more than 25,000 training samples was created. A model based on the most appropriate architecture for the task was developed and trained, then tested and optimized to improve its accuracy. The results of this work can be used to create devices that compensate for poor hearing, helping people with hearing impairment feel comfortable in society.

    Keywords: computer vision, sign recognition, dactyl classification, transfer learning, Russian dactyl alphabet, deep learning, computerization, software, assistive technology, convolutional neural networks