Issue 4 (2024)

THE MAIN FACTORS THAT INCREASE THE EFFECTIVENESS OF THE APPLICATION OF MODERN STANDARDS IN THE FIELD OF INFORMATION TECHNOLOGY IN THE COURSE OF DIGITAL TRANSFORMATION
Annotation: Information technology standards are an integral part of the digital transformation process, but their effective use in the development of specific products requires solving a whole range of problems, each of which calls for serious scientific study. The article lists these problems and proposes a reference model that can serve as the basis for solving the optimization problem of determining a rational ratio of innovative and standard solutions in the development of complex systems, taking life-cycle stages into account.
Page numbers: 4-8.
ANALYSIS OF POSSIBLE THREATS IN THE FUNCTIONING OF THE KNOWLEDGE PROCESSING CENTER
Annotation: One of the urgent tasks at present is the formation of an effective management system for science, technology, and production that ensures a unified scientific and technological space focused on solving government tasks and meeting the needs of the economy and society. The knowledge processing center is precisely such a system: it provides a unified information, scientific, and technological space, making it possible to collect, create, and store data in a single information resource and to use them effectively to solve applied tasks. In the course of its functioning, a knowledge processing center may be exposed to diverse threats affecting its performance. One of the main processes in its operation is decision-making, since the achievement of the required results depends on the decisions taken; in the face of real and potential threats, decision-making in the knowledge processing center is therefore of priority importance. This article analyzes possible threats that arise when making decisions over the life cycle of a knowledge processing center and suggests measures to counter them. Organizational support processes (the knowledge management process and the quality management process) and technical processes (the operation process and the maintenance process) were selected for the analysis. For each of the selected life-cycle processes, the composition of the decisions taken is determined; a list of potential threats is then formed for each decision, and recommendations are given for minimizing the risk of disrupting the functioning of the center. The data obtained will make it possible to justify decisions that reduce and keep risks within acceptable limits for threat scenarios arising in the decision-making process during the operation of the knowledge processing center.
Page numbers: 9-23.
ON THE ISSUE OF EVALUATING THE ACCURACY OF METHODS AND TEST RESULTS OF OIL AND PETROLEUM PRODUCTS
Annotation: To obtain reliable test results for oil and petroleum products using instrumental control methods, numerous models are used to evaluate quality indicators. An analysis of the precision indicators (repeatability and reproducibility) used in instrumental methods of testing petroleum products is carried out. It is shown that some standard test methods have a single precision value over the entire measurement range, while others have different precision depending on the measurement range; in the latter case, precision indicators can be expressed not only as a numerical value but also as a functional dependence on the value of the quality indicator obtained during testing. On this basis, test methods for oil and petroleum products are classified into four types according to their precision indicators. Theoretical and standardized options for the statistical processing of test results for oil and petroleum products are presented.
Page numbers: 24-33.
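
To make the distinction drawn in the abstract above concrete, the two kinds of precision indicators (a constant limit versus a functional dependence on the measured value) can be sketched in a few lines of Python. The function names and numeric coefficients below are illustrative assumptions in the spirit of ISO 4259-style precision data, not values taken from the article.

    # Sketch of precision checks for duplicate test results. The numeric
    # coefficients are illustrative assumptions only.

    def repeatability_limit(x_mean: float) -> float:
        """Repeatability limit; here assumed proportional to the measured
        value (precision as a function of the quality indicator)."""
        return 0.02 * x_mean  # assumed: 2% of the mean result

    def reproducibility_limit(x_mean: float) -> float:
        """Reproducibility limit; here assumed constant over the whole
        measurement range (a single numerical precision value)."""
        return 0.8  # assumed constant, in the units of the indicator

    def results_agree_within_r(x1: float, x2: float) -> bool:
        """Two results from one operator and apparatus are acceptable if
        their difference does not exceed the repeatability limit."""
        x_mean = (x1 + x2) / 2.0
        return abs(x1 - x2) <= repeatability_limit(x_mean)

    # Example: two determinations of a quality indicator
    print(results_agree_within_r(32.10, 32.55))  # True: 0.45 <= 2% of mean
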
Annotation: Machine learning technologies and various code-generation tools have had a significant impact on software development in recent years. Although most existing solutions are not built specifically for code generation, programmers apply them to a variety of tasks. Few of the existing AI solutions work well with less common languages, such as Kotlin or Swift, that are used in mobile development; as a result, existing large language models are rarely adopted in third-party software for mobile developers, although this would benefit the industry. The goal of this work is to develop a service that uses a large language model to provide its users, mobile developers, with a tool for efficient programming in the aforementioned languages. The developed service utilizes an existing language model fine-tuned on data available online in open-source repositories and on manually collected data. The developed software can perform various programming tasks specific to the mobile development domain: writing code for screen layouts, UI (User Interface) components, business logic, and unit tests. The software is evaluated against the HumanEval benchmark and its variations, as well as a custom benchmark that gives an understanding of the quality of the generated code. This article is the result of a research project implemented within the framework of the fundamental research program of the National Research University Higher School of Economics (HSE University).
Page numbers: 34-41.
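
The abstract above mentions evaluation against HumanEval and its variations. Such benchmarks are conventionally scored with the unbiased pass@k estimator of Chen et al. (2021), sketched below; the article's exact scoring procedure may differ.

    # Standard unbiased pass@k estimator used with HumanEval-style
    # benchmarks: the probability that at least one of k samples passes,
    # given that c of n generated samples passed the task's unit tests.
    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        if n - c < k:
            return 1.0  # every size-k subset contains a passing sample
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Example: 200 samples per task, 37 passed the tests
    print(pass_at_k(200, 37, 1))   # 0.185
    print(pass_at_k(200, 37, 10))  # ~0.88
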
THE PROCESS OF TRANSLATION OF REGULAR EXPRESSIONS OF DIFFERENT DIALECTS WITH OPTIMIZATION OF INTERMEDIATE STATES
Annotation: The article considers aspects of translating regular expressions from a source dialect to a target dialect as a way to solve the problem of matching a string against a pattern. The purpose of this work is to develop and justify the architecture of a translator of regular expressions between different dialects, taking into account the optimization of intermediate representations during translation. A classification of regular expression dialects and a classification of software implementations of the finite automata engines described by regular expressions are presented. Recommendations are formulated for choosing a specific implementation of regular expressions for text processing problems. An algorithm for optimizing regular expressions using population-based algorithms is described. The results of an experiment on optimizing intermediate representations of validation regular expressions using the differential evolution algorithm and the particle swarm algorithm are presented.
Page numbers: 42-58.
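
As a small illustration of the dialect-translation problem described above, the sketch below rewrites POSIX bracket classes into Python/PCRE-style character ranges. It implements a single, narrow rewrite rule with an assumed mapping table; the article's translator is far more general and works through optimized intermediate representations.

    # One step of regex dialect translation: POSIX bracket expressions
    # rewritten as Python/PCRE-style ranges. The mapping is a small
    # illustrative subset, not a complete dialect table.
    import re

    POSIX_TO_PCRE = {
        "[:digit:]": "0-9",
        "[:alpha:]": "a-zA-Z",
        "[:alnum:]": "a-zA-Z0-9",
        "[:space:]": r" \t\r\n\f\v",
    }

    def translate_posix_classes(pattern: str) -> str:
        """Replace POSIX character classes inside bracket expressions
        with their PCRE-style equivalents."""
        for posix, pcre in POSIX_TO_PCRE.items():
            pattern = pattern.replace(posix, pcre)
        return pattern

    source = "[[:alpha:]][[:alnum:]]*"        # POSIX-flavoured identifier
    target = translate_posix_classes(source)  # "[a-zA-Z][a-zA-Z0-9]*"
    print(re.fullmatch(target, "x42") is not None)  # True
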
ON INCREASING THE TECHNOLOGICAL READINESS OF THE DIGITAL ECONOMY WITH THE USE OF ARTIFICIAL INTELLIGENCE ELEMENTS
Annotation: The development of production facilities and entire industries with high digital maturity depends significantly on the availability of computing infrastructure that allows the management of Internet of Things objects, digital twins, and other digital entities. For a number of reasons, the possibility of extensively expanding access to resources is locally limited today. This acutely raises the question of the efficiency of using the existing infrastructure, especially for computationally intensive tasks that additionally carry requirements or restrictions on the speed, timeliness, and reliability of calculations. The sufficiency of means to ensure trust in data and derived information under conditions of mass access to digital platforms, services, and means of communication is also particularly relevant today. Estimates of the energy consumption of the digital computing industry of the Russian Federation are given, as well as possible ways to increase the impact of the existing infrastructure. A complex problem is formulated: to increase not only the energy efficiency of the entire economic system, which will ensure balanced development of both the social sphere (infrastructure, convenience, accessibility of services) and the real sector and industries, but also the energy efficiency of computing, network communication, and other types of digital infrastructure, which determines the possibilities of advanced development in key sectors of the economy. The paper considers ways of rationally managing computing infrastructure for a class of tasks with restrictions, which makes it possible to form the basis for increasing the output for a number of available classes of computing tasks through the use of a certain discrete class of intelligent agents. The development of such methods allows a systematic direction to be formulated for increasing the return on digital methods both in individual industries and in the economy of the state as a whole, in the context of forming technological sovereignty for the foreseeable future.
Page numbers: 59-67.
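
To give the abstract's "class of tasks with restrictions" a concrete reading, the following sketch applies a textbook earliest-deadline-first heuristic to assign jobs to a fixed machine pool. This is an illustrative policy only, not the intelligent-agent method studied in the article.

    # Earliest-deadline-first assignment of jobs with timing restrictions
    # to a fixed pool of identical machines: one simple example of
    # "rational management" of limited computing infrastructure.
    import heapq

    def schedule_edf(jobs, n_machines):
        """jobs: list of (duration, deadline). Returns the jobs that can
        be finished by their deadlines on n_machines machines."""
        completed = []
        machines = [0.0] * n_machines          # next-free time per machine
        heapq.heapify(machines)
        for duration, deadline in sorted(jobs, key=lambda j: j[1]):
            free_at = heapq.heappop(machines)  # earliest available machine
            finish = free_at + duration
            if finish <= deadline:             # accept only if deadline met
                completed.append((duration, deadline))
                heapq.heappush(machines, finish)
            else:
                heapq.heappush(machines, free_at)  # reject; machine stays free
        return completed

    jobs = [(3, 4), (2, 5), (4, 10), (1, 3), (5, 9)]
    print(len(schedule_edf(jobs, n_machines=2)))  # jobs meeting deadlines: 5
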
INVESTIGATION OF THE CHARACTERISTICS OF PIPELINED STRUCTURES
Annotation: The article discusses the identification of dependencies between the main characteristics of specialized computing devices with a pipeline architecture, the chosen solution to the main problem, and the limitations of the hardware platform. It presents a model of a parameterized pipelined computing circuit that allows flexible configuration of the computing system to meet the specific requirements of a task. A set of metrics has been compiled to evaluate the circuit design, including indicators such as Worst Negative Slack, total power, static power, dynamic power, and the number of hardware resources used in the design. The results of experiments identifying dependencies based on these metrics are summarized in graphs that clearly demonstrate how the computing device's characteristics change with different task implementations and with changes in hardware platform parameters. The article also discusses ways to optimize the pipeline architecture in order to achieve better performance and efficiency, with special attention paid to the impact of different architectural solutions on overall system performance. The data presented can be useful for developers designing and optimizing specialized computing systems. The final part of the article includes conclusions and recommendations for choosing optimal architectural solutions for different types of tasks, and discusses the application of the parameterized pipeline computing model in real-world scenarios, describing specific cases where it has shown good results and providing recommendations for adapting it. This makes the research particularly valuable for engineers and scientists seeking to improve the efficiency and performance of specialized computing systems for a variety of applications.
Page numbers: 68-82.
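
For readers unfamiliar with the metrics named above, the sketch below shows how per-stage delays bound the achievable clock of a linear pipeline and what Worst Negative Slack (WNS) means for a target clock period. The numbers are invented for illustration; the article derives its metrics from actual synthesis and implementation reports.

    # Back-of-the-envelope characteristics of a linear pipeline:
    # the slowest stage bounds the clock, and WNS is the margin
    # (negative means the target period is not met).

    def pipeline_metrics(stage_delays_ns, reg_overhead_ns, target_period_ns):
        """Return achievable period, fill latency, throughput, and WNS."""
        critical = max(d + reg_overhead_ns for d in stage_delays_ns)
        wns = target_period_ns - critical        # negative => timing fails
        period = max(critical, target_period_ns)
        latency = len(stage_delays_ns) * period  # time to first result
        throughput = 1.0 / period                # results per ns afterwards
        return period, latency, throughput, wns

    period, latency, throughput, wns = pipeline_metrics(
        stage_delays_ns=[2.1, 3.4, 2.8], reg_overhead_ns=0.3,
        target_period_ns=3.5)
    print(round(period, 2), round(latency, 2), round(throughput, 3),
          round(wns, 2))
    # critical stage: 3.4 + 0.3 = 3.7 ns > 3.5 ns target, so WNS = -0.2 ns
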
GPU-BASED ALGORITHM OF TWO-DIMENSIONAL CONSTRUCTIVE GEOMETRY
Annotation: The visualization algorithms used in electronic design automation software do not offer sufficient performance for modern large-scale tasks such as chiplet development. The aim of this work is to develop a GPU-based visualization algorithm with similar functionality and with performance sufficient for complex current and future tasks. The algorithm is based on the idea of processing geometric primitives in raster form on the GPU, so that the resulting geometry never has to be computed in vector form on the CPU. The algorithm is required to form the outline of the resulting geometry, support parametric fills, and handle transparency. The developed algorithm successfully handles tasks with a volume of 1.6 million elements and retains a performance reserve and the possibility of further improvement.
Page numbers: 83-97.
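
The core idea of the abstract above, combining primitives in raster rather than vector form, can be shown in a few lines. The sketch below uses NumPy on the CPU purely to keep the illustration short and self-contained, whereas the article's algorithm runs on the GPU and also handles parametric fills and transparency.

    # Raster approach to 2-D constructive geometry: primitives are
    # rendered to boolean masks and combined with per-pixel boolean
    # operations, so no vector result is ever computed.
    import numpy as np

    H = W = 256
    yy, xx = np.mgrid[0:H, 0:W]

    def disk(cx, cy, r):
        return (xx - cx) ** 2 + (yy - cy) ** 2 <= r * r

    def rect(x0, y0, x1, y1):
        return (xx >= x0) & (xx < x1) & (yy >= y0) & (yy < y1)

    # Constructive geometry as boolean algebra on rasters:
    shape = (disk(100, 128, 60) | rect(120, 90, 220, 170)) & ~disk(160, 128, 25)

    # Outline: shape pixels that touch at least one background pixel
    # (erosion with a 4-neighbourhood, then difference).
    interior = shape.copy()
    interior[1:-1, 1:-1] = (shape[1:-1, 1:-1]
                            & shape[:-2, 1:-1] & shape[2:, 1:-1]
                            & shape[1:-1, :-2] & shape[1:-1, 2:])
    outline = shape & ~interior
    print(shape.sum(), outline.sum())  # filled area and outline, in pixels
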