Issue 4 (2025)
System analysis, management, and information processing
Annotation: When solving systems engineering problems, systems with different functional purposes are required to ensure interoperability. According to GOST R 55062-2021, interoperability is the ability of two or more systems or components to exchange the necessary information and to use the information obtained as a result of such exchange. The purpose of this work is to provide an overview of existing models and methods of system analysis of interoperability in order to identify significant threats and conditions when solving practical systems engineering problems. The review includes several illustrative examples.
Page numbers: 5-23.
Annotation: Building a multivariable control system for a manipulation robot based on solving its inverse dynamics problem is an effective approach to eliminating the interactions between degrees of freedom. One of the main factors limiting its application is the high computational complexity associated with the significant degree of nonlinearity of the manipulator's dynamic model. In this regard, it is important to consider intelligent knowledge processing approaches capable of approximating nonlinear functions of many variables. Problems of this kind are effectively solved by neuro-fuzzy systems, of which adaptive neuro-fuzzy inference systems (ANFIS) are currently the most widely used. This paper is devoted to the development of a multivariable control system for
a manipulation robot based on these systems. A three-link manipulator with PUMA (Programmable Universal
Manipulation Arm) kinematics is considered as the control object, since this kinematics has become widespread
in industrial robotics. Ten ANFIS structures, which together approximate the solution to the inverse dynamics
problem for this manipulator, are trained using analytical data. This paper presents two types of multivariable
control systems: one with direct dynamic compensation for the mutual influence of degrees of freedom and one
with feedback-based linearization of the plant. The approximation of the solution to the inverse dynamics
problem required in both systems is provided by the implemented ANFIS structures. The developed neuro-fuzzy
control systems are compared with a system based on proportional-integral-derivative (PID) controllers.
Experimental studies to estimate the tracking error during the execution of a given trajectory were
conducted using a mathematical modeling environment. According to the obtained results, direct dynamic
compensation enabled a tenfold increase in accuracy, while feedback-based linearization enabled a thousandfold
increase in accuracy compared to a control system based on PID controllers.
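As a rough illustration (not taken from the paper) of the computed-torque idea behind both schemes, the sketch below shows an outer feedback loop whose acceleration command is passed to a learned inverse-dynamics approximator standing in for the trained ANFIS structures; the model inside `learned_inverse_dynamics` is a hypothetical placeholder.

```python
# Minimal computed-torque control sketch; the inverse-dynamics approximator is a
# crude placeholder for a learned (e.g. ANFIS-based) model and ignores Coriolis terms.
import numpy as np

def learned_inverse_dynamics(q, dq, a_ref):
    """Hypothetical stand-in for a trained approximator: maps joint state and a
    commanded acceleration to joint torques for a 3-link arm."""
    M_hat = np.diag([2.0, 1.5, 0.8])                   # assumed inertia estimates
    g_hat = np.array([0.0, 9.81 * 1.5, 9.81 * 0.4])    # assumed gravity torques
    return M_hat @ a_ref + g_hat

def control_step(q, dq, q_des, dq_des, ddq_des, Kp=100.0, Kd=20.0):
    # Outer loop: feedforward acceleration plus PD feedback on the tracking error.
    a_ref = ddq_des + Kd * (dq_des - dq) + Kp * (q_des - q)
    return learned_inverse_dynamics(q, dq, a_ref)
```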
Keywords: manipulation robot, feedback linearization, direct dynamic compensation, multivariable control systems,
intelligent control, neuro-fuzzy systems, ANFIS.
Page numbers: 38-49.
Annotation: An analysis of the current state of regulation and standardization of artificial intelligence technologies in the road transport complex of the Russian Federation has been conducted. The study reveals a significant gap between a formally developed regulatory framework, which includes strategic documents, federal legislation, and approved national standards, and its limited practical application in industry processes. Based on comprehensive monitoring, encompassing scientific communication, public procurement, expert assessments, the regulatory landscape, patent activity, and public discourse, key systemic problems have been identified.
A roadmap for the phased integration of artificial intelligence into the road transport complex of the Russian Federation has been formulated to overcome these challenges. In the short term, the key stages of the roadmap's implementation are defined as the creation of a testing laboratory for the validation and certification of AI technologies and in-depth standardization in two areas: intelligent road infrastructure and motor vehicles. In the
medium term, the priority is the formation of legislative foundations for liability concerning decisions made by
AI systems, which requires the implementation of models of shared liability and mandatory algorithm audits. In
the long term, the strategic goal is the establishment of an industry competence center to consolidate expertise,
harmonize standards within the Eurasian Economic Union, and ensure the interoperability and sustainable
development of intelligent transport. The proposed approach aims to transform the fragmented model of AI
regulation into an integrated system. Its foundation is the synergy of three key elements: technical standards
ensuring interoperability and safety; legal frameworks distributing liability; and an expert platform for
methodological support. The integration of these components forms a sustainable system that ensures not only
the safe introduction of innovations but also the long-term competitiveness of the industry through balanced and
proactive regulation.
Keywords: artificial intelligence technologies, road transport complex of the Russian Federation, regulatory
framework, standards, industry competence center, intelligent transport.
Page numbers: 50-57.
Annotation: The article presents a systematic analysis of how the creative process of a monumental artist is transformed when artificial intelligence (AI) technologies are integrated into professional practice. The purpose of the study was to build and substantiate a model of effective interaction between the artist and AI at all stages of professional activity, from the formation of a concept to its material embodiment. Using the methods of system analysis, the creative process was decomposed into seven interconnected stages, and the functions, contributions, and limitations of AI were defined for each stage. The result is a model that includes a graphical flowchart, a table of functional distribution, and a structural "inputs-process-outputs-criteria" diagram. The scientific novelty lies in the systematic description of the monumentalist's creative process as a human-machine system and in identifying areas of productive synergistic interaction. AI is shown to be effective in analytical and preparatory tasks (reducing time, expanding the visual range, and reducing cognitive load), but it does not replace creative intuition, artistic vision, or skill.
Keywords: system analysis, monumental art, creative process, artificial intelligence, digital design
Page numbers: 146-154.
Computing systems and elements; Systems, networks, and telecommunications devices
Annotation: Modern telecommunication systems require high-speed, energy-efficient optical switches to process signals under dynamically changing loads. Existing solutions such as mechanical, electro-optical, acousto-optical, and thermo-optical switches have performance and control limitations, making them unsuitable for optical packet switching and 5G/6G networks. Objective: to develop a fast non-relational fiber-optic switching device based on a dual-cavity Fabry-Perot interferometer with a comb-like structure of the output mirror, providing high switching speed and the ability to be controlled directly by the optical signal. Methods used: the paper applies computer modeling (using the HFSS package) to optimize the design of the comb mirror and to analyze the interference pattern in the mixer of the dual-cavity interferometer. Analytical approaches are also used to calculate device parameters, including the refractive index gradient and the sharpness of the interference pattern. Novelty: the proposed design of a dual-cavity interferometer with a comb-like mirror structure that provides multi-port signal switching; the use of a gradient mixer to compensate for distortion of the interference pattern; the implementation of non-relational control, in which control information is carried by the optical signal itself. Results: the developed device demonstrates the possibility of switching optical signals with switching times down to fractions of a nanosecond, which meets the requirements of modern high-speed networks. The simulation confirmed the effectiveness of a stepped, beveled comb structure for separating signals with different wavelengths. Practical significance: the device can be used in data centers, 5G/6G networks, and radio-over-fiber (RoF) systems for controlling antenna arrays. The implementation of the technology will reduce power consumption and increase data processing speed, which is especially important for optical packet switching and dynamic routing tasks.
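For orientation only (standard textbook relations, not results from the paper), the transmission of a single Fabry-Perot cavity and the finesse characterizing the sharpness of its interference pattern can be written as

$$T(\delta) = \frac{1}{1 + F\,\sin^2(\delta/2)}, \qquad F = \frac{4R}{(1-R)^2}, \qquad \mathcal{F} = \frac{\pi\sqrt{R}}{1-R}, \qquad \delta = \frac{4\pi n L}{\lambda},$$

where $R$ is the mirror reflectance, $n$ and $L$ are the refractive index and length of the cavity, and $\lambda$ is the wavelength.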
Keywords: fiber optic packet switching, dual-cavity Fabry-Perot interferometer, optical mixer, refractive index gradient,
fractional lambda switching
Page numbers: 24-37.
Annotation: The article presents a comprehensive analysis of current approaches to the testing and calibration of microelectromechanical (MEMS) accelerometers and gyroscopes within inertial navigation systems (INS). Existing international and Russian standards (ISO 16063, IEEE 1293, IEEE 2700, IEC 60068, GOST R 8.820–2013, GOST R 52931–2008) are examined, with their applicability and limitations in relation to modern MEMS sensors identified. A classification of MEMS sensor errors is systematized, including bias, scale factor error, axis misalignment, temperature effects, and noise characteristics (ARW/VRW, bias instability, and others). The role of methods such as Allan variance and PSD in identifying noise components is highlighted. Key challenges are analyzed, including the fragmentation of existing methodologies among manufacturers and laboratories, the shortage of specialized metrological infrastructure (six-position stands, multi-axis turntables, thermal chambers), the lack of comparability of results due to differences in signal processing approaches, and the neglect of cross-effects when calibrating accelerometers and gyroscopes separately. A comparison with alternative technologies (FOG, RLG) is presented, emphasizing the competitive advantages of MEMS in terms of
cost, size, and energy efficiency, despite their limitations in accuracy. A multi-level standardization system is proposed, encompassing unified static and dynamic test scenarios, harmonized error models, a machine-readable reporting format (JSON/XML), and a set of unified metrological procedures (Allan deviation, PSD, uncertainty budgets in accordance with GUM). Methodologies for static and dynamic testing, joint INS calibration, and examples of required sensor data sheet (passport) fields are described.
In conclusion, the necessity of developing a national standard harmonized with international documents is substantiated. Such a standard would ensure reproducibility of testing, comparability of results, and increased confidence in domestic INS solutions on the global navigation technology market.
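As a hedged illustration of the noise-identification step the survey highlights (not code from the article), the sketch below computes the overlapping Allan deviation of a recorded rate signal; the sampling rate and units are assumptions of the example.

```python
# Overlapping Allan deviation of a gyro rate record; a minimal sketch, not a
# metrologically complete procedure (no confidence intervals, no dead-time handling).
import numpy as np

def allan_deviation(rate, fs, taus):
    """rate: 1-D array of rate samples, fs: sample rate in Hz,
    taus: iterable of averaging times in seconds."""
    theta = np.cumsum(rate) / fs                       # integrate rate to angle
    n = len(theta)
    adev = []
    for tau in taus:
        m = int(round(tau * fs))                       # samples per averaging window
        if m < 1 or 2 * m >= n:
            adev.append(np.nan)
            continue
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d ** 2) / (2.0 * tau ** 2 * (n - 2 * m))
        adev.append(np.sqrt(avar))
    return np.array(adev)
```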
Page numbers: 138-145.
Computing systems and elements; System programming
Annotation: The paper considers a semantically oriented method for forming a targeted set of test inputs for mutation-based testing (fuzz testing) of syntactic analyzers (parsers) for structured formats. Mutation-based testing is understood as automated generation and systematic modification of input data in order to provoke failures (crashes) and reveal defects in the program under test. A syntactic analyzer (parser) is treated as a component that parses an input sequence according to the rules of a formal grammar and constructs a structured representation of the input. The approach is based on a formal representation of the analyzed program as an annotated control flow graph, where each conditional statement is associated with a logical predicate. These predicates are solved using SAT/SMT solvers, which makes it possible to generate input data targeted at program branches that are rarely reached. This mechanism increases the probability of revealing errors related to boundary conditions, deep nesting, and complex logical dependencies. The method includes a quantitative evaluation of branch coverage and residual risk using the Good–Turing method,
providing a formalized criterion for the completion of a test campaign. The practical applicability of the approach is demonstrated on a set of parsers (cJSON, RapidJSON, tinyxml2, yaml-cpp) using the American Fuzzy Lop++ (AFL++) tooling. Under identical compilation conditions and instrumentation parameters, a stable increase in the share of covered branches and in the number of unique transitions by 8–10% was observed compared to the baseline configuration, along with faster reaching of rare branches. For additional programs processing structured data, the increase was about 11–13%, which confirms the transferability of the method. Run-to-run variability, the impact of input complexity, and solver limitations were taken into account, which improves the reliability of the conclusions. It is shown that the residual risk estimate quantitatively describes the probability of discovering new branches at later stages of testing. In conclusion, it is substantiated that incorporating semantic information about the program structure when forming the test set increases the effectiveness of fuzz testing and is recommended for parsers of deeply nested and grammatically rich formats.
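The residual-risk criterion mentioned above can be illustrated with a small sketch (an illustration of the Good–Turing missing-mass estimate, not the authors' implementation); the hit-count format is a hypothetical simplification.

```python
# Good-Turing estimate of residual risk: the probability that the next input
# exercises a branch not yet observed, estimated from per-branch hit counts.
from collections import Counter

def good_turing_unseen_mass(branch_hits):
    """branch_hits: mapping branch_id -> number of executions that reached it."""
    count_of_counts = Counter(branch_hits.values())
    n1 = count_of_counts.get(1, 0)             # branches observed exactly once
    total = sum(branch_hits.values())          # total branch observations
    return n1 / total if total else 1.0

# Example: three branches hit many times, two hit once -> small but nonzero residual risk.
print(good_turing_unseen_mass({"b1": 120, "b2": 87, "b3": 40, "b4": 1, "b5": 1}))
```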
Keywords: fuzz testing, control flow graph (CFG), SAT/SMT solver, code coverage, residual risk, semantic seed corpus, parsers of structured formats.
Page numbers: 58-69.
System programming
Annotation: In virtual reality (VR) applications, some users experience cybersickness. Even for users who tolerate immersion in VR easily, certain combinations of movement types cause discomfort. When developing software for VR applications, it is therefore necessary to make a reasoned choice of the movement combinations that cause minimal discomfort. Since different user groups have different levels of training and tolerance (for example, pilots experience practically no cybersickness, esports players have a high tolerance, and students and users of VR simulators have varying levels of resistance to cybersickness), it is relevant to develop tools for analyzing cybersickness for a specific group of users. Software has been developed that includes various movement combinations used in VR games. A study was conducted on two samples of users (students). The study identified the most preferred combinations of movements and their technical characteristics, which can be recommended when developing VR games or simulators for students.
Keywords: cybersickness, virtual reality, VR, VR tolerance level, VR experiment, VR sickness, motion sickness.
Page numbers: 70-80.
Annotation: This paper proposes a real‑valued single‑objective optimization algorithm that combines the potential of quantum‑inspired computation with a context‑aware self‑tuning mechanism. The core of the new algorithm is a modified quantum‑inspired genetic algorithm employing multi‑level quantum systems and a physically grounded decoherence model that emulates the noisy intermediate‑scale quantum era. In contrast to existing quantum‑inspired optimization algorithms that rely on manual calibration or fixed heuristics, the proposed algorithm automatically adjusts key control parameters, such as the rotation angle in quantum gates and the mutation probability, by analyzing its own performance history using success‑history adaptation combined with a second‑order Lehmer weighted mean. This enables dynamic balancing between global exploration and local exploitation tailored to the characteristics of the objective function landscape. Comprehensive evaluation on a suite of benchmark functions from the evolutionary computation benchmark set demonstrates that the proposed algorithm achieves high reliability and robustness on multimodal, as well as complex hybrid and composite functions. The results highlight the promise of integrating quantum‑inspired optimization models with adaptive control strategies to develop robust black‑box optimization tools.
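As a hedged sketch of the success-history mechanism described above (an illustration of SHADE-style adaptation with a second-order weighted Lehmer mean, not the authors' code), the snippet below updates a control parameter, such as a rotation angle, from the parameter values that produced successful trials; weighting by fitness improvement is the usual convention and is assumed here.

```python
# Success-history parameter update via a weighted Lehmer mean of order 2.
import numpy as np

def weighted_lehmer_mean(values, weights, p=2):
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * values ** p) / np.sum(weights * values ** (p - 1))

def update_parameter_memory(successful_values, fitness_improvements):
    # Weight each successful parameter value by the improvement it produced.
    w = np.asarray(fitness_improvements, dtype=float)
    w = w / w.sum()
    return weighted_lehmer_mean(successful_values, w)

# Example: rotation angles that led to improvements, weighted by those improvements.
print(update_parameter_memory([0.01, 0.05, 0.02], [0.3, 1.2, 0.5]))
```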
Keywords: quantum-inspired algorithm, genetic algorithm, single-objective optimization, qudit, parameter adaptation, self-adaptation, success history.
Page numbers: 81-101.
Annotation: Objectives. This paper examines the problem of nonlinear data dimensionality reduction using the PaCMAP algorithm. The goal of the study is to explore different scenarios of data preprocessing and embedding initialization when implementing the PaCMAP algorithm. Methods. The basic version of the PaCMAP algorithm uses the PCA algorithm, a linear dimensionality reduction algorithm, for data preprocessing and embedding initialization. This paper examines various scenarios of data preprocessing and embedding initialization using 11 linear and nonlinear dimensionality reduction algorithms within the PaCMAP algorithm, in terms of loss function minimization. Results. Experimental studies on test and real-world datasets demonstrate the advantages of several dimensionality reduction algorithms when included in the scenarios of data preprocessing and embedding initialization compared to the PCA algorithm. The best results (in terms of loss function minimization) on the examined datasets were obtained, in particular, using the UMAP, MDS, and SE algorithms. However, the use of the MDS algorithm within the PaCMAP algorithm is accompanied by significant time costs. Conclusions. A number of linear and nonlinear dimensionality reduction algorithms offer advantages (in terms of loss function minimization) over the PCA algorithm when included in the scenarios of data preprocessing and embedding initialization of the PaCMAP algorithm. Using the PCA algorithm within the PaCMAP algorithm minimizes its implementation time, while using the MDS algorithm results in the maximum implementation time.
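A minimal sketch of the kind of initialization experiment the study describes is given below; it assumes the open-source `pacmap` package and its documented "pca"/"random" options for `init`, while passing a precomputed embedding as `init` is an assumption about the interface rather than a documented guarantee.

```python
# Comparing PaCMAP embedding-initialization scenarios; a sketch under the
# assumptions stated above, not the study's experimental protocol.
import numpy as np
import pacmap

rng = np.random.default_rng(0)
X = rng.random((1000, 50))                     # placeholder data matrix

# Baseline scenario: PCA-based preprocessing/initialization (library default).
emb_pca = pacmap.PaCMAP(n_components=2, n_neighbors=10).fit_transform(X, init="pca")

# Alternative scenario: random initialization; a precomputed embedding from
# another algorithm could also be supplied here (interface assumption).
emb_rand = pacmap.PaCMAP(n_components=2, n_neighbors=10).fit_transform(X, init="random")
```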
Keywords: dimensionality reduction algorithm, PaCMAP, dataset, visualization, data preprocessing, embedding initialization, loss function.
Page numbers: 102-123.
Annotation: The article proposes an approach to the automated synthesis of the electronic product structure based on large language models (LLM), constraint programming, and intermediate representation translators. The problem of converting unstructured natural-language requirements of experts into formalized machine-readable representations conforming to the GOST R 2.053-2023 and GOST 2.054-2013 standards is considered. A key feature of the approach is the use of an LLM as a semantic converter that generates an intermediate representation in accordance with a formal context-free grammar based on JSON. The architecture of the system is presented, which includes modules for semantic synthesis, validation of the intermediate representation, and translators into target formats. The paper presents a modular system architecture that includes three main levels: interaction; processing and synthesis; and data and integration. The components of synthesis, validation of the intermediate representation, and translators into SQL and UML formats are described in detail. Two algorithms have been developed and experimentally tested: the basic synthesis of the electronic product structure and an extended one using the human-in-the-loop methodology for iterative refinement of requirements. The practical significance of the work lies in the possibility of integrating the developed algorithms into automation systems for design, technological preparation of production, and product lifecycle management at industrial enterprises.
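As a hedged illustration of the validation step described above (the schema and field names are hypothetical simplifications, not the paper's grammar), the sketch below checks an LLM-produced JSON intermediate representation before it would be handed to SQL/UML translators.

```python
# Validating a JSON intermediate representation of a product structure against
# a minimal schema; uses the `jsonschema` package.
import json
import jsonschema

IR_SCHEMA = {                                   # hypothetical, simplified schema
    "type": "object",
    "required": ["product", "components"],
    "properties": {
        "product": {"type": "string"},
        "components": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["designation", "name", "quantity"],
                "properties": {
                    "designation": {"type": "string"},
                    "name": {"type": "string"},
                    "quantity": {"type": "integer", "minimum": 1},
                },
            },
        },
    },
}

def validate_ir(llm_output: str) -> dict:
    ir = json.loads(llm_output)                 # reject non-JSON output early
    jsonschema.validate(instance=ir, schema=IR_SCHEMA)
    return ir
```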
Keywords: electronic product structure, large language models, translators, intermediate representations, database management systems, structured query language, design automation
Page numbers: 124-137.