Issue 2 (2025)
System analysis, management and information processing
Annotation: Significant changes are currently under way in the field of standardization. The transition from the text
format to the SMART standard format will make it possible to move communication in this area from the chain
"man - standard - man - machine" to the machine-to-machine ("M2M") chain. Given the large number of standards
in the field of information technology standardization, forming rational profiles of standards in this area becomes
very difficult. The transition to SMART standards will significantly improve the quality and reduce the cost of
creating such profiles. However, in addition to the SMART standards themselves, the formation of such profiles
requires a whole range of additional mathematical software. The article substantiates such a list. The article also
concludes that a "Strategy for the Digital Transformation of Standardization in the Field of Information
Technology" should be developed as soon as possible.
Keywords: IT standardization, SMART standard, reference model, information technology, system approach, smart
document.
Page numbers: 5-12.
Annotation: The world is changing at an incredible pace: we are witnessing technological breakthroughs and rapid digitalization, changing needs and expectations of society, regulatory pressure, and geopolitical dynamics. Innovative solutions must meet all of these modern challenges. Standardization is currently intensifying discussions on the use of artificial intelligence, on an interoperable and secure digital environment with cybersecurity as its primary task, and on issues of technological convergence and the key role of standards in navigating its complex landscape.
Leading standardization organizations in the world are advancing their digital transformation efforts in all industries. In this regard, SMART standardization and related issues of artificial intelligence are of the greatest interest to us.
The article considers the most significant results of the development and general trends of SMART standardization using examples of the activities of international organizations ISO and IEC, national standardization organizations CEN, CENELEC, SAC, DIN, DKE, AFNOR, ONORM, as well as a number of the most proactive foreign standard-developing organizations: SAE International, ASME, UL Solution.
Keywords: digital transformation, SMART standard, CEN, CENELEC, IEC, ISO, DIN, DKE, AFNOR, ONORM, SAE International, ASME, UL Solution
Page numbers: 13-19.
Annotation: The modular approach to the construction of manipulation robots is increasingly used in various fields,
since it allows the kinematic structure to be reconfigured, thereby expanding the range of possible applications of
robotic tools in an uncertain working environment, and also improves the uniformity of the structure's components
and the reliability of the system as a whole. Despite the advantages of this concept, its widespread adoption is
limited by a number of disadvantages. It is shown that one of the main problems in controlling modular
manipulation robots is the mutual dynamic influence of the degrees of mobility; to address it, the application of
ANFIS structures at various levels of the hierarchy of an intelligent control system is considered. The paper
proposes an approach to dynamically decoupling the drives of a modular manipulation robot by using this class of
neuro-fuzzy inference systems at the tactical level to approximate the solution.
Keywords: modular manipulation robot, dynamic correction, intelligent control, neuro-fuzzy systems, ANFIS
Page numbers: 20-31.
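The tactical-level approximation described in this abstract rests on Sugeno-type fuzzy inference, the building block of an ANFIS network. The sketch below is not the authors' model: the membership parameters and rule consequents are arbitrary illustrative values, chosen only to show how firing strengths weight the linear rule outputs.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of x for a fuzzy set centered at c, width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def sugeno_infer(x, rules):
    """First-order Sugeno inference over a single input x.
    Each rule is (center, sigma, a, b): IF x is A THEN y = a*x + b."""
    w = np.array([gauss(x, c, s) for c, s, _, _ in rules])  # rule firing strengths
    y = np.array([a * x + b for _, _, a, b in rules])       # rule consequents
    return float(np.dot(w, y) / w.sum())                    # normalized weighted sum

# Two illustrative rules approximating a hypothetical correction curve
rules = [(-1.0, 0.8, 0.5, 0.1), (1.0, 0.8, -0.3, 0.9)]
print(sugeno_infer(0.0, rules))
```

In a full ANFIS, the centers, widths, and consequent coefficients above would be tuned from data rather than fixed by hand.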
Annotation:
Objectives. The article considers the problem of identifying novelty patterns in a list of regular expressions. The purpose of
the work is to study approaches to identifying novelty patterns in a list of regular expressions using one-class classification
algorithms.
Methods. To solve the problem, it is proposed to use approaches based on one-class classification algorithms, such as One
Class SVM and Isolation Forest. To represent regular expressions in vector form, it is proposed to use the models of
bidirectional pre-trained transformers BERT and ModernBERT.
Results. The results of experimental studies confirm the feasibility of using one-class classification algorithms for developing
classifiers that identify novelty patterns in a list of regular expressions. At the same time, the ModernBERT model
outperforms the BERT model in terms of ensuring high-quality classification when identifying novelty patterns in
a list of regular expressions.
Conclusions. The considered approaches to one-class classification of regular expressions can be recommended for use in
identifying novelty patterns in a list of regular expressions. In this case, vectorization of regular expressions used in training
and testing one-class classifiers can be performed based on models of bidirectional pre-trained transformers. One-class
classifiers of regular expressions can be used to check new data, including generated data, for the presence of normal patterns
and novelty patterns.
Keywords: regular expression, novelty pattern, BERT, ModernBERT, one-class classification, One-Class SVM, Isolation
Forest
Page numbers: 32-48.
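The pipeline this abstract describes can be sketched with scikit-learn's One-Class SVM and Isolation Forest. Since loading BERT or ModernBERT is out of scope here, a character n-gram hashing vectorizer stands in for the transformer embeddings, and the regular expressions are invented examples, not the authors' dataset.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

# Character n-gram hashing stands in for BERT/ModernBERT vectorization here
vec = HashingVectorizer(analyzer="char", ngram_range=(2, 4), n_features=256)

normal = [r"\d{4}-\d{2}-\d{2}", r"\d{2}/\d{2}/\d{4}", r"\d+\.\d+", r"\d{3}-\d{4}"]
candidates = [r"\d{4}-\d{2}-\d{2}", r"[A-Za-z]+@[A-Za-z]+\.[a-z]{2,}"]

X_train, X_test = vec.transform(normal), vec.transform(candidates)

# Both detectors label each sample +1 (known pattern) or -1 (novelty)
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_train)
iforest = IsolationForest(random_state=0).fit(X_train)
print(ocsvm.predict(X_test), iforest.predict(X_test))
```

With transformer embeddings in place of the hashing vectorizer, only the `transform` calls would change; the one-class training and prediction steps stay the same.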
Annotation: In a number of studies on risk management, the concept of an "acceptable risk level" is used as a criterion for deciding whether risk response measures are needed. The paper formulates a risk management strategy for innovative programs based on criteria tied to the limits of acceptable risk. Criteria have been determined for when, by the acceptable-risk-level criterion, changes must be made to the program or the program terminated as inexpedient. An example is given of managing a program for updating the orbital constellation of a space communications system with innovative new-generation spacecraft; it shows the ability to promptly make decisions, based on program monitoring data, on the need to form risk response measures within the framework of standard risk management processes, including in case of deviations of the program's implementation from planned targets.
Keywords: planning, program implementation management, technological innovations, process, risk
Page numbers: 93-104.
Annotation: Glossaries play a key role in standardizing terminology and ensuring uniformity of communication in rapidly
developing areas such as information technology (IT). However, their development and application are associated
with a number of problems due to the dynamism of the IT sphere and the interdisciplinary technology
environment. The article uses a unified electronic catalog of IT terms, compiled on the basis of various IT
standards (GOSTs, ITIL, COBIT, etc.), as a tool for identifying glossary problems. The use of such a tool allows
us to identify differing formulations of terms and to understand how such discrepancies arise. In
particular, it is shown that there are discrepancies in the relevant terms even within the same type of standard (for
example, the GOST standard), and such discrepancies can be both insignificant and significant. One of the
problems associated with the definition of terms is related to translation into English if the standard is created on
the basis of an international standard. Another common problem for IT glossaries is the ambiguity of terms
depending on the area of application. Electronic catalogs compiled from the standards thus make it possible to
identify problems with the quality of glossaries in IT.
Keywords: knowledge, glossary, standards, terms
Page numbers: 105-111.
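A catalog-based check for terminological discrepancies of the kind described above can be sketched as follows. The entries and standard names are invented for illustration; real entries would come from the unified electronic catalog of IT terms.

```python
from collections import defaultdict

# Hypothetical catalog entries: (term, source standard, definition)
entries = [
    ("service", "GOST (example)", "a means of delivering value to customers"),
    ("service", "ITIL (example)", "a means of enabling value co-creation"),
    ("incident", "ITIL (example)", "an unplanned interruption to a service"),
    ("incident", "COBIT (example)", "an unplanned interruption to a service"),
]

definitions = defaultdict(set)
for term, _source, definition in entries:
    definitions[term].add(definition.strip().lower())

# Terms whose definitions differ across standards
discrepant = sorted(t for t, d in definitions.items() if len(d) > 1)
print(discrepant)
```

A production check would add fuzzy matching to separate insignificant wording differences from significant ones, as the article distinguishes.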
Annotation: Objectives. The Kerberos protocol serves as a fundamental authentication mechanism within corporate networks, making
it a frequent target of attacks by malicious actors. Among the most dangerous are the so-called Golden Ticket and Silver
Ticket attacks. In these scenarios, the attacker utilizes forged tickets that bypass the standard authentication process
defined by the protocol. Given that current mitigation techniques fail to provide adequate protection – largely because
they rely on post-event analysis – there is a critical need for the capability to detect and neutralize such threats at an early
stage. The objective of this study is to detect Golden Ticket and Silver Ticket attacks by analyzing the contents of forged
tickets generated by attackers during their initial use. To identify anomalies within these tickets that indicate illegitimacy,
an automated detection system must be developed. Methods. To achieve the stated objective, the functionality of software
tools designed to generate forged tickets, such as Mimikatz, was analyzed to gain an understanding of their operating
principles. An analysis of the contents of various illegitimate tickets was also carried out to identify anomalous patterns.
Based on the revealed characteristics, sufficient criteria were defined to perform the analysis for detecting suspicious
authentications in the system using a deterministic method. Results. Several anomalies have been identified, the presence
of which in a ticket clearly indicates its illegitimacy, as well as one anomaly that requires additional analysis to reach a
verdict. An abstract system is described that uses a deterministic method to detect forged tickets, including certain
implementation details. Conclusions. All identified logical inconsistencies typical of forged tickets stem from the attacker’s
inability to accurately replicate the original structure of the target domain. This occurs because each domain is formed
based on unique characteristics inherent to a specific infrastructure, such as user and group hierarchy, service
configurations, or security policies. Despite the obvious and simple nature of the errors in forged tickets, checking
for these errors reliably detects suspicious authentications and prevents the negative consequences of an attack on the protocol.
Keywords: authentication, Kerberos protocol, cyberattacks detection, Golden Ticket, Silver Ticket
Page numbers: 112-120.
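Deterministic checks of the kind this abstract describes can be sketched as follows. The field names, thresholds, and sample ticket are all hypothetical: a real detector would parse these values from Kerberos traffic or event logs. The 10-hour lifetime limit is only a common domain-policy default; tickets forged with Mimikatz's defaults are valid for 10 years, which is exactly the kind of anomaly the article exploits.

```python
from datetime import datetime, timedelta

MAX_TICKET_LIFETIME = timedelta(hours=10)   # common domain-policy default
KNOWN_GROUP_RIDS = {512, 513, 518, 519}     # groups that actually exist in the domain
DIRECTORY_USERS = {"alice", "bob"}          # accounts present in the directory

def ticket_anomalies(ticket):
    """Return deterministic red flags for a decoded ticket (a plain dict here)."""
    flags = []
    if ticket["end_time"] - ticket["start_time"] > MAX_TICKET_LIFETIME:
        flags.append("lifetime exceeds domain policy")
    if not set(ticket["group_rids"]) <= KNOWN_GROUP_RIDS:
        flags.append("membership in non-existent groups")
    if ticket["username"] not in DIRECTORY_USERS:
        flags.append("account not present in the directory")
    return flags

forged = {"start_time": datetime(2025, 1, 1),
          "end_time": datetime(2035, 1, 1),    # a 10-year lifetime
          "group_rids": [512, 4242],
          "username": "fakeadmin"}
print(ticket_anomalies(forged))
```

Each flag mirrors the article's core observation: the attacker cannot accurately replicate structure (group hierarchy, accounts, policies) that is unique to the target domain.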
Computing systems and elements
Annotation: Design based on very-large-scale integrated circuits (VLSI) involves modularity, integration of libraries
of ready-made functional blocks, and standardized interfaces. The article hypothesizes that the creation of a unified format
for describing hardware models will speed up the development and testing of hardware solutions based on VLSI.
Unification of hardware model descriptions is important for compatibility, simplification of development,
testing, updates and acceleration of product launch to the market. The main disadvantages of the considered
existing solutions and methods are revealed. Aspects of hardware models are defined using the example of a device
receiving data using the UART protocol. The main functional units of the hardware models are highlighted:
combinational circuits, simple circuits with memory, and finite automata. The implementation of each functional
unit in the hardware description language Verilog HDL and in the high-level programming language Python is
considered. Simulations of the developed models at the system and hardware levels are presented. The similarities
of the description of hardware and software models are revealed. A route for designing hardware models based
on VLSI using a unified description is proposed. The structure of the hardware model is proposed, which does not
depend on the integrated development environment and computer-aided design tools. The work lays the
foundation for the standardization of design methods for computing systems focused on development in the VLSI
framework. Promising ways to develop this hypothesis may include the following areas: integration with artificial
intelligence and machine learning, expanding the field of testing hardware models, automating the process of
translating models into target languages, and developing visualization systems for design. These areas can
significantly expand the possibilities of applying the proposed hypothesis and contribute to more efficient design
of computing systems based on VLSI. It is planned to conduct research in the field of automatic analysis and
optimization of hardware model testing, which will improve design efficiency and simplify the integration of new
technologies into existing computing systems and their elements.
Keywords: description, software models, hardware models, programming languages, hardware description languages,
VLSI, model, Verilog HDL, Python, testing, verification.
Page numbers: 49-77.
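As an illustration of describing the same functional unit at the system level, the finite automaton of the UART receiver mentioned in the article can be modeled in Python roughly as below. This is a sketch under stated assumptions (one call per already-recovered bit period, 8 data bits LSB-first, no parity), not the article's model; the equivalent Verilog HDL description would be a clocked always block with the same three states.

```python
IDLE, DATA, STOP = range(3)

class UartRx:
    """Bit-period-level model of a UART receiver's finite automaton."""
    def __init__(self):
        self.state, self.bits, self.count, self.byte = IDLE, 0, 0, None

    def clock(self, rx):
        self.byte = None
        if self.state == IDLE and rx == 0:       # start bit detected
            self.state, self.bits, self.count = DATA, 0, 0
        elif self.state == DATA:
            self.bits |= rx << self.count        # shift in data, LSB first
            self.count += 1
            if self.count == 8:
                self.state = STOP
        elif self.state == STOP:
            if rx == 1:                          # valid stop bit: latch the byte
                self.byte = self.bits
            self.state = IDLE
        return self.byte

rx = UartRx()
frame = [0] + [(0x55 >> i) & 1 for i in range(8)] + [1]  # start, data, stop
received = [rx.clock(b) for b in frame][-1]
print(hex(received))  # 0x55
```

The same state register, next-state logic, and output logic map one-to-one onto an HDL implementation, which is the similarity between software and hardware model descriptions the article builds on.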
System programming
Annotation: The study aims to develop methods for formal analysis of the stability of distributed data replication systems under variable network topology and channel characteristics. The research addresses limitations of traditional approaches, which are insufficiently effective in the dynamic conditions of modern information infrastructures.
This work introduces stochastic models based on the Erdős–Rényi model for topology description and the Gilbert–Elliott model for communication channels. These approaches account for the probabilistic nature of connections and fluctuations in channel states. The proposed dynamic equations describe system behavior, including queue lengths, data flow intensity, and the number of replicas. Lyapunov analysis was employed to determine equilibrium states and their stability. Experimental verification demonstrated the high accuracy of the model in predicting system behavior. Metrics such as mean squared error and the explained variance ratio confirmed the adequacy of the proposed equations. The model exhibits robustness to changes in load parameters and topology, underscoring its universality. Key findings include the feasibility of utilizing averaged characteristics for real-time management of distributed systems. The proposed approach minimizes delays, ensures efficient utilization of bandwidth, and maintains stable system performance even under highly dynamic conditions. The practical significance of this work lies in the applicability of the proposed models for optimizing existing infrastructures, including Kubernetes clusters with Cilium integration. The developed management mechanisms enable adaptation to changing operational conditions, paving the way for the design of scalable and resilient distributed data transmission systems.
Keywords: distributed systems, replication, stochastic modeling, network topology, system stability
Page numbers: 78-92.
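The channel side of the model can be illustrated with a minimal Gilbert–Elliott simulation: a two-state Markov chain (Good/Bad) with a different loss probability in each state. The transition and loss probabilities below are illustrative values, not parameters from the study.

```python
import random

def simulate(n, p_gb=0.05, p_bg=0.3, loss_good=0.01, loss_bad=0.5, seed=0):
    """Fraction of n packets delivered over a Gilbert-Elliott channel."""
    rng = random.Random(seed)
    good, delivered = True, 0
    for _ in range(n):
        if good and rng.random() < p_gb:        # Good -> Bad transition
            good = False
        elif not good and rng.random() < p_bg:  # Bad -> Good transition
            good = True
        if rng.random() >= (loss_good if good else loss_bad):
            delivered += 1
    return delivered / n

print(simulate(10_000))
```

With these parameters the chain spends a fraction p_bg / (p_gb + p_bg) ≈ 0.86 of the time in the Good state, so the long-run delivery rate is roughly 0.86 · 0.99 + 0.14 · 0.5 ≈ 0.92, which the simulation should approach.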
Telecommunication systems, networks and devices
Annotation: The purpose of this paper is to investigate the possibilities of applying a neural network (NN) controller
in the control system of a refrigeration plant. Within the framework of the research, the principles of constructing
and training the neural network are considered, and experiments evaluating the effectiveness of the proposed
approach in comparison with traditional methods of regulation are carried out. The results of modeling and
experimental studies show improved ice cover temperature stability and reduced energy consumption for
maintaining the set parameters. The study helps determine the prospects for the use of neural network technologies
in the control of industrial equipment and identifies possible directions for the further development of this method.
The importance of taking into account the specifics of each particular task when designing and tuning neural
network regulators is emphasized.
Keywords: refrigeration plant, neural network controller, data processing, error back propagation method, ice coating
Page numbers: 121-127.
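A minimal sketch of the error back-propagation training named in the keywords: a single-hidden-layer network fitted to a toy control mapping by full-batch gradient descent. The architecture, data, and learning rate are arbitrary illustrative choices, not the paper's regulator.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training data: (temperature error, error rate) -> control action,
# standing in for the plant data a real regulator would be trained on
X = rng.uniform(-1, 1, (200, 2))
y = np.tanh(1.5 * X[:, :1] + 0.5 * X[:, 1:])

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)   # hidden layer, 8 tanh neurons
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # linear output neuron
lr = 0.1

for _ in range(1000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    out = h @ W2 + b2
    err = out - y
    # backward pass: propagate the output error through the layers
    gW2, gb2 = h.T @ err / len(X), err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh derivative
    gW1, gb1 = X.T @ dh / len(X), dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float((err ** 2).mean())
print(mse)
```

In a closed-loop controller the inputs would come from plant measurements and the output would drive the actuator; the training loop itself is unchanged.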