This paper examines and compares two neural networks, U-Net-Attention and SegGPT, which use different attention mechanisms to capture relationships between parts of the input and output data. The U-Net-Attention architecture is a U-Net augmented with attention layers, an efficient neural network for image segmentation. It has an encoder and a decoder joined by skip connections that bypass the hidden bottleneck layers, allowing information about the local properties of feature maps to be carried through to the decoder. To improve segmentation quality, the original U-Net architecture is extended with an attention layer, which helps the network focus on the image features of interest. The SegGPT model is based on the Vision Transformer architecture and likewise relies on an attention mechanism. Both models concentrate attention on the important aspects of a problem and can be effective across a variety of tasks. In this work, we compared their performance on segmenting cracks in road-surface images, with the further aim of classifying the condition of the road surface as a whole. We also analyze and draw conclusions about the applicability of transformer architectures to a wide range of problems.
Keywords: machine learning, Transformer neural networks, U-Net-Attention, SegGPT, roadway condition analysis, computer vision
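Both architectures discussed above rest on the same core operation: attention weights each part of the input by its relevance to a query. As a minimal illustrative sketch (not the paper's actual implementation, which operates on convolutional feature maps and transformer tokens), scaled dot-product attention over plain Python vectors can be written as:

```python
import math

def scaled_dot_product_attention(queries, keys, values):
    """Minimal scaled dot-product attention over plain Python lists.

    queries, keys: lists of d-dimensional vectors; values: one vector
    per key. Returns one attended output vector per query.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # similarity score between the query and every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # softmax turns scores into attention weights that sum to 1
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # output is the weighted combination of the value vectors
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

In an attention U-Net the weights gate the skip-connection feature maps, while in a transformer such as SegGPT the same operation relates every image patch to every other patch.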
This article discusses the possibility of changing the formulation of an anti-icing mixture directly in a combined road machine by integrating data from road weather stations and modernizing a universal spreader. To change the recipe of the distributed mixture quickly, using a sand-salt mixture as an example, a two-hopper universal spreader with an automated control system is proposed. The recipe of the distributed anti-icing mixture is calculated according to the weather conditions. An example of LabVIEW software is given for solving the local problem of finding the nearest weather station.
Keywords: automation, road machine, universal spreader, de-icing materials, composition selection, control system
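The station-lookup step mentioned above is implemented in LabVIEW in the paper; as a language-neutral sketch of the same local problem, a nearest-station search by great-circle (haversine) distance from the road machine's position might look like this (the station names and coordinates below are made up for illustration):

```python
import math

# Hypothetical station list: (name, latitude, longitude). A real system
# would query the road weather station network for these records.
STATIONS = [
    ("station_a", 55.75, 37.62),
    ("station_b", 59.94, 30.31),
    ("station_c", 56.84, 60.61),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_station(lat, lon, stations=STATIONS):
    """Return the station record closest to the given position."""
    return min(stations, key=lambda s: haversine_km(lat, lon, s[1], s[2]))
```

The weather data from the selected station would then drive the recipe calculation for the two-hopper spreader's control system.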