Recently, there have been important advances in driver attention prediction in dynamic scenarios for the bionic perception of intelligent vehicles. Given the visual characteristics of experienced and skilled drivers, who can rapidly and accurately perceive their environment and effectively identify the significance of environmental stimuli, we believe that an effective driver attention prediction model should not only extract temporal-spatial features comprehensively but also flexibly highlight the importance of these features. To address this challenge, this paper proposes an improved multi-scale temporal-spatial fusion network, which adopts an encoder-fusion-decoder architecture and makes full use of the scale, spatial, and temporal information in video data. First, in the encoder, two independent feature extraction backbones, one 2D and one 3D, extract four temporal-spatial features of different scales from the input video clip and align them in the feature dimension. Then, in the hierarchical temporal-spatial feature fusion module, features from different levels are concatenated along the channel dimension and fused using an attention mechanism to achieve a 3D-2D soft combination guided by spatial features. Finally, in the hierarchical decoder and prediction module, the temporal-spatial features of each branch are decoded and predicted hierarchically, and the results of the multiple branches are fused to generate saliency maps. Experiments on three challenging datasets show that the proposed method outperforms state-of-the-art methods on several saliency evaluation metrics and can predict driver attention more accurately. By using an effective attention-based temporal-spatial fusion strategy, the proposed driver attention prediction method can detect important targets and identify risk areas for a human-like autonomous driving system.
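For readers who want a concrete picture of the encoder-fusion-decoder pipeline before diving into the repository, the snippet below is a minimal PyTorch sketch of the idea. It is an illustration only: the module names (`MTSFSketch`, `ChannelAttentionFusion`), channel widths, the plain convolutional encoders (standing in for the pretrained 2D/3D backbones), and the squeeze-and-excitation style attention are all assumptions rather than the actual implementation in the `model` directory.

```python
# Minimal sketch of the encoder-fusion-decoder idea described in the abstract.
# All names, channel sizes, and the attention design are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttentionFusion(nn.Module):
    """Fuse a 2D (spatial) feature map and a temporally pooled 3D feature map of the
    same scale: concatenate along the channel dimension, re-weight the channels with a
    squeeze-and-excitation style attention (an assumed stand-in for the paper's
    attention-guided 3D-2D soft combination), and project back to one branch width."""

    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat2d, feat3d):
        x = torch.cat([feat2d, feat3d], dim=1)  # channel concatenation
        x = x * self.attn(x)                    # attention-guided re-weighting
        return self.project(x)


class MTSFSketch(nn.Module):
    """Toy stand-in: a 2D and a 3D encoder each produce four feature scales, every
    scale is fused and decoded to a per-scale saliency map, and the multi-branch
    predictions are averaged into the final map."""

    def __init__(self, channels=(16, 32, 64, 128)):
        super().__init__()
        self.enc2d, self.enc3d = nn.ModuleList(), nn.ModuleList()
        in2d, in3d = 3, 3
        for c in channels:
            self.enc2d.append(nn.Sequential(
                nn.Conv2d(in2d, c, 3, stride=2, padding=1), nn.ReLU(inplace=True)))
            self.enc3d.append(nn.Sequential(
                nn.Conv3d(in3d, c, 3, stride=(1, 2, 2), padding=1), nn.ReLU(inplace=True)))
            in2d, in3d = c, c
        self.fuse = nn.ModuleList([ChannelAttentionFusion(c) for c in channels])
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in channels])

    def forward(self, clip):
        # clip: (B, 3, T, H, W); the last frame serves as the 2D (spatial) input
        f2d, f3d = clip[:, :, -1], clip
        maps = []
        for enc2d, enc3d, fuse, head in zip(self.enc2d, self.enc3d, self.fuse, self.heads):
            f2d, f3d = enc2d(f2d), enc3d(f3d)
            fused = fuse(f2d, f3d.mean(dim=2))  # pool the temporal axis before fusion
            sal = head(fused)                   # per-scale saliency logits
            maps.append(F.interpolate(sal, size=clip.shape[-2:],
                                      mode="bilinear", align_corners=False))
        return torch.sigmoid(torch.stack(maps).mean(0))  # fuse multi-branch predictions


if __name__ == "__main__":
    model = MTSFSketch()
    out = model(torch.randn(1, 3, 8, 128, 128))  # -> (B, 1, H, W) saliency map
    print(out.shape)
```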
This repository was used throughout the work presented in the paper, so it contains a fairly large amount of code. Nonetheless, it should be easy to navigate. In particular:
- Main directory: the Python files in the main directory are used for training and testing, corresponding to the different datasets (DR(eye)VE, DADA-2000, TDV).
- model: contains the MTSF model files.
- data: contains the files used for loading data during training and testing, corresponding to the different datasets.
- saved_models: contains the weight files saved during training.
Please note that you need to check the paths in the above scripts and change them to your own paths.
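For example, the paths to edit typically look like the following (a hypothetical sketch; the actual variable names and their locations differ across the scripts):

```python
# Hypothetical example of the kind of paths to adjust before running the scripts;
# the real variable names live in the training/testing and data-loading files.
DREYEVE_ROOT = "/path/to/DREYEVE_DATA"  # root directory of the DR(eye)VE dataset
DADA_ROOT = "/path/to/DADA-2000"        # root directory of the DADA-2000 dataset
TDV_ROOT = "/path/to/TDV"               # root directory of the TDV dataset
SAVE_DIR = "./saved_models"             # where training checkpoints are written
```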
All Python code has been developed and tested with PyTorch.
Pre-trained weights of the MTSF model can be downloaded from this link: https://pan.baidu.com/s/1EXLK_GemaG0h36A9VncELw (extraction code: b8ny).
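Once downloaded, the weights can be restored in the usual PyTorch way. The snippet below is a minimal sketch; the import path, class name, and checkpoint filename are assumptions and should be adjusted to the actual files in the `model` and `saved_models` directories.

```python
import torch
from model.MTSF import MTSF  # hypothetical import path; use the actual module/class name

# Minimal sketch of loading the downloaded pre-trained weights for evaluation.
model = MTSF()
state_dict = torch.load("saved_models/MTSF_pretrained.pth", map_location="cpu")  # assumed filename
model.load_state_dict(state_dict)
model.eval()
```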
To clearly demonstrate the contribution of our proposed MTSF, we made a video demo, which you can find here.