
Two-streams Fully Convolutional Networks for Abnormal Event Detection in Videos

Abstract : In the context of abnormal event detection in videos, only normal events are available during training, so unsupervised learning methods are essential. We propose a new architecture, Two-Stream Fully Convolutional Networks (TS-FCN), to extract robust representations of the shapes and movements that can occur in a monitored scene. The FCNs are obtained by training two Convolutional Auto-Encoders (CAEs) and keeping the encoder part of each. The first CAE is trained on sequences of consecutive frames to extract spatio-temporal features. The second is trained to reconstruct optical-flow images from the original frames, which provides a better description of motion. We augment the TS-FCN with a Gaussian classifier in order to detect abnormal spatio-temporal events that could present a security risk. Experimental results on the challenging UCSD Ped2 dataset show the effectiveness of the proposed method compared to the state of the art in abnormal event detection.
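The final scoring step described in the abstract, fitting a Gaussian model on features of normal events only and flagging low-likelihood samples as abnormal, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: feature extraction by the TS-FCN is stubbed out with random vectors, and all names (`GaussianAnomalyScorer`, the regularization constant) are hypothetical.

```python
import numpy as np

class GaussianAnomalyScorer:
    """Fit a multivariate Gaussian on features of *normal* events only,
    then score new samples by squared Mahalanobis distance (high = abnormal)."""

    def fit(self, feats):
        # feats: (n_samples, n_features), e.g. outputs of the two encoder streams
        self.mean = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False)
        # Small ridge term keeps the covariance invertible (assumed choice)
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(feats.shape[1]))
        return self

    def score(self, feats):
        d = feats - self.mean
        # Squared Mahalanobis distance for each row of feats
        return np.einsum('ij,jk,ik->i', d, self.cov_inv, d)

# Toy usage: "normal" features cluster near the origin
rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 8))
scorer = GaussianAnomalyScorer().fit(normal_feats)

typical_scores = scorer.score(rng.normal(0.0, 1.0, size=(5, 8)))
outlier_score = scorer.score(np.full((1, 8), 6.0))
print(outlier_score[0] > typical_scores.max())  # the outlier scores far higher
```

In practice a threshold on this score (chosen on validation data) separates normal from abnormal events; samples far from the Gaussian fitted on normal training data are declared anomalous.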
Document type :
Conference papers
Contributor : Jean-Baptiste VU VAN
Submitted on : Monday, August 16, 2021 - 1:51:32 PM
Last modification on : Sunday, June 26, 2022 - 4:41:55 AM


  • HAL Id : hal-03320773, version 1



Slim Hamdi, Samir Bouindour, Kais Loukil, Hichem Snoussi, Mohamed Abid. Two-streams Fully Convolutional Networks for Abnormal Event Detection in Videos. 12th International Conference on Agents and Artificial Intelligence, Feb 2020, Valletta, Malta. pp.514-521. ⟨hal-03320773⟩


