Conference papers

Video Object Detection Base on RGB and Optical Flow Analysis

Abstract: Although image object detection technology has developed quickly, object detection in videos faces special conditions such as motion blur, video defocus, partial occlusion, and rare poses, and must keep detection results temporally consistent. Image object detection models therefore cannot be applied to videos directly. In this paper, we propose a video object detection method that combines YOLO-v3 with the FlowNet 2.0 optical flow extraction network to make full use of the information in consecutive video frames. The proposed Flow-guided Partial Warp method operates at the feature-map level to make the detection results more accurate. To counter the influence of background motion blur, optical flow compression by bilinear interpolation is used to minimize its effect. Meanwhile, the proposed method uses frame skipping to avoid running the convolutional neural network repeatedly on frames with little change, which reduces the consumption of computing resources and speeds up the model. In addition, this paper also uses an autonomous-vehicle simulator to generate input data for the video object detection algorithm. Finally, after testing in several typical scenarios, the detection results in most scenes are clearly improved by using optical flow guidance and feature-map aggregation.
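The two ideas the abstract centers on — warping a previous frame's feature map along the optical flow with bilinear interpolation, and skipping the detector on frames with little motion — can be sketched as follows. This is a minimal NumPy illustration of generic flow-guided warping, not the paper's exact Flow-guided Partial Warp; the function names and the skip threshold are assumptions for illustration.

```python
import numpy as np

def warp_features(feat_prev, flow):
    """Warp a previous frame's (H, W, C) feature map toward the current
    frame using a (H, W, 2) optical flow field (dx, dy per pixel) and
    bilinear interpolation. A generic sketch, not the paper's exact method."""
    H, W, _ = feat_prev.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source sampling coordinates in the previous feature map, clamped to bounds.
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx = (sx - x0)[..., None]; wy = (sy - y0)[..., None]
    # Bilinear blend of the four neighbouring feature vectors.
    return ((1 - wy) * ((1 - wx) * feat_prev[y0, x0] + wx * feat_prev[y0, x1])
            + wy * ((1 - wx) * feat_prev[y1, x0] + wx * feat_prev[y1, x1]))

def should_skip(flow, thresh=0.5):
    """Frame-skipping heuristic: reuse the previous detections when the
    mean flow magnitude is small. The threshold value is an assumption."""
    return float(np.linalg.norm(flow, axis=-1).mean()) < thresh
```

With zero flow the warp is the identity, and `should_skip` returns True, so a nearly static frame would reuse the cached detections instead of re-running the detector.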
Contributor: Jean-Baptiste Vu Van
Submitted on: Thursday, November 21, 2019 - 9:18:09 AM
Last modification on: Friday, November 22, 2019 - 1:36:06 AM





Shunyao Zhang, Tian Wang, Chuanyun Wang, Yan Wang, Guangcun Shan, et al. Video Object Detection Base on RGB and Optical Flow Analysis. 2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI), Sep 2019, Xi'an, China. ⟨10.1109/CCHI.2019.8901921⟩. ⟨hal-02373483⟩


