Traffic Camera Calibration

Traffic Surveillance Camera Calibration by 3D Model Bounding Box Alignment for Accurate Vehicle Speed Measurement [CVIU]

Abstract: In this paper, we focus on fully automatic traffic surveillance camera calibration, which we use for speed measurement of passing vehicles. We improve over a recent state-of-the-art camera calibration method for traffic surveillance based on two detected vanishing points. More importantly, we propose a novel automatic scene scale inference based on matching bounding boxes of rendered 3D vehicle models with bounding boxes detected in the image. The proposed method can be used from an arbitrary viewpoint and has no constraints on camera placement. We evaluate our method on the recent comprehensive speed measurement dataset BrnoCompSpeed. Experiments show that our automatic calibration method based on two detected vanishing points reduces the error by 50% compared to the previous state-of-the-art method. We also show that our scene scale inference is considerably more precise (mean speed measurement error 1.10 km/h), outperforming both the state-of-the-art automatic calibration method (86% error reduction; mean error 7.98 km/h) and manual calibration (19% error reduction; mean error 1.35 km/h). We also present qualitative results of the automatic camera calibration method on video sequences obtained from real surveillance cameras at various locations and under different lighting conditions (night, dawn, day).
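For illustration, the following Python sketch shows the standard relation that this family of two-vanishing-point calibration methods builds on: assuming square pixels, zero skew, and the principal point at the image centre, the focal length follows directly from two detected orthogonal vanishing points, and their cross product gives the road-plane normal. The numbers below are made up and the code is not the paper's implementation.

    import numpy as np

    def focal_from_vanishing_points(vp1, vp2, principal_point):
        # Focal length in pixels from two orthogonal vanishing points,
        # assuming square pixels, zero skew and the principal point at the image centre.
        u = np.asarray(vp1, dtype=float) - principal_point
        v = np.asarray(vp2, dtype=float) - principal_point
        dot = float(np.dot(u, v))
        if dot >= 0.0:
            raise ValueError("vanishing points are not consistent with orthogonal directions")
        return float(np.sqrt(-dot))

    def road_plane_normal(vp1, vp2, f, principal_point):
        # Direction of the third vanishing point = road-plane normal in camera coordinates.
        d1 = np.append(np.asarray(vp1, dtype=float) - principal_point, f)
        d2 = np.append(np.asarray(vp2, dtype=float) - principal_point, f)
        n = np.cross(d1, d2)
        return n / np.linalg.norm(n)

    pp = np.array([960.0, 540.0])            # principal point of a 1920x1080 frame
    vp_flow = np.array([1210.0, 300.0])      # made-up VP in the direction of traffic flow
    vp_across = np.array([-4500.0, 620.0])   # made-up VP across the road
    f = focal_from_vanishing_points(vp_flow, vp_across, pp)
    n = road_plane_normal(vp_flow, vp_across, f, pp)
    print(f"focal length = {f:.1f} px, road-plane normal = {n}")

After this step the only remaining unknown is the scene scale, which the paper infers by aligning rendered 3D vehicle models with the detected bounding boxes.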


BrnoCompSpeed: Review of Traffic Camera Calibration and A Comprehensive Dataset for Monocular Speed Measurement [arXiv] (IEEE ITS - under review)

Abstract: In this paper, we focus on visual speed measurement from a single monocular camera, which is an important task of visual traffic surveillance. Existing methods addressing this problem are hard to compare due to the lack of a common dataset with reliable ground truth. Therefore, it is not clear how the methods compare in various aspects and which factors affect their performance. We captured a new dataset of 18 full-HD videos, each around one hour long, recorded at 6 different locations. Vehicles in the videos (20,865 instances in total) are annotated with precise speed measurements from optical gates using LIDAR and verified with several reference GPS tracks. We provide the videos and metadata (calibration, lengths of features in the image, annotations, etc.) for future comparison and evaluation. Camera calibration is the most crucial part of speed measurement; therefore, we analyze a recently published method for fully automatic camera calibration and vehicle speed measurement and report detailed results on this dataset.
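As a rough sketch of how per-vehicle results on such a dataset can be summarised, the Python snippet below computes absolute speed errors against ground truth and reports mean, median, and 95th-percentile statistics. The numbers are hypothetical and this is not the dataset's official evaluation code.

    import numpy as np

    def speed_error_stats(measured_kmh, ground_truth_kmh):
        # Absolute per-vehicle speed errors and summary statistics, all in km/h.
        err = np.abs(np.asarray(measured_kmh, float) - np.asarray(ground_truth_kmh, float))
        return {"mean": float(err.mean()),
                "median": float(np.median(err)),
                "p95": float(np.percentile(err, 95))}

    # Hypothetical per-vehicle speeds; the dataset provides ground-truth speed per vehicle.
    measured = [81.2, 79.5, 95.0, 60.3, 102.7]
    truth    = [80.0, 80.4, 93.1, 61.0, 101.9]
    print(speed_error_stats(measured, truth))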


Automatic Camera Calibration For Traffic Understanding [BMVC 2014]

Abstract: We propose a method for fully automatic calibration of traffic surveillance cameras. The method calibrates the camera, including scale, without any user input, using only several minutes of input surveillance video. The targeted applications include speed measurement, measurement of vehicle dimensions, vehicle classification, etc. The achieved mean accuracy of speed and distance measurement is below 2%. Our efficient C++ implementation runs in real time on a low-end processor (Core i3) with a safe margin, even for full-HD videos.
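A minimal Python sketch of the measurement step that such a calibration enables is shown below: an image point is back-projected onto the road plane by intersecting its viewing ray with the plane, and the recovered scene scale converts the travelled distance to metres. All symbols and numeric values (principal point, focal length, plane normal, scale) are illustrative assumptions, not outputs of the paper's pipeline.

    import numpy as np

    def backproject_to_road_plane(pt, f, principal_point, plane_normal):
        # Intersect the viewing ray of image point `pt` with the road plane,
        # written as  n . X + 1 = 0  in camera coordinates (unit, unscaled distance);
        # the recovered scene scale turns these units into metres.
        ray = np.append(np.asarray(pt, dtype=float) - principal_point, f)
        t = -1.0 / float(np.dot(plane_normal, ray))
        return t * ray

    def speed_kmh(pt_a, pt_b, dt_seconds, f, pp, plane_normal, metres_per_unit):
        # Speed of a tracked point between two detections, in km/h.
        a = backproject_to_road_plane(pt_a, f, pp, plane_normal)
        b = backproject_to_road_plane(pt_b, f, pp, plane_normal)
        return np.linalg.norm(b - a) * metres_per_unit / dt_seconds * 3.6

    # Made-up numbers for illustration only.
    pp = np.array([960.0, 540.0])          # principal point (image centre)
    f = 1180.0                             # focal length in pixels
    n = np.array([0.0, -0.85, -0.53])      # upward road-plane normal in camera coordinates
    scale = 8.5                            # metres per unscaled unit (from scale inference)
    v = speed_kmh([900.0, 700.0], [915.0, 560.0], 0.2, f, pp, n, scale)
    print(f"estimated speed ~ {v:.1f} km/h")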


Fully Automatic Roadside Camera Calibration For Traffic Surveillance [IEEE ITS 2014]

Abstract: This paper deals with automatic calibration of roadside surveillance cameras. We focus on the parameters necessary for measurements in traffic surveillance applications. Contrary to existing solutions, our approach requires no a priori knowledge, works with a very wide variety of road settings (number of lanes, occlusion, quality of ground marking), and handles practically unlimited viewing angles. The main contribution is that our solution works fully automatically, without any per-camera or per-video manual settings or input whatsoever, and it is computationally cheap. Our approach tracks local feature points and analyzes their trajectories using the Cascaded Hough Transform and parallel coordinates. An important assumption about vehicle movement is that at least part of each vehicle's motion is approximately straight; we discuss the impact of this assumption on the applicability of our approach and show experimentally that it does not severely limit the usability of our approach.
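The Python sketch below illustrates the underlying idea with a deliberately simplified stand-in: instead of the Cascaded Hough Transform in parallel-coordinate (diamond) space used in the paper, it estimates the first vanishing point as the algebraic least-squares intersection of straight trajectory segments via an SVD. The segment coordinates are toy values.

    import numpy as np

    def vanishing_point_from_tracks(segments):
        # Least-squares vanishing point from straight trajectory segments.
        # Each segment ((x1, y1), (x2, y2)) comes from a tracked feature point;
        # every segment yields a homogeneous line l = p x q, and the vanishing point
        # minimises sum((l . v)^2), found as the smallest right singular vector.
        lines = []
        for p, q in segments:
            l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
            lines.append(l / np.linalg.norm(l[:2]))   # normalise so residuals are in pixels
        _, _, vt = np.linalg.svd(np.asarray(lines))
        v = vt[-1]
        return v[:2] / v[2]                           # assumes a finite vanishing point

    # Toy trajectory segments roughly converging towards (1200, 300).
    segments = [((100, 900), (650, 600)),
                ((400, 1000), (800, 650)),
                ((900, 950), (1050, 620))]
    print(vanishing_point_from_tracks(segments))

Unlike this simplification, the accumulation in diamond space used by the paper also handles near-parallel trajectories and vanishing points far outside (or at infinity beyond) the image.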