System and method for static and moving object detection
Inventors
Seetharaman, Gunasekaran • Palaniappan, Kannappan • Ersoy, Filiz Bunyak • Wang, Rui
Assignees
Department Of Air Force • United States Department of the Air Force
Publication Number
US-9454819-B1
Publication Date
2016-09-27
Expiration Date
2035-07-10
Interested in licensing this patent?
MTEC can help explore whether this patent might be available for licensing for your application.
Abstract
A method for static and moving object detection employing a motion computation method based on spatio-temporal tensor formulation, a foreground and background modeling method, and a multi-cue appearance comparison method. The invention operates in the presence of shadows, illumination changes, dynamic background, and both stopped and removed objects.
Core Innovation
The invention provides a method for static and moving object detection using a novel hybrid approach that integrates motion computation based on spatio-temporal tensor formulation (flux tensor), a split Gaussian modeling technique for foreground and background, and a multi-cue appearance comparison method. This method is designed to operate effectively in challenging conditions including shadows, illumination changes, dynamic backgrounds, and the presence of both stopped and removed objects.
The problem addressed is the difficulty of reliably detecting moving objects in real-world monitoring scenarios due to factors such as background complexity, illumination variation, noise, and occlusions. Prior-art techniques relying solely on motion detection methods such as the flux tensor fail to detect stationary or slowly moving objects, since these become assimilated into the background model, resulting in loss of detection when objects stop moving.
The invention, named Flux Tensor with Split Gaussian models (FTSG), overcomes these limitations by fusing flux tensor based motion segmentation with a split Gaussian background subtraction model that separately models background and foreground with adaptive Gaussian mixtures. This fusion allows detection of both moving and stationary foreground objects and classifies static foreground into stopped objects versus revealed background due to object removal. The method also handles dynamic background elements, shadows, illumination changes, and camera jitter, providing improved accuracy and reliable detection over previous approaches.
Claims Coverage
The patent includes two independent claims that cover methods for static and moving object detection using motion segmentation fused with adaptive foreground and background modeling, and semantic labeling of changed pixels.
Motion segmentation using temporal variation of optical flow via flux tensor
The method performs pixel-level motion detection by computing temporal variation of optical flow within a local 3D spatio-temporal volume using a flux tensor matrix, enabling classification of moving and non-moving regions without expensive eigenvalue decompositions.
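The trace of the flux tensor, which captures the integrated squared temporal derivative of the spatial intensity gradient, can be sketched with NumPy. This is a simplified illustration of the idea, not the patented implementation; the function name, window size, and the plain box averaging are illustrative assumptions.

```python
import numpy as np

def flux_tensor_trace(frames, window=3):
    """Trace of the flux tensor: the local average of the squared temporal
    derivative of the spatial intensity gradient. Large values indicate
    motion, and the trace needs no eigenvalue decomposition.

    frames: (T, H, W) grayscale stack. Simplified sketch.
    """
    frames = np.asarray(frames, dtype=float)
    gy = np.gradient(frames, axis=1)  # dI/dy per frame
    gx = np.gradient(frames, axis=2)  # dI/dx per frame
    # temporal derivative of each spatial gradient component
    gxt = np.gradient(gx, axis=0)
    gyt = np.gradient(gy, axis=0)
    trace = gxt ** 2 + gyt ** 2
    # box average over a local spatial window (stands in for the
    # spatio-temporal integration in the flux tensor formulation)
    pad = window // 2
    padded = np.pad(trace, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    T, H, W = trace.shape
    out = np.zeros_like(trace)
    for dy in range(window):
        for dx in range(window):
            out += padded[:, dy:dy + H, dx:dx + W]
    return out / (window * window)
```

A bright patch translating across an otherwise static scene yields a high trace along the patch and a near-zero trace in static regions, which is the basis for the moving/non-moving classification described above.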
Background and foreground modeling with adaptive split Gaussian models
Background and foreground are modeled separately, where background modeling uses an adaptive plurality of Gaussians with the number of Gaussians spatially and temporally adaptive per pixel. Foreground modeling uses a separate appearance model to distinguish static foreground from ambiguous foreground detections.
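A per-pixel background model with an adaptive number of Gaussians can be sketched as follows. This is a minimal illustration of the split Gaussian idea only on the background side; the class name, parameter values, and pruning threshold are assumptions, and the separate foreground appearance model described above is not modeled here.

```python
import numpy as np

class SplitGaussianBackground:
    """Per-pixel background model with a variable number of Gaussian modes.

    Each pixel keeps a small, adaptive set of [mean, variance, weight]
    modes: a new mode is added when no existing mode explains an
    observation, and modes with negligible weight are pruned, so the
    mode count adapts spatially and temporally. Simplified sketch.
    """

    def __init__(self, max_modes=4, match_k=3.0, alpha=0.05, init_var=100.0):
        self.max_modes = max_modes  # cap on modes per pixel
        self.match_k = match_k      # match if |x - mean| < k * sigma
        self.alpha = alpha          # learning rate
        self.init_var = init_var    # variance of a newly created mode
        self.modes = None           # per-pixel list of [mean, var, weight]

    def apply(self, frame):
        """Update the model with `frame`; return a boolean foreground mask."""
        frame = np.asarray(frame, dtype=float)
        if self.modes is None:
            self.modes = [[[v, self.init_var, 1.0]] for v in frame.ravel()]
        mask = np.zeros(frame.size, dtype=bool)
        for i, x in enumerate(frame.ravel()):
            modes = self.modes[i]
            matched = None
            for m in modes:
                if (x - m[0]) ** 2 < self.match_k ** 2 * m[1]:
                    matched = m
                    break
            if matched is not None:
                matched[0] += self.alpha * (x - matched[0])
                matched[1] += self.alpha * ((x - matched[0]) ** 2 - matched[1])
                matched[2] += self.alpha * (1.0 - matched[2])
            else:
                mask[i] = True  # no background mode explains this pixel
                if len(modes) < self.max_modes:
                    modes.append([x, self.init_var, self.alpha])
            for m in modes:
                if m is not matched:
                    m[2] *= 1.0 - self.alpha
            # prune negligible modes: the mode count adapts per pixel
            self.modes[i] = [m for m in modes if m[2] > 0.01]
        return mask.reshape(frame.shape)
```

Feeding a stable background with one transient bright pixel flags only that pixel, and the background resumes matching on the next frame.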
Fusing motion segmentation and split Gaussian models for improved detection
The method fuses motion segmentation results with background and foreground modeling outputs to identify moving foreground objects as those detected by both methods and static foreground objects as those detected by background modeling with a match to the foreground appearance model.
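The fusion rule above can be paraphrased as boolean mask logic; the function and argument names here are illustrative, and the masks are assumed to come from the motion segmentation, background subtraction, and foreground appearance steps respectively.

```python
import numpy as np

def fuse_detections(motion_mask, bg_sub_mask, fg_model_match):
    """Fusion sketch following the claim language: pixels flagged by both
    motion segmentation and background subtraction are moving foreground;
    pixels flagged only by background subtraction that also match the
    foreground appearance model are static foreground."""
    moving = motion_mask & bg_sub_mask
    static = bg_sub_mask & ~motion_mask & fg_model_match
    return moving, static
```

This is why a stopped car persists: background subtraction still flags it, the flux tensor no longer does, and the foreground appearance match keeps it labeled as static foreground rather than letting it vanish.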
Discriminating static foreground into stopped objects and background via edge matching
Static foreground objects are further discriminated by identifying static pixels, performing edge detection on the current image, the background model, and the foreground mask, and classifying objects based on edge similarity to distinguish stopped objects from background revealed by object removal.
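The edge-similarity test can be sketched as follows. This is a simplified illustration under stated assumptions, not the patented procedure: a gradient-magnitude threshold stands in for edge detection, and a Jaccard-style overlap score with a 0.5 cutoff stands in for the classification step; all names and thresholds are illustrative.

```python
import numpy as np

def edge_map(img, thresh=10.0):
    """Boolean edge map from gradient magnitude (stand-in edge detector)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return np.hypot(gx, gy) > thresh

def classify_static_region(current, background_model, region_mask, thresh=10.0):
    """If the current image's edges inside the static region agree with the
    background model's edges, the region is background revealed by object
    removal; otherwise it is a stopped object. Simplified sketch."""
    cur_e = edge_map(current, thresh) & region_mask
    bg_e = edge_map(background_model, thresh) & region_mask
    inter = np.count_nonzero(cur_e & bg_e)
    union = np.count_nonzero(cur_e | bg_e)
    similarity = inter / union if union else 1.0
    return "revealed_background" if similarity > 0.5 else "stopped_object"
```

Intuitively, a removed object exposes background whose edges were already in the background model, whereas a stopped object introduces edges the background model does not contain.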
Associating semantic labels with changed pixels using multi-cue fusion
The method associates semantic labels to changed pixels by performing motion segmentation, background subtraction with split Gaussian models, appearance agreement with foreground model, and object-based analysis to classify pixels into categories such as true moving object, stopped object, shadowing, illumination change, static background, revealed background, and dynamic background.
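The multi-cue labeling can be sketched as a per-pixel decision over boolean cues. The cue predicates and their precedence here are illustrative assumptions, not the patent's exact decision procedure; only the category names follow the claim language above.

```python
def semantic_label(in_motion, bg_changed, fg_match, shadow_cue, illum_cue):
    """Assign one of the claimed pixel categories from boolean cues:
    motion segmentation, background-subtraction change, foreground
    appearance agreement, and shadow/illumination cues. Sketch only."""
    if not bg_changed:
        # no change against the background model
        return "dynamic_background" if in_motion else "static_background"
    if shadow_cue:
        return "shadowing"
    if illum_cue:
        return "illumination_change"
    if in_motion:
        return "true_moving_object"
    if fg_match:
        return "stopped_object"
    return "revealed_background"
```

For example, a pixel that changed against the background model, is not moving, and matches the foreground appearance model is labeled a stopped object, while the same pixel without a foreground match is labeled revealed background.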
The claims collectively cover a comprehensive method that integrates motion-based detection via flux tensor, adaptive split Gaussian modeling of foreground and background, fusion of detection results, classification of static foreground objects, and semantic labeling of changed pixels to robustly detect and distinguish moving, stopped, and background objects under challenging conditions.
Stated Advantages
Improved performance and accuracy in detecting moving and stationary objects compared to prior art methods.
Capability to detect stationary objects that remain stopped without assimilating them into the background.
Reduction of false detections caused by shadows, illumination changes, dynamic background, and camera jitter.
Ability to distinguish between different types of static foreground regions such as stopped objects versus revealed background due to object removal.
Fast bootstrapping and adaptive parameter learning that reduce manual parameter specification and enhance detection accuracy.
Robust handling of dynamic backgrounds and environmental changes such as rain, snow, and lighting variations.
Documented Applications
Object tracking
Behavior understanding
Object or event recognition
Automated video surveillance
Detection in video taken by moving cameras after stabilization and registration