DynamicBoost
In dynamic textures, the temporal evolution of the image intensities is captured by a linear dynamical system whose parameters live on a Stiefel manifold, a space that is clearly non-Euclidean. Boosting is a remarkably simple and flexible classification algorithm with widespread applications in computer vision. However, applying boosting to non-Euclidean, infinite-length, and time-varying data such as videos is not straightforward. We present a novel boosting method for the recognition of visual dynamical processes. Our key contribution is the design of weak classifiers (features) that are formulated as linear dynamical systems. The main advantages of such features are that they can be applied to infinitely long sequences and that they can be computed efficiently by solving a set of Sylvester equations. The method can be applied to dynamic texture classification.
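The Sylvester-equation computation alluded to above can be illustrated in isolation. The sketch below is not the paper's implementation; it simply solves the discrete-time Sylvester (Stein) equation A1^T X A2 - X = -C1^T C2 by vectorization. Its solution is the infinite sum of (A1^T)^k C1^T C2 A2^k, i.e., the Gram matrix between the infinite observability matrices of two stable LDSs, which is the kind of closed-form quantity that lets LDS features be evaluated on arbitrarily long sequences. All matrix names and the toy check are illustrative.

```python
# Minimal sketch (not the authors' code): solve the discrete-time Sylvester
# (Stein) equation  A1^T X A2 - X = -C1^T C2  by vectorization. The solution
# X = sum_k (A1^T)^k C1^T C2 A2^k exists whenever A1, A2 are stable.
import numpy as np

def solve_stein(A1, C1, A2, C2):
    n1, n2 = A1.shape[0], A2.shape[0]
    # vec(A1^T X A2) = (A2^T kron A1^T) vec(X), with column-major vec
    M = np.kron(A2.T, A1.T) - np.eye(n1 * n2)
    q = -(C1.T @ C2).flatten(order="F")
    return np.linalg.solve(M, q).reshape((n1, n2), order="F")

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def random_stable_lds(n, p):
        # random (A, C) pair with spectral radius 0.9, purely for illustration
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return 0.9 * Q, rng.standard_normal((p, n))

    A1, C1 = random_stable_lds(4, 10)
    A2, C2 = random_stable_lds(4, 10)
    X = solve_stein(A1, C1, A2, C2)
    # sanity check against a truncated version of the defining series
    approx = sum(np.linalg.matrix_power(A1.T, k) @ C1.T @ C2 @ np.linalg.matrix_power(A2, k)
                 for k in range(200))
    print(np.allclose(X, approx))
```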
BAG OF DYNAMICAL SYSTEMS
In this paper, we consider the problem of categorizing videos of dynamic textures under varying viewpoint. We propose to model each video with a collection of Linear Dynamical Systems (LDSs), each describing the dynamics of a spatiotemporal video patch. This bag of systems (BoS) representation is analogous to the bag of features (BoF) representation, except that we use LDSs as feature descriptors. This poses several technical challenges to the BoF framework. Most notably, LDSs do not live in a Euclidean space, hence novel methods for clustering LDSs and computing codewords of LDSs need to be developed. Our framework tackles these issues by combining nonlinear dimensionality reduction and clustering techniques with the Martin distance for LDSs, as sketched below. Our experiments show that the BoS approach can recognize dynamic textures in challenging scenarios that cannot be handled by existing dynamic texture recognition methods.
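The following is a hedged sketch of the two non-Euclidean ingredients above, assuming each video patch has already been summarized by a stable LDS (A, C): the Martin distance, computed from subspace angles obtained via the same Stein/Sylvester equations as before, and an embedding of the pairwise distances with classical MDS, on which standard clustering can then produce codewords. Function names are illustrative; this is not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: Martin distance between two LDSs
# (A1, C1) and (A2, C2) via subspace angles, plus a classical-MDS embedding of
# a pairwise distance matrix for forming BoS codewords by clustering.
import numpy as np

def stein(A1, C1, A2, C2):
    """Solve A1^T X A2 - X = -C1^T C2 (infinite observability Gram matrix)."""
    n1, n2 = A1.shape[0], A2.shape[0]
    M = np.kron(A2.T, A1.T) - np.eye(n1 * n2)
    q = -(C1.T @ C2).flatten(order="F")
    return np.linalg.solve(M, q).reshape((n1, n2), order="F")

def martin_distance(lds1, lds2):
    """Martin distance from the subspace angles between observability subspaces."""
    (A1, C1), (A2, C2) = lds1, lds2
    P11 = stein(A1, C1, A1, C1)
    P22 = stein(A2, C2, A2, C2)
    P12 = stein(A1, C1, A2, C2)
    # squared cosines of the subspace angles
    cos2 = np.real(np.linalg.eigvals(np.linalg.solve(P11, P12) @ np.linalg.solve(P22, P12.T)))
    cos2 = np.clip(cos2, 1e-12, 1.0)
    return np.sqrt(-np.sum(np.log(cos2)))

def mds_embed(D, dim=2):
    """Classical MDS: embed a pairwise distance matrix D into R^dim."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Running k-means on the MDS embedding (or a medoid-based clustering directly on the Martin distance matrix) would yield the codewords; each video can then be represented by a histogram of its patch-level LDSs over those codewords, in direct analogy to BoF.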