Computer science, computer engineering, electrical engineering, biomedical engineering, mechanical engineering, and civil engineering, together with physics, cognitive psychology, evolutionary biology, and mathematics, have all contributed to computer vision and to the solution of many of its theoretical and practical problems. For example, an understanding of human vision, as well as of vision in terrestrial, underground, underwater, and flying animals, could be very helpful in developing improved computer vision algorithms for various "smart" autonomous machines such as terrestrial vehicles, mobile robots, underground drones, submersibles, and constellations of small satellites, including nano-, pico-, and femtosatellites. Another fundamental reason for the quest to develop better computer vision is the expanding interest in cognitive dynamical systems: while adaptive control systems constituted the foundation of the previous revolution in systems, cognitive dynamical systems appear to be the next.
This book is right at the center of such developments. It is motivated by the need to solve many fundamental problems in computer vision, including static and moving object segmentation, scene registration and tracking, and object classification. Its approach also matches the requirements of dynamical systems by including not only the three classic stable states (point, cyclic, and toroidal stability) but also the fourth, chaos. Under special conditions and driving functions, chaos can develop in a system and affect its behavior either negatively or positively. The positive impact is explored in the book through aperiodic forcing functions that induce temporal chaotic-like behavior (in image sequences) and spatial chaotic-like behavior (in image textures), an approach of interest to many researchers today. Since the phase-space trajectories (behavior) of chaotic systems are fractal and multi-fractal, their analysis must also be multi-fractal. The book uses this approach to detect chaotic behavior in a system; the same approach could also be used to quantify that behavior in order to improve it. Because multi-fractal analysis is necessarily based on information-theoretic arguments, it is often superior to energy-based approaches in quantifying the complexity of systems and their behavior. An example of a unified approach to the multi-fractal analysis of self-affine objects in phase space is presented in Chapter 5.
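To give a flavor of the fractal machinery involved (our illustration, not an excerpt from Chapter 5), the sketch below estimates the simplest member of the generalized-dimension family used in multi-fractal analysis: the box-counting (capacity) dimension, applied here to the middle-thirds Cantor set, whose exact dimension is log 2 / log 3 ≈ 0.631. The function names are our own; exact rational arithmetic avoids floating-point box-boundary errors.

```python
from fractions import Fraction
from math import log

def cantor_points(level):
    """Left endpoints of the 2**level intervals of the middle-thirds Cantor set."""
    pts = [Fraction(0)]
    for _ in range(level):
        pts = [p / 3 for p in pts] + [Fraction(2, 3) + p / 3 for p in pts]
    return pts

def box_count(points, k):
    """Number of boxes of side 3**-k that contain at least one point."""
    return len({int(p * 3**k) for p in points})

# Dimension estimate: log N(epsilon) / log(1/epsilon) at scale epsilon = 3**-k.
pts = cantor_points(8)
k = 6
dim_estimate = log(box_count(pts, k)) / log(3**k)
```

A full multi-fractal analysis would repeat this kind of scaling estimate over a whole family of moment orders q, yielding a spectrum of generalized dimensions rather than the single number computed here.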
Since this well-crafted and unique book is supported by many examples, illustrations, and images, it is easy to read and understand even though it deals with very difficult scientific and technical matters. It should be of interest not only to students, researchers, and practitioners in computer vision, but also to the broader audience working in mechatronics, robotics, secure and resilient systems, and adaptive, perceptual, and cognitive systems.
Cognitive Systems Laboratory
Department of Electrical and Computer Engineering
University of Manitoba
Computer vision has been an active area of research in the computer science community for well over half a century. Research areas within this community, such as image segmentation and object recognition, continue to be actively pursued, and they remain challenging when it comes to developing general approaches to broad classes of problems: approaches that are promising in the laboratory and on standard data sets often fail to meet expectations in real-world applications. Researchers have drawn motivation for their algorithms from a variety of other fields, including physics, physiology, psychology, evolutionary biology, and mathematics. Approaches inspired by physics include optical flow, which is based on fluid-flow theory, and normalized graph cuts, which are loosely based on coupled-oscillator models. Likewise, numerous mathematically inspired models have been employed, such as probability distribution mixture models, fractal analysis, support vector machines, and optimization theory. Approaches inspired by physiology include neural networks, while evolutionary biology has inspired genetic algorithms and swarm-based algorithms.
In parallel, over the past thirty years, researchers in dynamical systems and physics have explored the field of chaos theory, in which complex system behaviors are understood as emerging from relatively simple mathematical constructs. This research has found a place in a broad variety of applications, such as weather analysis, mechanical vibration analysis, and electrical signals in biological neural pathways. Much of the biological research has been conducted on perceptual systems, such as the olfactory and visual systems. Because of these efforts to explain biological neural systems through chaotic dynamics, it was quite natural that researchers began incorporating chaos theory into artificial neural networks, and this has become a common approach for enhancing neural-network performance.
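The hallmark of such systems, complex and unpredictable behavior arising from a very simple construct, can be seen in a few lines of code. The sketch below (a standard textbook example, not drawn from this book) iterates the logistic map x → r·x·(1−x) at r = 4, where the map is chaotic, and exhibits its sensitive dependence on initial conditions: two orbits starting 10⁻⁸ apart quickly separate to order one.

```python
def logistic_orbit(x0, r=4.0, steps=100):
    """Iterate x -> r*x*(1-x) and return the full orbit as a list."""
    orbit = [x0]
    for _ in range(steps):
        x0 = r * x0 * (1.0 - x0)
        orbit.append(x0)
    return orbit

# Sensitive dependence on initial conditions: perturb the start by 1e-8
# and track the largest gap between the two orbits.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-8)
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

Despite the one-line update rule, the perturbation grows roughly geometrically until it saturates at the size of the attractor itself, which is exactly the kind of complexity-from-simplicity that chaos theory formalizes.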
In the computer vision community, however, chaos theory has to date been applied predominantly to neural networks. Some researchers have applied it to peripheral tasks such as image compression, but it has not found a home in traditional computer vision tasks such as segmentation, image registration, or feature extraction. In this text we remedy this shortcoming: we explore applying concepts from chaos theory to all aspects of computer vision, including static and moving object segmentation, scene registration and tracking, and neural and genetic-algorithm approaches to object classification.
One powerful concept that has arisen from research on biological neural systems is the aperiodic forcing function, which, when applied to a system, can create complex chaos-like behavior. This concept is exploited throughout this text, both for generating temporal chaotic-like behavior in image sequences and for generating spatial chaotic-like behavior in image textures. In particular, we demonstrate that illumination effects in image scenes can be modeled by linear forcing functions, while motion, contextual change, and image texture can be modeled by aperiodic forcing functions. We exploit this distinct difference between forcing functions to develop approaches to computer vision applications. This is a dramatically different strategy from that of algorithms to date, which intentionally linearize these various effects with a variety of approximations. In this text we embrace the inherent nonlinearities present in complex image scenes and, by analyzing them with the tools of chaos theory, discover approaches that are inherently immune to illumination change. Results from a variety of real-world applications, using data sets collected by researchers throughout the world, demonstrate the efficacy of the proposed approaches.
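As a toy numerical illustration of the distinction drawn above (our own sketch, not the book's model), the code below drives one signal with a slow linear ramp, standing in for a gradual illumination change, and another with an aperiodic driver drawn from a chaotic map, standing in for motion or texture. Even a crude roughness statistic separates the two regimes.

```python
def logistic_driver(x0=0.3, r=4.0, n=200):
    """Aperiodic driving sequence generated by the chaotic logistic map."""
    seq = []
    for _ in range(n):
        x0 = r * x0 * (1.0 - x0)
        seq.append(x0)
    return seq

n = 200
linear_forcing = [0.5 + 0.002 * t for t in range(n)]  # slow ramp: "illumination"
aperiodic_forcing = logistic_driver(n=n)              # chaotic driver: "motion/texture"

def roughness(sig):
    """Mean absolute first difference: a crude measure of signal irregularity."""
    return sum(abs(b - a) for a, b in zip(sig, sig[1:])) / (len(sig) - 1)
```

The linearly forced signal has a tiny, constant increment, while the aperiodically forced one jumps erratically at every step; statistics of this flavor, suitably refined, are what allow illumination change and genuine scene dynamics to be told apart.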
One interesting outcome of this book is that the algorithms we present for a broad range of vision tasks are all based on the common theme of chaos theory. Tasks that in the past have been handled by significantly different algorithms, such as motion segmentation and texture analysis, are processed here with a common chaos-theoretic approach. This text provides the first attempt at a fully unified view of computer vision tasks, with the various tasks treated as manifestations of temporal or spatial chaos and a common toolset of algorithms applicable across all of them.
List of Contributors
University of Michigan-Flint