Dynamic scene analysis is a key task in numerous real-world applications, such as security and disaster management, where surveillance videos can be used for the early detection, classification and monitoring of natural hazards like floods and fires. The analysis of such videos is of utmost importance during natural disasters, since it can significantly improve situational awareness.
The motions in dynamic scenes are complex and highly non-rigid, with many auto-correlations and occlusions that render their analysis costly in terms of computation and memory requirements. Surveillance videos of interest in natural disasters contain dynamic textures (fire, smoke, flood), which various methods address either by modeling them with statistical and dynamical system techniques, or by representing them with local descriptors, in order to discriminate between texture classes and classify them.
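As a minimal illustration of the local-descriptor route, the sketch below computes a basic Local Binary Pattern (LBP) histogram over a grey-level patch and compares two histograms with a chi-squared distance. This is a simplified static LBP, not the specific descriptors of the methods discussed here (dynamic-texture work typically uses spatio-temporal extensions such as LBP-TOP); function names are illustrative.

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre
    pixel, pack the 8 comparison bits into a code, and histogram the codes."""
    c = patch[1:-1, 1:-1]  # centre pixels (borders skipped)
    # the 8 neighbours, clockwise starting from top-left
    neigh = [patch[0:-2, 0:-2], patch[0:-2, 1:-1], patch[0:-2, 2:],
             patch[1:-1, 2:],   patch[2:, 2:],     patch[2:, 1:-1],
             patch[2:, 0:-2],   patch[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neigh):
        codes |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised descriptor

def chi2(h1, h2):
    """Chi-squared distance, a common choice for comparing LBP histograms."""
    denom = h1 + h2
    mask = denom > 0
    return 0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])
```

A structured texture (e.g. a smooth gradient) concentrates its LBP codes in a few bins, while a noisy texture spreads them out, so the chi-squared distance between the two descriptor types is large; this separation is what a descriptor-based classifier exploits.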
Such techniques can be combined with person and vehicle detection, so that their outputs are fed forward to a semantic reasoner that determines whether people or vehicles are endangered by fire or flood.
Smart city technologies for assistive transportation and safe driving make up one of the most intriguing domains of computer science and have attracted significant attention during the last decade. Video surveillance, along with various other types of monitoring infrastructure, provides a huge amount of exploitable data for extracting optimal traffic management rules, increasing safety in busy streets, detecting, predicting and preventing accidents, and numerous other traffic monitoring applications.
Moreover, the growing industry trend towards autonomous driving, vehicles, and transportation in general is changing the landscape of traffic analysis. The visual content from traffic cameras will, in the near future, also be used to manage autonomous vehicle navigation, by sending information about events elsewhere in the city, such as traffic conditions and pedestrian congestion, to guide vehicles optimally. Automated analysis of visual traffic data is necessary to extract useful information in a reasonable amount of time and with minimum human involvement in these cumbersome and extremely time-consuming tasks. Many computer vision algorithms have already been developed for this purpose, including automatic vehicle detection and tracking, speed and traffic flow analysis, and detection of abnormal events, and their accuracy is continuously increasing. A big challenge, however, lies in the development of fast and computationally efficient methods for actual real-world scenarios that demand near real-time solutions.
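Vehicle detection pipelines of the kind mentioned above commonly start from a foreground/background separation step. The sketch below assumes a simple running-average background model with per-pixel thresholding; it is a minimal illustration only, as production systems typically use more robust models (e.g. Gaussian mixtures) and the parameter values here are arbitrary.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: the background adapts slowly, so
    fast-moving objects such as vehicles do not get absorbed into it."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Label as foreground every pixel that deviates from the background
    model by more than `thresh` grey levels."""
    return np.abs(frame.astype(np.float64) - bg) > thresh
```

In a full pipeline, the boolean mask would be cleaned with morphological operations and grouped into connected components, each component giving a candidate vehicle bounding box to track across frames.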
More and more patients without critical conditions are asked to live in their own homes, as nursing homes and hospitals cannot accommodate them on their premises for long periods. However, for some of them it is essential that a doctor or carer continue to monitor their health and keep a log of their behaviour over time.
For these purposes, we present a series of human activity detection and recognition methods that can detect human activities not only in controlled but also in uncontrolled environments. Several algorithms have been deployed on both static cameras (CCTV, IP cameras) and wearable systems (GoPro, etc.) and achieve results that are both computationally efficient and accurate.
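As an illustrative sketch only, and not the actual algorithms deployed in this work, a clip-level activity recogniser can be assembled from crude motion-energy features and a nearest-centroid classifier; the feature choice and class names below are hypothetical.

```python
import numpy as np

def motion_energy(frames):
    """Very crude per-clip motion feature: mean and spread of the absolute
    frame-to-frame differences over a (T, H, W) grey-level clip."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return np.array([diffs.mean(), diffs.std()])

class NearestCentroid:
    """Tiny nearest-centroid classifier over clip feature vectors."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, x):
        dists = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(dists))]
```

Real activity recognition systems replace both pieces, using richer spatio-temporal features (or learned representations) and stronger classifiers, but the train-on-features, assign-to-nearest-class structure is the same.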