The BREATHE Project aims to provide a rich platform for improving the quality of life of informal caregivers at all levels. To reach a plausible scenario in which family caregivers can find useful services and resources to adequately support domiciliary care, they must be involved in the development process from the earliest stages.
The system offers three independent channels of interaction. The first is a web application adapted to the limited ICT (Information and Communications Technology) skills of informal caregivers, with special emphasis on making it appealing and friendly as well as unobtrusive, so that it does not inhibit their daily activities. The second is a smartphone application that gives the informal caregiver ubiquitous access to BREATHE facilities. The third is the AAL (Ambient Assisted Living) system in the home itself. BREATHE assumes that informal caregivers need to have "eyes" in the home of the elderly or assisted person who needs care. Although vision is the most basic cognitive process used for recognising a person, an event or an action, fusing video data with information acquired by other sensors can facilitate scene analysis. Appropriate measures will be taken to preserve dignity and maintain privacy and confidentiality.
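As a rough illustration of this kind of multi-sensor fusion (a minimal sketch only, not the BREATHE pipeline; the sensor names, weights and alert threshold below are invented for the example), confidences from a video-based event detector and from other home sensors could be combined before an alert is raised:

```python
# Minimal sketch (not the BREATHE implementation): combine the confidence of a
# video-based event detector with readings from other home sensors, so that a
# "fall" alert is only raised when several modalities agree.
from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # e.g. "camera", "accelerometer", "pressure_mat" (hypothetical)
    event: str         # e.g. "fall"
    confidence: float  # detector confidence in [0, 1]

def fuse(observations, weights, threshold=0.7):
    """Weighted average of per-sensor confidences for the same event."""
    total, weight_sum = 0.0, 0.0
    for obs in observations:
        w = weights.get(obs.source, 1.0)
        total += w * obs.confidence
        weight_sum += w
    score = total / weight_sum if weight_sum else 0.0
    return score, score >= threshold

if __name__ == "__main__":
    obs = [
        Observation("camera", "fall", 0.62),
        Observation("accelerometer", "fall", 0.85),
        Observation("pressure_mat", "fall", 0.40),
    ]
    score, alert = fuse(obs, weights={"camera": 2.0, "accelerometer": 1.5, "pressure_mat": 1.0})
    print(f"fused score {score:.2f}, raise alert: {alert}")
```

In practice the weights and threshold would have to be tuned per household; the point of the sketch is only that agreement across modalities, rather than any single camera detection, is what triggers an alert.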
Advanced systems in the surveillance application domain complement the use of fixed sensors with mobile platforms (UGVs and UAVs). Such mobile platforms, whether endowed with limited or advanced capabilities, contribute to novel, more comprehensive solutions in which multiple modalities work jointly to give the user an intelligent monitoring system.
Applications vary greatly depending on whether they target civilian or military domains, on the amount, type and level of sophistication of the sensors, and on the topology of deployment: static and fixed, or dynamic and reconfigurable when UGV and UAV platforms are deployed in support of the fixed system.
The main goal of PROACTIVE is to research a holistic, citizen-friendly multi-sensor fusion and intelligent reasoning framework enabling the prediction, detection and understanding of, and efficient response to, terrorist interests, goals and courses of action in an urban environment. To this end, PROACTIVE will rely on the fusion of both static knowledge (i.e. intelligence information) and dynamic information (i.e. data observed from sensors deployed in the urban environment). The framework will be user-driven, given that the project is supported by a rich set of end-users, who are either members of the consortium or members of a special end-user advisory board.
From a technological perspective, PROACTIVE will integrate a host of novel technologies enabling the fusion of multi-sensor data with contextual information (notably 3D digital terrain data), while also resolving the ambiguities of the fusion process. Moreover, the PROACTIVE framework will incorporate advanced reasoning techniques (such as adversarial reasoning) in order to intelligently process the source streams and derive high-level terrorist-related semantics from them. The latter techniques will be adapted to the terrorist domain in order to facilitate prediction and anticipation of the actions and goals of terrorist entities.
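As a hedged illustration of combining static knowledge with dynamic observations (a naive Bayesian update, not PROACTIVE's actual reasoning engine; the hypotheses, priors and likelihoods below are invented for the example):

```python
# Illustrative sketch only: fuse a static prior belief (intelligence information)
# with a stream of dynamic sensor reports via a naive Bayesian update over a
# small set of hypothesised threat levels.

def bayes_update(prior, likelihoods):
    """Return the posterior P(hypothesis | observation) given a prior and
    per-hypothesis likelihoods P(observation | hypothesis)."""
    unnormalised = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalised.values())
    return {h: p / z for h, p in unnormalised.items()}

# Static knowledge: prior belief over threat levels in a monitored area.
belief = {"benign": 0.90, "suspicious": 0.08, "hostile": 0.02}

# Dynamic information: each sensor report maps to likelihoods per hypothesis.
reports = [
    {"benign": 0.30, "suspicious": 0.50, "hostile": 0.70},  # loitering detected
    {"benign": 0.20, "suspicious": 0.60, "hostile": 0.80},  # unattended object
]

for likelihoods in reports:
    belief = bayes_update(belief, likelihoods)

print({h: round(p, 3) for h, p in belief.items()})
```

The real framework must also handle contextual information and adversarial behaviour, which this toy update deliberately ignores; it only shows how static and dynamic sources can feed a single evolving belief.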
Overall, PROACTIVE will leverage cutting-edge technologies such as the Network Enabled Capability (NEC) approach and the emerging Internet-of-Things (IoT) concept, which are key enablers of new capabilities associated with real-time awareness of the physical environment, as well as with tracking and analysing human behaviour. PROACTIVE will address the technological challenges that inhibit the wider deployment of NEC/IoT in anti-terrorist applications.
Following the deployment and evaluation of the framework, PROACTIVE will produce a set of best practices and blueprints that will contribute to a common EU approach to the prevention of terrorism in urban environments.
The number of people over 50 will rise by 35% between 2005 and 2050, and the number of people over 85 will triple by 2050. OECD analyses forecast increasing costs as a result of ageing populations. Attention to the needs of the elderly and disabled is therefore one of the great challenges of social and economic policy in all developed countries today. The challenge is to serve people who, being in vulnerable situations, need support to carry out the most essential activities of daily living.
These requirements raise the question of how ICTs could be applied to the environments in which users live: interacting with people in a natural way, available wherever they are needed, sensitive to the user and the context, and acting proactively.
There is worldwide interest in the research and development of systems for the analysis of people's activities, especially those of people most in need, the elderly and the disabled. These systems are composed of networks of sensors that analyse the environment and extract knowledge from it in order to detect anomalous behaviour or send alarms to care services.
Vision systems for behaviour analysis have spread in recent years, driven mainly by security demands and the falling cost of the devices. However, most systems are applied to outdoor environments; when installed indoors they are used in large facilities, and seldom within the home. This is due to two factors: the need for distributed multi-camera systems, and users' privacy requirements.
This project will deal with these aspects by designing and developing intelligent multimodal systems for the behaviour analysis of people in private environments, especially their homes. People will only accept such technology and services if their privacy can be guaranteed under all circumstances, so techniques will be developed to ensure the privacy of those being observed. The richness of the information provided by these visual devices would open up a new field of services, not only for the people being supported but also for their families and carers.
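One common privacy-preserving approach, shown here only as an assumed sketch rather than the project's actual design, is to extract a foreground silhouette on the device and analyse coarse features of it, so that raw video never needs to be stored or transmitted:

```python
# Assumed sketch of a privacy-preserving analysis step: only a foreground
# silhouette mask is analysed per frame, and only coarse anonymised features
# (bounding box and area) leave the device; raw frames are discarded.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def silhouette_features(frame):
    """Return coarse features of the largest moving region, or None."""
    mask = subtractor.apply(frame)          # foreground/background segmentation
    mask = cv2.medianBlur(mask, 5)          # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return {"bbox": (x, y, w, h), "area": float(cv2.contourArea(largest))}

cap = cv2.VideoCapture(0)                   # any camera or video file path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    features = silhouette_features(frame)   # only these features are reported
    if features:
        print(features)
cap.release()
```

This is deliberately conservative: behaviour analysis then operates on shapes and trajectories rather than identifiable imagery, which is one way of meeting the privacy requirement described above.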
Recent interest in surveillance for public, military and commercial applications is increasing the need to create and deploy intelligent, semi-automated visual surveillance systems. The overall objective of this project is to develop a system that allows robust and efficient coordination among robots, vision sensors and human guards in order to enhance surveillance in crowded environments such as airports, federal buildings, shopping malls and other public places.
The system is structured hierarchically: a central control node forms the root, the monitored space is subdivided into regions each served by a local processing node, and at the bottom of the hierarchy sit conventional surveillance processing nodes (intelligent sensors) and mobile processing nodes represented by human personnel and robotic platforms.
The guards, mobile robots and operators at a control centre communicate by means of a wireless network infrastructure, to which different devices such as mobile phones, PDAs, desktop PCs and the computational units on board the robots are also connected.
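A rough sketch of this hierarchy, with purely hypothetical class and node names, might look as follows; events detected at the leaves are handled locally and escalated towards the central control node:

```python
# Hypothetical sketch of the node hierarchy described above: a central control
# node at the root, one local processing node per monitored region, and fixed
# sensors or mobile units (guards, robots) as the leaves.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingNode:
    name: str
    children: List["ProcessingNode"] = field(default_factory=list)

    def report(self, event: str):
        """Print how an event is forwarded at this level of the hierarchy."""
        print(f"[{self.name}] forwarding event: {event}")

control = ProcessingNode("central-control")
region_a = ProcessingNode("region-A")
region_a.children = [
    ProcessingNode("fixed-camera-01"),
    ProcessingNode("patrol-robot-02"),
    ProcessingNode("guard-PDA-03"),
]
control.children = [region_a]

# A leaf detection is handled locally, then escalated up the tree.
region_a.children[0].report("intrusion at gate 4")
region_a.report("intrusion at gate 4")
control.report("intrusion at gate 4")
```

The tree shape mirrors the description above: regional nodes can react locally (e.g. task a nearby robot or guard), while the central control node keeps the global picture.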
Phase one of this project is complete; see the description above.
The Morphometric Herbarium Image Data Analysis (MORPHIDAS) project involves the automatic extraction of leaf information from digital images of whole herbarium specimens.
Examples include measuring the length and width of leaves and characterising their veins and teeth. Such information can then be used to automate species analysis, species identification and related tasks. Advances in the fields of image interpretation and automatic plant characterisation, classification and species identification are ongoing.
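As a hedged example of the simplest such measurement (not the MORPHIDAS method itself; the file name and pixel-to-millimetre scale are placeholders), leaf length and width can be estimated by fitting a minimum-area rectangle to a segmented leaf contour:

```python
# Illustrative sketch: estimate leaf length and width from a herbarium image by
# thresholding the leaf against the sheet and fitting a rotated bounding box.
import cv2

PIXELS_PER_MM = 10.0   # assumed calibration, e.g. from a ruler in the specimen image

def leaf_length_width(path):
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu threshold; the leaf is assumed darker than the herbarium sheet.
    _, mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    leaf = max(contours, key=cv2.contourArea)        # assume largest blob is the leaf
    (_, _), (w, h), _ = cv2.minAreaRect(leaf)        # rotated bounding box in pixels
    length, width = max(w, h), min(w, h)
    return length / PIXELS_PER_MM, width / PIXELS_PER_MM

length_mm, width_mm = leaf_length_width("specimen_0001.png")  # placeholder file name
print(f"leaf length ~{length_mm:.1f} mm, width ~{width_mm:.1f} mm")
```

Real herbarium sheets contain labels, stems and overlapping leaves, so the actual pipeline needs far more careful segmentation; the sketch only shows where measurements such as length and width come from once a leaf outline is available.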