
University of Catania



Project
The project “Egocentric Vision and Wearable Computing for Learning through Imitation” aims to develop new intelligent systems capable of learning human abilities from signals acquired by wearable sensors and transferring them to robotic platforms and/or software agents. It also aims to develop advanced ICT solutions for products and services that, based on the “Home & Building Automation”, “Ambient Assisted Living” and “Ambient Intelligence” paradigms, allow work and living environments to be redesigned so as to ensure security, inclusion, assistance, health and an improved quality of life for the people who live or work in them.
Research activities include the design and development of advanced Artificial Intelligence algorithms based on Imitation, Reinforcement and Self Learning techniques able to process multimodal signals acquired through different wearable devices (wearable cameras, body sensors, etc.) using the paradigm of “First-Person” systems.
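As a rough illustration of the imitation-learning idea mentioned above, the sketch below fits a simple policy that maps wearable-sensor feature vectors to an expert's actions (behavioral cloning as least-squares regression). All names, dimensions and data here are hypothetical and purely illustrative; they are not taken from the project itself.

```python
import numpy as np

# Toy behavioral-cloning sketch (illustrative assumption, not the
# project's actual algorithm): learn a linear policy mapping
# wearable-sensor features to expert actions via least squares.

rng = np.random.default_rng(0)

# Simulated demonstrations: 200 timesteps of 6-D sensor readings
# (e.g. accelerometer + gyroscope) paired with 2-D expert actions.
X = rng.normal(size=(200, 6))      # sensor features per timestep
W_true = rng.normal(size=(6, 2))   # unknown expert mapping
Y = X @ W_true                     # expert actions (demonstrations)

# Behavioral cloning: W = argmin ||X W - Y||^2
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned policy imitates the expert on new sensor input.
x_new = rng.normal(size=(1, 6))
action = x_new @ W
```

In practice the project would replace this linear model with deep networks over multimodal first-person signals, but the supervised structure of imitation learning (demonstrations in, policy out) is the same.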

Financial Resources
The project was funded by the NOP Research and Innovation 2014-2020 under the Action “Innovative Industrial Doctorates”, Cycle XXXIV - Academic Year 2018-2019. For the implementation of this initiative, the University of Catania, which received €89,040.87 in funding, established a three-year doctoral scholarship in Computer Science at the Department of Mathematics and Computer Science.

Impact
The project falls within the “Smart Factory” specialization area and has a high degree of innovation, with an impact on the market for Artificial Intelligence, Computer Vision and Machine Learning applications. Over the last decade this market has seen the proliferation of wearable devices equipped with computational capabilities, inertial sensors, GPS, cameras, Augmented Reality visors and armbands. These devices, already commercially available, make it possible to envisage new intelligent systems that support the wearer in carrying out specific tasks. Such systems can acquire information from the first-person point of view, which is then processed by Machine Learning and Computer Vision algorithms to automatically understand the actions individuals carry out, the context in which they move and the user’s behaviour.
The aim of the project is to advance the state of the art in first-person signal analysis in order to develop algorithms able to build imitation and self-learning models, so as to transfer human abilities to robotic platforms and/or assistance systems in specific industrial applications or in everyday life (e.g. to understand a complex multi-step task and show the user how to perform it).

The results obtained will be published in conference proceedings and international journals, and the possibility of patenting the solutions produced will be considered with the other project partners. The demonstrator developed in the project will be proposed to associations and companies in the reference application context in order to receive feedback, produce the proposed solutions on an industrial scale and capitalize on them.

The project includes a 9-month placement at OrangeDev srl, a Sicilian software company with expertise in processing large amounts of data with Machine Learning algorithms and in the localization of robotic platforms navigating in environments, and a 9-month placement at the Department of Computer Science and Electronic Engineering, University of Essex, United Kingdom.


08/11/2021