Completed projects

VR-Together

An end-to-end system for the production and delivery of photorealistic and social virtual reality experiences (2017: H2020 IA)

Funding Organization: European Commission
Funding Programme: Horizon H2020
Funding Instrument: Innovation Action
Start Date: October 1, 2017
Duration: 36 months
Total Budget: 3,929,937 EUR
ITI Budget: 509,000 EUR

VR-Together will offer ground-breaking virtual reality experiences based on social photorealistic immersive content, which can be experienced together with friends, and will demonstrate their use for domestic VR consumption. For this purpose, it will develop and assemble an end-to-end pipeline integrating state-of-the-art technologies and off-the-shelf components. Immersive media production and delivery will be achieved through innovative capture, encoding, delivery and rendering technologies.

The challenge of VR-Together is to create truly social, photorealistic virtual reality experiences in a cost-effective manner. Social VR experiences will be delivered by orchestrating innovative media formats (video, blended videos, point cloud representations and 3D mesh interpolation). The production and delivery of such experiences will be demonstrated and analysed through dedicated real-world trials. Furthermore, the scalability of the approach for producing and delivering immersive content will also be demonstrated, with such content consumed by a large community of users, in everyday living rooms, using off-the-shelf (OTS) hardware components and scalable cloud services.
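
As a hedged illustration of one low-level step that such a capture pipeline typically builds on (a sketch, not project code), the snippet below back-projects an RGB-D depth map into a point cloud using the standard pinhole camera model; the sensor intrinsics and resolution are invented for the example.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into camera-space 3D points (N x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole camera model
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example: a synthetic 480x640 depth map from a hypothetical RGB-D sensor.
depth = np.full((480, 640), 2.0)                     # everything 2 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```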

Finally, it will introduce new methods for social VR evaluation and quantitative platform benchmarking for both live and interactive content production, thus providing production and delivery solutions with significant commercial value.

S. Thermos, G. T. Papadopoulos, P. Daras and G. Potamianos, "Deep Sensorimotor Learning for RGB-D Object Recognition", Computer Vision and Image Understanding, 190, 102844, 2020. DOI: https://doi.org/10.1016/j.cviu.2019.102844

V. Sterzentsenko, L. Saroglou, A. Chatzitofis, S. Thermos, N. Zioulis, A. Doumanoglou, D. Zarpalas, P. Daras, "Self-Supervised Deep Depth Denoising", In Proceedings of the International Conference on Computer Vision, Seoul, Republic of Korea, October 27 - November 2, 2019.

K. Christaki, K. Apostolakis, A. Doumanoglou, N. Zioulis, D. Zarpalas, P. Daras, "Space Wars: An AugmentedVR Game", In 25th International Conference on MultiMedia Modeling (MMM), Thessaloniki, Greece, January 8-11, 2019.

K. Christaki, E. Christakis, P. Drakoulis, A. Doumanoglou, N. Zioulis, D. Zarpalas, P. Daras, "Subjective Visual Quality Assessment of Immersive 3D Media Compressed by Open-Source Static 3D Mesh Codecs", In 25th International Conference on MultiMedia Modeling (MMM), Thessaloniki, Greece, January 8-11, 2019.

A. Chatzitofis, D. Zarpalas, S. Kollias, P. Daras, "DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors", Sensors, Special Issue: Depth Sensors and 3D Vision, 19(2), 282, 2019. DOI: https://doi.org/10.3390/s19020282

A. Karakottas, A. Papachristou, A. Doumanoglou, N. Zioulis, D. Zarpalas, P. Daras, "Augmented VR", IEEE VR, Reutlingen, Germany, March 18-22, 2018. https://www.youtube.com/watch?v=7O_TrhtmP5Q [VIDEO]

ALADDIN

ALADDIN (2017: H2020-SEC-12-FCT-2016-2017)

Funding Organization: European Commission
Funding Programme: H2020-SEC-12-FCT-2016-2017
Funding Instrument: Research & Innovation Action
Start Date: September 1, 2017
Duration: 36 months
Total Budget: 4,998,240 EUR
ITI Budget: 496,250 EUR

ALADDIN will study, design, develop, and evaluate, in a series of complementary pilots, a counter-UAV system as a complete solution to the growing UAV threat problem, building upon a state-of-the-art system and enhancing it by researching various technologies and functionalities. ALADDIN will follow a holistic, heavily user-centred methodology involving a large number of LEAs and critical infrastructure operators, as well as an expert Advisory Board panel ensuring end-user diversity, as they all face different kinds of threats and work within different regulatory frameworks. This diversity is important to shape EU-wide system specifications and the innovative training curricula that will be realised to share the knowledge gained and raise awareness. Furthermore, all regulatory, social, ethical and legal elements will be studied thoroughly and continuously within the project, with an impact assessment produced and its results monitored during the project’s lifetime.

ALADDIN’s sensing arsenal comprises a set of custom, innovative, and unique technologies, as well as established, standard sensors used for UAV detection and localisation:

1) 2D/3D paired radars;
2) innovative optronic and thermal panoramic imaging;
3) custom-designed acoustic sensors.

These will be fused through novel deep learning techniques in order to provide excellent detection accuracy. Further, ALADDIN will study and offer a set of neutralization effectors (jammers, physical and hacking). These sensing and countering capabilities will be operated through an advanced command and control (C2) system. The C2 will achieve high detection and classification accuracy over a large range by fusing data acquired from all sensors through state-of-the-art deep learning techniques. Operators’ efficiency will be enhanced through a novel mixed reality interface with 3D cartographic and situational elements, complemented by support for operations such as investigation and training.

VCL’s role in the project includes the development of Deep Learning methodologies for the pre-processing and analysis of each modality (2D/3D radars, optronic, thermal, acoustic) to detect and track enemy targets. Moreover, VCL will work on the fusion of those modalities to produce a single robust detector for UAV threats.
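
A minimal sketch of the late-fusion idea in Python, assuming per-modality detectors that each emit a confidence score; the project's actual fusion relies on deep networks, and all weights and scores below are invented for illustration.

```python
import numpy as np

def fuse_scores(scores, weights, bias):
    """Logistic late fusion of per-modality detection confidences."""
    z = np.dot(scores, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))   # fused probability that a UAV is present

# Per-modality confidences in [0, 1]: [radar, optronic, thermal, acoustic].
scores = np.array([0.9, 0.4, 0.7, 0.2])
weights = np.array([2.0, 1.0, 1.5, 0.5])   # hypothetical learned weights
print(fuse_scores(scores, weights, bias=-2.5))  # ~0.70
```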

V. Magoulianitis, D. Ataloglou, A. Dimou, D. Zarpalas, P. Daras, "Does Deep Super-Resolution Enhance UAV Detection?", In Proceedings of the 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Taipei, Taiwan, September 18-21, 2019.

S. Samaras, V. Magoulianitis, A. Dimou, D. Zarpalas, P. Daras, "UAV Classification With Deep Learning Using Surveillance Radar Data", In Proceedings of the 12th International Conference on Computer Vision Systems (ICVS), Springer, Cham, 2019.

S. Samaras, E. Diamantidou, D. Ataloglou, N. Sakellariou, A. Vafeiadis, V. Magoulianitis, A. Lalas, A. Dimou, D. Zarpalas, K. Votis, P. Daras, D. Tzovaras, "Deep Learning on Multi Sensor Data for Counter UAV Applications - A Systematic Review", Sensors, Special Issue: Deep Learning for Multi-Sensor Fusion, 19(22), 4837, 2019.

HYPER360

Enriching 360 media with 3D storytelling and personalization elements (2017: H2020 IA)

Funding Organization: European Commission
Funding Programme: Horizon H2020
Funding Instrument: Innovation Action
Start Date: October 1, 2017
Duration: 36 months
Total Budget: 3,736,255 EUR
ITI Budget: 560,438 EUR

Media technologies are booming, creating a rapidly evolving landscape. Two media markets that can be singled out are global Internet Protocol Television (IPTV/Internet TV), which produces "over-the-top" (OTT) content (audio, video and other types of media delivered over the internet), and the emerging Virtual Reality (VR) market. Amidst these rapidly growing market trends and emerging media, the numerous developments with respect to varying content types, their creation, retrieval and interactions, as well as the pervasiveness of mobile devices, offer unique and unexplored opportunities for designing new experiences for audiences. Hyper360's main objective is to introduce a complete solution for the capture, production, enhancement, delivery and consumption of an innovative free viewpoint video (FVV) media format to the OTT media sectors, through careful validation and large demonstrations. Envisioning increasingly immersive experiences, the convergence of omnidirectional (360°) and 3D content will extend current short 360° video productions with novel and powerful storytelling opportunities. Furthermore, leveraging the capabilities inherent to the selected format, the broadband-delivered content will be offered with additional audiovisual functionalities, bringing together more types of digital content and adapting to viewers' preferences to offer unique experiences upon content consumption.

The end goal of Hyper360 is to develop an enriched 360° media production toolset capable of delivering compelling personalized and interactive viewing experiences. This toolset will enable content distributors to deliver these media through broadband (OTT) to a variety of platforms. To this end, existing work and technologies among the consortium members (360° video – DRK/FOKUS, personalization engine – FOKUS/CERTH, 3D capture – CERTH) will be adapted and extended by their respective IP owners while being evaluated for integration (ENG), with expertise in the fields of image processing (FOKUS/CERTH) and video quality analysis (JRS) utilized to design and create the required technological components that complete the proposed production solution, namely the 360°/3D fusion and emplacement technology (CERTH) and the online produced-content quality analysis (JRS).
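
As a hedged aside on the geometry that 360°/3D fusion and spherical depth estimation rest on (a sketch, not project code), the snippet below maps an equirectangular pixel to a 3D ray direction; axis ordering and angle origins are conventions, and this is just one common choice.

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular image to a unit direction vector."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi       # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# The image centre looks straight down the +z axis under this convention.
print(equirect_to_ray(2048, 1024, 4096, 2048))  # ~ [0, 0, 1]
```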

A. Karakottas, N. Zioulis, A. Doumanoglou, V. Sterzentsenko, V. Gkitsas, D. Zarpalas, P. Daras, "XR360: A toolkit for mixed 360 and 3D productions", In IEEE International Conference on Multimedia and Expo (ICME), London, United Kingdom, July 6-10, 2020.

V. Gkitsas, N. Zioulis, F. Alvarez, D. Zarpalas, P. Daras, "Deep Lighting Environment Map Estimation from Spherical Panoramas", In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Washington, United States, June 2020.

V. Sterzentsenko, A. Doumanoglou, S. Thermos, N. Zioulis, D. Zarpalas, P. Daras, "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment", In International Conference on Virtual Reality (IEEE VR), March 22-26, 2020.

R. Athanasoulas, P. Boutis, A. Chatzitofis, A. Doumanoglou, P. Drakoulis, L. Saroglou, V. Sterzentsenko, N. Zioulis, D. Zarpalas, P. Daras, "AVoidX: An Augmented VR Game", In International Conference on Virtual Reality (IEEE VR), March 22-26, 2020.

B. Takacs, Z. Vincze, H. Fassold, A. Karakottas, N. Zioulis, D. Zarpalas, P. Daras, "Hyper 360 - Towards a Unified Tool Set Supporting Next Generation VR Film and TV Productions", Journal of Software Engineering and Applications, 12, 127-148, 2019.

V. Sterzentsenko, A. Karakottas, A. Papachristou, N. Zioulis, A. Doumanoglou, D. Zarpalas, P. Daras, "A low-cost, flexible and portable volumetric capturing system", 14th International Conference on Signal Image Technology & Internet based Systems (SITIS 2018), Las Palmas de Gran Canaria, Spain, November 26-29, 2018.

A. Karakottas, N. Zioulis, D. Zarpalas, P. Daras, "360D: A dataset and baseline for dense depth estimation from 360 images", 1st Workshop on 360° Perception and Interaction, European Conference on Computer Vision (ECCV), Munich, Germany, September 8-14, 2018.

N. Zioulis, A. Karakottas, D. Zarpalas, P. Daras, "OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas", European Conference on Computer Vision (ECCV), Munich, Germany, September 8-14, 2018.

A. Papachristou, N. Zioulis, D. Zarpalas, P. Daras, "Markerless Structure-based Multi-sensor Calibration for Free Viewpoint Video Capture", 26th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2018), Pilsen, Czech Republic, May 28 - June 1, 2018.

D. Alexiadis, N. Zioulis, D. Zarpalas, P. Daras, "Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors", Pattern Recognition, 79, 260-278, 2018. DOI: https://doi.org/10.1016/j.patcog.2018.02.013

5G-Media

Programmable edge-to-cloud virtualization fabric for the 5G Media industry (2017: H2020 IA)

Funding Organization: European Commission
Funding Programme: Horizon H2020
Funding Instrument: Innovation Action
Start Date: June 1, 2017
Duration: 30 months
Total Budget: 6,128,531 EUR
ITI Budget: 376,000 EUR

The remarkable research of the 5G PPP H2020 programme has so far focused largely on the required advances in network architectures, technologies and infrastructures. Less attention has been paid to the applications and services that will make use of and exploit advanced 5G network capabilities. 5G-MEDIA aims at innovating media-related applications by investigating how these applications and the underlying 5G network should be coupled and interwork to the benefit of both. In this respect, 5G-MEDIA addresses the objectives of 1) capitalizing on and properly extending the valuable outcomes of the running 5G PPP projects to offer an agile programming, verification and orchestration platform for services, and 2) developing network functions and applications to be demonstrated in large-scale deployments, based on 3 well-defined use cases (in the areas of immersive media and VR, smart production and user-generated content, and UHD over CDN) with diverse requirements and particular interest for the consortium partners. Based on the adoption of the open innovation approach, the 5G-MEDIA platform will be offered to third parties to develop, combine, verify, deploy and validate media applications by utilizing the SDK capabilities and Service Platform offerings.
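
A toy sketch of the QoE-versus-cost reinforcement-learning idea explored in the project's publications (see publication 1 below): a bandit-style agent repeatedly picks a transcoding profile and is rewarded for QoE minus operating cost. The profiles, QoE scores, costs and trade-off weight are all invented for illustration.

```python
import random

PROFILES = ["low", "medium", "high"]                  # hypothetical transcoding profiles
QOE = {"low": 2.5, "medium": 3.8, "high": 4.6}        # mean opinion score (toy values)
COST = {"low": 0.5, "medium": 1.2, "high": 2.4}       # normalized compute/bandwidth cost

q = {p: 0.0 for p in PROFILES}                        # single-state Q-table
alpha, epsilon, trade_off = 0.1, 0.2, 0.7

for _ in range(2000):
    # epsilon-greedy action selection
    a = random.choice(PROFILES) if random.random() < epsilon else max(q, key=q.get)
    reward = trade_off * QOE[a] - (1 - trade_off) * COST[a]
    q[a] += alpha * (reward - q[a])                   # bandit-style Q update

print(max(q, key=q.get), q)                           # profile the agent converges to
```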

Finally, 5G-MEDIA plans to create an ambitious business impact with the introduction of the Streaming-as-a-Service concept, built on top of a well-defined, consortium-wide exploitation plan and supported by the complementary expertise of its consortium, which represents key industrial sectors in the network and media domains: telecom operators (OTE, TID), cloud providers (SILO), PaaS/SaaS vendors (IBM), service providers (ENG), application developers (NETAS), broadcasters (RTVE), SMEs (IINV, NXW, IRT, BIT) and research centres (UCL, UPM, CERTH). Notably, the consortium includes partners with strong and active participation in the 5G PPP programme, complemented by new but important players in the media & entertainment industry.

1. P. Athanasoulis, E. Christakis, K. Konstantoudakis, P. Drakoulis, S. Rizou, A. Weitz, A. Doumanoglou, N. Zioulis, D. Zarpalas, "Optimizing QoE and Cost in a 3D Immersive Media Platform: A Reinforcement Learning Approach", In International Conference on Advances in Multimedia (MMEDIA), Lisbon, Portugal, February 23-27, 2020.
2. F. Alvarez, D. Breitgand, D. Griffin, P. Adriani, S. Rizou, N. Zioulis, F. Moscatelli, J. Serrano, M. Keltsch, P. Trakadas, T. Khoa Phan, A. Weit, U. Acar, O. Prieto, F. Iadanza, G. Carrozzo, H. Koumaras, D. Zarpalas, D. Jimenez, "An Edge-to-Cloud Virtualized Multimedia Service Platform for 5G Networks", IEEE Transactions on Broadcasting, Special Issue on 5G for Broadband Multimedia Systems and Broadcasting, 65(2), 369-380, 2019.
3. A. Doumanoglou, P. Drakoulis, N. Zioulis, D. Zarpalas, P. Daras, "Benchmarking Open-Source Static 3D Mesh Codecs for Immersive Media Interactive Live Streaming", IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 9(1), 190-203, 2019.
4. K. Konstantoudakis, E. Christakis, P. Drakoulis, A. Doumanoglou, N. Zioulis, D. Zarpalas, P. Daras, "Comparing CNNs and JPEG for Real-Time Multi-view Streaming in Tele-Immersive Scenarios", 14th International Conference on Signal Image Technology & Internet based Systems (SITIS 2018), Las Palmas de Gran Canaria, Spain, November 26-29, 2018.
5. A. Doumanoglou, N. Zioulis, E. Christakis, D. Zarpalas, P. Daras, "Subjective quality assessment of textured human full-body 3D-reconstructions", International Conference on Quality of Multimedia Experience (QoMEX 2018), Sardinia, Italy, May 29 - June 1, 2018.
6. A. Doumanoglou, N. Zioulis, D. Griffin, J. Serrano, T. Khoa Phan, D. Jimenez, D. Zarpalas, F. Alvarez, M. Rio, P. Daras, "A System Architecture for Live Immersive 3D-Media Transcoding over 5G Networks", Workshop on Media delivery innovations using flexible network models in 5G, IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB18), Valencia, Spain, June 6-8, 2018.
7. A. Doumanoglou, D. Griffin, J. Serrano, N. Zioulis, T.K. Phan, D. Jimenez, D. Zarpalas, F. Alvarez, M. Rio, P. Daras, "Quality of Experience for 3D Immersive Media Streaming", IEEE Transactions on Broadcasting, Special Issue on Quality of Experience for Advanced Broadcast Service, 64(2), 379-391, 2018.

Factory2Fit

Empowering and participatory adaptation of factory automation to fit for workers (2016: H2020 RIA)

Funding Organization: European Commission
Funding Programme: Horizon 2020 – FoF
Funding Instrument: Research & Innovation Action
Start Date: October 1, 2016
Duration: 36 months
Total Budget: 4,322,463 EUR
ITI Budget: 502,500 EUR

The Factory2Fit project aspires to take human-centred manufacturing to a new level by giving factory floor workers a leading role in adapting and developing their own job tasks based on their individual cognitive, physical and sensorial traits, thus initiating a leap towards more attractive, inclusive and safe manufacturing jobs.

The main development effort of Factory2Fit, from a technological point of view, is to develop a “smart” factory environment that promotes human-centred automation, participatory job allocation, effective training, collaboration and knowledge sharing between a broad body of multi-disciplinary production personnel (shop floor managers, expert & novice workers, technicians, technical managers and production supervisors). The Factory2Fit core framework will exploit human factors, based on worker physical, sensorial and cognitive skills, engage and actively include workers in the design and adaptation of their immediate workplaces, and improve attractiveness and overall user experience. Towards this end, Factory2Fit will use sophisticated sensor arrays, advanced visualization and interaction tools, such as Augmented Reality and highly-engaging virtual factory simulation platforms.

Factory2Fit aims to make worker employment in the factory more flexible and to allow contextual self-learning to replace major parts of training for new and current workers, incorporating continuous learning while working and innovative training solutions based on virtual factory simulations. A central principle of the project is boosting the understanding of employee skills and competencies and motivating employees toward learning and competence development at work. It also intends to provide contextual decision support and personalized guidance for work efficiency, as well as knowledge-sharing solutions and collaborative problem-solving to support the demographic change.
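
A hedged sketch of capability-driven task allocation in the spirit of the project's workforce-management work (not project code): workers are matched to tasks by skill fit through an optimal assignment over a shortfall cost matrix. The skill profiles and requirements are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: workers, columns: skills (e.g., welding, CNC, inspection) in [0, 1].
workers = np.array([[0.9, 0.2, 0.5],
                    [0.3, 0.8, 0.4],
                    [0.6, 0.5, 0.9]])
# Rows: tasks, columns: required skill levels.
tasks = np.array([[0.8, 0.1, 0.3],
                  [0.2, 0.9, 0.2],
                  [0.4, 0.3, 0.8]])

# Cost = how far a worker falls short of each task's requirements.
shortfall = np.maximum(tasks[None, :, :] - workers[:, None, :], 0.0).sum(axis=2)
rows, cols = linear_sum_assignment(shortfall)   # minimal total shortfall
for w, t in zip(rows, cols):
    print(f"worker {w} -> task {t} (shortfall {shortfall[w, t]:.2f})")
```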

The project consortium comprises:

Teknologian tutkimuskeskus VTT (VTT Technical Research Centre of Finland Ltd.), FI
Amorph Systems GmbH, DE
Carr Communications, IE
Centre for Research and Technology Hellas, GR
Continental Automotive GmbH, DE
Finn-Power Oy, FI
Technische Universität Chemnitz, DE
United Technologies Research Center, IE
Visual Components Oy, FI

1. M. Tsourma, S. Zikos, G. Albanis, K. C. Apostolakis, E. E. Lithoxoidou, A. Drosou, D. Zarpalas, P. Daras, D. Tzovaras, "Gamification Concepts for Leveraging Knowledge Sharing in Industry 4.0", International Journal of Serious Games, 6(2), 75-87, 2019.
2. S. Aromaa, M. Liinasuo, E. Kaasinen, M. Bojko, F. Schmalfuß, K. Apostolakis, D. Zarpalas, P. Daras, C. Öztürk, M. Boubekeuer, "User evaluation of Industry 4.0 concepts for worker engagement", 1st International Conference on Human Systems Engineering and Design (IHSED 2018), Université de Reims Champagne-Ardenne, France, October 25-27, 2018.
3. G. Albanis, K. Apostolakis, C. Öztürk, D. Zarpalas, P. Daras, "Adaptive web-based tools for capability-driven workforce management and task sequencing optimization", International Conference on Industrial Internet of Things and Smart Manufacturing (IoTsm 2018), London, UK, September 5-6, 2018.
4. X. Chen, M. Bojko, R. Riedel, K. Apostolakis, D. Zarpalas, P. Daras, "Human-centred Adaptation and Task Distribution utilizing Levels of Automation", 16th IFAC Symposium on Information Control Problems in Manufacturing (INCOM 2018), Bergamo, Italy, June 11-13, 2018.
5. N. Zioulis, A. Papachristou, D. Zarpalas, P. Daras, "Improving Camera Pose Estimation via Temporal EWA Surfel Splatting", IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2017), Nantes, France, October 9-13, 2017.

PATHway

Technology enabled behavioural change as a pathway towards better self-management of CVD (2015: H2020 PHC-RIA)

Funding Organization: European Commission
Funding Programme: Horizon H2020
Funding Instrument: Research & Innovation Action
Start Date: February 1, 2015
Duration: 44 months
Total Budget: 4,899,080 EUR
ITI Budget: 402,500 EUR

PATHway proposes a radically novel approach to Cardiac Rehabilitation (CR) that will ensure a paradigm shift towards empowering patients to more effectively self-manage their cardiovascular disease (CVD). It has the potential to deliver significant cost savings to the healthcare system.

PATHway will provide individualized rehabilitation programs that use socially inclusive exercise sessions as the basis upon which to provide a personalized, comprehensive lifestyle intervention program (exercise/physical activity (PA), smoking, diet, stress management, alcohol use, medication compliance) to enable patients both to better understand and deal with their own condition and to lead a healthier lifestyle in general. This will be made possible by the provision of an internet-enabled, sensor-based home exercise platform that allows a small number of patients to participate remotely in CR exercise programs at any time, from the comfort of their own living room.

The proposed platform will contain an autonomous avatar acting as a virtual coach, supporting an exercise class (‘Exerclass’) and an exercise-based game (‘Exergame’), to be completed individually or by a small number of remote participants who are at a similar stage of CR. The system will include peer mentoring, stimulating social support and interaction by allowing individuals to engage more with one another. In addition, the platform will support real-time analysis of, and positive feedback on, exercise technique. The platform will also support day-long monitoring of participants’ physiological responses. Sensed data will be combined with patient-supplied information on other lifestyle factors. Both the sensed and the user-provided data will be continuously aggregated and used as the basis for analysis to adapt and personalize the patient’s rehabilitation program over time, with abstracted summaries provided as feedback to both the patient and his/her clinician. This will serve to motivate the patient and help the clinician make more informed decisions.
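
As a hedged illustration of one building block of such real-time technique feedback (a sketch, not the project's implementation), the snippet below computes a joint angle from three tracked 3D joint positions and checks it against a target range; the coordinates and thresholds are invented.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical tracked joint positions (metres) from a depth sensor.
hip, knee, ankle = [0.0, 1.0, 0.0], [0.0, 0.5, 0.1], [0.0, 0.1, 0.0]
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} deg")
if not 80.0 <= angle <= 100.0:                      # hypothetical squat target range
    print("feedback: adjust squat depth")
```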

1. F. Patrona, A. Chatzitofis, D. Zarpalas, P. Daras, "Motion Analysis: Action Detection, Recognition and Evaluation based on motion capture data", Pattern Recognition, Special Issue on Articulated Motion and Deformable Objects, 76, 612-622, 2018.
2. A. Triantafyllidis, D. Filos, R. Buys, J. Claes, V. Cornelissen, E. Kouidi, A. Chatzitofis, D. Zarpalas, P. Daras, I. Chouvarda, N. Maglaveras, "A Computer-Assisted System with Kinect Sensors and Wristband Heart Rate Monitors for Group Classes of Exercise-based Rehabilitation", International Conference on Biomedical and Health Informatics, Thessaloniki, Greece, November 18-21, 2017.
3. A. Chatzitofis, D. Zarpalas, P. Daras, "A Computerized System for Real-Time Exercise Performance Monitoring and e-Coaching Using Motion Capture Data", International Conference on Biomedical and Health Informatics, Thessaloniki, Greece, November 18-21, 2017.
4. A. Chatzitofis, D. Zarpalas, D. Filos, A. Triantafyllidis, I. Chouvarda, N. Maglaveras, P. Daras, "Technological module for unsupervised, personalized cardiac rehabilitation exercising", Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC 2017), July 4-8, 2017, Turin, Italy.
5. D. Alexiadis, A. Chatzitofis, N. Zioulis, O. Zoidi, G. Louizis, D. Zarpalas, P. Daras, "An integrated platform for live 3D human reconstruction and motion capturing", IEEE Transactions on Circuits and Systems for Video Technology (Volume: PP, Issue: 99).
6. K. Moran, H. Wei, D.S. Monaghan, C. Woods, N.E. O’Connor, D. Zarpalas, A. Chatzitofis, P. Daras, J. Piesk, A. Pomazanskyi, "A Technology Platform for Enabling Behavioural Change as a “PATHway” Towards Better Self-management of CVD", Proceedings of the 2016 ACM Workshop on Multimedia for Personal Health and Health Care (MMHealth 2016), October 16, 2016, Amsterdam, The Netherlands.
7. N. Vretos, D. Alexiadis, D. Zarpalas, P. Daras, "Enhancing Real Time 3D reconstruction with pre scanned meshes", IEEE International Conference on Image Processing (ICIP 2016), September 25-28, 2016, Phoenix, Arizona, USA.
8. N. Zioulis, D. Alexiadis, A. Doumanoglou, G. Louizis, K. Apostolakis, D. Zarpalas, P. Daras, "3D Tele-Immersion Platform For Interactive Immersive Experiences Between Remote Users", IEEE International Conference on Image Processing (ICIP 2016), September 25-28, 2016, Phoenix, Arizona, USA.
9. A. Chatzitofis, D. Zarpalas, P. Daras, "Low-cost Motion Analysis for Cardiovascular Disease Patients", IET Human Motion Analysis for Healthcare Applications Event, May 19, 2016, London, United Kingdom.

RePlay

Reusable low-cost platform for digitizing and preserving traditional participative sport (2013: FP7 ICT – STREP)

Funding Organization: European Commission
Funding Instrument: Small or medium-scale focused research project (STREP)
Start Date: October 30, 2013
Duration: 36 months
Total Budget: 2,656,400 EUR
ITI Budget: 351,900 EUR

The goal of the RePlay project is to develop a technology platform providing access to, and interpretation of, digital content for Traditional Sports and Games (TSG). It will enable multiple modes of training, coaching and knowledge sharing that will contribute to the increased participation in, and preservation of, traditional sports in the future. This will be achieved by developing a base technology and methodologies for the digitisation of the art and forms of play of a set of representative sports: in the case of RePlay, field-based Gaelic team sports and Basque individual/doubles ball-and-court sports. The fundamental structure of these sports is extensible to the vast majority of traditional minority sports and mainstream sports.

RePlay will consist of the design and implementation of a platform for the capture, annotation, indexing and provision of 3D sports content. It will include the analysis and specification of methodologies and of ideal, cost-effective hardware solutions for extending the project to other sports. The project will focus on the use of existing and near-future 3D motion capture hardware. It does not include the development of any hardware as an objective, as the market is already addressing this; it will instead focus on creating the knowledge and underlying software tools that provide a low-cost entry point for other TSG associations.

RePlay will focus on the analysis, capture and modelling of the basic styles and techniques of play common to all participants, or the “Local Hero”, using low-cost capture techniques. However, RePlay will also use advanced, professional-grade capture techniques on “National Heroes”. A national hero, or recognised elite player, develops their sporting prowess to an extent that is unique. This represents Intangible Cultural Heritage to be preserved, an opportunity for the young to learn from and emulate their heroes, and a scientific opportunity to compare and analyse the evolution of styles of play over time. RePlay will also include work on the retrospective analysis of sports via video content.
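
A hedged sketch of how a learner's captured movement could be scored against an elite player's (dynamic time warping here stands in for the project's actual evaluation methods): DTW aligns two motion sequences of different lengths and yields a dissimilarity score. The 1-D joint trajectories below are synthetic.

```python
import numpy as np

def dtw_distance(s, t):
    """Classic O(len(s)*len(t)) dynamic time warping on 1-D sequences."""
    n, m = len(s), len(t)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

expert  = np.sin(np.linspace(0, 2 * np.pi, 60))        # expert's joint trajectory
learner = np.sin(np.linspace(0, 2 * np.pi, 75)) * 0.8  # slower, shallower attempt
print(f"DTW dissimilarity: {dtw_distance(expert, learner):.2f}")
```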

1. Y. Tisserand, N. Magnenat-Thalmann, L. Unzueta, M.T. Linaza, A. Ahmadi, N.E. O'Connor, N. Zioulis, D. Zarpalas, P. Daras, "Preservation and Gamification of Traditional Sports", Mixed Reality and Gamification for Cultural Heritage, Springer-Nature, M. Ioannides, N. Magnenat-Thalmann, G. Papagiannakis (Eds), ISBN 978-3-319-49607-8.
2. F. Destelle, A. Ahmadi, K. Moran, N. E. O'Connor, N. Zioulis, A. Chatzitofis, D. Zarpalas, P. Daras, L. Unzueta, J. Goenetxea, M. Rodriguez, M. T. Linaza, Y. Tisserand, N. Magnenat-Thalmann, "A Multi-Modal 3D Capturing Platform for Learning and Preservation of Traditional Sports and Games", 23rd ACM Multimedia Conference 2015, October 26-30, 2015, Brisbane, Australia.
3. N.E. O’Connor, Y. Tisserand, A. Chatzitofis, F. Destelle, J. Goenetxea, L. Unzueta, D. Zarpalas, P. Daras, M. Linaza, K. Moran, N. Magnenat-Thalmann, "Interactive Games for Preservation and Promotion of Sporting Movements", 22nd European Signal Processing Conference (EUSIPCO 2014), Lisbon, Portugal, September 2-5, 2014.
4. F. Destelle, A. Ahmadi, A. Chatzitofis, D. Zarpalas, N.E. O’Connor, K. Moran, P. Daras, "Low-cost Accurate Skeleton Tracking Based on Fusion of Kinect and Wearable Inertial Sensors", 22nd European Signal Processing Conference (EUSIPCO 2014), Lisbon, Portugal, September 2-5, 2014.
5. G. T. Papadopoulos, P. Daras, "Local descriptions for human action recognition from 3D reconstruction data", IEEE International Conference on Image Processing (ICIP 2014), October 27-30, 2014, Paris, France.
6. D. Alexiadis, P. Daras, "Quaternionic signal processing techniques for automatic evaluation of dance performances from MoCap data", IEEE Transactions on Multimedia, 16(5), 2014.

FI-STAR

Future Internet: Social Technological Alignment in Healthcare (2013: FP7 ICT – FI-PPP)

Funding Organization: European Commission
Funding Programme: FP7 – ICT (Future Internet PPP)
Start Date: October 30, 2013
Duration: 30 months
Total Budget: 17,360,892 EUR
ITI Budget: 104,996 EUR

FI-STAR will establish early trials in the Health Care domain, building on Future Internet (FI) technology and leveraging the outcomes of FI-PPP Phase 1.

It will become self-sufficient after the end of the project and will continue under a sustainable business model driven by several partners. In order to meet the requirements of the global health industry, FI-STAR will use a fundamentally different, “reverse” cloud approach: it will bring the software to the data, rather than bringing the data to the software. FI-STAR will create a robust framework based on this “software to data” paradigm.

A sustainable value chain following the life cycle of the Generic Enablers (GEs) will enable FI-STAR to grow beyond the lifetime of the project. FI-STAR will build a vertical community in order to create a sustainable ecosystem for all user groups in the global Health care and adjacent markets based on FI-PPP specifications.

FI-STAR will deploy and execute 7 early trials across Europe, serving more than 4 million people. Through the trials FI-STAR will validate the FI-PPP core platform concept by using GEs to build its framework and will introduce ultra-light interactive applications for user functionality.

It will proactively engage with the FI-PPP to propose specifications and standards. FI-STAR will use the latest digital media technology for community building and will proactively prepare for Phase 3 through the targeted elicitation of new partners using open calls.

Finally, FI-STAR will collaborate with other FI-PPP projects, through the mechanisms in place, by actively interacting with all necessary bodies. FI-STAR is a unique opportunity for implementing the Future Internet Public-Private Partnership in the Health Care domain, offering the community standardized and certified software, including a safe, secure and resilient platform that takes advantage of all Cloud Computing benefits and guarantees the protection of sensitive and personal data travelling in Public Clouds.

1. A. Chatzitofis, D. Monaghan, E. Mitchell, F. Honohan, D. Zarpalas, N. O’Connor, P. Daras, "HeartHealth: A Cardiovascular Disease Home-Based Rehabilitation System", 5th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH 2015).
2. D. Monaghan, F. Honohan, E. Mitchell, N. O’Connor, A. Chatzitofis, D. Zarpalas, P. Daras, "HeartHealth: New Adventures in Serious Gaming", ACM Multimedia 2015, October 26-30, 2015, Brisbane, Australia.

EXPERIMEDIA

EXPERiments in live social and networked MEDIA experiences (2011: FP7 ICT – IP)

Funding Organization: European Commission
Funding Programme: FP7 - Information & Communication Technologies
Funding Instrument: Integrated Project
Start Date: October 30, 2011
Duration: 36 months
Total Budget: 6,772,089 EUR
ITI Budget: 171,070 EUR

EXPERIMEDIA will develop and operate a unique facility that offers researchers what they need for large-scale “Future Media Internet” (FMI) experiments, since extensive research into testbeds is needed to support the R&D of large-scale social and networked media systems, as well as to understand and manage complex communities and ecosystems. Testbed technologies will include user-generated high-quality content management and delivery, a 3D Internet platform and tools for 3D reconstruction from live events, an augmented reality platform, tools for the integration of social networks, access technologies and a range of network connectivity options.

Experiments will be conducted in the real world, at live events and with diverse communities, to accelerate the adoption of FMI. Testbeds include the Schladming Ski Resort, the Multi-Sport High Performance Centre of Catalonia, historical sites provided by the Foundation for the Hellenic World, and the 3D Innovation Living Lab. Experiments will explore new forms of social interaction and rich media experiences, considering the demands of both online and real-world communities.

1. G. Palaiokrassas, A. Voulodimos, K. Konstanteli, N. Vretos, D. S. Osborne, E. Chatzi, P. Daras, T. Varvarigou, "Social media interaction and analytics for enhanced educational experiences", IEEE Multimedia Magazine, 23(1), 26-35, 2016.
2. D. Christopoulos, E. Hatzi, A. Chatzitofis, N. Vretos, P. Daras, "Future Media Internet Technologies for Digital Domes", 16th International Conference on Human-Computer Interaction (HCI International 2014), June 22-27, 2014, Creta Maris, Heraklion, Crete, Greece.
3. N. Vretos, P. Daras, "Temporal And Color Consistent Disparity Estimation In Stereo Videos", IEEE International Conference on Image Processing (ICIP 2014), October 27-30, 2014, Paris, France.
4. A. Chatzitofis, N. Vretos, D. Zarpalas, P. Daras, "Three-dimensional Monitoring of Weightlifting for Computer Assisted Training", 15th International Conference on Virtual Reality and Converging Technologies, Laval, France, 2013.
