Notable Awards

Best Paper Award
12th International Conference on Advances in Multimedia (MMEDIA), 2020


P. Athanasoulis, E. Christakis, K. Konstantoudakis, P. Drakoulis, S. Rizou, A. Weitz, A. Doumanoglou, N. Zioulis, D. Zarpalas, "Optimizing QoE and Cost in a 3D Immersive Media Platform: A Reinforcement Learning Approach", International Conference on Advances in Multimedia (MMEDIA), Lisbon, Portugal, February 23-27, 2020.

Recent advances in media-related technologies, including capturing and processing, have facilitated novel forms of 3D media content, increasing the degree of user immersion. In order to ensure these technologies can readily support the rising demand for more captivating entertainment, both the production and delivery mechanisms should be transformed to support the application of media or network-related optimizations and refinements on-the-fly. Network peculiarities deriving from geographic and other factors make it difficult for a greedy or a supervised machine learning algorithm to successfully foresee the need for reconfiguration of the content production or delivery procedures. For these reasons, Reinforcement Learning (RL) approaches have lately gained popularity as partial information on the environment is enough for an algorithm to begin its training and converge to an optimal policy. The contribution of this work is a Cognitive Network Optimizer (CNO) in the form of an RL agent, designed to perform corrective actions on both the production and consumption ends of an immersive 3D media platform, depending on a collection of real-time monitoring parameters, including infrastructure, application-level and quality of experience (QoE) metrics. Our work demonstrates CNO approaches with different foci, i.e., a greedy maximization of the users’ QoE, a QoE-focused RL approach and a combined QoE-and-Cost RL approach.
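As a rough illustration of the RL formulation sketched above (not the paper's actual CNO implementation), the following Python snippet shows a tabular Q-learning agent whose reward blends a normalized QoE score with a normalized cost. The action set, weights and metric names are assumptions made for the example.

```python
# Hypothetical sketch of a QoE-and-cost reinforcement-learning loop, in the
# spirit of the CNO described above. Actions, weights and signals are
# illustrative assumptions, not the paper's code.
import random
from collections import defaultdict

ACTIONS = ["keep", "raise_bitrate", "lower_bitrate", "scale_out", "scale_in"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
W_QOE, W_COST = 0.7, 0.3       # assumed weighting between QoE and cost

Q = defaultdict(float)         # tabular Q-values keyed by (state, action)

def reward(qoe, cost):
    """Blend a normalized QoE score (0..1) with a normalized cost (0..1)."""
    return W_QOE * qoe - W_COST * cost

def choose_action(state):
    """Epsilon-greedy policy over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, qoe, cost, next_state):
    """One Q-learning step driven by monitored QoE and cost metrics."""
    r = reward(qoe, cost)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```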

Best Demo Award
25th International Conference on MultiMedia Modeling (MMM), 2019


K. Christaki, K. Apostolakis, A. Doumanoglou, N. Zioulis, D. Zarpalas, P. Daras, "Space Wars: An AugmentedVR Game", 25th International Conference on MultiMedia Modeling (MMM), Thessaloniki, Greece, January 8-11, 2019.

Over the past couple of years, Virtual and Augmented Reality have been at the forefront of the Mixed Reality development scene, whereas Augmented Virtuality has significantly lagged behind. Widespread adoption, however, requires efficient low-cost platforms and minimalistic interference design. In this work we present Space Wars, an end-to-end proof of concept for an elegant and rapid-deployment Augmented VR platform. Through the engaging experience of Space Wars, we aim to demonstrate how digital games, as forerunners of innovative technology, are perfectly suited as an application area to embrace the underlying low-cost technology, and thus pave the way for other adopters (such as healthcare, education, tourism and e-commerce) to follow suit.

Best Paper Award Candidate, VCIP 2014, Valletta, Malta


D. Alexiadis, D. Zarpalas, P. Daras, "Fast and smooth 3D
reconstruction using multiple RGB-Depth sensors", IEEE International Conference on Visual
Communications and Image Processing, VCIP 2014, Valletta, Malta

In this paper, the problem of real-time, full 3D reconstruction of foreground moving objects, an important task for Tele-Immersion applications, is addressed. More specifically, the proposed reconstruction method receives input from multiple consumer RGB-Depth cameras. A fast and efficient method to calibrate the sensors is initially described. More importantly, an efficient method to smoothly fuse the captured raw point sets is then presented, followed by a volumetric method to produce watertight and manifold meshes. Given the described implementation details, the proposed method can operate at high frame rates. The experimental results, with respect to reconstruction quality and rates, verify the effectiveness of the proposed methodology.
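The volumetric step can be illustrated with standard TSDF integration, here using the Open3D library. This is a generic sketch of the technique, not the paper's implementation; the file names, camera poses and parameters are placeholders.

```python
# Generic TSDF volumetric fusion of multiple calibrated RGB-D views (Open3D),
# illustrating how a watertight mesh is produced; not the paper's code.
# File names, intrinsics and extrinsics below are placeholder assumptions.
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.005,   # 5 mm voxels
    sdf_trunc=0.02,       # truncation distance of the signed distance field
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# extrinsics[i]: 4x4 world-to-camera matrix from the calibration step
extrinsics = [np.eye(4) for _ in range(4)]  # placeholder poses

for i, extrinsic_mat in enumerate(extrinsics):
    color = o3d.io.read_image(f"color_{i}.png")
    depth = o3d.io.read_image(f"depth_{i}.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=3.0,
        convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, extrinsic_mat)

mesh = volume.extract_triangle_mesh()  # watertight surface via marching cubes
mesh.compute_vertex_normals()
```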

Best Paper Award, 11th IEEE IVMSP Workshop: 3D Image/Video Technologies and Applications, 2013


D. Alexiadis, D. Zarpalas, P. Daras, "Real-time, Realistic Full-body 3D
Reconstruction and Texture Mapping from Multiple Kinects", 11th IEEE IVMSP Workshop: 3D
Image/Video Technologies and Applications, Yonsei University, Seoul, Korea, 10-12 June 2013.

Multi-party 3D Tele-Immersive (TI) environments, supporting realistic interaction among distant users, are the future of tele-conferencing. Real-time, full-body 3D reconstruction, an important task for TI applications, is addressed in this paper. A volumetric method for the reconstruction of watertight models of moving humans is presented, along with details of appropriate texture mapping to enhance the visual quality. The reconstruction uses the input from multiple consumer depth cameras, specifically Kinect sensors. The presented results verify the effectiveness of the proposed methodologies with respect to visual quality and frame rates.
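A common way to realize the texture-mapping idea mentioned above is view-dependent per-vertex blending, weighting each camera's color by how frontally it sees the surface. The NumPy sketch below is a generic illustration under assumed array shapes, not the paper's method.

```python
# Illustrative per-vertex texture blending across multiple calibrated cameras:
# each camera's color contribution is weighted by the cosine between the
# vertex normal and the vertex-to-camera direction. A generic sketch only.
import numpy as np

def blend_vertex_colors(vertices, normals, cam_centers, cam_colors):
    """
    vertices:    (V, 3) mesh vertex positions
    normals:     (V, 3) unit vertex normals
    cam_centers: (C, 3) camera centers in world coordinates
    cam_colors:  (C, V, 3) color sampled for each vertex from each camera
    returns:     (V, 3) blended vertex colors
    """
    weights = []
    for c in cam_centers:
        view_dir = c - vertices                       # vertex-to-camera rays
        view_dir /= np.linalg.norm(view_dir, axis=1, keepdims=True)
        w = np.einsum("vd,vd->v", normals, view_dir)  # cosine of viewing angle
        weights.append(np.clip(w, 0.0, None))         # back-facing views get 0
    W = np.stack(weights)                             # (C, V)
    W /= W.sum(axis=0, keepdims=True) + 1e-8          # normalize per vertex
    return np.einsum("cv,cvd->vd", W, cam_colors)
```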

IEEE Distinguished Paper Award, IEEE COMSOC MMTC R-Letter, June 2013


D. Alexiadis, D. Zarpalas, P. Daras, "Real-Time, Full 3-D Reconstruction of Moving Foreground Objects From Multiple Consumer Depth Cameras", IEEE Transactions on Multimedia (accepted for publication, July 2012). IEEE Distinguished Paper, IEEE MMTC R-Letter, Vol. 4, No. 3, June 2013.

The problem of robust, realistic and especially fast 3-D reconstruction of objects, although extensively studied, is still a challenging research task. Most of the state-of-the-art approaches that target real-time applications, such as immersive reality, address mainly the problem of synthesizing intermediate views for given view-points, rather than generating a single complete 3-D surface. In this paper, we present a multiple-Kinect capturing system and a novel methodology for the creation of accurate, realistic, full 3-D reconstructions of moving foreground objects, e.g., humans, to be exploited in real-time applications. The proposed method generates multiple textured meshes from multiple RGB-Depth streams, applies a coarse-to-fine registration algorithm and finally merges the separate meshes into a single 3-D surface. Although the Kinect sensor has attracted the attention of many researchers and home enthusiasts and has already appeared in many applications over the Internet, none of the already presented works can produce full 3-D models of moving objects from multiple Kinect streams in real-time. We present the capturing setup, the methodology for its calibration and the details of the proposed algorithm for real-time fusion of multiple meshes. The presented experimental results verify the effectiveness of the approach with respect to the 3-D reconstruction quality, as well as the achieved frame rates.
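The coarse-to-fine registration stage can be illustrated with multi-resolution ICP, here sketched using Open3D's point-to-plane ICP over progressively finer voxel sizes. The voxel schedule and parameters are assumptions for the example, not the paper's algorithm.

```python
# Coarse-to-fine pairwise registration sketch: point-to-plane ICP (Open3D)
# refined over a pyramid of voxel resolutions. A generic illustration of the
# registration stage under assumed parameters, not the paper's code.
import numpy as np
import open3d as o3d

def coarse_to_fine_icp(source, target, voxel_sizes=(0.04, 0.02, 0.01)):
    """Refine the source-to-target pose from coarse to fine resolution."""
    current = np.eye(4)
    est = o3d.pipelines.registration.TransformationEstimationPointToPlane()
    for voxel in voxel_sizes:
        src = source.voxel_down_sample(voxel)
        tgt = target.voxel_down_sample(voxel)
        src.estimate_normals()
        tgt.estimate_normals()   # point-to-plane ICP needs target normals
        result = o3d.pipelines.registration.registration_icp(
            src, tgt,
            max_correspondence_distance=voxel * 2.0,
            init=current,
            estimation_method=est)
        current = result.transformation   # seed the next, finer level
    return current
```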

2nd Place, SHREC 2006: 3D Shape Retrieval Contest


Results are available in: Technical Report UU-CS-2006-030, ISSN 0924-3275, June 2006.
