Monica Bordegoni
Politecnico di Milano, Italy
How Touch and Smell Enhance the Realism of our Virtual Experiences

Virtual Reality experiences are based on the integration of immersion, interaction and imagination. Users experience the virtual world through their senses, which in most applications are vision and hearing. Technological developments are producing new devices that can also simulate signals eliciting the senses of touch and smell, which can be integrated with visuals and sounds. Specifically, haptic and olfactory displays can be combined with head-mounted displays and headsets, allowing users to live more engaging multisensory experiences in which immersion, interaction and imagination reach higher levels.

Monica Bordegoni is a full professor at the Department of Mechanical Engineering, School of Design, Politecnico di Milano. She teaches Virtual Prototyping at the School of Design and at the School of Industrial Engineering, and coordinates the Virtual Prototyping Lab. Her research interests include interactive Virtual Prototyping, Virtual/Augmented Reality technology for industrial applications, haptic technology and haptic interaction, product experience, and emotional engineering. She is a member of the executive committee board of the ASME Computers and Information in Engineering Division, and co-chair of the Design Society SIG on Emotional Engineering.

Patrick Bourdot
CNRS/LIMSI, University of Paris-Sud, France
Collaborative Interactions within Immersive Environments: Advantages, Drawbacks and Current Research Issues on Multi-Stereoscopic CAVE-like Setups

Collaborative immersive interactions are possible through many technological systems. CAVE-like systems, even if they generally do not provide stereoscopy for several users, are a powerful type of Virtual Environment for addressing collaborative tasks, because collaborators are not virtualized and collective interactions are therefore more natural. Conversely, interconnected HMDs or interconnected one-user CAVEs can provide an exact 3D perception for each user, at the expense of physical coexistence and rich social interactions. In the last ten years, multi-stereoscopic technology has achieved significant progress, enabling a new generation of CAVE-like systems where collaborators may share the same physical space while each having an exact 3D perception of the virtual world. It is thus now possible to preserve a natural dialogue with other collaborators inside a CAVE, while at the same time providing a better immersive experience for each of them. However, some perceptual and cognitive issues remain with such collaborative immersive systems. This talk will demonstrate when they occur, and will present research in progress to analyse and overcome these issues.

Patrick Bourdot is Research Director at CNRS and head of the VENISE team (http://www.limsi.fr/venise), the Virtual & Augmented Reality (V&AR) research group he created in 2001 at the CNRS/LIMSI Lab. An architect by training (graduated in 1986), he received his PhD in Computer Science from the University of Aix-Marseille in 1992 and joined the CNRS/LIMSI lab in 1993. His main research focuses are multi-sensorimotor, multimodal and collaborative V&AR interactions, and the related issues of users' perception and cognition. He has coordinated his Lab's scientific participation in, or led, a number of research projects funded by the French government or by national and regional research institutes. He was the founding secretary of AFRV, the French V&AR association. At the international level, he managed the CNRS Labs involved in INTUITION, the Network of Excellence of the 6th IST framework focused on V&AR, where he was a member of the Core Group. He is a founding member of EuroVR (www.eurovr-association.org), and was re-elected last year to its executive board.

Luigi Gallo
Institute for High Performance Computing and Networking (ICAR-CNR), Italy
Touchless Interaction in Surgery: the Medical Imaging Toolkit experience

During the last few years, we have been witnessing widespread interest in touchless technologies in the context of surgical procedures. The main reason is that surgeons often need to visualize medical images in operating rooms, but operating a computer through a keyboard or mouse would risk bacterial contamination. Touchless interfaces, which exploit sensor technologies and machine learning techniques for tracking and analyzing body movements, are advantageous in that they preserve sterility around the patient. In fact, they allow surgeons to visualize medical images without having to physically touch any control or to rely on a proxy, who may not share the same level of professional vision. This talk explores the main issues involved in the design of touchless user interfaces for intra-operative image control. It will overview state-of-the-art solutions, open challenges and the research agenda in this area. Moreover, the talk will present the results of the Medical Imaging Toolkit (MITO) project, which has focused on the design and implementation of a Kinect-based touchless user interface for pre- and intra-operative visualization of DICOM images.

Luigi Gallo received an M.Eng. in Computer Engineering from the University of Naples “Federico II” in July 2006 and a Ph.D. degree in Information Technology Engineering at the University of Naples “Parthenope” in April 2010.
He is a Research Scientist at the National Research Council of Italy (CNR) – Institute for High-Performance Computing and Networking (ICAR), and a Lecturer in Informatics at the University of Naples “Federico II”.
Since January 2011, he has been a member of the iHealthLab – Intelligent Healthcare Laboratory. Since June 2007 he has been a member of the Advanced Medical Imaging and Computing labOratory (AMICO), developed from a cooperation agreement between the IBB and ICAR institutes of the National Research Council of Italy.
His fields of interest include natural user interfaces and human interface aspects of Virtual/Augmented Reality, specifically considering medical application scenarios.

Sofia Pescarin
CNR ITABC, Italy
Virtual Museums Interacting with and Augmenting Cultural Heritage: a European Perspective

Sofia Pescarin, archaeologist, with a degree in Topography of Ancient Italy, a PhD in History and Computing, and a Master in “Technology of Museums”, is a specialist in 3D survey, GIS, landscape reconstruction, virtual museums, open source applied to cultural heritage, and virtual archaeology. She works as a researcher in the Virtual Heritage Lab at the Institute for Technologies Applied to Cultural Heritage of the National Research Council in Rome (CNR ITABC). There she coordinates research dedicated to “Virtual Heritage” and has been the project coordinator of V-MUST.NET, the FP7 ICT Network of Excellence focused on virtual museums (2011-2015). She is the chair of the Italian School of Virtual Archaeology (www.archeologiavirtuale.it) and the scientific director of Archeovirtual (www.archeovirtual.it). She has been co-chair of the Digital Heritage 2013 international congress (Marseille, 28th Oct – 1st Nov 2013) and of the international school “drones in archaeology and cultural heritage” (Certosa di Pontignano, 17 – 27 Sept. 2013). Within V-MUST, she recently coordinated the exhibition “Keys to Rome” in 4 museums and co-directed the Italian chapter of the exhibition: “Le chiavi di Roma. La città di Augusto” (Museo dei Fori Imperiali, 23rd Sept 2014 – 10th May 2015).