ALDO FRANCO DRAGONI – Università Politecnica delle Marche, Italy

Aldo Franco Dragoni is an Associate Professor at the Polytechnic University of Marche, where he teaches “Fundamentals of Computer Science”, “Dedicated Operating Systems” and “Artificial Intelligence”. After obtaining a degree in Electronic Engineering with a thesis entitled “Recognition of Robot Action Plans from Visual Information”, he began his research activity at the Institute of Informatics of the University of Ancona, working on “Distributed Artificial Intelligence” and developing a formal theory of communication between autonomous agents endowed with the ability to represent knowledge and goals symbolically. He also conceived the formal architecture of a system for “Knowledge Revision” that allows an automatic reasoner to restore, in a rational way, the consistency of a knowledge base compromised by incorrect information coming from various sources. This architecture later became the prototype of a “Support System for Investigations by the Judicial Police”, during which a great deal of contradictory information is collected and must be managed and ordered according to criteria of maximum consistency, intrinsic credibility, and the relative reliability of the sources that supplied it. Subsequently, his research activity moved to more technological and applied topics, focusing on cybersecurity, digital terrestrial television, and health informatics. He has also explored accessibility and, in particular, assistive technologies for visually impaired people, especially speech synthesis and voice recognition.


Augmenting Reality with Artificial Intelligence

Virtual Reality (VR) and Artificial Intelligence (AI) are different technologies that, at first glance, have almost nothing to do with each other: VR is for representing scenarios in such a way that humans perceive them almost as real, while AI is for replicating humans’ ability to reason, learn, and perceive. VR makes no sense without humans, while AI definitely does! The same could be said of the relation between Augmented Reality (AR) and AI. Although they are separate technologies, they can be combined, and their combination is one of the key factors that make a user’s experience of an AR application more rewarding. Through software development kits that deeply embody AI technology, the rendering of augmented information specific and related to the real scene can make the experience more exciting, enjoyable, and useful. So AR and AI may work hand in hand, and several industries may benefit from marrying the two, among them gaming, retail, manufacturing, education, and medicine. The talk will look at some examples of the use of AI-Augmented Reality.




MATIJA MAROLT – University of Ljubljana, Slovenia

Matija Marolt is an associate professor at the University of Ljubljana, Faculty of Computer and Information Science, where he is also the head of the Laboratory for Computer Graphics and Multimedia. He obtained his Ph.D. in computer and information science from the University of Ljubljana in 2002. His main research interests lie in the fields of multimedia information retrieval, 3D visualization, and audio processing. His current research focuses on the segmentation and visualization of volumetric data and on information retrieval in ethnomusicological archives. He has led or collaborated on a number of projects; currently, he leads the UL team in the H2020 project MiCreate and the national research projects Thinking Folklore and Effectiveness of scaffolds in e-learning.


Visualization of Multimodal Volumetric Data

(The talk also includes the work of Žiga Lesar and Ciril Bohak)

Direct volume rendering techniques, such as volumetric path tracing, are today’s state of the art for visualizing three-dimensional data from diverse scientific domains, e.g., medical imaging, engineering, environmental sciences, astronomy, and high-energy physics. Often, several modalities are available for a dataset, either because multiple acquisition technologies were used or because of segmentation, where the internal structures are labeled and equipped with a set of properties. Every modality holds interesting information, so we need to combine them to enhance the visualization and highlight the features of interest. The talk will discuss the visualization of multimodal volumetric data and present use cases for electron microscopy data. Such data is often densely populated with many different structures, making existing visualization methods difficult to use. We will discuss how to use segmentation in combination with the raw data to reduce clutter in the volume, emphasize structures with specific properties, and consequently produce more meaningful visualizations.
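The idea of combining a segmentation modality with the raw volume can be sketched in a few lines. The snippet below is a minimal illustration, not the speakers’ actual method: it assumes a hypothetical raw intensity volume and a per-voxel label volume, and shows how an opacity weight derived from the labels can dim unlabeled or uninteresting voxels while keeping the structures of interest, before the result is handed to a renderer. The array shapes, the `emphasize` helper, and the `boost`/`dim` parameters are all illustrative assumptions.

```python
import numpy as np

# Hypothetical multimodal dataset: raw intensities plus a segmentation
# volume with per-voxel labels (0 = background, 1..N = structures).
rng = np.random.default_rng(0)
raw = rng.random((4, 4, 4)).astype(np.float32)   # raw modality
labels = rng.integers(0, 3, size=(4, 4, 4))      # segmentation modality

def emphasize(raw, labels, focus, boost=1.0, dim=0.1):
    """Weight voxel intensities so structures whose label is in `focus`
    stand out, while everything else is dimmed to reduce clutter."""
    weight = np.where(np.isin(labels, focus), boost, dim)
    return raw * weight

# Keep structures with label 2 at full intensity, dim the rest.
vis = emphasize(raw, labels, focus=[2])
```

In a real pipeline the per-label weights would typically feed a transfer function (opacity and color per structure) rather than scale the raw intensities directly, but the principle is the same: the segmentation modality steers which parts of the raw data survive into the final rendering.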




VOLKER PAELKE – University of Applied Science of Bremen, Germany

Prof. Dr. Volker Paelke has been a professor of human-computer interaction at the University of Applied Science in Bremen since 2015. In 2002 he completed his doctorate on the “Design of Interactive 3D Illustrations” at the University of Paderborn, working in C-LAB, a joint venture with Siemens AG. From 2002 to 2004 he worked as a post-doc in the special research cluster SFB 614 Self-Optimizing Systems, researching the use of VR in collaborative engineering applications. In 2004 he was appointed to the junior professorship for 3D Geovisualization and Augmented Reality at the Leibniz University of Hanover. From 2010 to 2012 he worked as an institute professor and head of the 3D visualization and modelling group at the Geomatics Institute in Barcelona. From 2013 to 2014 he was deputy professor for the user-friendly design of technical systems at the OWL University of Applied Sciences and Arts in Lemgo and set up the User Experience Design group at Fraunhofer IOSB-INA in Lemgo. His research interests are in the user-centered design of visual-interactive applications, with a focus on AR/MR techniques, 3D visualization, and natural user interfaces. Prof. Paelke is a member of the guidance committee of the GI VR/AR special interest group and, with colleagues from that group, has co-authored the first academic German-language textbook on Virtual and Augmented Reality, which is currently being prepared for an English-language release.


Guidance in Mixed Reality Applications – Supporting Users in Complex Tasks and Environments

Guidance is a central function of many mixed reality applications. While guidance is at the centre of applications like navigation systems or picking assistance in logistics, there is an even wider range of mixed reality applications where the need for guidance is less obvious. Users tend to see only what they know and expect, which often results in situations where information that is “presented” in a mixed reality environment is missed by users. It is up to the designer of a visualization to ensure that the user is aware of important information and able to “read” and “understand” it. This talk focuses on guidance techniques that help users navigate complex information and interaction tasks in mixed reality environments. It examines why different forms of user guidance are required in mixed reality and why these pose a challenge for user interface designers, presents techniques that can be used to aid the design process, and discusses remaining challenges as subjects for future research and development.