Salento AVR 2016

ANTONIO EMMANUELE UVA

Polytechnic Institute of Bari, Italy

Antonio Emmanuele Uva received his Ph.D. in Mechanical Engineering from the University of Naples in 2000. He was a visiting researcher for over a year at the University of California at Davis and has been with the Department of Mechanics, Mathematics and Management at the Polytechnic Institute of Bari, Italy, as an Associate Professor since 2006.
Antonio E. Uva is the principal investigator of the Virtual Reality and Reality Reconstruction Lab (VR3Lab) at the Politecnico di Bari and a member of its Centre of Excellence for Computational Mechanics (CEMeC). His main research interests are Virtual and Augmented Reality, CAD, Human-Computer Interaction, and Bioengineering.

Text Legibility Issues in Industrial Augmented Reality

In the Industry 4.0 vision, the creation of leading-edge options for interaction between people and technology plays a key role. In this context, augmented reality (AR) is one of the most suitable solutions; however, it is still not ready for effective use in industry. A crucial problem is the legibility of text seen through AR head-worn displays (HWDs), because AR interface designers have no standard guidelines to follow for these devices. Literature and anecdotal evidence suggest that legibility depends mainly on the background, the display technology, and the text style. Furthermore, some constraints have to be considered in industrial environments, such as standard color-coding practices and workplace lighting.
This keynote speech examines aspects affecting text legibility with an emphasis on deriving guidelines to support AR interface designers. The results suggest that enhancing text contrast via software, along with using the outline or billboard style, is an effective practice to improve legibility in many situations.
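As a rough illustration of what “enhancing text contrast via software” can look like in practice, the following sketch (a minimal example, not the method evaluated in the keynote) picks a legible text color against a sampled background using the WCAG 2.0 contrast-ratio formula; the 4.5:1 threshold and the black-or-white fallback are illustrative assumptions.

```python
# Illustrative sketch (not the evaluated method): pick a legible text color
# against the background seen through an AR head-worn display, using the
# WCAG 2.0 contrast-ratio formula as a software contrast check.

def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an sRGB color, channels in 0..255."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (no contrast) to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def pick_text_color(preferred, background, min_ratio=4.5):
    """Keep the preferred (e.g. color-coded) text color if it is legible
    enough; otherwise fall back to black or white, whichever contrasts more."""
    if contrast_ratio(preferred, background) >= min_ratio:
        return preferred
    black, white = (0, 0, 0), (255, 255, 255)
    return max((black, white), key=lambda c: contrast_ratio(c, background))

# Example: safety-orange text over a bright wall sampled at roughly RGB(200, 200, 190).
print(pick_text_color((255, 121, 0), (200, 200, 190)))  # falls back to (0, 0, 0)
```

The outline and billboard styles mentioned above pursue the same goal by other means: they place a controlled border or backing plate behind the glyphs, so the effective background no longer depends on the scene.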

 

LEO JOSKOWICZ

Hebrew University of Jerusalem, Israel

Leo Joskowicz is a Professor at the School of Engineering and Computer Science at the Hebrew University of Jerusalem, Israel, and the founder and director of the Computer-Aided Surgery and Medical Image Processing Laboratory (CASMIP Lab) since 1996. He obtained his Ph.D. in Computer Science at the Courant Institute of Mathematical Sciences, New York University, in 1988. From 1988 to 1995, he was at the IBM T.J. Watson Research Center, Yorktown Heights, New York, where he conducted research in intelligent computer-aided design and computer-aided orthopaedic surgery. From 2001 to 2009 he was the Director of the Leibniz Center for Research in Computer Science.
Prof. Joskowicz is the recipient of the 2010 Maurice E. Müller Award for Excellence in Computer Assisted Surgery from the International Society of Computer Aided Orthopaedic Surgery and the 2007 Kaye Innovation Award from the Hebrew University. He is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers) and the ASME (American Society of Mechanical Engineers), a member of the Board of Directors of the MICCAI Society (Medical Image Computing and Computer Assisted Intervention), and serves on the editorial boards of Computer-Aided Surgery, Medical Image Analysis, Journal of Computer Assisted Radiology and Surgery, Advanced Engineering Informatics, ASME Journal of Computing and Information Science in Engineering, and Annals of Mathematics and Artificial Intelligence.

Digital Models from Medical Images: from the Lab to the Clinic

Patient-specific models generated from volumetric medical images provide a quantitative basis for clinical decision-making and are a key enabler for big data processing in radiology. The models are geometric and functional representations of the anatomical structures and pathologies of interest for a specific clinical task. The segmentation of these structures is the key step of patient-specific modeling. Making patient-specific models widespread and routinely used in clinical practice poses a significant scientific and technical challenge that requires a paradigm shift in how volumetric medical image segmentation is currently performed.
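To make the role of segmentation concrete, here is a minimal sketch of the simplest possible volumetric segmentation: thresholding plus largest connected component. It assumes a CT volume in Hounsfield units and is only a toy baseline, not the CASMIP pipeline presented in the talk.

```python
# Toy illustration of volumetric segmentation (not the CASMIP method):
# threshold a CT volume and keep the largest connected component as a
# binary model of the anatomical structure of interest.
import numpy as np
from scipy import ndimage

def segment_largest_component(volume, lo, hi):
    """Binary mask of the largest 3D connected component whose voxel
    intensities fall inside [lo, hi] (e.g. Hounsfield units for bone)."""
    mask = (volume >= lo) & (volume <= hi)
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Example on a synthetic volume: a bright block embedded in soft-tissue-like noise.
rng = np.random.default_rng(0)
vol = rng.normal(40, 10, size=(64, 64, 64))   # soft tissue around 40 HU
vol[20:40, 20:40, 20:40] = 700                # bone-like block around 700 HU
bone = segment_largest_component(vol, 300, 2000)
print(bone.sum(), "voxels segmented")         # 8000 voxels (the 20^3 block)
```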
In this talk, we will present an overview of our most significant work on patient-specific model generation for a variety of clinical tasks, including keyhole neurosurgery, longitudinal follow-up of brain, liver, and lung tumors, and progression evaluation of plexiform neurofibromas. We will present the results of our experimental studies and the clinical experience with the software prototype at the Tel Aviv Sourasky Medical Center.

 

MATTEO DELLEPIANE

ISTI-CNR, Pisa, Italy

Matteo Dellepiane is a Researcher at ISTI-CNR. He received an advanced degree in Telecommunication Engineering from the University of Genova in 2002 and a Ph.D. in Information Engineering from the University of Pisa in 2009.
He is currently responsible for the “3D Graphics and Cultural Heritage” branch of the Visual Computing Laboratory of ISTI-CNR, Pisa. He has been involved in several European and national projects on the application of technology to Cultural Heritage. His research interests include 3D scanning, digital archaeology, color acquisition and visualization on 3D models, and perceptual rendering.

3D acquisition today: all’s well that ends well?

In the last few years there has been a real evolution of 3D acquisition techniques. 3D scanning is becoming more affordable, and the advent of depth cameras and multi-view stereo matching techniques has opened the market to a huge potential audience. Everybody is now able to acquire a real object and obtain a nice 3D model that can be used for a variety of purposes, from basic visualization to 3D printing. But is this really good news? Is 3D acquisition a “solved” problem?
Actually, no, if we take a closer look at the issues. There are still unsolved problems, and there are even more problematic consequences for the work of professionals in Cultural Heritage and visualization in general. The talk will present a short history of the recent development of 3D acquisition technologies and will focus on an overview of the open and emerging issues: among them, the acquisition of color and material information, the control of data quality, the limitations in acquisition and presentation, and the applications to 3D printing.
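To illustrate why depth cameras lowered the barrier to entry so dramatically, the sketch below shows the geometric core of such a device: back-projecting a depth image into a point cloud through a pinhole camera model. The intrinsic parameters are made-up example values, not those of any particular sensor, and the sketch deliberately ignores exactly the issues the talk focuses on (color, materials, data quality).

```python
# Illustrative pinhole back-projection (example intrinsics, no specific sensor):
# how a depth camera turns a depth image into a 3D point cloud.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (meters, 0 = invalid) into an (N, 3) point cloud
    in the camera frame: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    v, u = np.indices(depth.shape)        # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # drop invalid (zero-depth) pixels

# Example: a synthetic 480x640 depth image of a flat wall 2 m away.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```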

 

STEFANO BALDASSI

Meta Company, California, USA

Stefano Baldassi is the Director of User Research at Meta. He studied Experimental Psychology at the University of Rome “La Sapienza” and earned a Ph.D. in Human Perception (2001), with a focus on visual search and attention, carried out at the Institute of Neuroscience of the CNR in Pisa. He has held research and teaching positions at UCL (London), SKERI (San Francisco), the University of Florence, NYU, and Stanford. His scientific interest in how humans make use of visual information during natural tasks led him to focus on the wearable AR industry. While he was at Stanford University as a visiting professor in 2014, he was recruited by the founders of Meta, who had a strong vision of how human neuroscience and vision science could be the foundation for a new type of product. In the two years since then, he has led a wide range of research projects within Meta that led to two generations of the first-to-market wearable AR products, and his peer-reviewed publications and IP in the field have made him an industry and academic thought leader.

From science to production: researching Augmented Reality while we build it

Wearable Augmented Reality is a completely novel technology that connects directly to the user’s senses and brain. Unlike Virtual Reality, Augmented Reality is grounded in the physical world in which the 3D visual content is delivered. This demands an interaction system that exploits the full presence of the user’s senses and body in the physical world. The problem generates an enormous number of scientific and technical challenges that must be solved in order to build a powerful new type of product that some have described as “the computer of the future.” The solution comes from an unprecedented blend of science and research, including neuroscience, optics, ergonomics, imaging engineering, and more. At Meta, the outputs of these hybrid inquiries are documented, scientifically validated, and then immediately deployed into product design decisions and documents to build end-user products. In this keynote talk I will highlight how Meta’s Research team tackled many of these challenges and how data were quickly integrated into product development to improve the product experience and accelerate adoption.