

My current research pursuits are outlined below:

 

Cross-Reality in Industrial Manufacturing

 

I am currently investigating how Cross-Reality (XR) and the Industrial Internet of Things can support assembly tasks in hybrid human-machine manufacturing lines and create service value and competitive advantage. In this context, I am designing an XR framework to enhance human capabilities. My specific objective is to develop the fundamental knowledge required to design a mixed reality manufacturing system wherein workers interact with industrial sensors and robotic technology through the system’s intelligent detection of their cognitive needs rather than through their deliberate action. In the design of the XR framework, I am investigating how aspects of industrial planning and content creation can be automated, especially as they relate to short-lived tasks. I am also determining the extent to which the superposition of task-related content upon industrial spaces can provide seamless interoperability between devices and facilitate the transfer of skills to human workers in human-machine hybrid environments. The transfer process includes not only learning the skills necessary to execute a new task, but also instruction on how to integrate them seamlessly into the working environment.
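To make the idea of context-driven content superposition more concrete, the minimal Python sketch below (with invented names such as WorkerContext and select_overlay, and an arbitrary dwell-time threshold) shows how an XR system might choose an assembly overlay from simple context signals rather than from an explicit worker request. It is an illustration of the general principle, not a description of the actual framework.

```python
# Hypothetical sketch: selecting which task overlay to superpose on the
# workspace from simple context signals rather than a deliberate request.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkerContext:
    station_id: str        # station the worker is near (e.g., from indoor positioning)
    dwell_seconds: float   # how long the worker has paused at the station
    task_step: int         # current step in the assembly sequence

def select_overlay(ctx: WorkerContext) -> Optional[str]:
    """Pick a task-related overlay to show.

    Illustrative heuristic only: a long dwell time is treated as a proxy for
    a cognitive need (the worker may be unsure how to proceed with this step).
    """
    if ctx.dwell_seconds > 10.0:
        return f"instructions:{ctx.station_id}:step_{ctx.task_step}"
    return None  # no overlay, to avoid cluttering the worker's view

print(select_overlay(WorkerContext("press_03", dwell_seconds=14.2, task_step=5)))
```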

 

Multimodal representation of spatial urban environments

The volume of geospatial data being collected and stored is growing at a tremendous rate – sensor-enriched buildings, streets, and city features generate massive quantities of it every day. These datasets are by nature large and complex, often requiring extensive filtering and sorting, which places a heavy burden on the resources of those who need to interact with them. To a certain extent, spatial data infrastructures have gradually evolved to address this issue, allowing spatial data to be stored in federated databases that often serve as the basis for semantically driven 3D interactive presentations. However, the massive, complicated, and dynamic nature of geospatial datasets has caused such gradual, organically evolved approaches to become obsolete, no longer able to effectively support exploration of the spatial characteristics of the modern world. In my research, I am exploring the challenges associated with presenting very large geospatial datasets through multimodal user experiences. My contributions are algorithms and architectures that enable users to interact with large geospatial datasets through an auditory and visual experience.
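As a rough illustration of what a multimodal presentation pipeline involves, the following Python sketch (all names, attributes, and thresholds hypothetical) filters a set of geospatial features by an importance score and maps one attribute onto both a visual channel (luminance) and an auditory channel (pitch). The actual algorithms and architectures are far more involved; this only shows the basic mapping idea.

```python
# Hypothetical sketch: filter a large feature set, then map one attribute
# of each surviving feature onto a visual and an auditory channel.
from typing import List, Tuple

def importance_filter(features: List[dict], min_score: float) -> List[dict]:
    """Keep only features whose 'importance' passes a threshold (a stand-in
    for the heavier filtering and sorting described in the text)."""
    return [f for f in features if f["importance"] >= min_score]

def to_multimodal(value: float, vmin: float, vmax: float) -> Tuple[float, float]:
    """Map a normalized attribute value to a grey level (0-1) and a pitch (Hz)."""
    t = (value - vmin) / (vmax - vmin)
    grey = t                      # visual channel: luminance
    pitch = 220.0 + t * 660.0     # auditory channel: 220-880 Hz
    return grey, pitch

city_blocks = [
    {"id": "b1", "importance": 0.9, "noise_level": 72.0},
    {"id": "b2", "importance": 0.2, "noise_level": 55.0},
]
for block in importance_filter(city_blocks, min_score=0.5):
    print(block["id"], to_multimodal(block["noise_level"], 40.0, 90.0))
```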

 

Multimodal representation of monitoring data in mixed reality environments

Modern information visualization techniques have been widely adopted to represent and display data patterns because they offer users streamlined access to data trends and highlight specific points more successfully than plain figures and words. Hence, developing more effective ways of representing and visualizing information is fundamental to creating more productive connections between humans and the physical world. In recent years, advances in immersive technology have generated new interactive visualization modalities with the potential to change the way we perceive data. However, current applications of such technology have shown limited results. The aim of this research is to explore how new holographic interfaces and display technologies can be combined with sonified and tactile interaction to create more immersive analytic and exploratory environments. I investigate how auditory and visual rendering can support the spatial perception of abstract data in mixed reality, and how the transformation of machine-generated data into vibrotactile feedback, sounds, and color luminance can empower users with data awareness and new ways of understanding real-time information.
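The following Python sketch illustrates, under purely hypothetical value ranges and mappings, how a single machine-generated reading could be transformed into the three sensory channels mentioned above: vibrotactile amplitude, an audio tone, and color luminance. It is a toy mapping, not the method developed in this research.

```python
# Hypothetical sketch: turn one machine-generated reading into three sensory
# channels - vibrotactile amplitude, audio frequency, and color luminance.
def normalize(value: float, lo: float, hi: float) -> float:
    """Clamp and rescale a raw reading into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def to_sensory_channels(reading: float, lo: float, hi: float) -> dict:
    t = normalize(reading, lo, hi)
    return {
        "vibration_amplitude": t,       # 0 = off, 1 = strongest pulse
        "tone_hz": 200.0 + t * 800.0,   # low hum -> high alert tone
        "luminance": 0.2 + t * 0.8,     # dim -> bright color overlay
    }

# Example: a spindle temperature of 85 C on an assumed 20-100 C scale
print(to_sensory_channels(85.0, 20.0, 100.0))
```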