Search results

1 – 10 of over 35000
Article
Publication date: 10 November 2020

Clement Onime, James Uhomoibhi, Hui Wang and Mattia Santachiara

Abstract

Purpose

This paper presents a reclassification of markers for mixed reality environments that is also applicable to the use of markers in robot navigation systems and 3D modelling. In the case of Augmented Reality (AR) mixed reality environments, markers are used to integrate computer-generated (virtual) objects into a predominantly real world, while in Augmented Virtuality (AV) mixed reality environments, the goal is to integrate real objects into a predominantly virtual (computer-generated) world. Apart from AR/AV classifications, mixed reality environments have also been classified by reality, by output technology/display devices, by immersiveness and by visibility of markers.

Design/methodology/approach

The approach adopted consists of presenting six existing classifications of mixed reality environments and then extending them to define new categories of abstract, blended, virtual, augmented, active and smart markers. This is supported with results/examples taken from the joint Mixed Augmented and Virtual Reality Laboratory (MAVRLAB) of Ulster University, Belfast, Northern Ireland; the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy; and Santasco SrL, Reggio Emilia/Milan, Italy.

Findings

Existing classifications of markers and mixed reality environments are mainly binary in nature and do not adequately capture the contextual relationship between markers and their use and application. The reclassification of markers into abstract, blended and virtual categories captures the context for simple uses and applications, while the categories of augmented, active and smart markers capture the relationship for enhanced or more complex uses of markers. The new classifications are capable of improving the definitions of existing simple marker and markerless mixed reality environments, as well as supporting more complex features within mixed reality environments such as co-location of objects, advanced interactivity and personalised user experiences.

Research limitations/implications

It is thought that applications and devices in mixed reality environments, when properly developed and deployed, enhance the real environment by making invisible information visible to the user. The current work only marginally covers the use of internet of things (IoT) devices in mixed reality environments, as well as the potential implications for robot navigation systems and 3D modelling.

Practical implications

The use of these reclassifications enables researchers, developers and users of mixed reality environments to select the best tools and environments for their respective applications and to make informed decisions, while conveying information with additional clarity and accuracy. The development and application of more complex markers would contribute in no small measure to extending current knowledge and to developing applications that positively impact entertainment, business and health while minimizing costs and maximizing benefits.

Originality/value

The originality of this paper lies in the approach adopted in reclassifying markers. This is supported with results and work carried out at the Mixed Augmented and Virtual Reality Laboratory (MAVRLAB) of Ulster University, Belfast, UK; the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy; and Santasco SrL, Reggio Emilia/Milan, Italy. The value of the present research lies in the definitions of the new categories, as well as in the discussion of how they improve mixed reality environments and applications, especially in the health and education sectors.

Details

The International Journal of Information and Learning Technology, vol. 38 no. 1
Type: Research Article
ISSN: 2056-4880


Article
Publication date: 3 December 2018

João Neves, Diogo Serrario and J. Norberto Pires

Abstract

Purpose

Mixed reality is expanding in the industrial market, and several companies in various fields are adapting this set of technologies for various purposes, such as optimizing processes, improving programming tasks, promoting the interactivity of their products with users, or even improving teaching or training. Robotics is another area that can benefit from these recent technologies. In fact, most current and futuristic robotic applications, namely, those related to advanced manufacturing tasks (e.g. additive manufacturing, collaborative robotics, etc.), require new techniques to perceive the result of several actions, including programming tasks, anticipate trajectories, visualize the motion and related information, interface with programmers and users and support several other human–machine interfaces. Consequently, this paper aims to explain a new concept of human–machine interface intended to improve the interaction between advanced users and industrial robotic work cells.

Design/methodology/approach

The presented concept uses two different applications (apps) developed to explore the advanced features of the Microsoft HoloLens device. The objectives of the project reported in this paper are to optimize robot paths, just by allowing the advanced user to adjust the selected path through the mixed reality environment, and create new paths, just by allowing the advanced user to insert points in the mixed reality environment, correct them as needed, connect them using a certain type of motion, parametrize them (in terms of velocity, motion precision, etc.) and command them to the robot controller.

Findings

The solutions demonstrated in this paper show how mixed reality can be used to allow users with limited programming experience to make full use of robotic systems. They also show clearly that the integration of mixed reality technology into current robot systems will be a turning point in reducing complexity for end-users.

Research limitations/implications

There are two challenges in the developed applications. The first relates to robot tool identification, which is very sensitive to lighting conditions and to very complex robot tools; this can result in positioning errors when the software shows the path in the mixed reality scene. The paper presents solutions to overcome this problem. Another open challenge is handling robot singularities when adjusting or creating new paths. Ongoing work is concentrated on creating mechanisms that prevent the end-user from creating paths that contain unreachable points or that are not feasible because of bad motion parameters.

Practical implications

This paper demonstrates the use of a mixed reality device to improve the tasks of programming and commanding manufacturing work cells based on industrial robots [see video in (Pires et al., 2018)]. As the presented devices and robot cells are the basis for Industry 4.0 objectives, this demonstration has a vast field of application in the near future, positively influencing the way complex applications that require close cooperation between humans and machines are conceived, planned and built.

Social implications

Although the HoloLens device opens outstanding new areas for robot command and programming, it is still expensive and somewhat heavy for everyday use. Consequently, this opens a window of opportunity to combine these devices with other mobile devices, such as tablets and phones, building applications that take advantage of their combined features.

Originality/value

The paper presents two different applications fully ready to use in industrial environments. These applications are scientific experiments designed to demonstrate the principles and technologies of mixed reality applied to industrial robotics, namely, for improving the programming task. The first application addresses path visualization, i.e. it enables the user to visualize, in a mixed reality environment, any path preplanned for the robot cell. With this feature, the advanced user can follow the robot path, identify problems, associate any difficulty in the final product with a particular issue in the robot paths, anticipate execution problems with impact on the final product quality, etc. This is particularly important not only for advanced applications, but also for cases where the robot path results from a CAD package (in an offline fashion). The second application consists of a graphical path manipulation procedure that allows the advanced user to create and optimize a robot path. Using this feature, the end-user can adjust any path obtained from any programming method, using the mixed reality approach to guide (visually) the path manipulation procedure. The end-user can also create a completely new path using a process of graphical insertion of point positions and paths into the mixed reality scene. The ideas and implementations of the paper are original, and there is no other example in the literature applied to industrial robot programming.

Details

Industrial Robot: An International Journal, vol. 45 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 25 September 2019

James Uhomoibhi, Clement Onime and Hui Wang

Abstract

Purpose

The purpose of this paper is to report on developments and applications of mixed reality cubicles and their impacts on learning in higher education. This paper investigates and presents the cost-effective application of augmented reality (AR) as a mixed reality technology via mobile devices such as head-mounted devices, smartphones and tablets. It discusses the development of mixed reality applications for mobile devices (smartphones and tablets), leading up to the implementation of a mixed reality cubicle for immersive three-dimensional (3D) visualizations.

Design/methodology/approach

The approach adopted was to limit consideration to the application of AR via mobile platforms, including head-mounted devices, with a focus on smartphones and tablets, which contain basic feedback-to-user channels such as speakers and display screens. An AR visualization cubicle was jointly developed and applied by three collaborating institutions. The markers, acting as placeholders, provide identifiable reference points for objects being inserted into the mixed reality world. Hundreds of participants, comprising academics and students from seven different countries, took part in the studies and gave feedback on the impact on their learning experience.

Findings

Results from the current study show that less than 30 percent of participants had used mixed reality environments before, which is lower than expected: about 70 percent were first-time users of mixed reality technologies. This indicates a relatively low use of mixed reality technologies in education and is consistent with research findings reporting that educational use of and research on AR are still not common, despite their categorization as emerging technologies with great promise for educational use.

Research limitations/implications

Current research has focused mainly on cubicles, which provide an immersive experience when used with head-mounted devices (goggles and smartphones) that are limited by their display/screen sizes. There are some issues with limited battery lifetime, hence the need for rechargeable batteries. Also, the standard dimensions of the cubicles do not allow for group visualizations. The current cubicle has limitations associated with complex gestures and movements involving two hands, as one hand is currently needed for holding the mobile phone.

Practical implications

The use of mixed reality cubicles would allow and enhance information visualization for big data in real time and without restrictions. There is potential to extend this for use in exploring and studying otherwise inaccessible locations such as sea beds and underground caves.

Social implications

Following on from this study, further work could be done on developing and applying mixed reality cubicles that would impact businesses, health and entertainment.

Originality/value

The originality of this paper lies in the unique approach used in the study of developments and applications of mixed reality cubicles and their impacts on learning. The diverse nature and locations of the participants, drawn from many countries and comprising both tutors and students, add value to the present study. The value of this research includes, amongst others, the useful results obtained and the scope for future developments.

Details

The International Journal of Information and Learning Technology, vol. 37 no. 1-2
Type: Research Article
ISSN: 2056-4880

Details

Marketing in Customer Technology Environments
Type: Book
ISBN: 978-1-83909-601-3

Article
Publication date: 8 February 2021

Shim Lew, Tugce Gul and John L. Pecore

Abstract

Purpose

Simulation technology has been used as a viable alternative to provide a real-life setting in teacher education. Applying mixed-reality classroom simulations to English for Speakers of Other Languages (ESOL) teacher preparation, this qualitative case study aims to examine how pre-service teachers (PSTs) practice culturally and linguistically responsive teaching to work with an English learner (EL) avatar and other avatar students.

Design/methodology/approach

Using an embedded single case study, three PSTs’ teaching simulations and interviews were collected and analyzed.

Findings

This study found PST participants made meaningful connections between theory and practices of culturally and linguistically responsive teaching, particularly by connecting academic concepts to students’ life experiences, promoting cultural diversity, using instructional scaffolding and creating a safe environment. Nevertheless, they needed further improvement in incorporating cultural diversity into content lessons, creating a challenging and supportive classroom and developing interactional scaffolding for ELs’ language development. The findings also show that while PST participants perceived simulation technology as very beneficial, expanding the range of technological affordances could provide PSTs an opportunity to undertake a full range of critical teaching strategies for ELs.

Originality/value

This research contributes to broadening the realm of mixed-reality technology by applying it to ESOL teacher education and has implications for both ESOL teacher educators and simulation technology researchers.

Article
Publication date: 12 July 2007

David Mountain and Fotis Liarokapis

Abstract

Purpose

The motivation for this research is the emergence of mobile information systems where information is disseminated to mobile individuals via handheld devices. A key distinction between mobile and desktop computing is the significance of the relationship between the spatial location of an individual and the spatial location associated with information accessed by that individual. Given a set of spatially referenced documents retrieved from a mobile information system, this set can be presented using alternative interfaces of which two presently dominate: textual lists and graphical two‐dimensional maps. The purpose of this paper is to explore how mixed reality interfaces can be used for the presentation of information on mobile devices.

Design/methodology/approach

A review of relevant literature is followed by a proposed classification of four alternative interfaces. Each interface is the result of a rapid prototyping approach to software development. Some brief evaluation is described, based upon thinking aloud and cognitive walk‐through techniques with expert users.

Findings

The most suitable interface for mobile information systems is likely to be user‐ and task‐dependent; however, mixed reality interfaces offer promise in allowing mobile users to make associations between spatially referenced information and the physical world.

Research limitations/implications

Evaluation of these interfaces is limited to a small number of expert evaluators, and does not include a full‐scale evaluation with a large number of end users.

Originality/value

The application of mixed reality interfaces to the task of displaying spatially referenced information for mobile individuals.

Details

Aslib Proceedings, vol. 59 no. 4/5
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 9 January 2023

Omobolanle Ruth Ogunseiju, Nihar Gonsalves, Abiola Abosede Akanmu, Yewande Abraham and Chukwuma Nnaji

Abstract

Purpose

Construction companies are increasingly adopting sensing technologies like laser scanners, making it necessary to upskill the future workforce in this area. However, limited jobsite access hinders experiential learning of laser scanning, necessitating an alternative learning environment. Previously, the authors explored mixed reality (MR) as an alternative learning environment for laser scanning, but to promote seamless learning, such learning environments must be proactive and intelligent. Toward this, this study investigated the potential of classification models for detecting user difficulties and learning stages in the MR environment.

Design/methodology/approach

The study adopted machine learning classifiers on eye-tracking data and think-aloud data for detecting learning stages and interaction difficulties during the usability study of laser scanning in the MR environment.

Findings

The classification models demonstrated high performance, with a neural network classifier showing superior performance (99.9% accuracy) in detecting learning stages and an ensemble classifier achieving the highest accuracy (84.6%) in detecting interaction difficulty during laser scanning.

Research limitations/implications

The findings of this study revealed that eye movement data possess significant information about learning stages and interaction difficulties, and they provide evidence of the potential of smart MR environments for improved learning experiences in construction education. The research implication further lies in the potential of an intelligent learning environment to provide personalized learning experiences, which often culminate in improved learning outcomes. This study further highlights the potential of such an intelligent learning environment to promote inclusive learning, whereby students with different cognitive capabilities can experience learning tailored to their specific needs, irrespective of their individual differences.

Originality/value

The classification models will help detect learners requiring additional support to acquire the necessary technical skills for deploying laser scanners in the construction industry and inform the specific training needs of users to enhance seamless interaction with the learning environment.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Details

The Emerald Handbook of Multi-Stakeholder Communication
Type: Book
ISBN: 978-1-80071-898-2

Article
Publication date: 9 July 2021

Xintian Tu, Chris Georgen, Joshua A. Danish and Noel Enyedy

Abstract

Purpose

This paper aims to show how collective embodiment with physical objects (i.e. props) supports young children's learning through the construction of liminal blends that merge physical, virtual and conceptual resources in a mixed-reality (MR) environment.

Design/methodology/approach

Building on Science through Technology Enhanced Play (STEP), the authors apply the Learning in Embodied Activity Framework to further explore how liminal blends can help us understand learning within MR environments. Twenty-two students from a mixed first- and second-grade classroom participated in a seven-part activity sequence in the STEP environment. The authors applied interaction analysis to analyze how students' actions performed with the physical objects helped them to construct liminal blends that allowed key concepts to be made visible and shared for collective sensemaking.

Findings

The authors found that conceptually productive liminal blends occurred when students constructed connections between the resources in the MR environment and coordinated their embodiment with props to represent new understandings.

Originality/value

This study concludes with implications for how the design of MR environments and teachers' facilitation in them support students in constructing liminal blends and their understanding of complex science phenomena.

Details

Information and Learning Sciences, vol. 122 no. 7/8
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 9 August 2011

Sylvia M. Rabeler

Abstract

Purpose

The objective of this paper is two‐fold: to share information about color and to solicit information about sound, with the ultimate goal of producing a simple formula for generating a cybernetic mixed reality environment; and to serve as a vehicle for inviting conversation at the “Cybernetics: Art, Design and Mathematics 2010” Conference.

Design/methodology/approach

The majority of research in color focuses on the perceptual, experiential phenomenon. Conceptual color, by contrast, is non-experiential: it is color observed as a thought. By studying color from this methodological approach, it can be modeled as a simple computational system of inter-related abstract elements. This not only makes the complexity of the perceptual environment understandable and translatable into abstract data, but also makes the transition from the abstract back to the actual possible.

Findings

The conceptual model approach has yielded a number of features for future study, not least of which is color as a true mathematical system. This is very different from a simple color-coding system.

Practical implications

With more development, the new system may prove to be of significance to future digital design applications. Given that music is also a spatial pattern system, revisiting the age‐old belief that there must be a correlation between color and music may now be productive.

Originality/value

This paper presents a puzzle with thought‐provoking questions, designed to solicit the information needed, in order to determine if a spatial correlation between color and sound is identifiable.

Details

Kybernetes, vol. 40 no. 7/8
Type: Research Article
ISSN: 0368-492X
