Search results

1 – 10 of over 3000
Article
Publication date: 3 April 2017

Zhiqiang Yu, Qing Shi, Huaping Wang, Ning Yu, Qiang Huang and Toshio Fukuda

Abstract

Purpose

The purpose of this paper is to present state-of-the-art approaches for precise operation of a robotic manipulator on a macro- to micro/nanoscale.

Design/methodology/approach

This paper first briefly discussed fundamental issues associated with precise operation of a robotic manipulator on the macro- to micro/nanoscale. Second, it described and compared the characteristics of the basic components (i.e. mechanical parts, actuators, sensors and control algorithms) of the robotic manipulator. Specifically, commonly used mechanisms of the manipulator were classified and analyzed, and the intuitive meaning and applications of its actuators were explained and compared in detail. Moreover, related research on the general control algorithms and visual control used in robotic manipulators to achieve precise operation was also discussed.

Findings

Remarkable achievements in dexterous mechanical design, excellent actuators, accurate perception, optimized control algorithms, etc., have been made in precise operations of a robotic manipulator. Precise operation is critical for dealing with objects which need to be manufactured, modified and assembled. The operational accuracy is directly affected by the performance of mechanical design, actuators, sensors and control algorithms. Therefore, this paper provides a categorization showing the fundamental concepts and applications of these characteristics.

Originality/value

This paper presents a categorization of the mechanical design, actuators, sensors and control algorithms of robotic manipulators in the macro- to micro/nanofield for precise operation.

Details

Assembly Automation, vol. 37 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 16 January 2017

Shervan Fekriershad and Farshad Tajeripour

Abstract

Purpose

The purpose of this paper is to propose a color-texture classification approach which uses color sensor information and texture features jointly. High accuracy, low noise sensitivity and low computational complexity are specified aims for this proposed approach.

Design/methodology/approach

One of the most efficient texture analysis operators is local binary patterns (LBP). The proposed approach includes two steps. First, a noise-resistant version of color LBP is proposed to decrease its sensitivity to noise; this step combines color sensor information using an AND operation. In the second step, a significant-point selection algorithm is proposed to select significant LBPs. This phase decreases the final computational complexity while increasing the accuracy rate.
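The two steps described above can be illustrated in code. The following is a minimal sketch of the AND-combination idea only, not the authors' HCLBP implementation: it computes a plain 8-neighbour LBP per color channel and combines the channels' bit patterns with a bitwise AND, so a neighbour bit survives only if it agrees across all three channels, which is one simple way to suppress bits flipped by single-channel noise.

```python
import numpy as np

def lbp_codes(channel):
    """8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = channel[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = channel[1 + dy:channel.shape[0] - 1 + dy,
                        1 + dx:channel.shape[1] - 1 + dx]
        # Set the bit where the neighbour is >= the centre pixel.
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    return codes

def color_lbp_and(img):
    """Combine per-channel LBP bit patterns with a bitwise AND:
    a bit survives only if it is set in every channel."""
    r, g, b = (lbp_codes(img[..., i]) for i in range(3))
    return r & g & b
```

A flat region yields code 255 in every channel (all neighbours equal the centre), while a spike in any single channel zeroes that channel's code and hence the combined code.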

Findings

The proposed approach is evaluated using the Vistex, Outex and KTH-TIPS-2a data sets and compared with some state-of-the-art methods. It is experimentally demonstrated that the proposed approach achieves the highest accuracy. Two further experiments show the low noise sensitivity and low computational complexity of the proposed approach in comparison with previous versions of LBP. Rotation invariance, multi-resolution analysis and general usability are other advantages of the proposed approach.

Originality/value

In the present paper, a new version of LBP is proposed originally, which is called hybrid color local binary patterns (HCLBP). HCLBP can be used in many image processing applications to extract color/texture features jointly. Also, a significant point selection algorithm is proposed for the first time to select key points of images.

Article
Publication date: 8 June 2020

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng

Abstract

Purpose

The purpose of this study is to use visual and inertial sensors to achieve real-time location. How to provide an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as inertial sensor deviation calibration, unsynchronized visual and inertial data acquisition and large amount of stored data.

Design/methodology/approach

First, this study demonstrates that the vanishing point (VP) evaluation function improves the precision of extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor. The Sequential Similarity Detection Algorithm (SSDA) and Random Sample Consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCP in the estimated region of interest. Second, the model of visual pose is established by using the parameters of the camera itself, VP and NGCP. The model of inertial pose is established by pre-integrating. Third, location is calculated by fusing the model of vision and inertia.
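The inertial side of this pipeline relies on pre-integration. As a hedged sketch of that standard technique (the paper's exact formulation is not given in the abstract), the following accumulates relative rotation, velocity and position from raw gyroscope and accelerometer samples between two camera frames, using a first-order model with a Rodrigues rotation update:

```python
import numpy as np

def preintegrate(gyro, acc, dt, g=np.array([0.0, 0.0, -9.81])):
    """First-order pre-integration of IMU samples between two camera
    frames: accumulate relative rotation R, velocity v and position p
    in the frame of the first sample. gyro/acc are (N, 3) arrays of
    angular rate (rad/s) and specific force (m/s^2); dt is the IMU
    sample period in seconds."""
    R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, acc):
        a_world = R @ a + g               # gravity-compensated acceleration
        p = p + v * dt + 0.5 * a_world * dt ** 2
        v = v + a_world * dt
        theta = np.linalg.norm(w) * dt    # incremental rotation angle
        if theta > 1e-12:
            k = w / np.linalg.norm(w)     # rotation axis
            K = np.array([[0.0, -k[2], k[1]],
                          [k[2], 0.0, -k[0]],
                          [-k[1], k[0], 0.0]])
            # Rodrigues formula for the incremental rotation.
            R = R @ (np.eye(3) + np.sin(theta) * K
                     + (1.0 - np.cos(theta)) * (K @ K))
    return R, v, p
```

The predicted relative pose can then bound the region of interest in which SSDA/RANSAC matching of the NGCP is performed, as the abstract describes.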

Findings

In this paper, a novel method is proposed to fuse visual and inertial sensors for indoor localization. The authors describe the building of an embedded hardware platform and compare the results with those of a mature method and with POSAV310.

Originality/value

This paper proposes a VP evaluation function that is used to extract the best vanishing point from the intersections of multiple parallel lines. To improve extraction speed between adjacent frames, the authors first propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model was established using the VP, the NGCP and the calibration of the inertial sensor. This theory offers the linear processing equations of the gyroscope and accelerometer through the models of visual and inertial pose.

Article
Publication date: 14 June 2013

Edgardo Molina, Alpha Diallo and Zhigang Zhu

Abstract

Purpose

The purpose of this paper is to propose a local orientation and navigation framework based on visual features that provide location recognition, context augmentation, and viewer localization information to a blind or low‐vision user.

Design/methodology/approach

The authors consider three types of "visual noun" features (signage, visual‐text and visual‐icons), proposed as a low‐cost method for augmenting environments. These are used in combination with an RGB‐D sensor and a simplified SLAM algorithm to develop a navigation assistance framework suitable for blind and low‐vision users.

Findings

It was found that signage detection can not only help a blind user find a location but also give accurate orientation and location information to guide the user in navigating a complex environment. The combination of visual nouns for orientation and RGB‐D sensing for traversable path finding can be a cost‐effective solution for navigation assistance for blind and low‐vision users.

Research limitations/implications

This is the first step of a new approach to the self‐localization and local navigation of a blind user using both signs and 3D data. The approach is meant to be cost‐effective, but it only works in man‐made scenes where many signs exist or can be placed and are relatively permanent in appearance and location.

Social implications

According to 2012 World Health Organization estimates, 285 million people are visually impaired, of whom 39 million are blind. This project will have a direct impact on this community.

Originality/value

Signage detection has been widely studied for assisting visually impaired people in finding locations, but this paper provides the first attempt to use visual nouns as visual features to accurately locate and orient a blind user. The combination of visual nouns with 3D data from an RGB‐D sensor is also new.

Details

Journal of Assistive Technologies, vol. 7 no. 2
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 11 January 2021

Gursans Guven and Esin Ergen

Abstract

Purpose

The purpose of this study is to monitor the progress of construction activities in an automated way by using sensor-based technologies for tracking multiple resources that are used in building construction.

Design/methodology/approach

An automated on-site progress monitoring approach was proposed and a proof-of-concept prototype was developed, followed by a field experimentation study at a high-rise building construction site. The developed approach was used to integrate sensor data collected from multiple resources used in different steps of an activity. It incorporated domain-specific heuristics related to the site layout conditions and the method of the activity.

Findings

The prototype estimated the overall progress with 95% accuracy. More accurate and up-to-date progress measurement was achieved compared to the manual approach, and the need for visual inspections and manual data collection in the field was eliminated. Overall, the field experiments demonstrated that low-cost implementation is possible if readily available or embedded sensors on equipment are used.

Originality/value

Previous studies either monitored one particular piece of equipment or the developed approaches were only applicable to limited activity types. This study demonstrated that it is technically feasible to determine progress at the site by fusing sensor data that are collected from multiple resources during the construction of building superstructure. The rule-based reasoning algorithms, which were developed based on a typical work practice of cranes and hoists, can be adapted to other activities that involve transferring bulk materials and use cranes and/or hoists for material handling.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 9 October 2019

Elham Ali Shammar and Ammar Thabit Zahary

Abstract

Purpose

The internet has radically changed the way people interact in the virtual world, in their careers and in their social relationships. IoT technology has added a new vision to this process by enabling connections between smart objects and humans, and between smart objects themselves, leading to anything, anytime, anywhere, any-media communications. IoT allows objects to physically see, hear, think and perform tasks by having them talk to each other, share information and coordinate decisions. To enable this vision, IoT utilizes technologies such as ubiquitous computing, context awareness, RFID, WSN, embedded devices, CPS, communication technologies and internet protocols. IoT is considered to be the future internet, significantly different from the internet we use today. The purpose of this paper is to provide up-to-date literature on trends in IoT research, driven by the need for convergence of several interdisciplinary technologies and new applications.

Design/methodology/approach

A comprehensive IoT literature review has been performed in this paper as a survey. The survey starts by providing an overview of IoT concepts, visions and evolutions. IoT architectures are also explored. Then, the most important components of IoT are discussed, including a thorough discussion of IoT operating systems such as TinyOS, Contiki OS, FreeRTOS and RIOT. A review of IoT applications is also presented, and finally, IoT challenges that researchers may encounter are introduced.

Findings

Studies of the IoT literature and of IoT projects show the disproportionate importance of technology in IoT projects, which are often driven by technological interventions rather than by innovation in the business model. There are serious concerns about the dangers of IoT growth, particularly in the areas of privacy and security; hence, industry and government have begun addressing them. In the end, what makes IoT exciting is that we do not yet know which use cases will significantly influence our lives.

Originality/value

This survey provides a comprehensive literature review on IoT techniques, operating systems and trends.

Details

Library Hi Tech, vol. 38 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 21 January 2021

Mojtaba Valinejadshoubi, Osama Moselhi and Ashutosh Bagchi

Abstract

Purpose

To mitigate problems in sensor-based facility management (FM), such as the lack of detailed visual information about a built facility and the maintenance of large-scale sensor deployments, an integrated data source for the facility's life cycle should be used. Building information modeling (BIM) provides a useful visual model and database that can serve as a repository for all data captured or created during the facility's life cycle. It can be used for modeling the sensing-based data collection system and as a source of all information about smart objects such as the sensors used for that purpose. Although a few studies have integrated BIM with sensor-based monitoring systems, providing an integrated BIM platform to improve communication between facility managers (FMs) and Internet of Things (IoT) companies when sensors fail has received little attention in the technical literature. Therefore, the purpose of this paper is to conceptualize and develop a BIM-based system architecture for fault detection and alert generation for malfunctioning FM sensors in smart IoT environments during the operational phase of a building, to ensure minimal disruption to monitoring services.

Design/methodology/approach

This paper describes an attempt to examine the applicability of BIM to an efficient sensor failure management system in smart IoT environments during the operational phase of a building. For this purpose, a seven-story office building with four typical types of FM-related sensors, with all associated parameters, was modeled in a commercial BIM platform. An integrated workflow was developed in Dynamo, a visual programming tool, to link the sensors' maintenance-related information to a cloud-based tool, providing a fast and efficient communication platform between the building facility manager and IoT companies for intelligent sensor management.

Findings

The information within BIM allows better and more effective decision-making by building facility managers. Integrating building and sensor information within BIM into a cloud-based system can facilitate better communication between the building facility manager and the IoT company for effective IoT system maintenance. Using a developed integrated workflow (comprising three specifically designed modules) in Dynamo, a visual programming tool, the system automatically extracts and sends all essential information, such as the type, model and location of failed sensors, to IoT companies in the event of sensor failure, using a cloud database that supports the timely maintenance and replacement of sensors. The system developed in this study was implemented, and its capabilities were illustrated through a case study. The developed system can help facility managers take timely action in the event of any sensor failure or malfunction, ensuring minimal disruption to monitoring services.

Research limitations/implications

This work has some limitations. While the present study demonstrates the feasibility of using BIM in the maintenance planning of building monitoring systems, the developed workflow could be expanded by integrating additional sensor types, such as occupancy sensors, to automatically record the number of occupants (visitors) and prioritize maintenance work accordingly. The workflow could also be integrated with sensor data and machine learning techniques to automatically identify sensor malfunctions and update the BIM model.

Practical implications

Transferring related information, such as room location, occupancy status, number of occupants, sensor type and model, sensor ID and required action, from the BIM model to the cloud would be extremely helpful for IoT companies: they could visualize workspaces in advance, plan for timely and effective decision-making without any physical inspection, and support maintenance planning decisions such as prioritizing maintenance work by the importance of spaces and the number of occupants. The developed framework is also beneficial for preventive maintenance. The system can be set up according to maintenance and time-based expiration schedules, automatically sharing alerts with FMs and IoT maintenance contractors in advance about the replacement of IoT parts. For effective predictive maintenance planning, machine learning techniques can be integrated into the workflow to predict the future condition of individual IoT components, such as data loggers and sensors, as well as MEP components.

Originality/value

Lack of detailed visual information about a built facility can be a reason behind inefficient facility management. Detecting and repairing failed sensors at the earliest possible time is critical to ensure the functional continuity of monitoring systems; at the same time, the maintenance of large-scale sensor deployments is a significant challenge. Despite its importance, few studies have integrated BIM with sensor-based monitoring systems, and providing an integrated BIM platform to improve communication between facility managers and IoT companies when sensors fail has received little attention. In this paper, a cloud-based BIM platform was developed for the maintenance and timely replacement of sensors, which is critical to ensure minimal disruption to monitoring services in sensor-based FM.

Details

Journal of Facilities Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1472-5967

Article
Publication date: 3 August 2010

Bo Chen and Shanben Chen

Abstract

Purpose

The status of the welding process is difficult to monitor because of the intense disturbances during the process. The purpose of this paper is to use multiple sensors to obtain information about the process from different aspects and to use multi‐sensor information fusion technology to fuse that information, obtaining more precise information about the process than a single sensor alone can provide.

Design/methodology/approach

Arc, visual and sound sensors were used simultaneously to obtain the weld current, weld voltage, weld pool images and weld sound of the pulsed gas tungsten‐arc welding (GTAW) process. Special algorithms were then used to extract the signal features of the different information, and the fuzzy measure and fuzzy integral method was used to fuse the extracted features and predict the penetration status of the welding process.
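The fuzzy-integral fusion step can be illustrated with a standard Sugeno λ-fuzzy-measure integral. This is a generic sketch, not the authors' implementation: the per-sensor confidence values h and fuzzy densities g used in the example are invented for illustration, since the abstract does not give them.

```python
import numpy as np

def lambda_measure(densities):
    """Solve prod(1 + lam * g_i) = 1 + lam for the nonzero Sugeno
    lambda by bisection; lam = 0 when the densities sum to 1."""
    g = np.asarray(densities, dtype=float)
    f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
    s = g.sum()
    if abs(s - 1.0) < 1e-12:
        return 0.0
    if s < 1.0:                      # nonzero root lies in (0, inf)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0:
            hi *= 2.0
    else:                            # nonzero root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(200):             # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(h, densities):
    """Sugeno fuzzy integral: max over min(h_(i), g(A_i)), with h
    sorted in descending order and A_i the top-i information sources
    under the lambda-fuzzy measure."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(densities, dtype=float)
    lam = lambda_measure(g)
    order = np.argsort(-h)
    measure, best = 0.0, 0.0
    for i in order:
        # g(A u {x_i}) = g(A) + g_i + lam * g(A) * g_i
        measure = measure + g[i] + lam * measure * g[i]
        best = max(best, min(h[i], measure))
    return best
```

With three sensors reporting confidences h = (0.9, 0.6, 0.3) and densities g = (0.5, 0.3, 0.2), the integral balances each sensor's evidence against the importance of the sensor coalition supporting it.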

Findings

Experimental results show that the fuzzy measure and fuzzy integral method can effectively utilize the information obtained by the different sensors and obtain better prediction results than a single sensor.

Originality/value

Arc, visual and sound sensors are used simultaneously in pulsed GTAW to obtain information, and the fuzzy measure and fuzzy integral method is used for the first time to fuse the different features of the welding process. Experimental results show that multi‐sensor information obtains better results than a single sensor, providing a new method for monitoring welding status and controlling the welding process more precisely.

Details

Assembly Automation, vol. 30 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 26 March 2021

Riyaz Ali Shaik and Elizabeth Rufus

Abstract

Purpose

This paper aims to review shape sensing techniques that use large area flexible electronics (LAFE). The focus is mainly on the shape perception of humanoid robots using tactile data.

Design/methodology/approach

Research papers on different methodologies for sensing the shape of large-area objects, published in the past 15 years, are reviewed with emphasis on contact-based shape sensors. Fiber-optics-based shape sensing is discussed for comparison.

Findings

LAFE-based shape sensors for humanoid robots that incorporate advanced computational data handling techniques, such as neural networks and machine learning (ML) algorithms, are observed to give the best resolution in 3D shape reconstruction.

Research limitations/implications

The literature review is limited to shape sensing applications of either two- or three-dimensional (3D) LAFE. Optical shape sensing, which is widely used for small areas, is discussed only briefly. Optical scanners provide the best 3D shape reconstruction among noncontact approaches; this paper focuses only on contact-based shape sensing.

Practical implications

Contact-based shape sensing using polymer nanocomposites is a very economical solution compared to optical 3D scanners. Although optical 3D scanners provide a high-resolution, fast scan of an object's 3D shape, they require line of sight and complex image reconstruction algorithms. Using LAFE, larger objects can be scanned with ML and basic electronic circuitry, which greatly reduces cost.

Social implications

LAFE can be used as wearable sensors to monitor critical biological parameters. They can detect the shape of large body parts and aid in designing prosthetic devices. Tactile sensing in humanoid robots is accomplished by the robot's electronic skin, a prime example of a human-machine interface in the workplace.

Originality/value

This paper reviews a unique feature of LAFE: shape sensing of large-area objects. It provides insights into sensor design from mechanical, electrical, hardware and software perspectives. The most suitable approach to sensing the shape of large objects using LAFE is also suggested.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 26 June 2009

Bo Chen, Jifeng Wang and Shanben Chen

Abstract

Purpose

The welding process is complicated and influenced by many interference factors, and a single sensor cannot capture comprehensive information describing it. This paper simultaneously uses different sensors to obtain different information about the welding process, and uses multi‐sensor information fusion technology to fuse that information. By using multiple sensors, this paper aims to describe the welding process more precisely.

Design/methodology/approach

Electrical signals and weld pool images are obtained by an arc sensor and an image sensor, respectively; signal processing and image processing algorithms are then used to extract the features of the signals, and the features are fused by a neural network to predict the backside width of the weld pool.
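The fusion step, a neural network mapping extracted arc and image features to a backside-width estimate, can be sketched generically. The network size, learning rate and synthetic training data below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_fusion_mlp(features, widths, hidden=8, lr=0.05, epochs=2000):
    """Tiny one-hidden-layer MLP (tanh hidden units, linear output)
    trained with full-batch gradient descent on 0.5*MSE to map fused
    arc/image feature vectors to a scalar backside-width estimate.
    Returns a predict(x) closure."""
    X = np.asarray(features, float)
    y = np.asarray(widths, float).reshape(-1, 1)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden activations
        err = (H @ W2 + b2) - y            # dLoss/dprediction
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)   # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: (np.tanh(np.asarray(x, float) @ W1 + b1) @ W2 + b2).ravel()
```

In practice the feature vector would concatenate the arc-sensor and image-derived features described in the abstract; here any numeric matrix of shape (samples, features) works.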

Findings

Comparative experiments show that multi‐sensor fusion technology can predict the weld pool backside width more precisely than a single sensor.

Originality/value

The multi‐sensor fusion technology is used to fuse the different information obtained by different sensors in a gas tungsten arc welding process. This method gives a new approach to obtaining information and describing the welding process.

Details

Sensor Review, vol. 29 no. 3
Type: Research Article
ISSN: 0260-2288
