Search results

1 – 10 of over 8000
Article
Publication date: 23 August 2022

Siyuan Huang, Limin Liu, Xiongjun Fu, Jian Dong, Fuyu Huang and Ping Lang

Abstract

Purpose

The purpose of this paper is to summarize the existing point cloud target detection algorithms based on deep learning and to provide a reference for researchers in related fields. In recent years, with its outstanding performance in target detection on 2D images, deep learning technology has been applied to light detection and ranging (LiDAR) point cloud data to improve the automation and intelligence level of target detection. However, there are still difficulties and room for improvement in target detection from 3D point clouds. In this paper, vehicle LiDAR target detection is chosen as the research subject.
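
The detectors this survey covers typically consume raw LiDAR points after a voxelization step. As a rough illustration only (the paper does not prescribe any particular preprocessing), the following minimal Python sketch groups a point cloud into occupied voxels; all names and sizes are hypothetical.

```python
import numpy as np

def voxelize(points, voxel_size=0.5, max_points_per_voxel=32):
    """Group LiDAR points (N, 3) into occupied voxels.

    Returns a dict mapping integer voxel indices (i, j, k) to the
    points falling inside, capped at max_points_per_voxel each.
    """
    indices = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for idx, point in zip(map(tuple, indices), points):
        bucket = voxels.setdefault(idx, [])
        if len(bucket) < max_points_per_voxel:
            bucket.append(point)
    return voxels

# Example: 10,000 random points in a 40 m x 40 m x 4 m scene.
cloud = np.random.rand(10000, 3) * np.array([40.0, 40.0, 4.0])
grid = voxelize(cloud)
print(f"{len(grid)} occupied voxels")
```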

Design/methodology/approach

Firstly, the challenges of applying deep learning to point cloud target detection are described; secondly, solutions from the relevant research are surveyed in response to these challenges. The currently popular target detection methods are classified, and some are compared to illustrate their advantages and disadvantages. Moreover, approaches to improving the accuracy of network target detection are introduced.

Findings

Finally, this paper summarizes the shortcomings of existing methods and outlines prospective development trends.

Originality/value

This paper introduces existing point cloud target detection methods based on deep learning, which can be applied in driverless vehicles, digital maps, traffic monitoring and other fields, and provides a reference for researchers in related fields.

Details

Sensor Review, vol. 42 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 27 July 2012

Leonard Rusli and Anthony Luscher

Abstract

Purpose

The purpose of this paper is to create an assembly verification system that is capable of verifying complete assembly and torque for each individual fastener.

Design/methodology/approach

The 3D positions of the tool used to torque the fasteners and of the assembly pallet are tracked using an infrared (IR) tracking system. Sets of retro-reflective markers attached to the tool and the assembly are tracked by multiple IR cameras. Software triangulates the relative position of the tool to identify the fastener being torqued, and the torque value is obtained from the tool controller. By combining the tool's location with the torque value from the controller, the assembly of each individual fastener can be verified and its achieved torque recorded.
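
The abstract does not give implementation details, but the core logic of combining the triangulated tool position with the controller's torque reading can be sketched as follows. This is a minimal illustration with hypothetical fastener coordinates and thresholds, not the authors' implementation.

```python
import numpy as np

# Hypothetical fastener positions on the pallet, in the pallet frame (metres).
FASTENERS = {
    "bolt_1": np.array([0.000, 0.000, 0.010]),
    "bolt_2": np.array([0.048, 0.000, 0.010]),  # 48 mm apart, above the 24 mm resolution
    "bolt_3": np.array([0.000, 0.048, 0.010]),
}
MAX_MATCH_DISTANCE = 0.012  # accept a match within 12 mm (half the 24 mm spacing)

def identify_fastener(tool_tip):
    """Match the triangulated tool-tip position to the nearest known fastener."""
    name, pos = min(FASTENERS.items(), key=lambda kv: np.linalg.norm(kv[1] - tool_tip))
    if np.linalg.norm(pos - tool_tip) > MAX_MATCH_DISTANCE:
        return None  # tool is not on any known fastener
    return name

def record_torque(tool_tip, torque_nm, log):
    """Combine the tracked tool position with the controller's torque value."""
    fastener = identify_fastener(tool_tip)
    if fastener is not None:
        log[fastener] = torque_nm
    return fastener

log = {}
print(record_torque(np.array([0.049, 0.001, 0.012]), 22.5, log))  # -> bolt_2
print(log)
```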

Findings

The IR tracking is capable of tracking each marker ball to within 2-3 mm, giving a practical resolution of 24 mm between fasteners while maintaining 99.9999 per cent reliability with no false-positive fastener identifications.

Research limitations/implications

This experiment was run under simulated assembly line lighting conditions.

Practical implications

By verifying assembly reliably, the need for manual torque checks is eliminated, yielding significant cost savings. It also allows electric tools to be programmed in real time according to the fastener identified in proximity.

Originality/value

Currently, assembly verification is done using torque values only. In an automated assembly line, each process might involve fastening multiple fasteners. Using this system, a new level of assembly verification is achieved by recording each assembled fastener and its associated torque.

Article
Publication date: 11 July 2016

Meiyin Liu, SangUk Han and SangHyun Lee

Abstract

Purpose

As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of human motion data capture for on-site motion sensing and analysis.

Design/methodology/approach

This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints in each image, the proposed method tracks the locations of the body joints over all successive frames by learning from the initialized body posture. The body joints corresponding to the tracked ones are then identified and matched in the image sequences from the other lens and reconstructed in 3D space through triangulation to build 3D skeleton models. For validation, a lab test is conducted to evaluate the accuracy and working ranges of the proposed method.
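
The reconstruction step described here is standard two-view triangulation. As a sketch only (the paper's tracking and matching stages are not reproduced), the following Python fragment recovers a 3D joint from its pixel coordinates in two calibrated views via linear (DLT) triangulation; the camera parameters are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one joint from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coordinates
    of the same body joint in each image. Returns the 3D point.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # solution is the null vector of A
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Toy rig: two identical cameras, the second offset 0.3 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.3], [0], [0]])])
X_true = np.array([0.1, -0.2, 2.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # ~ [0.1, -0.2, 2.0]
```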

Findings

Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can potentially be used for site data collection. The proposed approach thus has potential for various field analyses of construction workers' safety and ergonomics.

Originality/value

Recently, motion capture technologies have rapidly been developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores two smartphones used as a stereo camera as a potentially suitable means of data collection in construction, given their lower operational constraints (e.g. no on-body sensors required, less sensitivity to sunlight and flexible operating ranges).

Details

Construction Innovation, vol. 16 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 26 October 2018

Tharmalingam Sivarupan, Mohamed El Mansori, Keith Daly, Mark Noel Mavrogordato and Fabrice Pierron

Abstract

Purpose

Micro-focus X-ray computed tomography (CT) can be used to quantitatively evaluate the packing density and pore connectivity of a sand mould and to provide the basis for specimen-derived simulations of its gas permeability. This non-destructive experiment, and the simulations that follow from it, can be performed on any section of a sand mould of any size just before casting to validate the required properties. This paper aims to describe the challenges of this method and to use it to simulate the gas permeability of 3D printed sand moulds for a range of controlling parameters. The permeability simulations are compared against experimental results using traditional measurement techniques. The results suggest that a minimum volume of only 700 × 700 × 700 µm³ is required to obtain a simulated permeability for a specimen that is reliable and more representative than the value obtained by the traditional measurement technique.

Design/methodology/approach

X-ray tomography images were used to reconstruct 3D models of the 3D printed sand mould specimens, gas permeability was simulated on those models, and the results were compared with experimental measurements of the same specimens.
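
The abstract does not describe the simulation pipeline itself. A typical first step, sketched below under assumed values (voxel size, threshold), is to segment the pore space of the reconstructed volume, crop the 700 × 700 × 700 µm³ subvolume discussed in the findings and keep only the pore space connected to an inlet face, since isolated pores cannot carry gas flow.

```python
import numpy as np
from scipy import ndimage

VOXEL_UM = 10.0                    # assumed CT voxel edge length (µm)
SIDE = int(round(700 / VOXEL_UM))  # 700 µm -> 70 voxels per edge

def connected_pore_fraction(ct_volume, threshold):
    """Segment pores (low attenuation) in a 700^3 µm^3 subvolume and
    keep only the pore space connected to the inlet face (z = 0)."""
    sub = ct_volume[:SIDE, :SIDE, :SIDE]
    pores = sub < threshold                # binary pore mask
    labels, _ = ndimage.label(pores)       # default 6-connectivity (face neighbours)
    inlet = labels[0]                      # the z = 0 slice
    inlet_labels = np.unique(inlet[inlet > 0])
    connected = np.isin(labels, inlet_labels)
    return connected.mean()

# Synthetic stand-in for a reconstructed tomogram.
volume = np.random.rand(100, 100, 100)
print(f"connected porosity: {connected_pore_fraction(volume, 0.4):.3f}")
```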

Findings

The influence of printing parameters, especially the re-coater speed, on the pore connectivity of the 3D printed sand mould and the related permeability has been identified. Characterisation of these sand moulds using X-ray CT, and its suitability compared to traditional means, is also studied. While density and three-point bending (3PB) strength are measures of mould quality, the pore connectivity from the tomographic images relates precisely to the permeability. The main conclusions of the present study are as follows. A minimum sample size of 700 × 700 × 700 µm³ is required to provide representative permeability results; this was obtained from sand specimens with an average grain size of 140 µm, using the tomographic volume images to define a 3D mesh on which the permeability calculations were run. Z-direction permeability is always lower than that in the X- and Y-directions because of the lower X- (120/140 µm) and Y- (101.6 µm) resolutions of the furan droplets. The anisotropic permeability of the 3D printed sand mould is due mainly to the X-directional resolution of the furan droplets, which is the only adjustable one; the Y-directional resolution is a fixed distance (101.6 µm) between the printhead nozzles, and the Z-directional resolution is usually 280 µm, twice the average sand grain size. A non-destructive and more representative permeability value can be obtained by computer simulation on reconstructed 3D X-ray tomography images of a specific location of a 3D printed sand mould. This saves the time and effort of printing a separate specimen for the traditional test, which may not be representative of the printed mould.

Originality/value

The experimental results are compared with the computer-simulated results.

Details

Rapid Prototyping Journal, vol. 25 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 15 December 2020

Reyes Rios-Cabrera, Ismael Lopez-Juarez, Alejandro Maldonado-Ramirez, Arturo Alvarez-Hernandez and Alan de Jesus Maldonado-Ramirez

Abstract

Purpose

This paper aims to present an object detection methodology to categorize 3D object models in an efficient manner. The authors propose a dynamically generated hierarchical architecture that computes objects' 3D poses very quickly so that mobile service robots can grasp them.

Design/methodology/approach

The methodology used in this study is based on a dynamic pyramid search, fast template representation, metadata and context-free grammars. In the experiments, the authors use an omnidirectional KUKA mobile manipulator equipped with an RGB-D camera to localize objects requested by humans, and the robot successfully finds their 3D poses. The proposed architecture is based on efficient object detection and visual servoing. The proposal is not restricted to specific robots or objects and can scale as needed.
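
The paper's dynamic pyramid search is not spelled out in the abstract. As a generic illustration of the coarse-to-fine idea it builds on, the sketch below matches a template exhaustively only at the coarsest pyramid level and then refines the hit at each finer level; the sizes and scoring are assumptions, not the authors' algorithm.

```python
import numpy as np

def pyramid_search(image, template, levels=3):
    """Coarse-to-fine template localization: match at the coarsest
    level, then refine only around the best hit at each finer level."""
    def downsample(a):                       # simple 2x box filter
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        return a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def best_match(img, tpl, region=None):
        th, tw = tpl.shape
        y0, y1, x0, x1 = region or (0, img.shape[0] - th, 0, img.shape[1] - tw)
        best, score = (y0, x0), np.inf
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                d = np.sum((img[y:y + th, x:x + tw] - tpl) ** 2)
                if d < score:
                    best, score = (y, x), d
        return best

    imgs, tpls = [image], [template]
    for _ in range(levels - 1):
        imgs.append(downsample(imgs[-1]))
        tpls.append(downsample(tpls[-1]))

    y, x = best_match(imgs[-1], tpls[-1])    # exhaustive only at the coarse level
    for lvl in range(levels - 2, -1, -1):    # refine around the hit at finer levels
        y, x = y * 2, x * 2
        th, tw = tpls[lvl].shape
        H, W = imgs[lvl].shape
        region = (max(0, y - 2), min(H - th, y + 2), max(0, x - 2), min(W - tw, x + 2))
        y, x = best_match(imgs[lvl], tpls[lvl], region)
    return y, x

img = np.random.rand(128, 128)
tpl = img[40:56, 72:88].copy()
print(pyramid_search(img, tpl))  # ~ (40, 72)
```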

Findings

The authors present the dynamic categorization using context-free grammars and 3D object detection and, through several experiments, perform a proof of concept. The results are promising, showing that the methods can scale to more complex scenes and can be used in future real-world applications where mobile robots are needed, such as service robotics or industry in general.

Research limitations/implications

The experiments were carried out using a mobile KUKA youBot. Scalability and more robust algorithms will improve the present proposal. In the first stage, the authors carried out an experimental validation.

Practical implications

The current proposal describes a scalable architecture, where more agents can be added or reprogrammed to handle more complicated tasks.

Originality/value

The main contribution of this study resides in the dynamic categorization scheme for fast detection of 3D objects, and in the issues and experiments examined to test the viability of the methods. Usually, the state of the art treats categories as rigid and makes static queries to datasets. In the present approach, there are no fixed categories; they are created and combined on the fly to speed up detection.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 3 April 2017

Wenjun Zhu, Peng Wang, Rui Li and Xiangli Nie

Abstract

Purpose

This paper aims to propose a novel real-time three-dimensional (3D) model-based work-piece tracking method with a monocular camera for high-precision assembly. Real-time tracking of 3D work-pieces is becoming increasingly important for industrial tasks such as work-piece grasping and assembly, especially in complex environments.

Design/methodology/approach

A three-step process is used: an offline static global library generation process, an online dynamic local library updating and selection process, and a 3D work-piece localization process. In the offline process, computer-aided design models of the work-piece are used to generate a set of discrete two-dimensional (2D) hierarchical-view matching libraries. In the online process, the previous 3D location of the work-piece is used to predict its next location range, and a matching library with a small number of 2D hierarchical views is selected from the dynamic local library. The work-piece is then localized with high precision at real-time speed in the 3D work-piece localization process.
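
The online library selection step can be illustrated with a toy sketch: given a viewpoint range predicted from the work-piece's previous location, keep only the 2D views inside that range so the matcher searches a small local library. The viewpoint parameterization and window sizes below are hypothetical.

```python
# Offline: a global library of discrete 2D views, one per viewpoint.
# Each entry is hypothetical: (azimuth_deg, elevation_deg, template_id).
GLOBAL_LIBRARY = [(az, el, f"view_{az}_{el}")
                  for az in range(0, 360, 10) for el in range(-30, 40, 10)]

def select_local_library(pred_az, pred_el, az_window=30.0, el_window=20.0):
    """Online step: keep only views whose viewpoint lies inside the window
    predicted from the work-piece's previous 3D location, so the matching
    stage searches a small library instead of the global one."""
    def ang_diff(a, b):  # shortest angular distance, handles wrap-around
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return [v for v in GLOBAL_LIBRARY
            if ang_diff(v[0], pred_az) <= az_window and abs(v[1] - pred_el) <= el_window]

local = select_local_library(pred_az=350.0, pred_el=10.0)
print(len(GLOBAL_LIBRARY), "->", len(local), "views to match")
```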

Findings

The method is suitable for texture-less work-pieces in industrial applications.

Originality/value

The small range of the library enables real-time matching. Experimental results demonstrate the high accuracy and high efficiency of the proposed method.

Details

Assembly Automation, vol. 37 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 14 June 2013

Edgardo Molina, Alpha Diallo and Zhigang Zhu

Abstract

Purpose

The purpose of this paper is to propose a local orientation and navigation framework based on visual features that provide location recognition, context augmentation, and viewer localization information to a blind or low‐vision user.

Design/methodology/approach

The authors consider three types of "visual noun" feature: signage, visual text and visual icons. These are proposed as a low-cost method for augmenting environments and are used in combination with an RGB-D sensor and a simplified SLAM algorithm to develop a navigation-assistance framework suitable for blind and low-vision users.

Findings

It was found that signage detection can not only help a blind user find a location but can also provide accurate orientation and location information to guide the user through a complex environment. The combination of visual nouns for orientation and RGB-D sensing for traversable path finding can be a cost-effective solution for navigation assistance for blind and low-vision users.

Research limitations/implications

This is a first step towards a new approach to self-localization and local navigation of a blind user using both signs and 3D data. The approach is meant to be cost-effective, but it only works in man-made scenes where many signs exist or can be placed, and where those signs are relatively permanent in appearance and location.

Social implications

According to 2012 World Health Organization figures, 285 million people are visually impaired, of whom 39 million are blind. This project will have a direct impact on this community.

Originality/value

Signage detection has been widely studied for assisting visually impaired people in finding locations, but this paper provides the first attempt to use visual nouns as visual features to accurately locate and orient a blind user. The combination of visual nouns with 3D data from an RGB‐D sensor is also new.

Details

Journal of Assistive Technologies, vol. 7 no. 2
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 2 March 2015

Junqiang Su, Bingfei Gu, Guolian Liu and Bugao Xu

Abstract

Purpose

The purpose of this paper is to focus on the determination of distance ease of pants from the 3D scanning data of a clothed and unclothed body.

Design/methodology/approach

A human model whose body size conformed to the Chinese dummy standard and four pairs of suit pants were chosen for the study. The scanned surfaces of both the body and the pants were superimposed based on preset markers. The circumferences at four important positions (abdomen, hip, thigh and knee) were selected for pant ease determination. At each position (e.g. the hip), the two cross-sections were divided into several characteristic sections, and the distance ease, i.e. the space between the cross-sections, was measured at each section. Regression equations between the distance ease and the ease allowance were then derived so that the distance ease could be estimated.
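
The regression step is a simple least-squares fit per position. A minimal sketch, with invented numbers standing in for the measured values from the four pants, could look like this:

```python
import numpy as np

# Hypothetical paired measurements at one position (e.g. the hip):
# ease allowance of each pant (cm) vs. measured mean distance ease (cm).
ease_allowance = np.array([4.0, 6.0, 8.0, 10.0])   # from the four suit pants
distance_ease = np.array([0.9, 1.4, 1.8, 2.4])     # from the scan cross-sections

# Fit distance_ease = a * ease_allowance + b by least squares.
a, b = np.polyfit(ease_allowance, distance_ease, 1)
print(f"distance ease ~= {a:.3f} * ease allowance + {b:.3f}")

# Estimate the distance ease a 7 cm ease allowance would produce.
print(np.polyval([a, b], 7.0))
```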

Findings

A relationship was found between the distance ease and the ease allowance. A mathematical model was also established to convert the distance ease into increments of a pant pattern, which helps to develop an individual pant pattern automatically.

Social implications

The paper provides the concept and method for customizing pants using 3D scanning data of the body. It creates a link between the 3D distance ease and the 2D ease allowance, together with a model to calculate the distance-ease increments that warrant proper ease distribution. The method helps to develop an individualized garment pattern automatically from a basic, tight pant pattern.

Originality/value

Understanding the relationship between the distance ease, the ease allowance and pattern increments could help develop an individual apparel pattern from 3D measurements. This paper shows a way to solve the problem of distributing apparel ease in a virtual environment and of converting body measurements from a 3D scanner into personalized apparel patterns.

Details

International Journal of Clothing Science and Technology, vol. 27 no. 1
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 12 April 2018

Abdul Fatah Firdaus Abu Hanipah and Khairul Nizam Tahar

Abstract

Purpose

Laser scanning is used to measure and model objects using point cloud data generated by laser pulses. Conventional techniques to construct 3D models are time-consuming, costly and labour-intensive. The purpose of this paper is to assess the 3D model of the Sultan Salahuddin Abdul Aziz Shah Mosque's main dome using a terrestrial laser scanner.

Design/methodology/approach

A laser scanner works by line of sight, which means that multiple scans must be taken from different viewpoints to ensure a complete data set. Targets must be spread in all directions and should be placed on fixed structures and flat surfaces for both the normal scan and the fine scan. After the scanning operation, the point cloud data from the laser scanner were cleaned and registered before a 3D model could be developed.
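
Registration of overlapping scans from matched targets is, at its core, a rigid-body fit. The sketch below uses the standard Kabsch/SVD solution on corresponding target coordinates; it is illustrative only and not the software used in the study.

```python
import numpy as np

def register_scans(src, dst):
    """Rigid registration (Kabsch): find R, t minimizing ||R @ src + t - dst||
    from matched target coordinates seen in two overlapping scans."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Three or more shared targets fix the transform; here a synthetic check.
rng = np.random.default_rng(0)
targets = rng.random((5, 3)) * 10.0
angle = np.radians(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moved = targets @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = register_scans(targets, moved)
print(np.allclose(R, R_true), np.round(t, 3))
```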

Findings

The 3D model was successfully reconstructed. The samples are based on the triangle dimension, curve line, horizontal dimension and vertical dimension at the dome. The standard deviation and accuracy were calculated from a comparison of the 21 samples taken between the high-resolution and low-resolution scanning data.

Originality/value

There are many ways to develop a 3D model and, based on this study, the less complex ways can also produce the best results. The authors apply different types of dimensions for the 3D model assessment, which have not been considered in previous work.

Details

International Journal of Building Pathology and Adaptation, vol. 36 no. 2
Type: Research Article
ISSN: 2398-4708

Article
Publication date: 7 June 2013

Guy A. Bingham and Richard Hague

Abstract

Purpose

The purpose of this paper is to investigate, develop and validate a three‐dimensional modelling strategy for the efficient generation of conformal textile data suitable for additive manufacture.

Design/methodology/approach

A series of additive manufactured (AM) textile samples was modelled using currently available computer-aided design software to understand the limitations associated with the generation of conformal data. Results of the initial three-dimensional modelling processes informed the exploration and development of a new, dedicated and efficient modelling strategy that was tested to understand its capabilities.
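
The abstract does not disclose the strategy itself, but the core operation it alludes to (placing textile unit cells on a mapping mesh while "correctly matching the orientation and surface normal") can be sketched as rotating each flat-modelled cell onto the local face normal and translating it to the face centroid. All geometry below is a toy assumption.

```python
import numpy as np

def rotation_from_z(normal):
    """Rotation matrix taking the +z axis onto a unit face normal,
    so a textile unit cell modelled flat can be re-oriented per face."""
    z = np.array([0.0, 0.0, 1.0])
    n = normal / np.linalg.norm(normal)
    v = np.cross(z, n)
    c = float(np.dot(z, n))
    if np.isclose(c, -1.0):                      # opposite normal: flip about x
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues formula, compact form

def place_cells(cell_vertices, face_centroids, face_normals):
    """Instance one unit cell per mapping-mesh face: rotate it onto the
    face normal and translate it to the face centroid."""
    placed = []
    for centroid, normal in zip(face_centroids, face_normals):
        R = rotation_from_z(normal)
        placed.append(cell_vertices @ R.T + centroid)
    return placed

cell = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 0.2]])  # toy link geometry
centroids = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
for verts in place_cells(cell, centroids, normals):
    print(np.round(verts, 3))
```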

Findings

The research demonstrates the dramatically improved capabilities of the developed three-dimensional modelling strategy over existing approaches: complex geometries described as STL data are accurately mapped to a mapping mesh without distortion, with orientation and surface normals correctly matched.

Originality/value

To date, the generation of data for AM textiles has been a manual and time-consuming process. The research presents a new, dedicated methodology for the efficient generation of complex, conformal AM textile data that will underpin further research in this area.

Details

Rapid Prototyping Journal, vol. 19 no. 4
Type: Research Article
ISSN: 1355-2546
