Search results

1 – 10 of over 43,000
Article
Publication date: 9 June 2023

Wahib Saif and Adel Alshibani

Abstract

Purpose

This paper aims to present a highly accessible and affordable tracking model for earthmoving operations in an attempt to overcome some of the limitations of current tracking models.

Design/methodology/approach

The proposed methodology involves four main processes: acquiring onsite terrestrial images, processing the images into 3D scaled cloud data, extracting volumetric measurements and crew productivity estimations from multiple point clouds using Delaunay triangulation and conducting earned value/schedule analysis and forecasting the remaining scope of work based on the estimated performance. For validation, the tracking model was compared with an observation-based tracking approach for a backfilling site. It was also used for tracking a coarse base aggregate inventory for a road construction project.
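
As an informal illustration of the volumetric step, the sketch below estimates the volume of a scanned stockpile from a scaled point cloud by triangulating its plan-view footprint with SciPy's Delaunay triangulation and summing prisms above a flat base plane. It is a minimal sketch under those assumptions, not the authors' developed code, and the function name and sample data are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

def stockpile_volume(points, base_z=0.0):
    """Estimate the volume between a scanned surface and a flat base plane.

    points : (N, 3) array of scaled point-cloud coordinates in metres.
    base_z : elevation of the assumed reference plane.
    """
    xy, z = points[:, :2], points[:, 2]
    tri = Delaunay(xy)                       # triangulate the plan-view footprint
    volume = 0.0
    for a, b, c in tri.simplices:
        (x0, y0), (x1, y1), (x2, y2) = xy[a], xy[b], xy[c]
        # plan-view area of this triangle
        area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
        mean_height = z[[a, b, c]].mean() - base_z
        volume += area * mean_height         # prism volume for this facet
    return volume

# Example with a synthetic mound of 500 points
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
heights = np.clip(5.0 - 0.1 * ((xy[:, 0] - 5.0) ** 2 + (xy[:, 1] - 5.0) ** 2), 0.0, None)
print(f"estimated volume: {stockpile_volume(np.column_stack([xy, heights])):.1f} m^3")
```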

Findings

The presented model has proved to be a practical and accurate tracking approach that algorithmically estimates and forecasts all performance parameters from the captured data.

Originality/value

The proposed model is unique in extracting accurate volumetric measurements directly from multiple point clouds in a developed code using Delaunay triangulation instead of extracting them from textured models in modelling software, which is neither automated nor time-effective. Furthermore, the presented model uses a self-calibration approach aiming to eliminate the pre-calibration procedure required before image capturing for each camera intended to be used. Thus, any worker onsite can directly capture the required images with an easily accessible camera (e.g. a handheld camera or a smartphone), and the images can be sent to any processing device via e-mail, cloud-based storage or any communication application (e.g. WhatsApp).

Article
Publication date: 1 September 2000

J. Paul Siebert and Stephen J. Marshall

Abstract

Describes a non‐contact optical sensing technology called C3D that is based on speckle texture projection photogrammetry. C3D has been applied to capturing all‐round 3D models of the human body of high dimensional accuracy and photorealistic appearance. The essential strengths and limitations of the C3D approach are presented and the basic principles of this stereo‐imaging approach are outlined, from image capture and basic 3D model construction to multi‐view capture and all‐round 3D model integration. A number of law enforcement, medical and commercial applications are described briefly, including prisoner 3D face models, maxillofacial and orofacial cleft assessment, breast imaging and foot scanning. Ongoing research in real‐time capture and processing, and model construction from naturally illuminated image sources, is also outlined.

Details

Sensor Review, vol. 20 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 5 October 2015

Tony de Souza-Daw, Robert Ross, Truong Duy Nhan, Le Anh Hung, Nguyen Duc Quoc Trung, Le Hai Chau, Hoang Minh Phuong, Le Hoang Ngoc and Mathews Nkhoma

Abstract

Purpose

The purpose of this paper is to present a low-cost, highly mobile system for performing street-level imaging. Street-level imaging and geo-location-based services are rapidly growing in both popularity and coverage. Google Street View and Bing StreetSide are two of the free, online services which allow users to search location-based information on interactive maps. In addition, these services also provide software developers and researchers a rich source of street-level images for different purposes – from identifying traffic routes to augmented reality applications. Currently, coverage for Street View and StreetSide is limited to more affluent Western countries, with sparse coverage throughout south-east Asia and Africa. In this paper, we present a low-cost system to perform street-level imaging targeted towards the congested, motorcycle-dominant south-east Asian countries. The proposed system uses a catadioptric imaging system to capture 360-degree panoramic images which are geo-located using an on-board GPS. The system is mounted on the back of a motorcycle to provide maximum mobility and access to narrow roads. An innovative backwards remapping technique for flattening the images is discussed, along with some results from the first 150 km captured in Southern Vietnam.
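
To give an idea of what backwards remapping involves, the following sketch flattens a circular catadioptric frame into a rectangular panorama by computing, for each output pixel, its source location in the mirror image and sampling it with OpenCV's remap. The mirror centre, radii and output size are assumed calibration values for illustration, not the paper's parameters.

```python
import numpy as np
import cv2

def unwrap_catadioptric(img, center, r_inner, r_outer, out_w=2048, out_h=400):
    """Flatten a circular catadioptric image into a rectangular panorama.

    Backwards mapping: for every output pixel we compute its source pixel in
    the original mirror image, which avoids holes in the panorama.
    center           : (cx, cy) of the mirror in the source image.
    r_inner, r_outer : radii bounding the useful ring of the mirror.
    """
    cx, cy = center
    # Output grid: columns correspond to bearing, rows to radius.
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_outer, r_inner, out_h)
    theta_grid, radius_grid = np.meshgrid(theta, radius)
    map_x = (cx + radius_grid * np.cos(theta_grid)).astype(np.float32)
    map_y = (cy + radius_grid * np.sin(theta_grid)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Example usage (file name and calibration values are illustrative):
# pano = unwrap_catadioptric(cv2.imread("mirror_frame.jpg"), (960, 540), 120, 500)
```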

Design/methodology/approach

The prototype was a low-cost design using off-the-shelf hardware with custom software and assembly to facilitate the required functionality.

Findings

The system was shown to work well as a low-cost omnidirectional mapping solution targeted toward sea-of-motorbike road conditions.

Research limitations/implications

Some of the pictures returned by the system were unclear. These could be improved by having artificial lighting (currently only ambient light is used), a gyroscope-stabilised imaging platform and a higher resolution camera.

Originality/value

This paper discusses a design which facilitates low-cost, street-level imaging for a sea-of-motorcycle environment. The system uses a catadioptric imaging approach to give a wide field of view without the excessive image storage requirements of using dozens of cameras.

Details

Journal of Engineering, Design and Technology, vol. 13 no. 4
Type: Research Article
ISSN: 1726-0531

Book part
Publication date: 28 November 2016

Toni Eagar and Stephen Dann

Abstract

Purpose

This research was conducted to outline the capturing and analysis of composite texts. We contextualize this using selfies as image and textual data sourced from Instagram and analyzed using a three-stage analysis approach from a genre perspective.

Methodology/approach

The capturing of composite texts is outlined for numerous services available to researchers to study social media contexts. The analysis applies a three-stage technique of (1) what is shown, (2) what is said, and (3) what is the central narrative to overcome interpretive limitations of privileging text over image or vice versa.

Findings

Based on their structural characteristics, seven genre types emerged from the coded sample set.

Research limitations/implications

Issues arise in capturing this data as social media platforms change their access and usage policies and as capturing services alter their capabilities.

Originality/value

The paper outlines a novel approach to capturing and understanding the mimesis and diegesis of selfies as composite texts.

Details

Consumer Culture Theory
Type: Book
ISBN: 978-1-78635-495-2

Article
Publication date: 1 September 2004

Xiangyang Ju, J. Paul Siebert, Nigel J.B. McFarlane, Jiahua Wu, Robin D. Tillett and Charles Patrick Schofield

Abstract

We have succeeded in capturing porcine 3D surface anatomy in vivo by developing a high‐resolution stereo imaging system. The system achieved accurate 3D shape recovery by matching stereo pair images containing only natural surface textures at high (image) resolution. The 3D imaging system presented for pig shape capture is based on photogrammetry and comprises: stereo pair image acquisition, stereo camera calibration, stereo matching, and surface and texture integration. Practical issues have been addressed, in particular the integration of multiple range images into a single 3D surface. Robust image segmentation successfully isolated the pigs within the stereo images and was employed in conjunction with depth discontinuity detection to facilitate the integration process. The capture and processing chain is detailed here, and the resulting 3D pig anatomy obtained using the system is presented.
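
For context on dense stereo matching in general, the fragment below computes a disparity map from a rectified stereo pair with OpenCV's semi-global block matcher. It is a generic sketch rather than the photogrammetric matching pipeline described above, and the synthetic images and parameter values are assumptions.

```python
import numpy as np
import cv2

def disparity_map(left_gray, right_gray, num_disp=128, block=5):
    """Dense disparity from a rectified grayscale stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,   # must be divisible by 16
        blockSize=block,
        P1=8 * block * block,      # smoothness penalties
        P2=32 * block * block,
    )
    # OpenCV returns fixed-point disparities scaled by 16
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

# Example with synthetic textured frames (real use: rectified camera images)
left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
right = np.roll(left, -8, axis=1)          # fake an 8-pixel horizontal shift
disp = disparity_map(left, right)
# With a calibrated focal length f (pixels) and baseline b (metres),
# depth = f * b / disparity for every matched pixel.
```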

Details

Sensor Review, vol. 24 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 17 August 2012

Yanling Xu, Huanwei Yu, Jiyong Zhong, Tao Lin and Shanben Chen

Abstract

Purpose

The purpose of this paper is to analyze the technology of capturing and processing weld images in real‐time, which is very important to the seam tracking and the weld quality control during the robotic gas tungsten arc welding (GTAW) process.

Design/methodology/approach

By analyzing the main parameters affecting image capture, a passive vision sensor for the welding robot was designed in order to capture clear and steady welding images. Based on an analysis of the characteristics of the welding images, a new improved Canny algorithm was proposed to detect the edges of the seam and pool and to extract the seam and pool characteristic parameters. Finally, the image processing precision was verified by random welding experiments.
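
As background for the edge-detection step, the following is a minimal sketch of a conventional Canny pass on a synthetic weld image; the paper's improved Canny variant and its parameter choices are not reproduced here, and the thresholds and stand-in frame are placeholders.

```python
import numpy as np
import cv2

# Synthetic stand-in frame: a filled disc for the weld pool and a line for the seam.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (160, 120), 40, 200, -1)
cv2.line(frame, (0, 120), (320, 120), 120, 3)

blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)      # suppress arc-light noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Seam and pool boundaries would then be selected from `contours` and reduced
# to characteristic parameters (e.g. seam centre, pool width) for tracking.
```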

Findings

It was found that the seam and pool images can be clearly acquired by using the passive vision system, and the welding image characteristic parameters were accurately extracted through processing. The experiment results show that the precision of the image processing can be controlled to within about ±0.169 mm, which can completely meet the requirement of real‐time seam tracking for the welding robot.

Research limitations/implications

This system will be applied to industrial welding robot production during the GTAW process.

Originality/value

For teaching‐playback robots with passive vision, it is very important that real‐time images of the seam and pool are acquired clearly and processed accurately during the robotic welding process, as this determines the subsequent seam track and the control of welding quality.

Details

Industrial Robot: An International Journal, vol. 39 no. 5
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 18 January 2022

Srinimalan Balakrishnan Selvakumaran and Daniel Mark Hall

Abstract

Purpose

The purpose of this paper is to investigate the feasibility of an end-to-end simplified and automated reconstruction pipeline for digital building assets using the design science research approach. Current methods to create digital assets by capturing the state of existing buildings can provide high accuracy but are time-consuming, expensive and difficult.

Design/methodology/approach

Using design science research, this research identifies the need for a crowdsourced and cloud-based approach to reconstruct digital building assets. The research then develops and tests a fully functional smartphone application prototype. The proposed end-to-end smartphone workflow begins with data capture and ends with user applications.

Findings

The resulting implementation can achieve a realistic three-dimensional (3D) model characterized by different typologies, minimal trade-off in accuracy and low processing costs. By crowdsourcing the images, the proposed approach can reduce costs for asset reconstruction by an estimated 93% compared to manual modeling and 80% compared to locally processed reconstruction algorithms.

Practical implications

The resulting implementation achieves “good enough” reconstruction of as-is 3D models with minimal tradeoffs in accuracy compared to automated approaches and 15× cost savings compared to a manual approach. Potential facility management use cases include issue and information tracking, 3D mark-up and multi-model configurators.

Originality/value

Through user engagement, development, testing and validation, this work demonstrates the feasibility and impact of a novel crowdsourced and cloud-based approach for the reconstruction of digital building assets.

Details

Journal of Facilities Management, vol. 20 no. 3
Type: Research Article
ISSN: 1472-5967

Article
Publication date: 2 May 2019

Hadi Mahami, Farnad Nasirzadeh, Ali Hosseininaveh Ahmadabadian, Farid Esmaeili and Saeid Nahavandi

Abstract

Purpose

This paper aims to propose an automatic imaging network design to improve the efficiency and accuracy of automated construction progress monitoring. The proposed method will address two shortcomings of previous studies: the large number of captured images required, and the incompleteness and inaccuracy of the generated as-built models.

Design/methodology/approach

Using the proposed method, the number of required images is minimized in two stages. In the first stage, manual photogrammetric network design is used to decrease the number of camera stations, considering proper constraints. The image acquisition is then done and the captured images are used to generate a 3D point cloud model. In the second stage, new software for automatic imaging network design is developed and used to cluster and select the optimal images automatically, using the dense point cloud model generated earlier, and the final optimum camera stations are determined. Therefore, automated progress monitoring can be done by imaging at the selected camera stations to produce periodic progress reports.
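
One generic way to pose the automatic selection of camera stations is as a coverage problem over the existing dense point cloud: keep the stations that add the most newly covered points. The sketch below is an assumed greedy formulation for illustration, not the authors' software; the visibility matrix here is random.

```python
import numpy as np

def greedy_select_stations(visibility, k):
    """Greedily pick up to k camera stations maximizing point coverage.

    visibility : boolean matrix (stations x points), True where a candidate
                 station sees a point of the dense point cloud.
    """
    covered = np.zeros(visibility.shape[1], dtype=bool)
    chosen = []
    for _ in range(k):
        gains = (visibility & ~covered).sum(axis=1)   # newly covered points
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                                     # nothing left to gain
        chosen.append(best)
        covered |= visibility[best]
    return chosen, covered.mean()

# Example with random visibility for 40 candidate stations and 5,000 points
rng = np.random.default_rng(1)
vis = rng.random((40, 5000)) < 0.15
stations, coverage = greedy_select_stations(vis, k=10)
print(stations, f"coverage: {coverage:.0%}")
```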

Findings

The achieved results show that using the proposed manual and automatic imaging network design methods, the number of required images is decreased by 65 and 75 per cent, respectively. Moreover, the accuracy and completeness of the point cloud reconstruction are improved, and the quantity of performed work is determined with an accuracy close to 100 per cent.

Practical implications

It is believed that the proposed method may present a novel and robust tool for automated progress monitoring using unmanned aerial vehicles, based on photogrammetry and computer vision techniques. Using the proposed method, the number of required images is minimized, and the accuracy and completeness of the point cloud reconstruction are improved.

Originality/value

To generate a point cloud reconstruction based on close-range photogrammetry principles, hundreds of images must be captured and processed, which is time-consuming and labor-intensive. No previous study has attempted to reduce this large number of required images. Moreover, a lack of images in some areas leads to an incomplete or inaccurate model. This research resolves these shortcomings.

Article
Publication date: 1 June 2003

Jenn Riley and Ichiro Fujinaga

Abstract

Like other complex visual articles with small details, musical scores are difficult to capture and present well in digital form. This article presents methods that can be used to reproduce detail and tone from printed scores for creating archival images, based on best practices commonly used by the library community. Capture decisions should be made with a clear idea of the purpose of the imaging project yet be flexible enough to fulfill unanticipated future uses. Options and recommendations for file formats for archival storage, Web delivery and printing of musical materials are discussed.

Details

OCLC Systems & Services: International digital library perspectives, vol. 19 no. 2
Type: Research Article
ISSN: 1065-075X

Article
Publication date: 16 May 2023

Wanbin Pan, Hongyi Jiang, Shufang Wang, Wen Feng Lu, Weijuan Cao and Zhenlei Weng

Abstract

Purpose

This paper aims to detect printing failures (such as warpage and collapse) in the material extrusion (MEX) process effectively and in a timely manner to reduce the waste of printing time, energy and material.

Design/methodology/approach

The approach is designed based on the frequently observed fact that printing failures are accompanied by abnormal material phenomena occurring close to the nozzle. To capture the phenomena near the nozzle effectively and in a timely manner, a camera is carefully installed on a typical MEX printer. Then, aided by the captured phenomena (images), a smart printing failure predictor is built based on an artificial neural network (ANN). Finally, based on the predictor, printing failures, as well as their types, can be effectively detected in real time from the images captured by the camera.
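
As a rough indication of how such an image-based predictor could be structured, the sketch below defines a small convolutional classifier over near-nozzle frames. The architecture, input size and the three assumed classes (normal, warpage, collapse) are illustrative placeholders, not the network reported in the paper.

```python
import torch
import torch.nn as nn

class FailureNet(nn.Module):
    """Tiny CNN that maps a camera frame to a failure-class prediction."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):            # x: (batch, 3, H, W) near-nozzle frames
        return self.classifier(self.features(x).flatten(1))

model = FailureNet()
logits = model(torch.randn(1, 3, 224, 224))   # one dummy frame
print(logits.argmax(dim=1))                    # predicted failure class index
```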

Findings

Experiments show that printing failures can be detected in a timely manner with an accuracy of more than 98% on average. Methodological comparisons demonstrate that this approach has advantages for real-time printing failure detection in MEX.

Originality/value

A novel real-time approach for failure detection is proposed based on an ANN. The following characteristics give the approach great potential to be implemented easily and widely: (1) the scheme designed to capture the phenomena near the nozzle is simple, low-cost and effective; and (2) the predictor can be conveniently extended to detect more types of failures by using more abnormal material phenomena occurring close to the nozzle.

Details

Rapid Prototyping Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1355-2546
