Search results
1 – 10 of over 13,000
Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang
Abstract
Purpose
The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, traditional monitoring with Electronic Total Station tools cannot provide real-time measurements, and monitoring with motion sensors or GPS is cumbersome.
Design/methodology/approach
This study proposes a monitoring method based on a series of computer vision (CV) technologies that can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, three CV algorithms were developed in a laboratory environment. Experimental tests were carried out on a bridge scale model to select the best-performing algorithms for rotation, velocity and inclination monitoring, respectively, as the final monitoring method. Then, the selected method was applied to an actual bridge during its swivel construction to verify its applicability.
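As an illustration of the geometry such a monitor relies on (a minimal sketch, not the authors' algorithm), the rotation angle swept by a marker tracked in the image can be computed about the pivot centre, and the mean angular velocity follows from the elapsed time. The coordinates and time step below are hypothetical:

```python
import math

def rotation_angle(center, p0, p1):
    """Angle (degrees) swept by a tracked marker about the pivot center,
    from its initial position p0 to its current position p1."""
    a0 = math.atan2(p0[1] - center[1], p0[0] - center[0])
    a1 = math.atan2(p1[1] - center[1], p1[0] - center[0])
    # Wrap the difference into (-180, 180] degrees
    return math.degrees((a1 - a0 + math.pi) % (2 * math.pi) - math.pi)

def angular_velocity(angle_deg, dt_s):
    """Mean angular velocity in degrees per second over the interval."""
    return angle_deg / dt_s
```

For example, a marker that moves from (1, 0) to (0, 1) about the origin has rotated 90 degrees; dividing by the elapsed time gives the swivel velocity.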
Findings
In the laboratory study, the monitoring data measured with the selected algorithms were compared with those measured by an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, validating the accuracy of the proposed method. In the pilot application, the method was shown to be feasible on a real construction project.
Originality/value
The optimal algorithms for monitoring bridge swivel construction were identified in a well-controlled laboratory, and the proposed method was verified on an actual project. The proposed CV method complements the use of Electronic Total Station tools, motion sensors and GPS for safety monitoring of the swivel construction of bridges. It also offers a possible approach that requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.
Syed Muhammad Rafy Syed Jaafar, Hairul Nizam Ismail and Nurul Diyana Md Khairi
Abstract
Purpose
This paper aims to capture real-time images of tourists during their visit. This effort addresses a debate among scholars that current research lacks a genuine, accurate representation of the tourist experience during the visit itself. Previous studies on destination image focused on measuring and capturing the tourists' perceived image from a “before and after” visitation perspective.
Design/methodology/approach
The paper applies volunteer-employed photography and questionnaire methods to capture real-time tourist images. The study was conducted in Kuala Lumpur and involved 384 international tourists. The data were analysed by supplemental photo analysis and categorised into manifest and latent content.
Findings
The paper provides empirical insights into the changes in tourists' image when visiting an urban destination. The insights suggest that a city's image during visitation continuously changes based on the tourists' movement and preferences.
Practical implications
The findings of this paper are critical in assisting tourism agencies and authorities in portraying an accurate image to achieve greater tourism satisfaction.
Originality/value
This paper contributes to the interpretation and portrayal of the real-time image of Kuala Lumpur based on the manifest and latent content of the photos taken.
Yanling Xu, Huanwei Yu, Jiyong Zhong, Tao Lin and Shanben Chen
Abstract
Purpose
The purpose of this paper is to analyze the technology of capturing and processing weld images in real‐time, which is very important to the seam tracking and the weld quality control during the robotic gas tungsten arc welding (GTAW) process.
Design/methodology/approach
By analyzing the main parameters affecting image capture, a passive vision sensor for the welding robot was designed to capture clear and steady welding images. Based on an analysis of the characteristics of the welding images, a new improved Canny algorithm was proposed to detect the edges of the seam and pool and to extract the seam and pool characteristic parameters. Finally, the image processing precision was verified through random welding experiments.
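The gradient step at the heart of a Canny-style edge detector can be sketched as follows. This is a deliberately simplified stand-in, not the paper's improved algorithm: a full Canny pipeline adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding, all of which are omitted here.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Simplified edge map: Sobel gradient magnitude, then a single
    threshold. Border pixels are left unprocessed for brevity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)  # gradient magnitude
    return mag >= thresh
```

On a synthetic image with a vertical intensity step, the detector fires along the boundary columns and stays silent in the flat regions.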
Findings
It was found that the seam and pool images can be clearly acquired using the passive vision system, and the welding image characteristic parameters were accurately extracted through processing. The experimental results show that the image processing precision can be controlled to within about ±0.169 mm, which fully meets the requirement of real-time seam tracking for the welding robot.
Research limitations/implications
This system will be applied to the industrial welding robot production during the GTAW process.
Originality/value
For teaching-playback robots with passive vision, it is very important that real-time images of the seam and pool are acquired clearly and processed accurately during the robotic welding process, as this determines the subsequent seam track and the control of welding quality.
Abstract
Purpose
In this volatile and increasingly fast-revolving world, it has become crucially important to monitor, measure and manage nation image and its dynamic changes in real time. However, few studies have been conducted on a model to measure the image and/or its changes. The purpose of this paper is to find an economically affordable methodology to measure nation image and its changes online in real time.
Design/methodology/approach
The study built a dynamic ontology that can reflect changes in nation image in real time. With it, the authors attempted to measure nation image in real time.
Findings
Among many social media, the authors found Wikipedia particularly suitable for measuring nation image. An ontology of nation image was built from keywords collected from the pages directly related to the big three exporting countries in East Asia, i.e. Korea, Japan and China. The click views on the countries' pages in two language editions of Wikipedia, Vietnamese and Indonesian, were counted.
Originality/value
The study confirms the objective: the data from a social media service, Wikipedia, may work very well as an economically affordable real-time supplement to offline nation image indices that are currently used.
Abstract
Purpose
The digital media recording and broadcasting classroom using Internet real-time intelligent image positioning and opinion monitoring in communication teaching is researched and analyzed.
Design/methodology/approach
First, spatial grid positioning and monitoring and intelligent image recognition technologies were used to extract and analyze teaching images, drawing on Internet of Things (IoT) technology and establishing a digital media recording and broadcasting system framework for intelligent image positioning and opinion monitoring. Next, a positioning node algorithm was used to measure the image distance, and a moving node location model under the IoT was established. In addition, a radial basis function (RBF) neural network was used to realize the signal transmission function. The experimental results of an RBF network optimized by adaptive cuckoo search (ACS-RBF) were compared with those of a particle swarm optimization-based network and a least-squares-optimized network, and the most efficient RBF network was adopted. Finally, the digital media recording and broadcasting classroom scheme for real-time intelligent image positioning and opinion monitoring was designed, the application environment of actual teacher instruction was examined, and the recorded and broadcast pictures were analyzed.
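The basic structure of an RBF network (independent of the cuckoo-search optimization the paper applies) can be sketched in a few lines: Gaussian basis functions centred on fixed points, with output weights solved by least squares. The 1-D inputs, centres and targets below are illustrative only.

```python
import numpy as np

def rbf_fit(centers, X, y, gamma=1.0):
    """Solve for the output weights of a Gaussian-RBF network by
    least squares, with the centers held fixed."""
    Phi = np.exp(-gamma * np.square(X[:, None] - centers[None, :]))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(centers, w, X, gamma=1.0):
    """Evaluate the fitted RBF network at new 1-D inputs X."""
    Phi = np.exp(-gamma * np.square(X[:, None] - centers[None, :]))
    return Phi @ w
```

When the training inputs coincide with the centres, the design matrix is square and well conditioned, so the fit reproduces the targets exactly; in practice the ACS step would instead tune the centres and widths.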
Findings
The actual value, predicted value, and the number of predicted samples of the ACS-RBF model were all better than those of the two other neural networks. According to the analysis and comparison of the sampling optimization Monte Carlo localization (SOMCL), Monte Carlo, and genetic algorithm optimization-based Monte Carlo positioning algorithms, the SOMCL algorithm showed better robustness, and its positioning efficiency was superior to that of the two other algorithms. In addition, the SOMCL algorithm greatly reduced the positioning and monitoring energy consumption.
Originality/value
The application of real-time intelligent image positioning and monitoring technology in actual communication teaching was realized in the study.
Wanbin Pan, Hongyi Jiang, Shufang Wang, Wen Feng Lu, Weijuan Cao and Zhenlei Weng
Abstract
Purpose
This paper aims to detect the printing failures (such as warpage and collapse) in material extrusion (MEX) process effectively and timely to reduce the waste of printing time, energy and material.
Design/methodology/approach
The approach is designed based on the frequently observed fact that printing failures are accompanied by abnormal material phenomena occurring close to the nozzle. To effectively and timely capture the phenomena near the nozzle, a camera is delicately installed on a typical MEX printer. Then, aided by the captured phenomena (images), a smart printing failure predictor is built based on the artificial neural network (ANN). Finally, based on the predictor, the printing failures, as well as their types, can be effectively detected from the images captured by the camera in real-time.
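The inference step of such a predictor can be sketched as the forward pass of a small feed-forward network over features extracted from a nozzle image. The class names, layer sizes and weights below are hypothetical; in the paper the weights would come from training the ANN on labelled nozzle images.

```python
import numpy as np

CLASSES = ["normal", "warpage", "collapse"]  # illustrative failure types

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(features, W1, b1, W2, b2):
    """Forward pass of a two-layer network over image features,
    returning the most likely class and the full probability vector."""
    h = np.maximum(0.0, W1 @ features + b1)   # ReLU hidden layer
    p = softmax(W2 @ h + b2)                  # class probabilities
    return CLASSES[int(np.argmax(p))], p
```

Because the output is a probability vector, a confidence threshold can be applied before raising a failure alarm, trading detection latency against false positives.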
Findings
Experiments show that printing failures can be detected in a timely manner with an average accuracy of more than 98%. Methodological comparisons demonstrate that the approach has advantages in real-time printing failure detection in MEX.
Originality/value
A novel real-time approach for failure detection is proposed based on the ANN. The following characteristics give the approach great potential to be implemented easily and widely: (1) the scheme designed to capture the phenomena near the nozzle is simple, low-cost and effective; and (2) the predictor can be conveniently extended to detect more types of failures by using more abnormal material phenomena occurring close to the nozzle.
M.A. Latif, J.C. Chedjou and K. Kyamakya
Abstract
Purpose
Image contrast enhancement is one of the most important low-level image pre-processing tasks required by vision-based advanced driver assistance systems (ADAS). This paper seeks to address this important issue while keeping real-time constraints in focus, which is especially vital for ADAS.
Design/methodology/approach
The approach is based on a paradigm of nonlinear coupled oscillators in image processing. Each layer of a colored image is treated as an independent grayscale image and is processed separately by the paradigm. The pixels with the lowest and the highest gray levels are chosen, and their difference is stretched so that the image spans the entire gray-level range, i.e. [0, 1]. This operation enhances the contrast in each layer, and the enhanced layers are finally combined to produce a color image of much improved quality.
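The stretching operation itself, stripped of the oscillator dynamics that the paper uses to compute it, amounts to a per-channel min-max normalization to [0, 1]. The sketch below shows only that stretching idea, not the coupled-oscillator paradigm:

```python
import numpy as np

def stretch_contrast(img):
    """Per-channel min-max stretch of an H x W x C image to [0, 1].
    A constant channel is mapped to zero to avoid division by zero."""
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        layer = img[:, :, c].astype(float)
        lo, hi = layer.min(), layer.max()
        out[:, :, c] = (layer - lo) / (hi - lo) if hi > lo else 0.0
    return out
```

After stretching, each channel's darkest pixel maps to 0 and its brightest to 1, so the full dynamic range is used regardless of the input's original span.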
Findings
The approach performs robust contrast enhancement compared to other approaches in the relevant literature. Other approaches generally need a new parameter setting for every new image, which makes them unsuitable for real-time applications such as ADAS. In contrast, the proposed approach performs contrast enhancement for different images under the same parameter setting, giving rise to robustness in the system. This unique parameter setting is derived through a bifurcation analysis explained in the paper.
Originality/value
The proposed approach is novel in several aspects. First, the proposed paradigm comprises coupled differential equations and therefore offers a continuous model, as opposed to other approaches in the relevant literature. This continuity is an inherent feature of the approach and could be useful in realizing real-time image processing with an analog circuit implementation. Furthermore, a novel framework combining the coupled oscillatory paradigm and a cellular neural network is also possible, to achieve ultra-fast image contrast enhancement.
John Oyekan, Axel Fischer, Windo Hutabarat, Christopher Turner and Ashutosh Tiwari
Abstract
Purpose
The purpose of this paper is to explore the role that computer vision can play within new industrial paradigms such as Industry 4.0, and in particular to support production line improvements to achieve flexible manufacturing. As Industry 4.0 requires “big data”, it is accepted that computer vision could be one of the tools for its capture and efficient analysis. RGB-D data gathered from real-time machine vision systems such as Kinect® can be processed using computer vision techniques.
Design/methodology/approach
This research exploits RGB-D cameras such as Kinect® to investigate the feasibility of using computer vision techniques to track the progress of a manual assembly task on a production line. Several techniques to track the progress of a manual assembly task are presented. The use of CAD model files to track the manufacturing tasks is also outlined.
Findings
This research has found that RGB-D cameras can be suitable for object recognition within an industrial environment if a number of constraints are considered or different devices/techniques are combined. Furthermore, through the use of an HMM-inspired state-based workflow, the algorithm presented in this paper is computationally tractable.
Originality/value
Processing of data from robust and cheap real-time machine vision systems could bring increased understanding of production line features. In addition, new techniques that enable the progress tracking of manual assembly sequences may be defined through the further analysis of such visual data. The approaches explored within this paper make a contribution to the utilisation of visual information “big data” sets for more efficient and automated production.
Muhammad Ali Khan, Ahmed Farooq Cheema, Sohaib Zia Khan and Shafiq-ur-Rehman Qureshi
Abstract
Purpose
The purpose of this paper is to show the development of an image processing-based portable equipment for an automatic wear debris analysis. It can analyze both the qualitative and quantitative features of machine wear debris: size, quantity, size distribution, shape, surface texture and material composition via color.
Design/methodology/approach
It comprises hardware and software components that can take debris in near real time from a machine oil sump and process it for feature diagnosis. This processing displays the basic feature information on the user screen, which can further be used for machine component health diagnosis.
Findings
The developed system has the capacity to replace existing off-line methods owing to its cost effectiveness and simplicity of operation. The system can analyze the basic quantitative and qualitative features of debris larger than 50 microns and smaller than 300 microns.
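The stated 50–300 micron analysis band implies a simple gating step before feature extraction: particles outside the band are discarded because the system cannot characterize them reliably. A minimal sketch of that filter (the function names are illustrative, not from the paper):

```python
def in_analysis_band(size_um, lo=50.0, hi=300.0):
    """True if a debris particle's size (in microns) falls strictly
    inside the system's analyzable range."""
    return lo < size_um < hi

def filter_debris(sizes_um):
    """Keep only the particle sizes the system can analyze."""
    return [s for s in sizes_um if in_analysis_band(s)]
```

For example, from detected sizes of 10, 75, 150 and 400 microns, only the 75 and 150 micron particles would proceed to shape and texture analysis.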
Originality/value
A tool for analyzing the basic features of wear debris is developed and discussed. The portable, near real-time analysis it offers can be more technically effective than the existing off-line and online techniques.