Search results

1 – 10 of 192
Article
Publication date: 26 October 2018

Tugrul Oktay, Harun Celik and Ilke Turkmen

Abstract

Purpose

The purpose of this paper is to examine the success of constrained control in reducing the motion blur that occurs as a result of helicopter vibration.

Design/methodology/approach

Constrained controllers are designed to reduce the motion blur in images taken from a helicopter. Helicopter vibrations under tight and soft constrained controllers are modeled and added to images to demonstrate the controllers' performance in reducing blur.
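The idea of treating controller-limited vibration as a motion-blur kernel can be sketched as follows. This is an illustrative model only, with invented displacement paths and amplitudes, not the paper's helicopter dynamics:

```python
import numpy as np

def motion_blur_kernel(displacements, size=15):
    """Accumulate a blur kernel by depositing exposure at each sampled
    2-D camera displacement (in pixels) during the exposure time."""
    kernel = np.zeros((size, size))
    c = size // 2
    for dx, dy in displacements:
        kernel[c + int(round(dy)), c + int(round(dx))] += 1.0
    return kernel / kernel.sum()

# Hypothetical vibration paths: a soft controller allows large sway,
# a tight controller constrains it to a quarter of the amplitude.
t = np.linspace(0.0, 1.0, 50)
soft_path = np.c_[4.0 * np.sin(6 * np.pi * t), 3.0 * np.cos(4 * np.pi * t)]
tight_path = 0.25 * soft_path

soft_kernel = motion_blur_kernel(soft_path)
tight_kernel = motion_blur_kernel(tight_path)
# The tight kernel has a smaller support, i.e. less motion blur.
```

Convolving an image with the wider (soft) kernel spreads each pixel over more neighbours, which is exactly the blur the constrained controller is meant to suppress.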

Findings

The blur caused by vibration can be reduced via constrained control of the helicopter.

Research limitations/implications

The motion of the camera is modeled and assumed to be the same as the motion of the helicopter. In the image-exposure model, image noise is neglected, and blur is considered the only distorting effect on the image.

Practical implications

Tighter constrained controllers can be implemented to capture higher-quality images from helicopters.

Social implications

Recently, aerial vehicles have come into wide use for aerial photography. Images taken from helicopters mostly suffer from motion blur. Reducing motion blur enables users to capture higher-quality images from helicopters.

Originality/value

Helicopter control is performed to reduce motion blur in images for the first time. A control-oriented, physics-based model of the helicopter is employed. The helicopter vibration that causes motion blur is modeled as a blur kernel to show the effect of the vibration on the captured images. Tight and soft constrained controllers are designed and compared to demonstrate their performance in reducing motion blur. It is shown that images taken from a helicopter can be protected from motion blur by controlling the helicopter tightly.

Details

Aircraft Engineering and Aerospace Technology, vol. 90 no. 9
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 4 October 2017

Mehdi Habibi and Ahmad Reza Danesh

Abstract

Purpose

The purpose of this study is to propose a pulse-width-based, in-pixel, arbitrary-size kernel convolution processor. When image sensors are used in machine vision tasks, large amounts of data need to be transferred to the output and fed to a processor. Basic, low-level image processing functions such as kernel convolution are used extensively in the early stages of most machine vision tasks. These low-level functions are usually computationally intensive, and if the computation is performed inside every pixel, the burden on the external processor is greatly reduced.

Design/methodology/approach

In the proposed architecture, digital pulse-width processing is used to perform kernel convolution on the image sensor data. With this approach, while photocurrent fluctuations are expressed as changes in the pulse width of an output signal, the small processor incorporated in each pixel receives the output signals of the corresponding pixel and its neighbors and produces a binary-coded result for that specific pixel. The process proceeds in parallel across all pixels of the image sensor.
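The pulse-width counting scheme can be sketched behaviourally in software. The 3 × 3 neighbourhood, base frequency and counting model here are illustrative assumptions, not the hardware design:

```python
import numpy as np

def pulse_count(pulse_width, sample_freq):
    """Clock samples counted inside an active-high pulse; counting at a
    higher frequency scales the measured value, which is how per-coefficient
    scaling is obtained without an in-pixel ADC."""
    return int(pulse_width * sample_freq)

def in_pixel_convolution(image, kernel, base_freq=100):
    """Behavioural model: every pixel counts pulses from its 3x3
    neighbourhood, sampling each neighbour at a frequency proportional
    to the magnitude of the corresponding kernel coefficient."""
    h, w = image.shape
    padded = np.pad(image, 1)          # zero border for edge pixels
    out = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            acc = 0
            for di in range(3):
                for dj in range(3):
                    c = kernel[di, dj]
                    n = pulse_count(padded[i + di, j + dj],
                                    base_freq * abs(c))
                    acc += n if c >= 0 else -n
            out[i, j] = acc
    return out
```

In hardware the per-pixel loops run concurrently; the nested Python loops only mimic that per-pixel independence.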

Findings

It is shown that with the proposed architecture, not only can kernel convolution be performed in the digital domain inside smart image sensors, but arbitrary kernel coefficients are also obtainable simply by adjusting the sampling frequency at different phases of the processing.

Originality/value

Although in-pixel digital kernel convolution has been reported previously, the presented approach requires no in-pixel analog-to-digital converter. Furthermore, arbitrary kernel coefficients and scaling can be deployed in the processing. The given architecture is a suitable choice for smart image sensors intended for high-speed machine vision tasks.

Details

Sensor Review, vol. 37 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 August 2021

V. Vinolin and M. Sucharitha

Abstract

Purpose

With the advancements in photo-editing software, it is possible to generate fake images, degrading the trust in digital images. Forged images, which appear like authentic images, can be created without leaving any visual clues about the alteration. The image forensics field has introduced several forgery detection techniques, which effectively distinguish fake images from original ones, to restore trust in digital images. Among the various kinds of forged images, spliced images involving human faces are particularly harmful. Hence, there is a need for a forgery detection approach that detects spliced images.

Design/methodology/approach

This paper proposes a Taylor-rider optimization algorithm-based deep convolutional neural network (Taylor-ROA-based DeepCNN) for detecting spliced images. Initially, the human faces in the spliced images are detected using the Viola–Jones algorithm, from which the 3-dimensional (3D) shape of each face is established using a landmark-based 3D morphable model (L3DMM), which estimates the light coefficients. Then, distance measures, such as the Bhattacharyya, standardized Euclidean, Euclidean, Hamming, Chebyshev and correlation coefficients, are determined from the light coefficients of the faces. These form the feature vector for the proposed Taylor-ROA-based DeepCNN, which identifies the spliced images.
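A distance-based feature vector of this kind can be sketched as follows. The exact definitions used in the paper may differ; in particular, the Hamming distance here is taken over coefficient signs, an assumption made for illustration:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya distance between two coefficient vectors, treated as
    discrete distributions after taking magnitudes and normalising."""
    p = np.abs(p) / np.abs(p).sum()
    q = np.abs(q) / np.abs(q).sum()
    return -np.log(np.sum(np.sqrt(p * q)))

def feature_vector(a, b):
    """Six distance measures between two faces' light-coefficient vectors."""
    diff = a - b
    v = np.var(np.vstack([a, b]), axis=0, ddof=1)  # for standardized Euclidean
    v[v == 0] = 1.0                                # guard identical coordinates
    return np.array([
        bhattacharyya(a, b),
        np.sqrt(np.sum(diff ** 2 / v)),        # standardized Euclidean
        np.linalg.norm(diff),                  # Euclidean
        np.mean(np.sign(a) != np.sign(b)),     # Hamming on coefficient signs
        np.max(np.abs(diff)),                  # Chebyshev
        1.0 - np.corrcoef(a, b)[0, 1],         # correlation distance
    ])
```

Identical coefficient vectors give a zero feature vector, so larger entries indicate a stronger lighting inconsistency between the two faces.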

Findings

Experimental analysis using the DSO-1, DSI-1, real and hybrid datasets reveals that the proposed approach attained maximal accuracy, true positive rate (TPR) and true negative rate (TNR) of 99%, 98.88% and 96.03%, respectively, on the DSO-1 dataset. In terms of accuracy, the proposed method achieved performance improvements of 24.49%, 8.92%, 6.72%, 4.17%, 0.25%, 0.13%, 0.06% and 0.06% over existing methods, namely Kee and Farid's, shape from shading (SFS), random guess, Bo Peng et al., neural network, FOA-SVNN, CNN-based MBK and Manoj Kumar et al., respectively.

Originality/value

The Taylor-ROA is developed by integrating the Taylor series into the rider optimization algorithm (ROA) for optimally tuning the DeepCNN.

Details

Data Technologies and Applications, vol. 56 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 6 May 2021

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng

Abstract

Purpose

This paper models the blind image deblurring that arises when a camera undergoes ego-motion while observing a static, close scene. In particular, it details how the blurry image can be restored using a sequence of linear point spread function (PSF) models derived from the camera's accurate 6-degree-of-freedom (6-DOF) path during the long exposure time.

Design/methodology/approach

The approach builds on two existing techniques: estimation of the PSF and blind image deconvolution. Based on online, short-period inertial measurement unit (IMU) self-calibration, the motion path is discretized into a sequence of uniform-speed 3-DOF rectilinear motions, which combine with a 3-DOF rotational motion to form a discrete 6-DOF camera path. The PSFs are evaluated along this discrete path and then combined with the blurry image for restoration through deconvolution.
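A generic version of PSF accumulation along a discretized path, followed by frequency-domain Wiener restoration, can be sketched as follows. The path, kernel size and SNR are hypothetical, and the paper's actual 6-DOF PSF sequence and deconvolution differ in detail:

```python
import numpy as np

def psf_from_path(path, size=9):
    """Build a PSF by depositing equal exposure at each discrete camera
    displacement; uniform speed along the path gives uniform weights."""
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in path:
        psf[c + int(round(dy)), c + int(round(dx))] += 1.0
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Frequency-domain Wiener restoration using the estimated PSF."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

# Simulate: blur a test image with a PSF built from a short discrete
# path, then restore it with the same (known) PSF.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
psf = psf_from_path([(0, 0), (1, 0), (0, 1)])
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf, snr=1e6)
```

With a well-estimated PSF the restoration is nearly exact; errors grow as the IMU-derived path, and hence the PSF, deviates from the true motion.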

Findings

This paper describes the construction of a hardware attachment composed of a consumer camera, an inexpensive IMU and a 3-DOF motion mechanism, together with experimental results demonstrating its overall effectiveness.

Originality/value

First, the paper proposes that a high-precision 6-DOF motion platform periodically adjust the speed of a three-axis rotational motion and a three-axis rectilinear motion over a short time to compensate for the bias of the gyroscope and the accelerometer. Second, the paper establishes a model of 6-DOF motion with emphasis on rotational motion, translational motion and scene-depth motion. Third, the paper proposes a novel discrete-path model in which the motion during the long exposure time is discretized at uniform speed in order to estimate a sequence of PSFs.

Details

Sensor Review, vol. 41 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 14 August 2020

Sadik Lafta Omairey, Peter Donald Dunning and Srinivas Sriramula

Abstract

Purpose

The purpose of this study is to enable reliability-based design optimisation (RBDO) for a composite component while accounting for several multi-scale uncertainties using a large representative volume element (LRVE). This is achieved using an efficient finite element analysis (FEA)-based multi-scale reliability framework and a sequential optimisation strategy.

Design/methodology/approach

An efficient FEA-based multi-scale reliability framework used in this study is extended and combined with a proposed sequential optimisation strategy to produce an efficient, flexible and accurate RBDO framework for fibre-reinforced composite laminate components. The proposed RBDO strategy is demonstrated by finding the optimum design solution for a composite component under the effect of multi-scale uncertainties while meeting a specific stiffness reliability requirement. Performing this with the double-loop approach is computationally expensive because of the number of uncertainties and the function evaluations required to assess the reliability. Thus, a sequential optimisation concept is proposed, which starts by finding a deterministic optimum solution, then assesses the reliability and shifts the constraint limit to a safer region. This is repeated until the desired level of reliability is reached, followed by a final probabilistic optimisation to reduce the mass further and meet the desired level of stiffness reliability. In addition, the proposed framework uses several surrogate models to replace expensive FE function evaluations during optimisation and reliability analysis. The numerical example is also used to investigate the effect of using different sizes of LRVE, compared with a single RVE. In future work, other problem-dependent surrogates such as Kriging will be used to allow lower probabilities of failure to be predicted with high accuracy.
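The sequential shifting of the constraint limit can be illustrated with a deliberately trivial one-variable sizing problem. The distribution, target reliability and 5% shift factor are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(1.0, 0.1, 10_000)   # uncertain material stiffness property
load, target_pf = 1.0, 0.01       # demand and target failure probability

def reliability(area):
    """Monte Carlo estimate of P(stiffness constraint is satisfied)."""
    return np.mean(area * E >= load)

# Sequential strategy: solve the deterministic optimum, assess the
# reliability, then shift the constraint limit into the safe region
# and re-optimise until the target reliability is met.
limit = load
area = limit                       # deterministic optimum at mean E = 1
while reliability(area) < 1.0 - target_pf:
    limit *= 1.05                  # shift the constraint to a safer region
    area = limit                   # re-solve the (trivial) optimisation
```

Each pass costs one optimisation plus one reliability assessment, instead of the nested reliability loop inside every optimisation iteration that makes the double-loop approach expensive.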

Findings

The integration of the developed multi-scale reliability framework with the sequential RBDO optimisation strategy is shown to be computationally feasible, and the use of LRVEs is shown to lead to less conservative designs than the use of a single RVE, i.e. up to 3.5% weight reduction relative to the component optimised with the 1 × 1 RVE. This is because the LRVE represents the spatial variability of uncertainties in a composite material while capturing a wider range of uncertainties at each iteration.

Originality/value

Fibre-reinforced composite laminate components designed using reliability and optimisation have been investigated before, but the two have not previously been combined in a comprehensive multi-scale RBDO. Therefore, this study combines the probabilistic framework with an optimisation strategy to perform multi-scale RBDO and demonstrates its feasibility and efficiency for a fibre-reinforced polymer component design.

Details

Engineering Computations, vol. 38 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 14 June 2011

Tai Kuang, Qing‐Xin Zhu and Yue Sun

Abstract

Purpose

The purpose of this paper is to detect edges in images with high levels of Gaussian noise.

Design/methodology/approach

The Canny edge detection algorithm performs poorly when applied to highly distorted images suffering from Gaussian noise. In the Canny algorithm, a 2D Gaussian function is used to remove noise while preserving edges, but at high noise levels the 2D Gaussian function cannot meet this need. This paper presents an improved Canny edge detection algorithm based on local linear kernel smoothing, in which local neighborhoods are adapted to the local smoothness of the surface as measured by the observed data. The procedure can therefore remove noise correctly in continuity regions of the surface while preserving discontinuities.
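Adaptive neighbourhood selection can be illustrated with a one-dimensional simplification: fit a local line on each side of a point and keep the better fit, so that smoothing never straddles a discontinuity. This is a hypothetical reduction of the paper's surface-based procedure:

```python
import numpy as np

def edge_preserving_smooth(y, half=5):
    """Adaptive local linear smoothing: at each point, fit a line on the
    left and the right neighbourhood separately and keep the fit with
    the smaller residual, so averaging never crosses a discontinuity."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        best = None
        for lo, hi in ((max(0, i - half), i + 1),
                       (i, min(n, i + half + 1))):
            if hi - lo < 2:
                continue                        # window too small to fit
            x = np.arange(lo, hi)
            coef = np.polyfit(x, y[lo:hi], 1)   # local linear fit
            resid = np.mean((np.polyval(coef, x) - y[lo:hi]) ** 2)
            if best is None or resid < best[0]:
                best = (resid, np.polyval(coef, i))
        out[i] = best[1]
    return out
```

On a step signal the side containing the jump always has the larger residual, so the step survives smoothing intact, which is what plain Gaussian filtering fails to do.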

Findings

The statistical model for removing noise while preserving edges can meet the needs of edge detection in images highly corrupted by Gaussian noise.

Research limitations/implications

It was found that when the noise ratio is higher than 40 percent, the edge detection algorithm performs poorly.

Practical implications

A useful method for edge detection in highly distorted images suffering from Gaussian noise.

Originality/value

Since an image can be regarded as a surface of the image intensity function, and such a surface has discontinuities at the outlines of objects, the algorithm can be applied directly to detect edges in images with high noise levels.

Details

Kybernetes, vol. 40 no. 5/6
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 April 2021

Zhixin Wang, Peng Xu, Bohan Liu, Yankun Cao, Zhi Liu and Zhaojun Liu

Abstract

Purpose

This paper aims to demonstrate the principles and practical applications of hyperspectral object detection, set out the problems currently faced and their possible solutions, and discuss the remaining challenges in this field.

Design/methodology/approach

First, the paper summarizes the current research status of hyperspectral techniques. Then, it traces the development of underwater hyperspectral techniques from three major aspects: UHI preprocessing, unmixing and applications. Finally, it presents conclusions on the applications of hyperspectral imaging and future research directions.
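Spectral unmixing, one of the three aspects reviewed, is commonly posed as non-negative least squares under a linear mixing model. The endmember spectra and abundances below are invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (one column per material) sampled at
# four wavelengths, e.g. water, sediment and coral.
E = np.array([[0.10, 0.40, 0.70],
              [0.20, 0.50, 0.60],
              [0.60, 0.30, 0.20],
              [0.80, 0.20, 0.10]])

true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund              # linear mixing model, noise-free

abund, _ = nnls(E, pixel)           # non-negative least-squares unmixing
abund /= abund.sum()                # impose the sum-to-one constraint
```

In a noise-free pixel the estimated abundances recover the true mixture exactly; underwater, attenuation by the water column makes the preprocessing step before unmixing essential.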

Findings

Various methods and scenarios for underwater object detection with hyperspectral imaging are compared, covering preprocessing, unmixing and classification. A summary demonstrates the application scope and results of the different methods, which may play an important role in future applications of underwater hyperspectral object detection.

Originality/value

This paper introduces several hyperspectral image processing methods, draws conclusions about the advantages and disadvantages of each method, and then sets out the challenges faced and possible ways to deal with them.

Details

Sensor Review, vol. 41 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 8 August 2022

Chengyao Xin

Abstract

Purpose

This paper aims to present a case study of virtual-reality-based product demonstrations featuring items of furniture. The results will be of use in further design and development of virtual-reality-based product demonstration systems and could also support effective student learning.

Design/methodology/approach

A new method was introduced to guide the experiment using orthogonal arrays. A pilot questionnaire was first applied to survey the demonstration scenarios that are important to customers. User interactions were then planned, and a furniture demonstration system was implemented. The experiment comprised two stages. In the evaluation stage, participants were invited to experience the virtual-reality (VR)-based furniture demonstration system and complete a user experience (UX) survey; Taguchi-style robust design methods were used to design orthogonal-array experiments, and planning and design operation methods were used to implement an experimental display system, in order to obtain optimized combinations of control factors and levels. The second stage involved a confirmatory test of the optimized combinations.
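The orthogonal-array idea behind the Taguchi-style design can be sketched with the standard L4(2³) array; the UX scores here are invented:

```python
import numpy as np

# L4(2^3) orthogonal array: four runs cover three two-level factors so
# that each level of every factor appears equally often.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical mean UX scores observed in the four experimental runs.
scores = np.array([3.2, 4.1, 3.8, 4.6])

def main_effects(array, y):
    """Mean response at each level of each factor; the level with the
    higher mean enters the optimised combination."""
    return np.array([[y[array[:, f] == lvl].mean() for lvl in (0, 1)]
                     for f in range(array.shape[1])])

effects = main_effects(L4, scores)
best_levels = effects.argmax(axis=1)  # optimised factor-level combination
```

The balanced structure is what lets four runs estimate three factor effects; the confirmatory stage then tests the `best_levels` combination, which may not appear among the original runs.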

Findings

The author found that, for furniture products, interactive product display through VR can achieve good user satisfaction when quality design planning is applied. VR conveys the characteristics of products better than paper catalogs, website catalogs and online videos. “Interactive inspection”, “function simulation”, “style customization” and “set-out customization” were the demonstration scenarios most valuable to customers. The results of the experiment confirmed that “overall rating”, “hedonic appeal” and “practical quality” were the three most important optimized operating methods, constituting a benchmark of user satisfaction.

Originality/value

The author found that it is possible to design and build a VR-based furniture demonstration system with a good level of usability when a suitable quality design method is applied. The optimized user interaction indicators and implementation experience for the VR-based product demonstration presented in this study will be of use in further design and development of similar systems.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 15 December 2022

Rong Zhang

Abstract

Purpose

The purpose of this research was to explore the stickiness of players' motivation in a virtual community and to identify the factors important to gamers.

Design/methodology/approach

In this research, motivation was the independent variable, the virtual community was the mediator and stickiness was the dependent variable. An online questionnaire survey was conducted with users of augmented reality (AR) games as the research subjects. Statistical analysis was carried out using SPSS and AMOS software to verify the research model and hypotheses, to understand the relation between player motivation and stickiness and to determine whether there were any changes in the virtual community.
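The mediation structure (motivation → virtual community → stickiness) can be illustrated with a toy path analysis on synthetic data. This OLS sketch is a stand-in for the structural equation model fitted in AMOS, and every coefficient is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
motivation = rng.normal(size=n)                    # independent variable
community = 0.7 * motivation + rng.normal(size=n)  # mediator
stickiness = 0.6 * community + rng.normal(size=n)  # dependent variable

def ols(y, *xs):
    """OLS coefficients of y regressed on an intercept plus regressors."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(stickiness, motivation)[1]                 # total effect c
a = ols(community, motivation)[1]                      # path a (X -> M)
_, direct, b = ols(stickiness, motivation, community)  # c' and path b
indirect = a * b                                       # mediated effect
# Full mediation corresponds to direct ~ 0 with a nonzero indirect effect.
```

For OLS the decomposition total = direct + indirect holds exactly, which is the identity behind the "completely mediating" finding for Ingress.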

Findings

The authors found that players' motivation in AR-based games had a significant positive relation with the virtual community. Both Ingress and Pokémon had a significant positive impact on the virtual community and on stickiness. The virtual community of the Ingress game played a fully mediating role between motivation and stickiness, but the virtual community in Pokémon did not have a mediating effect.

Originality/value

The novel approach adopted in this study enabled the causal relations between player motivation, the virtual community and stickiness to be determined on the basis of the theoretical framework formulated, which was used to construct a path analysis model diagram. The correlation between motivation and the virtual community, the correlation between the virtual community and stickiness, and the causal relation among all three were verified. The study results and conclusions may help companies understand how to use virtual communities in AR games to improve stickiness and motivate gamers to continue playing.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Open Access
Article
Publication date: 29 July 2020

Abdullah Alharbi, Wajdi Alhakami, Sami Bourouis, Fatma Najar and Nizar Bouguila

Abstract

We propose in this paper a novel, reliable detection method to recognize forged inpainted images. Detecting potential forgeries and authenticating the content of digital images is extremely challenging and important for many applications. The proposed approach develops new probabilistic support vector machine (SVM) kernels from a flexible generative statistical model named the “bounded generalized Gaussian mixture model”. The developed learning framework has the advantage of properly combining the benefits of both discriminative and generative models and of including prior knowledge about the nature of the data. It can effectively recognize whether an image has been tampered with and can distinguish forged images from authentic ones. The obtained results confirm that the developed framework performs well on numerous inpainted images.
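The idea of building an SVM kernel from a generative mixture can be sketched as follows, substituting a plain spherical Gaussian mixture for the paper's bounded generalized Gaussian mixture model; the component means and variance are hypothetical:

```python
import numpy as np

def responsibilities(X, means, var=1.0):
    """Posterior p(component | x) under an equal-weight spherical Gaussian
    mixture, a simple stand-in for the bounded generalized Gaussian
    mixture model used in the paper."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    logp = -0.5 * d2 / var
    logp -= logp.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def generative_kernel(X, Y, means, var=1.0):
    """Probabilistic kernel K(x, y) = sum_k p(k|x) p(k|y).  The resulting
    Gram matrix can be fed to an SVM that accepts precomputed kernels."""
    return responsibilities(X, means, var) @ responsibilities(Y, means, var).T
```

Because the kernel is an inner product of posterior vectors, the Gram matrix is symmetric and positive semi-definite, so it is a valid SVM kernel that encodes the generative model's view of the data.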

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964
