Search results
1 – 10 of over 5000
Li Hong‐jun, Hu Wei, Xie Zheng‐guang and Wang Wei
Abstract
Purpose
The paper aims to extend research on grey relational analysis applied to the wavelet transform, and proposes a grey relational threshold algorithm for image denoising. The study tries to suppress noise while retaining edges and important structures as much as possible.
Design/methodology/approach
The paper analyses the distribution characteristics of noise and edges in different subbands, then uses grey relational values to quantify the relationships among scale, direction and noise deviation. Taking these grey relational values of scale, direction and noise deviation as influencing factors, it proposes a grey relational threshold algorithm.
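The abstract does not give the formulas, but the grey relational values it relies on are commonly computed with Deng's grey relational coefficient. A minimal sketch in NumPy; the function name, the pre-normalisation assumption and the resolution coefficient rho=0.5 are illustrative conventions, not details taken from the paper:

```python
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.5):
    """Deng's grey relational degree of each comparison sequence to the
    reference sequence (sequences are assumed to be pre-normalised)."""
    ref = np.asarray(reference, dtype=float)
    seqs = np.atleast_2d(np.asarray(comparisons, dtype=float))
    delta = np.abs(seqs - ref)                      # point-wise differences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)                       # one degree per sequence
```

In a denoising setting such as the one described, the resulting degrees could serve as weights that adapt the wavelet threshold per subband.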
Findings
Grey relational analysis applied to threshold setting offers clear advantages in image denoising. The simulation results confirm this both in visual quality and in peak signal-to-noise ratio (PSNR).
Originality/value
This paper applies grey relational theory to image denoising and proposes a grey relational threshold algorithm, providing a novel method for image denoising.
Details
Keywords
Ambaji S. Jadhav, Pushpa B. Patil and Sunil Biradar
Abstract
Purpose
Diabetic retinopathy (DR) is a leading cause of blindness worldwide. DR is difficult to diagnose in its early stages, and the detection procedure can be time-consuming even for qualified experts. Nowadays, intelligent disease detection techniques are widely accepted for progression analysis and recognition of various diseases. Therefore, this paper proposes a computer-aided diagnosis scheme based on intelligent learning approaches for diagnosing DR effectively on a benchmark dataset.
Design/methodology/approach
The proposed DR diagnostic procedure involves four main steps: (1) image pre-processing, (2) blood vessel segmentation, (3) feature extraction, and (4) classification. Initially, the retinal fundus image is pre-processed with Contrast Limited Adaptive Histogram Equalization (CLAHE) and an average filter. Next, blood vessel segmentation is carried out using optimized gray-level thresholding. Once the blood vessels are extracted, features are computed using the Local Binary Pattern (LBP), Texture Energy Measurement (TEM, based on Laws' texture energy), and two entropy measures, Shannon's entropy and Kapur's entropy. The collected features are fed to a Neural Network (NN) classifier with an optimized training algorithm. Both the gray-level thresholding and the NN are tuned by the Modified Levy Updated-Dragonfly Algorithm (MLU-DA), which operates to maximize segmentation accuracy and to reduce the error between the predicted and actual outputs of the NN. The resulting classification error demonstrates the efficiency of the proposed DR detection model.
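Two of the feature-extraction steps named above, LBP and Shannon's entropy, can be sketched in plain NumPy. This is a generic illustration of the standard 8-neighbour LBP and histogram entropy, not the authors' implementation:

```python
import numpy as np

def lbp_image(img):
    """Standard 8-neighbour Local Binary Pattern codes for interior pixels."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    return code

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    p = np.bincount(img.ravel(), minlength=bins) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

The LBP code image would typically be summarised as a histogram before being fed to the classifier.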
Findings
The overall accuracy of the proposed MLU-DA was 16.6% higher than that of conventional classifiers, and its precision was 22% better than LM-NN and 16.6% better than PSO-NN, GWO-NN, and DA-NN. It is concluded that the implemented MLU-DA outperformed state-of-the-art algorithms in detecting DR.
Originality/value
This paper adopts the latest optimization algorithm, MLU-DA, in a neural network with optimal gray-level thresholding for detecting diabetic retinopathy. This is the first work to utilize an MLU-DA-based Neural Network for computer-aided Diabetic Retinopathy diagnosis.
Details
Keywords
Yong Liu, Jun-liang Du, Ren-Shi Zhang and Jeffrey Yi-Lin Forrest
Abstract
Purpose
This paper aims to establish a novel three-way decisions-based grey incidence analysis clustering approach and exploit it to extract information and rules implied in panel data.
Design/methodology/approach
Because panel data have spatiotemporal characteristics, they can describe the systemic and dynamic behaviour of decision objects well. However, traditional panel data analysis methods struggle to efficiently extract the information and rules implied in panel data. To deal effectively with the panel data clustering problem, and according to the spatiotemporal characteristics of panel data, the authors define a comprehensive distance between decision objects along three dimensions: absolute amount level, incremental amount level and volatility level. They then construct a novel grey incidence analysis clustering approach for panel data and study the computation of its threshold value using the ideas and methods of three-way decisions. Finally, the authors apply the model to a clustering problem on regional high-tech industrialization in China to illustrate its validity and rationality.
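The abstract does not give the comprehensive-distance formula; the sketch below is a hypothetical combination of the three stated dimensions (absolute level, incremental change, volatility), with equal weights and the specific component measures assumed for illustration:

```python
import numpy as np

def comprehensive_distance(x, y, w=(1 / 3, 1 / 3, 1 / 3)):
    """Illustrative distance between two objects' time series combining
    the three dimensions named above; component formulas and the equal
    weights are assumptions, not taken from the paper."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    d_abs = np.abs(x - y).mean()                      # absolute amount level
    d_inc = np.abs(np.diff(x) - np.diff(y)).mean()    # incremental amount level
    d_vol = abs(x.std() - y.std())                    # volatility level
    return w[0] * d_abs + w[1] * d_inc + w[2] * d_vol
```

A pairwise matrix of such distances could then feed the grey incidence clustering, with the three-way-decisions threshold deciding cluster membership.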
Findings
The results show that the proposed model can objectively determine the clustering threshold value and extract the information and rules inherent in the panel data.
Practical implications
The novel model proposed in the paper can describe and resolve the panel data clustering problem well and efficiently extract the information and rules implied in panel data.
Originality/value
The proposed model can deal with the panel data clustering problem and extract the information and rules inherent in the panel data.
Details
Keywords
Ntogas Nikolaos and Ventzas Dimitrios
Abstract
Purpose
The purpose of this paper is to introduce an innovative procedure for binarizing digital historical document images based on image pre‐processing and image condition classification. The estimated results for each class of images and each method show improved image quality for the six categories of document images, each described by its own characteristics.
Design/methodology/approach
The applied technique consists of five stages: text image acquisition and preparation; denoising; image type classification into six categories according to image condition; image thresholding; and final refinement. This is a very effective approach to binarizing document images. The results achieved by the authors' method require minimal pre‐processing steps for best image quality and increased text readability. The methodology performs better than current state‐of‐the‐art adaptive thresholding techniques.
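The abstract names the thresholding stage without giving its method. As a generic stand-in (global Otsu thresholding, not the authors' condition-dependent technique), the stage can be sketched as:

```python
import numpy as np

def otsu_threshold(img):
    """Global Otsu threshold over a 256-level grey image: a generic
    stand-in for the condition-dependent thresholding stage."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # background class probability
    mu = np.cumsum(p * np.arange(256))         # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)           # undefined where a class is empty
    return int(np.argmax(sigma_b))             # threshold maximising between-class variance
```

Pixels above the returned threshold would be mapped to text or background depending on document polarity.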
Findings
The paper presents an innovative procedure for binarizing digital historical document images based on image pre‐processing, classification of image type into categories according to image condition, and further enhancement. The methodology is robust and simple, requires minimal pre‐processing steps for best image quality and increased text readability, and performs better than available thresholding techniques.
Research limitations/implications
The technique consists of a limited but optimized sequence of pre‐processing steps. Attention should be given to document image preparation and denoising, and to image condition classification for thresholding and refinement, since poor results at any single stage corrupt the final document image quality and text readability.
Originality/value
The paper contributes to digital binarization of text images, suggesting a procedure based on image preparation, image type classification, thresholding and image refinement, with applicability to Byzantine historical documents.
Details
Keywords
Manak Jain, Sanjay Dhande and Nalinaksh Vyas
Abstract
Purpose
Congenital talipes equinovarus (CTEV), or club foot, is a historical foot deformity in which the foot is turned in and pointing down, causing the subject to walk on the outside edges of the feet. Non‐surgical correction of this deformity is an unsolved, challenging problem in the medical domain, made more pressing by the increasing number of such patients. The purpose of this paper is to build a biomodel of this foot deformity in newborn babies and thereby develop a corrective procedure using rapid prototyping (RP).
Design/methodology/approach
Biomodeling is a new technology that uses medical scan data sets to generate solid plastic replicas of anatomical structures. Medical scan data sets of live club foot baby patients were acquired and, after image processing, biomodels of four live unilateral club foot baby patients were developed on a fused deposition modeling RP system.
Findings
The paper shows the location and position of abnormal bones and abnormal tarsal joints, which is useful for managing club foot deformity in newborn babies. On visual study, it is observed that the talus is underdeveloped and the talar neck is shorter and deviated in the medial and plantar directions.
Research limitations/implications
The major outcome of this paper is the detailed geometrical visualization of the talus bone of the club foot and the normal foot, which assists in diagnosis and better treatment of CTEV. In future, the developed biomodels of the club foot will help to develop a corrective device that assists in bringing the club foot to normal foot geometry.
Practical implications
These biomodels of the club foot help surgeons decide on the best corrective procedure. The geometrical comparison between the normal and the club foot helps in developing a non‐surgical corrective procedure for this historical foot deformity. A 3D representation of the talus bone provides an opportunity to view the talus and analyse the ankle joint geometry, creating favorable conditions for diagnosis and treatment of this deformity.
Originality/value
These first-of-their-kind biomodels of club feet help orthopaedic surgeons in preoperative surgical planning and, consequently, in carrying out biomechanical studies of the club foot. The presented research plays a major role in planning a non‐surgical corrective procedure for this historical deformity. It also provides a platform for finite element analysis of the club foot.
Details
Keywords
Saeed Fathi, Phill Dickens and Richard Hague
Abstract
Purpose
The purpose of this paper is to present the findings on jet array instabilities of molten caprolactam. Initial investigations showed that although a suitable range of parameters was found for stable jetting, there were cases where instabilities occurred due to external sources such as contamination.
Design/methodology/approach
The inkjet system consisted of a melt supply unit, filtration unit and printhead with pneumatic and thermal control. A start‐up strategy was developed to initiate the jetting trials. A digital microscope camera monitored the printhead nozzle plate to record the jet array stability within the recommended range of parameters from earlier research. The trials with jet instabilities were studied to analyse the instability behaviour.
Findings
It was found that instabilities occurred in three forms: jet trajectory error, single jet failure and jet array failure. Occasionally, the jet with an incorrect trajectory remained stable. When a jet failed, bleeding of melt from the nozzle due to the actuations influenced the adjacent jets, causing an array of jets to fail like falling dominoes.
Originality/value
The research concept is novel, and investigating jet array instability behaviours gives insight into jetting reliability issues.
Details
Keywords
Kun Guo and Qishan Zhang
Abstract
Purpose
The purpose of this paper is to discover social communities from social networks by propagating affinity messages among members in a localized way. The affinity between any two members is computed by the grey relational analysis method.
Design/methodology/approach
First, the responsibility and availability messages are restricted to being broadcast only between a node and its neighbours, i.e. the nodes connected to it directly. In this way, both the time complexity and the space complexity can be reduced to near-linear in the network size, which matters because social networks are generally very large. Second, instead of the widely used Euclidean distance, the grey relational degree is adopted for calculating node similarity, because it is more suitable for discovering hidden relations among nodes. On the basis of these two improvements, a new social community detection algorithm is proposed. Finally, experiments are conducted to verify its performance.
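The localized message passing described above can be sketched as standard affinity propagation with the responsibility and availability updates masked to direct neighbours. This is a generic illustration, not the authors' algorithm; a grey relational degree between member profiles could supply the entries of sim:

```python
import numpy as np

def localized_ap(sim, adj, iters=100, damping=0.5):
    """Affinity-propagation sketch in which messages pass only between a
    node and its direct neighbours (boolean adjacency `adj`); similarities
    outside the neighbourhood are treated as -inf."""
    n = sim.shape[0]
    mask = adj | np.eye(n, dtype=bool)          # every node is its own neighbour
    S = np.where(mask, sim, -np.inf)
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # r(i,k) = s(i,k) - max over k' != k of (a(i,k') + s(i,k'))
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * np.where(mask, Rnew, 0.0)
        # a(i,k) = min(0, r(k,k) + sum over other neighbours of max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0.0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * np.where(mask, Anew, 0.0)
    return np.where(mask, A + R, -np.inf).argmax(axis=1)  # exemplar per node
```

Because messages never cross the neighbourhood mask, each node's exemplar is guaranteed to be itself or a direct neighbour, which is what confines the cost to the number of edges rather than of node pairs.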
Findings
The new algorithm is evaluated on both real-world and artificial data sets. The experimental results show the proposed algorithm to be effective and efficient at community discovery.
Practical implications
The algorithm proposed in the paper can be applied to discover communities in many social networks. After recognizing the social communities, the authors can target advertisements, spot valuable customers or locate criminals more precisely.
Originality/value
The new algorithm localizes the affinity propagation process to improve both time and space complexity. Furthermore, grey relational analysis is applied to resolve the complex relations among members of social networks.
Details
Keywords
Baohua Yang, Junming Jiang and Jinshuai Zhao
Abstract
Purpose
The purpose of this study is to construct a gray relational model based on information diffusion to avoid rank reversal when the available decision information is insufficient, or the decision objects vary.
Design/methodology/approach
Considering that ideal sequence selection in gray relational decision-making depends on case sampling, which causes the phenomenon of rank reversal, this study designs an ideal point diffusion method based on the development trend and distribution skewness of the sample information. In this method, a gray relational model for sample classification is constructed using a virtual-ideal sequence. Subsequently, an optimization model is established to obtain the criteria weights and classification radius values that minimize the deviation between the comprehensive relational degree of the classification object and the critical value.
Findings
The rank-reversal problem in gray relational models could drive decision-makers away from using this method. The results of this study demonstrate that the proposed gray relational model based on information diffusion and virtual-ideal sequencing can effectively avoid rank reversal. The method is applied to classify 31 brownfield redevelopment projects based on available interval gray information. The case analysis verifies the rationality and feasibility of the model.
Originality/value
This study proposes a robust method for ideal point choice when the decision information is limited or dynamic. This method can reduce the influence of ideal sequence changes in gray relational models on decision-making results considerably better than other approaches.
Details
Keywords
Jurgita Domskienė, Eugenija Strazdienė and Paule Bekampienė
Abstract
Purpose
The purpose of this paper is to optimise parameters of digital image analysis to investigate the deformation behaviour of woven samples and to detect the onset and variation of wrinkling that occurs due to bias‐tensioned fabric buckling.
Design/methodology/approach
Using models of prescribed shape, the relationship between the digitized gray-scale intensities and the wrinkles of the surface is analysed, and conditions of specimen illumination and filtering procedures are chosen.
Findings
It is proposed to convert the acquired images to binary to record the onset of buckling and to estimate the critical buckling parameters of stretched woven samples. The threshold value is determined as the mean value of the approximated histogram of the stretched specimen's centre line. The profile curve and the gray-scale dispersion, expressed by the parameter CV, can be used to obtain additional information and to compare the behaviour of different samples during bias tension.
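The centre-line thresholding and the dispersion parameter CV described above can be sketched as follows. This simplifies the paper's "approximated histogram" to a plain mean of the centre line, and assumes CV is the usual coefficient of variation (standard deviation over mean); both are illustrative assumptions:

```python
import numpy as np

def buckling_threshold(img):
    """Threshold taken as the mean grey level along the specimen's
    horizontal centre line, then applied to binarise the image."""
    centre_line = img[img.shape[0] // 2, :]
    t = float(centre_line.mean())
    return t, (img > t).astype(np.uint8)

def grey_cv(img):
    """Grey-scale dispersion expressed as the coefficient of variation."""
    return float(img.std() / img.mean())
```

The binary image would reveal the onset of buckling as connected bright or dark bands, while CV tracked over the stretch cycle compares waviness between samples.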
Research limitations/implications
The proposed image analysis technique allows detection of the onset of buckling wave formation and evaluation of surface waviness changes in woven samples of different colour and weave type under tension. However, the behaviour of fabric samples with sharp, multicoloured and complicated patterns cannot be assessed by gray-scale imaging.
Originality/value
The proposed approach can be adjusted to investigate different wrinkling problems: buckling during simple shearing or the picture frame test, seam puckering, and draping.
Details
Keywords
Abstract
Purpose
The purpose of this paper is to develop a system to analyse the characteristics of infrared objects.
Design/methodology/approach
Grey-scale estimation based on the image entropy of pixel grey levels is carried out to construct the neural networks. Grey relational analysis and grey clustering methods are then applied to filter candidate objects. The target is predicted through image segmentation pre-treatment based on the forecast values of the grey system and assigned a corresponding mark. The forecasting precision is greatly improved by the GM(1,1) model.
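The GM(1,1) forecasting step mentioned above follows a standard recipe: accumulate the sequence, fit the two grey parameters by least squares, and difference the fitted exponential back. The sketch below is the textbook model, not the authors' full tracking system:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to sequence x0 and forecast `steps`
    further values via the accumulated-generating-operation series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # restore by differencing
    return x0_hat[-steps:]
```

In a tracking context the forecast value would seed the segmentation window for the next frame, which is what "image segmentation pre-treatment based on the forecast values" suggests.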
Findings
Based on the analysis and the experimental results, the paper shows that the system achieves a good recognition rate for infrared targets.
Research limitations/implications
This paper provides a way to capture the fine features of an image. The pixel-level filtering operation provides a new stage of auto‐adaptive filtering.
Practical implications
Applications of grey theory deepen infrared target detection and enrich image processing technology.
Originality/value
This system introduces an effective method for detecting infrared targets.
Details