Search results

1 – 8 of 8
Article
Publication date: 19 May 2021

Mithun B. Patil and Rekha Patil

The vertical handoff (VHO) mechanism has become very popular because of improvements in mobility models. These developments are limited to certain circumstances and thus do not…

Abstract

Purpose

The vertical handoff (VHO) mechanism has become very popular because of improvements in mobility models. These developments are limited to certain circumstances and thus do not support generic mobility, yet vertical handover management in heterogeneous wireless networks (HWNs) is crucial and challenging. Hence, this paper introduces a vertical handoff management approach based on an effective network selection scheme.

Design/methodology/approach

This paper aims to improve on the working principle of previous methods and make VHO more efficient and reliable for the HWN. Initially, a handover triggering technique is modelled to identify an appropriate place to initiate handover based on the computed coverage area of the cellular base station or wireless local area network (WLAN) access point. Then, inappropriate networks are eliminated to determine the better network in which to perform handover. Accordingly, a network selection approach is introduced on the basis of the Fractional dolphin echolocation-based support vector neural network (Fractional-DE-based SVNN). The Fractional-DE is designed by integrating fractional calculus (FC) into dolphin echolocation (DE), thereby modifying the update rule of the DE algorithm based on the location of the solutions in past iterations. The proposed Fractional-DE algorithm is used to train the support vector neural network (SVNN) for selecting the best weights. Several parameters, such as bit error rate (BER), end-to-end delay (EED), jitter, packet loss and energy consumption, are considered for choosing the best network.
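As a rough illustration of the idea (the abstract does not give the exact update rule), the sketch below folds a fractional-calculus memory term over the last few positions into a metaheuristic position update, in the style of the Grunwald-Letnikov coefficients used by fractional variants of swarm algorithms. The coefficient form, the pull factor toward the best solution and the toy dimensionality are assumptions, not the paper's settings.

```python
import numpy as np

def fractional_memory_update(history, alpha=0.5):
    """Fractional-order position update: a weighted sum of the last four
    positions using Grunwald-Letnikov-style coefficients. `history` holds
    position arrays, most recent last. Hypothetical illustration; the
    paper's exact Fractional-DE rule may differ."""
    x_t, x_t1, x_t2, x_t3 = history[-1], history[-2], history[-3], history[-4]
    return (alpha * x_t
            + 0.5 * alpha * (1 - alpha) * x_t1
            + (1.0 / 6.0) * alpha * (1 - alpha) * (2 - alpha) * x_t2
            + (1.0 / 24.0) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * x_t3)

# Toy usage: move candidate network-selection weights toward the best solution
# found so far, while the fractional term keeps memory of earlier iterations.
rng = np.random.default_rng(0)
dim = 5                                   # e.g. one weight per criterion (BER, EED, jitter, ...)
history = [rng.random(dim) for _ in range(4)]
best = rng.random(dim)
for _ in range(20):
    memory = fractional_memory_update(history, alpha=0.6)
    step = 0.3 * (best - history[-1]) * rng.random(dim)   # echolocation-style pull toward the best
    new_pos = np.clip(memory + step, 0.0, 1.0)
    history = history[1:] + [new_pos]
print("final candidate weights:", history[-1])
```

The memory term is what distinguishes a fractional update from the plain DE rule: older positions keep influencing the new candidate with decreasing weights, which is the "location of the solutions in past iterations" idea the abstract refers to.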

Findings

The performance of the proposed VHO mechanism based on Fractional-DE is evaluated in terms of delay, energy consumption, staytime and throughput. The proposed Fractional-DE method achieves a minimal delay of 0.0100 sec, minimal energy consumption of 0.348, a maximal staytime of 4.373 sec and a maximal throughput of 109.20 kbps.

Originality/value

In this paper, a network selection approach is introduced on the basis of the Fractional dolphin echolocation-based support vector neural network (Fractional-DE-based SVNN). The Fractional-DE is designed by integrating fractional calculus (FC) into dolphin echolocation (DE), thereby modifying the update rule of the DE algorithm based on the location of the solutions in past iterations. The proposed Fractional-DE algorithm is used to train the SVNN for selecting the best weights. Several parameters, such as bit error rate (BER), end-to-end delay (EED), jitter, packet loss and energy consumption, are considered for choosing the best network. The performance of the proposed VHO mechanism based on Fractional-DE is evaluated in terms of delay, energy consumption, staytime and throughput, in which the proposed method offers the best performance.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 1
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 2 July 2020

N. Venkata Sailaja, L. Padmasree and N. Mangathayaru

Text mining has been used for various knowledge discovery-based applications, and thus a lot of research has been contributed towards it. The latest trend in text…


Abstract

Purpose

Text mining has been used for various knowledge discovery-based applications, and thus a lot of research has been contributed towards it. The latest trend in text mining research is the adoption of incremental learning, as it is economical when dealing with large volumes of information.

Design/methodology/approach

The primary intention of this research is to design and develop a technique for incremental text categorization using an optimized Support Vector Neural Network (SVNN). The proposed technique involves four major steps: pre-processing, feature extraction, feature selection and classification. Initially, the data are pre-processed based on stop-word removal and stemming. Then, feature extraction is performed by extracting semantic word-based features and Term Frequency-Inverse Document Frequency (TF-IDF). From the extracted features, the important features are selected using the Bhattacharya distance measure, and these features are given as the input to the proposed classifier. The proposed classifier performs incremental learning using the SVNN, wherein the weights are bounded within a limit using rough set theory. Moreover, for the optimal selection of weights in the SVNN, the Moth Search (MS) algorithm is used. Thus, the proposed classifier, named Rough set MS-SVNN, performs text categorization on the incremental data given as input.
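A minimal sketch of the feature-extraction and feature-selection stages is given below, using scikit-learn's TF-IDF vectorizer and a Gaussian form of the Bhattacharya distance computed per feature. The tiny corpus, the top-k selection (the paper instead sets a minimum-distance threshold) and the omission of stemming and the semantic word-based features are simplifications, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def gaussian_bhattacharyya(x0, x1, eps=1e-12):
    """Bhattacharya distance between two 1-D samples under a Gaussian
    assumption; larger values mean the feature separates the classes better."""
    m0, m1 = x0.mean(), x1.mean()
    v0, v1 = x0.var() + eps, x1.var() + eps
    return 0.25 * np.log(0.25 * (v0 / v1 + v1 / v0 + 2)) + 0.25 * (m0 - m1) ** 2 / (v0 + v1)

# Tiny made-up corpus with two classes (finance vs. sport).
docs = ["stocks rally as markets rise", "markets fall on rate fears",
        "team wins the final match", "player scores in the final"]
labels = np.array([0, 0, 1, 1])

# Feature extraction: TF-IDF after stop-word removal.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray()
terms = vec.get_feature_names_out()

# Feature selection: rank terms by the Bhattacharya distance between their
# class-conditional TF-IDF values and keep the top k.
scores = np.array([gaussian_bhattacharyya(X[labels == 0, j], X[labels == 1, j])
                   for j in range(X.shape[1])])
selected = terms[np.argsort(scores)[::-1][:5]]
print("selected discriminative terms:", list(selected))
```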

Findings

For the experimentation, the 20 Newsgroups dataset and the Reuters dataset are used. Simulation results indicate that the proposed Rough set-based MS-SVNN achieved 0.7743, 0.7774 and 0.7745 for precision, recall and F-measure, respectively.

Originality/value

In this paper, an online incremental learner is developed for text categorization. The text categorization is done by developing the Rough set MS-SVNN classifier, which classifies the incoming texts based on the boundary condition evaluated by rough set theory and the optimal weights obtained from MS. The proposed online text categorization scheme has the basic steps of pre-processing, feature extraction, feature selection and classification. The pre-processing is carried out to identify the unique words in the dataset, and features such as semantic word-based features and TF-IDF are obtained from the keyword set. Feature selection is done by setting a minimum Bhattacharya distance measure, and the selected features are provided to the proposed Rough set MS-SVNN for classification.

Details

Data Technologies and Applications, vol. 54 no. 5
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 21 March 2022

A. Ashwitha and C.A. Latha

Automated crop prediction is needed for the following reasons: First, agricultural yields were previously decided by a farmer's experience with a certain field and a particular crop…

Abstract

Purpose

Automated crop prediction is needed for the following reasons. First, agricultural yields were previously decided by a farmer's experience with a certain field and a particular crop, and farmers were not always able to predict the crop and its yield on that basis alone. Second, seed firms frequently monitor how well new plant varieties will grow in certain settings. Third, predicting agricultural production is critical for addressing emerging food security concerns, especially in the face of global climate change. Accurate production forecasts not only assist farmers in making informed economic and management decisions but also aid in the prevention of famine. This results in efficiency and productivity gains for farming systems, as well as reduced risk from environmental factors.

Design/methodology/approach

This research paper proposes a machine learning technique for effective autonomous crop and yield prediction, which uses solution encoding to generate solutions randomly; for every generated solution, fitness is evaluated with respect to prediction accuracy. The major focus of the proposed work is to optimize the weight parameters applied to the input data. The algorithm continues until the optimal agent, or optimal weight, is selected, which contributes to the maximum accuracy in automated crop prediction.

Findings

The performance of the proposed work is compared with different existing algorithms, such as random forest, support vector machine (SVM) and artificial neural network (ANN). The proposed method, a support vector neural network (SVNN) with a gravitational search agent (GSA), is analysed based on different performance metrics, such as accuracy, sensitivity, specificity, CPU memory usage and training time, and the maximum performance is determined.

Research limitations/implications

Rather than real-time data collected by Internet of Things (IoT) devices, this research focuses solely on historical data; the proposed work does not employ IoT-based smart farming, which enhances the overall agricultural system by monitoring the field in real time. The present study only predicts the sort of crop to sow, not the crop yield.

Originality/value

The paper proposes a novel optimization algorithm based on the law of gravity and mass interactions. The search agents in the proposed algorithm are a cluster of weights that interact with one another using Newtonian gravity and the laws of motion. A comparison was made between the suggested method and various existing strategies, and the obtained results confirm its high performance in solving diverse nonlinear functions.
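A minimal sketch of such a gravitational search over candidate weight vectors is shown below. The sphere fitness function, population size and decay schedule are placeholders: in the paper the fitness would be the crop-prediction accuracy obtained with the candidate weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(w):
    """Stand-in objective (sphere function). In the paper this would be the
    prediction accuracy obtained with the candidate weight vector."""
    return np.sum(w ** 2)

n_agents, dim, iters = 8, 4, 60
X = rng.uniform(-5, 5, size=(n_agents, dim))      # agents = candidate weight vectors
V = np.zeros_like(X)
G0, decay = 100.0, 20.0

for t in range(iters):
    fit = np.array([fitness(x) for x in X])
    best, worst = fit.min(), fit.max()
    m = (worst - fit) / (worst - best + 1e-12)     # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    G = G0 * np.exp(-decay * t / iters)            # gravitational "constant" decays over time

    accel = np.zeros_like(X)
    for i in range(n_agents):
        for j in range(n_agents):
            if i == j:
                continue
            diff = X[j] - X[i]
            r = np.linalg.norm(diff) + 1e-12
            # Newtonian-style attraction; the agent's own inertial mass cancels out.
            accel[i] += rng.random() * G * M[j] * diff / r
    V = rng.random(X.shape) * V + accel
    X = X + V

best_idx = int(np.argmin([fitness(x) for x in X]))
print("best weight vector:", np.round(X[best_idx], 3), "fitness:", fitness(X[best_idx]))
```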

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 1
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 16 August 2021

V. Vinolin and M. Sucharitha

With the advancements in photo editing software, it is possible to generate fake images, degrading the trust in digital images. Forged images, which appear like authentic images…

Abstract

Purpose

With the advancements in photo editing software, it is possible to generate fake images, degrading trust in digital images. Forged images, which appear like authentic images, can be created without leaving any visual clues about the alteration. The image forensics field has introduced several forgery detection techniques, which effectively distinguish fake images from original ones, to restore trust in digital images. Among the various types of forged images, spliced images involving human faces are the most harmful. Hence, there is a need for a forgery detection approach to detect spliced images.

Design/methodology/approach

This paper proposes a Taylor-rider optimization algorithm-based deep convolutional neural network (Taylor-ROA-based DeepCNN) for detecting spliced images. Initially, the human faces in the spliced images are detected using the Viola–Jones algorithm, from which the three-dimensional (3D) shape of each face is established using a landmark-based 3D morphable model (L3DMM), which estimates the light coefficients. Then, distance measures such as Bhattacharya, standardized Euclidean (Seuclidean), Euclidean, Hamming, Chebyshev and correlation coefficients are determined from the light coefficients of the faces. These form the feature vector for the proposed Taylor-ROA-based DeepCNN, which identifies the spliced images.
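The sketch below illustrates how such a distance-based feature vector could be assembled from the light coefficients of two faces, using SciPy's distance functions and a hand-rolled Bhattacharya measure. The random nine-element coefficient vectors and the sign-pattern trick for the Hamming distance are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import distance

def bhattacharyya(u, v, eps=1e-12):
    """Bhattacharya distance between two vectors treated as discrete
    distributions of their normalised magnitudes."""
    p = np.abs(u) / (np.abs(u).sum() + eps)
    q = np.abs(v) / (np.abs(v).sum() + eps)
    return -np.log(np.sum(np.sqrt(p * q)) + eps)

def splicing_feature_vector(light_a, light_b):
    """Distance-based features between the estimated light coefficients of two
    faces in the same image (the coefficients here are placeholders; the paper
    estimates them with an L3DMM)."""
    V = np.var(np.vstack([light_a, light_b]), axis=0) + 1e-12   # variances for seuclidean
    return np.array([
        bhattacharyya(light_a, light_b),
        distance.seuclidean(light_a, light_b, V),
        distance.euclidean(light_a, light_b),
        distance.hamming(light_a > 0, light_b > 0),   # Hamming on sign patterns
        distance.chebyshev(light_a, light_b),
        distance.correlation(light_a, light_b),
    ])

# Toy example: nine lighting coefficients per face.
rng = np.random.default_rng(3)
face1, face2 = rng.normal(size=9), rng.normal(size=9)
print("feature vector for the DeepCNN:", np.round(splicing_feature_vector(face1, face2), 4))
```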

Findings

Experimental analysis using the DSO-1, DSI-1, real and hybrid datasets reveals that the proposed approach achieved maximal accuracy, true positive rate (TPR) and true negative rate (TNR) of 99%, 98.88% and 96.03%, respectively, on the DSO-1 dataset. In terms of accuracy, the proposed method achieved improvements of 24.49%, 8.92%, 6.72%, 4.17%, 0.25%, 0.13%, 0.06% and 0.06% over the existing methods of Kee and Farid, shape from shading (SFS), random guess, Bo Peng et al., neural network, FOA-SVNN, CNN-based MBK and Manoj Kumar et al., respectively.

Originality/value

The Taylor-ROA is developed by integrating the Taylor series in rider optimization algorithm (ROA) for optimally tuning the DeepCNN.

Details

Data Technologies and Applications, vol. 56 no. 1
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 26 July 2019

Ayalapogu Ratna Raju, Suresh Pabboju and Ramisetty Rajeswara Rao

Brain tumor segmentation and classification is an interesting area that differentiates tumorous from non-tumorous cells in the brain and classifies the tumorous cells for…

Abstract

Purpose

Brain tumor segmentation and classification is an interesting area that differentiates tumorous from non-tumorous cells in the brain and classifies the tumorous cells to identify their grade. The methods developed so far lack automatic classification and consume considerable time for the classification. In this work, a novel brain tumor classification approach, namely, the harmony cuckoo search-based deep belief network (HCS-DBN), is proposed. Here, the images present in the database are segmented based on a newly developed hybrid active contour (HAC) segmentation model, which is an integration of Bayesian fuzzy clustering (BFC) and the active contour model. The proposed HCS-DBN algorithm is trained with the features obtained from the segmented images. Finally, the classifier provides information about the tumor class in each slice available in the database. Experimentation with the proposed HAC and HCS-DBN algorithms is carried out using the MRI images available in the BRATS database, and the results are observed. The simulation results prove that the proposed HAC and HCS-DBN algorithms have an overall better performance, with values of 0.945, 0.9695 and 0.99348 for accuracy, sensitivity and specificity, respectively.

Design/methodology/approach

The proposed HAC segmentation approach integrates the properties of the AC model and BFC. Initially, the brain image with different modalities is subjected to segmentation with the BFC and AC models. Then, the Laplacian correction is applied to fuse the segmented outputs from each model. Finally, the proposed HAC segmentation provides the error-free segments of the brain tumor regions present in the MRI image. The next step is to extract the useful features, based on the scattering transform, wavelet transform and local Gabor binary pattern, from the segmented brain image. Finally, the extracted features from each segment are provided to the DBN for training, and the HCS algorithm chooses the optimal weights for DBN training.
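As a rough sketch of the feature-extraction step, the code below computes wavelet sub-band statistics and a local Gabor binary pattern histogram for one segmented region using PyWavelets and scikit-image. The wavelet family, Gabor frequency and LBP settings are assumptions, and the scattering transform is omitted for brevity.

```python
import numpy as np
import pywt
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def segment_features(segment):
    """Feature vector for one segmented tumour region: wavelet sub-band
    statistics plus a local Gabor binary pattern histogram."""
    # Wavelet features: mean and standard deviation of each level-1 sub-band.
    cA, (cH, cV, cD) = pywt.dwt2(segment, "db1")
    wavelet_feats = [stat(band) for band in (cA, cH, cV, cD) for stat in (np.mean, np.std)]

    # Local Gabor binary pattern: Gabor response, rescaled, then a uniform LBP histogram.
    gabor_real, _ = gabor(segment, frequency=0.2)
    g = (gabor_real - gabor_real.min()) / (np.ptp(gabor_real) + 1e-12)
    lbp = local_binary_pattern((g * 255).astype(np.uint8), P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([wavelet_feats, hist])

# Toy usage on a random "segment"; in the paper the segments come from the HAC model.
segment = np.random.default_rng(4).random((64, 64))
print("feature vector length:", segment_features(segment).shape[0])
```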

Findings

The experimentation of the proposed HAC with the HCS-DBN algorithm is analyzed on the standard BRATS database, and its performance is evaluated based on metrics such as accuracy, sensitivity and specificity. The simulation results of the proposed HAC with the HCS-DBN algorithm are compared against existing works such as k-NN, NN, multi-SVM and multi-SVNN. The results achieved by the proposed HAC with the HCS-DBN algorithm are higher than those of the existing works, with values of 0.945, 0.9695 and 0.99348 for accuracy, sensitivity and specificity, respectively.

Originality/value

This work presents the brain tumor segmentation and the classification scheme by introducing the HAC-based segmentation model. The proposed HAC model combines the BFC and the active contour model through a fusion process, using the Laplacian correction probability for segmenting the slices in the database.

Details

Sensor Review, vol. 39 no. 4
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 4 September 2020

Mehdi Khashei and Bahareh Mahdavi Sharif

The purpose of this paper is to propose a comprehensive version of a hybrid autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) model in order to…

Abstract

Purpose

The purpose of this paper is to propose a comprehensive version of a hybrid autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) model in order to yield a more general and more accurate hybrid model for exchange rate forecasting. For this purpose, the Kalman filter technique is used in the proposed model to preprocess the raw data and detect its trend. This is done to reduce the noise present in the underlying data and to enable better modeling, respectively.
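A minimal local-level Kalman filter of the kind that could serve as such a denoising pre-processor is sketched below; the process and measurement variances and the synthetic series are illustrative rather than the paper's settings.

```python
import numpy as np

def kalman_smooth(series, process_var=1e-4, meas_var=1e-2):
    """Minimal local-level (random-walk) Kalman filter used as a denoising
    pre-processor. The variance parameters are illustrative tuning knobs."""
    x_est, p_est = series[0], 1.0
    filtered = []
    for z in series:
        # Predict the random-walk state, then update with the new observation.
        p_pred = p_est + process_var
        k = p_pred / (p_pred + meas_var)          # Kalman gain
        x_est = x_est + k * (z - x_est)
        p_est = (1 - k) * p_pred
        filtered.append(x_est)
    return np.array(filtered)

# Toy usage: a noisy synthetic "exchange rate" series.
rng = np.random.default_rng(5)
raw = np.cumsum(rng.normal(0, 0.01, 200)) + 1.2 + rng.normal(0, 0.02, 200)
smooth = kalman_smooth(raw)
print("raw std vs filtered std:", raw.std().round(4), smooth.std().round(4))
```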

Design/methodology/approach

In this paper, ARIMA models are applied to construct a new hybrid model that overcomes the above-mentioned limitations of ANNs and yields a more general and more accurate model than traditional hybrid ARIMA/ANN models. In the proposed model, a time series is considered a function of a linear and a nonlinear component; so, in the first phase, an ARIMA model is used to identify and magnify the existing linear structures in the data. In the second phase, a multilayer perceptron is used as a nonlinear neural network to model the preprocessed data, in which the existing linear structures have been identified and magnified by ARIMA, and to predict the future values of the time series.
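The sketch below shows the generic two-phase hybrid this paragraph describes, using statsmodels for the ARIMA stage and scikit-learn's multilayer perceptron on lagged residuals for the nonlinear stage. The ARIMA order, lag window and synthetic series are assumptions, and the Kalman pre-processing step is omitted for brevity.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(0, 0.01, 300)) + 0.05 * np.sin(np.arange(300) / 10) + 1.2

# Phase 1: ARIMA captures (and "magnifies") the linear structure.
arima = ARIMA(y, order=(1, 1, 1)).fit()
linear_fit = arima.fittedvalues
residuals = y - linear_fit                      # what is left for the nonlinear model

# Phase 2: an MLP models the nonlinear structure from a window of lagged residuals.
lags = 4
Xr = np.array([residuals[i - lags:i] for i in range(lags, len(residuals))])
yr = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(Xr, yr)

# One-step-ahead hybrid forecast: linear forecast + predicted nonlinear residual.
linear_forecast = arima.forecast(steps=1)[0]
nonlinear_forecast = mlp.predict(residuals[-lags:].reshape(1, -1))[0]
print("hybrid forecast:", linear_forecast + nonlinear_forecast)
```

The design choice is the classic one: the ARIMA stage is trusted for whatever linear correlation it can model, and the neural network is only asked to explain the structure left in the residuals.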

Findings

In this paper, a new Kalman filter-based hybrid artificial neural network and ARIMA model is proposed as an alternative forecasting technique to the traditional hybrid ARIMA/ANN models. In the proposed model, as in the traditional hybrid ARIMA/ANN models, the unique strengths of ARIMA and ANN in linear and nonlinear modeling are jointly used, aiming to capture different forms of relationship in the data, especially in complex problems that have both linear and nonlinear correlation structures. However, such assumptions are not made in the modeling process of the proposed model. Therefore, in the proposed model, in contrast to the traditional hybrid ARIMA/ANNs, it can generally be guaranteed that the performance will not be worse than that of either component used separately. In addition, empirical results for both weekly and daily exchange rate forecasting indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by traditional hybrid ARIMA/ANN models.

Originality/value

In the proposed model, in contrast to the traditional hybrid ARIMA/ANNs, it can be guaranteed that the performance of the proposed model will not be worse than that of either component used separately. In addition, empirical results in exchange rate forecasting indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by traditional hybrid ARIMA/ANN models. Therefore, it can be used as an appropriate alternative model for forecasting in exchange rate markets, especially when higher forecasting accuracy is needed.

Article
Publication date: 8 June 2022

Chenguang Wang, Zixin Hu and Zongke Bao

Entrepreneurship, as a development engine, plays a distinct role in the economic growth of countries. Therefore, governments must support entrepreneurship in order to succeed in…

Abstract

Purpose

Entrepreneurship, as a development engine, plays a distinct role in the economic growth of countries. Therefore, governments must support entrepreneurship in order to succeed in the future. The best way to improve the performance of this entrepreneurial support is through efficient measurement methods. For this reason, the purpose of this paper is to propose a new integrated dynamic multi-attribute decision-making (MADM) model based on the neutrosophic set (NS) for the assessment of government entrepreneurship support.

Design/methodology/approach

Due to the nature of entrepreneurship issues, which are multifaceted and full of uncertain, indeterminate and ambiguous dimensions, this measurement requires multi-criteria decision-making methods that operate in spaces of uncertainty and indeterminacy. Also, because the values of the indicators change across different periods, researchers need a special type of decision model that can handle the dynamics of the indicators. So, in this paper, the authors propose a dynamic neutrosophic weighted geometric operator to aggregate dynamic neutrosophic information. Furthermore, in view of the deficiencies of current dynamic neutrosophic MADM methods, a compromise model based on time degrees is proposed. The principle of time degrees is introduced, the subjective and objective weighting methods are synthesized based on the proposed aggregation operator, and a nonlinear programming problem based on the entropy concept is applied to determine the attribute weights under different time sequences.
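One common form of a weighted geometric operator over single-valued neutrosophic numbers (T, I, F) is sketched below as an illustration of aggregating information collected at different periods. The operator definition, the time-degree weights and the toy ratings are assumptions and may differ from the paper's exact formulation.

```python
import numpy as np

def neutrosophic_weighted_geometric(values, weights):
    """Aggregate single-valued neutrosophic numbers (T, I, F) collected at
    different periods with a weighted geometric operator; `weights` play the
    role of time-degree weights. One common textbook form, assumed here."""
    values = np.asarray(values, dtype=float)      # shape (periods, 3)
    w = np.asarray(weights, dtype=float)
    T = np.prod(values[:, 0] ** w)
    I = 1.0 - np.prod((1.0 - values[:, 1]) ** w)
    F = 1.0 - np.prod((1.0 - values[:, 2]) ** w)
    return T, I, F

# Toy usage: one country's rating on "government policies" over four years,
# with more weight placed on recent periods.
ratings = [(0.6, 0.3, 0.2), (0.7, 0.2, 0.2), (0.7, 0.2, 0.1), (0.8, 0.1, 0.1)]
time_weights = [0.1, 0.2, 0.3, 0.4]
print("aggregated (T, I, F):", neutrosophic_weighted_geometric(ratings, time_weights))
```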

Findings

Information on ten countries, with indicators such as connections (C), the country's level of education and experience (EE), cultural aspects (CA), government policies (GP) and funding (F), was gathered over four years, and the proposed dynamic MADM model was applied to assess the level of entrepreneurial support in these countries. The findings demonstrate the flexibility of the model with respect to decision-making attitudes and show that the weights of the criteria have a considerable impact on the final evaluations.

Originality/value

In many decision areas, the original decision information is usually collected at different periods, so it is necessary to develop approaches that can deal with such information. In the government entrepreneurship support problem, researchers need tools to handle the dynamics of indicators in neutrosophic environments. Although this issue is very important, as far as is known, few studies have been done in this area. Furthermore, in view of the deficiencies of current dynamic neutrosophic MADM methods, a compromise model based on time degrees is proposed. Moreover, the presented neutrosophic aggregation operator is very suitable for aggregating neutrosophic information collected at different periods. The developed approach can solve several problems in which all pieces of decision information take the form of neutrosophic information collected at different periods.

Article
Publication date: 31 May 2019

Sanjay Kumar, Abid Haleem and Sushil

The purpose of this paper is to provide a framework for assessing the overall innovativeness of manufacturing firms using a multi-attribute group decision-making methodology.

Abstract

Purpose

The purpose of this paper is to provide a framework for assessing the overall innovativeness of manufacturing firms using a multi-attribute group decision-making methodology.

Design/methodology/approach

This study identifies the indicators of firms' innovativeness from the literature. The concept of neutrosophic numbers is used to assign different importance weights to individual decision makers to account for the differences in their educational backgrounds and practical experience. An intuitionistic fuzzy-based TOPSIS procedure is adapted for ranking the candidate firms based on their performance on the identified criteria. The implementation of the proposed methodology is demonstrated through an explanatory example. Sensitivity analysis is carried out to judge the robustness of the proposed framework.
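For illustration, the sketch below runs a plain TOPSIS ranking over a crisp decision matrix. The intuitionistic fuzzy arithmetic and the neutrosophic decision-maker weights used in the paper are abstracted away, and the firm scores and criteria weights are made up.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Plain TOPSIS over a crisp decision matrix (firms x criteria). The paper
    works with intuitionistic fuzzy values; here the scores are assumed to be
    already aggregated into crisp numbers for brevity."""
    M = np.asarray(matrix, dtype=float)
    norm = M / np.sqrt((M ** 2).sum(axis=0))              # vector normalisation
    V = norm * np.asarray(weights)                        # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                        # closeness coefficient

# Toy usage: three firms rated on four innovativeness criteria (all benefit-type).
scores = [[7, 6, 8, 5], [6, 8, 7, 7], [8, 7, 6, 6]]
weights = [0.3, 0.3, 0.2, 0.2]
cc = topsis(scores, weights, benefit=np.array([True, True, True, True]))
print("ranking (best first):", np.argsort(-cc) + 1, "closeness:", np.round(cc, 3))
```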

Findings

The proposed framework provides an efficient and reliable tool for subjectively evaluating and comparing the innovativeness of manufacturing firms. The sensitivity analysis shows that the methodology is robust enough to absorb noise factors, errors and variations.

Research limitations/implications

Motivated by this work, future studies can consider developing an integrated innovativeness index for evaluating the innovativeness of manufacturing firms. The concepts of interval-valued intuitionistic fuzzy and neutrosophic sets can be utilized to reduce the margin of perceptual errors even further.

Practical implications

The study will provide the firms with a framework for benchmarking their innovative performance. The firms can analyze their current performance and reconfigure their resources and capabilities suitably to improve their competitive position.

Originality/value

This study is one of the few attempts that have been made to articulate a firm-level innovativeness assessment tool for manufacturing firms operating in an industry sector. Advanced concepts of fuzzy and neutrosophic sets have been utilized to eliminate the chance of bias and perceptual errors that most often affect the quality of decisions in today's dynamic and uncertain decision-making environment.

Details

Benchmarking: An International Journal, vol. 26 no. 6
Type: Research Article
ISSN: 1463-5771

