Search results

1 – 10 of over 1000
Book part
Publication date: 13 December 2017

Qiongwei Ye and Baojun Ma

Abstract

Internet + and Electronic Business in China is a comprehensive resource that provides insight and analysis into E-commerce in China and how it has revolutionized, and continues to revolutionize, business and society. Split into four distinct sections, the book first lays out the theoretical foundations and fundamental concepts of E-business before moving on to Internet+ innovation models and their applications in industries such as agriculture, finance and commerce. The book then provides a comprehensive analysis of E-business platforms and their applications in China before finishing with four case studies of major E-business projects, giving readers successful examples of implementing E-business entrepreneurship projects.

Details

Internet+ and Electronic Business in China: Innovation and Applications
Type: Book
ISBN: 978-1-78743-115-7

Article
Publication date: 2 October 2009

Ioannis G. Mariolis and Evangelos S. Dermatas

Abstract

Purpose

The purpose of this paper is to provide a robust method for automatic detection of seam lines based only on digital images of the garments.

Design/methodology/approach

A local standard deviation pre-processing filter is applied to enhance the contrast between the seam line and the surrounding fabric texture, and the Prewitt operator extracts the edges of the enhanced image. The seam line is then detected by locating the maximum of the Radon transform of the edge image. The proposed method is invariant to illumination intensity, and it has also been tested with moving average and fast Fourier transform low-pass filters in the pre-processing module. Extensive experiments are carried out in the presence of additive Gaussian and uniform noise.
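
As a rough illustration of the pipeline described above, the sketch below chains a local standard deviation filter, the Prewitt operator and a Radon-transform maximum using SciPy and scikit-image. The window size and the angle grid are assumptions; this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import generic_filter
from skimage import filters
from skimage.transform import radon

def detect_seam_line(gray, window=5):
    """gray: 2-D float array of a garment image."""
    # Local standard deviation filter enhances the seam/texture contrast.
    local_std = generic_filter(gray, np.std, size=window)
    # Prewitt operator extracts the edges of the enhanced image.
    edges = filters.prewitt(local_std)
    # The dominant straight line corresponds to the maximum of the Radon transform.
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(edges, theta=theta, circle=False)
    rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return rho_idx, theta[theta_idx]  # projection offset index and line angle in degrees
```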

Findings

The proposed method detects 109 out of 118 seams when the local standard deviation filter is used at the pre-processing stage, giving a mean distance error between the real and the estimated line of 2 mm when the image is digitised at 97 dpi. However, when the images are distorted by additive Gaussian noise at a 20 dB signal-to-noise ratio, the moving average low-pass filtering method gives the best results, detecting the seam in 104 of the noisy images.

Research limitations/implications

The proposed method detects seam lines that can be approximated by a continuation of straight lines. The current work can be extended to the detection of the curved parts of seam lines.

Practical implications

Since the method addresses garments instead of seam specimens, the proposed approach can be incorporated into automatic systems for online quality control of seams.

Originality/value

Local standard deviation is a first-order statistic, which makes it suitable for texture analysis and is why it is mostly used in web defect detection. The novelty of the approach, however, is that by treating the seam as an abnormality of the texture, the authors apply that filter at the pre-processing stage to enhance the seam before detection. Moreover, the presented method is illumination invariant, a property that has not been addressed in similar methods.

Details

International Journal of Clothing Science and Technology, vol. 21 no. 5
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 13 August 2020

Chandra Sekhar Kolli and Uma Devi Tatavarthi

Abstract

Purpose

Fraud transaction detection has become a significant factor in communication technologies and electronic commerce systems, as it affects the use of electronic payment. Even though various fraud detection methods have been developed, enhancing the performance of electronic payment by detecting fraudsters remains a great challenge in bank transactions.

Design/methodology/approach

This paper aims to design a fraud detection mechanism using the proposed Harris water optimization-based deep recurrent neural network (HWO-based deep RNN). The proposed fraud detection strategy includes three phases, namely, pre-processing, feature selection and fraud detection. Initially, the input transactional data is subjected to the pre-processing phase, where the data is transformed using the Box-Cox transformation to remove redundant and noisy values. The pre-processed data is passed to the feature selection phase, where the essential and suitable features are selected using the wrapper model. The selected features enable the classifier to achieve better detection performance. Finally, the selected features are fed to the detection phase, where the deep recurrent neural network classifier performs fraud detection; the classifier is trained by the proposed Harris water optimization algorithm, which is the integration of water wave optimization and Harris hawks optimization.
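
To show the shape of such a pipeline, the sketch below covers only the first two phases with standard tools: a Box-Cox pre-processing step (SciPy) and a wrapper-style feature selector (scikit-learn). The HWO-trained deep RNN is the paper's contribution and is not reproduced; the logistic-regression wrapper and all variable names are assumptions for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

def preprocess_boxcox(X):
    """Column-wise Box-Cox transform; each column is shifted to be strictly positive."""
    Xt = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        col = X[:, j] - X[:, j].min() + 1.0
        Xt[:, j], _ = stats.boxcox(col)
    return Xt

def select_features_wrapper(X, y, n_features=10):
    """Wrapper-style selection: keep the subset that best serves a stand-in classifier."""
    base = LogisticRegression(max_iter=1000)
    selector = SequentialFeatureSelector(base, n_features_to_select=n_features)
    selector.fit(X, y)
    return selector.get_support(indices=True)
```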

Findings

The proposed HWO-based deep RNN obtained better performance in terms of accuracy, sensitivity and specificity, with values of 0.9192, 0.7642 and 0.9943, respectively.

Originality/value

An effective fraud detection method named HWO-based deep RNN is designed to detect frauds in bank transactions. The optimal features selected using the wrapper model enable the classifier to find fraudulent activities more efficiently. Moreover, the detection result is evaluated through the optimization model based on the fitness measure, such that the solution with the minimal error value is declared the best, as it yields better detection results.

Article
Publication date: 16 September 2021

Sireesha Jasti

Abstract

Purpose

The internet has undergone tremendous change with the advancement of new technologies. This change has led internet users to post comments regarding services and products. Sentiment classification is the process of analyzing these reviews to help users decide whether or not to purchase a product.

Design/methodology/approach

A rider feedback artificial tree optimization-enabled deep recurrent neural network (RFATO-enabled deep RNN) is developed for the effective classification of sentiments into various grades. The proposed RFATO algorithm is modeled by integrating the feedback artificial tree (FAT) algorithm into the rider optimization algorithm (ROA) and is used for training the deep RNN classifier that classifies sentiments in the review data. Pre-processing is performed by stemming and stop word removal to strip redundancy and allow smoother processing of the data. Features including sentiwordnet-based features, a variant of term frequency-inverse document frequency (TF-IDF) features and spam words-based features are extracted from the review data to form the feature vector. Feature fusion is performed based on the entropy of the extracted features. The metrics employed for the evaluation of the proposed RFATO algorithm are accuracy, sensitivity and specificity.
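
A minimal sketch of the pre-processing and TF-IDF step mentioned above is given below, using NLTK and scikit-learn; the sentiwordnet-based and spam-word features, the entropy-based fusion and the RFATO-trained deep RNN classifier are not reproduced, and the example reviews are invented.

```python
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))  # requires nltk.download("stopwords")

def preprocess(review):
    # Stop word removal and stemming strip redundancy before vectorising.
    tokens = review.lower().split()
    return " ".join(stemmer.stem(t) for t in tokens if t not in stop_words)

reviews = ["The product arrived late but works well",
           "Terrible service, never again"]  # invented example reviews
tfidf = TfidfVectorizer()
features = tfidf.fit_transform([preprocess(r) for r in reviews])  # sparse TF-IDF matrix
```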

Findings

By using the proposed RFATO algorithm, evaluation metrics such as accuracy, sensitivity and specificity are maximized compared to the existing algorithms.

Originality/value

The proposed RFATO algorithm is modeled by integrating the FAT algorithm into the ROA and is used for training the deep RNN classifier that classifies sentiments in the review data. Pre-processing is performed by stemming and stop word removal to strip redundancy and allow smoother processing of the data. Features including sentiwordnet-based features, a variant of TF-IDF features and spam words-based features are extracted from the review data to form the feature vector. Feature fusion is performed based on the entropy of the extracted features.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 10 February 2021

Sathies Kumar Thangarajan and Arun Chokkalingam

Abstract

Purpose

The purpose of this paper is to develop an efficient brain tumor detection model using the beneficial concept of hybrid classification on magnetic resonance imaging (MRI) images. Brain tumors are among the most common and destructive diseases, resulting in a very short life expectancy in their highest grade. Knowledge of, and rapid progress in, brain imaging technologies continues to play an essential role in evaluating and refining new insights into brain anatomy and function. Image processing is widely used in medical science to enhance the early diagnosis and treatment phases.

Design/methodology/approach

The proposed detection model involves five main phases, namely, image pre-processing, tumor segmentation, feature extraction, third-level discrete wavelet transform (DWT) extraction and detection. Initially, the input MRI image is subjected to pre-processing using steps called image scaling, entropy-based trilateral filtering and skull stripping. Image scaling is used to resize the image, and entropy-based trilateral filtering is used to remove noise from the digital image. Skull stripping is done by Otsu thresholding. After pre-processing, tumor segmentation is performed by the fuzzy centroid-based region growing algorithm. Once the tumor is segmented from the input MRI image, feature extraction is done, focusing on first-order and higher-order statistical measures. On the detection side, a hybrid classifier merging a neural network (NN) and a convolutional neural network (CNN) is adopted. Here, the NN takes the first-order and higher-order statistical measures as input, whereas the CNN takes the third-level DWT image as input. As an improvement, the number of hidden neurons of both the NN and the CNN is optimized by a novel meta-heuristic algorithm called Crossover Operated Rooster-based Chicken Swarm Optimization (COR-CSO). The AND operation on the outcomes obtained from the optimized NN and CNN categorizes the input image into two classes: normal and abnormal. Finally, a performance evaluation shows that the proposed model performs well against the existing models.
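
To make two of the named steps concrete, the sketch below shows Otsu-threshold skull stripping and a third-level 2-D discrete wavelet transform with scikit-image and PyWavelets. The fuzzy centroid-based region growing, the NN/CNN hybrid and the COR-CSO optimiser are the authors' contributions and are only described, not implemented, here; the Haar wavelet is an assumption.

```python
import pywt
from skimage import filters

def skull_strip_otsu(mri_slice):
    """Crude skull stripping: keep only pixels above the Otsu threshold."""
    threshold = filters.threshold_otsu(mri_slice)
    return mri_slice * (mri_slice > threshold)

def third_level_dwt(image, wavelet="haar"):
    """Level-3 2-D DWT; the approximation sub-band is what the CNN would take as input."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=3)
    return coeffs[0]
```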

Findings

From the experimental results, the accuracy of the suggested COR-CSO-NN + CNN was found to be 18% superior to the support vector machine, 11.3% superior to NN, 22.9% superior to the deep belief network, 15.6% superior to CNN, 13.4% superior to NN + CNN, 11.3% superior to particle swarm optimization-NN + CNN, 9.2% superior to grey wolf optimization-NN + CNN, 5.3% superior to whale optimization algorithm-NN + CNN and 3.5% superior to CSO-NN + CNN. It was therefore concluded that the suggested model is superior in detecting brain tumors effectively using MRI images.

Originality/value

This paper adopts the latest optimization algorithm called COR-CSO to detect brain tumors using NN and CNN. This is the first study that uses COR-CSO-based optimization for accurate brain tumor detection.

Article
Publication date: 2 July 2020

N. Venkata Sailaja, L. Padmasree and N. Mangathayaru

Abstract

Purpose

Text mining has been used for various knowledge discovery-based applications, and thus a lot of research has been devoted to it. The latest trend in text mining research is the adoption of incremental learning, as it is economical when dealing with large volumes of information.

Design/methodology/approach

The primary intention of this research is to design and develop a technique for incremental text categorization using an optimized Support Vector Neural Network (SVNN). The proposed technique involves four major steps: pre-processing, feature extraction, feature selection and classification. Initially, the data is pre-processed based on stop word removal and stemming. Then, feature extraction is done by extracting semantic word-based features and Term Frequency and Inverse Document Frequency (TF-IDF) features. From the extracted features, the important features are selected using the Bhattacharya distance measure and provided as input to the proposed classifier. The proposed classifier performs incremental learning using SVNN, wherein the weights are bounded within a limit using rough set theory. Moreover, for the optimal selection of weights in SVNN, the Moth Search (MS) algorithm is used. Thus, the proposed classifier, named Rough set MS-SVNN, performs text categorization for the incremental data given as input.
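
As an illustration of the Bhattacharya-distance selection step, the sketch below ranks features by the Bhattacharyya distance between the two classes, assuming Gaussian class-conditional feature distributions, and keeps those above a threshold. The threshold value is an assumption, and the rough-set-bounded SVNN and Moth Search training are not reproduced.

```python
import numpy as np

def bhattacharyya_distance(x_pos, x_neg):
    """Bhattacharyya distance between two univariate Gaussian feature distributions."""
    m1, m2 = x_pos.mean(), x_neg.mean()
    v1, v2 = x_pos.var() + 1e-12, x_neg.var() + 1e-12
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def select_features(X, y, min_distance=0.1):
    """Keep feature columns whose class separation exceeds the distance threshold."""
    return [j for j in range(X.shape[1])
            if bhattacharyya_distance(X[y == 1, j], X[y == 0, j]) >= min_distance]
```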

Findings

For the experimentation, the 20 Newsgroups dataset and the Reuters dataset are used. Simulation results indicate that the proposed Rough set-based MS-SVNN has achieved 0.7743, 0.7774 and 0.7745 for precision, recall and F-measure, respectively.

Originality/value

In this paper, an online incremental learner is developed for text categorization. The text categorization is done by developing the Rough set MS-SVNN classifier, which classifies the incoming texts based on the boundary condition evaluated by rough set theory and the optimal weights from MS. The proposed online text categorization scheme has the basic steps of pre-processing, feature extraction, feature selection and classification. Pre-processing is carried out to identify the unique words in the dataset, and features such as semantic word-based features and TF-IDF are obtained from the keyword set. Feature selection is done by setting a minimum Bhattacharya distance measure, and the selected features are provided to the proposed Rough set MS-SVNN for classification.

Details

Data Technologies and Applications, vol. 54 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 18 April 2017

Jasgurpreet Singh Chohan and Rupinder Singh

Abstract

Purpose

The purpose of this paper is to review the various pre-processing and post-processing approaches used to ameliorate the surface characteristics of fused deposition modelling (FDM)-based acrylonitrile butadiene styrene (ABS) prototypes. FDM, being a simple and versatile additive manufacturing technique, has the potential to meet the present need for tailor-made, cost-effective products with low cycle times. However, poor surface finish and dimensional accuracy are the primary hurdles to the implementation of FDM for rapid casting and tooling applications.

Design/methodology/approach

The consequences and scope of FDM pre-processing and post-processing parameters have been studied independently. The comprehensive study covers the dominance, limitations, validity and reach of the various techniques adopted to improve the surface characteristics of ABS parts. Replicas of a hip implant are fabricated using the optimum pre-processing parameters identified in the review, and a case study is carried out to evaluate the capability of the vapour smoothing process to enhance surface finish.

Findings

The pre-processing techniques are quite deficient when different geometries must be manufactured within a limited time and to a required range of surface finish and accuracy. The post-processing surface-finishing techniques, although effective, disturb the dimensional stability and mechanical strength of the parts and thus make them unsuitable for specific applications. The major challenge for FDM is the development of precise, automatic and controlled mass-finishing techniques with low cost and time.

Research limitations/implications

The research assessed the feasibility of the vapour smoothing technique for surface finishing, which can produce consistent castings of customized implants at low cost and with shorter lead times.

Originality/value

The extensive research regarding the surface finish and dimensional accuracy of FDM parts has been collected, and the inferences drawn from this study have been used to fabricate replicas to further examine the advanced finishing technique of vapour smoothing.

Details

Rapid Prototyping Journal, vol. 23 no. 3
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 19 April 2011

S.K. Bag, P.P. Srivastav and H.N. Mishra

Abstract

Purpose

The purpose of this paper is to develop FT‐NIR technique for determination of moisture content in bael pulp.

Design/methodology/approach

Calibration and validation sets were designed for the development and evaluation of the method's adequacy over a moisture content range of 70 to 95 per cent (wb). The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared region (4,000-2,500 cm-1). Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE) and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction).
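
A minimal sketch of this calibration idea, assuming scikit-learn's PLSRegression and simple per-spectrum min-max scaling, is shown below. The number of PLS factors, the variable names and the use of one data set for both fitting and scoring are assumptions for illustration only; the paper uses separate calibration and validation sets with RMSECV and RMSEE.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

def min_max_normalise(spectra):
    """Scale each spectrum to the [0, 1] range (min-max normalisation)."""
    mins = spectra.min(axis=1, keepdims=True)
    maxs = spectra.max(axis=1, keepdims=True)
    return (spectra - mins) / (maxs - mins)

def calibrate(X, y, n_factors=5):
    """X: absorbance spectra (n_samples x n_points); y: moisture content (% wb)."""
    pls = PLSRegression(n_components=n_factors)
    pls.fit(min_max_normalise(X), y)
    predictions = pls.predict(min_max_normalise(X)).ravel()
    return pls, r2_score(y, predictions)
```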

Findings

The best calibration model was developed with min-max normalization (MMN) spectral pre-processing; the MMN pre-processing method was found most suitable, and the maximum coefficient of determination (R2) value of 0.993 was obtained for the calibration model developed. The results indicated that FT-NIR spectroscopy could be used for rapid detection of moisture content in bael pulp samples without any sample destruction.

Originality/value

The research in this paper is useful for the quick detection of moisture content of bael fruit pulp during processing.

Details

British Food Journal, vol. 113 no. 4
Type: Research Article
ISSN: 0007-070X

Article
Publication date: 27 March 2009

Ntogas Nikolaos and Ventzas Dimitrios

Abstract

Purpose

The purpose of this paper is to introduce an innovative procedure for binarizing digital images of historical documents, based on image pre-processing and image condition classification. The estimated results for each class of images and each method show improved image quality for the six categories of document images, each described by its separate characteristics.

Design/methodology/approach

The applied technique consists of five stages: text image acquisition, image preparation, denoising, image type classification into six categories according to image condition, and image thresholding with final refinement. This is a very effective approach to binarizing document images. The results achieved by the authors' method require minimal pre-processing steps for the best image quality and increased text readability. This methodology performs better than current state-of-the-art adaptive thresholding techniques.
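
The sketch below illustrates only the generic denoise-then-threshold core of such a procedure, using OpenCV with a median filter and Otsu thresholding. The six-category image condition classification and the final refinement stage are specific to the authors' method and are omitted, and the filter size is an assumption.

```python
import cv2

def binarise_document(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.medianBlur(gray, 3)  # simple denoising stage
    # Global Otsu thresholding; an adaptive method could replace this step
    # depending on the document's condition class.
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```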

Findings

An innovative procedure for binarizing digital images of historical documents is presented, based on image pre-processing, image type classification into categories according to image condition and further enhancement. This methodology is robust and simple, with minimal pre-processing steps for the best image quality and increased text readability, and it performs better than available thresholding techniques.

Research limitations/implications

The technique consists of a limited number of optimized sequential pre-processing steps. Attention should be given to document image preparation and denoising, and to image condition classification for thresholding and refinement, since poor results in a single stage corrupt the final document image quality and text readability.

Originality/value

The paper contributes to the binarization of digital text images by suggesting a procedure based on image preparation, image type classification, thresholding and image refinement, with applicability to Byzantine historical documents.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 October 2012

Samuel Forsman, Niclas Björngrim, Anders Bystedt, Lars Laitila, Peter Bomark and Micael Öhman

Abstract

Purpose

The construction industry has been criticized for not keeping up with other production industries in terms of cost efficiency, innovation, and production methods. The purpose of this paper is to contribute to the knowledge about what hampers efficiency in supplying engineer‐to‐order (ETO) joinery‐products to the construction process. The objective is to identify the main contributors to inefficiency and to define areas for innovation in improving this industry.

Design/methodology/approach

Case studies of the supply chain of a Swedish ETO joinery‐products supplier are carried out, and observations, semi‐structured interviews, and documents from these cases are analysed from an efficiency improvement perspective.

Findings

From a lean thinking and information modelling perspective, longer‐term procurement relations and efficient communication of information are the main areas of innovation for enhancing the efficiency of supplying ETO joinery‐products. It seems to be possible to make improvements in planning and coordination, assembly information, and spatial measuring through information modelling and spatial scanning technology. This is likely to result in an increased level of prefabrication, decreased assembly time, and increased predictability of on‐site work.

Originality/value

The role of supplying ETO joinery‐products is a novel research area in construction. There is a need to develop each segment of the manufacturing industry supplying construction and this paper contributes to the collective knowledge in this area. The focus is on the possibilities for innovation in the ETO joinery‐products industry and on its improved integration in the construction industry value chain in general.
