Search results

21 – 30 of over 12000
Article
Publication date: 5 October 2015

Oduetse Matsebe, Khumbulani Mpofu, John Terhile Agee and Sesan Peter Ayodeji


Abstract

Purpose

The purpose of this paper is to present a method to extract corner features for map building purposes in man-made structured underwater environments using the sliding-window technique.

Design/methodology/approach

The sliding-window technique is used to extract corner features, and Mechanically Scanned Imaging Sonar (MSIS) is used to scan the environment for map building purposes. The tests were performed with real data collected in a swimming pool.

Findings

The change in application environment and the use of MSIS present some important differences that must be taken into account when dealing with acoustic data. These include motion-induced distortions, continuous data flow, low scan frequency and high noise levels. Only part of the data stored in each scan sector is important for feature extraction; therefore, a segmentation process is necessary to extract the more significant information. To deal with the continuous flow of data, the data must be separated into 360° scan sectors. Although the vehicle is assumed to be static, there is drift in both its rotational and translational motions because of currents in the water; these drifts induce distortions in the acoustic images. Therefore, the bearing information and the current vehicle pose corresponding to the selected scan-lines must be stored and used to compensate for motion-induced distortions in the acoustic images. As the data received are very noisy, an averaging filter should be applied to achieve an even distribution of data points, although this is partly achieved through the segmentation process. Within the selected sliding window, all point pairs must pass the distance and angle tests before a corner can be initialised. This minimises the mapping of outlier data points but can make the algorithm computationally expensive if the selected window is too wide. The results show the viability of this procedure under very noisy data. The technique has been applied to 50 data sets/scan sectors with a success rate of 83 per cent.
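The per-window corner test (distance and angle checks on point pairs) can be sketched roughly as follows. The thresholds and the crude end-point line fit are illustrative assumptions, not the paper's actual parameters:

```python
import math

def corner_candidate(window, d_min=0.1, angle_thresh_deg=30.0):
    """Check whether the point pairs in a sliding window pass the
    distance and angle tests before a corner is initialised.
    `window` is a list of (x, y) points from one segmented scan sector."""
    mid = len(window) // 2
    left, right = window[:mid], window[mid + 1:]

    def direction(seg):
        # Crude line direction: join the segment's end points.
        (x0, y0), (x1, y1) = seg[0], seg[-1]
        return math.atan2(y1 - y0, x1 - x0)

    # Distance test: paired points must be far enough apart to define lines.
    for (xa, ya), (xb, yb) in zip(left, right):
        if math.hypot(xb - xa, yb - ya) < d_min:
            return False
    # Angle test: the two half-window directions must differ enough to
    # indicate a genuine corner rather than a straight wall.
    dtheta = abs(direction(left) - direction(right))
    dtheta = min(dtheta, 2 * math.pi - dtheta)
    return math.degrees(dtheta) > angle_thresh_deg

# Right-angle corner: points run along +x, then turn along +y.
corner = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
wall = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 0)]
```

On these toy points the corner window passes both tests while the straight wall fails the angle test, which is the behaviour that keeps outliers and flat walls from being mapped as corners.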

Research limitations/implications

MSIS produces very noisy data, and only limited sensing modalities are available for underwater applications.

Practical implications

The extraction of corner features in structured man-made underwater environments opens the door for SLAM systems to a wide range of applications and environments.

Originality/value

A method to extract corner features for map building purposes in man-made structured underwater environments is presented using the sliding-window technique.

Details

Journal of Engineering, Design and Technology, vol. 13 no. 4
Type: Research Article
ISSN: 1726-0531


Article
Publication date: 6 April 2012

Chengzhi Zhang and Dan Wu


Abstract

Purpose

Terminology is the set of technical words or expressions used in specific contexts; it denotes the core concepts of a formal discipline and is widely applied in machine translation, information retrieval, information extraction, text categorization and related fields. Bilingual terminology extraction plays an important role in bilingual dictionary compilation, bilingual ontology construction, machine translation and cross-language information retrieval. This paper aims to address the issues of monolingual terminology extraction and bilingual term alignment based on multi-level termhood.

Design/methodology/approach

A method based on multi-level termhood is proposed. The new method computes the termhood of both the candidate term and the sentence containing it by comparing corpora. Since terminologies and general words usually have different distributions across corpora, termhood can also be used to constrain and enhance the performance of term alignment on a parallel corpus. In this paper, bilingual term alignment based on termhood constraints is presented.
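The corpus-comparison idea behind termhood can be sketched as a smoothed log-ratio of a candidate's relative frequency in a domain corpus versus a general background corpus. This is a simplified stand-in for the paper's multi-level measure, and the toy corpora below are invented:

```python
import math
from collections import Counter

def termhood(term, domain_counts, background_counts):
    """Candidate-level termhood as a smoothed log-ratio of relative
    frequencies: high when a word is characteristic of the domain
    corpus, near or below zero for general-language words."""
    d_total = sum(domain_counts.values())
    b_total = sum(background_counts.values())
    p_d = (domain_counts[term] + 1) / (d_total + 1)   # add-one smoothing
    p_b = (background_counts[term] + 1) / (b_total + 1)
    return math.log(p_d / p_b)

# Invented counts: a small domain corpus vs. a general background corpus.
domain = Counter({"ontology": 30, "alignment": 20, "the": 200})
background = Counter({"ontology": 1, "alignment": 2, "the": 5000})

scores = {w: termhood(w, domain, background) for w in ("ontology", "the")}
```

A domain term such as "ontology" scores well above zero while a function word such as "the" scores below it, which is the separation the termhood constraint exploits during alignment.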

Findings

Experimental results show that multi-level termhood achieves better performance than the existing method for terminology extraction, and that using termhood as a constraining factor improves the performance of bilingual term alignment.

Originality/value

The termhood of the candidate terminology and the sentence that includes the terminology is used for terminology extraction, which is called multi‐level termhood. Multi‐level termhood is computed by the comparison of the corpus. Bilingual term alignment method based on termhood constraint is put forward and termhood is used in the task of bilingual terminology extraction. Experimental results show that termhood constraints can improve the performance of terminology alignment to some extent.

Article
Publication date: 15 August 2016

Ioana Barbantan, Mihaela Porumb, Camelia Lemnaru and Rodica Potolea


Abstract

Purpose

Improving healthcare services by developing assistive technologies involves both the health-aid devices and the analysis of the data they collect. The acquired data, modeled as a knowledge base, give more insight into each patient's health status and needs. Therefore, the ultimate goal of a healthcare system is to obtain recommendations from an assistive decision support system built on such a knowledge base, benefiting the patients, the physicians and the healthcare industry. This paper aims to define the knowledge flow for a medical assistive decision support system by structuring raw medical data and leveraging the knowledge it contains, proposing solutions for efficient data search, medical investigation or diagnosis, medication prediction and relationship identification.

Design/methodology/approach

The solution this paper proposes for implementing a medical assistive decision support system can analyze any type of unstructured medical documents which are processed by applying Natural Language Processing (NLP) tasks followed by semantic analysis, leading to the medical concept identification, thus imposing a structure on the input documents. The structured information is filtered and classified such that custom decisions regarding patients’ health status can be made. The current research focuses on identifying the relationships between medical concepts as defined by the REMed (Relation Extraction from Medical documents) solution that aims at finding the patterns that lead to the classification of concept pairs into concept-to-concept relations.

Findings

This paper proposed the REMed solution, expressed as a multi-class classification problem tackled with a support vector machine classifier. Experimentally, the most appropriate setup for the multi-class classification problem was determined to be a combination of lexical, context, syntactic and grammatical features, as each feature category is good at representing particular relations but not all. The best results obtained are an F1-measure of 74.9 per cent, which is 1.4 per cent better than the results reported by similar systems.
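The concept-pair relation setup can be illustrated with a toy sketch: invented lexical/context features over the words between two concepts, and a nearest-centroid classifier standing in for the paper's SVM. The sentences are fabricated, and the TrCP (treatment-causes-problem) label is assumed here for contrast with TrIP:

```python
from collections import defaultdict

def featurize(sentence, c1, c2):
    """Toy lexical/context features for a concept pair: a crude stand-in
    for REMed's lexical, context, syntactic and grammatical groups."""
    tokens = sentence.lower().split()
    between = tokens[tokens.index(c1) + 1 : tokens.index(c2)]
    return {
        "n_between": len(between),
        "has_improve": int(any(t.startswith("improv") for t in between)),
        "has_cause": int(any(t.startswith("caus") for t in between)),
    }

def train_centroids(examples):
    # Average the feature vectors of each relation class.
    sums, counts = defaultdict(lambda: defaultdict(float)), defaultdict(int)
    for feats, label in examples:
        counts[label] += 1
        for k, v in feats.items():
            sums[label][k] += v
    return {lbl: {k: v / counts[lbl] for k, v in fs.items()}
            for lbl, fs in sums.items()}

def classify(feats, centroids):
    def dist(a, b):
        keys = set(a) | set(b)
        return sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys)
    return min(centroids, key=lambda lbl: dist(feats, centroids[lbl]))

train = [
    (featurize("aspirin improves the headache", "aspirin", "headache"), "TrIP"),
    (featurize("aspirin clearly improves migraine", "aspirin", "migraine"), "TrIP"),
    (featurize("aspirin causes mild nausea", "aspirin", "nausea"), "TrCP"),
]
centroids = train_centroids(train)
pred = classify(featurize("ibuprofen improves the pain", "ibuprofen", "pain"),
                centroids)
```

The point of the sketch is only the pipeline shape: features computed per concept pair, one label per pair, multi-class decision at the end.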

Research limitations/implications

The difficulty in discriminating between TrIP and TrAP relations stems from the hierarchical relationship between the two classes, as TrIP is a particular type (an instance) of TrAP. The intuition behind this behavior is that the classifier cannot discern the correct relations because of bias toward the majority classes. The analysis was conducted using only sentences from electronic health records that contain at least two medical concepts. This limitation was imposed by the availability of annotated data with reported results, as relations were defined at the sentence level.

Originality/value

The originality of the proposed solution lies in the methodology to extract valuable information from the medical records via semantic searches; concept-to-concept relation identification; and recommendations for diagnosis, treatment and further investigations. The REMed solution introduces a learning-based approach for the automatic discovery of relations between medical concepts. We propose an original list of features: lexical – 3, context – 6, grammatical – 4 and syntactic – 4. The similarity feature introduced in this paper has a significant influence on the classification, and, to the best of the authors’ knowledge, it has not been used as feature in similar solutions.

Details

International Journal of Web Information Systems, vol. 12 no. 3
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 24 June 2021

Yan Wan, Ziqing Peng, Yalu Wang, Yifan Zhang, Jinping Gao and Baojun Ma


Abstract

Purpose

This paper aims to reveal the factors patients consider when choosing a doctor for consultation on an online medical consultation (OMC) platform and how these factors influence doctors' consultation volumes.

Design/methodology/approach

In Study 1, influencing factors reflected as service features were identified by applying a feature extraction method to physician reviews, and the importance of each feature was determined based on word frequencies and the PageRank algorithm. Sentiment analysis was used to analyze patient satisfaction with each service feature. In Study 2, regression models were used to analyze the relationships between the service features obtained in Study 1 and doctors' consultation volumes.
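The feature-importance step can be sketched with a plain power-iteration PageRank over a feature co-occurrence graph. The graph below is invented, and the damping factor is the conventional 0.85 rather than a value from the paper:

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over a graph given as {node: set of
    neighbours}. Features that co-occur with many others in reviews
    accumulate rank and are treated as more important."""
    nodes = list(adj)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(adj[m]) for m in nodes if n in adj[m])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

# Invented co-occurrence graph: "trust" co-occurs with several features.
adj = {
    "trust": {"phraseology", "experience", "word_of_mouth"},
    "phraseology": {"trust"},
    "experience": {"trust"},
    "word_of_mouth": {"trust"},
}
ranks = pagerank(adj)
```

In this toy graph "trust" ends up with the highest rank, mirroring the study's finding that trust is the feature patients care about most.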

Findings

The study identified 14 service features of patients' concerns and found that patients mostly care about features such as trust, phraseology, overall service experience, word of mouth and personality traits, all of which describe a doctor's soft skills. These service features affect patients' trust in doctors, which, in turn, affects doctors' consultation volumes.

Originality/value

This research is important as it informs doctors about the features they should improve, to increase their consultation volume on OMC platforms. Furthermore, it not only enriches current trust-related research in the field of OMC, which has a certain reference significance for subsequent research on establishing trust in online doctor–patient relationships, but it also provides a reference for research concerning the antecedents of trust in general.

Article
Publication date: 1 January 1992

Nanua Singh and Dengzhou Qi


Abstract

As most existing computer‐aided design systems do not provide part feature information which is essential for process planning, automatic part feature recognition systems serve as an important link between Computer Aided Design (CAD) and Computer Aided Process Planning (CAPP). Attempts to provide a structural framework for understanding various issues related to part feature recognition. Reviews previous work in the field of part feature recognition and classifies known feature recognition systems for the sake of updating information and future research. Briefly introduces about 12 systems. Studies 31 systems and lists them in the Appendix based on 60 references. Comments on future research directions.

Details

Integrated Manufacturing Systems, vol. 3 no. 1
Type: Research Article
ISSN: 0957-6061


Article
Publication date: 9 November 2021

Shilpa B L and Shambhavi B R


Abstract

Purpose

Stock market forecasters focus on developing effective approaches for predicting stock prices. The fundamental principle of effective stock market prediction is not only to produce the best possible outcomes but also to reduce unreliable stock price estimates. In the stock market, sentiment analysis enables people to make informed decisions about investing in a business, and stock analysis characterizes the business of an organization or company. The prediction of stock prices is complex owing to their highly volatile nature, which depends on a wide range of investor sentiment, economic and political factors, changes in leadership and other factors. Prediction often becomes ineffective when only historical data or textual information is considered. Attempts are made here to make the prediction more precise by combining news sentiment with stock price information.

Design/methodology/approach

This paper introduces a prediction framework based on sentiment analysis that considers both stock data and news sentiment data. From the stock data, technical indicator-based features such as moving average convergence divergence (MACD), relative strength index (RSI) and moving average (MA) are extracted. The news data are processed in four stages: (1) pre-processing; (2) keyword extraction, using WordNet; (3) feature extraction, where the proposed holoentropy-based features are computed; and (4) classification, where a deep neural network (NN) returns the sentiment output. To make the sentiment prediction more accurate, the NN is trained with a self-improved whale optimization algorithm (SIWOA). Finally, an optimized deep belief network (DBN) predicts the stock price from the stock-data features and the sentiment results derived from the news data; the weights of the DBN are also tuned by the new SIWOA.
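The technical indicators named above can be computed directly from a price series. A minimal sketch with the conventional parameter choices (12/26-period EMAs for MACD, 14 periods for RSI), which the paper may or may not use:

```python
def sma(prices, n):
    """Simple moving average over each window of n prices."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def ema(prices, n):
    """Exponential moving average seeded with the first price."""
    k = 2 / (n + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA (conventional 12/26 periods)."""
    return [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]

def rsi(prices, n=14):
    """RSI from simple average gains/losses over the first n moves
    (a simplified variant of Wilder's smoothing)."""
    gains = [max(b - a, 0) for a, b in zip(prices, prices[1:])]
    losses = [max(a - b, 0) for a, b in zip(prices, prices[1:])]
    avg_gain = sum(gains[:n]) / n
    avg_loss = sum(losses[:n]) / n
    if avg_loss == 0:
        return 100.0   # no losing periods: RSI saturates at 100
    return 100 - 100 / (1 + avg_gain / avg_loss)

prices = [float(100 + i) for i in range(30)]   # a steadily rising toy series
```

On a monotonically rising series the MACD line is positive (the fast EMA sits above the slow one) and RSI saturates at 100, which is a quick sanity check for any indicator implementation.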

Findings

The performance of the adopted scheme is compared with existing models in terms of several measures. The stock dataset covers two companies, Reliance Communications and Relaxo Footwear, each with three datasets: (a) daily data from 1-1-2019 to 1-12-2020, (b) monthly data from Jan 2000 to Dec 2020 and (c) yearly data from 2000 onwards. The adopted NN + DBN + SIWOA model was compared with traditional classifiers (LSTM, NN + RF, NN + MLP and NN + SVM) and with existing optimization algorithms (NN + DBN + MFO, NN + DBN + CSA, NN + DBN + WOA and NN + DBN + PSO). Performance was calculated at learning percentages of 60, 70, 80 and 90 in terms of MAE, MSE and RMSE for the six datasets. The MAE of the adopted NN + DBN + SIWOA model was 91.67, 80, 91.11 and 93.33% superior to LSTM, NN + RF, NN + MLP and NN + SVM, respectively, for dataset 1. The proposed NN + DBN + SIWOA method holds a minimum MAE value of (∼0.21) at a learning percentage of 80 for dataset 1, whereas the traditional models hold NN + DBN + CSA (∼1.20), NN + DBN + MFO (∼1.21), NN + DBN + PSO (∼0.23) and NN + DBN + WOA (∼0.25), respectively. The RMSE of the proposed NN + DBN + SIWOA model was 3.14, 1.08, 1.38 and 15.28% better than LSTM, NN + RF, NN + MLP and NN + SVM, respectively, for dataset 6. In addition, the MSE of the adopted NN + DBN + SIWOA method attains a lower value (∼54944.41) for dataset 2 than other existing schemes: NN + DBN + CSA (∼9.43), NN + DBN + MFO (∼56728.68), NN + DBN + PSO (∼2.95) and NN + DBN + WOA (∼56767.88), respectively.

Originality/value

This paper has introduced a prediction framework based on sentiment analysis in which both stock data and news sentiment data are considered. From the stock data, technical indicator-based features such as MACD, RSI and MA are extracted. The proposed work is therefore well suited to stock market prediction.

Details

Kybernetes, vol. 52 no. 3
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 18 March 2021

Pandiaraj A., Sundar C. and Pavalarajan S.


Abstract

Purpose

Recent developments in sentiment analysis have produced a significant growth in the volume of research, especially on more subjective text types such as product or movie reviews. The key difference between these texts and news articles is that their target is defined and unique across the text. Hence, reviews of newspaper articles involve three subtasks: correctly spotting the target, separating the positive and negative content concerning that target and evaluating the different opinions expressed in a detailed manner. With these tasks defined, this paper aims to implement a new sentiment analysis model for reviews of newspaper articles.

Design/methodology/approach

Here, tweets about various newspaper articles are taken and the sentiment analysis process is carried out through pre-processing, semantic word extraction, feature extraction and classification. Initially, the pre-processing phase performs steps such as stop-word removal, stemming and blank-space removal, producing keywords that indicate positive, negative or neutral sentiment. Semantically similar words are then extracted from an available dictionary by matching the keywords. Next, features are extracted for the keywords and semantic words using holoentropy to obtain information statistics, yielding the maximum related information. Two categories of holoentropy features are extracted: joint holoentropy and cross holoentropy. These features are finally passed to a hybrid classifier that merges the beneficial concepts of a neural network (NN) and a deep belief network (DBN). To improve sentiment classification, a modified rider optimization algorithm (ROA), called new steering-updated ROA (NSU-ROA), is introduced into the NN and DBN for weight updating. The average of the two improved classifiers then yields the classified sentiment, positive, negative or neutral, from the reviews of newspaper articles.
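As a rough illustration of an entropy feature over keyword occurrences, the sketch below computes plain joint entropy of two indicator sequences. It omits the weighting term that distinguishes holoentropy from ordinary entropy, so it is a simplified stand-in, not the paper's formula:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a sequence of discrete values."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def joint_entropy(xs, ys):
    """Joint entropy H(X, Y) of two keyword-occurrence indicator
    sequences: a simplified stand-in for the joint holoentropy feature."""
    return entropy(list(zip(xs, ys)))

# Invented occurrence indicators for two keywords across four documents.
x = [1, 1, 0, 0]
y = [1, 0, 1, 0]
h_joint = joint_entropy(x, y)
```

For these two independent fair indicator sequences the joint entropy is 2 bits; correlated keyword occurrences would lower it, which is the kind of statistic such features capture.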

Findings

Three datasets were considered for experimentation. The results show that the developed NSU-ROA + DBN + NN attained high accuracy: 2.6% higher than particle swarm optimization, 3% higher than FireFly, 3.8% higher than grey wolf optimization, 5.5% higher than the whale optimization algorithm and 3.2% higher than ROA-based DBN + NN on dataset 1. The classification analysis shows that the accuracy of the proposed NSU-ROA + DBN + NN was 3.4% higher than DBN + NN, 25% higher than DBN, 28.5% higher than NN and 32.3% higher than the support vector machine on dataset 2. The effective performance of the proposed NSU-ROA + DBN + NN for sentiment analysis of newspaper articles has thus been demonstrated.

Originality/value

This paper adopts a new optimization algorithm, NSU-ROA, to effectively recognize the sentiments of newspaper articles with NN and DBN. This is the first work that uses NSU-ROA-based optimization for accurate identification of sentiments from newspaper articles.

Details

Kybernetes, vol. 51 no. 1
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 23 January 2024

Wang Zhang, Lizhe Fan, Yanbin Guo, Weihua Liu and Chao Ding


Abstract

Purpose

The purpose of this study is to establish a method for accurately extracting torch and seam features, thereby improving the quality of narrow-gap welding. An adaptive deflection-correction system based on passive light vision sensors was designed using the Halcon software from MVTec (Germany) as a platform.

Design/methodology/approach

This paper proposes an adaptive correction system for welding guns and seams comprising image calibration and feature extraction. In image calibration, the field-of-view distortion caused by the camera position is resolved using image-calibration techniques. In feature extraction, clear features of the weld gun and weld seam are accurately extracted after processing with algorithms such as impact filtering, subpixel (XLD) contours, Laplacian of Gaussian and sense regions. The gun and weld-seam centers are accurately fitted using least squares. After the deviation values are calculated, the error values are monitored and corrected under programmable logic controller (PLC) control. Finally, experimental verification and analysis of the tracking errors are carried out.
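The least-squares fitting step can be sketched as an ordinary least-squares line fit through detected seam center points, with the gun center's deviation from that line checked against the 0.3 mm tolerance the system reports. All point coordinates below are invented:

```python
def fit_line_lsq(points):
    """Ordinary least-squares fit of y = a*x + b through (x, y) points,
    standing in for the centre-line fit applied to torch and seam."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Invented noisy seam centre points lying roughly on y = x.
seam_pts = [(0, 0.02), (1, 0.98), (2, 2.01), (3, 2.99)]
a, b = fit_line_lsq(seam_pts)

# Deviation of the detected gun centre from the fitted seam line;
# a correction would be issued (e.g. via PLC) only beyond tolerance.
gun_center = (2.0, 2.35)
deviation = gun_center[1] - (a * gun_center[0] + b)
needs_correction = abs(deviation) > 0.3   # 0.3 mm tolerance from the paper
```

Fitting both centre lines and thresholding their deviation is the standard shape of such a correction loop; the specific filters that produce the centre points are the system's contribution.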

Findings

The results show that the system handles camera aberrations well. Weld-gun features can be effectively and accurately identified, and scratches are reliably distinguished from welds. The system accurately detects the center features of the torch and weld and controls the correction error to within 0.3 mm.

Originality/value

An adaptive correction system based on a passive light vision sensor is designed which corrects the field-of-view distortion caused by the camera’s position deviation. Differences in features between scratches and welds are distinguished, and image features are effectively extracted. The final system weld error is controlled to 0.3 mm.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 12 March 2019

Prafulla Bafna, Shailaja Shirwaikar and Dhanya Pramod


Abstract

Purpose

Text mining is growing in importance in proportion to the growth of unstructured data, and its applications, from knowledge management to social media analysis, are increasing day by day. Mapping a candidate's skill set to the requirements of a job profile is crucial both for new recruitment and for internal task allocation within an organization. Automating candidate selection is essential to avoid the bias or subjectivity that may occur while shuffling through thousands of resumes and other informative documents. The system takes skill sets in the form of documents to build the semantic space, then takes appraisals or resumes as input and suggests the persons best suited to a task or job position as well as the employees needing additional training. The purpose of this study is to extend the term-document matrix and achieve refined clusters that produce improved recommendations. The study also focuses on keeping cluster quality consistent as the dataset grows, to address scalability.

Design/methodology/approach

In this study, a synset-based document matrix construction method is proposed in which semantically similar terms are grouped to mitigate the curse of dimensionality. An automated Task Recommendation System is proposed comprising synset-based feature extraction, iterative semantic clustering and mapping based on semantic similarity.
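The synset-based construction can be sketched by collapsing synonymous tokens onto a shared synset id before counting, so the term-document matrix shrinks. The hand-made synonym map below stands in for real WordNet synsets:

```python
from collections import Counter

# Hand-made synonym map standing in for WordNet synsets; the study
# groups semantically similar terms via actual synsets.
SYNSETS = {
    "develop": "create", "build": "create", "create": "create",
    "administer": "manage", "manage": "manage",
}

def synset_tf(doc):
    """Term-frequency vector in which synonymous terms collapse onto
    one synset id, reducing the dimensionality of the matrix."""
    return Counter(SYNSETS.get(tok, tok) for tok in doc.lower().split())

resume = "develop and build web apps administer servers"
vec = synset_tf(resume)
```

Here "develop" and "build" both count toward the single "create" dimension; applying IDF on top of these synset counts gives the synset-based TF-IDF the study uses as input to clustering.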

Findings

The first step in knowledge extraction from the unstructured textual data is converting it into structured form either as Term frequency–Inverse document frequency (TF-IDF) matrix or synset-based TF-IDF. Once in structured form, a range of mining algorithms from classification to clustering can be applied. The algorithm gives a better feature vector representation and improved cluster quality. The synset-based grouping and feature extraction for resume data optimizes the candidate selection process by reducing entropy and error and by improving precision and scalability.

Research limitations/implications

The productivity of any organization is enhanced by assigning tasks to employees with the right set of skills. Efficient recruitment and task allocation can not only improve productivity but also help satisfy employee aspirations and identify training requirements.

Practical implications

Industries can use the approach to support different processes related to human resource management such as promotions, recruitment and training and, thus, manage the talent pool.

Social implications

The task recommender system creates knowledge by following the steps of the knowledge management cycle and this methodology can be adopted in other similar knowledge management applications.

Originality/value

The efficacy of the proposed approach and its enhancement are validated by experiments on a benchmarked dataset of resumes. The results are compared with existing techniques and show refined clusters: absolute error is reduced by 30 per cent, precision is increased by 20 per cent and dimensionality is lowered by 60 per cent relative to the existing technique. The proposed approach also addresses scalability by producing improved recommendations for 1,000 resumes with reduced entropy.

Details

VINE Journal of Information and Knowledge Management Systems, vol. 49 no. 2
Type: Research Article
ISSN: 2059-5891


Open Access
Article
Publication date: 19 December 2018

Min Wang, Shuguang Li, Lei Zhu and Jin Yao


Abstract

Purpose

Analysis of characteristic driving operations can help develop support for drivers with different driving skills. However, existing work on analyzing driving skills focuses only on single driving operations and cannot reflect differences in how proficiently drivers coordinate operations. Thus, the purpose of this paper is to analyze driving skills in terms of coordinated driving operations.

Design/methodology/approach

AdaBoost was used to extract features and the combined features method was used to combine two or more different driving operations at the same location.
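A minimal AdaBoost with one-feature threshold stumps shows how the features picked by the stumps across rounds are the ones most critical for separating the two driver groups. The data, labels (+1 experienced, -1 inexperienced) and feature indices are invented:

```python
import math

def adaboost_select(X, y, rounds=3):
    """Minimal AdaBoost with one-feature threshold stumps. Returns the
    feature index chosen in each round; repeatedly chosen features are
    the ones most useful for separating the classes (y in {+1, -1})."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n          # uniform sample weights
    selected = []
    for _ in range(rounds):
        best = None            # (weighted error, feat, thresh, sign, preds)
        for j in range(d):
            for t in sorted({row[j] for row in X}):
                for sign in (1, -1):
                    pred = [sign if row[j] > t else -sign for row in X]
                    err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, j, t, sign, pred)
        err, j, t, sign, pred = best
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        selected.append(j)
        # Re-weight: misclassified samples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        s = sum(w)
        w = [wi / s for wi in w]
    return selected

# Invented data: feature 0 separates the groups; feature 1 is noise.
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8]]
y = [1, 1, -1, -1]
selected = adaboost_select(X, y)
```

A combined feature in the paper's sense would simply be appended as an extra column built from two operations at the same location (e.g. the product of a steering and a braking signal), and would then compete for selection like any other feature.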

Findings

A series of experiments based on driving simulator and specific course with several different curves were carried out, and the result indicated the feasibility of analyzing driving behavior through AdaBoost and the combined features method.

Originality/value

There are two main contributions: the first involves a method for feature extraction based on AdaBoost, which selects features critical for coordinating operations of experienced drivers and inexperienced drivers, and the second involves a generating method for candidate features, called the combined features method, through which two or more different driving operations at the same location are combined into a candidate combined feature.

Details

Journal of Intelligent and Connected Vehicles, vol. 1 no. 3
Type: Research Article
ISSN: 2399-9802

