Search results
1 – 10 of 11
Abstract
Purpose
This study aims to present the concept of aircraft turbofan engine health status prediction with artificial neural network (ANN) pattern recognition augmented with automated feature engineering (AFE).
Design/methodology/approach
The main concept of engine health status prediction was based on three case studies and a validation process. The first two were performed on the engine health status parameters, namely, performance margin and specific fuel consumption margin. The third data set was created from engine performance and safety data specifically for the final test. The final validation compared the proposed neural network architecture against machine learning classification algorithms. All studies were conducted for an ANN with a two-layer feedforward pattern recognition architecture, and all case studies and tests were performed for both the simple pattern recognition network and the network augmented with AFE.
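As a minimal sketch of the two-layer feedforward pattern-recognition setup described above; the data below are synthetic stand-ins for the engine health parameters, and the hidden-layer size and class labels are assumptions, not the authors' configuration:

```python
# Minimal sketch of a two-layer feedforward pattern-recognition network.
# Synthetic stand-in data for the engine health parameters; the real
# operational data used in the study are not public.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical features: [performance margin, specific fuel consumption margin]
healthy = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(100, 2))
degraded = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(100, 2))
X = np.vstack([healthy, degraded])
y = np.array([0] * 100 + [1] * 100)  # 0 = healthy, 1 = degraded

# One hidden layer plus the output layer gives the "two-layer" architecture.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

An AFE step would insert derived features (ratios, rolling statistics of the margins) before `fit`; the sketch above shows only the plain pattern-recognition network.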
Findings
The main achievement of this work is a demonstration of how, on the basis of real-life engine operational data, the entire process of engine status prediction can be conducted with neural network pattern recognition augmented with AFE.
Practical implications
This research could be incorporated into engine maintenance strategy and planning. Engine health status prediction based on an ANN augmented with AFE is a powerful tool for aircraft accident and incident prevention.
Originality/value
Although turbofan engine health status prediction with an ANN is not a novel approach, it is worth emphasizing that, unlike other publications, this research was based on genuine, real engine performance operational data as well as the AFE methodology, which makes the research highly reliable. This is also the reason the prediction results reflect the effect of the real engine wear and deterioration process.
Details
Keywords
Chunxiu Qin, Yulong Wang, XuBu Ma, Yaxi Liu and Jin Zhang
Abstract
Purpose
To address the shortcomings of existing academic user information needs identification methods, such as low efficiency and high subjectivity, this study aims to propose an automated method of identifying online academic user information needs.
Design/methodology/approach
This study’s method consists of two main parts: the first is the automatic classification of academic user information needs based on the bidirectional encoder representations from transformers (BERT) model. The second is the key content extraction of academic user information needs based on the improved MDERank key phrase extraction (KPE) algorithm. Finally, the applicability and effectiveness of the method are verified by an example of identifying the information needs of academic users in the field of materials science.
Findings
Experimental results show that the BERT-based information needs classification model achieved the highest weighted average F1 score of 91.61%. The improved MDERank KPE algorithm achieved the highest F1 score of 61%. The empirical analysis results reveal that the information needs of the categories "methods," "experimental phenomena" and "experimental materials" are relatively high in the materials science field.
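The weighted average F1 metric reported above is the per-class F1 weighted by class support; a minimal pure-Python sketch on toy labels, with hypothetical need-category names:

```python
# Sketch of the "weighted average F1" metric used to evaluate the BERT-based
# classifier: per-class F1 scores, weighted by each class's support.
from collections import Counter

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += f1 * support[c] / total
    return score

# Toy example with three hypothetical need categories.
y_true = ["methods", "methods", "materials", "phenomena", "methods", "materials"]
y_pred = ["methods", "materials", "materials", "phenomena", "methods", "materials"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.833
```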
Originality/value
This study provides a solution for automated identification of academic user information needs. It helps online academic resource platforms to better understand their users’ information needs, which in turn facilitates the platform’s academic resource organization and services.
Details
Keywords
Dilawar Ali, Kenzo Milleville, Steven Verstockt, Nico Van de Weghe, Sally Chambers and Julie M. Birkholz
Abstract
Purpose
Historical newspaper collections provide a wealth of information about the past. Although the digitization of these collections significantly improves their accessibility, a large portion of digitized historical newspaper collections, such as those of KBR, the Royal Library of Belgium, are not yet searchable at article-level. However, recent developments in AI-based research methods, such as document layout analysis, have the potential for further enriching the metadata to improve the searchability of these historical newspaper collections. This paper aims to discuss the aforementioned issue.
Design/methodology/approach
In this paper, the authors explore how existing computer vision and machine learning approaches can be used to improve access to digitized historical newspapers. To do this, the authors propose a workflow, using computer vision and machine learning approaches to (1) provide article-level access to digitized historical newspaper collections using document layout analysis, (2) extract specific types of articles (e.g. feuilletons – literary supplements from Le Peuple from 1938), (3) conduct image similarity analysis using (un)supervised classification methods and (4) perform named entity recognition (NER) to link the extracted information to open data.
Findings
The results show that the proposed workflow improves the accessibility and searchability of digitized historical newspapers, and also contributes to the building of corpora for digital humanities research. The AI-based methods enable automatic extraction of feuilletons, clustering of similar images and dynamic linking of related articles.
Originality/value
The proposed workflow enables automatic extraction of articles, including detection of a specific type of article, such as a feuilleton or literary supplement. This is particularly valuable for humanities researchers as it improves the searchability of these collections and enables corpora to be built around specific themes. Article-level access to, and improved searchability of, KBR's digitized newspapers are demonstrated through the online tool (https://tw06v072.ugent.be/kbr/).
Details
Keywords
Gaurav Sarin, Pradeep Kumar and M. Mukund
Abstract
Purpose
Text classification is a widely accepted and adopted technique in organizations to mine and analyze unstructured and semi-structured data. With the advancement of computing technology, deep learning has become more popular among academics and professionals for mining and analytical operations. In this work, the authors study the research carried out in the field of text classification using deep learning techniques to identify gaps and opportunities for further research.
Design/methodology/approach
The authors adopted a bibliometric approach in conjunction with visualization techniques to uncover new insights and findings. The authors collected two decades of data from the Scopus global database to perform this study. The authors discuss business applications of deep learning techniques for text classification.
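One typical bibliometric step, building a collaboration network, can be sketched as follows; the records are hypothetical, not the Scopus data set used in the study:

```python
# Sketch of building a co-authorship (collaboration) network from
# publication records: each pair of co-authors becomes a weighted edge.
from collections import Counter
from itertools import combinations

# Hypothetical publication records (author initials only).
records = [
    {"authors": ["A", "B", "C"]},
    {"authors": ["A", "B"]},
    {"authors": ["B", "D"]},
]

edges = Counter()
for rec in records:
    for pair in combinations(sorted(rec["authors"]), 2):
        edges[pair] += 1  # edge weight = number of co-authored papers

print(edges.most_common(2))
```

The resulting weighted edge list is what network-visualization tools (e.g. the network diagrams mentioned in the Findings) are drawn from.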
Findings
The study provides an overview of various publication sources in the combined field of text classification and deep learning. It also presents lists of prominent authors and their countries working in this field, as well as the most cited articles based on citations and country of research. Various visualization techniques, such as word clouds, network diagrams and thematic maps, were used to identify collaboration networks.
Originality/value
The study helped to identify research gaps, which is an original contribution to the body of literature. To the best of the authors' knowledge, an in-depth study of the field of text classification and deep learning has not previously been performed in this detail. The study provides high value to scholars and professionals by pointing them to research opportunities in this area.
Details
Keywords
Abstract
Purpose
This study focuses on the classification of targets with varying shapes using radar cross section (RCS), which is influenced by the target’s shape. This study aims to develop a robust classification method by considering an incident angle with minor random fluctuations and using a physical optics simulation to generate data sets.
Design/methodology/approach
The approach involves several supervised machine learning and classification methods, including traditional algorithms and a deep neural network classifier. It uses histogram-based definitions of the RCS for feature extraction, with an emphasis on resilience against noise in the RCS data. Data enrichment techniques are incorporated, including the use of noise-impacted histogram data sets.
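The histogram-based feature extraction and K-nearest-neighbour classification described above can be sketched as follows, with synthetic RCS samples standing in for the physical-optics simulation (the distributions and bin settings are illustrative assumptions):

```python
# Sketch of histogram-based RCS feature extraction plus a K-nearest-neighbour
# classifier. RCS samples are synthetic; the study generated its data sets
# from a physical-optics simulation, which is not reproduced here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def rcs_histogram(samples, bins=8, limits=(0.0, 4.0)):
    # Feature vector = normalised histogram of RCS magnitudes.
    hist, _ = np.histogram(samples, bins=bins, range=limits)
    return hist / hist.sum()

# Two hypothetical target shapes with different RCS distributions.
X, y = [], []
for _ in range(50):
    X.append(rcs_histogram(rng.gamma(2.0, 0.5, size=200)))  # shape A
    y.append(0)
    X.append(rcs_histogram(rng.gamma(5.0, 0.4, size=200)))  # shape B
    y.append(1)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.score(X, y))
```

Data enrichment, as described in the abstract, would add noise-perturbed copies of the histograms to the training set before fitting.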
Findings
The classification algorithms are extensively evaluated, highlighting their efficacy in feature extraction from RCS histograms. Among the studied algorithms, the K-nearest neighbour is found to be the most accurate of the traditional methods, but it is surpassed in accuracy by a deep learning network classifier. The results demonstrate the robustness of the feature extraction from the RCS histograms, motivated by mm-wave radar applications.
Originality/value
This study presents a novel approach to target classification that extends beyond traditional methods by integrating deep neural networks and focusing on histogram-based methodologies. It also incorporates data enrichment techniques to enhance the analysis, providing a comprehensive perspective for target detection using RCS.
Details
Keywords
Adel Almasarwah, Khalid Y. Aram and Yaseen S. Alhaj-Yaseen
Abstract
Purpose
This study aims to apply machine learning (ML) to identify new financial elements managers might use for earnings management (EM), assessing their impact on the Standard Jones Model and modified Jones model for EM detection and examining managerial motives for using these components.
Design/methodology/approach
Using eXtreme gradient boosting on 23,310 US firm-year observations from 2012 to 2021, the study pinpoints nine financial variables potentially used for earnings manipulation that are not covered by traditional accruals models.
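The importance-ranking step can be sketched as follows; note that scikit-learn's GradientBoostingClassifier is used here as a stand-in for XGBoost, and the data are synthetic rather than the firm-year panel:

```python
# Sketch of ranking financial variables by gradient-boosting feature
# importance. GradientBoostingClassifier stands in for XGBoost; the data
# are synthetic, not the 23,310 US firm-year observations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))  # four hypothetical financial variables
# Target driven mostly by column 0 (think: cost of goods sold).
y = (X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
importances = model.feature_importances_  # normalised: sums to 1.0
print(np.argmax(importances))  # column 0 should dominate
```

The study's reported relative importances (40.2% and 11.5%) correspond to entries of such a normalised importance vector.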
Findings
Cost of goods sold and earnings before interest, taxes, depreciation and amortization are identified as the most significant for EM, with relative importances of 40.2% and 11.5%, respectively.
Research limitations/implications
The study’s scope, limited to a specific data set and timeframe, and the exclusion of some financial variables may impact the findings’ broader applicability.
Practical implications
The results are crucial for researchers, practitioners, regulators and investors, offering strategies for detecting and addressing EM.
Social implications
Insights from the study advocate for greater financial transparency and integrity in businesses.
Originality/value
By incorporating ML in EM detection and spotlighting overlooked financial variables, the research brings fresh perspectives and opens new avenues for further exploration in the field.
Details
Keywords
Abstract
Purpose
This research endeavors to assess the influence of financial shared service centers (FSSCs) on the quality of accounting information within China’s A-share listed companies. Using a multi-period difference-in-differences (DID) model, the study aims to empirically examine the correlation between the adoption of FSSCs and the quality of accounting information.
Design/methodology/approach
The study uses a robust methodology to evaluate the relationship between FSSCs and accounting information quality (AIQ). Leveraging the established FSSCs within China’s A-share listed companies as the treatment group, this research adopts a multi-period DID model. This approach enables a rigorous empirical examination of the influence exerted by FSSCs on the overall quality of accounting information.
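A multi-period DID estimate of this kind is commonly obtained from a two-way fixed-effects regression; a minimal sketch on noise-free synthetic data with a known treatment effect of 2.0 (the adoption schedule and effect size are illustrative assumptions, not the FSSC data):

```python
# Sketch of a multi-period (staggered) difference-in-differences estimate
# via a two-way fixed-effects OLS regression on synthetic panel data.
import numpy as np

units, periods, tau = 4, 6, 2.0
start = {0: 3, 1: 4}  # units 0 and 1 "adopt FSSCs" at t=3 and t=4; 2, 3 never

rows, y = [], []
for i in range(units):
    for t in range(periods):
        d = 1.0 if i in start and t >= start[i] else 0.0
        # Columns: intercept, unit dummies (unit 0 dropped),
        # time dummies (t=0 dropped), treatment indicator D.
        row = [1.0]
        row += [1.0 if i == j else 0.0 for j in range(1, units)]
        row += [1.0 if t == s else 0.0 for s in range(1, periods)]
        row.append(d)
        rows.append(row)
        y.append(0.5 * i + 0.3 * t + tau * d)  # unit FE + time FE + effect

beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
print(round(beta[-1], 6))  # recovered treatment effect
```

On this noise-free panel the last coefficient recovers the treatment effect exactly; with real data it is an estimate with a standard error.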
Findings
The present study delves into the impact of FSSCs on AIQ and conducts empirical analysis using data from Chinese A-share listed companies between 2004 and 2021. The findings substantiate that FSSCs significantly bolster the quality of accounting information, a conclusion retained even after robustness tests. Specifically, FSSCs exhibit a positive correlation with the comparability, timeliness and disclosure quality of accounting information while demonstrating no significant influence on the relevance, robustness and reliability factors.
Research limitations/implications
First, the analysis primarily rests upon data from Chinese A-share listed companies between 2004 and 2021, potentially constraining the generalizability of findings across diverse contexts. Second, despite controlling for various factors, unobserved variables or external factors not encompassed in the model might influence the relationship between FSSCs and AIQ. Additionally, the study’s reliance solely on quantitative data confines exploration into qualitative aspects that might offer a more comprehensive understanding of FSSCs’ impact on AIQ.
Practical implications
This paper establishes a nuanced connection between FSSC operations and AIQ, furnishing direct empirical evidence for their economic implications and propounding a novel avenue for augmenting AIQ. It also furnishes guidance for forthcoming FSSC development, accentuating the necessity of harnessing information technology to enhance the relevance, reliability and robustness of accounting information.
Originality/value
The majority of prior empirical studies assessing AIQ have focused on singular indicators, lacking a comprehensive depiction of its overall level. To address this gap, this paper pioneers the construction of a comprehensive index for AIQ, providing a holistic representation of its level. Furthermore, this study stands as the inaugural investigation into the relationship between the FSSCs of China's A-share listed firms and the quality of their accounting information.
Details
Keywords
Ahmad Honarjoo, Ehsan Darvishan, Hassan Rezazadeh and Amir Homayoon Kosarieh
Abstract
Purpose
This article introduces SigBERT, a novel approach that fine-tunes bidirectional encoder representations from transformers (BERT) for the purpose of distinguishing between intact and impaired structures by analyzing vibration signals. Structural health monitoring (SHM) systems are crucial for identifying and locating damage in civil engineering structures. The proposed method aims to improve upon existing methods in terms of cost-effectiveness, accuracy and operational reliability.
Design/methodology/approach
SigBERT employs a fine-tuning process on the BERT model, leveraging its capabilities to effectively analyze time-series data from vibration signals to detect structural damage. This study compares SigBERT's performance with baseline models to demonstrate its superior accuracy and efficiency.
Findings
The experimental results, obtained through the Qatar University grandstand simulator, show that SigBERT outperforms existing models in terms of damage detection accuracy. The method is capable of handling environmental fluctuations and offers high reliability for non-destructive monitoring of structural health. Quantifiable results, such as a 99% accuracy rate and an F1 score of 0.99, underline the effectiveness of the proposed model.
Originality/value
SigBERT presents a significant advancement in SHM by integrating deep learning with a robust transformer model. The method offers improved performance in both computational efficiency and diagnostic accuracy, making it suitable for real-world operational environments.
Details
Keywords
Abstract
Purpose
This review article is focused on the following research questions: RQ1: What are the methods used by authors to collect data in order to evaluate one's profile? RQ2: What are the classification algorithms and ranking metrics used to give suggestions to users? RQ3: How effective are these algorithms and metrics identified in RQ2?
Design/methodology/approach
This survey carries out four major systematic review phases: formulating the research questions; conducting the review, which includes selecting articles and appraising evidence quality; data extraction; and narrative data synthesis.
Findings
Collecting from primary sources is more personalized and relevant. Embedded skill sets that have a considerable impact on one's career aspirations could be mined from secondary sources. A hybrid recommender system helped mitigate the limitations of both. The effectiveness of the models depends not only on the filtering techniques used but also on the metrics used to measure similarity and the frequency of words or phrases in a document.
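The role of similarity metrics noted above can be illustrated with a minimal cosine-similarity sketch over term-frequency vectors; the profile and job descriptions are hypothetical:

```python
# Sketch of cosine similarity between term-frequency vectors of a student
# profile and job descriptions (JDs) - one metric the reviewed
# recommendation systems use to rank suggestions.
from collections import Counter
from math import sqrt

def cosine(doc_a, doc_b):
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

profile = "python machine learning data analysis"
jd_1 = "machine learning engineer python"
jd_2 = "marketing communications specialist"

print(cosine(profile, jd_1) > cosine(profile, jd_2))  # → True
```

A real system would use TF-IDF weighting or embeddings rather than raw counts, but the ranking principle is the same.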
Research limitations/implications
The study benefits internship program coordinators of a university aiming to develop a recommender or matching system platform for their students. The content of the study may shed light on how university decision-makers can explore the techniques or algorithms to be integrated. One of the advantages of internship or industrial training programs is that they help students align their training with their career goals. Research studies have discussed other RS filtering techniques apart from the three major filtering techniques.
Practical implications
The outcome of the study, which is a recommendation system to match a student's profile with the knowledge and skills being sought by organizations, may help ease the challenges encountered by both parties. The study benefits internship coordinators of a university who are planning to create a recommendation system, an innovative project to be used in teaching and learning.
Social implications
Internship programs can help a student grow personally and professionally. A university student looking for internship opportunities can find the task daunting, as there is a vast pool of opportunities offered in the market. Confidently matching their knowledge, skills and career goals with job descriptions (JDs) can be challenging. The same holds for companies, as finding the right people for the right job is a tough endeavor. The main objective of this study is to identify models implemented in recommendation systems to give and/or rank suggestions to users.
Originality/value
While surveys regarding recommender systems (RS) exist, there are gaps in the presentation of various data collection methods and in the comparison of recommendation filtering techniques used for both primary and secondary sources of data. Most recommendation systems for internship programs are intended for European universities rather than Southeast Asia. There are also a limited number of comparative studies or systematic review articles related to recommendation systems for internship programs offered in a Southeast Asian landscape. Systematic reviews on the usability of the proposed recommendation systems are also limited. This study reviews the articles from data collection and techniques used through to the usability of the proposed recommendation systems.
Details
Keywords
Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi
Abstract
Purpose
The proposed model aims to predict stock market signals with high accuracy. To this end, the stock market is analysed by the technical analysis of Japanese candlesticks, combined with a support vector machine (SVM) and the following meta-heuristic algorithms: particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA).
Design/methodology/approach
Among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors propose comparative results to validate the designed model against the basic models of three past articles. In the first model, PSO is used as a classification method to search the solution space exhaustively and at high running speed. In the second model, SVM and ICA are examined over time, where ICA tunes the SVM parameters. Finally, in the third model, SVM and GA are studied, where GA acts as the optimizer and feature selection agent.
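The GA-as-feature-selector role in the third model can be sketched as follows; the fitness function here is a toy stand-in for the study's cross-validated SVM accuracy, and the "useful" feature set is an assumption for illustration:

```python
# Sketch of a genetic algorithm selecting features: it evolves binary masks
# over candidate candlestick features. The fitness function is a toy
# stand-in for cross-validated SVM accuracy.
import random

random.seed(0)
N_FEATURES = 8
USEFUL = {0, 2, 5}  # hypothetical informative features

def fitness(mask):
    # Reward selecting useful features, lightly penalise extras.
    return sum(1 for i in USEFUL if mask[i]) - 0.1 * sum(mask)

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # elitist selection: keep top half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # bit-flip mutation
                i = random.randrange(N_FEATURES)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])  # selected feature indices
```

In the study's setup, `fitness` would train an SVM on the masked feature set and return its validation accuracy.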
Findings
The results indicate that the prediction accuracy of all new models is high for only six days. However, with respect to the confusion matrix results, the SVM-GA and SVM-ICA models correctly predicted more sell signals, and the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA showed better performance than the other models in terms of execution.
Research limitations/implications
The long time span of the data analyzed in this study, covering the years 2013–2021, makes the input data analysis challenging; the inputs must be adjusted with respect to market conditions.
Originality/value
In this study, two methods have been developed within a candlestick model: raw-based and signal-based approaches, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.
Details