Search results

1 – 10 of over 2000
Open Access
Article
Publication date: 30 March 2023

Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi

Abstract

Purpose

In the last few years, the amount of Linked Open Data (LOD) describing artworks, in both general-purpose and domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on icon aspects.

Design/methodology/approach

This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.

Findings

This study’s results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.

Originality/value

The main contribution of this work is an overview of the current landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study’s results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 23 January 2024

Parisa Mousavi, Mehdi Shamizanjani, Fariborz Rahimnia and Mohammad Mehraeen

Abstract

Purpose

Customer experience management (CXM), which aims to achieve and maintain customers' long-term loyalty, has attracted the attention of many organizations. Improving customer experience management in organizations requires that, first, their relevant capabilities be evaluated. The present study aimed to offer a set of key performance indicators for evaluating customer experience management in commercial banks.

Design/methodology/approach

The study first attempted to identify the components of evaluating customer experience management by reviewing the related literature and conducting interviews with experts. The extracted components were then transformed into assessable metrics using the goal question metric method, and the key performance indicators relevant to customer experience management in commercial banks were selected according to the experts' opinions and the Fuzzy Delphi method.
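
To make the selection step concrete, the sketch below shows one round of Fuzzy Delphi screening, assuming triangular fuzzy numbers for a five-point importance scale and the common min/geometric-mean/max aggregation; the indicator names, expert ratings and the 0.7 acceptance threshold are illustrative, not taken from the paper.

```python
import numpy as np

# Map each expert's linguistic rating to a triangular fuzzy number (l, m, u).
LIKERT_TFN = {
    1: (0.0, 0.0, 0.25),   # very unimportant
    2: (0.0, 0.25, 0.5),
    3: (0.25, 0.5, 0.75),
    4: (0.5, 0.75, 1.0),
    5: (0.75, 1.0, 1.0),   # very important
}

def fuzzy_delphi_screen(ratings, threshold=0.7):
    """Aggregate expert ratings per indicator and keep those whose
    defuzzified score reaches the acceptance threshold."""
    kept = {}
    for indicator, scores in ratings.items():
        tfns = np.array([LIKERT_TFN[s] for s in scores])
        # Common aggregation: min of lowers, geometric mean of modes, max of uppers.
        l = tfns[:, 0].min()
        m = tfns[:, 1].prod() ** (1.0 / len(tfns))
        u = tfns[:, 2].max()
        crisp = (l + m + u) / 3.0   # simple centroid defuzzification
        if crisp >= threshold:
            kept[indicator] = round(crisp, 3)
    return kept

# Hypothetical ratings from five experts for three candidate KPIs.
ratings = {
    "customer_satisfaction": [5, 5, 4, 5, 4],
    "mean_calls_to_resolve": [4, 4, 5, 4, 4],
    "brochure_print_volume": [2, 1, 3, 2, 2],
}
print(fuzzy_delphi_screen(ratings))  # the weak indicator is filtered out
```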

Findings

According to the findings of the study, 21 key performance indicators were identified for customer experience management in commercial banks. Customer satisfaction, the mean number of calls to resolve an issue in customer journey touchpoints, the net promoter score (NPS) and the ratio of the budget allocated to the CXM department to that of the marketing department were found to be the most significant performance indicators according to banking experts.

Originality/value

The present study was among the first research projects intended to evaluate CXM and offer key performance indicators that could help the managers of commercial banks assess the maturity levels of their CXM.

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
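
As a rough illustration of the hybrid described above, the following sketch wraps a binary Gray Wolf Optimizer around a scikit-learn MLP, using cross-validated accuracy (lightly penalized by subset size) as the wolves' fitness. The sigmoid transfer function, penalty weight and synthetic data are assumptions; the authors' exact GWOFS operators and defect datasets may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    """Cross-validated MLP accuracy on the selected subset, lightly
    penalized by subset size (a common GWO feature-selection objective)."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.mean()

def binary_gwo(n_features, n_wolves=6, n_iter=10):
    wolves = rng.integers(0, 2, size=(n_wolves, n_features)).astype(float)
    scores = np.array([fitness(w) for w in wolves])
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                    # linearly decreasing coefficient
        alpha, beta, delta = wolves[np.argsort(scores)[::-1][:3]]
        for i in range(n_wolves):
            pos = np.zeros(n_features)
            for leader in (alpha, beta, delta):   # encircling, led by the top wolves
                r1, r2 = rng.random(n_features), rng.random(n_features)
                A, C = 2 * a * r1 - a, 2 * r2
                pos += leader - A * np.abs(C * leader - wolves[i])
            pos /= 3.0
            # Sigmoid transfer function turns continuous positions into bits.
            wolves[i] = (rng.random(n_features) < 1 / (1 + np.exp(-pos))).astype(float)
            scores[i] = fitness(wolves[i])
    best = wolves[np.argmax(scores)].astype(bool)
    return best, scores.max()

mask, score = binary_gwo(X.shape[1])
print(f"selected {mask.sum()} of {X.shape[1]} features, fitness={score:.3f}")
```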

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 16 November 2023

Ehsan Goudarzi, Hamid Esmaeeli, Kia Parsa and Shervin Asadzadeh

Abstract

Purpose

The target of this research is to develop a mathematical model which combines the Resource-Constrained Multi-Project Scheduling Problem (RCMPSP) and the Multi-Skilled Resource-Constrained Project Scheduling Problem (MSRCPSP). Due to the importance of resource management, the proposed formulation comprises resource leveling considerations as well. The model aims to simultaneously optimize: (1) the total time to accomplish all projects and (2) the total deviation of resource consumptions from the uniform utilization levels.

Design/methodology/approach

The K-Means (KM) and Fuzzy C-Means (FCM) clustering methods have been separately applied to discover clusters of activities that have the most similar resource demands. The discovered clusters are given to the scheduling process as a priori knowledge. Consequently, the execution times of activities with the most similar resource requests will not overlap. The intricacy of the problem led us to incorporate the KM and FCM techniques into a meta-heuristic called the Bi-objective Symbiosis Organisms Search (BSOS) algorithm so that real-life samples of this problem could be solved. Therefore, two clustering-based algorithms, namely the BSOS-KM and BSOS-FCM, have been developed.
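
A minimal sketch of the clustering step, assuming each activity is represented by its per-period resource-demand vector; K-Means is shown here, and FCM would produce soft memberships instead. The demand matrix is a toy example, not the paper's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy resource-demand matrix: rows = activities, columns = renewable resources
# (e.g. units of crew, crane, excavator each activity requests per period).
demand = np.array([
    [4, 0, 2],
    [4, 1, 2],
    [0, 3, 0],
    [1, 3, 0],
    [2, 2, 1],
    [2, 2, 2],
])

# K-Means groups activities with the most similar resource requests; the
# scheduler then avoids overlapping activities that share a cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(demand)
clusters = {c: np.flatnonzero(km.labels_ == c).tolist() for c in range(3)}
print(clusters)  # e.g. {0: [0, 1], 1: [2, 3], 2: [4, 5]}
```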

Findings

Comparisons between the BSOS-KM, BSOS-FCM and the BSOS method without any clustering approach show that the clustering techniques could enhance the optimization process. Another hybrid clustering-based methodology called the NSGA-II-SPE has been added to the comparisons to evaluate the developed resource leveling framework.

Practical implications

The practical importance of the model and the clustering-based algorithms has been demonstrated in planning several construction projects in which multiple water supply systems are concurrently constructed.

Originality/value

Reviewing the literature revealed that there was a need for a hybrid formulation that embraces the characteristics of the RCMPSP and MSRCPSP with resource leveling considerations. Moreover, the application of clustering algorithms as resource leveling techniques was not studied sufficiently in the literature.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 6 December 2022

Samuel Façanha Câmara, Brenno Buarque, Glauco Paula Pinto, Thiago Vasconcelos Ribeiro and Jorge Barbosa Soares

Abstract

Purpose

This study aims to evaluate a public policy program that finances projects for the development of innovative technological solutions. This paper analyzed the influence of human and social capital on the development of the projects, from the perspective of the policy’s effectiveness and efficiency. This specific policy adopted the funding model of economic subsidy by means of grants, which shows the significant engagement of the public sector in applying nonrefundable resources more directly than through loans, assuming the role of an entrepreneurial state, according to Mazzucato (2011, 2018) and Tavani and Zamparelli (2020).

Design/methodology/approach

This is a quantitative-descriptive study, according to Marconi and Lakatos (2017). The study is descriptive in that it presents information on innovation projects funded by FUNCAP (Ceará Foundation for Support to Scientific and Technological Development). It is quantitative in that it establishes multivariate relationships among the variables related to human capital and social capital, which are relevant to technological and innovative development, and introduces variables on technological evolution, proposed as measures of the program’s effectiveness (DTRL, MkTRL) and efficiency (ETRL).

Findings

This paper sought to contribute to research on public policies for innovation, more specifically by analyzing variables that may affect the development of technological and innovative projects in knowledge-intensive companies. The authors studied the capitals potentially important for these companies in the development of innovative projects: human capital, reflected in the technical and scientific knowledge of the project team, and social capital, reflected in the connections and social relationships among team members. The results showed that the degree of efficiency of the public funding program depends on how much knowledge, skill and technical capacity the benefited projects' teams have accumulated, the so-called teams' human capital.

Research limitations/implications

A limitation of this research is the sample: 72 responses were obtained from 284 submissions. Another limitation is that the quantitative nature of the study does not allow a deeper, qualitative understanding of the topics addressed from the companies' and policymakers' perspectives. For future work, qualitative studies on the aspects examined here are suggested; case studies of specific companies or policymakers could clarify and deepen the relationships between these themes.

Practical implications

As for the practical implications, for managers of public funding programs and for company managers alike, the human capital of innovative project development teams is important in programs that deal with technological development projects. In practice, this means that the greater the academic human capital of the members of the supported project teams, the more efficiently the projects use the resources provided to develop their technologies (Ashford, 2000; Chen et al., 2008; Lerro et al., 2014).

Social implications

Hence, the authors conclude that the evaluated grant-based innovation-funding program achieved acceptable results in terms of promoting the technological evolution of the benefited projects and bringing their technologies closer to the market. Efficiency was the least favorable result, showing that the program needs to focus on improving this specific aspect. Within the investigated program, the aspect needing enhancement (efficiency, ETRL) was the one that presented significant relationships with the human and social capital of the benefited projects' teams. Thus, by selecting more projects whose teams have high capital, the efficiency of the public policy, in this case the development of projects with high technological and innovative potential, may be achieved.

Originality/value

The findings strengthen the need for innovation public policies designed and implemented in a systemic way in the science, technology and innovation ecosystem, to provide a technological infrastructure and human capital necessary for developing projects with high technological and innovative potential (Ergas, 1987; Audretsch and Link, 2012; Caloghirou et al., 2015; Edler and Fagerberg, 2017; Silvio et al., 2019).

Details

Journal of Science and Technology Policy Management, vol. 15 no. 2
Type: Research Article
ISSN: 2053-4620

Article
Publication date: 26 May 2022

Ismail Abiodun Sulaimon, Hafiz Alaka, Razak Olu-Ajayi, Mubashir Ahmad, Saheed Ajayi and Abdul Hye

Abstract

Purpose

Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully investigated. This paper aims to investigate the effects traffic data sets have on the performance of machine learning (ML) predictive models in AQ prediction.

Design/methodology/approach

To achieve this, the authors have set up an experiment with the control data set having only the AQ data set and meteorological (Met) data set, while the experimental data set is made up of the AQ data set, Met data set and traffic data set. Several ML models (such as extra trees regressor, eXtreme gradient boosting regressor, random forest regressor, K-neighbors regressor and two others) were trained, tested and compared on these individual combinations of data sets to predict the volume of PM2.5, PM10, NO2 and O3 in the atmosphere at various times of the day.
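
The control-versus-experimental comparison can be sketched on synthetic data as follows: the same regressor is trained once without and once with a traffic feature, and the test scores are compared. The feature names and data-generating process are illustrative assumptions; the study's other regressors drop in the same way.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the hourly records: Met features plus a traffic
# feature that partly drives the PM2.5 target.
rng = np.random.default_rng(42)
n = 2000
met = pd.DataFrame({
    "temp": rng.normal(12, 6, n),
    "wind": rng.gamma(2, 2, n),
    "humidity": rng.uniform(30, 95, n),
})
traffic = pd.Series(rng.poisson(800, n), name="vehicle_count")
pm25 = 20 + 0.01 * traffic - 1.5 * met["wind"] + rng.normal(0, 3, n)

control = met                               # AQ + Met only
experimental = met.assign(traffic=traffic)  # AQ + Met + traffic

# Other regressors from the study (random forest, XGBoost, etc.) slot in here.
for name, X in [("control", control), ("experimental", experimental)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, random_state=0)
    model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(name, "R2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```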

Findings

The results showed that the various ML algorithms react differently to the traffic data set, although it generally improved the performance of all the ML algorithms considered in this study by at least 20%, with an error reduction of at least 18.97%.

Research limitations/implications

This research is limited in terms of the study area, and the results cannot be generalized outside of the UK, as some of the inherent conditions may not be similar elsewhere. Additionally, only the ML algorithms commonly used in the literature are considered in this research, thereby leaving out a few other ML algorithms.

Practical implications

This study reinforces the belief that the traffic data set has a significant effect on improving the performance of air pollution ML prediction models. Hence, there is an indication that ML algorithms behave differently when trained with traffic data in the development of an AQ prediction model. This implies that developers and researchers in AQ prediction need to identify the ML algorithms that best serve their purposes before implementation.

Originality/value

The results of this study will enable researchers to focus on the most beneficial algorithms when using traffic data sets in AQ prediction.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 7 November 2023

Christian Nnaemeka Egwim, Hafiz Alaka, Youlu Pan, Habeeb Balogun, Saheed Ajayi, Abdul Hye and Oluwapelumi Oluwaseun Egunjobi

Abstract

Purpose

The study aims to develop a multilayer high-effective ensemble of ensembles predictive model (stacking ensemble) using several hyperparameter optimized ensemble machine learning (ML) methods (bagging and boosting ensembles) trained with high-volume data points retrieved from Internet of Things (IoT) emission sensors, time-corresponding meteorology and traffic data.

Design/methodology/approach

First, the study tested the big data hypothesis by developing sample ensemble predictive models on different data sample sizes and comparing their results. Second, it developed a standalone model and several bagging and boosting ensemble models and compared their results. Finally, it used the best-performing bagging and boosting predictive models as input estimators to develop a novel multilayer high-effective stacking ensemble predictive model.
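
A hedged scikit-learn sketch of the final stacking step: bagging and boosting models serve as level-0 estimators, and a simple meta-learner combines their out-of-fold predictions. The particular estimators, meta-learner and synthetic data are assumptions, not the authors' tuned configuration.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=1500, n_features=12, noise=10, random_state=0)

# Best-performing bagging and boosting models become level-0 estimators;
# the meta-learner is trained on their cross-validated predictions.
stack = StackingRegressor(
    estimators=[
        ("bagged_trees", BaggingRegressor(n_estimators=100, random_state=0)),
        ("random_forest", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("boosting", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RidgeCV(),
    cv=5,
)
print("stacked R2:", round(float(cross_val_score(stack, X, y, cv=3).mean()), 3))
```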

Findings

Results proved data size to be one of the main determinants of ensemble ML predictive power. Second, they proved that, compared to using a single algorithm, the cumulative result from ensemble ML algorithms is almost always better in terms of prediction accuracy. Finally, they proved the stacking ensemble to be a better model for predicting PM2.5 concentration levels than the bagging and boosting ensemble models.

Research limitations/implications

A limitation of this study is the trade-off between the performance of this novel model and the computational time required to train it. Whether this gap can be closed remains an open research question, and future research should attempt to close it. Future studies can also integrate this novel model into a personal air quality messaging system to inform the public of pollution levels and improve public access to air quality forecasts.

Practical implications

The outcome of this study will help the public proactively identify highly polluted areas, thus potentially reducing pollution-associated/triggered COVID-19 (and other lung disease) deaths, complications and transmission by encouraging avoidance behavior, and will support informed lockdown decisions by government bodies when integrated into an air pollution monitoring system.

Originality/value

This study fills a gap in the literature by providing a justification for selecting appropriate ensemble ML algorithms for PM2.5 concentration-level predictive modeling. Second, it contributes to the big data hypothesis, which suggests that data size is one of the most important factors in ML predictive capability. Third, it supports the premise that, when using ensemble ML algorithms, the cumulative output is almost always better in terms of prediction accuracy than that of a single algorithm. Finally, it develops a novel multilayer high-performance hyperparameter-optimized ensemble-of-ensembles predictive model that can accurately predict PM2.5 concentration levels with improved model interpretability and enhanced generalizability, and it provides a novel databank of historic pollution data from IoT emission sensors that can be purchased for research, consultancy and policymaking.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 28 February 2023

V. Senthil Kumaran and R. Latha

Abstract

Purpose

The purpose of this paper is to provide adaptive access to learning resources in the digital library.

Design/methodology/approach

A novel method using ontology-based multi-attribute collaborative filtering is proposed. Digital libraries are fully automated: all resources are in digital form, and access to the available information is provided electronically to remote as well as conventional users. To satisfy users' information needs, a huge amount of newly created information is published electronically in digital libraries. While search applications are improving, it is still difficult for the majority of users to find relevant information. For better service, the framework should also be able to adapt queries to search domains and target learners.

Findings

This paper improves the accuracy and efficiency of predicting and recommending personalized learning resources in digital libraries. To facilitate a personalized digital learning environment, the authors propose a novel ontology-supported collaborative filtering (CF) recommendation system. The objective is to provide adaptive access to learning resources in the digital library. The proposed model is based on user-based CF, which suggests learning resources to students based on their course registration, preferences for topics and digital libraries. Using ontological framework knowledge for semantic similarity, and considering multiple attributes apart from learners' preferences for the learning resources, improves the accuracy of the proposed model.
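
A minimal sketch of the blending idea, assuming a precomputed ontology-based similarity matrix between learners; the rating matrix, similarity values and blend weight alpha are illustrative, not the system's actual data.

```python
import numpy as np

# Learner-by-resource rating matrix (0 = unrated); rows are learners.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

# Semantic similarity between learners' registered topics, precomputed from
# the course ontology (e.g. a Wu-Palmer-style measure); values are illustrative.
topic_sim = np.array([
    [1.0, 0.9, 0.2],
    [0.9, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])

def cosine(u, v):
    mask = (u > 0) & (v > 0)            # compare only co-rated resources
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(target, item, alpha=0.6):
    """Blend rating-based and ontology-based similarity, then take a
    similarity-weighted average of neighbours' ratings for the item."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == target or R[other, item] == 0:
            continue
        sim = alpha * cosine(R[target], R[other]) + (1 - alpha) * topic_sim[target, other]
        num += sim * R[other, item]
        den += abs(sim)
    return num / den if den else 0.0

print(round(predict(target=0, item=2), 2))  # predicted rating for an unseen resource
```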

Research limitations/implications

The results of this work rely largely on the developed ontology. More experiments are to be conducted with other domain ontologies.

Practical implications

The proposed approach is integrated into Nucleus, a Learning Management System (https://nucleus.amcspsgtech.in). The results are of interest to learners, academicians, researchers and developers of digital libraries. This work also provides insights into the ontology for e-learning to improve personalized learning environments.

Originality/value

This paper computes learner similarity and learning-resource similarity based on ontological knowledge, feedback and ratings on the learning resources. The predictions for the target learner are calculated, and the top-N learning resources are generated by the recommendation engine using CF.

Article
Publication date: 29 November 2023

Tarun Jaiswal, Manju Pandey and Priyanka Tripathi

Abstract

Purpose

The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Typical convolutional neural networks (CNNs) are unable to capture both local and global contextual information effectively and apply a uniform operation to all pixels in an image. To address this, we propose an innovative approach that integrates a dynamic convolution operation at the encoder stage, improving image encoding quality and disease detection. In addition, a decoder based on the gated recurrent unit (GRU) is used for language modeling, and an attention network is incorporated to enhance consistency. This novel combination allows for improved feature extraction, mimicking the expertise of radiologists by selectively focusing on important areas and producing coherent captions with valuable clinical information.

Design/methodology/approach

In this study, we have presented a new report generation approach that utilizes dynamic convolution applied to ResNet-101 (DyCNN) as an encoder (Verelst and Tuytelaars, 2019) and a GRU as a decoder (Dey and Salem, 2017; Pan et al., 2020), along with an attention network (see Figure 1). This integration innovatively extends the capabilities of image encoding and sequential caption generation, representing a shift from conventional CNN architectures. With its ability to dynamically adapt receptive fields, the DyCNN excels at capturing features of varying scales within the CXR images. This dynamic adaptability significantly enhances the granularity of feature extraction, enabling precise representation of localized abnormalities and structural intricacies. By incorporating this flexibility into the encoding process, our model can distil meaningful and contextually rich features from the radiographic data. The attention mechanism, meanwhile, enables the model to selectively focus on different regions of the image during caption generation, assigning different importance weights to different regions and thereby mimicking human perception. In parallel, the GRU-based decoder adds a critical dimension to the process by ensuring smooth, sequential generation of captions.
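
A compact PyTorch sketch of one attention-guided GRU decoding step over an encoder feature map. A random tensor stands in for the dynamic-convolution ResNet-101 features, and the additive attention form and all dimensions are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttnGRUDecoder(nn.Module):
    """One decoding step: attend over the encoder feature map, then update the GRU."""
    def __init__(self, vocab, feat_dim=512, hid=256, emb=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.attn_feat = nn.Linear(feat_dim, hid)
        self.attn_hid = nn.Linear(hid, hid)
        self.attn_score = nn.Linear(hid, 1)
        self.gru = nn.GRUCell(emb + feat_dim, hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, token, h, feats):            # feats: (B, L, feat_dim)
        # Additive attention: weight each spatial location by relevance to h.
        e = self.attn_score(torch.tanh(self.attn_feat(feats)
                                       + self.attn_hid(h).unsqueeze(1)))
        a = torch.softmax(e, dim=1)                 # (B, L, 1) attention weights
        context = (a * feats).sum(dim=1)            # (B, feat_dim) weighted summary
        h = self.gru(torch.cat([self.embed(token), context], dim=-1), h)
        return self.out(h), h, a.squeeze(-1)

# A random tensor stands in for the dynamic-convolution encoder's output here.
B, vocab = 2, 1000
feats = torch.randn(B, 49, 512)                     # e.g. a 7x7 grid of features
dec = AttnGRUDecoder(vocab)
h = torch.zeros(B, 256)
token = torch.zeros(B, dtype=torch.long)            # assumed <start> token id 0
logits, h, attn = dec(token, h, feats)
print(logits.shape, attn.shape)                     # (2, 1000) and (2, 49)
```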

Findings

The findings of this study highlight the significant advancements achieved in chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Experiments conducted using the IU-Chest X-ray datasets showed that the proposed model outperformed other state-of-the-art approaches. The model achieved notable scores, including a BLEU_1 score of 0.591, a BLEU_2 score of 0.347, a BLEU_3 score of 0.277 and a BLEU_4 score of 0.155. These results highlight the efficiency and efficacy of the model in producing precise radiology reports, enhancing image interpretation and clinical decision-making.

Originality/value

This work is the first of its kind to employ a DyCNN as an encoder to extract features from CXR images. In addition, a GRU was utilized as the decoder for language modeling, and attention mechanisms were incorporated into the model architecture.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 14 December 2023

Huaxiang Song, Chai Wei and Zhou Yong

Abstract

Purpose

The paper aims to tackle the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of clustered ground objects and noisy backgrounds. Recent research typically leverages larger-volume models to achieve advanced performance. However, the operating environments of remote sensing commonly cannot provide unconstrained computational and storage resources, so lightweight algorithms with exceptional generalization capabilities are required.

Design/methodology/approach

This study introduces an efficient knowledge distillation (KD) method to build a lightweight yet precise convolutional neural network (CNN) classifier. This method also aims to substantially decrease the training time expenses commonly linked with traditional KD techniques. This approach entails extensive alterations to both the model training framework and the distillation process, each tailored to the unique characteristics of RSIs. In particular, this study establishes a robust ensemble teacher by independently training two CNN models using a customized, efficient training algorithm. Following this, this study modifies a KD loss function to mitigate the suppression of non-target category predictions, which are essential for capturing the inter- and intra-similarity of RSIs.
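
A hedged PyTorch sketch of a distillation loss in this spirit: the usual temperature-scaled KL term plus an up-weighted KL computed over the non-target categories only, so their relative ordering is not suppressed. The decoupled formulation, temperature and weight are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, beta=2.0):
    """Illustrative decoupled-style KD loss: a standard softened KL term plus
    an up-weighted KL over non-target categories only, preserving the
    inter-/intra-class similarity signal carried by those predictions."""
    # Standard soft-label distillation term.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    kl_all = F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

    # Mask out the target class and renormalize over the remaining categories.
    mask = F.one_hot(labels, student_logits.size(1)).bool()
    s_nt = student_logits.masked_fill(mask, float("-inf"))
    t_nt = teacher_logits.masked_fill(mask, float("-inf"))
    kl_nt = F.kl_div(F.log_softmax(s_nt / T, dim=1),
                     F.softmax(t_nt / T, dim=1),
                     reduction="batchmean") * T * T
    return kl_all + beta * kl_nt

# Toy usage with an averaged two-model teacher ensemble.
s = torch.randn(8, 45)                      # student logits, e.g. 45 RSI classes
t = (torch.randn(8, 45) + torch.randn(8, 45)) / 2
y = torch.randint(0, 45, (8,))
print(kd_loss(s, t, y).item())
```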

Findings

This study validated the student model, termed KD-enhanced network (KDE-Net), obtained through the KD process on three benchmark RSI data sets. The KDE-Net surpasses 42 other state-of-the-art methods in the literature published from 2020 to 2023. Compared to the top-ranked method’s performance on the challenging NWPU45 data set, KDE-Net demonstrated a noticeable 0.4% increase in overall accuracy with a significant 88% reduction in parameters. Meanwhile, this study’s reformed KD framework significantly enhances the knowledge transfer speed by at least three times.

Originality/value

This study illustrates that the logit-based KD technique can effectively develop lightweight CNN classifiers for RSI classification without substantial sacrifices in computation and storage costs. Compared to neural architecture search or other methods aiming to provide lightweight solutions, this study’s KDE-Net, based on the inherent characteristics of RSIs, is currently more efficient in constructing accurate yet lightweight classifiers for RSI classification.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084
