Search results

1 – 10 of 265
Article
Publication date: 25 October 2021

Mandeep Kaur, Rajinder Sandhu and Rajni Mohana

The purpose of this study is to verify how effectively scheduling can be done if application categories are segmented and resources are allocated based on their specific category…

Abstract

Purpose

The purpose of this study is to verify how effectively scheduling can be done if application categories are segmented and resources are allocated based on their specific category.

Design/methodology/approach

This paper proposes a scheduling framework for IoT application jobs based upon Quality of Service (QoS) parameters, which works at a coarse-grained level to select a fog environment and at a fine-grained level to select a fog node. The fog environment is chosen considering availability, physical distance, latency and throughput. At the fine-grained (node selection) level, a probability triad (C, M, G) is estimated using the Naïve Bayes algorithm, which gives the probability that a newly submitted application job falls into one of the categories Compute (C) intensive, Memory (M) intensive or GPU (G) intensive.
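As a rough illustration of the node-selection step, the sketch below trains a minimal Gaussian Naïve Bayes classifier from scratch and returns a (C, M, G) probability triad for a new job. The feature layout (CPU/memory/GPU demand) and all training values are hypothetical, not taken from the paper.

```python
import math
from collections import defaultdict

# Toy training data: (cpu, mem, gpu) demand profiles labelled by category.
# Feature values and labels are illustrative only.
TRAIN = [
    ((0.9, 0.2, 0.1), "C"), ((0.8, 0.3, 0.0), "C"),
    ((0.2, 0.9, 0.1), "M"), ((0.3, 0.8, 0.2), "M"),
    ((0.1, 0.2, 0.9), "G"), ((0.2, 0.3, 0.8), "G"),
]

def fit(train):
    """Estimate per-class prior, mean and variance for Gaussian Naive Bayes."""
    groups = defaultdict(list)
    for x, y in train:
        groups[y].append(x)
    model = {}
    for label, rows in groups.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-3)
                 for col, m in zip(zip(*rows), means)]
        model[label] = (n / len(train), means, varis)
    return model

def predict_proba(model, x):
    """Return the (C, M, G) probability triad for a new job profile."""
    scores = {}
    for label, (prior, means, varis) in model.items():
        logp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            logp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        scores[label] = logp
    z = max(scores.values())
    exp = {k: math.exp(v - z) for k, v in scores.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}

model = fit(TRAIN)
triad = predict_proba(model, (0.85, 0.25, 0.05))  # a compute-heavy profile
print(max(triad, key=triad.get))
```

A real scheduler would then route the job to a fog node provisioned for the most probable category.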

Findings

Experiment results showed that the proposed framework performed better than traditional cloud and fog computing paradigms.

Originality/value

The proposed framework combines application types with the computation capabilities of the fog computing environment, which, to the best of the authors' knowledge, has not been done before.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 3
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 10 July 2023

Surabhi Singh, Shiwangi Singh, Alex Koohang, Anuj Sharma and Sanjay Dhir

The primary aim of this study is to detail the use of soft computing techniques in business and management research. Its objectives are as follows: to conduct a comprehensive…

Abstract

Purpose

The primary aim of this study is to detail the use of soft computing techniques in business and management research. Its objectives are as follows: to conduct a comprehensive scientometric analysis of publications in the field of soft computing, to explore the evolution of keywords, to identify key research themes and latent topics and to map the intellectual structure of soft computing in the business literature.

Design/methodology/approach

This research offers a comprehensive overview of the field by synthesising 43 years (1980–2022) of soft computing research from the Scopus database. It employs descriptive analysis, topic modelling (TM) and scientometric analysis.
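The keyword-evolution part of such a scientometric analysis can be approximated with a simple per-decade frequency count; the records and keywords below are invented stand-ins for Scopus metadata.

```python
from collections import Counter, defaultdict

# Illustrative records: (year, keywords) pairs standing in for Scopus metadata.
RECORDS = [
    (1985, ["fuzzy logic", "control"]),
    (1995, ["neural networks", "fuzzy logic"]),
    (2005, ["genetic algorithms", "neural networks"]),
    (2015, ["deep learning", "mcdm"]),
    (2018, ["deep learning", "fault diagnosis"]),
    (2021, ["deep learning", "circular economy"]),
]

def keyword_evolution(records, bucket=10):
    """Count keyword frequencies per decade to trace how topics evolve."""
    by_decade = defaultdict(Counter)
    for year, keywords in records:
        decade = (year // bucket) * bucket
        by_decade[decade].update(keywords)
    return dict(by_decade)

evolution = keyword_evolution(RECORDS)
print(evolution[2010].most_common(1))  # [('deep learning', 2)]
```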

Findings

This study's co-citation analysis identifies three primary categories of research in the field: the components, the techniques and the benefits of soft computing. Additionally, this study identifies 16 key study themes in the soft computing literature using TM, including decision-making under uncertainty, multi-criteria decision-making (MCDM), the application of deep learning in object detection and fault diagnosis, circular economy and sustainable development and a few others.

Practical implications

This analysis offers a valuable understanding of soft computing for researchers and industry experts and highlights potential areas for future research.

Originality/value

This study uses scientific mapping and performance indicators to analyse a large corpus of 4,512 articles in the field of soft computing. It makes significant contributions to the intellectual and conceptual framework of soft computing research by providing a comprehensive overview of the literature on soft computing covering a period of four decades and identifying significant trends and topics to direct future research.

Details

Industrial Management & Data Systems, vol. 123 no. 8
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 6 September 2022

Elena Fedorova, Pavel Drogovoz, Anna Popova and Vladimir Shiboldenkov

The paper examines whether, along with the financial performance, the disclosure of research and development (R&D) expenses, patent portfolios, patent citations and innovation…

Abstract

Purpose

The paper examines whether, along with the financial performance, the disclosure of research and development (R&D) expenses, patent portfolios, patent citations and innovation activities affect the market capitalization of Russian companies.

Design/methodology/approach

The paper opted for a set of techniques including bag-of-words (BoW) to retrieve additional innovation-related data from companies' annual reports, self-organizing maps (SOM) to perform visual exploratory analysis and panel data regression (PDR) to conduct confirmatory analysis using data on 74 Russian publicly traded companies for the period 2013–2019.
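A minimal sketch of the bag-of-words step, assuming a hypothetical innovation lexicon (the paper's compiled lexicons are not reproduced here): it counts lexicon hits in an annual-report passage to produce an innovation-disclosure score.

```python
import re
from collections import Counter

# Hypothetical innovation lexicon; the paper's compiled lexicons differ.
LEXICON = {"patent", "r&d", "innovation", "prototype", "license"}

def bow_innovation_score(report_text):
    """Bag-of-words: count lexicon hits in an annual-report text."""
    tokens = re.findall(r"[a-z&]+", report_text.lower())
    counts = Counter(t for t in tokens if t in LEXICON)
    return counts, sum(counts.values())

text = ("The company filed three patent applications and doubled "
        "R&D spending on innovation.")
counts, score = bow_innovation_score(text)
print(score)  # 3 lexicon hits: patent, r&d, innovation
```

In the paper this kind of count feeds the exploratory (SOM) and confirmatory (panel regression) stages as an innovation disclosure indicator.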

Findings

The paper observes that the disclosure of nonfinancial data on R&D, patents and primarily product and marketing innovations positively affects the market capitalization of the largest Russian companies, which are mainly focused on energy, raw materials and utilities and are operating on international markets. The study suggests that these companies are financially well-resourced to innovate at risk and thus to provide positive signals to stakeholders and external agents.

Research limitations/implications

Our findings are important to management, investors, financial analysts, regulators and various agencies providing guidance on corporate governance and sustainability reporting. However, the authors acknowledge that the research results may lack generalizability due to the sample covering a single national context. Researchers are encouraged to test the proposed approach further on other countries' data by using the compiled lexicons.

Originality/value

The study aims to expand the domains of signaling theory and market valuation by providing new insights into the impact that companies' reporting on R&D, patents and innovation activities has on market capitalization. New nonfinancial factors that previous research does not investigate – innovation disclosure indicators (IDI) – are tested.

Details

Kybernetes, vol. 52 no. 12
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 16 August 2023

Jialiang Xie, Shanli Zhang, Honghui Wang and Mingzhi Chen

With the rapid development of Internet technology, cybersecurity threats such as security loopholes, data leaks, network fraud, and ransomware have become increasingly prominent…

Abstract

Purpose

With the rapid development of Internet technology, cybersecurity threats such as security loopholes, data leaks, network fraud, and ransomware have become increasingly prominent, and organized and purposeful cyberattacks have increased, posing more challenges to cybersecurity protection. Therefore, reliable network risk assessment methods and effective network security protection schemes are urgently needed.

Design/methodology/approach

Based on the dynamic behavior patterns of attackers and defenders, a Bayesian network attack graph is constructed, and a multitarget dynamic risk assessment model is proposed based on network availability, network utilization impact and vulnerability attack possibility. A self-organizing multiobjective evolutionary algorithm based on grey wolf optimization is then proposed; the authors use this algorithm to solve the multiobjective risk assessment model, obtaining a variety of different attack strategies.
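One way to picture the Bayesian attack graph component: if each node carries a local exploit probability conditioned on its parents being compromised, the probability of reaching a target along a chain is the product of the probabilities on the path. The graph structure and numbers below are illustrative, not the paper's case study.

```python
# Minimal Bayesian attack graph: node -> (parents, local exploit probability).
# Structure and probabilities are invented for illustration.
GRAPH = {
    "web_server": ([], 0.8),
    "app_server": (["web_server"], 0.6),
    "database":   (["app_server"], 0.5),
}

def compromise_probability(graph, node, memo=None):
    """P(node compromised) = local exploit prob * product over parents."""
    memo = {} if memo is None else memo
    if node in memo:
        return memo[node]
    parents, p_local = graph[node]
    p = p_local
    for parent in parents:
        p *= compromise_probability(graph, parent, memo)
    memo[node] = p
    return p

print(round(compromise_probability(GRAPH, "database"), 3))  # 0.8*0.6*0.5 = 0.24
```

The multiobjective search then trades such compromise probabilities off against availability and utilization impact.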

Findings

The experimental results demonstrate that the method yields 29 distinct attack strategies, from which the attacker's preferences can be obtained. Furthermore, the method efficiently addresses the security assessment problem involving multiple decision variables, thereby providing constructive guidance for secure network construction, security reinforcement and active defense.

Originality/value

A method for network risk assessment is given. The study proposes a multiobjective dynamic risk assessment model based on network availability, network utilization impact and the possibility of vulnerability attacks. The example demonstrates the effectiveness of the method in addressing network security risks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 30 March 2023

Wilson Charles Chanhemo, Mustafa H. Mohsini, Mohamedi M. Mjahidi and Florence U. Rashidi

This study explores challenges facing the applicability of deep learning (DL) in software-defined network (SDN)-based campus networks. The study explains in depth the…

Abstract

Purpose

This study explores challenges facing the applicability of deep learning (DL) in software-defined network (SDN)-based campus networks. The study explains in depth the automation problem that exists in traditional campus networks and how SDN and DL can provide mitigating solutions. It further highlights some challenges that need to be addressed in order to successfully implement SDN and DL in campus networks and make them better than traditional networks.

Design/methodology/approach

The study uses a systematic literature review. Studies on DL relevant to campus networks are presented for different use cases, and their limitations are given for further research.

Findings

The analysis of the selected studies showed that the availability of specific training datasets for campus networks, SDN and DL interfacing, and integration in production networks are key issues that must be addressed to successfully deploy DL in SDN-enabled campus networks.

Originality/value

This study reports on challenges associated with implementation of SDN and DL models in campus networks. It contributes towards further thinking and architecting of proposed SDN-based DL solutions for campus networks. It highlights that single problem-based solutions are harder to implement and unlikely to be adopted in production networks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 17 November 2022

Navid Mohammadi, Nader Seyyedamiri and Saeed Heshmati

The purpose of this study is to conduct a systematic mapping review, a systematic literature review method, of the new product development literature by…

Abstract

Purpose

The purpose of this study is to conduct a systematic mapping review, a systematic literature review method, of the new product development literature using text mining and to map the results of this review.

Design/methodology/approach

This research was conducted with the aim of systematically reviewing the literature on the design and development of products based on textual data. It asks how text data and text mining methods can be used for the design and development of new products.

Findings

This review identifies the most popular algorithms in this field, the most popular application areas for these approaches, the types of data used, the software employed and the research gaps that remain.

Originality/value

The contribution of this review is the creation of a comprehensive, macro-level map of research in this field from various aspects, identifying its strengths and weaknesses through a systematic mapping review.

Details

Nankai Business Review International, vol. 14 no. 4
Type: Research Article
ISSN: 2040-8749

Keywords

Article
Publication date: 16 August 2023

Fanshu Zhao, Jin Cui, Mei Yuan and Juanru Zhao

The purpose of this paper is to present a weakly supervised learning method to perform health evaluation and predict the remaining useful life (RUL) of rolling bearings.

Abstract

Purpose

The purpose of this paper is to present a weakly supervised learning method to perform health evaluation and predict the remaining useful life (RUL) of rolling bearings.

Design/methodology/approach

Based on the principle that bearing health degrades with the increase of service time, a weak label qualitative pairing comparison dataset for bearing health is extracted from the original time series monitoring data of bearing. A bearing health indicator (HI) quantitative evaluation model is obtained by training the delicately designed neural network structure with bearing qualitative comparison data between different health statuses. The remaining useful life is then predicted using the bearing health evaluation model and the degradation tolerance threshold. To validate the feasibility, efficiency and superiority of the proposed method, comparison experiments are designed and carried out on a widely used bearing dataset.
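The weak-label pairing idea can be sketched directly: since health is assumed to degrade monotonically with service time, every ordered pair of samples yields a qualitative "later is less healthy" label without any ground-truth health indicator. The feature values below are invented.

```python
import itertools

def pairwise_labels(series):
    """Build (earlier, later, label) pairs; label=1 means health declined.

    `series` is assumed to be ordered by service time, so for any i < j the
    sample at j is taken to be in worse health than the sample at i.
    """
    pairs = []
    for (i, xi), (j, xj) in itertools.combinations(enumerate(series), 2):
        # i < j by construction of combinations, so xj is the later sample
        pairs.append((xi, xj, 1))
    return pairs

# Hypothetical vibration feature showing a degradation trend over time.
vibration_features = [0.11, 0.13, 0.19, 0.34]
dataset = pairwise_labels(vibration_features)
print(len(dataset))  # C(4, 2) = 6 weakly labelled pairs
```

A comparison network trained on such pairs can then be read out as a quantitative health indicator, as the abstract describes.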

Findings

The method achieves the transformation of bearing health from qualitative comparison to quantitative evaluation via a learning algorithm, which is promising in industrial equipment health evaluation and prediction.

Originality/value

The method achieves the transformation of bearing health from qualitative comparison to quantitative evaluation via a learning algorithm, which is promising in industrial equipment health evaluation and prediction.

Details

Engineering Computations, vol. 40 no. 7/8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 3 November 2023

Salam Abdallah and Ashraf Khalil

This study aims to understand and lay a foundation for how analytics has been used in depression management. It conducts a systematic literature review using two…


Abstract

Purpose

This study aims to understand and lay a foundation for how analytics has been used in depression management. It conducts a systematic literature review using two techniques – text mining and manual review. The proposed methodology would aid researchers in identifying key concepts and research gaps, which, in turn, will help them establish the theoretical background supporting their empirical research objectives.

Design/methodology/approach

This paper explores a hybrid methodology for literature review (HMLR), using text mining prior to systematic manual review.
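The text-mining half of such an HMLR can be approximated with a simple term-frequency pass that surfaces candidate key concepts before the manual review; the abstracts and stopword list below are illustrative, not the study's corpus.

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "in", "for", "a", "to", "using", "on"}

# Illustrative abstract snippets standing in for the retrieved corpus.
ABSTRACTS = [
    "Machine learning for depression detection in social media",
    "Sentiment analysis of depression forums using machine learning",
    "Depression management analytics in clinical settings",
]

def key_concepts(abstracts, top=3):
    """Surface frequent terms as candidate key concepts before manual review."""
    counts = Counter()
    for text in abstracts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in STOPWORDS)
    return counts.most_common(top)

print(key_concepts(ABSTRACTS))
```

The manual review stage then filters and organises these candidates into the knowledge structure the abstract mentions.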

Findings

The proposed rapid methodology is an effective tool to automate and speed up the process required to identify key and emerging concepts and research gaps in any specific research domain while conducting a systematic literature review. It assists in populating a research knowledge graph that does not reach all semantic depths of the examined domain yet provides some science-specific structure.

Originality/value

This study presents a new methodology for conducting a literature review for empirical research articles. This study has explored an “HMLR” that combines text mining and manual systematic literature review. Depending on the purpose of the research, these two techniques can be used in tandem to undertake a comprehensive literature review, by combining pieces of complex textual data together and revealing areas where research might be lacking.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 20 February 2023

Zakaria Sakyoud, Abdessadek Aaroud and Khalid Akodadi

The main goal of this research work is the optimization of the purchasing business process in the Moroccan public sector in terms of transparency and budgetary optimization. The…

Abstract

Purpose

The main goal of this research work is the optimization of the purchasing business process in the Moroccan public sector in terms of transparency and budgetary optimization. The authors have worked on the public university as an implementation field.

Design/methodology/approach

The design of the research work followed the design science research (DSR) methodology for information systems. DSR is a research paradigm wherein a designer answers questions relevant to human problems through the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. The authors have adopted a techno-functional approach. The technical part consists of the development of an intelligent recommendation system that supports the choice of optimal information technology (IT) equipment for decision-makers. This intelligent recommendation system relies on a set of functional and business concepts, namely the Moroccan normative laws and Control Objectives for Information and Related Technology's (COBIT) guidelines in information system governance.
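The sentiment-analysis step can be caricatured with a lexicon-based scorer that ranks equipment offers by mean review sentiment, a lightweight stand-in for the NLP pipeline the authors describe; the lexicon, offers and reviews below are all hypothetical.

```python
# Hypothetical sentiment lexicon; a real system would use a trained NLP model.
POSITIVE = {"reliable", "fast", "excellent", "durable"}
NEGATIVE = {"slow", "faulty", "noisy", "fragile"}

def review_score(review):
    """Score one review: +1 per positive word, -1 per negative word."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_offers(offers):
    """Rank equipment offers by mean review sentiment (highest first)."""
    scored = [(sum(map(review_score, revs)) / len(revs), name)
              for name, revs in offers.items()]
    return [name for _, name in sorted(scored, reverse=True)]

offers = {
    "laptop_a": ["fast and reliable", "excellent build"],
    "laptop_b": ["slow startup", "noisy fan but durable"],
}
print(rank_offers(offers))  # laptop_a ranked first
```

In the paper's system, such scores would be one input among the normative and COBIT-derived criteria used to recommend an optimal purchase.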

Findings

The modeling of business processes in public universities is established using business process model and notation (BPMN) in accordance with official regulations. The set of BPMN models constitute a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary wastes, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature. Implementation of artificial intelligence techniques can bring great value in terms of transparency and fluidity in purchasing business process execution.

Research limitations/implications

Business limitations: First, the proposed system was modeled to handle one type of product, computer-related equipment; the authors intend to extend the model to other types of products in future work. Second, the system proposes an optimal purchasing order and assumes that decision makers will rely on it to choose between offers. As a perspective, the authors plan to work on complete automation of the workflow to also include vendor selection and offer validation.

Technical limitations: Natural language processing (NLP) is a widely used sentiment analysis (SA) technique that enabled the authors to validate the proposed system. Even when working on samples of datasets, the authors noticed NLP's dependency on huge computing power. The authors intend to experiment with learning- and knowledge-based SA and to assess their computing power consumption and analysis accuracy compared to NLP. Another technical limitation is related to the web scraping technique: users' reviews are crucial for the system, and to guarantee timely and reliable reviews, the system has to search websites automatically, which confronts the authors with the limitations of web scraping, such as constantly changing website structures and scraping restrictions.

Practical implications

The modeling of business processes in public universities is established using BPMN in accordance with official regulations. The set of BPMN models constitute a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary wastes, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature.

Originality/value

The adopted techno-functional approach enabled the authors to bring information system governance from a highly abstract level to a practical implementation where the theoretical best practices and guidelines are transformed to a tangible application.

Details

Kybernetes, vol. 53 no. 5
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 18 March 2024

Raj Kumar Bhardwaj, Ritesh Kumar and Mohammad Nazim

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest…

Abstract

Purpose

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest level of precision and to identify the metasearch engine that is most likely to return the most relevant search results.

Design/methodology/approach

The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). These queries vary in complexity, falling into three types: simple, phrase and complex. The precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.
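The precision and duplicate counts used in such an evaluation reduce to a few lines; the result lists and relevance judgments below are invented, not the study's data.

```python
# Precision = relevant results retrieved / total results retrieved.
def precision(results, relevant):
    """Fraction of retrieved results judged relevant."""
    return sum(1 for r in results if r in relevant) / len(results)

def duplicates(results):
    """Count results that repeat an earlier entry in the same list."""
    seen, dupes = set(), 0
    for r in results:
        if r in seen:
            dupes += 1
        seen.add(r)
    return dupes

results = ["u1", "u2", "u3", "u2", "u5"]   # URLs returned for one query
relevant = {"u1", "u2", "u5"}              # judged relevant for that query
print(precision(results, relevant), duplicates(results))  # 0.8 1
```

Averaging such per-query precision values across the 4-Q-2-S and 6-Q-3-S phases gives the figures the study compares across the four metasearch engines.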

Findings

The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs. Conversely, DuckDuckGo exhibited consistent performance across both phases of the study.

Research limitations/implications

The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.

Practical implications

The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.

Originality/value

The study is the first of its kind to evaluate these four metasearch engines; no similar study has been conducted in the past to measure their performance.

Details

Performance Measurement and Metrics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1467-8047

Keywords
