Search results

1 – 10 of 220
Article
Publication date: 30 June 2023

Ruan Wang, Jun Deng, Xinhui Guan and Yuming He

With the development of data mining technology, diverse and broader domain knowledge can be extracted automatically. However, the research on applying knowledge mapping and data…

Abstract

Purpose

With the development of data mining technology, diverse and broader domain knowledge can be extracted automatically. However, the research on applying knowledge mapping and data visualization techniques to genealogical data is limited. This paper aims to fill this research gap by providing a systematic framework and process guidance for practitioners seeking to uncover hidden knowledge from genealogy.

Design/methodology/approach

Based on a literature review of current knowledge reasoning research on genealogy, the authors constructed an integrated framework for knowledge inference and visualization applications using a knowledge graph. The authors then applied this framework in a case study using “Manchu Clan Genealogy” as the data source.
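
As an illustration of the data and inference layers described above, the following is a minimal sketch, not the authors' implementation, of how a genealogy record might be decomposed into person, place and time entities in an RDF knowledge graph and enriched with a simple reasoning rule. The rdflib library, the namespace URI, the entity names and the childOf/ancestorOf predicates are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): decompose one genealogy record
# into RDF triples and apply a toy inference rule with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef

GEN = Namespace("http://example.org/genealogy/")  # hypothetical namespace

g = Graph()
g.bind("gen", GEN)

person = URIRef(GEN["person/example_descendant"])
g.add((person, GEN.name, Literal("Example descendant")))
g.add((person, GEN.bornIn, URIRef(GEN["place/Shengjing"])))       # place entity
g.add((person, GEN.bornOn, Literal("1750")))                      # time entity
g.add((person, GEN.childOf, URIRef(GEN["person/example_ancestor"])))

# Toy reasoning rule: every childOf triple implies an inverse ancestorOf triple
for child, _, parent in list(g.triples((None, GEN.childOf, None))):
    g.add((parent, GEN.ancestorOf, child))

print(g.serialize(format="turtle"))
```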

Findings

The case study shows that the proposed framework can effectively decompose and reconstruct genealogy. It demonstrates the reasoning, discovery, and web visualization application process of implicit information in genealogy. It enhances the effective utilization of Manchu genealogy resources by highlighting the intricate relationships among people, places, and time entities.

Originality/value

This study proposes a framework for genealogy knowledge reasoning and visual analysis utilizing a knowledge graph, comprising five layers: the target layer, the resource layer, the data layer, the inference layer and the application layer. It helps gather scattered genealogy information and establish a data network with semantic correlations, while establishing reasoning rules that enable inference discovery and visualization of hidden relationships.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Open Access
Article
Publication date: 2 April 2024

Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman

In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an…

Abstract

Purpose

In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.

Design/methodology/approach

On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: lexical algorithm, support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted to investigate the value and inter-rater agreement of automatically assigned classes, on a sample of 60 records.
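
For readers unfamiliar with Annif, the sketch below shows roughly how a running Annif instance can be queried for subject suggestions over its REST API. The project identifier, the port and the sample text are assumptions and would need to match the configuration actually used in the study.

```python
# Hedged sketch: ask a locally running Annif instance for DDC suggestions.
# The project id "ddc-ensemble" and the endpoint URL are assumptions.
import requests

ANNIF_URL = "http://localhost:5000/v1/projects/ddc-ensemble/suggest"

record_text = "En introduktion till maskininlärning och informationssökning."
resp = requests.post(ANNIF_URL, data={"text": record_text, "limit": 5})
resp.raise_for_status()

for hit in resp.json().get("results", []):
    # Each suggestion typically carries a notation (e.g. a DDC class),
    # a score and a label.
    print(hit.get("notation"), round(hit.get("score", 0.0), 3), hit.get("label"))
```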

Findings

The best results were achieved using the ensemble approach that achieved 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.

Originality/value

The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 26 February 2024

Sabrine Cherni and Anis Ben Amar

This study aims to examine how digitalization affects the work efficiency of the Shariah Supervisory Board (SSB) in Islamic banks.

Abstract

Purpose

This study aims to examine how digitalization affects the work efficiency of the Shariah Supervisory Board (SSB) in Islamic banks.

Design/methodology/approach

This study uses panel data analysis of annual report disclosures over the past 10 years. The authors have selected 79 Islamic banks for the period ranging from 2012 to 2021. The criteria for SSB efficiency used in this research are disclosure of Zakat and disclosure in the SSB report.
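
As a rough illustration of the kind of panel specification this design implies, the sketch below fits a two-way fixed-effects regression of an SSB-efficiency score on a digitalization index using the linearmodels package. The variable names, the synthetic data and the control variable are hypothetical and do not reproduce the authors' actual model.

```python
# Hedged sketch: two-way fixed-effects panel regression on synthetic data
# standing in for 79 banks over 2012-2021 (all variable names hypothetical).
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [range(79), range(2012, 2022)], names=["bank_id", "year"]
)
df = pd.DataFrame({
    "digitalization": rng.uniform(0, 1, len(idx)),
    "bank_size": rng.normal(10, 1, len(idx)),
}, index=idx)
df["ssb_efficiency"] = 0.3 * df["digitalization"] + rng.normal(0, 0.1, len(idx))

model = PanelOLS.from_formula(
    "ssb_efficiency ~ digitalization + bank_size + EntityEffects + TimeEffects",
    data=df,
)
print(model.fit(cov_type="clustered", cluster_entity=True))
```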

Findings

The econometric results show that digitalization has a positive effect on improving the work efficiency of the SSB in Islamic banks. Accordingly, the authors provide evidence that the higher the bank's digital engagement, the higher the quality of the SSB.

Originality/value

The findings highlight the need to improve the current understanding of SSB structures and governance mechanisms that can better assist Islamic banks in achieving effective compliance with recent governance and accounting reforms. Moreover, Islamic banks are well placed to implement and activate digitalization, provided that executives believe in it and that legislation supports and serves them.

Details

Journal of Islamic Accounting and Business Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1759-0817

Article
Publication date: 8 August 2023

Mengkai Liu and Meng Luo

The poor capacity of prefabricated construction cost estimation is the essential reason for the low profitability of the general contractor. Therefore, this study aims to focus on…

Abstract

Purpose

Poor capacity for prefabricated construction cost estimation is an essential reason for general contractors' low profitability. This study therefore takes the cost estimation of prefabricated construction as its research object, aiming to enhance the accuracy of total project cost estimation for general contractors and ultimately improve profitability.

Design/methodology/approach

This study used Vensim PLE software to establish a system dynamics model. In the modeling process, a systematic research review was used to identify cost-influencing factors; ABC classification and the analytic hierarchy process were used to score and determine the weights of influencing factors.
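
The analytic hierarchy process step mentioned above can be illustrated with a short sketch: factor weights are derived as the principal eigenvector of a pairwise comparison matrix, followed by a consistency check. The 3x3 matrix below is illustrative only and is not taken from the study.

```python
# Hedged sketch: AHP weights from a pairwise comparison matrix (illustrative).
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # e.g. prefabrication rate vs. two other cost factors
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio, using Saaty's random index RI = 0.58 for n = 3
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```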

Findings

The total cost error obtained by the model is less than 2% compared with the actual value, so the model can be used for cost estimation and analysis. The analysis results indicate that there are seven key factors, among which the prefabrication rate has the most significant impact. Furthermore, the model can provide the extreme cost range; the minimum cost can be reduced by 13% from the value in the case. The factor values can form a cost control strategy for general contractors.

Practical implications

The cost of prefabricated buildings can be estimated well, and deciding the prefabrication rate is crucial. Costs can be reduced through correct cost control strategies during bidding and subcontracting, and these strategies can follow the direction indicated by the model.

Originality/value

A systemic, quantitative and qualitative analysis of cost estimation of prefabricated buildings for general contractors has been conducted. A mathematical model has been developed and validated to facilitate more effective cost-control measures.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 28 April 2023

Xiaohua Shi, Chen Hao, Ding Yue and Hongtao Lu

Traditional library book recommendation methods are mainly based on association rules and user profiles. They may help to learn about students' interest in different types of…

Abstract

Purpose

Traditional library book recommendation methods are mainly based on association rules and user profiles. They may help to learn about students' interest in different types of books, e.g. students majoring in science and engineering tend to pay more attention to computer books. Nevertheless, most of them still struggle to identify users' interests accurately. To solve this problem, the authors propose a novel embedding-driven model called InFo, which draws on users' intrinsic interests and academic preferences to provide personalized library book recommendations.

Design/methodology/approach

The authors analyze the characteristics and challenges in real library book recommendation and then propose a method considering feature interactions. Specifically, the authors leverage an attention unit to extract students' preferences for different categories of books from their borrowing history, after which the unit's output is fed into a Factorization Machine together with other context-aware features to learn students' hybrid interests. The authors employ a convolutional neural network to extract high-order correlations among feature maps, which are obtained by the outer product between feature embeddings.
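
A simplified PyTorch sketch of the three ingredients named above, attention pooling over the borrowing history, a factorization machine term and a CNN over an outer product of embeddings, is shown below. Layer sizes, the way the components are combined and all tensor shapes are assumptions for illustration; this is not the authors' InFo implementation.

```python
# Hedged sketch of an attention + FM + CNN recommendation scorer (illustrative).
import torch
import torch.nn as nn

class InFoSketch(nn.Module):
    def __init__(self, n_books, n_features, dim=16):
        super().__init__()
        self.book_emb = nn.Embedding(n_books, dim)
        self.feat_emb = nn.Embedding(n_features, dim)
        self.attn = nn.Linear(dim, 1)                 # attention scores over history
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.out = nn.Linear(dim + 1 + 8, 1)

    def forward(self, history, feats):
        # history: (B, L) borrowed-book ids; feats: (B, F) context feature ids
        h = self.book_emb(history)                    # (B, L, d)
        a = torch.softmax(self.attn(h), dim=1)        # (B, L, 1)
        interest = (a * h).sum(dim=1)                 # attention-pooled interest (B, d)

        v = self.feat_emb(feats)                      # (B, F, d)
        # FM second-order term: 0.5 * ((sum v)^2 - sum v^2), reduced to a scalar
        fm = 0.5 * ((v.sum(1) ** 2) - (v ** 2).sum(1)).sum(1, keepdim=True)

        # Outer product between pooled interest and mean feature embedding
        maps = torch.einsum("bi,bj->bij", interest, v.mean(1)).unsqueeze(1)
        high_order = self.cnn(maps).flatten(1)        # (B, 8)

        return torch.sigmoid(self.out(torch.cat([interest, fm, high_order], dim=1)))

model = InFoSketch(n_books=5000, n_features=20)
score = model(torch.randint(0, 5000, (4, 10)), torch.randint(0, 20, (4, 5)))
print(score.shape)  # torch.Size([4, 1])
```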

Findings

The authors evaluate the model by conducting experiments on a real-world dataset from one university. The results show that the model outperforms other state-of-the-art methods in terms of two metrics, Recall and NDCG.

Research limitations/implications

The model requires a sufficient amount of data to prevent overfitting during training, and the proposed method may face the user/item cold-start challenge.

Practical implications

The embedding-driven book recommendation model could be applied in real libraries to provide valuable recommendations based on readers' preferences.

Originality/value

The proposed method is a practical embedding-driven model that accurately captures diverse user preferences.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Open Access
Article
Publication date: 18 November 2021

Fauziah Eddyono, Dudung Darusman, Ujang Sumarwan and Fauziah Sunarminto

This study aims to find a dynamic model in an effort to optimize tourism performance in ecotourism destinations. The model structure is built based on competitive performance in…

Abstract

Purpose

This study aims to find a dynamic model in an effort to optimize tourism performance in ecotourism destinations. The model structure is built based on competitive performance in geographic areas and the application of ecotourism elements that are integrated with big data innovation through artificial intelligence technology.

Design/methodology/approach

Data analysis is performed through dynamic system modeling. Simulations are carried out for three models: first, the existing (baseline) model; second, Scenario 1, in which a causal loop applies big data-based artificial intelligence innovation to the ecotourism elements; and third, Scenario 2, in which a causal loop applies big data-based artificial intelligence to both the ecotourism elements and destination competitiveness.
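
A generic stock-and-flow sketch, not the authors' model, of how a baseline run and two scenario runs with a strengthened reinforcing loop can be compared is given below; the growth rates and the single aggregate stock are purely illustrative.

```python
# Hedged sketch: compare baseline and two scenarios in a toy stock-and-flow model.
import numpy as np

def simulate(growth_boost, years=10, stock0=100.0, base_rate=0.05):
    """Euler integration of a single aggregate tourism-performance stock."""
    stock = np.empty(years + 1)
    stock[0] = stock0
    for t in range(years):
        inflow = stock[t] * (base_rate + growth_boost)   # reinforcing loop
        stock[t + 1] = stock[t] + inflow
    return stock

baseline   = simulate(growth_boost=0.00)
scenario_1 = simulate(growth_boost=0.02)   # AI/big data applied to ecotourism elements
scenario_2 = simulate(growth_boost=0.04)   # plus destination competitiveness
print(round(baseline[-1], 1), round(scenario_1[-1], 1), round(scenario_2[-1], 1))
```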

Findings

This study provides empirical insight into destination competitiveness performance and the performance of implementing ecotourism elements, showing that integrating them with big data innovation can substantially accelerate the growth of sustainable tourism performance.

Research limitations/implications

This study does not use primary data; instead, it uses publicly accessible secondary data from official sources.

Practical implications

The paper outlines implications for the development of big data-based intelligent technology and also notes the need for policy innovation.

Social implications

Sustainable tourism development.

Originality/value

This study extends theory on the competitiveness of ecotourism destinations.

Details

Journal of Tourism Futures, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2055-5911

Article
Publication date: 22 June 2022

Shubangini Patil and Rekha Patil

Until now, a lot of research has been done and applied to provide security and original data from one user to another, such as third-party auditing and several schemes for…

Abstract

Purpose

Until now, a great deal of research has been done and applied to secure data passed from one user to another, such as third-party auditing and several schemes for securing the data, such as generating keys with encryption algorithms like Rivest–Shamir–Adleman. Some of the related prior work is as follows. The remote damage control resuscitation (RDCR) scheme by Yan et al. (2017) is based on minimum bandwidth and enables a third party to perform public integrity verification. Although it supports repair management for corrupted data and tries to recover the original data, in practice it fails to do so, and it thus incurs more computation and communication cost than the proposed system. Chen et al. (2015) developed an idea for cloud storage data sharing using broadcast encryption. This technique aims to accomplish both broadcast data and dynamic sharing, allowing users to join and leave a group without affecting the electronic press kit (EPK). The theoretical notion was sound and new, but the system's practicality and efficiency were not acceptable, and its security was also jeopardised because it proposed adding a member without altering any keys. Jiang and Guo (2017) investigated an identity-based encryption strategy for data sharing, together with key management and metadata techniques to improve model security; forward and reverse ciphertext security is supplied. However, it is more difficult to put into practice, and one of its limitations is that it can only be used for very large amounts of cloud storage; it extends support for dynamic data modification by batch auditing. The important feature of the secure and efficient privacy-preserving provable data possession scheme for cloud storage was to support every important feature, including data dynamics, privacy preservation, batch auditing and blocker verification, for an untrusted, outsourced storage model (Pathare and Chouragade, 2017). A homomorphic signature mechanism based on a new identity was devised to avoid the use of public key certificates; this signature system was shown to be resistant to identity attacks in the random oracle model and to forged-message attacks (Nayak and Tripathy, 2018; Lin et al., 2017). When storing data in a public cloud, one issue is that the data owner must give an enormous number of keys to the users in order for them to access the files. Here, the key-aggregate searchable encryption (KASE) scheme was publicly unveiled for the first time: while sharing a huge number of documents, the data owner simply has to supply a single aggregate key to the user, and the user only needs to provide a single trapdoor. Although the concept is innovative, the KASE technique does not apply to the increasingly common manufactured cloud. Cui et al. (2016) claim that as the amount of data grows, the distribution management system (DMS) will be unable to handle it. As a result, various provable data possession (PDP) schemes have been developed, and practically all of them lack security. Certificate-based PDP built on bilinear pairing was therefore introduced; because it is both robust and efficient, it is mostly applicable in DMS. The main purpose of this research is to design and implement a secure cloud infrastructure for sharing group data. This research provides an efficient and secure protocol for multiple-user data in the cloud, allowing many users to easily share data.

Design/methodology/approach

The methodology and contribution of this paper are as follows. The major goal of this study is to design and implement a secure cloud infrastructure for sharing group data, and it provides an efficient and secure protocol for multiple-user data in the cloud, allowing several users to share data without difficulty. The selection scheme design (SSD) comprises two algorithms: Algorithm 1 is designed for a limited number of users, and Algorithm 2 is redesigned for multiple users. Further, the authors design the SSD-security protocol, which comprises a three-phase model, namely Phase 1, Phase 2 and Phase 3: Phase 1 generates the parameters and distributes the private keys, Phase 2 generates the general key for all available users, and Phase 3 is designed to prevent dishonest users from taking part in data sharing.
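
To make the three-phase structure concrete, the toy sketch below mirrors it with elementary arithmetic: per-user key generation, derivation of one general key from all public shares and a verification step that excludes a dishonest user. It is deliberately simplified, omits the pairing-based cryptography performed with the PBC library and is not the authors' SSD protocol; the modulus, generator and hash choice are assumptions.

```python
# Toy sketch only: NOT the SSD protocol, just its three-phase shape.
import hashlib
import secrets

P = 2**255 - 19          # assumed public prime modulus
G = 5                    # assumed public generator

def phase1_keygen(n_users):
    """Phase 1: generate a private key and matching public share per user."""
    priv = [secrets.randbelow(P - 2) + 1 for _ in range(n_users)]
    pub = [pow(G, x, P) for x in priv]
    return priv, pub

def phase2_general_key(pub_shares):
    """Phase 2: derive one general key for the group from all public shares."""
    digest = hashlib.sha256()
    for share in sorted(pub_shares):
        digest.update(share.to_bytes(32, "big"))
    return digest.hexdigest()

def phase3_verify(priv_key, claimed_pub):
    """Phase 3: a user whose share does not match their key is excluded."""
    return pow(G, priv_key, P) == claimed_pub

priv, pub = phase1_keygen(3)
print("general key:", phase2_general_key(pub)[:16], "...")
print("user 0 honest:", phase3_verify(priv[0], pub[0]))
```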

Findings

Data sharing in cloud computing provides unlimited computational resources and storage to enterprises and individuals; at the same time, cloud computing raises several privacy and security concerns such as fault tolerance, reliability, confidentiality and data integrity. Furthermore, the key consensus mechanism is a fundamental cryptographic primitive for secure communication; motivated by this, the authors developed the SSD mechanism, which embraces multiple users in the data-sharing model.

Originality/value

Files shared in the cloud should be encrypted for security purposes; these files are later decrypted for users to access them. Furthermore, the key consensus process is a crucial cryptographic primitive for secure communication, and the authors accordingly devised the SSD mechanism, which incorporates numerous users in the data-sharing model. For the evaluation of the SSD method, the authors considered an ideal system environment, using Java as the programming language and Eclipse as the integrated development environment for the proposed model's evaluation. The hardware configuration comprises 4 GB of RAM and an i7 processor, and the authors used the PBC library for the pairing operations (PBC Library, 2022). Furthermore, the number of users is varied for comparison with the existing methodology RDIC (Li et al., 2020). For the purposes of the SSD-security protocol, a prime number is chosen as the number of users in this work.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 24 May 2023

Cheuk Hang Au, Barney Tan and Chunmian Ge

The success of sharing economy (SE) platforms has made it attractive for many firms to adopt this business model. However, the inherent weaknesses of these platforms, such as…

Abstract

Purpose

The success of sharing economy (SE) platforms has made it attractive for many firms to adopt this business model. However, the inherent weaknesses of these platforms, such as their unstandardized service quality, the burden of maintenance on resource owners and the threat of multi-homing, have become increasingly apparent. Previous prescriptions for addressing these weaknesses, however, are limited because they do not account for factors such as compliance costs and information asymmetry, and tend to solve the problem on only one side of the platform at the expense of the others. By exploring the strategies deployed and actions undertaken across the development of Xbed, a successful accommodation-sharing platform in China, this study aims to explore an alternative solution that would overcome the aforementioned weaknesses without the corresponding compromises.

Design/methodology/approach

The authors conducted a case study consisting of secondary data and interviews with 15 informants who were representatives of Xbed's top management, organizational IT functions and its various business units.

Findings

The authors identified three inherent weaknesses of SE business models and showed how, through an auxiliary platform, these weaknesses can be overcome without compromising other stakeholders. The authors also discuss the advantages, characteristics, deployment and nature of auxiliary platforms.

Originality/value

This model contributes an in-depth view of establishing and nurturing auxiliary platforms to complement a primary SE platform. Owners and managers of SE platforms may use our model as the basis of guidelines for optimizing their platforms' development, thereby extending the benefits of SE to more stakeholders.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Open Access
Article
Publication date: 5 December 2022

Kittisak Chotikkakamthorn, Panrasee Ritthipravat, Worapan Kusakunniran, Pimchanok Tuakta and Paitoon Benjapornlert

Mouth segmentation is one of the challenging tasks of development in lip reading applications due to illumination, low chromatic contrast and complex mouth appearance. Recently…

Abstract

Purpose

Mouth segmentation is one of the challenging tasks in developing lip-reading applications, owing to illumination, low chromatic contrast and complex mouth appearance. Recently, deep learning methods have effectively solved mouth segmentation problems with state-of-the-art performance. This study presents a modified Mobile DeepLabV3-based technique with a comprehensive evaluation on mouth datasets.

Design/methodology/approach

This paper presents a novel approach to mouth segmentation using the Mobile DeepLabV3 technique with integrated decode and auxiliary heads. Extensive data augmentation, online hard example mining (OHEM) and transfer learning have been applied. CelebAMask-HQ and a mouth dataset from 15 healthy subjects in the Department of Rehabilitation Medicine, Ramathibodi Hospital, are used to validate mouth segmentation performance.
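
For orientation, the sketch below sets up a MobileNetV3-backed DeepLabV3 model with an auxiliary head using torchvision and combines the decode-head and auxiliary-head losses. It uses the stock torchvision model rather than the authors' modified network, and the two-class setting, input size and 0.4 auxiliary loss weight are assumptions.

```python
# Hedged sketch: DeepLabV3 with MobileNetV3 backbone, decode + auxiliary heads.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

# 2 classes: background vs. mouth; aux_loss adds the auxiliary head
model = deeplabv3_mobilenet_v3_large(num_classes=2, aux_loss=True)

images = torch.randn(2, 3, 512, 512)                 # dummy batch
targets = torch.randint(0, 2, (2, 512, 512))         # dummy segmentation masks

out = model(images)                                  # dict with 'out' and 'aux'
loss = F.cross_entropy(out["out"], targets) \
       + 0.4 * F.cross_entropy(out["aux"], targets)  # assumed aux weight
loss.backward()
print(float(loss))
```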

Findings

With extensive data augmentation, OHEM and transfer learning, the technique achieved better performance on CelebAMask-HQ than existing segmentation techniques, with a mean Jaccard similarity coefficient (JSC), mean classification accuracy and mean Dice similarity coefficient (DSC) of 0.8640, 93.34% and 0.9267, respectively. It also achieved better performance on the mouth dataset, with a mean JSC, mean classification accuracy and mean DSC of 0.8834, 94.87% and 0.9367, respectively. The proposed technique achieved an inference time of 48.12 ms per image.

Originality/value

The modified Mobile DeepLabV3 technique was developed with extensive data augmentation, OHEM and transfer learning. It achieved better mouth segmentation performance than existing techniques, making it suitable for implementation in further lip-reading applications.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 15 March 2024

Huimin Li, Boxin Dai, Yongchao Cao, Limin Su and Feng Li

Trust is the glue that holds cooperative relationships together and often exists in an asymmetric manner. The purpose of this study is to explore how to mitigate the issue of…

Abstract

Purpose

Trust is the glue that holds cooperative relationships together and often exists in an asymmetric manner. The purpose of this study is to explore how to mitigate the issue of losses or increased transaction costs caused by opportunistic behavior in a soft environment where trust asymmetry is quite common and difficult to avoid.

Design/methodology/approach

This study focuses on examining asymmetric trust between the government and the private sector in public-private partnership (PPP) projects. Drawing upon both project realities and relevant literature, the primary conditional variables influencing asymmetric trust are identified. These variables encompass power perception asymmetry, information asymmetry, interaction behavior, risk perception differences and government-side control. Subsequently, through the use of a survey questionnaire, binary-matched data from both the government and the private sector are collected. The study employs fuzzy-set qualitative comparative analysis (fsQCA) to conduct a configurational analysis, aiming to investigate the causal pathways that trigger asymmetric trust.
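
As background on the fsQCA step, the sketch below shows the direct calibration commonly used to turn a raw condition score (for example, power perception asymmetry measured on a Likert scale) into fuzzy-set membership via a logistic transformation anchored at full membership, crossover and full non-membership. The anchor values and the Python implementation are illustrative; the authors' calibration choices are not reported here.

```python
# Hedged sketch: direct calibration of a raw condition into fuzzy membership.
import numpy as np

def calibrate(raw, full_in, crossover, full_out):
    """Map raw scores to [0, 1] membership via Ragin-style log-odds anchors."""
    raw = np.asarray(raw, dtype=float)
    # Log-odds of +3 at full membership, 0 at crossover, -3 at full non-membership
    upper = 3.0 * (raw - crossover) / (full_in - crossover)
    lower = 3.0 * (raw - crossover) / (crossover - full_out)
    log_odds = np.where(raw >= crossover, upper, lower)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical 5-point Likert anchors: 5 = fully in, 3 = crossover, 1 = fully out
scores = calibrate([1, 2, 3.2, 4, 5], full_in=5, crossover=3, full_out=1)
print(np.round(scores, 3))
```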

Findings

No single conditional variable is a necessary condition for the emergence of trust asymmetry. The pathways leading to a high degree of trust asymmetry can be categorized into two types: those dominated by power perception and those involving a combination of multiple factors. Differences in power perception play a crucial role in the occurrence of high trust asymmetry, yet the influence of other conditional variables in triggering trust asymmetry should not be overlooked.

Originality/value

The findings can contribute to advancing the study of trust relationships in the field of Chinese PPP projects. Furthermore, they hold practical value in facilitating the enhancement of trust relationships between the government and the private sector.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X
