Search results

1 – 9 of 9
Article
Publication date: 1 November 2023

Saravanan R., Mohammad Firoz and Sumit Dalal

This study aims to empirically investigate the effect of International Financial Reporting Standards (IFRS) convergence on corporate risk disclosure, with a particular emphasis on…

Abstract

Purpose

This study aims to empirically investigate the effect of International Financial Reporting Standards (IFRS) convergence on corporate risk disclosure, with a particular emphasis on the quantity and coverage of risk information. The research also conducts economic benefit and cost analysis to investigate the economic implications that may arise from the transition to IFRS reporting.

Design/methodology/approach

A content analysis approach is used to measure two broad dimensions of risk disclosure, namely, risk disclosure quantity and risk topic coverage. Furthermore, using firm fixed-effects regression on a sample of 143 Indian listed companies, this study investigates the variations in these risk disclosure dimensions before (2012–2016) and after (2017–2021) convergence with IFRS.
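To illustrate the shape of such a design, the following is a minimal sketch, not the authors' code, of a firm fixed-effects regression with a post-IFRS indicator and clustered standard errors; the file name, disclosure measure (risk_words) and control variables (firm_size, leverage) are hypothetical.

```python
# Hypothetical sketch of a firm fixed-effects regression comparing risk
# disclosure quantity before (2012-2016) and after (2017-2021) IFRS convergence.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("risk_disclosure_panel.csv")        # hypothetical firm-year panel
panel["post_ifrs"] = (panel["year"] >= 2017).astype(int)

# Firm fixed effects via firm dummies; "risk_words" stands in for the
# content-analysis count of risk-related words per annual report.
model = smf.ols(
    "risk_words ~ post_ifrs + firm_size + leverage + C(firm_id)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

print(model.params["post_ifrs"])   # estimated change in disclosure after IFRS
```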

Findings

The empirical results of this research demonstrate that IFRS convergence has led to a significant improvement in firms’ risk disclosure across several dimensions. In particular, during the post-IFRS period, firms’ use of risk-related words and sentences has increased considerably in the MD&A, the Notes and annual reports as a whole. In addition, upon IFRS convergence, firms’ risk descriptions have become more extensive and more evenly distributed across risk topic categories. Moreover, the in-depth benefit and cost analysis revealed that firms reporting under IFRS benefit from a decreased cost of equity capital but also incur higher audit fees.

Originality/value

This study contributes to the literature in two ways. First, this is the only study, to the best of the authors’ knowledge, to conduct a broader examination of the impact of mandatory IFRS convergence on corporate risk disclosure, with a major focus on quantity and coverage of risk information. Second, by conducting economic benefit and cost analysis, this study provides novel insights into the critical role of IFRS risk disclosures toward multiple economic outcomes.

Details

International Journal of Accounting & Information Management, vol. 31 no. 5
Type: Research Article
ISSN: 1834-7649

Keywords

Article
Publication date: 6 February 2024

Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji

The purpose of this study is to develop a domain independent, cost-effective, time-saving and semi-automated ontology generation framework that could extract taxonomic concepts…

Abstract

Purpose

The purpose of this study is to develop a domain-independent, cost-effective, time-saving and semi-automated ontology generation framework that could extract taxonomic concepts from an unstructured text corpus. In the human disease domain, ontologies are extremely useful for managing the diversity of technical expressions in support of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade available ones.

Design/methodology/approach

This paper proposes a semi-automated approach that extracts entities and relations via text mining of scientific publications. Code named text mining-based ontology (TmbOnt) is generated to assist a user in capturing, processing and establishing ontology elements. This code takes a pile of unstructured text files as input and projects them into high-value entities or relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the desired ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
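As a rough illustration of the predecessor/successor-phrase mining step described above (not the authors' TmbOnt code), the sketch below mines candidate subtype phrases around a seed term from unstructured abstracts, which a human curator would then filter into taxonomy relations; the input file name is an assumption.

```python
# Hypothetical sketch: mine phrases immediately preceding the seed term
# "glaucoma" (e.g. "open-angle glaucoma") as candidate taxonomy children.
import re
from collections import Counter

seed = "glaucoma"
candidates = Counter()

with open("pubmed_abstracts.txt", encoding="utf-8") as fh:
    for line in fh:
        # capture one or two lowercase tokens immediately preceding the seed
        for match in re.finditer(r"((?:[a-z][\w-]*\s){1,2})" + seed, line.lower()):
            candidates[match.group(1).strip() + " " + seed] += 1

# the human in the loop reviews the most frequent candidates and keeps
# meaningful subtypes as children of "glaucoma" in the taxonomy
for phrase, freq in candidates.most_common(20):
    print(f"{freq:5d}  {phrase}")
```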

Findings

The proposed approach processed over 3.8 million tokenized terms from those records and yielded the resultant glaucoma ontology-taxonomy. The TmbOnt-driven taxonomy demonstrated a 60%–100% coverage ratio against well-known medical thesauruses and ontology taxonomies, such as the Human Disease Ontology, Medical Subject Headings and the National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.

Originality/value

According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.

Details

The Electronic Library, vol. 42 no. 2
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 28 June 2024

Haihua Chen, Jeonghyun (Annie) Kim, Jiangping Chen and Aisa Sakata

This study aims to explore the applications of natural language processing (NLP) and data analytics in understanding large-scale digital collections in oral history archives.

Abstract

Purpose

This study aims to explore the applications of natural language processing (NLP) and data analytics in understanding large-scale digital collections in oral history archives.

Design/methodology/approach

NLP and data analytics were used to analyse the oral interview transcripts of 904 survivors of the Japanese American incarceration camps, collected from the Densho Digital Repository, relying specifically on descriptive analysis, keyword extraction, topic modelling and sentiment analysis (SA).
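The sketch below shows, under assumptions, what such a topic modelling and sentiment analysis pipeline might look like; it is not the authors' code, and the transcript directory layout and parameter choices are hypothetical.

```python
# Hypothetical sketch: LDA topic modelling and lexicon-based sentiment scoring
# over a folder of interview transcripts.
from pathlib import Path
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from nltk.sentiment import SentimentIntensityAnalyzer   # needs the vader_lexicon resource

docs = [p.read_text(encoding="utf-8") for p in Path("transcripts").glob("*.txt")]

# topic modelling with LDA on unigram counts
vec = CountVectorizer(stop_words="english", max_df=0.9, min_df=5)
dtm = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")

# document-level sentiment scores
sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(d)["compound"] for d in docs]
print("mean sentiment:", sum(scores) / len(scores))
```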

Findings

The researchers found multiple geographic areas with large residential communities of ethnic Japanese people, along with the place names of the concentration camps. The keywords and topics extracted reflect the deplorable conditions and militaristic nature of the camps and the forced labour of the internees. When remembering history, the main focus for the narrators remains the redress and reparation movement to obtain the restitution of their civil rights. SA further found that the forcible removal and incarceration of Japanese Americans during the Second World War negatively affected the narrators and left them with deep trauma.

Originality/value

This case study demonstrated how NLP and data analytics could be applied to analyse oral history archives and open avenues for discovery. Archival researchers and the general public may benefit from this type of analysis in making connections between temporal, spatial and emotional elements, which will contribute to a holistic understanding of individuals and communities in terms of their collective memory.

Details

The Electronic Library, vol. 42 no. 4
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 9 July 2024

Ziling Chen, Chengzhi Zhang, Heng Zhang, Yi Zhao, Chen Yang and Yang Yang

The composition of author teams is a significant factor affecting the novelty of academic papers. Existing research lacks studies focusing on institutional types and measures of…

Abstract

Purpose

The composition of author teams is a significant factor affecting the novelty of academic papers. Existing research lacks studies focusing on institutional types, and measures of novelty have remained at a general level, making it difficult to analyse the types of novelty in papers and to provide a detailed explanation of novelty. This study aims to analyse the relationship between team institutional composition and the fine-grained novelty of academic papers, taking the field of natural language processing (NLP) as an example.

Design/methodology/approach

First, author teams are categorized into three types: academic institutions, industrial institutions and mixed academic and industrial institutions. Next, the authors extract four types of entities from the full text of each paper: methods, data sets, tools and metrics. The novelty of papers is evaluated using entity combination measurement methods, and pairwise combinations of different types of fine-grained entities are analysed to assess their contributions to novel papers.
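A hedged sketch of one possible entity-combination novelty measure in this spirit follows: a paper's pairs of extracted entities are compared against pairs seen in earlier papers, and the share of previously unseen pairs is taken as a novelty score. The data structures and the toy corpus are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of an entity-combination novelty score.
from itertools import combinations

def entity_pairs(entities):
    """All unordered pairs of (type, name) entities extracted from one paper."""
    return {frozenset(p) for p in combinations(sorted(entities), 2)}

def novelty_score(paper_entities, prior_pairs):
    """Share of a paper's entity pairs never seen in prior papers."""
    pairs = entity_pairs(paper_entities)
    if not pairs:
        return 0.0
    new = [p for p in pairs if p not in prior_pairs]
    return len(new) / len(pairs)

corpus = [  # toy example: entities are (type, name) tuples
    {"year": 2020, "entities": [("method", "BERT"), ("dataset", "SQuAD"), ("metric", "F1")]},
    {"year": 2021, "entities": [("method", "BERT"), ("dataset", "SQuAD"), ("tool", "spaCy")]},
]

prior_pairs = set()
for paper in sorted(corpus, key=lambda p: p["year"]):
    paper["novelty"] = novelty_score(paper["entities"], prior_pairs)
    prior_pairs |= entity_pairs(paper["entities"])
    print(paper["year"], round(paper["novelty"], 2))
```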

Findings

The results show that, in the field of NLP, industrial institutions have a higher probability of producing novel papers when they collaborate with academic institutions. In terms of the contribution rates of different types of fine-grained knowledge entities, mixed academic and industrial teams pay more attention to the novelty of method–metric combinations, while industrial institutions pay more attention to the novelty of method–tool combinations.

Originality/value

This paper explores the relationship between team institutional composition and the novelty of academic papers and, through fine-grained novelty measurement, reveals the importance of cooperation between industry and academia, providing key guidance for improving the quality of papers and promoting industry–university–research cooperation.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 18 May 2023

Rongen Yan, Depeng Dang, Hu Gao, Yan Wu and Wenhui Yu

Question answering (QA) answers the questions asked by people in the form of natural language. In the QA, due to the subjectivity of users, the questions they query have different…

Abstract

Purpose

Question answering (QA) answers questions asked by people in natural language. In QA, because of the subjectivity of users, the same question can be expressed in different ways, which increases the difficulty of text retrieval. Therefore, the purpose of this paper is to explore a new query rewriting method for QA that integrates multiple related questions (RQs) to form an optimal question. It is also important to generate a new dataset that pairs each original query (OQ) with multiple RQs.

Design/methodology/approach

This study collects a new dataset, SQuAD_extend, by crawling a QA community and uses a word graph to model the collected OQs. Beam search then finds the best path through the graph to obtain the best question. To represent the features of the question in depth, the pretrained BERT model is used to encode sentences.
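The toy sketch below illustrates the general idea of beam search over a word graph built from a query and its related questions; the graph, edge weights and vocabulary are made up for illustration and do not come from the paper.

```python
# Hypothetical sketch: beam search over a small word graph.
import heapq

# word graph: token -> list of (next_token, weight); weights play the role of
# co-occurrence or language-model scores
graph = {
    "<s>":  [("how", 0.9), ("what", 0.6)],
    "how":  [("do", 0.8)],
    "what": [("is", 0.7)],
    "do":   [("transformers", 0.9)],
    "is":   [("bert", 0.9)],
    "transformers": [("work", 0.9), ("</s>", 0.1)],
    "bert": [("</s>", 0.8)],
    "work": [("</s>", 0.9)],
}

def beam_search(graph, beam_width=3, max_len=10):
    beams = [(1.0, ["<s>"])]                      # (path score, tokens so far)
    finished = []
    for _ in range(max_len):
        expanded = []
        for score, path in beams:
            for nxt, w in graph.get(path[-1], []):
                cand = (score * w, path + [nxt])
                (finished if nxt == "</s>" else expanded).append(cand)
        beams = heapq.nlargest(beam_width, expanded)
        if not beams:
            break
    best = max(finished, default=(0.0, []))
    return " ".join(best[1][1:-1]), best[0]       # drop <s>/</s> markers

print(beam_search(graph))   # e.g. ('how do transformers work', score)
```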

Findings

The experimental results show three outstanding findings: (1) the quality of the answers is better after the RQs of the OQs are added; (2) the word graph used to model the question and choose the optimal path is conducive to finding the best question; and (3) BERT can deeply characterize the semantics of the exact question.

Originality/value

The proposed method uses a word graph to construct multiple candidate questions and selects the optimal path for rewriting the question, and the quality of answers is better than the baseline. In practice, the research results can help guide users to clarify their query intentions and ultimately obtain the best answer.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 5 June 2024

Azanzi Jiomekong and Sanju Tiwari

This paper aims to curate open research knowledge graph (ORKG) with papers related to ontology learning and define an approach using ORKG as a computer-assisted tool to organize…

Abstract

Purpose

This paper aims to curate the Open Research Knowledge Graph (ORKG) with papers related to ontology learning and to define an approach that uses ORKG as a computer-assisted tool to organize key insights extracted from research papers.

Design/methodology/approach

Action research was used to explore, test and evaluate the use of the Open Research Knowledge Graph as a computer-assisted tool for knowledge acquisition from scientific papers.

Findings

To extract, structure and describe research contributions, the granularity of information should be decided; to facilitate the comparison of scientific papers, one should design a common template that will be used to describe the state of the art of a domain.

Originality/value

This approach is currently used to document the “food information engineering,” “tabular data to knowledge graph matching” and “question answering” research problems and the “neurosymbolic AI” domain. More than 200 papers have been ingested into ORKG; from these papers, more than 800 contributions are documented, and these contributions are used to build over 100 comparison tables. At the end of this work, we found that ORKG is a valuable tool that can reduce the working curve of state-of-the-art research.

Article
Publication date: 24 May 2024

Shupeng Liu, Jianhong Shen and Jing Zhang

Learning from past construction accident reports is critical to reducing their occurrence. Digital technology provides feasibility for extracting risk factors from unstructured…

Abstract

Purpose

Learning from past construction accident reports is critical to reducing their occurrence. Digital technology makes it feasible to extract risk factors from unstructured reports, but related studies are few, and existing extraction methods cannot consider textual contextual information, which tends to cause some important factors to be missed. Further analysis, assessment and control of the extracted factors are also lacking. This paper aims to explore an integrated model that combines the advantages of multiple digital technologies to effectively solve these problems.

Design/methodology/approach

A total of 1,000 construction accident reports from Chinese government websites were used as the dataset of this paper. After text pre-processing, the risk factors related to accident causes were extracted using KeyBERT, and the accident texts were encoded into structured data. A tree-augmented naive (TAN) Bayes model was then used to learn from the data and construct a visualized risk analysis network for construction accidents.
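A minimal sketch of the extraction and encoding step is given below, assuming KeyBERT's standard keyword-extraction API; the file layout, parameters and embedding backend are assumptions rather than the authors' settings (Chinese reports would in practice need a multilingual embedding model).

```python
# Hypothetical sketch: KeyBERT pulls context-aware key phrases (candidate risk
# factors) from each accident report, which are then binary-encoded into
# structured rows for a downstream Bayesian network.
from pathlib import Path
from keybert import KeyBERT

kw_model = KeyBERT()   # default backend; a multilingual model is assumed for Chinese text
reports = [p.read_text(encoding="utf-8") for p in Path("accident_reports").glob("*.txt")]

vocab, encoded = set(), []
for text in reports:
    keywords = kw_model.extract_keywords(text, keyphrase_ngram_range=(1, 2), top_n=5)
    factors = {kw for kw, _score in keywords}
    vocab |= factors
    encoded.append(factors)

# one row per report, one binary column per extracted risk factor
columns = sorted(vocab)
rows = [[int(c in factors) for c in columns] for factors in encoded]
print(len(rows), "reports encoded over", len(columns), "risk-factor columns")
```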

Findings

The use of KeyBERT successfully took textual contextual information into account, making the extracted risk factors more complete. The integrated TAN model further explored construction risk factors from multiple perspectives, including the identification of key risk factors, coupling analysis among risk factors and a troubleshooting method for accident risk sources. The area under the curve (AUC) value of the model reaches 0.938 after 10-fold cross-validation, indicating good performance.

Originality/value

This paper presents a new machine-assisted integrated model for accident report mining and risk factor analysis, and the research findings can provide theoretical and practical support for accident safety management.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 16 April 2024

Fathima Sabrina Nazeer, Imriyas Kamardeen and Abid Hasan

Many buildings fail to meet user expectations, causing a performance gap. Pre-occupancy evaluation (PrOE) is believed to have the potential to close the gap. It enables designers…

Abstract

Purpose

Many buildings fail to meet user expectations, causing a performance gap. Pre-occupancy evaluation (PrOE) is believed to have the potential to close this gap. It enables designers to obtain end-user feedback in the design phase and improve the design for better performance. However, PrOE implementation faces challenges due to a still-maturing knowledge base. This study aims to understand the state-of-the-art knowledge of PrOE, thereby identifying future research needs to advance the domain.

Design/methodology/approach

A systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework was conducted. A thorough search of five databases and Google Scholar retrieved 90 articles, of which 30 were selected for systematic review after eliminating duplicates and irrelevant articles. Bibliometric analyses were performed on the article metadata using VOSviewer and Biblioshiny, and thematic analyses were conducted on their contents.

Findings

PrOE is a vehicle for engaging building end-users in the design phase to address the credibility gap caused by discrepancies between the expected and actual performance of buildings. PrOE has so far seen limited application in healthcare, residential, office and educational building design, for two broad purposes: design management and marketing. Using virtual reality technologies for PrOE has demonstrated significant benefits. Yet the PrOE domain needs to mature in multiple respects to serve its intended purpose effectively.

Originality/value

This study identifies four knowledge gaps for future research to advance the PrOE domain: (1) developing a holistic PrOE framework, integrating comprehensive performance evaluation criteria, useable at different stages of the design phase and multi-criteria decision algorithms, (2) developing a mixed reality tool, embodying the holistic PrOE framework, (3) formulating a PrOE framework for adaptive reuse of buildings and (4) managing uncertainties in user requirements during the lifecycle in PrOE decisions.

Details

Built Environment Project and Asset Management, vol. 14 no. 4
Type: Research Article
ISSN: 2044-124X

Keywords

Open Access
Article
Publication date: 11 October 2023

Bachriah Fatwa Dhini, Abba Suganda Girsang, Unggul Utan Sufandi and Heny Kurniawati

The authors constructed an automatic essay scoring (AES) model in a discussion forum where the result was compared with scores given by human evaluators. This research proposes…

Abstract

Purpose

The authors constructed an automatic essay scoring (AES) model for a discussion forum, and its results were compared with scores given by human evaluators. This research proposes essay scoring based on two parameters, semantic similarity and keyword similarity, using pre-trained SentenceTransformers models that produce the highest-quality vector embeddings. The two parameters are combined to optimize the model and increase accuracy.

Design/methodology/approach

The development of the model in this study is divided into seven stages: (1) data collection, (2) data pre-processing, (3) selection of a pre-trained SentenceTransformers model, (4) semantic similarity (sentence pairs), (5) keyword similarity, (6) calculation of the final score and (7) model evaluation.
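As a rough sketch of stages 3 to 6 (under assumptions, not the authors' exact pipeline): a pre-trained SentenceTransformers model gives semantic similarity between the student answer and a reference answer, keyword similarity is the share of rubric keywords present in the answer, and the two are combined into one score. The weighting scheme and function names below are hypothetical.

```python
# Hypothetical sketch of combining semantic and keyword similarity into a score.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_similarity(answer: str, reference: str) -> float:
    emb = model.encode([answer, reference], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def keyword_similarity(answer: str, rubric_keywords: list[str]) -> float:
    hits = sum(1 for kw in rubric_keywords if kw.lower() in answer.lower())
    return hits / len(rubric_keywords) if rubric_keywords else 0.0

def final_score(answer, reference, rubric_keywords, w_sem=0.5, w_kw=0.5):
    # weights are illustrative; the study tunes the combination empirically
    return (w_sem * semantic_similarity(answer, reference)
            + w_kw * keyword_similarity(answer, rubric_keywords))
```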

Findings

The paraphrase-multilingual-MiniLM-L12-v2 and distilbert-base-multilingual-cased-v1 models obtained the highest scores in a comparison of 11 pre-trained multilingual SentenceTransformers models on Indonesian data (Dhini and Girsang, 2023), and both multilingual models were adopted in this study. The keyword-similarity parameter is obtained by comparing the keywords extracted from each response with the rubric keywords. Based on the experimental results, the proposed combination can increase the evaluation results by 0.2.

Originality/value

This study uses discussion forum data from a general biology course delivered online at the open university during the 2020.2 and 2021.2 semesters. Forum discussions are still rated manually. In this study, the authors created a model that automatically scores discussion forum essays against the lecturer's answers as well as the rubrics.

Details

Asian Association of Open Universities Journal, vol. 18 no. 3
Type: Research Article
ISSN: 1858-3431

Keywords
