Search results

1 – 10 of over 3000
Article
Publication date: 8 January 2024

Denis Šimunović, Grazia Murtarelli and Stefania Romenti

The purpose of this study is to conduct a comprehensive investigation into the utilization of visual impression management techniques within sustainability reporting…

Abstract

Purpose

The purpose of this study is to conduct a comprehensive investigation into the utilization of visual impression management techniques within sustainability reporting. Specifically, the study aims to determine whether Italian companies employ impression management tactics in the presentation of graphs within their sustainability reports and, thus, problematize visual data communication in corporate social responsibility (CSR).

Design/methodology/approach

The research adopts a multimodal content analysis of 58 GRI-compliant sustainability reports from Italian listed companies. The analysis focused on three types of graphs: pie charts, line graphs and bar graphs. In total, 860 graphs were examined.

Findings

The study found evidence of graphical distortion techniques being employed by companies in their sustainability reports to create a favorable impression. Specifically, graph distortions are found in column graphs and not in line or pie charts. In particular, selectivity, presentation enhancement and measurement distortion techniques seem to be extensively used when adopting column graphs in sustainability communication. Moreover, social sustainability–related topics tend to be more represented than other areas of CSR reporting. This suggests that companies, whether consciously or unconsciously, engage in impression management techniques when using graphs in their sustainability reports.
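
The abstract does not spell out how measurement distortion is quantified. In the graph-distortion literature the usual instrument is the Graph Discrepancy Index (GDI), which compares the change a graph depicts with the change in the underlying data; a minimal sketch under that assumption (the column heights and data values are illustrative):

```python
def graph_discrepancy_index(first_col_mm, last_col_mm, first_val, last_val):
    """Graph Discrepancy Index as conventionally defined in the
    graph-distortion literature: GDI = (a / b - 1) * 100, where a is the
    percentage change depicted by the columns (their physical size) and
    b is the percentage change in the underlying data. GDI = 0 means a
    faithfully drawn graph; large |GDI| signals material distortion."""
    a = (last_col_mm - first_col_mm) / first_col_mm * 100  # depicted change
    b = (last_val - first_val) / first_val * 100           # actual change
    return (a / b - 1) * 100

# Example: the data grew 10% but the columns grew 50% -> GDI = +400,
# an exaggerated upward trend of the kind the study reports.
print(graph_discrepancy_index(20.0, 30.0, 100.0, 110.0))
```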

Social implications

The study findings suggest that companies need greater awareness when constructing and selecting graphs for their sustainability reports, and that decision-makers should develop clear guidelines for ethical visual communication.

Originality/value

The paper systematically analyzes visual impression management techniques in communicating sustainability data and, in particular, advances the literature on graphical distortion. Its value lies in providing empirical evidence of distortion in GRI-compliant reports as well as in problematizing visual data communication as a fundamental challenge for sustainability communication management.

Details

Journal of Communication Management, vol. 28 no. 1
Type: Research Article
ISSN: 1363-254X

Article
Publication date: 8 September 2023

Xiancheng Ou, Yuting Chen, Siwei Zhou and Jiandong Shi

With the continuous growth of online education, the quality issue of online educational videos has become increasingly prominent, causing students in online learning to face the…

Abstract

Purpose

With the continuous growth of online education, the quality of online educational videos has become an increasingly prominent issue, leaving students in online learning to grapple with knowledge confusion. Existing mechanisms for controlling the quality of online educational videos suffer from subjectivity and low timeliness. An important aspect of monitoring the quality of online educational videos is the analysis of their metadata features and log data. With the development of artificial intelligence, deep learning techniques with strong predictive capabilities provide new methods for predicting the quality of online educational videos, effectively overcoming the shortcomings of existing methods. The purpose of this study is to find a deep neural network that can model the dynamic and static features of the video itself, as well as the relationships between videos, to achieve dynamic monitoring of the quality of online educational videos.

Design/methodology/approach

The quality of a video cannot be directly measured. According to previous research, the authors use engagement to represent the level of video quality. Engagement is the normalized participation time, which represents the degree to which learners tend to participate in the video. Based on existing public data sets, this study designs an online educational video engagement prediction model based on dynamic graph neural networks (DGNNs). The model is trained based on the video’s static features and dynamic features generated after its release by constructing dynamic graph data. The model includes a spatiotemporal feature extraction layer composed of DGNNs, which can effectively extract the time and space features contained in the video's dynamic graph data. The trained model is used to predict the engagement level of learners with the video on day T after its release, thereby achieving dynamic monitoring of video quality.
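
The abstract confirms DGNNs but not a specific architecture. Below is one plausible reading, a minimal sketch assuming a T-GCN-style layer (the findings note the temporal graph convolutional network performed best): a shared graph convolution applied to each daily snapshot, a GRU across days and a regression head for day-T engagement. All names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class TGCNEngagement(nn.Module):
    """Illustrative T-GCN-style engagement predictor, not the authors'
    exact model: graph convolution per daily snapshot, GRU over days,
    regression head for day-T engagement."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)       # W in A_hat @ X @ W
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, a_hats, xs):
        # a_hats: normalized adjacency (n, n) per day; xs: (n, in_dim) per day
        steps = [torch.relu(self.gcn(a @ x)) for a, x in zip(a_hats, xs)]
        seq = torch.stack(steps, dim=1)             # (n_videos, days, hid_dim)
        out, _ = self.gru(seq)
        return self.head(out[:, -1]).squeeze(-1)    # engagement per video

model = TGCNEngagement(in_dim=16, hid_dim=32)
a, x = torch.eye(5), torch.randn(5, 16)             # toy graph of 5 videos
pred = model([a, a, a], [x, x, x])                  # three daily snapshots
```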

Findings

Models with spatiotemporal feature extraction layers consisting of four types of DGNNs can accurately predict the engagement level of online educational videos. Of these, the model using the temporal graph convolutional neural network has the smallest prediction error. In dynamic graph construction, using cosine similarity and Euclidean distance functions with reasonable threshold settings can construct a structurally appropriate dynamic graph. In the training of this model, the amount of historical time series data used will affect the model’s predictive performance. The more historical time series data used, the smaller the prediction error of the trained model.
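
As a companion sketch, the cosine-similarity construction mentioned above might look as follows; the threshold value is an assumption, not one reported in the abstract.

```python
import numpy as np

def build_dynamic_graph(feats: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Connect two videos when the cosine similarity of their feature
    vectors meets a threshold, one of the two construction functions
    the findings describe (the other being Euclidean distance)."""
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = norm @ norm.T                    # pairwise cosine similarity
    adj = (sim >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)             # drop self-loops
    return adj
```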

Research limitations/implications

A limitation of this study is that not all video data in the data set was used to construct the dynamic graph due to memory constraints. In addition, the DGNNs used in the spatiotemporal feature extraction layer are relatively conventional.

Originality/value

In this study, the authors propose an online educational video engagement prediction model based on DGNNs, which can achieve the dynamic monitoring of video quality. The model can be applied as part of a video quality monitoring mechanism for various online educational resource platforms.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 6 September 2023

Antonio Llanes, Baldomero Imbernón Tudela, Manuel Curado and Jesús Soto

The authors will review the main concepts of graphs, present the implemented algorithm, as well as explain the different techniques applied to the graph, to achieve an efficient…

Abstract

Purpose

The authors review the main concepts of graphs, present the implemented algorithm and explain the techniques applied to it to achieve efficient execution, both through the multiple CPU cores available today and through massive data parallelism, porting the algorithm to run on GPUs via CUDA.

Design/methodology/approach

In this work, the authors approach the graph isomorphism problem from a perspective that has received little attention to date: the application of parallelism and high-performance computing (HPC) techniques to the detection of isomorphism between graphs.
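
The abstract does not reproduce the algorithm itself. As a toy stand-in for the data parallelism it describes, the sketch below vectorizes a cheap isomorphism invariant, the sorted degree sequence, over a whole batch of graphs at once with NumPy; the authors' actual CUDA kernels are of course far more involved.

```python
import numpy as np

def degree_sequences(adjs: np.ndarray) -> np.ndarray:
    """adjs: (m, n, n) batch of adjacency matrices. One vectorized pass
    computes every graph's sorted degree sequence, mirroring in miniature
    the massive data parallelism exploited on the GPU."""
    return np.sort(adjs.sum(axis=2), axis=1)

def maybe_isomorphic(adjs_a: np.ndarray, adjs_b: np.ndarray) -> np.ndarray:
    """Pairwise invariant filter: True means isomorphism is *not ruled
    out* (equal degree sequences), False means definitely not isomorphic."""
    return (degree_sequences(adjs_a) == degree_sequences(adjs_b)).all(axis=1)
```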

Findings

The results obtained give compelling reasons for more in-depth study of HPC techniques in this field, since speedups of up to 722x are achieved in the most favorable scenarios, with an average speedup of 454x.

Originality/value

The paper is new and original.

Details

Engineering Computations, vol. 40 no. 7/8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 3 October 2023

Haklae Kim

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a…

Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
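
To make the semantic-query step concrete, here is a minimal sketch using rdflib with the RiC-O namespace; the record IRI and the property choices are hypothetical, kept only to illustrate the query pattern.

```python
from rdflib import Graph, Literal, Namespace, URIRef

RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")
g = Graph()

# Hypothetical triples standing in for one refined 97imf.kr record
rec = URIRef("http://example.org/record/1")                 # placeholder IRI
g.add((rec, RICO.hasCreator, URIRef("http://example.org/agent/mofe")))
g.add((rec, RICO.title, Literal("Press release")))          # illustrative

# Semantic query over the knowledge graph: records and their creators
for row in g.query("""
    PREFIX rico: <https://www.ica.org/standards/RiC/ontology#>
    SELECT ?record ?creator
    WHERE { ?record rico:hasCreator ?creator . }"""):
    print(row.record, row.creator)
```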

Findings

This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records and facilitating comprehensive descriptions of the different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search.

Originality/value

Most digital archives operated in Korea make limited use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. The study details the process of collecting raw archive data, preprocessing it and enriching it, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with the semantic queries it supports, can supplement keyword search in conventional digital archives.

Details

The Electronic Library, vol. 42 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 12 September 2023

Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao

Learning from safety accidents and sharing safety knowledge has become an important part of accident prevention and improving construction safety management. Considering the…

Abstract

Purpose

Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and improving construction safety management. Given the difficulty of reusing unstructured data in the construction industry, the knowledge it contains is hard to use directly for safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities through a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.

Design/methodology/approach

The ontology model for knowledge representation of construction safety accidents is constructed by integrating entity relations and event evolution logic. Then, a database of safety incidents in the architecture, engineering and construction (AEC) industry is established based on the collected construction safety incident reports and related dispute cases. The construction method of the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC domain construction safety accident knowledge graph (AEC-KG), which provides a visual knowledge query service and verifies the operability of the knowledge management approach.
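
The abstract confirms the BERT-BiLSTM-CRF stack but gives no implementation detail. A minimal sketch of that architecture, assuming a Chinese BERT checkpoint and the pytorch-crf package (both assumptions, as are the hyperparameters):

```python
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class BertBiLSTMCRF(nn.Module):
    """BERT encodes tokens, a BiLSTM refines the representations and a
    CRF decodes the globally best tag sequence for safety entities."""
    def __init__(self, num_tags, lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.fc(self.lstm(x)[0])
        mask = attention_mask.bool()
        if tags is not None:              # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: tag paths
```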

Findings

The experimental results show that the combined BERT-BiLSTM-CRF algorithm has a precision of 84.52%, a recall of 92.35%, and an F1 value of 88.26% in named entity recognition from the AEC domain database. The construction safety knowledge representation model and safety incident knowledge graph realize knowledge visualization.

Originality/value

The proposed framework provides a new knowledge management approach to improve practitioners' safety management and also enriches the application scenarios of knowledge graphs. On the one hand, it innovatively proposes a data application and knowledge management method for safety accident reports that integrates entity relations and event evolution logic. On the other hand, the legal adjudication dimension is innovatively added to the knowledge graph in the construction safety field as the basis for post-incident disposal measures, providing a reference for safety managers' decision-making in all aspects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 7 February 2023

Riju Bhattacharya, Naresh Kumar Nagwani and Sarsij Tripathi

A community demonstrates the unique qualities and relationships between its members that distinguish it from other communities within a network. Network analysis relies heavily on…

Abstract

Purpose

A community demonstrates the unique qualities and relationships between its members that distinguish it from other communities within a network. Network analysis relies heavily on community detection. Alongside traditional spectral clustering and statistical inference methods, deep learning techniques for community detection have grown in popularity due to their ease of processing high-dimensional network data. Graph convolutional neural networks (GCNNs) have received much attention recently and have developed into a powerful and ubiquitous method for directly detecting communities on graphs. Inspired by the promising results of graph convolutional networks (GCNs) in analyzing graph-structured data, a novel community graph convolutional network (CommunityGCN) is proposed as a semi-supervised node classification model and compared with recent baseline methods: the graph attention network (GAT), a GCN-based technique for unsupervised community detection and Markov random fields combined with a graph convolutional network (MRFasGCN).

Design/methodology/approach

This work presents the method for identifying communities that combines the notion of node classification via message passing with the architecture of a semi-supervised graph neural network. Six benchmark datasets, namely, Cora, CiteSeer, ACM, Karate, IMDB and Facebook, have been used in the experimentation.

Findings

In the first set of experiments, the scaled normalized average matrix of all neighbors' features, including the node itself, was obtained, followed by the weighted average matrix of low-dimensional nodes. In the second set of experiments, the weighted average matrix was forwarded to the two-layer GCN and the activation function for predicting the node class was applied. The results demonstrate that node classification with GCN can improve the performance of identifying communities on graph datasets.
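
The first experiment above is, almost verbatim, the standard two-layer GCN forward pass. A minimal NumPy sketch under the usual normalization, with the weight matrices W1 and W2 assumed given:

```python
import numpy as np

def normalized_adj(A):
    """Scaled normalized average over each node's neighbors including
    itself: A_hat = D^(-1/2) (A + I) D^(-1/2)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def two_layer_gcn(A, X, W1, W2):
    """softmax(A_hat @ relu(A_hat @ X @ W1) @ W2): class probabilities
    per node, used here to assign nodes to communities."""
    A_hat = normalized_adj(A)
    H = np.maximum(A_hat @ X @ W1, 0)              # first layer + ReLU
    Z = A_hat @ H @ W2                             # second layer
    e = np.exp(Z - Z.max(axis=1, keepdims=True))   # row-wise softmax
    return e / e.sum(axis=1, keepdims=True)
```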

Originality/value

The experiments reveal that the CommunityGCN approach achieves accuracy, normalized mutual information, F1 and modularity scores of 91.26, 79.9, 92.58 and 70.5 per cent, respectively, for detecting communities in the graph network, which is much greater than the range of 55.7–87.07 per cent reported in previous literature. It is therefore concluded that combining GCNs with node classification improves accuracy.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 28 November 2022

Anuraj Mohan, Karthika P.V., Parvathi Sankar, K. Maya Manohar and Amala Peter

Money laundering is the process of concealing unlawfully obtained funds by presenting them as coming from a legitimate source. Criminals use crypto money laundering to hide the…

Abstract

Purpose

Money laundering is the process of concealing unlawfully obtained funds by presenting them as coming from a legitimate source. Criminals use crypto money laundering to hide the illicit origin of funds using a variety of methods. The most simplified form of bitcoin money laundering leans hard on the fact that transactions made in cryptocurrencies are pseudonymous, but open data gives more power to investigators and enables the crowdsourcing of forensic analysis. With the motive to curb these illegal activities, there exist various rules, policies and technologies collectively known as anti-money laundering (AML) tools. When properly implemented, AML restrictions reduce the negative effects of illegal economic activity while also promoting financial market integrity and stability, but these bear high costs for institutions. The purpose of this work is to motivate the opportunity to reconcile the cause of safety with that of financial inclusion, bearing in mind the limitations of the available data. The authors use the Elliptic dataset; to the best of the authors' knowledge, this is the largest labelled transaction dataset publicly available in any cryptocurrency.

Design/methodology/approach

AML in bitcoin can be modelled as a node classification task in dynamic networks. This work introduces the graph convolutional decision forest, which combines the potential of the evolving graph convolutional network and the deep neural decision forest (DNDF). This model is used to classify the unknown transactions in the Elliptic dataset. Additionally, applying knowledge distillation (KD) on top of the proposed approach gives the best results among all the techniques tested.
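
The abstract states that KD is applied on top of the model without giving the objective. A sketch of the standard Hinton-style distillation loss, one natural reading (the temperature and mixing weight are illustrative):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a softened KL term (student imitates the teacher's
    temperature-scaled output distribution) with the usual hard-label
    cross-entropy on the known licit/illicit transactions."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```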

Findings

This work demonstrates the importance of combining dynamic graph learning with ensemble feature learning. The results show the superiority of the proposed model in classifying the illicit transactions in the Elliptic dataset. Experiments also show that the results can be further improved when the system is fine-tuned using a KD framework.

Originality/value

Existing works used either ensemble learning or dynamic graph learning to tackle the problem of AML in bitcoin. The proposed model provides a novel view to combine the power of random forest with dynamic graph learning methods. Furthermore, the work also demonstrates the advantage of KD in improving the performance of the whole system.

Details

Data Technologies and Applications, vol. 57 no. 3
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model…

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing in the available data sets due to its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments based on the missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
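
The abstract does not name the embedding model, so here is a sketch assuming TransE, the simplest representative: predicting a missing uncertainty reduces to scoring candidate tails of an (experiment, hasUncertainty, value) link, where the relation name and all tensors are illustrative.

```python
import torch

def transe_score(h, r, t):
    """TransE plausibility of a (head, relation, tail) triple: a smaller
    ||h + r - t|| means the link is more likely to hold."""
    return torch.norm(h + r - t, p=2, dim=-1)

dim = 32
h = torch.randn(dim)               # embedding of one experiment
r = torch.randn(dim)               # embedding of "hasUncertainty" (assumed name)
candidates = torch.randn(10, dim)  # embeddings of candidate uncertainty values
best = transe_score(h, r, candidates).argmin()  # most plausible value
```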

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The employment of knowledge graph embedding to predict the missing experimental uncertainty is a novel alternative to the current and more costly techniques in the literature. Such contribution permits a better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Article
Publication date: 15 July 2022

Hongming Gao, Hongwei Liu, Weizhen Lin and Chunfeng Chen

Purchase conversion prediction aims to improve user experience and convert visitors into real buyers to drive sales of firms; however, the total conversion rate is low, especially…

Abstract

Purpose

Purchase conversion prediction aims to improve user experience and convert visitors into real buyers to drive sales of firms; however, the total conversion rate is low, especially for e-retailers. To date, little is known about how e-retailers can scientifically detect users' intents within a purchase conversion funnel during their ongoing sessions and strategically optimize real-time marketing tactics corresponding to dynamic intent states. This study mainly aims to detect the real-time state of the conversion funnel based on graph theory, formulated as a five-class classification problem over the overt real-time choice decisions (RTCDs)—click, tag-to-wishlist, add-to-cart, remove-from-cart and purchase—during an ongoing session.

Design/methodology/approach

The authors propose a novel graph-theoretic framework to detect different states of the conversion funnel by identifying a user's unobserved mindset as revealed by their navigation process graph, namely the clickstream graph. First, the raw clickstream data are segmented into individual sessions based on a 30-min time-out heuristic. Then, the authors convert each session into a sequence of temporal item-level clickstream graphs and conduct temporal graph feature engineering over the basic, single-, dyadic- and triadic-node and global characteristics. Furthermore, the synthetic minority oversampling technique is adopted to address the problem of classifying imbalanced data. Finally, the authors train and test the proposed approach with several popular artificial intelligence algorithms.
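
To make the pipeline concrete, here is a toy sketch of two of its stages: turning a session into a directed clickstream graph with a few of the global features named later in the findings, then SMOTE oversampling. The sessions, labels and feature subset are illustrative.

```python
import networkx as nx
from imblearn.over_sampling import SMOTE

def session_graph_features(clicked_items):
    """Build one session's directed item-level clickstream graph and
    extract Node, Edge, Transitivity and Reciprocity features."""
    G = nx.DiGraph()
    G.add_edges_from(zip(clicked_items, clicked_items[1:]))
    return [G.number_of_nodes(), G.number_of_edges(),
            nx.transitivity(G.to_undirected()), nx.reciprocity(G)]

sessions = [[1, 2, 3], [2, 3, 2], [1, 4, 5], [3, 1, 2], [6, 7, 6], [8, 9, 8]]
X = [session_graph_features(s) for s in sessions]
y = [0, 0, 0, 0, 1, 1]             # imbalanced RTCD classes (toy labels)
X_res, y_res = SMOTE(k_neighbors=1).fit_resample(X, y)  # rebalance classes
```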

Findings

The graph-theoretic approach validates that users' latent intent states within the conversion funnel can be interpreted as time-varying natures of their online graph footprints. In particular, the experimental results indicate that the graph-theoretic feature-oriented models achieve a substantial improvement of over 27% in the macro-average and micro-average area under the precision-recall curve, compared to the conventional ones. In addition, the top five informative graph features for RTCDs are found to be Transitivity, Edge, Node, Degree and Reciprocity. In terms of interpretability, the basic, single-, dyadic- and triadic-node and global characteristics of clickstream graphs have their specific advantages.

Practical implications

The findings suggest that the temporal graph-theoretic approach can form an efficient and powerful AI-based real-time intent detecting decision-support system. Different levels of graph features have their specific interpretability on RTCDs from the perspectives of consumer behavior and psychology, which provides a theoretical basis for the design of computer information systems and the optimization of the ongoing session intervention or recommendation in e-commerce.

Originality/value

To the best of the authors' knowledge, this is the first study to apply clickstream graphs and real-time decision choices to conversion prediction and detection. Most studies have only considered a binary classification problem, while this study applies a graph-theoretic approach to a five-class classification problem. In addition, this study constructs temporal item-level graphs to represent the original structure of clickstream session data based on graph theory. The time-varying characteristics of the proposed approach enhance the performance of purchase conversion detection during an ongoing session.

Details

Kybernetes, vol. 52 no. 11
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 16 March 2023

Yishan Liu, Wenming Cao and Guitao Cao

Session-based recommendation aims to predict the user's next preference based on the user's recent activities. Although most existing studies consider the global characteristics…

Abstract

Purpose

Session-based recommendation aims to predict the user's next preference based on the user's recent activities. Although most existing studies consider the global characteristics of items, they learn them based on a single connection relationship, which cannot fully capture the complex transition relationships between items. We believe that learning multiple relationships between items across sessions can improve the performance of session recommendation tasks and the scalability of recommendation models. At the same time, high-quality global features of items help to uncover users' potential common preferences.

Design/methodology/approach

This work proposes a session-based recommendation method with a multi-relation global context–enhanced network to capture this global transition relationship. Specifically, we construct a multi-relation global item graph based on a group of sessions, use a graded attention mechanism to learn different types of connection relations independently and obtain the global feature of the item according to the multi-relation weight.
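
A compact sketch of that graded-attention fusion, assuming one normalized adjacency matrix per relation type; the module layout and softmax relation weighting are illustrative rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class MultiRelationGlobal(nn.Module):
    """Learn each connection relation independently, then fuse the
    relation-specific item features with learned weights to obtain the
    global feature of every item."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_proj = nn.ModuleList(nn.Linear(dim, dim)
                                      for _ in range(num_relations))
        self.rel_att = nn.Parameter(torch.zeros(num_relations))

    def forward(self, adjs, x):
        # adjs: one (n, n) normalized adjacency per relation; x: (n, dim)
        per_rel = torch.stack([torch.relu(proj(a @ x))
                               for proj, a in zip(self.rel_proj, adjs)])
        w = torch.softmax(self.rel_att, dim=0)          # relation weights
        return (w[:, None, None] * per_rel).sum(dim=0)  # global item features
```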

Findings

We conducted experiments on three benchmark datasets. The experimental results show that our proposed model outperforms existing state-of-the-art methods, which verifies its effectiveness.

Originality/value

First, we construct a multi-relation global item graph to learn the complex transition relations of the global context of items and to effectively mine potential associations between items in different sessions. Second, our model effectively improves its scalability by obtaining high-quality global item features, enabling some previously unconsidered items to reach the candidate list.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288
