Search results

1 – 10 of 889
Open Access
Article
Publication date: 28 July 2020

Prabhat Pokharel, Roshan Pokhrel and Basanta Joshi

Analysis of log messages is very important for the identification of suspicious system and network activity. This analysis requires the correct extraction of variable entities

Abstract

Analysis of log messages is very important for the identification of suspicious system and network activity. This analysis requires the correct extraction of variable entities. The variable entities are extracted by comparing the log messages against the log patterns. Each of these log patterns can be represented in the form of a log signature. In this paper, we present a hybrid approach for log signature extraction. The approach consists of two modules. The first module identifies log patterns by generating log clusters. The second module uses Named Entity Recognition (NER) to extract signatures from the extracted log clusters. Experiments were performed on event logs from the Windows operating system, Exchange and Unix, and the results were validated by comparing the signatures and the variable entities against the standard log documentation. The outcome of the experiments was that the extracted signatures were ready to be used with a high degree of accuracy.
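
As a rough illustration of the template-mining idea behind the first module, the sketch below groups messages that share a token structure and turns the varying positions into wildcards, i.e. the variable entities. It is a minimal stand-in, not the authors' method: the paper pairs log clustering with NER, and the cluster key and the <*> wildcard used here are illustrative assumptions.

```python
from collections import defaultdict

def cluster_logs(messages):
    """Group messages by a coarse structural key (token count plus
    first token) -- a rough stand-in for the paper's clustering module."""
    clusters = defaultdict(list)
    for msg in messages:
        tokens = msg.split()
        clusters[(len(tokens), tokens[0])].append(tokens)
    return clusters

def extract_signature(cluster):
    """Constant positions form the signature; positions whose token
    varies across the cluster become <*> wildcards (variable entities)."""
    signature = []
    for position in zip(*cluster):
        signature.append(position[0] if len(set(position)) == 1 else "<*>")
    return " ".join(signature)

logs = [
    "sshd accepted password for alice from 10.0.0.5",
    "sshd accepted password for bob from 10.0.0.9",
]
for cluster in cluster_logs(logs).values():
    print(extract_signature(cluster))
# -> sshd accepted password for <*> from <*>
```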

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 23 October 2023

Rebecca Maughan and Aideen O'Dochartaigh

This study examines how accounting tools and techniques are used to create and support membership and reporting boundaries for a multi-entity sustainability scheme. It also…

Abstract

Purpose

This study examines how accounting tools and techniques are used to create and support membership and reporting boundaries for a multi-entity sustainability scheme. It also considers whether boundary setting for this initiative helps to connect corporate activity with planetary boundaries and the SDGs.

Design/methodology/approach

The paper presents a case study of a national agrifood sustainability scheme, analysing extensive documentary data and multi-entity sustainability reports. The concept of partial organising is used to frame the analysis.

Findings

Accounting, in the form of planning, verification, target setting, annual review and reporting, can be used to create a membership and a reporting boundary. Accounting tools and techniques support the scheme's standard-setting and monitoring elements. The study demonstrates that the scheme offers innovation in how sustainability reporting is managed. However, it does not currently provide a cumulative assessment of the effect of the sector's activity on ecological carrying capacity or connect this activity to global sustainability indicators.

Research limitations/implications

Future research can build on this study's insights to further develop our understanding of multi-entity sustainability reporting and accounting's role in organising for sustainability. The authors identify several research avenues including: boundary setting in ecologically significant sectors, integrating global sustainability indicators at sectoral and organisational levels, sustainability controls in multi-entity settings and the potential of multi-entity reporting to provide substantive disclosure.

Originality/value

This paper provides insight into accounting's role in boundary setting for a multi-entity sustainability initiative. It adds to our understanding of the potential of a multi-entity reporting boundary to support connected measurement between corporate activity and global sustainability indicators. It builds on work on partial organising and provides insight into how accounting can support this form of organising for sustainability.

Details

Accounting, Auditing & Accountability Journal, vol. 36 no. 9
Type: Research Article
ISSN: 0951-3574

Open Access
Article
Publication date: 15 July 2022

Susanne Leitner-Hanetseder and Othmar M. Lehner

With the help of “self-learning” algorithms and high computing power, companies are transforming Big Data into artificial intelligence (AI)-powered information and gaining…

Abstract

Purpose

With the help of “self-learning” algorithms and high computing power, companies are transforming Big Data into artificial intelligence (AI)-powered information and gaining economic benefits. AI-powered information and Big Data (simply data henceforth) have quickly become some of the most important strategic resources in the global economy. However, their value is not (yet) formally recognized in financial statements, which leads to a growing gap between book and market values and thus limited decision usefulness of the underlying financial statements. The objective of this paper is to identify ways in which the value of data can be reported to improve decision usefulness.

Design/methodology/approach

Based on the authors' experience as both long-term practitioners and theoretical accounting scholars, the authors conceptualize and draw up a potential data value chain and show the transformation from raw Big Data into business-relevant AI-powered information along this chain.

Findings

Analyzing current International Financial Reporting Standards (IFRS) regulations and their applicability, the authors show that current regulations are insufficient to provide useful information on the value of data. Following this, the authors propose a Framework for AI-powered Information and Big Data (FAIIBD) Reporting. This framework also provides insights into the (good) governance of data, with the purpose of increasing decision usefulness, and connects further to existing frameworks. In the conclusion, the authors raise questions concerning this framework that may be worthy of discussion in the scholarly community.

Research limitations/implications

Scholars and practitioners alike are invited to follow up on the conceptual framework from many perspectives.

Practical implications

The framework can serve as a guide towards a better understanding of how to recognize and report AI-powered information and thereby (a) limit the valuation gap between book and market value and (b) enhance the decision usefulness of financial reporting.

Originality/value

This article proposes to regulators a conceptual framework within IFRS to better deal with the value of AI-powered information and to improve the good governance of (Big) data.

Details

Journal of Applied Accounting Research, vol. 24 no. 2
Type: Research Article
ISSN: 0967-5426

Open Access
Article
Publication date: 14 August 2017

Xiu Susie Fang, Quan Z. Sheng, Xianzhi Wang, Anne H.H. Ngu and Yihong Zhang

This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.

Abstract

Purpose

This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.

Design/methodology/approach

In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query streams, to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed.
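
The abstract does not detail the graph-based truth discovery algorithm, but the generic iterative scheme below conveys the idea: source trust and value confidence are re-estimated in turn, and several values per predicate can retain high confidence, matching the multi-valued setting. The source names, claims and trust prior are illustrative assumptions, not the authors' design.

```python
from collections import defaultdict

def truth_discovery(claims, iterations=10, prior=0.8):
    """Iteratively re-estimate source trust and value confidence from
    (source, entity, value) triples. Multiple values per entity may end
    up with high confidence, matching the multi-valued setting."""
    trust = defaultdict(lambda: prior)
    confidence = {}
    for _ in range(iterations):
        # A value's support is the summed trust of the sources claiming it.
        support = defaultdict(float)
        for source, entity, value in claims:
            support[(entity, value)] += trust[source]
        # Normalise per entity so confidences are comparable.
        totals = defaultdict(float)
        for (entity, value), s in support.items():
            totals[entity] += s
        confidence = {ev: s / totals[ev[0]] for ev, s in support.items()}
        # A source's trust is the mean confidence of the values it claims.
        sums, counts = defaultdict(float), defaultdict(int)
        for source, entity, value in claims:
            sums[source] += confidence[(entity, value)]
            counts[source] += 1
        trust = defaultdict(lambda: prior,
                            {s: sums[s] / counts[s] for s in sums})
    return confidence, dict(trust)

claims = [
    ("src_a", "capital_of:australia", "Canberra"),
    ("src_b", "capital_of:australia", "Sydney"),
    ("src_c", "capital_of:australia", "Canberra"),
]
confidence, trust = truth_discovery(claims)
print(confidence)  # "Canberra" ends up with the higher confidence
```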

Findings

Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension are also discussed.

Originality/value

To revolutionize our modern society by using the wisdom of Big Data, numerous KBs have been constructed to feed the massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been devoted to both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume that each predicate has only one true value. In this paper, the focus is on the problem of generating actionable knowledge from Big Data. A system is proposed, which consists of two phases, namely, knowledge extraction and truth discovery, to construct a broader KB, called GrandBase.

Details

PSU Research Review, vol. 1 no. 2
Type: Research Article
ISSN: 2399-1747

Open Access
Article
Publication date: 6 July 2020

Basma Makhlouf Shabou, Julien Tièche, Julien Knafou and Arnaud Gaudinat

This paper aims to describe an interdisciplinary and innovative research project conducted in Switzerland, at the Geneva School of Business Administration HES-SO and supported by the…

Abstract

Purpose

This paper aims to describe an interdisciplinary and innovative research project conducted in Switzerland, at the Geneva School of Business Administration HES-SO and supported by the State Archives of Neuchâtel (Office des archives de l'État de Neuchâtel, OAEN). The problem to be addressed is one of the most classical ones: how to extract and discriminate relevant data in a huge amount of diversified and complex data record formats and contents. The goal of this study is to provide a framework and a proof of concept for software that helps in taking defensible decisions on the retention and disposal of records and data proposed to the OAEN. For this purpose, the authors designed two axes: the archival axis, to propose archival metrics for the appraisal of structured and unstructured data, and the data mining axis, to propose algorithmic methods as complementary and/or additional metrics for the appraisal process.

Design/methodology/approach

Based on these two axes, this exploratory study designs and tests the feasibility of archival metrics that are paired with data mining metrics, to advance, as much as possible, the digital appraisal process in a systematic or even automatic way. Under Axis 1, the authors took three steps: first, the design of a conceptual framework for records and data appraisal with a detailed three-dimensional approach (trustworthiness, exploitability, representativeness), together with the main principles and postulates that guide the operationalization of the conceptual dimensions. Second, the operationalization proposed metrics expressed in terms of variables supported by a quantitative method for their measurement and scoring. Third, the authors shared this conceptual framework, with its dimensions and operationalized variables (metrics), with experienced professionals to validate them. The experts' feedback gave the authors an indication of the relevance and the feasibility of these metrics, two aspects that may demonstrate the acceptability of such a method in real-life archival practice. In parallel, Axis 2 proposes functionalities covering not only macro analysis of data but also the algorithmic methods that enable the computation of the digital archival and data mining metrics. On this basis, three use cases were proposed to imagine plausible and illustrative scenarios for the application of such a solution.
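
A minimal sketch of what a computational scoring method over such operationalized metrics could look like, assuming (purely as an illustration) a weighted average per conceptual dimension; the metric names, weights and scores below are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    dimension: str   # trustworthiness / exploitability / representativeness
    weight: float    # relative importance within its dimension (assumed)
    score: float     # normalised 0..1 measurement for a record set

def appraisal_scores(metrics):
    """Aggregate metric scores into one value per conceptual dimension
    using a weighted average."""
    by_dim = {}
    for m in metrics:
        total, weight = by_dim.get(m.dimension, (0.0, 0.0))
        by_dim[m.dimension] = (total + m.weight * m.score, weight + m.weight)
    return {dim: total / weight for dim, (total, weight) in by_dim.items()}

metrics = [
    Metric("provenance_documented", "trustworthiness", 2.0, 0.9),
    Metric("format_openness", "exploitability", 1.0, 0.6),
    Metric("coverage_of_fonds", "representativeness", 1.5, 0.7),
]
print(appraisal_scores(metrics))
```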

Findings

The main results demonstrate the feasibility of measuring the value of data and records with a reproducible method. More specifically, for Axis 1, the authors applied the metrics in a flexible and modular way. The authors also defined the main principles needed to enable a computational scoring method. The results obtained through the experts' consultation on the relevance of 42 metrics indicate an acceptance rate above 80%. In addition, the results show that 60% of all metrics can be automated. Regarding Axis 2, 33 functionalities were developed and proposed under six main types: macro analysis, microanalysis, statistics, retrieval, administration and, finally, decision modeling and machine learning. The relevance of the metrics and functionalities rests on the theoretical validity and computational character of their method. These results are largely satisfactory and promising.

Originality/value

This study offers a valuable aid to improve the validity and performance of archival appraisal processes and decision-making. Transferability and applicability of these archival and data mining metrics could be considered for other types of data. An adaptation of this method and its metrics could be tested on research data, medical data or banking data.

Details

Records Management Journal, vol. 30 no. 2
Type: Research Article
ISSN: 0956-5698

Open Access
Article
Publication date: 4 March 2020

Hsin-Chen Lin and Patrick F. Bruning

The paper aims to compare two general team identification processes of consumers’ in-group-favor and out-group-animosity responses to sports sponsorship.

Abstract

Purpose

The paper aims to compare two general team identification processes of consumers’ in-group-favor and out-group-animosity responses to sports sponsorship.

Design/methodology/approach

The paper draws on two studies and four samples of professional baseball fans in Taiwan (N = 1,294). In Study 1, data from the fans of three teams were analyzed by using multi-group structural equation modeling to account for team effects and to consider parallel in-group-favor and out-group-animosity processes. In Study 2, the fans of one team were sampled and randomly assigned to assess the sponsors of one of three specific competitor teams to account for differences in team competition and rivalry. In both studies, these two processes were compared using patterns of significant relationships and differences in the indirect identification-attitude-outcome relationships.
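
The toy sketch below approximates the multi-group logic by estimating the identification-attitude-outcome mediation paths separately per team and comparing the indirect effects; it uses simulated data and plain regressions rather than the authors' structural equation models, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Simulated fan data standing in for the identification -> attitude ->
# outcome chain; all variable names are hypothetical.
df = pd.DataFrame({
    "team": rng.choice(["A", "B", "C"], n),
    "identification": rng.normal(size=n),
})
df["attitude"] = 0.5 * df["identification"] + rng.normal(size=n)
df["purchase_intent"] = 0.4 * df["attitude"] + rng.normal(size=n)

# Estimate the mediation paths separately per team (a rough stand-in
# for multi-group SEM) and compare the indirect effects a * b.
for team, group in df.groupby("team"):
    a = smf.ols("attitude ~ identification", data=group).fit() \
           .params["identification"]
    b = smf.ols("purchase_intent ~ attitude + identification", data=group) \
           .fit().params["attitude"]
    print(team, "indirect effect:", round(a * b, 3))
```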

Findings

Positive outcomes of in-group-favor processes were broader in scope and were more pronounced in absolute magnitude than the negative outcomes of out-group-animosity processes across all outcomes and studies.

Research limitations/implications

The research was conducted in one country and considered the sponsorship of one sport. It is possible that the results could differ for leagues within different countries, more global leagues and different fan bases.

Practical implications

The results suggest that managers should carefully consider whether the negative out-group-animosity outcomes are actually present, broad enough or strong enough to warrant costly or compromising intervention, because they might not always be present or meaningful.

Originality/value

The paper demonstrates the comparatively greater breadth and strength of in-group-favor processes when compared directly to out-group-animosity processes.

Details

European Journal of Marketing, vol. 54 no. 4
Type: Research Article
ISSN: 0309-0566

Open Access
Article
Publication date: 9 April 2020

Xiaodong Zhang, Ping Li, Xiaoning Ma and Yanjun Liu

Operating wagon records are produced by distinct railway information systems, which results in wagon routing records with the same origin-destination (OD) pair being…

Abstract

Purpose

Operating wagon records are produced by distinct railway information systems, which results in wagon routing records with the same origin-destination (OD) pair being different. This phenomenon has brought considerable difficulties to railway wagon flow forecasting. Some of the differences are caused by poor data quality, which misleads the prediction, while others reflect the existence of other actual wagon routings. This paper aims at finding all the wagon routing locus patterns in the historical records and thus puts forward an intelligent recognition method for the actual routing locus pattern of railway wagon flow based on the SST algorithm.

Design/methodology/approach

Based on the big data of railway wagon flow records, the routing metadata model is constructed, and historical data and real-time data are fused to improve the reliability of the path forecast results in railway wagon flow forecasting. Based on the division of spatial characteristics and dimension reduction at the distributary stations, an improved Simhash algorithm is used to calculate the routing fingerprint. Combined with the Squared Error Adjacency Matrix Clustering algorithm and the Tarjan algorithm, the fingerprint similarity is calculated, the spatial characteristics are clustered and identified, the routing locus pattern is formed and the intelligent recognition of the actual wagon flow routing locus is thereby realized.
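
As a sketch of the fingerprinting step, the standard Simhash below hashes an ordered station sequence into a 64-bit fingerprint, so that similar routings lie a small Hamming distance apart. The paper uses an improved Simhash applied after spatial division and dimension reduction; this plain version and the station names are illustrative assumptions.

```python
import hashlib

def simhash(tokens, bits=64):
    """Hash an ordered token sequence (e.g. the stations on a wagon
    routing) into a single fingerprint: each token votes on every bit."""
    weights = [0] * bits
    for token in tokens:
        digest = int(hashlib.md5(token.encode()).hexdigest(), 16)
        for i in range(bits):
            weights[i] += 1 if (digest >> i) & 1 else -1
    fingerprint = 0
    for i, w in enumerate(weights):
        if w > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming(a, b):
    """Similar routings yield fingerprints a small Hamming distance apart."""
    return bin(a ^ b).count("1")

route_a = ["station_1", "station_2", "station_3", "station_4"]
route_b = ["station_1", "station_2", "station_3", "station_5"]
print(hamming(simhash(route_a), simhash(route_b)))  # small for similar routes
```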

Findings

This paper puts forward a more realistic railway wagon routing pattern recognition algorithm. The traditional railway wagon routing planning problem is converted into a routing locus pattern recognition problem, and the wagon routing patterns of all OD streams are mined from the historical data. The analysis is carried out from three aspects: routing metadata, routing locus fingerprints and routing locus patterns. Then, the SST-based intelligent recognition algorithm for railway wagon routing locus patterns is proposed, which combines historical data and instant data to improve the reliability of the wagon routing selection result. Finally, the railway wagon routing locus can be found accurately, and a case study tests the validity of the algorithm.

Practical implications

Before railway wagon flow can be forecast, it is necessary to know how many kinds of wagon routing locus exist for a certain OD pair. Mining all the OD routing locus patterns from the railway wagon operating records helps to forecast future routings in combination with the wagon characteristics. The work of this paper is the basis of the railway wagon routing forecast.

Originality/value

As the basis of the railway wagon routing forecast, this research not only improves the accuracy and efficiency of the railway wagon routing forecast but also provides further decision-making support for the railway freight transportation organization.

Details

Smart and Resilient Transportation, vol. 2 no. 1
Type: Research Article
ISSN: 2632-0487

Open Access
Article
Publication date: 5 December 2023

Simon Lundh, Karin Seger, Magnus Frostenson and Sven Helin

The purpose of this study is to identify the norms that underlie and condition the decisions made by preparers of financial reports.

Abstract

Purpose

The purpose of this study is to identify the norms that underlie and condition the decisions made by preparers of financial reports.

Design/methodology/approach

This interview-based study illustrates how financial report preparers engage in behaviors linked to how important stakeholders perceive the recognition and measurement of internally generated intangible assets. All of the companies included in the study adhere to International Financial Reporting Standards when creating their consolidated financial statements. The participants selected for the study are involved in accounting decisions related to research and development in accordance with International Accounting Standard (IAS) 38.

Findings

The authors identify the normative assumptions underlying the recognition and measurement of internally generated intangibles, which are based on concerns of consistency, credibility and reasonableness. The authors find that the normative basis for legitimacy in financial accounting is primarily related to cognitive legitimacy and is not of a moral or pragmatic nature.

Originality/value

The study reveals that recognition and measurement of internally generated intangibles in financial accounting relate to legitimacy. The authors identify specific norms that form the basis of this legitimacy, namely, consistency, credibility and reasonableness. These identified norms serve as constraints, mitigating the risk of judgment misuse within the IAS 38 framework for earnings management.

Details

Qualitative Research in Accounting & Management, vol. 21 no. 2
Type: Research Article
ISSN: 1176-6093

Abstract

Purpose

An overview of the current use of handwritten text recognition (HTR) on archival manuscript material, as provided by the EU H2020 funded Transkribus platform. It explains HTR, demonstrates Transkribus, gives examples of use cases, highlights the effect HTR may have on scholarship, and evidences this turning point in the advanced use of digitised heritage content. The paper aims to discuss these issues.

Design/methodology/approach

This paper adopts a case study approach, using the development and delivery of the one openly available HTR platform for manuscript material.

Findings

Transkribus has demonstrated that HTR is now a useable technology that can be employed in conjunction with mass digitisation to generate accurate transcripts of archival material. Use cases are demonstrated, and a cooperative model is suggested as a way to ensure sustainability and scaling of the platform. However, funding and resourcing issues are identified.

Research limitations/implications

The paper presents results from projects: further user studies could be undertaken involving interviews, surveys, etc.

Practical implications

Only HTR provided via Transkribus is covered: however, this is the only publicly available platform for HTR on individual collections of historical documents at time of writing and it represents the current state-of-the-art in this field.

Social implications

The increased access to information contained within historical texts has the potential to be transformational for both institutions and individuals.

Originality/value

This is the first published overview of how HTR is used by a wide archival studies community, reporting and showcasing current application of handwriting technology in the cultural heritage sector.

Open Access
Article
Publication date: 6 March 2017

Zhuoxuan Jiang, Chunyan Miao and Xiaoming Li

Recent years have witnessed the rapid development of massive open online courses (MOOCs). With more and more courses being produced by instructors and participated in by…

Abstract

Purpose

Recent years have witnessed the rapid development of massive open online courses (MOOCs). With more and more courses being produced by instructors and participated in by learners all over the world, unprecedented massive educational resources are being aggregated. The educational resources include videos, subtitles, lecture notes, quizzes, etc., on the teaching side, and forum contents, Wiki, logs of learning behavior, logs of homework, etc., on the learning side. However, the data are both unstructured and diverse. To facilitate knowledge management and mining on MOOCs, extracting keywords from the resources is important. This paper aims to adapt state-of-the-art techniques to MOOC settings and evaluate their effectiveness on real data. In terms of practice, this paper also tries to answer, for the first time, the questions of to what extent MOOC resources can support keyword extraction models and how much human effort is required to make the models work well.

Design/methodology/approach

Based on which side generates the data, i.e. instructors or learners, the data are classified into teaching resources and learning resources, respectively. The approach used on teaching resources is based on machine learning models with labels, while the approach used on learning resources is based on a graph model without labels.
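
For the label-free, learning-side approach, a representative unsupervised graph method is TextRank-style keyword extraction, sketched below: tokens become nodes, co-occurrence within a window adds edges, and PageRank scores the nodes. The abstract does not name the authors' exact graph model, so this is a stand-in under that assumption, with a hypothetical forum post as input.

```python
import networkx as nx

def textrank_keywords(tokens, window=4, top_k=10):
    """Build a co-occurrence graph over tokens and rank nodes with
    PageRank; the top-ranked tokens are taken as keywords."""
    graph = nx.Graph()
    for i, token in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if token != other:
                graph.add_edge(token, other)
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

forum_post = ("gradient descent converges when the learning rate is small "
              "while stochastic gradient descent adds gradient noise").split()
print(textrank_keywords(forum_post, top_k=5))
```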

Findings

From the teaching resources, the methods used by the authors can accurately extract keywords with only 10 per cent of the data labeled. The authors find a characteristic of the data: resources of various forms, e.g. subtitles and PPTs, should be considered separately because models fit them differently. From the learning resources, the keywords extracted from MOOC forums are not as domain-specific as those extracted from teaching resources, but they can reflect the topics that are actively discussed in the forums, and instructors can use this as feedback. The authors implement two applications with the extracted keywords: generating concept maps and generating learning paths. The visual demos show that these have the potential to improve learning efficiency when integrated into a real MOOC platform.

Research limitations/implications

Conducting keyword extraction on MOOC resources is quite difficult because teaching resources are hard to obtain due to copyright. Also, getting labeled data is difficult because expertise in the corresponding domain is usually required.

Practical implications

The experimental results show that MOOC resources are good enough for building keyword extraction models and that an acceptable balance between human effort and model accuracy can be achieved.

Originality/value

This paper presents a pioneer study on keyword extraction on MOOC resources and obtains some new findings.

Details

International Journal of Crowd Science, vol. 1 no. 1
Type: Research Article
ISSN: 2398-7294
