Search results
1 – 10 of over 69,000
Abstract
Constructing and evaluating behavioral science models is a complex process. Decisions must be made about which variables to include, which variables are related to each other, the functional forms of the relationships, and so on. The last 10 years have seen a substantial extension of the range of statistical tools available for use in the construction process. The progress in tool development has been accompanied by the publication of handbooks that introduce the methods in general terms (Arminger et al., 1995; Tinsley & Brown, 2000a). Each chapter in these handbooks cites a wide range of books and articles on specific analysis topics.
Julián Darío Miranda-Calle, Vikranth Reddy C., Parag Dhawan and Prathamesh Churi
Abstract
Purpose
The impact of cyberattacks all over the world has been increasing at a constant rate every year. Performing exploratory analysis helps organizations to identify, manage and safeguard the information that could be vulnerable to cyber-attacks. It encourages the creation of a plan for security controls that can help protect data, keep constant tabs on threats and monitor an organization’s networks for any breaches.
Design/methodology/approach
The purpose of this experimental study is to demonstrate the use of data science in analyzing data and to provide a more detailed view of the most common cybersecurity attacks, the most accessed logical ports and visible patterns, as well as the trends and occurrence of attacks. The data to be processed were obtained by aggregating data provided by a company’s technology department, which include network flow data produced by nine different types of attacks within everyday user activities. This could help many companies to measure the damage caused by these breaches, but it also gives a foundation for future comparisons and serves as a basis for proactive measures within industry and organizations.
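The abstract does not show the analysis itself, but the kind of exploratory pass it describes (most accessed ports, attack-vs-port patterns) can be sketched as below. Column names and values are invented for illustration; they are not taken from the study’s data set.

```python
import pandas as pd

# Hypothetical aggregated network flow records, one row per attack event.
flows = pd.DataFrame({
    "attack_type": ["dos", "dos", "bruteforce", "scan", "scan", "scan"],
    "dst_port":    [80,    80,    22,           443,    80,     22],
})

# Most frequently targeted logical ports across all recorded attacks.
port_counts = flows["dst_port"].value_counts()

# Attack-type vs. port cross-tabulation surfaces visible patterns,
# e.g. whether brute-force attempts cluster on port 22.
pattern = pd.crosstab(flows["attack_type"], flows["dst_port"])
print(port_counts.to_dict())  # e.g. {80: 3, 22: 2, 443: 1}
```

The same two operations (frequency counts and cross-tabulation) scale to millions of flow records and feed directly into the visualizations the study mentions.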
Findings
The most common cybersecurity attacks, the most accessed logical ports and their visible patterns were found in the acquired data set. The strategies attackers used with respect to time, type of attack, specific ports, IP addresses and their relationships were determined. Statistical hypothesis testing was also performed to check whether attackers conducted random attacks or targeted specific machines with some pattern.
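The abstract does not name the specific test used; a common choice for the random-versus-targeted question is a chi-square goodness-of-fit test on attack counts per machine, sketched here with invented counts.

```python
from scipy.stats import chisquare

# Hypothetical attack counts observed on five machines.
attacks_per_machine = [95, 12, 9, 84, 10]

# H0: attacks are spread uniformly across machines (random targeting).
# chisquare() defaults to a uniform expected distribution.
stat, p_value = chisquare(attacks_per_machine)

if p_value < 0.05:
    print("reject H0: attackers appear to target specific machines")
```

A small p-value here rejects uniform (random) targeting, which is the shape of conclusion the Findings paragraph reports.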
Originality/value
Policies can be suggested such that an attack conducted on a specific machine can be prevented by identifying the machines, ports and durations the attacker is targeting, and by formulating policies the organization should follow to tackle these targeted attacks in the future.
Triss Ashton and Victor R. Prybutok
Abstract
Purpose
The purpose of this study includes two parts. First, it introduces a machine-based method for model and instrument development and updating that integrates large sample qualitative data. Second, a new model and instrument for e-commerce customer satisfaction are developed.
Design/methodology/approach
The research occurs in two phases. In Phase 1, data collection occurs with a literature-based quantitative model and instrument that includes at least one qualitative scale item per construct. Data analysis of the resulting data includes factor analysis (FA) and latent semantic analysis text mining to generate an updated model and instrument. In Phase 2, data collection uses the new model and instrument. Data analysis in Phase 2 includes exploratory data analysis with FA, exploratory structural equation modeling and partial least square modeling.
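The latent semantic analysis step in Phase 1 can be sketched with scikit-learn, standing in for whatever software the authors actually used (the abstract does not name it). The survey responses below are invented examples of qualitative scale items.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Hypothetical free-text responses from the qualitative scale items.
responses = [
    "checkout was fast and the site easy to use",
    "site search could not find the product I wanted",
    "delivery was fast but checkout confusing",
    "product search results were easy to browse",
]

# Latent semantic analysis = SVD on a TF-IDF term-document matrix.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(responses)

lsa = TruncatedSVD(n_components=2, random_state=0)
topics = lsa.fit_transform(X)  # one low-dimensional topic vector per response
```

The resulting topic loadings, read alongside the factor-analysis output, are the kind of evidence that would suggest which constructs an updated instrument needs.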
Findings
As a result of the information gained by integrating qualitative scales into the literature-based survey, the final model departs substantially from the initial literature-based research model. It integrates many of the constructs known from information systems research to impact website and software usability into a new e-retail satisfaction model.
Originality/value
The research method, as presented here, offers a strategy for integrating large-scale qualitative data for the refinement of models and the development of instruments. It is essentially a method of gaining the wisdom of crowds economically while simultaneously reducing the biases and laborious effort commonly associated with qualitative research.
Abstract
Purpose
This paper contributes to the literature by discussing the impact of machine learning (ML) on management accounting (MA) and the management accountant based on three sources: academic articles, papers and reports from accounting bodies and consulting companies. The purpose of this paper is to identify, discuss and provide suggestions for how ML could be included in research and education in the future for the management accountant.
Design/methodology/approach
This paper identifies three types of studies on the influence of ML on MA issued between 2015 and 2021 in mainstream accounting journals, by professional accounting bodies and by large consulting companies.
Findings
First, only very few academic articles actually show examples of using ML or different algorithms on MA issues. This contrasts with other research fields such as finance and logistics. Second, the literature review also indicates that if management accountants want to keep their qualifications in demand, they must take action now and begin to discuss how big data and other concepts from artificial intelligence and ML can benefit MA and the management accountant in specific ways.
Originality/value
Even though the paper may be classified as inspirational in nature, the paper documents and discusses the revised environment that surrounds the accountant today. The paper concludes by highlighting specifically the necessity of including exploratory data analysis and unsupervised ML in the field of MA to close the existing gaps in both education and research and thus making the MA profession future-proof.
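The paper argues for unsupervised ML in MA without prescribing a technique; one hypothetical illustration of what that could look like is anomaly detection on expense records with an isolation forest. All figures and the choice of algorithm are this sketch’s assumptions, not the paper’s.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented monthly expense amounts; the last entry is an obvious outlier.
expenses = np.array([[1020.0], [980.0], [1005.0], [990.0], [1010.0], [9500.0]])

# Unsupervised: no labels needed, the model learns what "normal" looks like.
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(expenses)  # -1 marks an anomaly, 1 marks normal

print(labels[-1])  # -1: the 9500.0 record is flagged for review
```

A management accountant would review flagged records rather than act on them automatically; the model only prioritizes attention.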
Oscar Peña, Unai Aguilera and Diego López-de-Ipiña
Abstract
Purpose
The purpose of this paper is to present a new approach toward automatically visualizing Linked Open Data (LOD) through metadata analysis.
Design/methodology/approach
By focussing on the data within a LOD dataset, the authors can infer its structure in a much better way than current approaches, generating more intuitive models to progress toward visual representations.
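A first step in the metadata-driven approach the paper describes is asking a SPARQL endpoint which predicates a dataset actually uses; predicate frequencies hint at the dataset’s structure and hence at suitable chart types. The endpoint URL below is a placeholder, and the client is only sketched, but the query itself is standard SPARQL.

```python
from urllib.parse import urlencode

# Count how often each predicate occurs in the dataset.
PREDICATE_COUNT_QUERY = """
SELECT ?p (COUNT(?p) AS ?uses)
WHERE { ?s ?p ?o }
GROUP BY ?p
ORDER BY DESC(?uses)
LIMIT 20
"""

def predicate_count_url(endpoint_url: str) -> str:
    # A real client would send this with SPARQLWrapper or an HTTP GET;
    # here we only assemble the request URL.
    return endpoint_url + "?" + urlencode(
        {"query": PREDICATE_COUNT_QUERY, "format": "json"}
    )

url = predicate_count_url("https://example.org/sparql")
```

Frequent data-typed predicates (dates, numbers) suggest time series or bar charts; frequent object properties suggest graph or network views.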
Findings
With no technical knowledge required, focussing on metadata properties from a semantically annotated dataset could lead to automatically generated charts that allow users to understand the dataset in an exploratory manner. Through interactive visualizations, users can navigate LOD sources in a natural way, saving time and resources when dealing with an unknown resource for the first time.
Research limitations/implications
This approach is suitable for available SPARQL endpoints and could be extended for resource description framework dumps loaded locally.
Originality/value
Most works dealing with LOD visualization are customized for a specific domain or dataset. This paper proposes a generic approach based on traditional data visualization and exploratory data analysis literature.
Violetta Wilk, Geoffrey N. Soutar and Paul Harrigan
Abstract
Purpose
This paper aims to offer insights into the ways two computer-aided qualitative data analysis software (CAQDAS) applications (QSR NVivo and Leximancer) can be used to analyze big, text-based, online data taken from consumer-to-consumer (C2C) social media communication.
Design/methodology/approach
This study used QSR NVivo and Leximancer to explore 200 discussion threads containing 1,796 posts from forums on an online open community and an online brand community that involved online brand advocacy (OBA). The functionality of both programs, in particular their strengths and weaknesses, is discussed. Examples of the types of analyses each program can undertake and the visual output available are also presented.
Findings
This research found that, while both programs had strengths and weaknesses when working with big, text-based, online data, they complemented each other. Each contributed a different visual and evidence-based perspective, providing a more comprehensive and insightful view of the characteristics unique to OBA.
Research limitations/implications
Qualitative market researchers are offered insights into the advantages and disadvantages of using two different software packages for research projects involving big social media data. The “visual-first” analysis obtained from both programs can help researchers make sense of such data, particularly in exploratory research.
Practical implications
The paper provides practical recommendations for analysts considering which programs to use when exploring big, text-based, online data.
Originality/value
This paper answered a call to action for further research on, and demonstration of, analytical programs for big, online data from social media C2C communication, and makes strong suggestions about the need to examine such data in a number of ways.
Chika Saka, Tomoki Oshika and Masayuki Jimichi
Abstract
Purpose
This study aims to explore the evidence of the probability of firms’ tax avoidance and the downward convergence trend of national statutory tax rates and firms’ effective tax rates.
Design/methodology/approach
This research employs exploratory data analysis using interactive data manipulation and visualization tools, namely, R with SparkR, dplyr, ggplot2 and googleVis (GeoChart and Motion Chart) packages. This analysis is based on the world-scale accounting data of all listed firms from 148 countries spanning 30 years.
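The core comparison the study draws, firm-level effective tax rates against national statutory rates, can be sketched in Python with pandas rather than the R stack the authors used (SparkR, dplyr, ggplot2, googleVis). All figures below are invented.

```python
import pandas as pd

# Hypothetical firm-level accounting data.
firms = pd.DataFrame({
    "country":  ["JP", "JP", "US", "US"],
    "tax_paid": [210.0, 150.0, 300.0, 50.0],
    "pretax":   [700.0, 600.0, 1000.0, 500.0],
})
statutory = {"JP": 0.30, "US": 0.35}  # illustrative statutory rates

# Effective tax rate per firm, then the mean per country.
firms["etr"] = firms["tax_paid"] / firms["pretax"]
by_country = firms.groupby("country")["etr"].mean()

# A large gap between statutory and mean effective rates hints at avoidance.
gap = {c: statutory[c] - by_country[c] for c in statutory}
```

Plotting `by_country` and `gap` over 30 years of data is essentially the trend analysis the abstract describes, just without the interactive GeoChart/Motion Chart layer.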
Findings
The results reveal three types of evidence of the probability of firms’ tax avoidance: a non-random distribution of firms’ effective tax rates and return on assets; cross-sectional variation of firms’ effective tax rates within each country; and the trend in the difference between effective tax rates and statutory tax rates. They also reveal the downward convergence trend of statutory tax rates and firms’ effective tax rates.
Practical implications
The results highlight the prominent issues of world-scale tax avoidance and tax rate competition and facilitate a collaborative discussion between laymen and professionals using objective evidence.
Originality/value
A novel methodology is adopted through the visualization of world-scale accounting data, which can facilitate a new perspective, revealing unexpected patterns and trends in otherwise hidden information. This study also highlights the importance of global consideration of firms’ tax avoidance and tax rate competition, using objective evidence.
Nihan Yildirim, Derya Gultekin, Cansu Hürses and Abdullah Mert Akman
Abstract
Purpose
This paper aims to use text mining methods to explore the similarities and differences between countries’ national digital transformation (DT) and Industry 4.0 (I4.0) policies. The study examines the applicability of text mining as an alternative for comprehensive clustering of national I4.0 and DT strategies, encouraging policy researchers toward data science that can offer rapid policy analysis and benchmarking.
Design/methodology/approach
With an exploratory research approach, topic modeling, principal component analysis and unsupervised machine learning algorithms (k-means and hierarchical clustering) are used for clustering national I4.0 and DT strategies. This paper uses a corpus of policy documents and related scientific publications from several countries and integrates their science and technology performance. The paper also presents the positioning of Türkiye’s I4.0 and DT national policy as a case from a developing country context.
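The clustering pipeline named above can be sketched end to end: vectorize the documents, reduce dimensionality, then apply k-means. The four strings below are invented stand-ins for national policy documents, and `TruncatedSVD` stands in for the PCA step because the TF-IDF matrix is sparse; the real corpus, components and cluster count differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "industry 4.0 smart manufacturing robotics automation",
    "robotics automation smart factory industry strategy",
    "digital government services broadband citizen data",
    "broadband digital services e-government data strategy",
]

# TF-IDF features -> low-dimensional projection -> k-means clusters.
X = TfidfVectorizer().fit_transform(docs)
X_reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)
# Documents about the same policy theme should share a cluster label.
```

On real corpora the cluster labels are then read against each country’s geographic, economic and political circumstances, which is the benchmarking step the abstract describes.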
Findings
Text mining provides meaningful clustering results on similarities and differences between countries regarding their national I4.0 and DT policies, aligned with their geographic, economic and political circumstances. Findings also shed light on the DT strategic landscape and the key themes spanning various policy dimensions. Drawing from the Turkish case, political options are discussed in the context of developing (follower) countries’ I4.0 and DT.
Practical implications
The paper reveals meaningful clustering results on similarities and differences between countries regarding their national I4.0 and DT policies, reflecting political proximities aligned with their geographic, economic and political circumstances. This can help policymakers to comparatively understand national DT and I4.0 policies and use this knowledge to incorporate collaborative and competitive measures into their own policies.
Originality/value
This paper provides a unique combined methodology for text mining-based policy analysis in the DT context, which has not previously been adopted. In an era when computational social science and machine learning have gained importance and adaptability in political and social science fields and in the technology and innovation management discipline, the clustering applications revealed similar and differing policy patterns in a timely and unbiased manner.
Abstract
This paper provides guidelines for the design and execution of survey research in operations management (OM). The specific requirements of survey research aimed at gathering and analysing data for theory testing are contrasted with other types of survey research. The focus is motivated by the need to tackle the various issues which arise in the process of survey research. The paper does not intend to be exhaustive: its aim is to guide the researcher, presenting a systematic picture which synthesises suitable survey practices for research in an OM context. The fundamental aim is to contribute to an increase in the quality of OM research and, as a consequence, to the status of the OM discipline among the scientific community.