Search results
1 – 10 of over 73,000
Abstract
Purpose
Recent advances in wireless communication technology and the popularity of portable computers have created mobile computing environments in which users with battery‐operated palmtops can access information over wireless channels, without restrictions of time or place. In mobile computing environments, mobile users cache data items to use bandwidth efficiently and to improve the response time of mobile transactions. When data items cached by mobile users are updated at the server, the server broadcasts an invalidation report to maintain the consistency of mobile users' caches. However, this method does not guarantee the serializable execution of mobile transactions. The purpose of this paper is to propose the four types of reports for mobile transaction (FTR‐MT) method, which ensures the serializable execution of mobile transactions.
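A minimal sketch of the invalidation-report idea described above; the report format, function and variable names are illustrative assumptions, not the paper's FTR-MT report formats:

```python
# Hypothetical sketch: a mobile client applies a broadcast invalidation
# report to its cache and checks whether a pending transaction's read
# set is still consistent (names are illustrative, not from the paper).

def apply_invalidation_report(cache, read_set, report):
    """Drop cached items named in the broadcast report and decide
    whether a pending transaction's read set is still valid."""
    invalidated = set(report["updated_items"])
    for item in invalidated:
        cache.pop(item, None)              # drop stale copies
    # The transaction may commit only if it read no item that the
    # server has since updated (a serializability pre-check).
    can_commit = read_set.isdisjoint(invalidated)
    return can_commit

cache = {"x": 1, "y": 2, "z": 3}
ok = apply_invalidation_report(cache, {"y"}, {"updated_items": ["x"]})
bad = apply_invalidation_report(cache, {"y"}, {"updated_items": ["y"]})
```

The sketch shows only the consistency check; the paper's contribution lies in the four report types that let this decision be made locally, without a round trip to the server.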
Design/methodology/approach
The paper describes the FTR‐MT method, which is composed of four algorithms: a group report composition algorithm, an immediate commit decision algorithm, a cache consistency algorithm, and a disconnection cache consistency algorithm. To improve the response time of mobile transactions, the FTR‐MT method makes commit decisions using the four types of reports.
Findings
With the FTR‐MT method, mobile users can make commit decisions using the four types of reports, reducing the response time of mobile transactions. Furthermore, the method improves cache efficiency when a mobile user's disconnection lasts longer than the broadcast interval of the window report.
Originality/value
This paper proposes a new method, called FTR‐MT, for guaranteeing the serializable execution of mobile transactions using four types of reports. The method also prevents the entire cache from being dropped, even when a mobile host's disconnection is longer than the broadcast interval of a window report. An analytical model shows the method to be superior to other methods in terms of the average response time and commit rate of mobile transactions, and bandwidth usage.
Abstract
Purpose
The purpose of this paper is to demonstrate the technical feasibility of implementing multi-view visualization methods to assist auditors in reviewing the integrity of high-volume accounting transactions. Modern enterprise resource planning (ERP) systems record several thousand transactions daily. This makes it difficult to find a few instances of anomalous activities among legitimate transactions. Although continuous auditing and continuous monitoring systems perform substantial analytics, they often produce lengthy reports that require painstaking post-analysis. Approaches that reduce the burden of excessive information are more likely to contribute to the overall effectiveness of the audit process. The authors address this issue by designing and testing the use of visualization methods to present information graphically, to assist auditors in detecting anomalous and potentially fraudulent accounts payable transactions. The strength of the authors' approach is its capacity for discovery and recognition of new and unexpected insights.
Design/methodology/approach
Data were obtained from the SAP enterprise resource planning (ERP) system of a real-world organization. A framework for performing visual analytics was developed and applied to the data to determine its usefulness and effectiveness in identifying anomalous activities.
Findings
The paper provides valuable insights into understanding the use of different types of visualizations to effectively identify anomalous activities.
Research limitations/implications
Because this study emphasizes asset misappropriation, generalizing these findings to other categories of fraud, such as accounts receivable, must be made with caution.
Practical implications
This paper provides a framework for developing an automated visualization solution which may have implications in practice.
Originality/value
This paper demonstrates the need to understand the effectiveness of visualizations in detecting accounting fraud. This is directly applicable to organizations investigating methods of improving fraud detection in their ERP systems.
Ruey‐Kei Chiu, S.C. Lenny Koh and Chi‐Ming Chang
Abstract
Purpose
The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.
Design/methodology/approach
It is based on a case study of enterprise distributed database aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication was used to aggregate the distributed databases into the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme, but update transactions were allowed only on the distributed databases.
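A drastically simplified sketch of such a lazy refresh cycle, with plain key-value records standing in for the standardized XML messages and all names invented for illustration:

```python
# Hypothetical sketch of lazy replication: updates happen only at the
# distributed sites, which log them; a refresh job later replays any
# logged transactions the central database has not yet applied.

def refresh(central, site_log, last_applied):
    """Replay update transactions logged after position `last_applied`
    at a distributed site into the central database."""
    for seq, (key, value) in enumerate(site_log):
        if seq > last_applied:
            central[key] = value           # asynchronous, deferred apply
            last_applied = seq
    return last_applied

central = {}
site_log = [("patient-1", "dose-1"), ("patient-2", "dose-1"),
            ("patient-1", "dose-2")]
pos = refresh(central, site_log, last_applied=-1)
```

Tracking the last applied log position is what makes the scheme incremental: a later refresh call with the returned position replays only new updates.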
Findings
It was found that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardization of message exchange between the distributed and central databases.
Research limitations/implications
The transaction records are stored and transferred in standardized XML format. Record transformation and interpretation are more time‐consuming, but XML offers higher transportability and compatibility across platforms in data refreshment with equal performance. The distributed database designer should manage these issues as well as assure quality.
Originality/value
The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML message for transaction exchange.
Abstract
Most automated library systems include a transaction logging component. Yet this fact may be among the best kept secrets in the automated library arena. Often only a few people within a library are aware of its existence, and even fewer have access to the transaction log data. This is unfortunate, since the concrete data garnered by transaction logs can provide bibliographic instructors, reference staff members, systems librarians, and system designers with unique and valuable insights into the patron/system interaction.
Muhammad Al-Abdullah, Izzat Alsmadi, Ruwaida AlAbdullah and Bernie Farkas
Abstract
Purpose
The paper posits that a solution for businesses seeking privacy-friendly data repositories for their customers' data is to change from the traditional centralized repository to a trusted, decentralized data repository. Blockchain is a technology that provides such a data repository. However, the European Union's General Data Protection Regulation (GDPR) assumed a centralized data repository, and it is commonly argued that blockchain technology is therefore not usable. This paper aims to posit a framework for adopting a blockchain that follows the GDPR.
Design/methodology/approach
The paper uses the Levy and Ellis’ narrative review of literature methodology, which is based on constructivist theory posited by Lincoln and Guba. Using five information systems and computer science databases, the researchers searched for studies using the keywords GDPR and blockchain, using a forward and backward search technique. The search identified a corpus of 416 candidate studies, from which the researchers applied pre-established criteria to select 39 studies. The researchers mined this corpus for concepts, which they clustered into themes. Using the accepted computer science practice of privacy by design, the researchers combined the clustered themes into the paper’s posited framework.
Findings
The paper posits a framework that provides architectural tactics for designing a blockchain that follows the GDPR to enhance privacy. The framework explicitly addresses the challenges of GDPR compliance with the decentralized storage of personal data, which the regulation did not envision. The framework addresses the blockchain–GDPR tension by establishing trust between a business and its customers vis-à-vis storing customers' data. Trust is established through the blockchain's capability of giving customers private keys and control over their data, e.g. its processing and access.
Research limitations/implications
The paper provides a framework that demonstrates that blockchain technology can be designed for use in GDPR compliant solutions. In using the framework, a blockchain-based solution provides the ability to audit and monitor privacy measures, demonstrates a legal justification for processing activities, incorporates a data privacy policy, provides a map for data processing and ensures security and privacy awareness among all actors. The research is limited to a focus on blockchain–GDPR compliance; however, future research is needed to investigate the use of the framework in specific domains.
Practical implications
The paper posits a framework that identifies the strategies and tactics necessary for GDPR compliance. Practitioners need to complement the framework with rigorous privacy risk management, i.e. conducting a privacy risk analysis, identifying strategies and tactics to address such risks and preparing a privacy impact assessment that enhances the accountability and transparency of a blockchain.
Originality/value
With the increasingly strategic use of data by businesses and the countervailing growth of data privacy regulation, alternative technologies could provide businesses with a means to nurture trust with their customers regarding collected data. However, it is commonly assumed that the decentralized approach of blockchain technology cannot be applied to this business need. This paper posits a framework for designing a blockchain that follows the GDPR, thereby providing an alternative for businesses to collect customers' data while ensuring their customers' trust.
Rachel K. Fischer, Aubrey Iglesias, Alice L. Daugherty and Zhehan Jiang
Abstract
Purpose
The article presents a methodology that can be used to analyze data from the transaction log of EBSCO Discovery Service searches recorded in Google Analytics. It explains the steps to follow for exporting the data, analyzing the data, and recreating searches. The article provides suggestions to improve the quality of research on the topic. It also includes advice to vendors on improving the quality of transaction log software.
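One step of such a workflow, pulling search terms back out of exported page paths, might look like the following; the column name `Page` and the query parameter `q` are assumptions for illustration, not EBSCO Discovery Service or Google Analytics specifics:

```python
import csv
import io
from urllib.parse import parse_qs, urlparse

# Hypothetical sketch: recreate the search terms recorded in an
# exported transaction log of discovery-layer page paths.

def extract_queries(csv_text, param="q"):
    """Pull the search term out of each logged page path."""
    terms = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        qs = parse_qs(urlparse(row["Page"]).query)
        if param in qs:
            terms.append(qs[param][0])
    return terms

export = "Page,Pageviews\n/search?q=data+mining,12\n/search?q=xml,3\n"
queries = extract_queries(export)
```

Decoding the query string (rather than eyeballing raw paths) is what allows searches to be recreated accurately, which is the step the article emphasizes.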
Design/methodology/approach
Case study
Findings
Although Google Analytics can be used to study transaction logs accurately, vendors still need to improve the functionality so librarians can gain the most benefit from it.
Research limitations/implications
The research is applicable to the usage of Google Analytics with EBSCO Discovery Service.
Practical implications
The steps presented in the article can be followed as a step-by-step guide to repeating the study at other institutions.
Social implications
The methodology in this article can be used to assess how library instruction can be improved.
Originality/value
This article provides a detailed description of a transaction log analysis process that other articles have not previously described. This includes a description of a methodology for accurately calculating statistics from Google Analytics data and provides steps for recreating accurate searches from data recorded in Google Analytics.
Chandra Sekhar Kolli and Uma Devi Tatavarthi
Abstract
Purpose
Fraud transaction detection has become a significant factor in communication technologies and electronic commerce systems, as it affects the use of electronic payment. Although various fraud detection methods have been developed, enhancing the performance of electronic payment by detecting fraudsters remains a great challenge in bank transactions.
Design/methodology/approach
This paper aims to design a fraud detection mechanism using the proposed Harris water optimization-based deep recurrent neural network (HWO-based deep RNN). The proposed fraud detection strategy includes three phases, namely, pre-processing, feature selection and fraud detection. Initially, the input transactional data is subjected to the pre-processing phase, where it is transformed using the Box-Cox transformation to remove redundant and noisy values. The pre-processed data is passed to the feature selection phase, where the essential and suitable features are selected using the wrapper model. The selected features enable the classifier to achieve better detection performance. Finally, the selected features are fed to the detection phase, where a deep recurrent neural network classifier performs fraud detection; the classifier is trained by the proposed Harris water optimization algorithm, an integration of water wave optimization and Harris hawks optimization.
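The first two phases of that pipeline can be sketched as follows. This is only illustrative: a simple class-mean separation score stands in for the wrapper model's classifier-based evaluation, the lambda value is arbitrary, and the HWO-trained deep RNN stage is omitted as beyond a short sketch:

```python
import math

# Hypothetical sketch of the pre-processing and feature-selection
# phases: Box-Cox transform, then a greedy wrapper-style ranking.

def box_cox(x, lam=0.5):
    """Box-Cox transform for positive x; lam=0 reduces to log."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def wrapper_select(features, labels, k=1):
    """Keep the k features whose class means are furthest apart
    (a stand-in for evaluating a classifier per feature subset)."""
    def score(idx):
        pos = [row[idx] for row, y in zip(features, labels) if y == 1]
        neg = [row[idx] for row, y in zip(features, labels) if y == 0]
        return abs(sum(pos) / len(pos) - sum(neg) / len(neg))
    ranked = sorted(range(len(features[0])), key=score, reverse=True)
    return ranked[:k]

rows = [[100.0, 5.0], [110.0, 4.0], [3000.0, 5.5], [2900.0, 4.5]]
labels = [0, 0, 1, 1]                    # 1 = fraudulent transaction
rows = [[box_cox(v) for v in row] for row in rows]
selected = wrapper_select(rows, labels, k=1)
```

Here the transaction-amount feature (index 0) separates the classes far better than the second feature, so the wrapper keeps it.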
Findings
The proposed HWO-based deep RNN obtained better performance in terms of metrics such as accuracy, sensitivity and specificity, with values of 0.9192, 0.7642 and 0.9943, respectively.
Originality/value
An effective fraud detection method named HWO-based deep RNN is designed to detect frauds in bank transactions. The optimal features selected using the wrapper model enable the classifier to find fraudulent activities more efficiently. The detection result is evaluated through the optimization model based on the fitness measure, such that the function with the minimal error value is declared the best solution, as it yields better detection results.
Martin Jullum, Anders Løland, Ragnar Bang Huseby, Geir Ånonsen and Johannes Lorentzen
Abstract
Purpose
The purpose of this paper is to develop, describe and validate a machine learning model for prioritising which financial transactions should be manually investigated for potential money laundering. The model is applied to a large data set from Norway’s largest bank, DNB.
Design/methodology/approach
A supervised machine learning model is trained by using three types of historic data: “normal” legal transactions; those flagged as suspicious by the bank’s internal alert system; and potential money laundering cases reported to the authorities. The model is trained to predict the probability that a new transaction should be reported, using information such as background information about the sender/receiver, their earlier behaviour and their transaction history.
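The assembly of training data from those three historic sources can be sketched as follows; the function name and fields are illustrative assumptions, and the point mirrors the study's emphasis on keeping investigated-but-unreported alerts in the training set as labelled negatives rather than discarding them:

```python
# Hypothetical sketch: build a supervised training set where reported
# money-laundering cases are positives, and both normal transactions
# and alerts that were investigated but not reported are negatives.

def build_training_set(normal, alerts_not_reported, reported):
    """Return (features, labels) from the three historic sources."""
    X, y = [], []
    for tx in normal + alerts_not_reported:
        X.append(tx)
        y.append(0)                        # not reported to authorities
    for tx in reported:
        X.append(tx)
        y.append(1)                        # reported to authorities
    return X, y

X, y = build_training_set(
    normal=[{"amount": 120}],
    alerts_not_reported=[{"amount": 9000}],
    reported=[{"amount": 9500}, {"amount": 12000}],
)
```

Any real model would, as the study describes, also use sender/receiver background information and transaction history as features, not amounts alone.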
Findings
The paper demonstrates that the common approach of not using non-reported alerts (i.e. transactions that are investigated but not reported) in the training of the model can lead to sub-optimal results. The same applies to the use of normal (un-investigated) transactions. Our developed method outperforms the bank’s current approach in terms of a fair measure of performance.
Originality/value
This research study is one of very few published anti-money laundering (AML) models for suspicious transactions that have been applied to a realistically sized data set. The paper also presents a new performance measure specifically tailored to compare the proposed method to the bank’s existing AML system.
Marian Alexander Dietzel, Nicole Braun and Wolfgang Schäfers
Abstract
Purpose
The purpose of this paper is to examine internet search query data provided by “Google Trends”, with respect to its ability to serve as a sentiment indicator and improve commercial real estate forecasting models for transactions and price indices.
Design/methodology/approach
This paper examines internet search query data provided by “Google Trends”, with respect to its ability to serve as a sentiment indicator and improve commercial real estate forecasting models for transactions and price indices.
Findings
The empirical results show that all models augmented with Google data, combining both macro and search data, significantly outperform baseline models that omit internet search data. Models based on Google data alone outperform the baseline models in all cases. The models achieve a reduction in mean squared forecasting error over the baseline models of up to 35 and 54 per cent for transactions and prices, respectively.
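As a toy illustration of how such a percentage reduction in mean squared forecasting error is computed (the numbers below are invented, not the paper's):

```python
# Hypothetical sketch: percentage MSE reduction of an augmented
# forecasting model over a baseline model on the same test series.

def mse(actual, predicted):
    """Mean squared error between two equal-length series."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mse_reduction(actual, baseline_pred, augmented_pred):
    """Percentage reduction in MSE achieved by the augmented model."""
    base = mse(actual, baseline_pred)
    aug = mse(actual, augmented_pred)
    return 100 * (base - aug) / base

actual = [10.0, 12.0, 11.0, 13.0]
baseline_pred = [9.0, 13.0, 12.0, 11.0]
augmented_pred = [9.5, 12.5, 11.5, 12.0]
reduction = mse_reduction(actual, baseline_pred, augmented_pred)
```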
Practical implications
The results suggest that Google data can serve as an early market indicator. The findings of this study suggest that the inclusion of Google search data in forecasting models can improve forecast accuracy significantly. This implies that commercial real estate forecasters should consider incorporating this free and timely data set into their market forecasts or when performing plausibility checks for future investment decisions.
Originality/value
This is the first paper applying Google search query data to the commercial real estate sector.
Abstract
Purpose
Owing to the large number of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time consumed in handling large-sized, non-uniform transactions.
Design/methodology/approach
The AR-based outlier design produces consistent, asymptotically distributed results that enhance demand-forecasting abilities. Besides, the inter-quartile range (IQR) formulations proposed in this paper support the detailed analysis of time-series data pairs.
Findings
The high dimensionality of the data and the difficulty of capturing the relationships and differences between data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates the regressive formulation for outlier detection. The deep analysis of the time-varying process and the demands of forecasting combine the AR and IQR formulations for effective outlier detection.
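The combined AR and IQR idea can be sketched as follows, assuming a simple least-squares AR(1) fit and standard 1.5×IQR fences; the series and all names are illustrative, not the paper's AROMLD formulation:

```python
import statistics

# Hypothetical sketch: fit an AR(1) model to a transaction series,
# then flag points whose residuals fall outside the IQR fences.

def ar_iqr_outliers(series):
    """Return indices whose AR(1) residual lies outside 1.5*IQR fences."""
    # Least-squares AR(1) coefficient for x[t] ~ phi * x[t-1]
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    phi = num / den
    resid = [series[t] - phi * series[t - 1] for t in range(1, len(series))]
    q1, _, q3 = statistics.quantiles(resid, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [t + 1 for t, r in enumerate(resid) if r < lo or r > hi]

series = [100, 101, 99, 100, 102, 500, 101, 100, 99, 100]
flags = ar_iqr_outliers(series)
```

The spike at index 5 produces two extreme residuals (the jump up and the drop back), so both index 5 and index 6 are flagged, which is the expected behaviour of residual-based detection around a single-point anomaly.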
Research limitations/implications
The present research focuses on detecting outliers in past financial transactions using the AR model. Predicting the possibility of an outlier in future transactions remains a major issue.
Originality/value
The lack of prior segmentation in ML detection suffers from dimensionality problems, and the absence of a boundary isolating normal from suspicious transactions introduces further limitations. The lack of deep analysis and the time consumption are overcome by using the regression formulation.