Search results

11–20 of over 87,000
Article
Publication date: 19 December 2023

Sunday Olarinre Oladokun and Manya Mainza Mooya

Abstract

Purpose

Challenges of property data in developing markets have been reported by several authors. However, a deep understanding of the actual nature of this phenomenon is largely lacking, as in-depth studies into data challenges in such markets are scarce in the literature. Specifically, the available literature lacks clarity about the actual nature of the data challenges that developing markets pose to valuers and how these affect valuation practice. This study provides this understanding, with a focus on the Lagos property market.

Design/methodology/approach

This study utilises a qualitative research approach. A total of 24 valuers were selected using a snowball sampling technique, and in-depth semi-structured interviews were conducted. The data collected were analysed using thematic analysis with the aid of NVivo 12 software.

Findings

The study finds that the main data-related challenge in the Lagos property market is the lack of a database of market property transactions, rather than the lack or absence of transaction data as emphasised in previous studies. Other data-related challenges identified include weak property rights institutions with attendant transaction costs, underhand dealings among professionals, undocumented charges, undisclosed information, scarcity of data relating to specialised assets, and limited access to the subject property and required documents during valuation. The study also unbundles the factors responsible for these challenges and how they affect valuation practice.

Practical implications

The study has implications for practice: deeper knowledge of data challenges could inform strategies for tackling them.

Originality/value

This study contributes to the body of knowledge by offering a fresh and in-depth perspective on the issue of data challenges in developing markets and on how the peculiar nature of the real estate market shapes those challenges. The qualitative approach adopted in this study allowed for a deep enquiry into the phenomenon and resulted in extended insight into the peculiar nature of data challenges in a typical developing property market.

Details

Journal of Property Investment & Finance, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-578X

Article
Publication date: 29 April 2020

Rachel K. Fischer, Aubrey Iglesias, Alice L. Daugherty and Zhehan Jiang

Abstract

Purpose

The article presents a methodology that can be used to analyze data from the transaction log of EBSCO Discovery Service searches recorded in Google Analytics. It explains the steps to follow for exporting the data, analyzing the data, and recreating searches. The article provides suggestions to improve the quality of research on the topic. It also includes advice to vendors on improving the quality of transaction log software.
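The export-and-tally step of such a transaction log analysis can be sketched in a few lines. The `Page` column name and the `q=` query parameter below are assumptions for illustration, not the article's actual Google Analytics schema:

```python
import csv
import io
from collections import Counter
from urllib.parse import parse_qs, urlparse

def tally_searches(csv_text, page_column="Page"):
    """Tally search terms from a Google Analytics page-path export.

    Assumes each row's page path carries the discovery-layer query in a
    'q=' parameter, e.g. '/results?q=climate+change'. Counts result rows,
    not pageview totals.
    """
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        query = parse_qs(urlparse(row[page_column]).query).get("q")
        if query:
            counts[query[0]] += 1
    return counts

# Hypothetical two-row export for illustration:
sample = "Page,Pageviews\n/results?q=open+access,3\n/results?q=open+access,1\n"
print(tally_searches(sample))  # Counter({'open access': 2})
```

Recreating a search then amounts to re-issuing the recovered `q` value against the discovery interface.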

Design/methodology/approach

Case study

Findings

Although Google Analytics can be used to study transaction logs accurately, vendors still need to improve the functionality so librarians can gain the most benefit from it.

Research limitations/implications

The research is applicable to the usage of Google Analytics with EBSCO Discovery Service.

Practical implications

The steps presented in the article can be followed as a step-by-step guide to repeating the study at other institutions.

Social implications

The methodology in this article can be used to assess how library instruction can be improved.

Originality/value

This article provides a detailed description of a transaction log analysis process that other articles have not previously described. This includes a description of a methodology for accurately calculating statistics from Google Analytics data and provides steps for recreating accurate searches from data recorded in Google Analytics.

Details

Library Hi Tech, vol. 39 no. 1
Type: Research Article
ISSN: 0737-8831

Content available
Book part
Publication date: 9 March 2021

Details

The Emerald Handbook of Blockchain for Business
Type: Book
ISBN: 978-1-83982-198-1

Article
Publication date: 13 August 2020

Chandra Sekhar Kolli and Uma Devi Tatavarthi

Abstract

Purpose

Fraud transaction detection has become a significant factor in communication technologies and electronic commerce systems, as it affects the usage of electronic payment. Although various fraud detection methods have been developed, enhancing the performance of electronic payment by detecting fraudsters remains a great challenge in bank transactions.

Design/methodology/approach

This paper aims to design a fraud detection mechanism using the proposed Harris water optimization-based deep recurrent neural network (HWO-based deep RNN). The proposed fraud detection strategy includes three phases: pre-processing, feature selection and fraud detection. Initially, the input transactional data are subjected to the pre-processing phase, where the Box-Cox transformation is applied to remove redundant and noisy values from the data. The pre-processed data are passed to the feature selection phase, where the essential and suitable features are selected using the wrapper model. The selected features enable the classifier to achieve better detection performance. Finally, the selected features are fed to the detection phase, where a deep recurrent neural network classifier carries out fraud detection; the classifier is trained by the proposed Harris water optimization algorithm, an integration of water wave optimization and Harris hawks optimization.
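The three-phase pipeline can be sketched in miniature. The fixed Box-Cox λ = 0.5, the synthetic data and the nearest-centroid stand-in classifier are all assumptions for illustration; the paper's own detector is the HWO-trained deep RNN, not reproduced here:

```python
import numpy as np

def box_cox(x, lam=0.5):
    # Phase 1: Box-Cox transform for positive data; lambda fixed for illustration
    return (x ** lam - 1) / lam if lam != 0 else np.log(x)

def centroid_accuracy(X, y):
    # Toy stand-in classifier: nearest class centroid, scored on the
    # training data itself (illustration only, not the paper's deep RNN)
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) <
            np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def wrapper_select(X, y, k=2):
    # Phase 2: greedy forward wrapper, adding the feature that most
    # improves the classifier's accuracy until k features are chosen
    chosen = []
    while len(chosen) < k:
        best = max((f for f in range(X.shape[1]) if f not in chosen),
                   key=lambda f: centroid_accuracy(X[:, chosen + [f]], y))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(200, 4))
y = (X[:, 2] > 5).astype(int)       # synthetic labels: only feature 2 matters
Xt = box_cox(X)                     # phase 1: pre-processing
feats = wrapper_select(Xt, y, k=2)  # phase 2: the informative feature is kept
```

Phase 3 would replace `centroid_accuracy` with the HWO-trained deep RNN operating on `Xt[:, feats]`.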

Findings

The proposed HWO-based deep RNN obtained better performance in terms of metrics such as accuracy, sensitivity and specificity, with values of 0.9192, 0.7642 and 0.9943, respectively.

Originality/value

An effective fraud detection method named HWO-based deep RNN is designed to detect frauds in bank transactions. The optimal features selected using the wrapper model enable the classifier to find fraudulent activities more efficiently. The detection result is evaluated through the optimization model's fitness measure, such that the solution with the minimal error value is declared the best, as it yields better detection results.

Open Access
Article
Publication date: 21 January 2020

Martin Jullum, Anders Løland, Ragnar Bang Huseby, Geir Ånonsen and Johannes Lorentzen

Abstract

Purpose

The purpose of this paper is to develop, describe and validate a machine learning model for prioritising which financial transactions should be manually investigated for potential money laundering. The model is applied to a large data set from Norway’s largest bank, DNB.

Design/methodology/approach

A supervised machine learning model is trained by using three types of historic data: “normal” legal transactions; those flagged as suspicious by the bank’s internal alert system; and potential money laundering cases reported to the authorities. The model is trained to predict the probability that a new transaction should be reported, using information such as background information about the sender/receiver, their earlier behaviour and their transaction history.
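The paper's central labelling choice, keeping investigated-but-not-reported alerts in the training set as explicit negatives, can be sketched as follows. The field names and dictionary transactions are illustrative, not DNB's actual schema:

```python
def build_training_set(normal, alerted_not_reported, reported):
    """Assemble one labelled set from the three historic data sources.

    Labels: 1 = reported to the authorities, 0 = not reported. The key
    point is that cleared alerts are kept as negatives rather than
    discarded, which the paper shows avoids sub-optimal training.
    """
    data = []
    for tx in normal:
        data.append((tx, 0))   # un-investigated "normal" legal transactions
    for tx in alerted_not_reported:
        data.append((tx, 0))   # investigated by the alert system, then cleared
    for tx in reported:
        data.append((tx, 1))   # reported as potential money laundering
    features = [tx for tx, _ in data]
    labels = [lab for _, lab in data]
    return features, labels

X, y = build_training_set(
    normal=[{"amount": 120}],
    alerted_not_reported=[{"amount": 9900}],
    reported=[{"amount": 9950}],
)
print(y)  # [0, 0, 1]
```

A supervised classifier trained on `(X, y)` then predicts the probability that a new transaction should be reported.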

Findings

The paper demonstrates that the common approach of not using non-reported alerts (i.e. transactions that are investigated but not reported) in the training of the model can lead to sub-optimal results. The same applies to the use of normal (un-investigated) transactions. Our developed method outperforms the bank’s current approach in terms of a fair measure of performance.

Originality/value

This research study is one of very few published anti-money laundering (AML) models for suspicious transactions that have been applied to a realistically sized data set. The paper also presents a new performance measure specifically tailored to compare the proposed method to the bank’s existing AML system.

Details

Journal of Money Laundering Control, vol. 23 no. 1
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 26 August 2014

Marian Alexander Dietzel, Nicole Braun and Wolfgang Schäfers

Abstract

Purpose

The purpose of this paper is to examine internet search query data provided by “Google Trends”, with respect to its ability to serve as a sentiment indicator and improve commercial real estate forecasting models for transactions and price indices.

Design/methodology/approach

Forecasting models for commercial real estate transactions and price indices are augmented with internet search query data from “Google Trends” and benchmarked against baseline models estimated without internet search data.

Findings

The empirical results show that all models augmented with Google data, combining both macro and search data, significantly outperform baseline models which omit internet search data. Models based on Google data alone outperform the baseline models in all cases. The models achieve a reduction over the baseline models in mean squared forecasting error for transactions and prices of up to 35 and 54 per cent, respectively.
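The reported error reduction is a simple ratio of mean squared forecasting errors. The index values below are hypothetical, chosen only to illustrate the calculation:

```python
def mse(actual, forecast):
    # Mean squared forecasting error
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mse_reduction_pct(actual, baseline, augmented):
    """Percentage reduction in MSE of the search-data-augmented model
    relative to the baseline model."""
    return 100 * (1 - mse(actual, augmented) / mse(actual, baseline))

# Hypothetical price-index values purely for illustration
actual    = [100, 102, 105, 103]
baseline  = [ 98, 100, 101, 107]
augmented = [ 99, 101, 104, 104]
print(round(mse_reduction_pct(actual, baseline, augmented), 1))  # → 90.0
```

The paper's 35 and 54 per cent figures are this same quantity computed for its transaction and price forecasts.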

Practical implications

The results suggest that Google data can serve as an early market indicator. The findings of this study suggest that the inclusion of Google search data in forecasting models can improve forecast accuracy significantly. This implies that commercial real estate forecasters should consider incorporating this free and timely data set into their market forecasts or when performing plausibility checks for future investment decisions.

Originality/value

This is the first paper applying Google search query data to the commercial real estate sector.

Details

Journal of Property Investment & Finance, vol. 32 no. 6
Type: Research Article
ISSN: 1463-578X

Article
Publication date: 2 May 2017

Kannan S. and Somasundaram K.

Abstract

Purpose

Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time consumption of handling large-sized, non-uniform transactions.

Design/methodology/approach

The AR-based outlier design produces consistent, asymptotically distributed results that enhance demand-forecasting abilities. Besides, the inter-quartile range (IQR) formulations proposed in this paper support the detailed analysis of time-series data pairs.
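The general idea of pairing an AR model with IQR screening can be sketched as follows. This is a generic AR(1)-residual outlier check under Tukey fences, not the paper's exact AROMLD formulation:

```python
import statistics

def ar1_iqr_outliers(series, k=1.5):
    """Fit AR(1) by least squares, then flag points whose one-step
    residual falls outside the Tukey fences (Q1 - k*IQR, Q3 + k*IQR)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
           sum((a - mx) ** 2 for a in x))        # least-squares AR(1) slope
    resid = [b - (my + phi * (a - mx)) for a, b in zip(x, y)]
    q1, _, q3 = statistics.quantiles(resid, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i + 1 for i, r in enumerate(resid) if not lo <= r <= hi]

series = [10, 11, 10, 12, 11, 60, 11, 10, 12, 11]  # one injected spike
print(ar1_iqr_outliers(series))  # flags the spike at index 5
```

Flagged indices would then be handed to an analyst as candidate suspicious transactions.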

Findings

High dimensionality and the difficulty of characterising the relationships and differences between data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates the regressive formulation for outlier detection. The deep analysis of the time-varying process and the demands of forecasting combine the AR and IQR formulations for effective outlier detection.

Research limitations/implications

The present research focuses on detecting outliers in past financial transactions using the AR model. Predicting the possibility of an outlier in future transactions remains a major issue.

Originality/value

Without prior segmentation, ML detection suffers from high dimensionality. Besides, the absence of a boundary isolating normal from suspicious transactions imposes further limitations. The regression formulation overcomes the lack of deep analysis and the high time consumption.

Details

Journal of Money Laundering Control, vol. 20 no. 2
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 1 February 1993

Martin Kurth

Abstract

Since the earliest transaction monitoring studies, researchers have encountered the boundaries that define transaction log analysis as a methodology for studying the use of online information retrieval systems. Because, among other reasons, transaction log databases contain relatively few fields and lack sufficient retrieval tools, students of transaction log data have begun to ask as many questions about what transaction logs cannot reveal as about what they can. Researchers have conducted transaction monitoring studies to understand the objective phenomena embodied in this statement: “Library patrons enter searches into online information retrieval systems.” Transaction log data effectively describe what searches patrons enter and when they enter them, but they do not reflect, except through inference, who enters the searches, why they enter them, and how satisfied they are with their results.

Details

Library Hi Tech, vol. 11 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 31 May 2022

Mark E. Lokanan

Abstract

Purpose

This paper reviews the literature on applying visualization techniques to detect credit card fraud (CCF) and suspicious money laundering transactions.

Design/methodology/approach

In surveying the literature on visual fraud detection in these two domains, this paper reviews the current use of visualization techniques, the variations of visual analytics used and the challenges of these techniques.

Findings

The findings reveal how visual analytics is used to detect outliers in CCF detection and identify links to criminal networks in money laundering transactions. Graph methodology and unsupervised clustering analyses are the most dominant types of visual analytics used for CCF detection. In contrast, network and graph analytics are heavily used in identifying criminal relationships in money laundering transactions.
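One basic building block of the link analysis described above is grouping accounts into connected components of the transfer graph. The union-find sketch below is a generic technique and hypothetical data, not a method or dataset from any of the surveyed papers:

```python
from collections import defaultdict

def linked_accounts(transfers):
    """Group accounts into connected components of the transfer graph,
    a minimal form of the network analytics used to surface criminal
    relationships. Uses union-find over (sender, receiver) pairs."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for s, r in transfers:
        parent[find(s)] = find(r)           # union sender with receiver

    groups = defaultdict(set)
    for a in parent:
        groups[find(a)].add(a)
    return sorted(sorted(g) for g in groups.values())

transfers = [("A", "B"), ("B", "C"), ("X", "Y")]
print(linked_accounts(transfers))  # [['A', 'B', 'C'], ['X', 'Y']]
```

Visual tools typically render each component as a cluster so an analyst can inspect suspicious chains of transfers.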

Originality/value

Some common challenges in using visualization techniques to identify fraudulent transactions in both domains relate to data complexity and fraudsters’ ability to evade monitoring mechanisms.

Details

Journal of Money Laundering Control, vol. 26 no. 3
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 1 March 2005

Lynn Chmelir

Abstract

Purpose

To report and analyze transaction data over a four‐year period for patron‐initiated borrowing via the Cascade union catalog as well as transaction data for traditional ILL in a consortium of six academic libraries in Washington State.

Design/methodology/approach

Transaction data for patron‐initiated borrowing via the Cascade union catalog were gathered from statistics produced by the INN-Reach software. Data for ILL were collected via a survey of library staff. Data for returnables and copies were analyzed at the consortium and institutional levels.

Findings

In the third year of patron‐initiated borrowing, traditional ILL transactions for returnables had decreased 21 per cent consortium‐wide, the total number of transactions for returnables had increased 271.9 per cent, and the transactions for copies remained steady. Although the borrowing and lending patterns at the six libraries varied, each loaned and borrowed more returnables via patron‐initiated borrowing than via traditional ILL.

Research limitations/implications

This study describes activity at a single consortium of only six libraries. Since the Cascade libraries have now merged into a larger consortium, the Orbis Cascade Alliance, it would be interesting to collect and analyze new data from the larger group to see if patterns have changed.

Practical implications

The increased volume of returnables delivered to users in this consortium suggests that patron‐initiated borrowing is an effective method for resource sharing. Traditional ILL remains a necessary alternative for copies and books not available within the consortium.

Originality/value

This is the first study to examine consortium‐wide transaction data for both patron‐initiated borrowing and traditional interlibrary loan for a sustained period of time.

Details

Interlending & Document Supply, vol. 33 no. 1
Type: Research Article
ISSN: 0264-1615
