Search results

1–10 of over 93,000
Book part
Publication date: 13 March 2023

MengQi (Annie) Ding and Avi Goldfarb

Abstract

This article reviews the quantitative marketing literature on artificial intelligence (AI) through an economics lens. We apply the framework in Prediction Machines: The Simple Economics of Artificial Intelligence to systematically categorize 96 research papers on AI in marketing academia into five levels of impact, which are prediction, decision, tool, strategy, and society. For each paper, we further identify each individual component of a task, the research question, the AI model used, and the broad decision type. Overall, we find there are fewer marketing papers focusing on strategy and society, and accordingly, we discuss future research opportunities in those areas.

Details

Artificial Intelligence in Marketing
Type: Book
ISBN: 978-1-80262-875-3

Details

Machine Learning and Artificial Intelligence in Marketing and Sales
Type: Book
ISBN: 978-1-80043-881-1

Article
Publication date: 18 November 2019

Guanying Huo, Xin Jiang, Zhiming Zheng and Deyi Xue

Abstract

Purpose

Metamodeling is an effective method for approximating the relations between input and output parameters when collecting the data to build those relations requires significant experimental and simulation effort. This paper aims to develop a new sequential sampling method for adaptive metamodeling of data with highly nonlinear relations between input and output parameters.

Design/methodology/approach

In this method, the Latin hypercube sampling method is used to generate the initial data, and the kriging method is used to construct the metamodel. The input parameter values at which the next output data are collected, to update the current metamodel, are determined from the quality of the data in both the input and output parameter spaces. Uniformity is used to evaluate data in the input parameter space; leave-one-out errors and sensitivities are used to evaluate data in the output parameter space.
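The two building blocks named here, a Latin hypercube initial design and a kriging metamodel, can be sketched in a few lines (a rough numpy-only illustration; the test function `f` is invented, and the maximum-predictive-variance selection rule is a common stand-in for the paper's uniformity and leave-one-out criteria, not its actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    """n stratified points in [0,1)^d: one point per axis-aligned cell."""
    cells = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (cells + rng.random((n, d))) / n

def kriging_predict(X, y, Xs, ls=0.2, nugget=1e-8):
    """Simple kriging: GP regression with a Gaussian correlation."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ls ** 2))
    K_inv = np.linalg.inv(k(X, X) + nugget * np.eye(len(X)))
    Ks = k(Xs, X)
    mean = Ks @ K_inv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mean, np.sqrt(np.maximum(var, 0.0))

def f(x):  # invented expensive black-box response
    return np.sin(8 * x[:, 0]) + x[:, 1] ** 2

X = latin_hypercube(12, 2)        # initial design
y = f(X)
cand = latin_hypercube(200, 2)    # candidate next inputs
_, std = kriging_predict(X, y, cand)
x_next = cand[np.argmax(std)]     # sample where the metamodel is least sure
print(x_next)
```

Each sequential step would append `x_next` and `f(x_next)` to the design and refit the metamodel.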

Findings

The new method has been compared with existing methods to demonstrate its effectiveness both in approximation and in solving global optimization problems. Finally, an engineering case is used to verify the method further.

Originality/value

This paper provides an effective sequential sampling method for adaptive metamodeling to approximate highly nonlinear relations between input and output parameters.

Details

Engineering Computations, vol. 37 no. 3
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 13 March 2023

Xiao Liu

Abstract

The expansion of marketing data is encouraging the growing use of deep learning (DL) in marketing. I summarize the intuition behind deep learning and explain the mechanisms of six popular algorithms: three discriminative (the convolutional neural network (CNN), the recurrent neural network (RNN), and the Transformer), two generative (the variational autoencoder (VAE) and the generative adversarial network (GAN)), and one reinforcement learning algorithm (the deep Q-network, DQN). I discuss which marketing problems DL is useful for and what has fueled its growth in recent years. I emphasize the power and flexibility of DL for modeling unstructured data when formal theories and knowledge are absent. I also describe future research directions.

Article
Publication date: 4 April 2016

Pin Shen Teh, Ning Zhang, Andrew Beng Jin Teoh and Ke Chen

Abstract

Purpose

The use of mobile devices in handling our daily activities that involve the storage or access of sensitive data (e.g. on-line banking, paperless prescription services, etc.) is becoming very common. These mobile electronic services typically use a knowledge-based authentication method to authenticate a user (claimed identity). However, this authentication method is vulnerable to several security attacks. To counter the attacks and to make the authentication process more secure, this paper aims to investigate the use of touch dynamics biometrics in conjunction with a personal identification number (PIN)-based authentication method, and demonstrate its benefits in terms of strengthening the security of authentication services for mobile devices.

Design/methodology/approach

The investigation made use of three lightweight matching functions and a comprehensive reference data set collected from 150 subjects.

Findings

The investigative results show that, with this multi-factor authentication approach, even when the PIN is exposed, as many as nine out of ten impersonation attempts can be successfully identified. It has also been discovered that accuracy can be increased by combining different feature data types and by increasing the input string length.
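A "lightweight matching function" of the kind the study uses can be sketched as a simple distance threshold over PIN-entry timing features. Everything below, including the feature values, the threshold, and the Manhattan-distance matcher, is an invented illustration, not one of the paper's three actual functions:

```python
import numpy as np

def manhattan_score(template, probe):
    """Mean absolute deviation between stored and probe timing features."""
    return float(np.mean(np.abs(np.asarray(template) - np.asarray(probe))))

def verify(template, probe, threshold=25.0):
    # Accept the claimant only if their PIN-entry rhythm is close enough
    # to the enrolled template (second factor on top of the PIN itself).
    return manhattan_score(template, probe) <= threshold

# Hypothetical key-hold and inter-key times (ms) for a 4-digit PIN
enrolled = [102, 95, 110, 98, 180, 165, 172]
genuine  = [100, 97, 108, 101, 175, 170, 168]
impostor = [140, 60, 150, 70, 240, 120, 210]

print(verify(enrolled, genuine))   # True: close timings, accepted
print(verify(enrolled, impostor))  # False: different rhythm, rejected
```

An impostor who knows the PIN still has to reproduce the legitimate user's typing rhythm, which is the security gain the abstract reports.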

Originality/value

The novel contributions of this paper are twofold. First, it describes how a comprehensive experiment was set up to collect touch dynamics biometrics data, and the collected data set is made publicly available, which may facilitate further research in the problem domain. Second, the paper demonstrates how the data set may be used to strengthen the protection of resources accessible via mobile devices.

Details

International Journal of Pervasive Computing and Communications, vol. 12 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 27 July 2010

Dragan Ivanović, Gordana Milosavljević, Branko Milosavljević and Dušan Surla

Abstract

Purpose

Entering data about published research results should be implemented as a web application that enables authors to input their own data without knowledge of the bibliographic standard. The aim of this research is to develop a research management system based on a bibliographic standard and to provide data exchange with other research management systems based on the Common European Research Information Format (CERIF) data model.

Design/methodology/approach

Object‐oriented methodology was used for information system modelling. The modelling was carried out using the computer‐aided software engineering (CASE) tool that supports the Unified Modelling Language 2.0 (UML 2.0). The implementation was realised using a set of open‐source solutions written in Java.

Findings

The result is a system for managing data about published research results. Its main features are the following: public access via the web; authors input data about their own publications themselves; data about publications are stored in the MARC 21 format; and the user interface enables authors to input data without knowledge of the MARC 21 format.
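The mapping from author-facing input to MARC 21 storage can be sketched minimally as follows. The form fields, function name, and tag/subfield choices are illustrative assumptions, not the system's actual mapping, though the tags follow common MARC 21 usage:

```python
def to_marc_fields(pub):
    """Map a simple web-form dict to MARC 21 tag/subfield pairs so
    authors never have to know the bibliographic format themselves."""
    return {
        "100": {"a": pub["author"]},           # main entry: personal name
        "245": {"a": pub["title"]},            # title statement
        "260": {"c": str(pub["year"])},        # publication date
        "773": {"t": pub.get("journal", "")},  # host item entry (journal)
    }

form_input = {"author": "Ivanović, Dragan",
              "title": "A CERIF-compatible research management system",
              "year": 2010, "journal": "Program"}
record = to_marc_fields(form_input)
print(record["245"]["a"])
```

The author only ever sees the plain form keys; the bibliographic tags stay on the storage side, which is the separation the Findings describe.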

Research limitations/implications

A method of verifying the accuracy of entered data has not yet been considered. It is necessary to allow authorised persons to verify the data; once the data are verified, the authors can no longer change them.

Practical implications

This software system has been verified and tested on data about published results of researchers employed at the University of Novi Sad in Serbia. This system can be used for evaluation and reporting on scientific research results, generating bibliographies of researchers, research groups and institutions etc.

Originality/value

A part of the research management system, for entering data about authors and published results, is implemented. Data about publications are stored in a bibliographic format, and authors can input data about their own publications without knowledge of the bibliographic standard. The main feature of the system architecture is the mutual independence of the component for interaction with users and the component for persisting and retrieving data from the bibliographic records database.

Details

Program, vol. 44 no. 3
Type: Research Article
ISSN: 0033-0337

Details

An Input-output Analysis of European Integration
Type: Book
ISBN: 978-0-44451-088-4

Article
Publication date: 1 February 1977

Janet S. Pickles

Abstract

Input methods for circulation systems are considered, with particular attention to the range of checks possible at the time data are input. The amount of checking possible depends on whether the system is off‐line, on‐line real‐time, or hybrid, although the advent of microprocessors enables extra checking in all these types of system. Examples are given of checks that can detect hardware malfunction and operator error, and that can compare information as it is input against variable sets of data (e.g., in an on‐line real‐time system, against the number of books on loan to a borrower). A list of points to consider when assessing data collection equipment is given, followed by notes on the three kinds of equipment (ALS, Plessey, Telepen) most widely used in United Kingdom circulation systems. There is an increasing range of equipment and software to choose from, and the major consideration when assessing the options must be the individual library's requirements.
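The input-time checking described here can be illustrated with a check-digit validator of the sort circulation labels carried. This is a generic modulus-11 scheme, chosen purely for illustration; the actual ALS, Plessey, and Telepen label formats each differed:

```python
def mod11_valid(label: str) -> bool:
    """Validate a label whose last character is a modulus-11 check
    digit ('X' stands for a remainder of 10); weights 2, 3, 4, ...
    run from the rightmost data digit. Generic illustrative scheme,
    not any specific vendor's format."""
    digits, check = label[:-1], label[-1]
    total = sum(int(d) * w for w, d in enumerate(reversed(digits), start=2))
    expected = (11 - total % 11) % 11
    return check == ("X" if expected == 10 else str(expected))

print(mod11_valid("123455"))  # True: check digit matches
print(mod11_valid("123456"))  # False: keying error detected
```

A check like this catches most single-digit misreads and transpositions at input time, before the record ever reaches the circulation file.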

Details

Program, vol. 11 no. 2
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 25 June 2019

Sushama Murty and Resham Nagpal

Abstract

Purpose

The purpose of this paper is to measure technical efficiency of Indian thermal power sector employing the recent by-production approach.

Design/methodology/approach

The by-production approach is used in conjunction with data from the Central Electricity Authority (CEA) of India to compute the output-based Färe–Grosskopf–Lovell (FGL) efficiency index and its decomposition into productive and environmental efficiency indexes for the Indian thermal power plants (ITPPs).

Findings

The authors show that, given the aggregated nature of the coal data reported by the CEA, the CEA's computation of CO2 emissions through a deterministic linear formula that does not distinguish between coal types, and the tiny share of oil in coal-based power plants, the computed output-based environmental efficiency indexes are no longer informative. Meaningful measurement of environmental efficiency using CEA data is possible only along the dimension of the coal input. Productive efficiency is positively associated with the engineering concept of thermodynamic/energy efficiency and is also high for power plants with high operating availabilities, reflecting better management and O&M practices. Both these factors are higher for privately and centrally owned than for state-owned power-generating companies. The example of Sipat demonstrates the importance of (ultra)supercritical technologies in increasing the productive and thermodynamic efficiencies of the ITPPs, while also reducing the CO2 emitted per unit of net electricity generated.

Originality/value

This paper uses the by-production approach for the first time to measure the technical efficiency of ITPPs and highlights how the nature of the Indian data affects efficiency measurement.

Article
Publication date: 3 January 2023

Gangting Huang, Yunfei Li, Yajun Luo, Shilin Xie and Yahong Zhang

Abstract

Purpose

In order to improve the computation efficiency of the four-point rainflow algorithm, a one-stage extraction four-point rainflow algorithm is proposed based on a novel data preprocessing method.

Design/methodology/approach

In this new algorithm, the procedure of cycle counting is simplified by introducing the data preprocessing method. The high efficiency of the new algorithm makes it a preferable candidate for online fatigue life estimation in structural health monitoring systems.

Findings

With the data preprocessing method, all equivalent cycles are extracted in just one stage of the cycle extraction process, instead of the two stages required by the four-point rainflow algorithm, in which cycle extraction must also be performed on the doubled residue. Besides, the new algorithm leaves no residue. Extensive numerical simulation results demonstrate that the accuracy of the new algorithm is the same as that of the four-point rainflow algorithm. Moreover, a comparative study based on a long input data sequence shows that the computational efficiency of the new algorithm is 42% higher than that of the four-point rainflow algorithm.
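For context, the standard four-point rainflow extraction that the new algorithm improves on can be sketched as follows (a textbook rendering of the classic baseline, not the paper's one-stage variant):

```python
def four_point_rainflow(turning_points):
    """Classic four-point rainflow cycle counting: whenever the inner
    range of the last four turning points is bracketed by its two
    neighbours, extract it as a full cycle. Returns the extracted
    cycles and the residue left on the stack."""
    stack, cycles = [], []
    for p in turning_points:
        stack.append(p)
        while len(stack) >= 4:
            s1, s2, s3, s4 = stack[-4:]
            inner, left, right = abs(s3 - s2), abs(s2 - s1), abs(s4 - s3)
            if inner <= left and inner <= right:
                cycles.append((s2, s3))  # extract the inner cycle
                del stack[-3:-1]         # drop s2, s3; keep s1, s4
            else:
                break
    return cycles, stack

cycles, residue = four_point_rainflow([1, 5, 2, 4, 1])
print(cycles, residue)   # [(2, 4)] [1, 5, 1]
```

In the classic algorithm this residue must be doubled and reprocessed in a second stage; eliminating that second pass is the efficiency gain the Findings report.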

Originality/value

This merit makes the new algorithm preferable in application scenarios where fatigue life estimation must be accomplished online from massive measured data. The gain is attributable to the preprocessing of the input data sequence before cycle counting, which provides useful guidance for improving the efficiency of existing algorithms.
