Search results

1 – 10 of 38
Book part
Publication date: 13 March 2023

Details

Artificial Intelligence in Marketing
Type: Book
ISBN: 978-1-80262-875-3

Book part
Publication date: 10 February 2012

Kin Fun Li, Yali Wang and Wei Yu

Abstract

Purpose — To develop methodologies to evaluate search engines according to an individual's preference in an easy and reliable manner, and to formulate user-oriented metrics to compare freshness and duplication in search results.

Design/methodology/approach — A personalised evaluation model for comparing search engines is designed as a hierarchy of weighted parameters. Commonly found search engine features and performance measures are given quantitative and qualitative ratings by an individual user. Furthermore, three performance measurement metrics are formulated and presented as histograms for visual inspection. A methodology is introduced to quantitatively compare and recognise the different histogram patterns within the context of search engine performance.

Findings — Precision and recall are the fundamental measures used in many search engine evaluations due to their simplicity, fairness and reliability. Most recent evaluation models are user oriented and focus on relevance issues. Identifiable statistical patterns are found in performance measures of search engines.

Research limitations/implications — The specific parameters used in the evaluation model could be further refined. A larger scale user study would confirm the validity and usefulness of the model. The three performance measures presented give a reasonably informative overview of the characteristics of a search engine. However, additional performance parameters and their resulting statistical patterns would make the methodology more valuable to the users.

Practical implications — The easy-to-use personalised search engine evaluation model can be tailored to an individual's preference and needs simply by changing the weights and modifying the features considered. A user is able to get an idea of the characteristics of a search engine quickly using the quantitative measure of histogram patterns that represent the search performance metrics introduced.

Originality/value — The presented work is considered original as one of the first search engine evaluation models that can be personalised. This enables a Web searcher to choose an appropriate search engine for his or her needs and hence find the right information in the shortest time with the least effort.
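The hierarchy of weighted parameters described above can be sketched as a simple weighted average of per-feature ratings. This is an illustrative reconstruction, not the authors' actual model: the feature names, weights, and ratings below are invented for the example.

```python
# A minimal sketch of a personalised weighted-parameter evaluation.
# Feature names, weights, and ratings are hypothetical.

def weighted_score(weights, ratings):
    """Combine a user's feature weights with per-engine ratings (0-10)."""
    total_weight = sum(weights.values())
    return sum(weights[f] * ratings[f] for f in weights) / total_weight

# One user's preferences (weights) and their ratings of a single engine.
weights = {"freshness": 0.5, "duplication": 0.2, "interface": 0.3}
ratings = {"freshness": 8, "duplication": 6, "interface": 9}

print(round(weighted_score(weights, ratings), 2))  # 7.9
```

Changing the weights, or adding and removing features, personalises the model to a different user, which is the tailoring the abstract describes.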

Book part
Publication date: 13 March 2023

Xiaohang (Flora) Feng, Shunyuan Zhang and Kannan Srinivasan

Abstract

The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured data and can inform recommendations for increasing profits and consumer utility – if only the model outputs are interpretable enough to earn the trust of consumers and buy-in from companies. To build a foundation for understanding the importance of model interpretation in image analytics, the first section of this article reviews the existing work along three dimensions: the data type (image data vs. video data), model structure (feature-level vs. pixel-level), and primary application (to increase company profits vs. to maximize consumer utility). The second section discusses how the “black box” of pixel-level models leads to legal and ethical problems, but interpretability can be improved with eXplainable Artificial Intelligence (XAI) methods. We classify and review XAI methods based on transparency, the scope of interpretability (global vs. local), and model specificity (model-specific vs. model-agnostic); in marketing research, transparent, local, and model-agnostic methods are most common. The third section proposes three promising future research directions related to model interpretability: the economic value of augmented reality in 3D product tracking and visualization, field experiments to compare human judgments with the outputs of machine vision systems, and XAI methods to test strategies for mitigating algorithmic bias.
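One widely used local, model-agnostic XAI technique of the kind the abstract classifies is occlusion: mask patches of an image and measure how much the model's score drops. The sketch below is a toy illustration under stated assumptions; the `model` is a stand-in scoring function, not a real marketing or vision model.

```python
import numpy as np

# Occlusion-based local explanation (model-agnostic): mask each patch
# of the image and record the drop in the model's score. High values
# in the heat map mark patches the prediction relies on.

def occlusion_map(model, image, patch=4):
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i+patch, j:j+patch] = 0.0
            heat[i // patch, j // patch] = base - model(masked)
    return heat

# Toy "model": responds only to the top-left quadrant of an 8x8 image.
model = lambda img: float(img[:4, :4].sum())
image = np.ones((8, 8))
print(occlusion_map(model, image))
```

Because the technique only queries the model's inputs and outputs, it applies equally to pixel-level "black box" models, which is why model-agnostic methods dominate in the marketing research the article reviews.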

Book part
Publication date: 15 July 2019

David E. Caughlin and Talya N. Bauer

Abstract

Data visualizations in some form or another have served as decision-support tools for many centuries. In conjunction with advancements in information technology, data visualizations have become more accessible and more efficient to generate. In fact, virtually all enterprise resource planning and human resource (HR) information system vendors offer off-the-shelf data visualizations as part of decision-support dashboards as well as stand-alone images and displays for reporting. Plus, advances in programming languages and software such as Tableau, Microsoft Power BI, R, and Python have expanded the possibilities of fully customized graphics. Despite the proliferation of data visualization, relatively little is known about how to design data visualizations for displaying different types of HR data to different user groups, for different purposes, and with the overarching goal of improving the ways in which users comprehend and interpret data visualizations for decision-making purposes. To understand the state of science and practice as they relate to HR data visualizations and data visualizations in general, we review the literature on data visualizations across disciplines and offer an organizing framework that emphasizes the roles data visualization characteristics (e.g., display type, features), user characteristics (e.g., experience, individual differences), tasks, and objectives (e.g., compare values) play in user comprehension, interpretation, and decision-making. Finally, we close by proposing future directions for science and practice.

Details

Research in Personnel and Human Resources Management
Type: Book
ISBN: 978-1-78973-852-0

Book part
Publication date: 4 December 2020

Aarti Mehta Sharma

Abstract

Analytics is the science of examining raw data with the purpose of drawing conclusions about that information and using it for decision-making. Before formal written language, there were pictures which shared ideas, plans, and history. Most of the knowledge that we have of our ancestors comes from these pictures drawn on caves or monuments. In today's world, visualizations in the form of bar charts, scatter plots, or dashboards are essential tools in business intelligence, as they help managers absorb information and make apt decisions quickly. Dashboards in particular are very helpful for managers, as multiple charts and graphs giving the latest information about sales, returns, market share, etc. keep them up to date on the latest developments in the company. There are a number of visualization software packages on the market which are easy to learn and communicate the analyzed data in an easily understood form, the leading ones being Tableau, QlikView, etc., each having its own strengths. This chapter also looks at the pairing of visualization tools with different measurements of data.

Book part
Publication date: 7 December 2021

Joshua Graff Zivin, Lisa B. Kahn and Matthew Neidell

Abstract

In this chapter, we examine the impact of pay-for-performance incentives on learning-by-doing. We exploit personnel data on fruit pickers paid under two distinct compensation contracts: a standard piece rate plan and one with an extra one-time bonus tied to output. Under the latter, we observe bunching of performance just above the bonus threshold, suggesting workers distort their behavior in response to the discrete bonus. Such bunching behavior increases as workers gain experience. At the same time, the bonus contract induces considerable learning-by-doing for workers throughout the productivity distribution who presumably hope to one day hit the target, and these improvements significantly outweigh the losses to the firm from the bunching. In contrast, under the standard piece rate contract, we find minimal evidence of bunching and only small performance improvements at the bottom of the productivity distribution. Our results suggest that contract design can help foster learning on the job, underscoring the importance of dynamic considerations in principal-agent models.
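The bunching pattern described above can be flagged with a very simple diagnostic: compare the mass of daily outputs in narrow bins just above and just below the bonus threshold. This is an illustrative sketch, not the authors' estimator; the output values and bin width are invented.

```python
# Toy bunching diagnostic: ratio of observations in a narrow bin just
# above the bonus threshold to those in the matching bin just below.
# A ratio well above 1 suggests workers pile up past the cutoff.

def bunching_ratio(outputs, threshold, width=5):
    above = sum(threshold <= x < threshold + width for x in outputs)
    below = sum(threshold - width <= x < threshold for x in outputs)
    return above / max(below, 1)

# Simulated picker outputs: mass piles up just past a threshold of 100.
outputs = [88, 92, 95, 97, 101, 101, 102, 103, 104, 110]
print(bunching_ratio(outputs, threshold=100))  # 2.5
```

Under a smooth (no-distortion) distribution the two bins would hold similar counts, so the excess mass above the threshold is the behavioral response to the discrete bonus.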

Details

Workplace Productivity and Management Practices
Type: Book
ISBN: 978-1-80117-675-0

Book part
Publication date: 6 September 2000

Adam Karp

Abstract

Discrimination law has evolved from litigating or prosecuting overt, individual cases of egregious behavior solely by means of anecdotal evidence and eyewitness testimony. Statistical evidence came to bear the imprimatur of the United States Supreme Court in the Seventies as a probative means of discerning guilt or liability, and has been used to shore up patterns of prejudice at a systemic level since. Courtrooms of the Twenty-First Century have struggled to define discrimination through a quantitative lens, nonetheless relying on qualitative evidence to assist the factfinder in rendering a verdict. Some definitions carry more precision and accuracy than others. Consider the inflammatory National Law Journal's indictment of the United States Environmental Protection Agency (‘EPA’) as an example of the latter. In 1992, the National Law Journal ran a Special Investigation of the EPA, claiming that the federal government had fostered a racist imbalance in hazardous site cleanup and its pursuit of polluters. Kudos to the columnists for bringing environmental equity into the spotlight of public debate and for forewarning and encouraging the EPA to conduct its enforcements reflectively, in order to avoid being on the receiving end of a Title VI lawsuit. Nonetheless, the methodology used by the National Law Journal belies a total understanding of the bureaucratic structure that pursued these actions and of the notion of statistical significance. This Article confines itself to Region X's actions between 1995 and 1999, applying linear regression and other statistical tests to determine whether biases, found using the National Law Journal's naive methodology, stand after due consideration of chance. The NLJ approach finds evidence of bias, but the author also conducts more complicated and appropriate analyses, such as those contemplated by the National Guidance. After issuing some provisos, the author dismisses charges of racism or classism. 
While the National Guidance represents a positive first step in identifying environmental justice communities, those with an above-average proportion of lower-class or non-Caucasian inhabitants, it lacks statistical sophistication and econometric depth. This Article concludes by recommending the use of normalized racial distributions, Gini coefficients, and Social Welfare Functions to the EPA and to other organizations conducting environmental justice analysis.
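The Gini coefficient the article recommends to the EPA measures how unevenly a quantity (income, or pollution exposure) is distributed across a community. The sketch below uses a standard formula over sorted values; the sample data are invented for illustration.

```python
# Gini coefficient from sorted values: 0 = perfect equality,
# values near 1 = the quantity is concentrated in few hands.

def gini(values):
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

print(gini([1, 1, 1, 1]))             # 0.0  (perfect equality)
print(round(gini([0, 0, 0, 10]), 2))  # 0.75 (highly concentrated)
```

Applied to exposure data for an environmental justice community, a high coefficient would indicate that cleanup burdens or benefits fall very unevenly across its inhabitants.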

Details

Research in Law and Economics
Type: Book
ISBN: 978-1-84950-022-7

Details

Visual Pollution
Type: Book
ISBN: 978-1-80382-042-2

Book part
Publication date: 17 October 2014

Philip Z. Maymin

Abstract

Economic models based on simple rules can result in complex and unpredictable deterministic dynamics with emergent features similar to those of actual economies. I present several such models ranging from cellular automata and register machines to quantum computation. The additional benefit of such models is displayed by extending them to model political entanglement to determine the impact of allowing majority redistributive voting. In general, the insights obtained from simulating the computations of simple rules can serve as an additional way to study economics, complementing equilibrium, literary, experimental, and empirical approaches. I conclude by presenting a minimal model of economic complexity that generates complex economic growth and diminishing poverty without any parameter fitting, and which, when modified to incorporate political entanglement, generates volatile stagnation and greater poverty.
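The kind of simple-rule model the abstract describes can be illustrated with an elementary cellular automaton such as rule 110, whose deterministic local updates generate complex global patterns from a trivial seed. This sketch is generic; any economic interpretation of the cells is left abstract, as it is not specified in the abstract.

```python
# Elementary cellular automaton on a ring: each cell's next state is
# the bit of the rule number indexed by its 3-cell neighborhood.

def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Evolve a single seed cell and render the rows; rule 110 is known
# to produce complex, non-repeating structures.
cells = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The whole "model" is one update rule and an initial state, which is what makes such systems attractive as a complement to equilibrium and empirical approaches: all complexity is emergent rather than parameterized.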

Details

Entangled Political Economy
Type: Book
ISBN: 978-1-78441-102-2

Book part
Publication date: 6 February 2013

Julie Schell, Brian Lukoff and Eric Mazur

Abstract

In this chapter, we introduce a new technology for facilitating and measuring learner engagement. The system creates a learning experience for students based on frequent feedback, which is critical to learning. We open by problematizing traditional approaches to learner engagement that do not maximize the potential of feedback and offer a research-based solution in a new classroom response system (CRS) two of the authors developed at Harvard University – Learning Catalytics. The chapter includes an overview of cognitive science principles linked to student learning and how those principles are tied to Learning Catalytics. We then provide an overview of the limitations of existing CRSs and describe how Learning Catalytics addresses those limitations. Finally, we describe how we used Learning Catalytics to facilitate and measure learner engagement in novel ways, through a pilot implementation in an undergraduate physics classroom at Harvard University. This pilot was guided by two questions: How can we use Learning Catalytics to help students engage with subject matter in ways that will help them learn? And how can we measure student engagement in new ways using the analytics built into the system? The objective of this chapter is to introduce Learning Catalytics as a new instructional tool and respond to these questions.

Details

Increasing Student Engagement and Retention Using Classroom Technologies: Classroom Response Systems and Mediated Discourse Technologies
Type: Book
ISBN: 978-1-78190-512-8
