Search results
1 – 10 of over 1000

Jan F. Klein, Yuchi Zhang, Tomas Falk, Jaakko Aspara and Xueming Luo
Abstract
Purpose
In the age of digital media, customers have access to vast digital information sources, within and outside a company's direct control. Yet managers lack a metric to capture customers' cross-media exposure and its ramifications for individual customer journeys. To solve this issue, this article introduces media entropy as a new metric for assessing cross-media exposure on the individual customer level and illustrates its effect on consumers' purchase decisions.
Design/methodology/approach
Building on information and signalling theory, this study proposes the entropy of company-controlled and peer-driven media sources as a measure of cross-media exposure. A probit model analyses individual-level customer journey data across more than 25,000 digital and traditional media touchpoints.
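The media entropy measure can be illustrated with a minimal sketch: treat one customer's journey as a sequence of touchpoints labelled by their information source and compute the Shannon entropy of the source distribution. The source labels below are hypothetical, not data from the study.

```python
from collections import Counter
from math import log

def media_entropy(touchpoints):
    """Shannon entropy (in nats) of the distribution of media sources
    observed in a single customer's journey."""
    counts = Counter(touchpoints)
    n = len(touchpoints)
    return -sum((c / n) * log(c / n) for c in counts.values())

# Hypothetical journey: one entry per media source contact.
journey = ["brand_site", "review_blog", "tv_ad", "review_blog", "social_peer"]
print(round(media_entropy(journey), 3))  # higher value = more diverse cross-media exposure
```

A journey concentrated on a single source yields entropy 0; exposure spread evenly across many sources maximizes it.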
Findings
Cross-media exposure, measured as the entropy of information sources in a customer journey, drives purchase decisions. The positive effect is particularly pronounced for (1) digital (online) versus traditional (offline) media environments, (2) customers who currently do not own the brand and (3) brands that customers perceive as weak.
Practical implications
The proposed metric of cross-media exposure can help managers understand customers' information structures in pre-purchase phases. Assessing the consequences of customers' cross-media exposure is especially relevant for service companies that seek to support customers' information search efforts. Marketing agencies, consultancies and platform providers also need actionable customer journey metrics, particularly in early stages of the journey.
Originality/value
Service managers and marketers can integrate the media entropy metric into their marketing dashboards and use it to steer their investments in different media types. Researchers can include the metric in empirical models to explore customers' omni-channel journeys.
Amanda S. Hovious and Brian C. O'Connor
Abstract
Purpose
The purpose of this study was to explore the viability of transinformation analysis as a multimodal readability metric. A novel approach was called for, considering that existing and established readability metrics are strictly used to measure linguistic complexity. Yet, the corpus of multimodal literature continues to grow, along with the need to understand how non-linguistic modalities contribute to the complexity of the reading experience.
Design/methodology/approach
In this exploratory study, think-aloud screen recordings of eighth-grade readers of the born-digital novel Inanimate Alice were analyzed for complexity, along with transcripts of post-oral retellings. Pixel-level entropy analysis served as both an objective measure of the document and a subjective measure of the information that readers attended to. Post-oral retelling entropy was calculated at the unit level of the word, serving as an indication of complexity in recall.
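The pixel-level part of such an analysis can be sketched as the Shannon entropy of a frame's intensity histogram. The array below is a hypothetical 8-bit grayscale frame; the study's actual procedure (frame sampling, binning, colour handling) may differ.

```python
import numpy as np

def pixel_entropy(gray):
    """Shannon entropy (in bits) of an 8-bit grayscale frame,
    computed from its intensity histogram."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical 2x4 frame; a real analysis would iterate over video frames.
frame = np.array([[0, 64, 64, 255],
                  [0, 64, 128, 255]], dtype=np.uint8)
print(pixel_entropy(frame))  # entropy of this frame's intensity distribution, in bits
```

A visually uniform frame scores 0 bits; a frame with many distinct, evenly spread intensities scores higher, matching the intuition of an "objectively complex" document.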
Findings
Findings confirmed that transinformation analysis is a viable multimodal readability metric. Inanimate Alice is an objectively complex document, creating a subjectively complex reading experience for the participants. Readers largely attended to the linguistic mode of the story, effectively reducing the amount of information they processed. This was also evident in the brevity and below-average complexity of their post-oral retellings, which relied on recall of the linguistic mode. There were no significant group differences among the readers.
Originality/value
This is the first study that uses entropy to analyze multimodal readability.
Esfandiar Maasoumi and Tong Xu
Abstract
Purpose
The purpose of this paper is to combine multidimensional welfare analysis and entropy metrics to derive not only the best relative weights but also the degree of substitution among different attributes, in order to construct multidimensional indices of well-being from Chinese Household Income Project Survey 2002 data.
Design/methodology/approach
The authors follow Maasoumi’s two-step measures of multivariate inequality to calculate inequality for three social groups in China: urban residents, migrants and rural residents. The two-step approach provides an aggregation formula that is numerically identified in this paper based on a metric entropy distance between the distribution of the aggregate well-being functions, on the one hand, and the distribution of the self-reported “happiness” indicator, on the other. The authors compare the differences in relative weights and substitution degree across the three groups and link them to institutional factors.
Findings
The authors find that incorporating substitution among attributes and taking group heterogeneity into consideration are very important in the multidimensional analysis of well-being.
Originality/value
The two-step approach provides an aggregation formula that is numerically identified in this paper based on a metric entropy distance between the distribution of the aggregate well-being functions, on the one hand, and the distribution of the self-reported “happiness” indicator, on the other.
Esfandiar Maasoumi, Melinda Pitts and Ke Wu
Abstract
We examine the cardinal gap between the wage distributions of incumbent and newly hired workers based on entropic distances, which are well-defined welfare-theoretic measures. Decomposition of several effects is achieved by identifying several counterfactual distributions of different groups. These go beyond the usual Oaxaca–Blinder decompositions at the (linear) conditional means. Much like quantiles, these entropic distances are well-defined inferential objects and functions whose statistical properties have recently been developed. Going beyond these strong rankings and distances, we consider weak uniform ranking of these wage outcomes based on statistical tests for stochastic dominance. The empirical analysis is focused on employees with at least 35 hours of work in the 1996–2012 monthly Current Population Survey (CPS). Among other results, we find that incumbent workers enjoy a better distribution of wages, but the attribution of the gap to wage inequality and human capital characteristics varies between quantiles. For instance, the high wages of highly paid new workers are mainly due to human capital components and, in some years, even to a better wage structure.
Abstract
Surveys some of the important contributions of information theory (IT) to the understanding of systems science and cybernetics. Presents a short background on the main definitions of IT, and examines in which way IT could be thought of as a unified approach to general systems. Analyses the topics: syntax and semantics in information, information and self‐organization, entropy of forms (entropy of non‐random functions), and information in dynamical systems. Enumerates some suggestions for further research and takes this opportunity to describe new points of view, mainly by using entropy of non‐random functions.
Balamurugan Souprayen, Ayyasamy Ayyanar and Suresh Joseph K
Abstract
Purpose
The purpose of food traceability is to retain the quality of the raw material supply, diminish losses and reduce system complexity.
Design/methodology/approach
A hybrid algorithm for food traceability is proposed to make accurate predictions and enhance period data. The internet of things is used to track and trace food quality, checking the data acquired from manufacturers and consumers.
Findings
To cope with existing financial circumstances and the development of the global food supply chain, the authors propose efficient food traceability techniques using the internet of things and obtain a solution for data prediction.
Originality/value
The internet of things is used to track and trace food quality, checking the data acquired from manufacturers and consumers. The experimental analysis shows that the proposed algorithm achieves a high accuracy rate with reduced execution time and error rate.
Abstract
Purpose
The purpose of the paper is to perform bid mark‐up optimisation through the use of artificial neural networks (ANN) and a metric of the selected bid mark‐up's derived entropy. The scope is to provide an alternative, entropy‐based method for bid mark‐up optimisation that improves on the analytical models of Friedman and Gates.
Design/methodology/approach
The proposed method enables the incorporation of bid parameters through the use of ANN's pattern recognition capabilities and the integration of these parameters with a mark‐up selection process that relies on the entropy produced by possible mark‐up values. The entropy metric used is the product of the probability of winning over the bidder's competitors multiplied by the natural logarithm of the inverse of this probability.
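A minimal sketch of this entropy metric, assuming the selection rule simply maximizes p·ln(1/p) over candidate mark-ups; the win probabilities below are hypothetical stand-ins for the ANN's pattern-recognition output, not data from the paper.

```python
from math import log

def entropy_score(p_win):
    """Entropy metric for a candidate mark-up: p * ln(1/p),
    where p is the probability of winning over the competitors."""
    return p_win * log(1.0 / p_win) if 0.0 < p_win < 1.0 else 0.0

# Hypothetical mark-up levels mapped to estimated win probabilities.
candidates = {0.05: 0.80, 0.10: 0.55, 0.15: 0.35, 0.20: 0.18}

best_markup = max(candidates, key=lambda m: entropy_score(candidates[m]))
print(best_markup)  # the score p*ln(1/p) peaks where p_win is nearest 1/e ≈ 0.368
```

Since p·ln(1/p) is maximized at p = 1/e, this rule favours mark-ups that are neither near-certain wins (low margin) nor near-certain losses.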
Findings
The case study results show that the proposed entropy‐based bidding model compares favourably with the prevailing competitive bidding models of Friedman and Gates, resulting in higher optimisation with regards to the number of jobs won, the monetary value of contracts awarded and the value of “money left on the table”. Furthermore, the method allows for the incorporation of several objective and subjective bid parameters, in contrast to Friedman's and Gates's models, which are based solely on the bid mark‐up history of a bidder's competitors.
Research limitations/implications
While the proposed method is a useful tool for the selection of optimal bid mark‐up values, it requires historical data on the bidding behaviour of key competitors, much like the classic bidding models of Friedman and Gates.
Originality/value
The method is suitable for quantifying objective and subjective competitive bidding parameters and for optimising bid mark‐up values.
Mehmet Caner Akay and Hakan Temeltaş
Abstract
Purpose
Heterogeneous teams consisting of unmanned ground vehicles and unmanned aerial vehicles are being used for different types of missions such as surveillance, tracking and exploration. Exploration missions with heterogeneous robot teams (HeRTs) should acquire a common map for understanding the surroundings better. The purpose of this paper is to provide a unique approach with cooperative use of agents that provides a well-detailed observation over the environment where challenging details and complex structures are involved. Also, this method is suitable for real-time applications and autonomous path planning for exploration.
Design/methodology/approach
Lidar odometry and mapping and various similarity metrics such as Shannon entropy, Kullback–Leibler divergence, Jeffrey divergence, K divergence, Topsoe divergence, Jensen–Shannon divergence and Jensen divergence are used to construct a common height map of the environment. Furthermore, the authors presented the layering method that provides more accuracy and a better understanding of the common map.
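One of the listed metrics, Jensen–Shannon divergence, can be sketched for two local height-map histograms as follows. The histograms are hypothetical, and the paper's layering and map-merging pipeline is not reproduced here.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence (in nats) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: a symmetrised, bounded variant of KL,
    computed against the mixture distribution m = (p + q) / 2."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical normalised height histograms from UGV and UAV local maps.
ugv_hist = np.array([0.70, 0.20, 0.10])
uav_hist = np.array([0.10, 0.30, 0.60])
print(jensen_shannon(ugv_hist, uav_hist))  # 0 for identical maps, larger when they disagree
```

Symmetry and boundedness make the divergence usable as a similarity score between the two vehicles' observations of the same region.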
Findings
In the experiments, the authors observed features located beneath and above trees or roofed areas without any need for a global positioning system signal. Additionally, the determined similarity metric and the layering method yield a more effective common map that enables planning trajectories for both vehicles.
Originality/value
In this study, the authors present a unique solution that implements various entropy-based similarity metrics with the aim of constructing common maps of the environment with HeRTs. To create common maps, Shannon entropy-based similarity metrics can be used, as Shannon entropy is the only one that holds the chain rule of conditional probability precisely. Seven distinct similarity metrics are compared, and the most effective one is chosen to obtain a more comprehensive and valid common map. Moreover, unlike previous studies in the literature, the layering method is used to compute the similarities of each local map obtained by a HeRT. This method also improves the accuracy of the merged common map, as the robots' differing fields of view prevent identical observations of features such as a roofed area or trees. This novel approach can also be used in global positioning system-denied and closed environments. The results are verified with experiments.
Amos Golan and Robin L. Lumsdaine
Abstract
Although in principle prior information can significantly improve inference, incorporating incorrect prior information will bias the estimates of any inferential analysis. This fact deters many scientists from incorporating prior information into their inferential analyses. In the natural sciences, where experiments are more regularly conducted and can be combined with other relevant information, prior information is often used in inferential analysis, despite it being sometimes nontrivial to specify what that information is and how to quantify it. In the social sciences, however, prior information is often hard to come by and very hard to justify or validate. We review a number of ways to construct such information. This information emerges naturally, either from fundamental properties and characteristics of the systems studied or from logical reasoning about the problems being analyzed. Borrowing from concepts and philosophical reasoning used in the natural sciences, and within an info-metrics framework, we discuss three different, yet complementary, approaches for constructing prior information, with an application to the social sciences.
Rafael Renteria, Mario Chong, Irineu de Brito Junior, Ana Luna and Renato Quiliche
Abstract
Purpose
This paper aims to design a vulnerability assessment model that takes a multidimensional and systematic approach to disaster risk and vulnerability. The model serves both the risk mitigation and disaster preparedness phases of humanitarian logistics.
Design/methodology/approach
A survey of 27,218 households in Pueblo Rico and Dosquebradas was conducted to obtain information about disaster risk for landslides, floods and collapses. We adopted a cross-entropy-based approach for the measure of disaster vulnerability (Kullback–Leibler divergence) and a maximum-entropy estimation for the reconstruction of the a-priori risk categorization (logistic regression). Sen's capabilities approach theoretically supported our multidimensional assessment of disaster vulnerability.
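The Kullback–Leibler divergence used for the vulnerability measure can be sketched as a comparison of a community's empirical risk-category distribution against an a-priori reference; all figures below are illustrative, not survey data.

```python
from math import log

def kl_divergence(p, q):
    """Kullback-Leibler divergence (in nats) between two discrete
    distributions over the same risk categories."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical shares of households in (low, medium, high) risk categories.
a_priori = [0.6, 0.3, 0.1]       # reference a-priori risk categorization
community_a = [0.5, 0.3, 0.2]
community_b = [0.2, 0.3, 0.5]

# A larger divergence marks a community whose vulnerability profile
# departs further from the a-priori categorization.
print(kl_divergence(community_a, a_priori))
print(kl_divergence(community_b, a_priori))
```

Comparing such divergences across communities is one way to make the heterogeneity claim in the findings formally testable.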
Findings
Disaster vulnerability is shaped by economic indicators, such as physical attributes of households, and by health indicators, specifically morbidity indicators that seem to affect vulnerability outputs. Vulnerability is heterogeneous across communities/districts according to formal comparisons of Kullback–Leibler divergence. Neither the social dimension nor chronic illness indicators seem to shape vulnerability, at least for Pueblo Rico and Dosquebradas.
Research limitations/implications
The results need a qualitative or case study validation at the community/district level.
Practical implications
We discuss how risk mitigation policies and disaster preparedness strategies can be driven by empirical results. For example, the type of stock to preposition can vary according to the disaster or the kind of alternative policies that can be formulated on the basis of the strong relationship between morbidity and disaster risk.
Originality/value
Entropy-based metrics, like empirical data-driven techniques, are not widely used in the humanitarian logistics literature.