Search results

1 – 10 of over 4000
Article
Publication date: 3 October 2008

Terry Lerch, Sean Anthony and Tanya Domina

Abstract

Purpose

The purpose of this paper is to validate the accuracy of point cloud data generated from a 3D body scanner.

Design/methodology/approach

A female dress form was scanned with an X‐ray computed tomography (CT) system and a 3D body scanning system. The point cloud data from four axial slices of the body scan (BS) data were compared with the corresponding axial slices from the CT data. Length and cross‐sectional area measurements of each slice were computed for each scanning technique.
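
As a rough illustration of the kind of slice-level comparison described above (not the authors' actual procedure), and assuming each axial slice has already been reduced to an ordered 2D contour of points, the length and cross-sectional area of a slice could be computed as follows; the function names and the percent-difference helper are hypothetical.

```python
import numpy as np

def slice_measurements(points):
    """Perimeter length and enclosed area of one axial slice.

    points: (N, 2) array of x-y coordinates ordered around the contour
    (illustrative input; real point cloud slices may need resampling first).
    """
    closed = np.vstack([points, points[:1]])           # close the contour
    segments = np.diff(closed, axis=0)
    length = np.sum(np.linalg.norm(segments, axis=1))  # perimeter length
    x, y = closed[:, 0], closed[:, 1]
    area = 0.5 * abs(np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))  # shoelace formula
    return length, area

def percent_difference(bs_value, ct_value):
    """Relative difference of a body-scan measurement against the CT reference."""
    return 100.0 * abs(bs_value - ct_value) / ct_value
```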

Findings

The point cloud data from the body scanner were accurate to at least 2.0 percent when compared with the CT data. In many cases, the length and area measurements from the two types of scans varied by less than 1.0 percent.

Research limitations/implications

Only two length measurements and a cross‐sectional area measurement were compared for each axial slice, resulting in a good first attempt of validation of the BS data. Additional methods of comparison should be employed for complete validation of the data. The dress form was scanned only once with each scanning device, so little can be said about the repeatability of the results.

Practical implications

Accuracy of the point cloud data from the 3D body scanner indicates that the main issues for the use of body scanners as anthropometric measurement tools are those of standardization, feature locations, and positioning of the subject.

Originality/value

Comparisons of point cloud data from a 3D body scanner with CT data had not previously been performed, and these results indicate that the point cloud data are accurate to at least 2.0 percent.

Details

International Journal of Clothing Science and Technology, vol. 20 no. 5
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 1 December 2000

Huosheng Hu and Dongbing Gu

Landmark‐based navigation of autonomous mobile robots or vehicles has been widely adopted in industry. Such a navigation strategy relies on identification and subsequent…

Abstract

Landmark‐based navigation of autonomous mobile robots or vehicles has been widely adopted in industry. Such a navigation strategy relies on the identification and subsequent recognition of distinctive environment features or objects that are either known a priori or extracted dynamically. This process has inherent difficulties in practice due to sensor noise and environment uncertainty. This paper proposes a navigation algorithm that simultaneously locates the robots and updates landmarks in a manufacturing environment. A key issue addressed is how to improve localization accuracy for mobile robots in continuous operation: the Kalman filter algorithm is adopted to integrate odometry data with scanner data to achieve the required robustness and accuracy. Kohonen neural networks are used to recognize landmarks from scanner data in order to initialize and recalibrate the robot position by means of triangulation when necessary.
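
As a minimal sketch of the predict-correct cycle described above, assuming a simple linear model in which the state is the 2D robot position, odometry supplies the predicted displacement and landmark triangulation supplies an absolute position fix (an illustration, not the authors' implementation):

```python
import numpy as np

def kalman_step(x, P, u, z, Q, R):
    """One predict-correct cycle of a linear Kalman filter.

    x, P : current position estimate (2-vector) and covariance (2x2)
    u    : odometry displacement since the last step
    z    : absolute position fix, e.g. from landmark triangulation
    Q, R : process and measurement noise covariances (assumed known)
    """
    # Predict: dead-reckon with odometry and inflate the uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # Correct: blend in the scanner-derived position fix (H = identity).
    K = P_pred @ np.linalg.inv(P_pred + R)   # Kalman gain
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new
```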

Details

Industrial Robot: An International Journal, vol. 27 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 July 1992

Claes‐Robert Julander

Explores the usefulness of capturing information about the shopper basket from computerized scanners, instead of only using data based on article numbers. Shopper baskets, as…

Abstract

Explores the usefulness of capturing information about the shopper basket from computerized scanners, instead of only using data based on article numbers. Shopper baskets, as reflected by the receipt, are more akin to the consumer's shopping problem and can be used to study shopping behaviour on a per-shopper level. In this way individual shopper behaviour can be studied with regard to short‐term effects of marketing, share of shoppers instead of share of market in monetary units, the combination of items that shoppers put in their baskets, etc. The arguments for basket analysis are backed up by empirical examples.
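
Purely for illustration (the article argues for basket-level analysis but does not prescribe an algorithm), receipt-level data of this kind can be mined for item co-occurrence in a few lines; the input format and item names below are hypothetical.

```python
from collections import Counter
from itertools import combinations

def basket_cooccurrence(receipts):
    """Count how often each pair of items appears in the same basket.

    receipts: iterable of baskets, each a collection of item identifiers.
    """
    pair_counts = Counter()
    for basket in receipts:
        for a, b in combinations(sorted(set(basket)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# Example: three receipts captured at the checkout scanner (made-up items).
receipts = [{"milk", "bread"}, {"milk", "bread", "butter"}, {"milk", "soup"}]
print(basket_cooccurrence(receipts).most_common(3))
```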

Details

International Journal of Retail & Distribution Management, vol. 20 no. 7
Type: Research Article
ISSN: 0959-0552

Keywords

Open Access
Article
Publication date: 11 February 2020

Brian T. Ratchford

The purpose of this study is to determine what the history of research in marketing implies for the reaction of the field to recent developments in technology due to the internet…

Abstract

Purpose

The purpose of this study is to determine what the history of research in marketing implies for the reaction of the field to recent developments in technology due to the internet and associated developments.

Design/methodology/approach

This paper examines the introduction of new research topics over 10-year intervals from 1960 to the present. These provide the basic body of knowledge that drives the field at the present time.

Findings

While researchers have always borrowed techniques, they have refined them to make them applicable to marketing problems. Moreover, the field has always responded to new developments in technology, such as more powerful computers, scanners and scanner data, and the internet, with a flurry of research that applies these technologies.

Research limitations/implications

Marketing will adapt to changes brought on by the internet, increased computer power and big data. While the field faces competition from other disciplines, its established body of knowledge about solving marketing problems gives it a unique advantage.

Originality/value

This paper traces the history of academic marketing from 1960 to the present to show how major changes in the field responded to changes in computer power and technology. It also derives implications for the future from this analysis.

Keywords

History, Review, Change, Technology, Knowledge, Internet, Data, Methods

Article type

General review

Details

Spanish Journal of Marketing - ESIC, vol. 24 no. 1
Type: Research Article
ISSN: 2444-9709

Keywords

Article
Publication date: 1 October 2004

Jorge M. Silva‐Risso and Randolph E. Bucklin

The authors develop a logit modeling approach, designed for application to UPC scanner panel data, to assess the effects of coupon promotions on consumer brand choice. The effects…

Abstract

The authors develop a logit modeling approach, designed for application to UPC scanner panel data, to assess the effects of coupon promotions on consumer brand choice. The effects of coupon promotions are captured via two measures: the prevailing level of availability and the prevailing face value of coupons for each brand. Both of these measures are derived from coupon redemptions of a separate sample of households. The approach captures both the advertising effect and the price discount incentive of a coupon. It also avoids drawbacks of previous choice models which have incorporated coupon effects by subtracting the value of a redeemed coupon from the price of the brand purchased. The authors illustrate their modeling approach on data for two product categories: catsup (light coupon usage) and liquid laundry detergent (heavy coupon usage). Findings are reported for coupon users and non‐users as well as across latent segments.
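
As a hedged sketch of the general modelling idea (a multinomial logit in which each brand's utility depends on price, the prevailing coupon availability and the prevailing face value), the choice probabilities for one purchase occasion could be computed as below; the coefficients and data are purely illustrative and are not estimates from the paper.

```python
import numpy as np

def choice_probabilities(price, coupon_avail, face_value, beta):
    """Multinomial-logit brand-choice probabilities for one purchase occasion.

    price, coupon_avail, face_value: arrays with one entry per brand, where
    coupon_avail is the prevailing availability measure and face_value the
    prevailing coupon face value. beta: hypothetical coefficients.
    """
    utility = (beta["price"] * price
               + beta["avail"] * coupon_avail
               + beta["face"] * face_value)
    expu = np.exp(utility - utility.max())   # numerically stable softmax
    return expu / expu.sum()

beta = {"price": -2.0, "avail": 0.8, "face": 1.5}        # illustrative values
print(choice_probabilities(np.array([2.49, 2.19, 1.99]),  # shelf prices
                           np.array([0.30, 0.10, 0.05]),  # coupon availability
                           np.array([0.50, 0.25, 0.00]),  # face values
                           beta))
```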

Details

Journal of Product & Brand Management, vol. 13 no. 6
Type: Research Article
ISSN: 1061-0421

Keywords

Article
Publication date: 23 July 2021

Chelinka Rafiesta Sahara and Ammar Mohamed Aamer

Creating a real-time data integration when developing an internet-of-things (IoT)-based warehouse is still faced with challenges. It involves a diverse knowledge of novel…

Abstract

Purpose

Creating real-time data integration when developing an internet-of-things (IoT)-based warehouse still presents challenges, as it requires diverse knowledge of novel technologies and skills. This study aims to identify the critical components of real-time data integration processes in IoT-based warehousing, and then to design and apply a data integration framework that adopts the IoT concept to enable real-time data transfer and sharing.

Design/methodology/approach

The study used a pilot experiment to verify the data integration system configuration. Radio-frequency identification (RFID) technology was selected to support the integration process in this study, as it is one of the most recognized products of IoT.

Findings

The experimental results showed that data integration plays a significant role in combining assorted data from various locations in the IoT-based warehouse in real time. The study concluded that real-time data integration processes in IoT-based warehousing can be broken down into three significant components: configuration, databasing and transmission.
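
Purely as an illustration of those three components, and not the configuration used in the study, a minimal pipeline might set up a local event store (configuration), record each RFID read (databasing) and serialise it for transfer to a server or broker (transmission); the table schema, tag and reader identifiers below are hypothetical.

```python
import json
import sqlite3
import time

# Configuration: set up a local store for scanned RFID events (schema is illustrative).
conn = sqlite3.connect("warehouse.db")
conn.execute("""CREATE TABLE IF NOT EXISTS rfid_events
                (tag_id TEXT, reader_id TEXT, ts REAL)""")

def record_read(tag_id, reader_id):
    """Databasing and transmission: store one RFID read and return a JSON payload."""
    ts = time.time()
    conn.execute("INSERT INTO rfid_events VALUES (?, ?, ?)", (tag_id, reader_id, ts))
    conn.commit()
    # Transmission: the serialised event can be pushed to a server or message broker.
    return json.dumps({"tag": tag_id, "reader": reader_id, "timestamp": ts})

print(record_read("E200-3412-0123", "dock-door-1"))   # hypothetical tag and reader IDs
```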

Research limitations/implications

While the framework in this research was carried out in one of the developing countries, this study's findings could be used as a foundation for future research on smart warehouses, IoT and related topics. The study provides guidelines for practitioners to design a low-cost IoT-based smart warehouse system that obtains more accurate and timely data to support quick decision-making.

Originality/value

The research at hand provides the groundwork for researchers to explore the proposed theoretical framework and develop it further to increase the inventory management efficiency of warehouse operations. In addition, this study offers organizations an economical alternative for implementing the integration software.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 1 June 1979

VINE is produced at least four times a year with the object of providing up‐to‐date news of work being done in the automation of library housekeeping processes, principally in the…

Abstract

VINE is produced at least four times a year with the object of providing up‐to‐date news of work being done in the automation of library housekeeping processes, principally in the UK. It is edited and substantially written by Tony McSean, Information Officer for Library Automation based in Southampton University Library and supported by a grant from the British Library Research and Development Department. Copyright for VINE articles rests with the British Library Board, but opinions expressed in VINE do not necessarily reflect the views and policies of the British Library. The subscription to VINE is £10 per year and the subscription period runs from January to December.

Details

VINE, vol. 9 no. 6
Type: Research Article
ISSN: 0305-5728

Article
Publication date: 6 November 2009

Larry Lockshin and David Knott

Abstract

Purpose

The purpose of this paper is to focus on both the sales effects of free wine tastings and the effects on attitudes towards future purchases four weeks after the tastings.

Design/methodology/approach

Store scanner data for the four weeks before and after each of ten wine tastings are used to measure the effect the tastings had on sales. A total of 170 consumers, who attended a free tasting in wine shops across four cities, are interviewed as they leave the store, and 37 of these consumers respond to a call-back survey one month after the free tasting.

Findings

Scanner data show a 400 per cent increase in sales of the wines tasted on the day of the tasting, and a small but significant effect on sales during the four weeks afterwards. The survey shows that there is no difference in purchasing between those attending a tasting with the intention to purchase and those just stopping by; both groups purchase at about the same rate. Only about 33 per cent of the attendees purchase; the other two‐thirds taste without buying.

Research limitations/implications

Free tastings boost immediate sales just like most price promotions, but the effect on the intention to purchase is stronger for those who made a purchase. The study is conducted in one country among a small number of buyers, which limits its generalisability.

Practical implications

The results and implications of this research can be used by retailers and wine companies to make more informed decisions about free tastings. Based on this small study, attracting the maximum number of tasters is recommended in order to increase sales and long‐term purchasing intentions.

Details

International Journal of Wine Business Research, vol. 21 no. 4
Type: Research Article
ISSN: 1751-1062

Keywords

Article
Publication date: 12 November 2019

Judith Hillen

The purpose of this paper is to discuss web scraping as a method for extracting large amounts of data from online sources. The author wants to raise awareness of the method’s…

Abstract

Purpose

The purpose of this paper is to discuss web scraping as a method for extracting large amounts of data from online sources. The author wants to raise awareness of the method’s potential in the field of food price research, hoping to enable fellow researchers to apply this method.

Design/methodology/approach

The author explains the technical procedure of web scraping, reviews the existing literature, and identifies areas of application and limitations for food price research.
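
A minimal sketch of that procedure, assuming the commonly used requests and BeautifulSoup libraries (the paper does not mandate specific tools); the URL and CSS selectors below are placeholders, and any real site should be scraped in line with its terms of use and robots.txt.

```python
import requests
from bs4 import BeautifulSoup

def scrape_prices(url, product_selector, price_selector):
    """Fetch one page and extract product names and prices (illustrative)."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    names = [n.get_text(strip=True) for n in soup.select(product_selector)]
    prices = [p.get_text(strip=True) for p in soup.select(price_selector)]
    return list(zip(names, prices))

# Hypothetical usage with placeholder URL and selectors:
# print(scrape_prices("https://example.com/groceries", ".product-name", ".product-price"))
```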

Findings

The author finds that web scraping is a promising method to collect customised, high-frequency data in real time, overcoming several limitations of currently used food price data sources. With today’s applications mostly focussing on (online) consumer prices, the scope of applications for web scraping broadens as more and more price data are published online.

Research limitations/implications

To better deal with the technical and legal challenges of web scraping and to exploit its scalability, joint data collection projects in the field of agricultural and food economics should be considered.

Originality/value

In agricultural and food economics, web scraping as a data collection technique has received little attention. This is one of the first articles to address this topic with particular focus on food price analysis.

Details

British Food Journal, vol. 121 no. 12
Type: Research Article
ISSN: 0007-070X

Keywords

Article
Publication date: 1 December 1997

Sanjog R. Misra and Minakshi Trivedi

The use of modeling and statistics for the design and development of pricing strategy is prevalent in academia as well as the industry. One of the more commonly used tools by…

Abstract

The use of modeling and statistics for the design and development of pricing strategy is prevalent in academia as well as in industry. One of the tools most commonly used by researchers and managers alike for the estimation of linear demand models is ordinary least squares (OLS) regression. Unfortunately, a majority of the data sets to which such models are applied suffer from nonstationarity (that is, the dependence of a variable on its prior values), thereby violating the assumptions of a basic (naïve) regression model. Estimates obtained under these conditions are commonly known to be inflated and inaccurate. While this problem is well known among statisticians and econometricians and can be corrected for, a simple and effective tool has not yet been designed for managers, the actual users of such models. Studies some of the problems encountered when using a naïve model and proposes a simple method to check for nonstationarity and redesign the model to account for it. Using scanner data on soup, shows that the redesigned model predicts better, fits better and offers more meaningful results. Finally, looks at the implications of estimating such models for pricing strategies and issues. Response surface analysis shows how a manager can use such models to conduct insightful studies on price sensitivity.
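
One standard way to operationalise such a check, sketched below under the assumption of weekly scanner series for sales and price (hypothetical inputs, and not necessarily the authors' exact method), is an augmented Dickey-Fuller test followed by a lag-augmented regression:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def check_and_fit(sales, price):
    """Diagnose nonstationarity, then fit a lag-augmented demand regression.

    sales, price: equal-length 1-D numpy arrays of weekly scanner observations.
    The ADF test is a standard stationarity diagnostic; including the lagged
    dependent variable is one common remedy for dependence on prior values.
    """
    adf_stat, p_value, *_ = adfuller(sales)
    print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

    # Lag-augmented model: sales_t ~ const + price_t + sales_{t-1}
    y = sales[1:]
    X = sm.add_constant(np.column_stack([price[1:], sales[:-1]]))
    return sm.OLS(y, X).fit()

# Illustrative synthetic series (replace with real weekly scanner data):
# rng = np.random.default_rng(0)
# price = rng.uniform(1.0, 2.0, 104)
# sales = 50 - 10 * price + np.cumsum(rng.normal(0, 1, 104))
# print(check_and_fit(sales, price).summary())
```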

Details

Pricing Strategy and Practice, vol. 5 no. 4
Type: Research Article
ISSN: 0968-4905

Keywords
