Search results
1 – 10 of over 15,000
Birgit Weischedel, Sheelagh Matear and Kenneth R. Deans
Companies operating on the internet need appropriate metrics to make strategic marketing decisions. This paper applies established qualitative research methods to the online…
Abstract
Purpose
Companies operating on the internet need appropriate metrics to make strategic marketing decisions. This paper applies established qualitative research methods to the online environment to evaluate how web managers generate and incorporate web metrics to inform strategic marketing decisions.
Design/methodology/approach
Initial theories were developed using a comprehensive literature review as well as exploratory interviews with New Zealand companies. Applying a mixed methodology, the exploratory research used interviews to assess current practice within the industry, refine the research questions and set up the research design. An in‐depth case study in the USA evaluated best practices and highlighted issues that affect the use of web metrics. The main data collection utilized case studies to generate the in‐depth information necessary for theory building.
Findings
The exploratory results showed that companies currently measure web site performance and consumer behaviour online but remain uncertain how best to use those metrics to inform strategic marketing decisions. The in‐depth case study showed how web metrics can be used when sufficient resources are available and measuring performance is a priority. Because the initial findings revealed a low level of web metrics use, the main research was expanded through purposive sampling of participants who make greater use of web metrics.
Originality/value
This paper applies traditional qualitative research methods to the online environment. Analysis of the case studies and continued research will address the research gap, provide recommendations to web managers, and attempt to illustrate best practices, solutions to common issues and industry benchmarks.
Details
Keywords
Coral Calero, Julián Ruiz and Mario Piattini
The purpose of this paper is to classify the most important metrics proposed for web information systems, with the aim of offering the user a global vision of the state of the…
Abstract
Purpose
The purpose of this paper is to classify the most important metrics proposed for web information systems, with the aim of offering the user a global vision of the state of the research within this area.
Design/methodology/approach
The web quality model (WQM) distinguishes three dimensions related to web features, lifecycle processes and quality characteristics. A range of recently published (1992‐2004) works that include web metrics definitions has been studied and classified within this model.
Findings
In this work, a global vision of web metrics is provided. Concretely, it was found that about 44 percent of metrics are related to “presentation” and that most metrics (48 percent) are usability metrics. Regarding the life cycle, the majority of metrics are related to operation and maintenance processes. Nevertheless, focusing on metrics validation, little work has been done: only 3 percent of metrics have been validated theoretically and 37 percent empirically.
Practical implications
The classification presented aims to facilitate the use and application of web metrics by different kinds of stakeholders (developers, maintainers, etc.), to clarify where web metric definition efforts are centred, and thus to show where future work should focus.
Originality/value
This work aims to address a deficiency in the web metrics field, where many proposals have been made without rigour or order. Consequently, applying the proposed metrics is difficult and risky, and it is dangerous to base decisions on their values.
Details
Keywords
The key purpose of the present research is to learn whether businesses use web site metrics to support business strategies and how web site metrics used differ by web site…
Abstract
Purpose
The key purpose of the present research is to learn whether businesses use web site metrics to support business strategies and how the web site metrics used differ by web site category.
Design/methodology/approach
A combination of a preliminary telephone survey and an e‐mail questionnaire survey was used to gather data. Potential respondents were contacted by phone to find firms measuring web site success. An e‐mail survey was conducted to learn how metrics were used to measure the success of a corporate web site. Responses were examined to study not only purposes and net benefits of measurement but also metrics measured.
Findings
Findings of the study indicated that a majority of the businesses that took part in this survey were using the metrics more for operational than for strategic purposes. This observation is to some extent consistent with the normative view highlighted by the literature that organizations should measure how successfully their web sites support business objectives and that, therefore, the web metrics used to measure web site success should differ by web site category.
Research limitations/implications
This exploratory research is not based on a large sample, which limits its academic contribution. Since the data analysis spans eight web site categories, future research will need to employ a sample large enough to eliminate any potential bias.
Practical implications
A key managerial implication is that businesses need to measure the success of their web site using web metrics tied to their business objectives, if they want their web site to effectively support business strategies.
Originality/value
This paper is the first attempt to explore the way that Internet‐dependent businesses measure the success of their web site via web metrics, for the purpose not only of observing some patterns between web metrics measured and site categories, but also of examining whether metrics were used for strategic or merely for operational purposes.
Details
Keywords
Guru Prasad Bhandari, Ratneshwer Gupta and Satyanshu Kumar Upadhyay
Software fault prediction is an important concept that can be applied at an early stage of the software life cycle. Effective prediction of faults may improve the reliability and…
Abstract
Purpose
Software fault prediction is an important concept that can be applied at an early stage of the software life cycle. Effective prediction of faults may improve the reliability and testability of software systems. As service-oriented architecture (SOA)-based systems become more and more complex, the interactions between participating services become more frequent. The component services may generate enormous reports and fault information. Although considerable research has focused on developing fault-proneness prediction models for service-oriented systems (SOS) using machine learning (ML) techniques, there has been little work assessing how effective source code metrics are for fault prediction. The paper aims to discuss this issue.
Design/methodology/approach
In this paper, the authors have proposed a fault prediction framework to investigate fault prediction in SOS using metrics of web services. The effectiveness of the model has been explored by applying six ML techniques, namely Naïve Bayes, Artificial Neural Networks (ANN), Adaptive Boosting (AdaBoost), decision tree, Random Forests and Support Vector Machine (SVM), along with five feature selection techniques to extract the essential metrics. The authors use accuracy, precision, recall, f-measure and the area under the receiver operating characteristic (ROC) curve as performance measures.
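As an illustration of this kind of evaluation, the sketch below trains two of the named classifier families on synthetic data and reports the same performance measures. The dataset, feature values and parameter choices are invented stand-ins; the study's actual web services metrics and experimental setup are not reproduced here.

```python
# Hypothetical sketch: evaluating fault-proneness classifiers on synthetic
# stand-ins for source-code metrics (e.g. size, coupling, complexity) with a
# binary faulty / non-faulty label. Not the paper's actual data or framework.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (RandomForestClassifier(random_state=0),
            AdaBoostClassifier(random_state=0)):
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)          # binary fault-proneness prediction
    proba = clf.predict_proba(X_test)[:, 1]
    print(type(clf).__name__,
          f"acc={accuracy_score(y_test, pred):.2f}",
          f"prec={precision_score(y_test, pred):.2f}",
          f"rec={recall_score(y_test, pred):.2f}",
          f"f1={f1_score(y_test, pred):.2f}",
          f"auc={roc_auc_score(y_test, proba):.2f}")
```

The same loop extends naturally to the other four classifiers named in the abstract, and a feature selection step could be inserted before `fit`.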
Findings
The experimental results show that the proposed system can classify the fault-proneness of web services, whether the service is faulty or non-faulty, as a binary-valued output automatically and effectively.
Research limitations/implications
One possible threat to internal validity in the study is the unknown effect of undiscovered faults. Specifically, the authors injected possible faults into the classes using the Java C3.0 tool, and only fixed faults were injected into the classes. However, considering the Java C3.0 community of development, testing and use, the authors can generalize that the undiscovered faults should be few and have little impact on the results presented in this study, and that the results may be limited to the investigated complexity metrics and the ML techniques used.
Originality/value
In the literature, only a few studies directly concentrate on metrics-based fault-proneness prediction of SOS using ML techniques; most contributions address fault prediction in general systems rather than SOS. A majority of them have considered reliability, changeability and maintainability using logging/history-based approaches and mathematical modeling rather than fault prediction in SOS using metrics. The authors have therefore extended these contributions by applying supervised ML techniques to web services metrics and measuring their capability by employing fault injection methods.
Details
Keywords
The purpose of this paper is to provide an introduction to the various web metrics tools that are available, and to indicate how these might be used in libraries.
Abstract
Purpose
The purpose of this paper is to provide an introduction to the various web metrics tools that are available, and to indicate how these might be used in libraries.
Design/methodology/approach
The paper describes ways in which web metrics can be used to inform strategic decision making in libraries.
Findings
A framework of possible web metrics is provided that can be adapted for use as appropriate in libraries.
Originality/value
The paper offers assistance to any web site manager in planning new developments, given limited resources.
Details
Keywords
I‐Ping Chiang, Chun‐Yao Huang and Chien‐Wen Huang
There has been considerable discussion of various aspects of the “Web 2.0” concept in the past several years. However, the Web 2.0 concept as a whole has not been analysed through…
Abstract
Purpose
There has been considerable discussion of various aspects of the “Web 2.0” concept in the past several years. However, the Web 2.0 concept as a whole has not been analysed through the lens of the Web 1.0 metrics on which managers rely heavily for planning and evaluation. This paper aims to analyse the relationships among a site's audience metrics and its degree of Web 2.0‐ness.
Design/methodology/approach
Data collected from an online panel's clickstreams were aggregated to derive the web audience metrics. A web site's degree of Web 2.0‐ness was evaluated through a three‐step procedure by a series of binary criteria as to whether the site accommodates popular Web 2.0 applications. Pearson and Spearman correlations were conducted for the empirical analysis of data consisting of clickstreams gathered from an online panel coupled with expert scoring of web sites.
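As a hedged illustration of the correlation step, the sketch below computes Pearson and Spearman coefficients over a handful of invented site-level figures; the panel's real clickstream data and expert Web 2.0 scores are not public and are not reproduced here.

```python
# Illustrative sketch only: correlating two web audience metrics with a site's
# degree of Web 2.0-ness. All figures below are invented stand-ins.
from scipy.stats import pearsonr, spearmanr

# One entry per site: unique visitors, avg page views per visitor, and an
# expert-scored degree of Web 2.0-ness (hypothetical values).
visitors  = [1200, 4500, 800, 9800, 3100, 650, 7200, 2500]
pageviews = [3.1, 4.8, 2.7, 6.2, 4.1, 2.5, 5.9, 3.8]
web20     = [2, 5, 1, 8, 4, 1, 7, 3]

r, p = pearsonr(visitors, pageviews)        # linear association
rho, p_rho = spearmanr(web20, pageviews)    # rank (monotonic) association

print(f"Pearson r (visitors vs page views/visitor) = {r:.2f} (p={p:.3f})")
print(f"Spearman rho (Web 2.0-ness vs page views/visitor) = {rho:.2f} (p={p_rho:.3f})")
```

Pearson assumes a roughly linear relationship on interval-scale metrics, while Spearman suits the ordinal expert scores, which is presumably why the study reports both.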
Findings
It was found that the size of a web site's visitor base is positively associated with the average number of page views per visitor. The average number of page views per visitor is in turn positively associated with the speed at which the visitors consume the site's content. Furthermore, a site's degree of Web 2.0‐ness is positively associated with the average number of page views per visitor and the speed of content consumption on the site.
Practical implications
First, the “double jeopardy” phenomenon of small brands found in the consumer package goods market is also observed for small sites in cyberspace in terms of audience metrics. Second, the accommodation of more Web 2.0 applications in a web site enhances the site's attractiveness so that its visitor base grows and its visitors will have a deeper relationship with the site.
Originality/value
This paper examines the Web 2.0 phenomenon through the Web 1.0 lens by exploring the relationships among web audience metrics and the degree of Web 2.0‐ness across web sites. It characterises the relationships among a web site's audience metrics and those between such metrics and the site's degree of Web 2.0‐ness. In addition this study fills an important gap in the literature and could serve as a stepping‐stone for further exploration of Web 2.0 issues from the market perspective.
Details
Keywords
Mahdi Zahedi Nooghabi and Akram Fathian Dastgerdi
One of the most important categories in linked open data (LOD) quality models is “data accessibility.” The purpose of this paper is to propose some metrics and indicators for…
Abstract
Purpose
One of the most important categories in linked open data (LOD) quality models is “data accessibility.” The purpose of this paper is to propose some metrics and indicators for assessing data accessibility in LOD and the semantic web context.
Design/methodology/approach
In this paper, the authors first consider several data quality and LOD quality models to review the subcategories proposed for the data accessibility dimension in related texts. Then, based on the goal question metric (GQM) approach, the authors specify the project goals, main issues and some questions. Finally, the authors propose some metrics for assessing data accessibility in the context of the semantic web.
Findings
Based on the GQM approach, the authors determined three main issues for data accessibility: data availability, data performance and data security policy. The authors then created four main questions related to these issues and, in conclusion, proposed 27 metrics for answering them.
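One way a data availability metric of this kind might be operationalized, purely as an illustrative sketch and not a definition taken from the paper, is a simple availability ratio over a list of LOD endpoints:

```python
# Hedged sketch of one possible "data availability" metric in the GQM spirit:
# the fraction of endpoints that answer an HTTP request successfully.
# The function and its exact definition are illustrative assumptions.
import urllib.request


def availability_ratio(endpoints, timeout=5):
    """Return the share of endpoints responding with an HTTP status < 400."""
    ok = 0
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    ok += 1
        except OSError:
            pass  # unreachable endpoints simply count as unavailable
    return ok / len(endpoints) if endpoints else 0.0
```

In practice such a probe would be run repeatedly over time, since availability is a property of a service's behaviour, not of a single request.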
Originality/value
Nowadays, one of the main challenges regarding data quality is the lack of agreement on widely accepted quality metrics and practical instruments for evaluating quality. Accessibility is an important aspect of data quality; however, little research has been done to provide metrics and indicators for assessing data accessibility in the context of the semantic web. In this research, therefore, the authors consider the data accessibility dimension and propose a comparatively comprehensive set of metrics.
Details
Keywords
Lukáš Porsche, Ladislava Zbiejczuk Suchá and Jan Martinek
The purpose of this paper is to introduce Google Analytics as a format suitable for advanced tracking of reading behavior within web books, set the metrics for measuring the…
Abstract
Purpose
The purpose of this paper is to introduce Google Analytics as a format suitable for advanced tracking of reading behavior within web books, set the metrics for measuring the reading behavior of web books and describe the first results of a pilot study. This paper offers suggestions for further deployment of web books and web analytics in digital libraries and evaluating web books' performance.
Design/methodology/approach
To understand the reading behavior of web book users, the researchers use quantitative research methods based on custom and advanced metrics in Google Analytics.
Findings
Google Analytics is a valuable tool for tracking access to individual books and to entire web book collections, especially if researchers use a combination of unique custom and advanced metrics. A pilot study with 190 users uncovered significant results on reading behavior, for example a strong preference for scrolling over navigation buttons.
Research limitations/implications
This pilot study is limited to measuring two web books and 190 users. This study demonstrated a workable setup of metrics for measuring reading behavior; it would be helpful to continue measurement with a larger sample of books and users.
Originality/value
Researchers in library and information science currently use web analytics mainly to understand user behavior on the website and in the catalog. This paper presents the possibilities of deploying Google Analytics directly in web books to understand reading behavior.
Details
Keywords
A. Phippen, L. Sheppard and S. Furnell
E‐commerce has resulted in organisations investing significant resources in online strategies to extend business processes on to the World Wide Web. Traditional methods of…
Abstract
E‐commerce has resulted in organisations investing significant resources in online strategies to extend business processes on to the World Wide Web. Traditional methods of measuring Web usage fall short of the richness of data required for the effective evaluation of such strategies. Web analytics is an approach that may meet organisational demand for effective evaluation of online strategies. A case study of Web analytics usage in a large multinational airline company demonstrates an application of the theory in a practical context, with a company that invests significant resources in its Web strategies. The attitudes of company individuals toward the evaluation of Web strategy, and the value of the approach, are shown through a survey of key employees. This work demonstrates the potential value of Web analytics and also highlights problems in promoting awareness of Web analytics and of how it can be applied to corporate goals.
Details
Keywords
Murali Sambasivan, Zainal Abidin Mohamed and Tamizarasu Nandan
e‐Supply chains are fast becoming a reality. In order to manage such supply chains efficiently and effectively, traditional measures of supply chain performance are not adequate…
Abstract
Purpose
e‐Supply chains are fast becoming a reality. In order to manage such supply chains efficiently and effectively, traditional measures of supply chain performance are not adequate. The literature search revealed a lack of measures and metrics for e‐supply chains. The purpose of this paper is to develop new measures and metrics for monitoring the performance of e‐supply chains.
Design/methodology/approach
A framework based on the benefits of e‐supply chains has been used to develop the metrics and measures. The study makes use of focus group discussion by assembling eight experts and practitioners in the field of e‐supply chain to come up with the measures and metrics. A questionnaire is designed with these measures and metrics and is sent to about 300 electronic component manufacturing companies in Malaysia to obtain feedback from the industry practitioners. Appropriate reliability and validity tests are conducted to measure the reliability of the instrument and validity of the constructs.
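As a minimal sketch of one common reliability test for such questionnaire instruments, Cronbach's alpha can be computed over item responses. The construct name and figures below are invented for illustration, not taken from the study:

```python
# Minimal sketch of Cronbach's alpha, a standard internal-consistency
# reliability test, on hypothetical Likert-scale responses.
# Rows = respondents, columns = items of one construct.
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Five hypothetical respondents rating three items of one construct.
responses = np.array([[4, 5, 4],
                      [3, 3, 3],
                      [5, 5, 4],
                      [2, 2, 3],
                      [4, 4, 5]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Values around 0.7 or above are conventionally read as acceptable internal consistency; construct validity would require separate tests, such as factor analysis.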
Findings
Through the focus group discussion, this study identifies six metrics and 21 measures. Further validation by the industry practitioners reveals that these measures are important and that some are already in use by industry. The six metrics are: web‐enabled service; data reliability; time and cost; e‐response; invoice presentation and payment; and e‐document management.
Originality/value
The study uses a simple framework and a sound methodology to develop new measures and metrics that are relevant for e‐supply chains.
Details