Search results
1–10 of over 17,000 results

Damianos P. Sakas, Nikolaos T. Giannakopoulos and Panagiotis Trivellas
Abstract
Purpose
The purpose of this paper is to examine the impact of affiliate marketing strategies as a tool for increasing customers' engagement with, and vulnerability to, financial services. This is attempted by examining the connection between affiliate marketing factors and customers' brand engagement and vulnerability metrics.
Design/methodology/approach
The authors developed a three-stage methodological context based on website analytics data from the seven best-known centralized payment network (CPN) firms. It begins with linear regression analysis, followed by hybrid modeling (agent-based and dynamic models) to simulate the variation of brand engagement and vulnerability factors over a 180-day period. The context concludes with cognitive modeling, producing heatmaps and facial analysis of CPN websites for the 47 selected vulnerable website customers, to gather further insight into their brand engagement.
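As a rough illustration of the regression stage described above, the sketch below fits a linear model of brand-engaged customers on backlink and referral-domain counts. All values and variable names are invented for illustration; the paper's actual analytics data and model specification are not public.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical analytics snapshot for seven CPN websites; every value here is
# invented for illustration, not taken from the paper.
X = np.array([
    [1200, 310],   # [backlinks, referral domains] per firm
    [1350, 355],
    [980, 240],
    [1500, 400],
    [1100, 290],
    [1420, 380],
    [1280, 330],
])
y = np.array([5400, 6100, 4300, 6900, 5000, 6500, 5800])  # brand-engaged customers

reg = LinearRegression().fit(X, y)
print("coefficients (backlinks, referral_domains):", reg.coef_)
print("R^2:", reg.score(X, y))
```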
Findings
The simulation results of the study make clear that a higher number of backlinks and referral domains tends to increase CPN firms' brand-engaged and vulnerable customers.
Research limitations/implications
The simulation modeling process highlights backlinks and referral domains as factors that enhance website customers' brand engagement and vulnerability. A higher number of brand-engaged website customers could mean that vulnerable categories of customers would be impacted by CPNs' affiliate marketing, so improving those customers' knowledge of the utility of financial services is of the utmost importance.
Practical implications
The outcomes of the research indicate that online banking service providers can increase customers' engagement with their brands by adopting affiliate marketing techniques. To avoid increasing customers' vulnerability, marketers should apply affiliate marketing strategies to domains relevant to the financial services provided.
Originality/value
The paper's outcomes provide a new approach to the literature, in which website customers' brand engagement emerges as a valuable metric for estimating the vulnerability of online banking customers.
Leonie Jane Cassidy and John Hamilton
Abstract
Purpose
Website benchmarking theory and the website analysis method (WAM) are benchmark tested across non-commercial tropical tourism websites. The paper aims to discuss this issue.
Design/methodology/approach
The abridged WAM benchmarks 280 tropical tourism websites from four continental areas (Africa, Asia, Oceania and the Americas), objectively rank-scoring each website on the presence or absence of website components. Significant differences in website benchmark scores across locations are determined. In all, 20 of these websites are ranked by an eight-expert focus group. These experts also seek out the existence of allocated common website components.
Findings
The abridged WAM approach is suitable for benchmarking tropical tourism websites. Website benchmarking scores are determined at each level, and significant continental-area differences exist at the website, domain and function levels. Experts cross-check the study: they find it easier to rank websites with fewer components, and show split decisions when determining the existence of common website components.
Research limitations/implications
This study's abridged version of WAM uses publicly viewable components to show significant differences across website scores, identifies some missing components for possible future inclusion on the websites, and supports the WAM benchmarking theory approach.
Practical implications
Website managers/owners can apply WAM (or an abridged WAM) to benchmark their websites. WAM is theoretically supported and systematically allows comparison against the universal set of components and/or against competitor websites. A full or abridged WAM approach to website benchmarking is preferable to subjective or survey-based approaches.
Originality/value
This study successfully applies the Cassidy and Hamilton (2016) theory and approach to practical website benchmarking.
Leonie Cassidy and John Hamilton
Abstract
Purpose
Literature-identified website benchmarking (WB) approaches are generally time-consuming and survey-based, with little agreement on what website components to measure and how to measure them. The purpose of this paper is to establish a theoretical approach to WB. A comprehensive design science research methodology (DSRM) artifact facilitates the evaluation of a website against the universal set of benchmark components. This knowledge allows managers to gauge/reposition their websites.
Design/methodology/approach
DSRM establishes a website analysis method (WAM) artifact. Across six activities (problem identification, solution objective, artifact design/development, artifact demonstration, artifact evaluation, results communication), the WAM artifact solves the DSRM-identified WB problem.
Findings
The WAM artifact uses 230 differentiated components, allowing managers to understand in-depth and at-level WB. Typological website components deliver interpretable WB scores. Website comparisons are made at domain (aesthetic, marketing, technical) and/or functional levels.
Research limitations/implications
New/emergent components (and occasionally new functions) are included (and redundant components removed) as upgrades to the DSRM WAM artifact's three domains and 28 functions. Such modifications help keep the latest benchmarking comparisons (and/or website upgrades) optimized.
Practical implications
This DSRM study employs a dichotomous present/absent component approach, allowing the WAM artifact’s measures to be software programmed, and merged at three different levels, delivering a useful WB tool for corporates.
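To illustrate how a dichotomous present/absent scheme can be programmed and merged at three levels, a minimal sketch follows. The component, function and domain entries are invented stand-ins; the artifact's actual 230 components and 28 functions are not reproduced here.

```python
# Invented stand-in components: 1 = present, 0 = absent.
website = {
    "aesthetic": {
        "navigation": {"search_box": 1, "breadcrumbs": 0},
        "visual_design": {"consistent_colours": 1, "responsive_layout": 1},
    },
    "marketing": {
        "contact": {"email_link": 1, "phone_number": 0},
    },
    "technical": {
        "performance": {"compressed_images": 1, "valid_html": 0},
    },
}

def function_score(components: dict) -> int:
    # Level 3: sum present/absent components within one function.
    return sum(components.values())

def domain_score(functions: dict) -> int:
    # Level 2: sum function scores across one domain.
    return sum(function_score(c) for c in functions.values())

def website_score(domains: dict) -> int:
    # Level 1: whole-of-website benchmark score.
    return sum(domain_score(f) for f in domains.values())

print({d: domain_score(f) for d, f in website.items()})  # {'aesthetic': 3, 'marketing': 1, 'technical': 1}
print(website_score(website))                            # 5
```

Because each component contributes 0 or 1, scores sum cleanly from function to domain to website level, which is what makes the approach easy to program and compare.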
Originality/value
DSRM identifies the benchmarking problem. Rough-cut set-theory and mutual-exclusivity of components allow the causal-summing of typological website components into an objective WAM artifact WB solution. This new, comprehensive, objective-measurement approach to WB thus offers comparative, competitive, and website behavioral implications for corporates.
Brian F. Blake, Steven Given, Kimberly A. Neuendorf and Michael Horvath
Abstract
Purpose
The purpose of this paper is threefold: first, to present a framework of five “facets,” i.e., distinct but complementary ways in which the observed appeal of a consumer shopping site’s features can potentially be generalized across product/service domains (the authors call this framework the feature appeal generalization perspective); second, to determine if and how observed feature preferences for consumer electronics, bookstores, and sites “in general” generalize across domains; third, to test hypotheses about the impact of frequency of domain usage upon feature generalizability.
Design/methodology/approach
Via an online survey administered in a controlled laboratory setting, 313 respondents evaluated 26 website features in three domains (books, electronics, general) for a total of 24,414 preference judgments.
Findings
Two facets, individual feature values and within-domain evaluative dimensions, revealed minimal generalizability, while between-domain feature correspondence showed moderate comparability across all domains. Personal preference elevation could be generalized between books and general, but not between these two and electronics. Differentiating dimensions showed that preferences were not generalizable from electronics to books and general because consumers wanted electronics features to provide “flashy sizzle” and books/general features to give “comfortable safety.” As hypothesized, patterns of generalizability coincided with frequency of domain usage.
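One plausible way to operationalize the between-domain feature correspondence facet is to correlate mean preference ratings for the same features across two domains, as sketched below; the ratings are invented and the study's 26 features are not reproduced here.

```python
import numpy as np

# Invented mean preference ratings for the same six features in two domains.
mean_pref_books       = np.array([4.1, 3.5, 2.8, 4.4, 3.9, 2.2])
mean_pref_electronics = np.array([3.2, 3.8, 3.9, 4.0, 2.5, 3.1])

# Between-domain correspondence as a Pearson correlation: a high r would
# suggest that the same features appeal in both domains.
r = np.corrcoef(mean_pref_books, mean_pref_electronics)[0, 1]
print(f"between-domain correspondence r = {r:.2f}")
```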
Research limitations/implications
Practitioners should not apply published studies of feature appeal to their domain of interest unless those studies directly analyzed that domain. Scientists should incorporate all five facets in modeling what attracts consumers to commercial websites.
Originality/value
This is the first multidimensional analysis of the generalizability of site feature appeal across business-to-consumer product/service domains, and the first to propose this integrated evaluative framework with its unique facets.
Didem Ölçer and Tuğba Taşkaya Temizel
Abstract
Purpose
This paper proposes a framework that automatically assesses content coverage and information quality of health websites for end-users.
Design/methodology/approach
The study investigates the impact of textual and content-based features in predicting the quality of health-related texts. Content-based features were acquired using an evidence-based practice guideline in diabetes. A set of textual features inspired by professional health literacy guidelines and the features commonly used for assessing information quality in other domains were also used. In this study, 60 websites about type 2 diabetes were methodically selected for inclusion. Two general practitioners used DISCERN to assess each website in terms of its content coverage and quality.
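As a hedged sketch of this kind of supervised quality assessment, the toy example below trains a classifier on textual features of page content against expert-style quality labels. The corpus, labels and bag-of-words features are invented simplifications; the study's actual textual and guideline-based content features are considerably richer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus and labels (1 = acceptable quality, 0 = poor); all invented.
pages = [
    "insulin therapy helps regulate blood glucose under medical supervision",
    "this miracle cure reverses type 2 diabetes in three days",
    "diet, exercise and metformin are first-line management options",
    "buy our supplement and stop all prescribed medication today",
]
labels = [1, 0, 1, 0]

# Bag-of-words stand-in for the study's richer textual features.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(pages, labels)
print(clf.predict(["regular screening and professional advice support early diagnosis"]))
```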
Findings
The proposed framework's outputs were compared with the experts' evaluation scores. For coverage assessment, the best accuracies were 88% with textual features and 92% with content-based features; when both types of features were used, the framework achieved 90% accuracy. For information quality assessment, content-based features yielded a higher accuracy of 92%, against 88% obtained with textual features.
Research limitations/implications
The experiments were conducted for websites about type 2 diabetes. As the whole process is costly and requires extensive expert human labelling, the study was carried out in a single domain. However, the methodology is generalizable to other health domains for which evidence-based practice guidelines are available.
Practical implications
Finding high-quality online health information is becoming increasingly difficult due to the high volume of information generated by non-experts in the area. Search engines fail to rank objective health websites higher within the search results. The proposed framework can aid search engine and information platform developers in implementing better retrieval techniques, in turn facilitating end-users' access to high-quality health information.
Social implications
Erroneous, biased or partial health information is a serious problem for end-users who need access to objective information on their health problems. Such information may cause patients to stop their treatments provided by professionals. It might also have adverse financial implications by causing unnecessary expenditures on ineffective treatments. The ability to access high-quality health information has a positive effect on the health of both individuals and the whole society.
Originality/value
The paper demonstrates that automatic assessment of health websites is a domain-specific problem, which cannot be addressed with the general information quality assessment methodologies in the literature. Content coverage of health websites has also been studied in the health domain for the first time in the literature.
Lisa Ogilvie and Julie Prescott
Abstract
Purpose
The positive addiction recovery website (https://positiveaddictionrecovery.com) has been created following a successful pilot study of a programme of work known as positive addiction recovery therapy (PART). The aim of the website is to disseminate PART to an online audience, extending its reach to a larger population. The purpose of this study is to explain the process of creating this online resource and to conduct a user evaluation to understand how well received the website is likely to be by its target audience.
Design/methodology/approach
An implementation framework cognisant of positive computing, positive technology, contemporary understanding of human–computer interaction and knowledge acquired from the delivery of eHealth interventions over the past decade was used to create the website. To understand user opinion of the resultant website, data were collected using the user version of the Mobile Application Rating Scale.
Findings
By adopting a tailored implementation framework, with appropriate determinant factors of wellbeing and evidence-based theoretical input, a website resource was created that users considered engaging and informative. The findings also suggest that participants appreciated the importance of the intended behavioural change, having interacted with the interventions on the website.
Originality/value
To the best of the authors’ knowledge, the website is the first online version of PART, a new programme of work aimed at people in addiction recovery.
Abstract
Purpose
Designing an effective university mobile website is becoming a necessity for universities. With the increasing percentage of students using smartphones to research colleges and universities, many university websites worldwide are moving towards addressing mobile needs. The purpose of this paper is to provide a comprehensive mobile university evaluation framework that can be used to assess how universities' websites respond to the increasing demand for the mobile web, and to identify trends and gaps in the services currently provided on universities' mobile websites.
Design/methodology/approach
A framework was developed and applied to a set of 35 universities' mobile websites worldwide. The framework consists of four categories: interface, navigation, content and services offered, and technical aspects.
Findings
Evaluation findings show that most universities' mobile websites performed well in terms of mobile-friendliness and functionality; however, suggestions for future improvements are given.
Originality/value
No previous evaluation studies of this kind have been conducted. Moreover, this study provides an evaluation framework dedicated to the assessment of universities' mobile websites.
Bonnie Farber Canziani and Dianne H.B. Welsh
Abstract
Purpose
The study aims to offer a general review of website evaluation, with particular application to the winery tourism field. Automated website evaluation is explored as a complementary tool in the evaluation of small and medium enterprise (SME) winery websites.
Design/methodology/approach
The study adopted a mixed-method investigation including a critical review of winery website evaluation literature and analysis of winery website scores generated through a free service of a commercial automated evaluation scoring system.
Findings
No standards currently exist for winery website evaluation metrics and current evaluation processes suffer from human rater bias. An automated evaluation scoring system used in the study was able to discriminate between a sample of known best practice websites and other independently formed samples representing average wineries in the USA and in North Carolina.
Research limitations/implications
Wineries and other small tourism businesses can benefit by incorporating automated website evaluation and benchmarking into their internet strategies. Human rater limitations noted in manual evaluation may be minimized by using automated rating technology. Automated evaluation system metrics tend to be updated more frequently and offer better alignment with trending consumer expectations for website design.
Originality/value
The current study used an automated website quality evaluation tool that serves to move winery website design efforts forward and supports the goals of reputation management for tourism businesses relying on internet marketing.
Kalyan Nagaraj, Biplab Bhattacharjee, Amulyashree Sridhar and Sharvani GS
Abstract
Purpose
Phishing is one of the major threats affecting businesses worldwide in current times. Organizations and customers face the hazards arising from phishing attacks because attackers gain anonymous access to sensitive details. Such attacks often result in substantial financial losses. Thus, there is a need for effective intrusion detection techniques to identify and possibly nullify the effects of phishing. Classifying phishing and non-phishing web content is a critical task in information security protocols, and foolproof mechanisms have yet to be implemented in practice. The purpose of the current study is to present an ensemble machine learning model for classifying phishing websites.
Design/methodology/approach
A publicly available data set comprising 10,068 instances of phishing and legitimate websites was used to build the classifier model. Feature extraction was performed by deploying a group of methods, and the relevant extracted features were used to build the model. A twofold ensemble learner was developed by integrating results from a random forest (RF) classifier fed into a feedforward neural network (NN). Performance of the ensemble classifier was validated using k-fold cross-validation. The twofold ensemble learner was implemented as a user-friendly, interactive decision support system for classifying websites as phishing or legitimate.
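A minimal sketch of this kind of RF-to-NN stacking on synthetic data is shown below, using scikit-learn's stacking utility as a stand-in for the authors' exact wiring; the data, hyperparameters and the paper's feature extraction pipeline are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the phishing/legitimate feature matrix.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Stage 1: random forest; stage 2: its class probabilities feed a feedforward NN.
rf_nn = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    stack_method="predict_proba",
)

# k-fold cross-validation of the twofold ensemble.
scores = cross_val_score(rf_nn, X, y, cv=10)
print(f"mean CV accuracy: {scores.mean():.3f}")
```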
Findings
Experimental simulations were performed to assess and compare the performance of the ensemble classifiers. The statistical tests estimated that the RF_NN model gave superior performance, with an accuracy of 93.41 per cent and a minimal mean squared error of 0.000026.
Research limitations/implications
The research data set used in this study is publicly available and easy to analyze. Comparative analysis with other real-time data sets of recent origin must be performed to ensure the model generalizes against various security breaches. Different variants of phishing threats must be detected, rather than focusing solely on phishing website detection.
Originality/value
To the best of the authors' knowledge, the twofold ensemble model has not been applied to the classification of phishing websites in any previous study.
Abstract
Purpose
The purpose of this paper is to decrease the traffic created by search engines’ crawlers and solve the deep web problem using an innovative approach.
Design/methodology/approach
A new algorithm was formulated, based on the best existing algorithms, to reduce the traffic caused by web crawlers, which accounts for approximately 40 percent of all network traffic. The crux of this approach is that web servers monitor and log changes and communicate them as an XML file to search engines. The XML file includes the information necessary to generate refreshed pages from existing ones and to reference new pages that need to be crawled. Furthermore, the XML file is compressed to decrease its size to the minimum required.
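A minimal sketch of such a change manifest is given below: the server records page changes in an XML file and gzip-compresses it before it is fetched by search engines. The element names and schema are invented; the paper's actual file format is not reproduced.

```python
import gzip
import xml.etree.ElementTree as ET

# Invented schema: element and attribute names are illustrative only.
changes = ET.Element("changes", domain="example.com")
ET.SubElement(changes, "page", url="/pricing", action="modified", lastmod="2024-05-01")
ET.SubElement(changes, "page", url="/blog/new-post", action="added", lastmod="2024-05-02")

xml_bytes = ET.tostring(changes, encoding="utf-8")

# Compress the manifest so the transfer to search engines is as small as possible.
with gzip.open("changes.xml.gz", "wb") as f:
    f.write(xml_bytes)
```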
Findings
The results of this study have shown that the traffic caused by search engines' crawlers might be reduced by 84 percent on average for text content. However, binary content faces many challenges, and new algorithms have to be developed to overcome these issues. The proposed approach will certainly mitigate the deep web issue. The XML files for each domain used by search engines might also be used by web browsers to refresh their caches, and therefore help reduce the traffic generated by normal users. This reduces users' perceived latency and improves response time to HTTP requests.
Research limitations/implications
The study sheds light on the deficiencies and weaknesses of the algorithms monitoring changes and generating binary files. However, a substantial decrease of traffic is achieved for text-based web content.
Practical implications
The findings of this research can be adopted by web server software and browsers’ developers and search engine companies to reduce the internet traffic caused by crawlers and cut costs.
Originality/value
The exponential growth of web content and other internet-based services, such as cloud computing and social networks, has been causing contention for the available bandwidth of the internet. This research provides a much-needed approach to keeping traffic in check.