Abstract
Purpose
The field of website quality evaluation attracts the interest of a range of disciplines, each bringing its own particular perspective to bear. This study aims to identify the main characteristics – methods, techniques and tools – of the evaluation instruments described in this literature, with specific attention to the factors they analyse, and, based on these factors, to propose a multipurpose model for the development of new comprehensive instruments.
Design/methodology/approach
Following a systematic bibliographic review, 305 publications on website quality are examined; the field's leading authors, their disciplines of origin and the sectors to which the assessed websites belong are identified; and the methods these studies employ are characterised.
Findings
Evaluations of website quality tend to be conducted with one of three primary focuses: strategic, functional or experiential. The technique of expert analysis predominates over user studies, and most of the instruments examined classify the characteristics to be evaluated – for example, usability and content – into factors that operate at different levels, although there is little agreement on the names used to refer to them.
Originality/value
Based on the factors detected in the 50 most cited works, a model is developed that classifies these factors into 13 dimensions and more than 120 general parameters. The resulting model provides a comprehensive evaluation framework and constitutes an initial step towards a shared conceptualization of the discipline of website quality.
Citation
Morales-Vargas, A., Pedraza-Jimenez, R. and Codina, L. (2023), "Website quality evaluation: a model for developing comprehensive assessment instruments based on key quality factors", Journal of Documentation, Vol. 79 No. 7, pp. 95-114. https://doi.org/10.1108/JD-11-2022-0246
Publisher
Emerald Publishing Limited
Copyright © 2023, Alejandro Morales-Vargas, Rafael Pedraza-Jimenez and Lluís Codina
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
Over the last three decades, websites have become one of the most important platforms on the Internet for disseminating information and providing services to society. Shortly after their first appearance, the need to evaluate website quality became evident. The earliest analyses were developed by experts in human-computer interaction and comprised usability heuristics (Nielsen, 2020), design principles (Tognazzini, 2014) and rules (Shneiderman, 2016) aimed at improving interfaces. In parallel, the inspection of websites' technical specifications and the verification of standards for application development emerged (W3C, 2016). Subsequently, interest has grown in designing for an optimal user experience (Garrett, 2011) and in quantifying users' perceptions of that experience (Sauro and Lewis, 2016).
This evolution highlights the fact that, from its very outset, website quality evaluation has taken different approaches, analysing a range of different characteristics and employing a variety of methodologies. This may well be an indication that the discipline of website quality has yet to be fully consolidated and that its field of study is not readily delimited. This conclusion is further strengthened by the fact that the field has yet to agree on a formal definition for itself (Law et al., 2010; Semerádová and Weinlich, 2020).
Over the last twenty years, a number of different authors have offered their definitions. Leavitt and Shneiderman (2006), in one of the earliest attempts, define website evaluation as the act of establishing a correct and exhaustive set of user requirements, ensuring a site provides useful content that meets user expectations, while setting usability goals. For Aladwani and Palvia (2002), website quality is determined primarily by the degree to which a website's features are perceived by users to meet users' needs and to reflect the site's overall excellence; while for Gregg and Walczak (2010) website quality constitutes the attributes that contribute to its usefulness to consumers. Most recently, Morales-Vargas et al. (2020, p. 3) propose defining it as “the ability of a website to meet the expectations of its users and owners, as determined by a set of measurable attributes”.
While these definitions coincide in the need for websites to satisfy user expectations, they differ in terms of the factors that should come under examination. Indeed, as an emerging research area, website quality has yet to achieve a common operationalization in the literature, and each study tends to highlight different measures that are relevant to its own particular context (Law, 2019). When evaluating the quality of a website, it is important to know what can be measured and how to measure it (Akincilar and Dagdeviren, 2014). Moreover, the evaluation of website quality is not the same as a traditional quality evaluation, since it involves multi-criteria decision-making (Ecer, 2014), making it a particularly complex activity.
Thus, it is critical to identify the factors and characteristics that should be evaluated. In this regard, we can identify a first, traditional approach that might be defined as functional. Here, the focus is on the inspection of a website's inherent characteristics, including its content, information architecture and visual design, as well as its technical and operational features, linked to technology and security (ISO, 2008; Leung et al., 2016). The second approach, which we can define as experiential, focuses on user experience and perceptions and examines such factors as usability, accessibility, satisfaction and interaction (Maia and Furtado, 2016; Tullis and Albert, 2013). A third approach is more strategic in nature, focusing as it does on meeting the site owner's objectives, and on the use of performance, visibility and positioning metrics, among others (Kaur and Gupta, 2014; Sanabre et al., 2020).
All evaluation instruments of website quality, regardless of their particular focus, have in common the fact that they seek to conceptualise and delimit the target they seek to measure using some type of unit. The literature employs different names for these units, be it attributes, characteristics, variables (ISO, n.d.), factors or criteria (Chiou et al., 2010). Their use is largely synonymous, being terms that allude to the distinctive features of a certain property of the analysed entity, that is, websites. Codina and Pedraza-Jiménez (2016) propose addressing these properties, from the most general to the most specific, as dimensions, parameters and indicators, a terminology that we employ in this article. Dimensions constitute the generic properties of a website that we might want to evaluate. These can be divided, in turn, into more specific units, referred to as parameters. Finally, the indicators are the core elements of analysis that make it possible to operationalize and assess the parameters. Thus, for example, the dimension of “information architecture” includes “labelling” as one of its parameters and this, in turn, includes, among others, “conciseness”, “syntactic agreement”, “univocity” and “universality” as its indicators.
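To make this three-level terminology concrete, the hierarchy lends itself to a simple nested representation. The following Python sketch is purely illustrative: the dimension, parameter and indicator names are those of the example above, while the nested-dictionary structure is our own convenience, not part of the cited proposal.

```python
# Illustrative only: the dimension > parameter > indicator hierarchy,
# populated with the example given in the text (Codina and
# Pedraza-Jiménez, 2016). The nested-dict representation is ours.
quality_model = {
    "information architecture": {      # dimension: generic property
        "labelling": [                 # parameter: more specific unit
            "conciseness",             # indicators: core elements that
            "syntactic agreement",     # operationalize the parameter
            "univocity",
            "universality",
        ],
    },
}

# Walk the hierarchy, printing each indicator with its full path.
for dimension, parameters in quality_model.items():
    for parameter, indicators in parameters.items():
        for indicator in indicators:
            print(f"{dimension} > {parameter} > {indicator}")
```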
To evaluate these indicators, website quality studies employ different methodologies, experimental and quasi-experimental as well as descriptive and observational, typical of the associative or correlational paradigm. Likewise, such evaluations might adopt either qualitative or quantitative perspectives, undertaking both subjective and objective assessments. Similarly, they might employ either participatory and direct methods – as they record user opinions – or non-participatory or indirect methods – such as inspection or web analytics.
In the case of participatory methods, user experience (UX) studies have focused on user preferences, perceptions, emotions and physical and psychological responses that can occur before, during and after the use of a website (Bevan et al., 2015). The most frequently employed techniques are testing – which resorts to the use of such instruments as usability tests, A/B tests and task analyses; observation – centred on ethnographic, think-aloud and diary studies; questionnaires – including surveys, interviews and focus groups; and biometrics – which uses eye tracking, psychometric and physiological reaction tests, to name just a few (Rosala and Krause, 2020).
Among the most common methods of inspection, we find expert analysis, a procedure for examining the quality of a site or a group of sites employing guidelines, heuristic principles or sets of good practices (Codina and Pedraza-Jiménez, 2016). The most common instrument is that of heuristic evaluation, in which a group of specialists judge whether each element of a user interface adheres to principles of usability, known as heuristics (Paz et al., 2015; Jainari et al., 2022).
Other instruments employed in undertaking inspections include checklists, in which each indicator usually takes the form of a question, and whose answer – typically binary – shows whether or not the quality factor under analysis is met; scales, where each indicator is assigned a relative weight based on the importance established or calculated by the experts for each parameter under evaluation (Fernández-Cavia et al., 2014); indices, metrics that not only evaluate a website's quality, but also how good it is in comparison with similar sites (Xanthidis et al., 2009); and analytical systems, typically qualitative instruments of either a general or specialized nature, which are mainly aimed at evaluating individual websites, conducting benchmarking studies, or for use as web design guides. These systems of analysis vary depending on the factors that their creators consider key to determine the quality of a website (Sanabre et al., 2020). In this study, in order to standardise their name, we refer to them as “evaluation instruments”.
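As a rough illustration of how a checklist and a scale interact in practice, the sketch below aggregates binary checklist answers under expert-assigned weights; the indicator names and weights are invented for the example and carry no normative value.

```python
# Hypothetical sketch: a checklist whose indicators are binary questions,
# combined with expert-assigned weights as a scale would combine them.
# Indicators and weights are invented for illustration.
checklist = {
    "attribution of authorship": (True, 0.4),   # (fulfilled?, weight)
    "contact information": (True, 0.3),
    "internal search engine": (False, 0.3),
}

achieved = sum(w for fulfilled, w in checklist.values() if fulfilled)
possible = sum(w for _, w in checklist.values())
print(f"Weighted score: {achieved / possible:.2f}")  # -> 0.70
```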
These instruments can be applied manually, that is, by experts in website quality or those with an understanding of the discipline; in a semi-automated fashion, with the help of software and specialised validators (Ismailova and Inal, 2017); or in a fully automated manner (Adepoju and Shehu, 2014), using techniques of artificial intelligence (Jayanthi and Krishnakumari, 2016) or natural language processing (Nikolić et al., 2020). Thus, content analysis – a major technique in website quality inspection – can be applied in one of three ways.
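By way of illustration only, the semi-automated mode can be as simple as scripting the verification of dichotomous indicators. The sketch below uses Python's standard library, a hypothetical URL and deliberately naive string matching; it stands in for none of the validators cited above.

```python
# Toy sketch of a semi-automated inspection: fetch a page and check two
# dichotomous indicators by naive string matching. The URL is a
# placeholder; real validators are far more sophisticated.
from urllib.request import urlopen

url = "https://example.org/"  # hypothetical site under evaluation
html = urlopen(url).read().decode("utf-8", errors="replace").lower()

indicators = {
    "contact information": "contact" in html,
    "internal search engine": "<form" in html and "search" in html,
}
for name, fulfilled in indicators.items():
    print(f"{name}: {'fulfilled' if fulfilled else 'not fulfilled'}")
```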
Finally, we also find techniques aimed at the strategic analysis of performance (Król and Zdonek, 2020), including return on investment; search engine positioning (Lopezosa et al., 2019); and competitiveness, including web analytics (Kaushik, 2010) and webmetrics (Orduña-Malea and Aguillo, 2014). Additionally, within this group we find mathematical models for decision making with multiple, hybrid, intuitive or fuzzy criteria (Anusha, 2014). By employing criteria at different, unconnected levels, these models establish a hierarchy of evaluable factors (Rekik et al., 2015). They are used, among other applications, to weight user responses and generate indices of satisfaction or purchase intention.
Thus, this review of the literature highlights that the study of website quality is multidimensional. Moreover, such evaluations can adopt a range of different focuses and employ multiple techniques and instruments. With this as our working hypothesis, we seek here to determine the properties that characterise the main website quality evaluation instruments, as well as to identify the dimensions, parameters and indicators that they analyse in each case. Based on these outcomes, we develop a comprehensive evaluation framework (Rocha, 2012). This, in addition to unifying the different concepts examined and helping to clarify the broad panorama comprised by website quality publications, should serve both as a guide and model for the development of new instruments that can be employed by professionals and researchers alike in this field.
Objectives
The general objective of this article is to identify the main characteristics of the instruments of website quality evaluation described in the literature, with particular attention to the factors they analyse, and then, based on this analysis, to propose a multipurpose model for the development of new comprehensive instruments.
Specific objectives
1. Characterize the main methods and techniques of evaluation used in website quality analyses, while identifying the specific focus of the instruments proposed: be it strategic, functional or experiential.
2. Determine which website quality factors are used by the instruments employed in the most cited works, and how these are grouped into different dimensions, parameters and indicators.
3. Build a model that can serve as a guide for the development of future instruments for evaluating website quality.
Methodology
To achieve the objectives outlined above, the systematic bibliographic review method (Booth et al., 2016) was employed, undertaking a search in academic databases and conducting a systematic mapping of the literature (Gough et al., 2017). Specifically, the review was carried out applying the SALSA protocol (Grant and Booth, 2009), which includes the search, appraisal, analysis and synthesis of the selected works.
In the search phase, to identify the main published works on website quality evaluation, we used the search equation presented below, comprising the most common keywords in the specialized literature and representative of the main facet of the field as it stands today: [website OR “web site” OR “web sites”] AND [quality] AND [evaluation OR evaluating OR evaluate OR analysis OR assessment OR assess OR assessing OR assurance OR index OR guideline OR standard OR heuristic].
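For reproducibility, the equation can be rebuilt programmatically. The sketch below simply assembles the boolean string from its three facets in the generic OR/AND syntax shown above; each database's exact field codes and quoting rules would differ.

```python
# Rebuild the review's search equation from its three facets. Generic
# OR/AND syntax only; platform-specific field codes vary.
subject = ['website', '"web site"', '"web sites"']
topic = ['quality']
action = ['evaluation', 'evaluating', 'evaluate', 'analysis', 'assessment',
          'assess', 'assessing', 'assurance', 'index', 'guideline',
          'standard', 'heuristic']

def facet(terms):
    """Join a facet's terms with OR and wrap it in brackets."""
    return '[' + ' OR '.join(terms) + ']'

query = ' AND '.join(facet(f) for f in (subject, topic, action))
print(query)
# [website OR "web site" OR "web sites"] AND [quality] AND [evaluation OR ...]
```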
The query was executed in the multidisciplinary databases of Web of Science (WoS) and Scopus and the results were ordered by relevance, filtered by language, selecting only studies published in English, and by year of publication, comprising the six-year period 2014–2019 (Codina, 2018). This procedure was repeated in other specialized databases of importance in the discipline, including IEEE, ACM, Emerald and the LISTA collection of EBSCO, among other information resources.
Likewise, the Google Scholar search engine was used; in addition to its wide coverage (Martín-Martín et al., 2018), it includes books, technical reports and other documents of interest to both the academic and professional communities in the field of website development. To these were added international guidelines and standards detected by undertaking a systematic mapping review (Gough et al., 2017). As a result, a corpus of 432 documents was created, once duplicates and false positives had been excluded.
These documents were appraised by conducting a manual examination of titles and abstracts to determine whether they met the established inclusion or exclusion criteria. The former included studies dedicated to website quality analysis published in the previously established period and language. Publications dedicated solely to web analytics, studies of mobile phone applications and studies focused on user psychology and not on a particular website were excluded. Thus, an evidence base (Yin, 2015) comprising 305 documents was finally obtained.
In the third phase, all the papers were reviewed, their formal aspects described, their quality attributes and methodological tools classified according to a code book (Lavrakas, 2008), and relevant data about their content collected. Then, based on the number of citations reported in Google Scholar as of September 2020, the average number of citations – average citation count, ACC – was determined, normalised according to the number of years elapsed since publication (Dey et al., 2018). Using this indicator, we identified the 50 most cited texts, which account for 86% of the total number of citations received.
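As we read it, this normalisation reduces to dividing a work's Google Scholar citations by the number of years elapsed between publication and the September 2020 census; a minimal sketch under that assumption, with figures reproduced from Table 1:

```python
# Average citation count (ACC): Google Scholar citations divided by the
# years elapsed between publication and the September 2020 census.
def acc(citations: int, year: int, census_year: int = 2020) -> float:
    """Citations normalised by years since publication."""
    return citations / (census_year - year)

print(round(acc(1110, 2016)))  # W3C (2016): 1110 / 4 years ≈ 278 (Table 1)
print(round(acc(533, 2014)))   # Krug (2014): 533 / 6 years ≈ 89 (Table 1)
```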
Finally, in the synthesis phase, all the data were systematized onto a spreadsheet containing the following details: the characteristics of the websites evaluated; the parameters and indicators considered as quality factors; and the respective methods, models, instruments and software on which the evaluation instruments proposed in each study are based.
Results
The main findings from the coincidence count conducted on the data obtained in the synthesis phase, together with the most relevant outcomes derived from it, are detailed below.
Characteristics
Between 2014 and 2019, a total of 305 publications on website quality evaluation were found, with an average of 51 studies per year. A steady upward trend is evident in the period analysed.
Among the scientific journals, 166 different titles were detected, 44 of which belong to the field of health and medical informatics. The journals with the highest number of articles published on this subject were The Electronic Library, International Journal of Engineering and Technology, International Journal of Information Management, Online Information Review and Universal Access in the Information Society.
The number of citations received by each text according to Google Scholar (GS) was also recorded. Table 1 shows the fifteen works with the highest average citation count. The first twelve positions are occupied by website quality guidelines, such as those of the World Wide Web Consortium (W3C, 2016) and new editions of reference books in the discipline (Krug, 2014; Sauro and Lewis, 2016; Shneiderman et al., 2016). These publications mostly contain general recommendations, that is, applicable to any website, with the exception of the guide for websites of the European Union (European Commission, 2016) and the HONcode (Health On the Net, 2017), specialized in medical information.
The level of specialization of the evaluation instruments proposed in these works was also examined. Specifically, a distinction was drawn between those that propose an analysis applicable to all types of website (general) and those that focus on a specific sector. It emerged that most of the evaluation instruments (73.4%) focus on a particular sector (Figure 1).
The same figure shows that the latter are led by the education sector – universities, libraries and museums, among others – closely followed by the health sector, which includes both health sites and hospital websites. On a smaller scale, we find the government sector, which focuses on the quality of websites of government administrations and municipalities; commerce, dominated by e-commerce stores; tourism, with sites of destinations, hotels and airlines; and the media, focused on the Internet news media.
Methods, focuses and techniques
A clear predominance of the associative or correlational paradigm over the experimental is observed in the type of applied research conducted on the evaluation instruments. Indeed, most of the analytic instruments use observational or descriptive methodologies. Also evident is the pre-eminence of qualitative over quantitative approaches, and a balance between objective evaluations, based on the verification of observable characteristics, and subjective assessments, based on the perceptions of experts and users.
In turn, most of the proposals are based on non-participatory or indirect methods and, as a result, there are fewer instruments based on surveys or interviews. Similarly, more studies focus on the verification of technical and functional requisites (57.4%) than on user experience (23.0%), on the strategic objectives of the site owner (14%), or on a mixed combination of these (5.5%).
If we examine more specifically the instruments present in all the publications (Table 2), we find that three-quarters were designed to be applied by professional experts in website quality, and include checklists, indices and scales, and specialized instruments that articulate various dimensions for evaluation. In contrast, usability tests and user questionnaires are much scarcer.
Dimensions, parameters and indicators
Of the 305 publications, 241 (79.0%) present website quality criteria expressed as dimensions, parameters or indicators, the latter being the most specific unit of analysis. To further our examination of these criteria, we concentrate on the systematization of the criteria present in the evaluation instruments proposed in the fifty works with the highest average number of citations.
Overall, we detected 38 factors explicitly stated as dimensions or parameters and 154 as indicators. As Table 3 shows, there is a degree of overlap between the two lists given that each author ranks and classifies the website quality factors differently depending on their own specific objectives.
It is apparent that usability and accessibility occupy the first positions both as a dimension or parameter and as an indicator. However, if all the factors directly linked to content are grouped together – that is, readability, language, transparency and others – this criterion is the one that concentrates the highest number of mentions. Information architecture and navigation, as well as interface graphic design, also feature prominently.
It should be noted here that there are entire studies that focus exclusively on a single parameter – the case, for example, of credibility (Choi and Stvilia, 2015) and accessibility (Kamoun and Almourad, 2014) – but which are treated as just one more indicator in others. There are also instruments that include indicators that apply only to very specific types of site, such as “public values” and “citizen engagement” on local government websites (Karkin and Janssen, 2014) or “emotional appeal” and “use of science in argumentation” in health websites (Keselman et al., 2019).
Likewise, we detect indicators that differ greatly in their nature. Thus, atomic and dichotomous indicators, verifying the presence of a specific element – such as an internal search engine or contact information – coexist with other more abstract, subjective properties, such as coherence, integrity, aesthetic appeal or familiarity. This multiplicity of characteristics and conditions in the nature of the indicators leads us to propose a categorization (Table 4) that should facilitate a better understanding of them.
As can be seen, the indicators can be designed with a specific focus in mind, be they strategic, functional or experiential in nature. The latter, for example, cannot be assessed by means of a metric or a technical inspection, but require a more complex evaluation – often expressed using a scale or score – applied by an expert or by recording the perceptions expressed by a website's users.
Instruments, tools and models
Precisely because of this need to measure indicators of a different nature, website quality evaluation uses a multiplicity of instruments, models and tools. Many originate from the research methodologies employed in the social sciences – the case, for example, of questionnaires, interviews and observation – while others – such as web analytics and code validators – were formulated specifically to evaluate a site's characteristics.
Table 5 reports the techniques most frequently employed by the evaluation instruments described in the 50 publications with the highest number of average citations. It shows that undertaking surveys is the most frequently used technique for collecting user data in these studies. Other techniques used for this purpose include task observation, usability tests, interviews and focus groups. Expert analyses are also represented, as identified through the use of checklists, content analyses, manual inspections and web analytics, all of which are indirect methods that do not necessarily require user participation.
The instruments also employ specialized tools and software, among which we find both manual procedures – such as the DISCERN or HONcode guidelines (Dueppen et al., 2019; Manchaiah et al., 2019) for the evaluation of medical information on the Internet and the Web Content Accessibility Guidelines (WCAG) 2.0 – and automated inspection mechanisms, including the W3C HTML code and CSS validators, the Majestic SEO tool for analysing backlinks and the Readability Studio software, aimed at determining text readability (Cajita et al., 2017).
Other software mentioned include AChecker, EvalAccess 2.0, WaaT and Fujitsu Web Accessibility Inspector for automated accessibility validation; Xenu's Link Sleuth and LinkMiner for broken link detection; Pingdom, for monitoring download speed and service availability; SortSite for website technical analysis; mobileOK for mobile adaptability; and SimilarWeb for measuring the site's traffic and that of its competitors, to name just a few (Ismailova and Inal, 2017).
We also find mathematical models designed for multiple-criteria decision-making that are employed primarily in e-commerce sites. In models of this type, user and expert responses, collected by means of assessment scales, are subjected to a variable-weighting mechanism to obtain, for example, an index of perceived quality (Cristóbal Fransi et al., 2017) or of content credibility (Choi and Stvilia, 2015).
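A minimal sketch of such a variable-weighting mechanism follows; the criteria, weights and 1–5 scale responses are invented for illustration and are not drawn from either of the cited models.

```python
# Hypothetical sketch of a variable-weighting mechanism: 1-5 scale
# responses are weighted and rescaled into a 0-100 index of perceived
# quality. Criteria, weights and responses are invented.
weights = {"convenience": 0.5, "visual appeal": 0.3, "trust": 0.2}
responses = {"convenience": 4, "visual appeal": 4, "trust": 3}  # 1-5 scale

weighted_mean = sum(weights[c] * responses[c] for c in weights)  # still 1-5
index = (weighted_mean - 1) / 4 * 100                            # rescale 0-100
print(f"Perceived quality index: {index:.0f}/100")               # -> 70/100
```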
Proposed model
Following on from the review of the literature dedicated to website quality evaluation and drawing above all on the 50 most cited works, we propose a multipurpose model with three specific focuses for the formulation and application of comprehensive instruments of evaluation. We divide this model into three parts: the first provides a breakdown of website quality parameters, organized according to the specific focus they offer; the second serves as a visual scheme of the model's main dimensions and focuses; and the third comprises a set of tasks, or framework, that synthesizes the stages a researcher needs to consider when designing a website quality evaluation instrument.
In Table 6, we classify into thirteen dimensions the more than 120 website quality factors that appear most frequently in the 50 most cited texts. These factors are treated here as parameters because each of them can be broken down further into a number of different indicators. The dimensions are presented in descending order of frequency as they appear in the literature, while the parameters are organised according to the specific focus taken by the study.
Thus, the table compiles the parameters that have been the object of most attention in the website quality studies identified as having greatest impact. The model proposed on the basis of this mapping aims to offer researchers wanting to design new evaluation instruments a broad initial set of common parameters. The parameters, moreover, are all of a general nature and, as such, can be applied to any type of website. Consequently, the parameters can also be used to complement the specific parameters of sector-specific evaluation instruments.
As can be seen, usability and content are the dimensions with the most parameters, while the others comprise fewer. However, we have opted here for a hierarchical structure so that important website factors that are typically dealt with less frequently in the literature – such as user assistance and support, advertising and legal aspects – are more visible. In so doing, we also seek to identify gaps and, hence, research opportunities – the case, for example, of the parameters to evaluate website services, which are not as well developed as those of website content. It also emerges that certain parameters respond to more than one focus within the same dimension – the case, for example, of multilingualism or user satisfaction – but we have opted not to repeat them and have instead taken a decision regarding their classification.
The second component of the model is a diagram (Figure 2) that synthesizes the dimensions of website quality evaluation, placing at its core the three analytical focuses proposed: strategic, experiential and functional. For each focus, it then shows, in a tiered arrangement, the dimensions that we consider most important for any website. The figure can be read as follows: starting from the base with the site's essential elements and working up to the peak, we begin the evaluation by determining how solid the content and services base is, continue by analysing the interface and user experience, and conclude by verifying whether the website owner's strategic objectives – a critical factor in any exhaustive evaluation – have been met.
Finally, the model also includes a framework or procedure for the creation of instruments to evaluate website quality. Table 7 classifies and sets out the individual steps required to design either a general or sector-specific instrument. It is organized in accordance with the most frequently employed techniques in the discipline: namely, user studies, expert analyses and strategic analyses. In this way, those responsible for the creation of the instrument can opt to incorporate those techniques they consider most pertinent, with triangulation being recommended for the most exhaustive evaluation.
This framework is divided, in turn, into five design stages: definition, research, parameterization, testing and validation. In the first, the design of the instrument is planned in relation to a set of given requirements – including the objectives and scope – and the conditions that delimit its use – including the resources and the degree of data access granted to the key informants. In the second, the research stage, a study is undertaken of the specific characteristics of the sector to which the site belongs, its context of use, the profile of its users and the concrete recommendations previously made by other experts. These first two stages are common to each of the three techniques addressed.
From the third stage onwards, the tasks vary depending on the technique chosen by the creators of the instrument (see Table 7). In the parameterization stage, all the sector-specific quality factors relevant to the website's objective are determined. Then, in the testing stage, an initial test of the instrument is made to identify opportunities for improvement and to calibrate it for purposes of optimization. Finally, in the validation stage, its reproducibility is verified based on the observations of other experts.
In this way, the model guarantees that any evaluation instrument created using this methodology provides an exhaustive analysis of the quality of any given website. This is thanks to the fact that the model recommends the use of a triangulation of focuses and techniques and considers such components as the testing of the general heuristics of usability; the expert analysis of sector-specific indicators; the study of users, albeit with indirect methods such as web analytics; and, importantly, the verification that the site meets its strategic objectives.
To ensure that the cycle of enhancement continues to have positive effects on the websites analysed, we recommend communicating the results in a timely and effective manner, with a summary of the most relevant findings or insights, ideally accompanied by suggested approaches for solving the most recurrent problems.
Discussion
Based on these results, and in line with the conclusions drawn by other authors (Rekik et al., 2018; Semerádová and Weinlich, 2020), it is evident that studies concerned with website quality evaluation have undergone steady growth in recent years, attracting primarily the interest of authors from academia, but also from the professional world. In this regard, the interest shown in such analyses by a number of specific academic disciplines is notable, led by the computer sciences, health sciences and business. However, it is worth stressing that no interdisciplinary or transdisciplinary studies involving these fields have been detected and that most of the papers cite almost exclusively references from their own discipline.
At the same time, it is apparent that proposals for sector-specific or specialized evaluation instruments are increasing (Morales-Vargas et al., 2020). The education and health sectors – above all, the analysis of health information sites – are the sectors with the highest number of studies, followed by those of government, commerce and tourism.
A finding of some relevance, here, is the focus adopted by the website quality evaluation instruments. All in all, we detect three primary focuses: strategic, oriented to the fulfilment of the site owner's objectives; functional, present in more than half the proposals and designed to verify the presence of technical factors; and experiential, with a concern for user experience and perception. Sanabre et al. (2020) are pioneers in combining strategic and functional focuses, but the incorporation of all three is not evident in any of the studies reviewed herein.
A common element in the way evaluation instruments are organized is the fact that most opt to express the factors to be analysed in dimensions, parameters and indicators. Although a variety of different names are employed to refer to them – including attributes, criteria, variables and characteristics, as reported by Chiou et al. (2010) – what is present in all of them is the idea of starting from broad groups of properties which are then gradually broken down into more specific units of analysis that facilitate inspection.
Content, usability and accessibility are the most frequently occurring dimensions among the most cited studies, followed by information architecture and visual design. In the case of the pre-eminence of content, our results coincide with those of Cao and Yang (2016) and Hasan (2014). Similarly, as regards the number of different indicators detected, our results are in line with the outcomes reported by Sun et al. (2019). However, other studies have tended to assign the leading role to credibility (Choi and Stvilia, 2015; Huang and Benyoucef, 2014), functionality (Law, 2019) or trust (Daraz et al., 2019).
Our study also identifies indicators that differ in nature, for example, in their level of specificity. Therefore, here, for the first time, we propose a categorization of the parameters according to their scope, site of validation, focus, way of scoring and perspective. We construct a model that classifies the parameters detected – numbering more than 120 – in thirteen dimensions and three focuses. In this way, we seek to identify the elements that make up an instrument for evaluating websites as well as the main characteristics it is designed to analyse. The classification we propose is based on previous studies that have been validated by experts, including for example the Lee-Geiller and Lee (2019) model.
Having identified the general parameters for the evaluation of all types of website, we propose a procedure for creating new instruments for evaluating website quality. The procedure comprises the following five stages: (1) definition of objectives; (2) study of the characteristics specific to a given sector; (3) the parameterization of the most relevant attributes; (4) the piloting or testing of the instrument; and (5) its subsequent validation by other experts. In this way, an evaluation centred on three main points of focus – strategic, functional and experiential – is guaranteed, satisfying also the need to use multiple tools, as detected by Rekik et al. (2018), and triangulation, as recommended by Whitenton (2021).
Conclusions
As is more than evident, website quality as a field of study continues to occupy a broad space in which different areas of knowledge are in continuous dialogue. But the field has yet to develop a shared terminology, a shortcoming that hinders efforts to establish its conceptualization as a discipline in its own right.
Despite the technological advances made and the growing technical mastery of their users, websites are still in need of evaluation instruments that can enhance both their performance and user experience. This is most apparent when these websites belong to a sector whose content, functions and services are characterised by a set of specific requirements.
As such, we wish to highlight the importance in the field of website quality of being able to identify and analyse a set of dimensions, parameters and indicators that are specific to each type of website. However, at the same time, it is critical that this be done by adopting a range of different focuses: in other words, the instrument of evaluation has to be able to assess the technical and functional requirements as well as the website's strategic objectives and user experience.
This study, therefore, proposes a model for the development of new comprehensive instruments for the evaluation of website quality that are applicable to a very broad set of domains. It also constitutes an initial step in the adoption of a shared conceptualization in this field of study. The latter should, moreover, promote the sharing, reuse and comparison of the instruments proposed by other website quality researchers and professionals working in different disciplines.
Figures
Publications with the greatest number of average citations
Authors | Year | Title | GS | ACC |
---|---|---|---|---|
W3C | 2016 | Web design and applications standards | 1,110 | 278 |
Sauro and Lewis | 2016 | Quantifying the user experience: practical statistics for user research | 833 | 208 |
Apple | 2018 | Human interface guidelines | 231 | 116 |
Krug | 2014 | Don't make me think, revisited: A common sense approach to web and mobile usability | 533 | 89 |
Shneiderman et al. | 2016 | Designing the user interface: Strategies for effective human-computer interaction | 297 | 74 |
European Commission | 2016 | Europa web guide | 291 | 73 |
Toxboe | 2018 | Design patterns | 121 | 61 |
Health On the Net | 2017 | Principles: The HON Code of Conduct for medical and health web sites | 99 | 33 |
Al-Qeisi et al. | 2014 | Website design quality and usage behavior: Unified theory of acceptance and use of technology | 192 | 32 |
Quiñones and Rusu | 2017 | How to develop usability heuristics: A systematic literature review | 93 | 31 |
Kurosu | 2015 | Human-computer interaction: Design and evaluation | 149 | 30 |
Bevan et al. | 2015 | ISO 9241-11 revised: What have we learnt about usability since 1998? | 147 | 29 |
Lee-Geiller and Lee | 2019 | Using government websites to enhance democratic E-governance: A conceptual model for evaluation | 27 | 27 |
Thielsch and Hirschfeld | 2019 | Facets of website content | 23 | 23 |
Fernández-Cavia et al. | 2014 | Web Quality Index (WQI) for official tourist destination websites. Proposal for an assessment system | 136 | 23 |
Note(s): GS, Google Scholar citations; ACC, Average Citation Count
Main instrument employed by each publication
Instrument | % |
---|---|
Checklists | 34.4 |
Index or rating scale | 21.6 |
Articulated system of analysis | 15.7 |
Design guide | 9.2 |
Questionnaire | 7.9 |
Usability test | 5.2 |
Framework | 4.6 |
Pattern | 1.3 |
Website quality factors present in the 50 most cited publications
Factor | Dimension or parameter | Indicator |
---|---|---|
Usability | 9 | 7 |
Content | 8 | 9 |
Accessibility | 6 | 13 |
Information architecture | 5 | 3 |
Graphic design | 3 | 9 |
Readability | 3 | 3 |
Credibility | 2 | 3 |
Privacy | 2 | 3 |
Service quality | 2 | 2 |
Language | 2 | 2 |
Expertise | 2 | 2 |
Positioning | 2 | 1 |
Transparency | 1 | 5 |
Others | 22 | 138 |
Characteristics of website quality indicators
Criteria | Characteristics | Description | Examples |
---|---|---|---|
Scope | General | Common to all websites | Navigation, security |
| Sector-specific | Focused on a specific sector | Perceived risk, purchase intention (commerce sector) |
Validation | Internal | Verifiable on the website | Updating, internal search engine |
| External | Verifiable off the website | Positioning, return on investment |
Focus | Strategic | Based on satisfying owner's objectives | Conversion, loyalty, traffic |
| Functional | Based on verification of technical and functional features | Link performance, fulfilment of standards |
| Experiential | Based on user experience | Convenience, satisfaction |
Scoring | Dichotomous | Verification is binary: does it fulfil a condition or not? | Attribution of authorship, contact information |
| Scalar | Verification is normally expressed using an assessment scale | Readability, relevance |
Perspective | Objective | Inherent and comparable website quality, free of judgements and prejudices | Link to social networks, validation of HTML code and CSS |
| Subjective | Based on user perception | Visual appeal, perceived value |
Techniques used in the 50 most cited publications
Instrument or technique | No. of mentions |
---|---|
Survey | 11 |
Checklist | 5 |
Content analysis | 3 |
Manual inspection | 3 |
Task observation | 3 |
Web analytics | 2 |
Satisfaction rate | 2 |
Usability test | 2 |
Interview | 1 |
Focus group | 1 |
Most frequently mentioned website quality parameters, organised by focus
Dimension | Strategic focus | Functional focus | Experiential focus |
---|---|---|---|
Usability and accessibility | | | |
Content and services | | | |
Information architecture | | | |
User experience | | | |
Graphic design | | | |
Technology and security | | | |
Interactivity | | | |
Performance and effectiveness | | | |
Legal aspects | | | |
Assistance and support | | | |
Advertising and marketing | | | |
Multimedia | | | |
Participation and sociability | | | |
Source(s): Created by authors based on most cited publications
Framework: Stages in the design of a website quality evaluation instrument
I. Definition
1. Define the instrument's objectives and identify its target users
2. Establish its scope: general or sector-specific
3. Determine the resources, deadlines and degree of effective access to information
4. Delimit the scope of the analysis: exhaustive or centred on a specific parameter
5. Determine the focus of analysis: strategic, functional or experiential
II. Research
If the instrument is of general scope:
6. Review standards, heuristics and principles
If the instrument is sector-specific in scope, in addition:
7. Know the specific characteristics and objectives of the sector
8. Characterise the sector's user profile
9. Study the website's context of use: motives, presence, cultural factors
10. Review existing literature, guidelines and directives
11. Complete a comparative analysis of the sector's leading websites
12. Identify the sector's key content, functions and services
In either case, choose one of the following techniques or, ideally, triangulate them:
a. Strategic analysis | b. Expert analysis | c. User study |
---|---|---|
III. Parameterization
13. Express the objectives in measurable indicators | 13. Identify and define the key factors | 13. Formulate the research questions |
14. Specify these metrics as key-performance indicators (KPIs) | 14. Organise them in parameters and indicators | 14. Choose the paradigm and methodologies |
15. Determine the tools and software needed for their verification | 15. Express them as checkpoints: assertions, questions, protocols | 15. Determine the universe, the population and the sample |
16. Configure the software with access to the site's data | 16. Determine the scoring: dichotomous or scale | 16. Establish the possible factors to be measured |
17. Fix the period of analysis | 17. Add the definition of the indicator (ideally by means of an example or pattern) | 17. Specify them in parameters and indicators |
18. Determine the weighted values of each indicator | 18. Choose the most appropriate techniques | |
19. Evaluate and define the tools on which the inspection is based (manual, semi-automated, software) | 19. Design the user tests | |
20. Address ethical questions and issues of data protection | ||
IV. Testing
18. Collect information by conducting a pilot test | 20. Apply and pilot the first version of the instrument on test sites | 21. Run a pilot test |
19. Report the results using a balanced scorecard | 21. Extract and systematize key information for website evaluation and improvement | 22. Apply the instrument or user test |
20. Extract and systematize key information for website evaluation and improvement | 22. Refine the instrument making improvements that facilitate its application | 23. Collect the data (attitudes, behaviours, perceptions, responses to stimuli) |
21. Compare outcomes with other sites using competitive intelligence | 24. Analyse the results | |
25. Extract and systematize key information for website evaluation and improvement | ||
V. Validation
22. Have other experts validate the instrument | 23. Have other experts validate the instrument | 26. Have other researchers validate the instrument and replicate it in new studies |
23. Apply it to case studies or comparative analyses | 24. Apply it to other websites in the sector |
Funding: This work is supported by Spain's Ministry of Science and Innovation (MICINN) project “Parameters and strategies to increase the relevance of media and digital communication in society: curation, visualisation and visibility (CUVICOM)”. PID 2021-123579OB-I00.
References
Adepoju, S.A. and Shehu, I.S. (2014), “Usability evaluation of academic websites using automated tools”, 3rd International Conference on User Science and Engineering (i-USEr), IEEE, pp. 186-191.
Akincilar, A. and Dagdeviren, M. (2014), “A hybrid multi-criteria decision making model to evaluate hotel websites”, International Journal of Hospitality Management, Vol. 36, pp. 263-271.
Aladwani, A.M. and Palvia, P.C. (2002), “Developing and validating an instrument for measuring user-perceived web quality”, Information and Management, Vol. 39 No. 6, pp. 467-476.
Al-Qeisi, K., Dennis, C., Alamanos, E. and Jayawardhena, C. (2014), “Website design quality and usage behavior: unified theory of acceptance and use of technology”, Journal of Business Research, Vol. 67 No. 11, pp. 2282-2290.
Anusha, R. (2014), “A study on website quality models”, International Journal of Scientific and Research Publications, Vol. 4 No. 12.
Apple (2018), Human Interface Guidelines, Apple.
Bevan, N., Carter, J. and Harker, S. (2015), “ISO 9241-11 Revised: what have we learnt about usability since 1998?”, in Kurosu, M. (Ed.), Human-Computer Interaction: Design and Evaluation, Springer International Publishing, Cham, pp. 143-151.
Booth, A., Sutton, A. and Papaioannou, D. (2016), Systematic Approaches to a Successful Literature Review, 2nd ed., SAGE Publications, London.
Cajita, M.I., Rodney, T., Xu, J., Hladek, M. and Han, H. (2017), “Quality and health literacy demand of online heart failure information”, Journal of Cardiovascular Nursing, Vol. 32 No. 2, pp. 156-164.
Cao, K. and Yang, Z. (2016), “A study of e-commerce adoption by tourism websites in China”, Journal of Destination Marketing and Management, Vol. 5 No. 3, pp. 283-289.
Chiou, W.-C., Lin, C.-C. and Perng, C. (2010), “A strategic framework for website evaluation based on a review of the literature from 1995–2006”, Information and Management, Vol. 47 Nos 5-6, pp. 282-290.
Choi, W. and Stvilia, B. (2015), “Web credibility assessment: conceptualization, operationalization, variability, and models”, Journal of the Association for Information Science and Technology, Vol. 66 No. 12, pp. 2399-2414.
Codina, L. (2018), Revisiones bibliográficas sistematizadas: procedimientos generales y framework para ciencias humanas y sociales, Universitat Pompeu Fabra, Barcelona.
Codina, L. and Pedraza-Jiménez, R. (2016), “Características y componentes de un sistema de análisis de medios digitales: el SAAMD”, in Pedraza-Jiménez, R., Codina, L. and Guallar, J. (Eds), Calidad en sitios web: Método de análisis general, e-commerce, imágenes, hemerotecas y turismo, Editorial UOC, Barcelona, pp. 15-40.
Cristóbal Fransi, E., Hernández Soriano, F. and Marimon, F. (2017), “Critical factors in the evaluation of online media: creation and implementation of a measurement scale (e-SQ-Media)”, Universal Access in the Information Society, Vol. 16 No. 1, pp. 235-246.
Daraz, L., Morrow, A.S., Ponce, O.J., Beuschel, B., Farah, M.H., Katabi, A., Alsawas, M., Majzoub, A.M., Benkhadra, R., Seisa, M.O., Ding, J., Prokop, L. and Murad, M.H. (2019), “Can patients trust online health information? A meta-narrative systematic review addressing the quality of health information on the internet”, Journal of General Internal Medicine, Vol. 34 No. 9, pp. 1884-1891.
Dey, A., Billinghurst, M., Lindeman, R.W. and Swan, J.E., II (2018), “A systematic review of 10 years of augmented reality usability studies: 2005 to 2014”, Frontiers in Robotics and AI, Vol. 5, p. 37, doi: 10.3389/frobt.2018.00037.
Dueppen, A.J., Bellon-Harn, M.L., Radhakrishnan, N. and Manchaiah, V. (2019), “Quality and readability of English-language internet information for voice disorders”, Journal of Voice, Vol. 33 No. 3, pp. 290-296.
Ecer, F. (2014), “A hybrid banking websites quality evaluation model using AHP and COPRAS-G: a Turkey case”, Technological and Economic Development of Economy, Vol. 20 No. 4, pp. 758-782.
European Commission (2016), “Europa web guide”, The EU Internet Handbook, available at: http://ec.europa.eu/ipg/ (accessed 17 June 2018).
Fernández-Cavia, J., Rovira, C., Díaz-Luque, P. and Cavaller, V. (2014), “Web Quality Index (WQI) for official tourist destination websites. Proposal for an assessment system”, Tourism Management Perspectives, Vol. 9, pp. 5-13.
Garrett, J.J. (2011), The Elements of User Experience: User-Centered Design for the Web and beyond, 2nd ed., New Riders, Indianapolis.
Gough, D., Oliver, S. and Thomas, J. (2017), An Introduction to Systematic Reviews, 2nd ed., SAGE Publications, London.
Grant, M.J. and Booth, A. (2009), “A typology of reviews: an analysis of 14 review types and associated methodologies”, Health Information and Libraries Journal, Vol. 26 No. 2, pp. 91-108.
Gregg, D.G. and Walczak, S. (2010), “The relationship between website quality, trust and price premiums at online auctions”, Electronic Commerce Research, Vol. 10 No. 1, pp. 1-25.
Hasan, L. (2014), “Evaluating the usability of educational websites based on students’ preferences of design characteristics”, International Arab Journal of E-Technology, Vol. 3 No. 3, pp. 179-193.
Health On the Net (2017), Principles: The HON Code of Conduct for Medical and Health Web Sites, Geneva, Switzerland.
Huang, Z. and Benyoucef, M. (2014), “Usability and credibility of e-government websites”, Government Information Quarterly, Vol. 31 No. 4, pp. 584-595.
Ismailova, R. and Inal, Y. (2017), “Web site accessibility and quality in use: a comparative study of government web sites in Kyrgyzstan, Azerbaijan, Kazakhstan and Turkey”, Universal Access in the Information Society, Vol. 16 No. 4, pp. 987-996.
ISO (n.d.), "Terms & definitions", Online Browsing Platform (OBP), available at: https://www.iso.org/obp/ (accessed 18 February 2021).
ISO (2008), “ISO 9241-151:2008 ergonomics of human-system interaction. Part 151: guidance on World wide web user interfaces”, International Organization for Standardization.
Jainari, M.H., Baharum, A., Deris, F.D., Mat Noor, N.A., Ismail, R. and Mat Zain, N.H. (2022), “A standard content for university websites using heuristic evaluation”, in Arai, K. (Ed.), Intelligent Computing. SAI 2022. Lecture Notes in Networks and Systems, Springer, Cham, Vol. 506, doi: 10.1007/978-3-031-10461-9_19.
Jayanthi, B. and Krishnakumari, P. (2016), “An intelligent method to assess webpage quality using extreme learning machine”, International Journal of Computer Science and Network Security, Vol. 16 No. 9, pp. 81-85.
Kamoun, F. and Almourad, M.B. (2014), “Accessibility as an integral factor in e-government web site evaluation: the case of Dubai e-government”, Information Technology and People, Vol. 27 No. 2, pp. 208-228.
Karkin, N. and Janssen, M. (2014), “Evaluating websites from a public value perspective: a review of Turkish local government websites”, International Journal of Information Management, Vol. 34 No. 3, pp. 351-363.
Kaur, S. and Gupta, S.K. (2014), “Key aspects to evaluate the performance of a commercial website”, IJCA Proceedings on International Conference on Advances in Computer Engineering and Applications, Foundation of Computer Science (FCS), pp. 7-11.
Kaushik, A. (2010), Web Analytics 2.0: the Art of Online Accountability & Science of Customer Centricity, Wiley Publishing, Indianapolis, Indiana.
Keselman, A., Arnott Smith, C., Murcko, A.C. and Kaufman, D.R. (2019), “Evaluating the quality of health information in a changing digital ecosystem”, Journal of Medical Internet Research, Vol. 21 No. 2, e11129.
Król, K. and Zdonek, D. (2020), “Aggregated indices in website quality assessment”, Future Internet, Vol. 12 No. 4, p. 72.
Krug, S. (2014), Don't Make Me Think, Revisited: A Common Sense Approach to Web and Mobile Usability, 3rd ed., New Riders, Pearson Education, Berkeley, CA.
Kurosu, M. (Ed.) (2015), Human-Computer Interaction: Design and Evaluation, Springer International Publishing, Cham.
Lavrakas, P.J. (2008), Encyclopedia of Survey Research Methods, 5th ed., Sage Publications, Thousand Oaks, CA.
Law, R. (2019), “Evaluation of hotel websites: progress and future developments”, International Journal of Hospitality Management, Vol. 76, pp. 2-9.
Law, R., Qi, S. and Buhalis, D. (2010), “Progress in tourism management: a review of website evaluation in tourism research”, Tourism Management, Vol. 31 No. 3, pp. 297-313.
Leavitt, M.O. and Shneiderman, B. (2006), Research-based Web Design & Usability Guidelines, 2nd ed., U.S. Department of Health & Human Services, Washington, DC.
Lee-Geiller, S. and Lee, T.D. (2019), “Using government websites to enhance democratic E-governance: a conceptual model for evaluation”, Government Information Quarterly, Vol. 36 No. 2, pp. 208-225.
Leung, D., Law, R. and Lee, H.A. (2016), “A modified model for hotel website functionality evaluation”, Journal of Travel and Tourism Marketing, Vol. 33 No. 9, pp. 1268-1285.
Lopezosa, C., Codina, L. and Gonzalo-Penela, C. (2019), “Off-page SEO and link building: general strategies and authority transfer in the digital news media”, Profesional de la Información, Vol. 28 No. 1, pp. 1-12, e280107.
Maia, C.L.B. and Furtado, E.S. (2016), “A systematic review about user experience evaluation”, in Marcus, A. (Ed.), Design, User Experience, and Usability: Design Thinking and Methods, Springer International Publishing, Cham, pp. 445-455.
Manchaiah, V., Dockens, A.L., Flagge, A., Bellon-Harn, M., Azios, J.H., Kelly-Campbell, R.J. and Andersson, G. (2019), "Quality and readability of English-language internet information for tinnitus", Journal of the American Academy of Audiology, Vol. 30 No. 1, pp. 31-40.
Martín-Martín, A., Orduña-Malea, E., Thelwall, M. and Delgado López-Cózar, E. (2018), “Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories”, Journal of Informetrics, Vol. 12 No. 4, pp. 1160-1177.
Morales-Vargas, A., Pedraza-Jiménez, R. and Codina, L. (2020), “Website quality: an analysis of scientific production”, Profesional de la Información, Vol. 29 No. 5, p. e290508.
Nielsen, J. (2020), “10 usability heuristics for user interface design”, Nielsen Norman Group, available at: https://www.nngroup.com/articles/ux-research-cheat-sheet/ (accessed 3 May 2021).
Nikolić, N., Grljević, O. and Kovačević, A. (2020), “Aspect-based sentiment analysis of reviews in the domain of higher education”, Electronic Library, Vol. 38, pp. 44-64.
Orduña-Malea, E. and Aguillo, I.F. (2014), Cibermetría: Midiendo El Espacio Red, UOC, Barcelona.
Paz, F., Paz, F.A., Villanueva, D. and Pow-Sang, J.A. (2015), “Heuristic evaluation as a complement to usability testing: a case study in web domain”, 2015 12th International Conference on Information Technology - New Generations, IEEE, pp. 546-551.
Quiñones, D. and Rusu, C. (2017), “How to develop usability heuristics: a systematic literature review”, Computer Standards and Interfaces, Vol. 53, pp. 89-122.
Rekik, R., Kallel, I. and Alimi, A.M. (2015), “Quality evaluation of web sites: a comparative study of some multiple criteria decision making methods”, Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference. International Conference on Intelligent Systems Design and Applications, pp. 585-590.
Rekik, R., Kallel, I., Casillas, J. and Alimi, A.M. (2018), “Assessing web sites quality: a systematic literature review by text and association rules mining”, International Journal of Information Management, Vol. 38 No. 1, pp. 201-216.
Rocha, Á. (2012), “Framework for a global quality evaluation of a website”, Online Information Review, Vol. 36 No. 3, pp. 374-382.
Rosala, M. and Krause, R. (2020), User Experience Careers: What a Career in UX Looks Like Today, Nielsen Norman Group, Fremont, CA.
Sanabre, C., Pedraza-Jiménez, R. and Vinyals-Mirabent, S. (2020), “Double-entry analysis system (DEAS) for comprehensive quality evaluation of websites: case study in the tourism sector”, Profesional de la Información, Vol. 29 No. 4, pp. 1-17, e290432.
Sauro, J. and Lewis, J.R. (2016), Quantifying the User Experience: Practical Statistics for User Research, 2nd ed., Elsevier/Morgan Kaufmann, Waltham, MA.
Semerádová, T. and Weinlich, P. (2020), “Looking for the definition of website quality”, in Semerádová, T. and Weinlich, P. (Eds), Website Quality and Shopping Behavior: Quantitative and Qualitative Evidence, SpringerBriefs in Business, Cham, pp. 5-27.
Shneiderman, B. (2016), “The eight golden rules of interface design”, Department of Computer Science, University of Maryland.
Shneiderman, B., Plaisant, C., Cohen, M.S., Jacobs, S., Elmqvist, N. and Diakopoulos, N. (2016), Designing the User Interface: Strategies for Effective Human-Computer Interaction, 6th ed., Pearson Higher Education, Essex.
Sun, Y., Zhang, Y., Gwizdka, J. and Trace, C.B. (2019), “Consumer evaluation of the quality of online health information: systematic literature review of relevant criteria and indicators”, Journal of Medical Internet Research, Vol. 21 No. 5, e12522.
Thielsch, M.T. and Hirschfeld, G. (2019), “Facets of website content”, Human–Computer Interaction, Vol. 34 No. 4, pp. 279-327.
Tognazzini, B. (2014), “First principles of interaction design (revised and expanded)”, askTog.
Toxboe, A. (2018), “Design patterns”, UI Patterns: User Interface Design Patterns Library, available at: http://ui-patterns.com/patterns (accessed 17 June 2018).
Tullis, T. and Albert, W. (2013), Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics, 2nd ed., Morgan Kaufmann, Waltham, Massachusetts.
W3C (2016), “Web design and applications standards”, World Wide Web Consortium.
Whitenton, K. (2021), “Triangulation: get better research results by using multiple UX methods”, Nielsen Norman Group, available at: https://www.nngroup.com/articles/triangulation-better-research-results-using-multiple-ux-methods/ (accessed 4 March 2021).
Xanthidis, D., Argyrides, P. and Nicholas, D. (2009), “Web Site Evaluation Index: a systematic methodology and a metric system for the assessment of the quality of web sites”, in Demiralp, M. (Ed.), Proceedings of the 8th WSEAS International Conference on Telecommunications and Informatics. Electrical and Computer Engineering Series, p. 194.
Yin, R.K. (2015), Qualitative Research from Start to Finish, 2nd ed., The Guilford Press, London.
About the authors
Alejandro Morales-Vargas. PhD in Communication from the Pompeu Fabra University (UPF). Assistant Lecturer at the Faculty of Communication and Image at the University of Chile. Research collaborator in the Digital Documentation and Interactive Communication Research Group (DigiDoc) at the UPF. Journalist and Graduate in Social Communication from the University of Chile; Master's in Digital Content Management from the University of Barcelona. He was the founder of the Diploma in Digital Journalism and Internet Media Management. He is currently head of Digital Media in the Information Services and Libraries Unit (SISIB) at the University of Chile.
Rafael Pedraza-Jimenez. PhD in Information and Documentation from the University of Barcelona. Serra Húnter Associate Professor in the Department of Communication at the Pompeu Fabra University, where he is also Secretary of the Faculty of Communication and research member of the DigiDoc Group. He teaches on the undergraduate program in Journalism Studies and on various Master's and postgraduate degree courses. He has published dozens of studies with different publishers and in leading international journals, his main lines of research being website quality, information behaviour, information retrieval, web semantics and metadata, and digital media.
Lluís Codina. PhD in Information Sciences from the Autonomous University of Barcelona. Professor of the Department of Communication at the Pompeu Fabra University. Coordinator of the Research Unit in Journalism and Digital Documentation and research member of the DigiDoc Group. Coordinator of the University Master's in Social Communication. Lecturer in the Faculty of Communication, on undergraduate degrees in Journalism and Audiovisual Communication. Teacher of the Online Masters in Digital Documentation and Search Engines at the Barcelona School of Management. Co-director of the Cybermedia Observatory. Co-PI of the project “Interactive Narration and Digital Visibility in Interactive Documentary and Structured Journalism”.