Designing business analytics (BA) platforms: tracing the visual redesign process of a startup’s BA platform

Anastasia Griva (Lero—The Science Foundation Ireland Research Centre for Software, J.E. Cairnes School of Business and Economics, University of Galway, Galway, Ireland)
Angeliki Karagiannaki (Department of Management Science and Technology, ELTRUN, The E-Business Research Center, Athens University of Economics and Business, Athens, Greece)

Benchmarking: An International Journal

ISSN: 1463-5771

Article publication date: 23 August 2024


Abstract

Purpose

Designing effective business analytics (BA) platforms that visualise data, provide deep insights and support data-driven decision-making is a challenging task. Understanding the elements shaping BA platform design is crucial for success. The purpose of this study is to explore the impact of visualisation on usability (UI) and user experience (UX) while emphasising the importance of insights understanding in BA platform design.

Design/methodology/approach

This paper presents a case study following a startup’s journey as it undergoes two redesign phases for its BA platform. A combination of quantitative and qualitative methods is used to assess UX/UI and insights understanding of the platform. Indicatively this included semi-structured interviews, observations, think-aloud techniques and surveys to monitor runtime per task, number of errors, users’ emotions and users’ understanding.

Findings

Our findings suggest that modifications in aesthetics and information visualisation positively influence overall usability, UX, and understanding of platform insights – a critical aspect for the success of the startup.

Research limitations/implications

Our goal is not to make a methodological contribution, but to illustrate how companies, constrained by time and pressure, navigate platform changes without meticulous design and provide learnings on important elements while designing BA platforms.

Practical implications

This paper concludes with suggested methods for assessing BA platforms and recommends practices to follow. These include recommendations on elements important for BA platform users, such as navigation and interactivity, user control and personalisation, visual consistency and effective visualisation.

Originality/value

This study contributes to practice as it presents a real-life case and offers valuable insights for practitioners.

Citation

Griva, A. and Karagiannaki, A. (2024), "Designing business analytics (BA) platforms: tracing the visual redesign process of a startup’s BA platform", Benchmarking: An International Journal, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/BIJ-06-2024-0517

Publisher: Emerald Publishing Limited

Copyright © 2024, Anastasia Griva and Angeliki Karagiannaki

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Business analytics (BA) has appeared as a key subject of study for both practitioners and researchers, revealing the importance of data-related problems to be addressed in incumbent firms (Delen and Zolbanin, 2018). Due to the vast amount of data gathered, plentiful BA platforms have been developed to analyse these datasets, providing in-depth insights and supporting data-driven decision-making (Griva et al., 2018; Li et al., 2022; Raman et al., 2019; Raman et al., 2024; Raza et al., 2023; Tipu and Fantazy, 2023).

When designing BA platforms, a significant challenge lies in effectively visualising these insights to make them accessible to people, while ensuring a seamless platform experience (Franconeri et al., 2021; Vázquez-Ingelmo et al., 2024). Regarding visualisation, the challenge is to convey information in a way that resonates with diverse end-users, taking into account their different backgrounds and knowledge (Vázquez-Ingelmo et al., 2024), and in a way that facilitates comparisons and benchmarking, allowing businesses to identify areas for improvement and drive enhancements (Agrahari and Srivastava, 2019). To address this challenge, academic effort has been structured mainly around “visual aesthetics” and “information visualisation”, which deal with efficient data representation (Banissi et al., 2014; Luo, 2019; Sorapure, 2019). However, apart from the recent study of Vázquez-Ingelmo et al. (2024), little research has evaluated these elements within BA platforms.

Regarding the need for a seamless platform experience, there is a growing body of literature proposing models, principles and rules to evaluate usability and user experience in platforms (e.g., Mazumder and Das, 2014) or applying them (e.g., Lavie and Tractinsky, 2004; Mazumder and Das, 2014). However, only a small subset of this literature focuses on evaluating these elements specifically in BA platforms (e.g., Pohl et al., 2012).

The goal of this study is to examine the impact of visualisation on the usability (UI), user experience (UX) and understanding of insights for a BA platform. To address this objective, empirical evidence is provided by observing the evolution of a retail analytics platform in two phases, reflecting the journey of an early-stage startup.

Early-stage start-ups are often constrained by time and resources, which limits their ability to meticulously design products to enter the market (Karpinskaia, 2023). Instead, they often rely on trial and error, employing quick and pragmatic solutions (Zamani et al., 2022). They translate these failures into growth strategies (Corvello et al., 2024), sometimes without adhering to academic rigour. Nevertheless, these practices can yield valuable insights for practitioners, aligning with or enriching academic knowledge. In this vein, our results indicated that improvements in visualisation can positively impact the usability and the UX of a BA platform, and can also significantly affect insights understanding. In this context, it is essential to emphasise that our objective is not to present a methodological contribution, acknowledging that the steps undertaken by the startup in redesigning their BA platform may not adhere to best practices. Rather, our aim is to provide practitioners (e.g., startup founders) with practical knowledge on what they should test and how to do so, and on which elements they should pay attention to when designing their BA platforms.

The remainder of the paper is organised as follows. Section 2 offers a justification for the relevance of this work by discussing visualisation and its importance for insights understanding. This section also examines common evaluation methods for usability and user experience based on existing literature. Section 3 describes the empirical context. Section 4 presents the findings derived from the visual alterations. Section 5 provides a discussion, practical learnings and some limitations and implications for future research, while Section 6 concludes the paper.

2. Background

Data visualisation can significantly influence usability, user experience and insights understanding, thereby impacting the overall acceptance of a platform (Thüring and Mahlke, 2007; Tuch et al., 2012; Vázquez-Ingelmo et al., 2024). This section introduces visualisation and its significance, followed by an exploration of insights understanding and an overview of how the literature evaluates this aspect. Subsequently, we delve into usability and UX elements, presenting commonly utilised evaluation methods.

2.1 Visualisation

Information visualisation (infovis) is “the use of computer supported, interactive, visual representations of abstract data to amplify cognition” (Card et al., 1999). Infovis was traditionally designed for expert users (Sorapure, 2019). However, BA platforms and dashboards may have both experienced and inexperienced users; thus, this is an important challenge. Various researchers highlight the need to make infovis accessible to non-expert audiences, coining terms such as “communication-minded visualisation” (Viégas and Wattenberg, 2006), “utilitarian visualisation” (Sorapure, 2019) and “casual visualisation” (Pousman et al., 2007). All in all, infovis is an important element in dashboard design (Franconeri et al., 2021; Vázquez-Ingelmo et al., 2024), since wrong visuals can lead to biases and poor decision-making.

Infovis combines three aspects: visual features (e.g., position, colour and size), textual elements (e.g., labels, instructions and descriptions) and interactive options (e.g., search, filtering and zoom) (Sorapure, 2019). When designing a dashboard, all these aesthetic factors matter: the colours, the position and size of the text, and the order and type of graphs can all affect users (Franconeri et al., 2021). Aesthetics is a subjective and multidimensional element that may be interpreted variably depending on an individual’s perceptions, background, culture, etc. (Miniukovich and De Angeli, 2015; Vázquez-Ingelmo et al., 2024).

2.1.1 Visualisation and insights understanding

When interacting with a BA dashboard, users should be able to fully focus on their tasks without being distracted by overly technical or complex user interfaces (Pohl et al., 2012). In this scenario, aesthetics play an important role in assisting users to understand the visualised information, spot patterns and extract insights (Sorapure, 2019). Effective data visualisation portrays the story behind the visuals (Mei et al., 2020), inspires the end-user and helps them understand complex datasets and structures (Basole et al., 2019). Researchers suggest that creativity and innovation in visualisation are an important prerequisite for understanding the visualised information and generating novel insights (Adagha et al., 2017; Chouki et al., 2023). Infovis plays a crucial role in turning raw information into actionable insights. Preventing errors and ensuring accurate comprehension of insights from the visuals are crucial for decision-making quality in BA platforms (Adagha et al., 2017; Li et al., 2022).

In the literature, various principles are employed to assess visualisation and insight understanding. For instance, some researchers (e.g., Bhandari et al., 2019; Lavie and Tractinsky, 2004) distinguish visual factors into expressive and classical design factors. The former includes incorporating elements such as creativity, special effects, etc., in the design (Bhandari et al., 2019). The latter includes guidelines related to clean, clear and symmetrical design, such as appropriately organising information (e.g., using symmetries, grouping elements and applying prototypicality), considering the volume of information (e.g., colour variability, visual clutter), and ensuring information discriminability (e.g. edge congestion, figure-ground contrast) (Miniukovich and De Angeli, 2015).

From another viewpoint, practitioners integrate cognitive psychology principles in visual design such as clarity in design, manipulation of size, utilisation of white spaces, selection of appropriate colours, avoidance of visual noise and connections between objects (Miniukovich and De Angeli, 2015). Conversely, a plethora of colours, information overload, and intricate shapes may hinder insights understanding (Pohl et al., 2016). Gestalt principles of visual perception, rooted in psychology, including characteristics such as similarity, continuation, proximity, closure, symmetry, etc., are also used to create visually appealing and easily understandable visuals (Olshannikova et al., 2015).

Some researchers claim that infovis and its aesthetics constitute a component of usability (e.g. Dix et al., 2004), while others argue that it is a component of UX (e.g. Taylor et al., 2011). Alternatively, others suggest that they should be studied separately (e.g., De Angeli, Sutcliffe and Hartmann, 2006). Within the context of this study, we examine infovis and aesthetics separately.

2.2 Usability (UI), user experience (UX) and their evaluation methods

2.2.1 Usability (UI)

UI seems to be the most critical factor in designing and building interactive products to ensure user satisfaction. Various researchers offer different definitions of UI. For instance, Nielsen (1994b) nests usability within systems acceptability, together with usefulness and compatibility. ISO 9241-11:2018 defines usability as the extent to which users accomplish their goals using the product with effectiveness, efficiency and satisfaction. Based on this definition, effectiveness, efficiency and satisfaction are considered common usability measures. Usability is not a one-dimensional concept and contains various attributes (e.g., learnability, memorability, efficiency, errors and user satisfaction) (Mazumder and Das, 2014; Nielsen, 1994a). The importance of these attributes varies depending on the type of application and user requirements.

Due to the multifaceted nature of usability, various Usability Evaluation Methods (UEMs), or usability inspection methods, have been developed. UEMs are categorised as either “empirical” or “analytic” (Gray and Salzman, 1998), and they may utilise both qualitative and quantitative evaluation criteria. Usability testing is one of the most widely used empirical evaluation methods for detecting difficulties and errors encountered by users when using a service. This method typically involves examining variables such as errors, user satisfaction scores and time spent on tasks (Hartson et al., 2003). Analytic UEMs include techniques such as heuristic evaluation and walkthroughs, which aim to simulate a user’s problem-solving process, follow a scenario and/or perform inspections (Gray and Salzman, 1998; Nielsen, 1994c).

Heuristic evaluation has been widely used (e.g., Mazumder and Das, 2014), due to its ease of use and the fact that it is not time-consuming (Dix et al., 2004). This method typically involves three to five evaluators who perform task-based evaluation based on platforms’ compliance with usability principles (Nielsen, 1994c). The most well-known heuristic evaluation method is Nielsen’s ten heuristic rules (Nielsen, 1994a) which include criteria such as (1) visibility of system status; (2) match between system and the real world; (3) user control and freedom; (4) consistency and standards; (5) help users recognise, diagnose and recover from errors; (6) error prevention; (7) recognition rather than recall; (8) flexibility and efficiency of use; (9) aesthetic and minimalist design; and (10) help and documentation.
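As a rough illustration of how such a heuristic survey can be operationalised (a sketch under our own assumptions, not an instrument from the cited works), the ten rules can be encoded as Likert items pairing a compliance rating with a significance weight:

```python
# Sketch of a heuristic-evaluation survey structure (illustrative assumptions).
# Each item pairs one of Nielsen's ten rules with a 1-5 Likert compliance
# rating "r" and a 1-5 significance weight "w" assigned by the evaluator.

NIELSEN_RULES = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Help users recognise, diagnose and recover from errors",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help and documentation",
]

def survey_response(ratings, weights):
    """Pair each rule with its compliance rating and significance weight."""
    assert len(ratings) == len(weights) == len(NIELSEN_RULES)
    return [{"rule": rule, "r": r, "w": w}
            for rule, r, w in zip(NIELSEN_RULES, ratings, weights)]
```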

2.2.2 User experience (UX)

UI alone cannot guarantee the acceptance of a system, as the overall user experience makes a product stand out (Thüring and Mahlke, 2007). Designing user-friendly systems should focus on creating positive experiences rather than solely preventing usability problems. There is often confusion between usability and UX among researchers, and various UX definitions exist. According to ISO 9241-11:2018, UX is perceived as a consequence of a user’s prior experiences, attitudes, skills, habits and personality, and it is defined as “user’s perceptions and responses that result from the use and/or anticipated use of a system, product or service”. Taylor et al. (2011) define UX as a function of three variables, i.e. a user’s internal state (e.g., expectations, mood and motivation), the characteristics of the designed system (e.g., complexity, usability and functionality) and the environment within which the interaction occurs. Overall, UX design should address users’ goals, fulfil their needs and prevent negative emotions.

One key difference between usability and UX lies in the inability of the latter to be assessed based on objective criteria (e.g. number of errors, time to complete a task) (Saket et al., 2016). On the contrary, UX is more intricate, as it is associated with psychological and physiological concepts. It is measured by examining the emotions and feelings caused by using the system, and it is highly influenced by users’ expectations and motivation (Law et al., 2014).

There are many quantitative and qualitative UX evaluation methods. Research acknowledges that, to date, the most common UX evaluation methods are mostly qualitative; these include lab tests and field studies (Law et al., 2014). During lab tests, researchers collect experiential insights by observing users’ expressions and emotions. Understanding users’ emotions is essential in UX, and this can also be assisted by innovative techniques like design thinking (Chouki et al., 2023). Conversely, in field studies, researchers observe and interview participants in real-life contexts to examine their expressions. In qualitative UX evaluation, researchers often employ exploratory user research techniques, such as ethnography. Additionally, they usually use audio or video recording (Dix et al., 2004) to track users’ expressions, and/or employ eye-tracking, prototyping tools and visual design to evaluate UX (Kashfi et al., 2019). Similarly, surveys are sometimes utilised as a means to receive feedback from end-users (Adagha et al., 2017). In these cases, constructs such as flow, aesthetic beauty, enjoyment and attractiveness are measured (Law et al., 2014).

A widely utilised method for assessing UX, UI and insights understanding is laboratory evaluation. This approach involves evaluating these elements “in a controlled environment where the evaluator monitors the use of a system, observes users’ actions and reactions, and assesses users’ feelings about the quality of the interaction” (Lallemand and Koenig, 2017). Laboratory evaluation can involve a combination of methods such as usability testing, heuristic evaluations, task scenarios, observation, think-aloud protocols to capture users’ immediate experience, questionnaires to offer a consistent quantitative measurement, semi-structured interviews, etc. (Alves et al., 2014). These methods are applied in the examined case.

3. Empirical setting: a startup company offering a BA platform

This study adopts a case study approach to provide insights into the design and evaluation of BA platforms. Given the practice-based issue we face, where the experiences of the actors and the context of action are critical, feedback was gathered from the platform’s actual end-users and the company’s stakeholders (Benbasat et al., 1987). This access to real-life contexts enriches the overall research process and underscores the importance of studying this empirical and contemporary phenomenon in depth (Yin, 2009). Adopting this method is valuable for understanding new phenomena that lack empirical substantiation (Eisenhardt and Graebner, 2007), as is the case with our startup, and for identifying how certain conditions change over time (Yin, 2009). Engaging with those “living the case” (Stake, 1995), the startup aimed to improve its BA platform through two redesign phases, guided by both unstructured and structured approaches. Feedback was gathered across three different versions from founders, test users and actual customers, supporting data triangulation. By involving individuals who represent the potential actual users, we aimed to simulate real-world usage scenarios and identify any areas for improvement. To triangulate and corroborate our insights, we used semi-structured interviews (15–30 min), observations and surveys, supported by software to capture users’ emotions, and employed think-aloud techniques and note taking to better understand users while performing the task-based evaluation.

In more detail, Figure 1 presents the methods used in each redesign phase, distinguishing between on-task and post-task evaluations in a lab setting. The variation in on-task evaluation methods across different phases is due to coronavirus disease 2019 (COVID-19) restrictions (see section 3.3.3 for details). Additionally, Table 1 provides a detailed overview of the evaluation methods employed for each version of the platform.

3.1 Case study description

“ShopSights” (the name is fictitious to preserve anonymity) focuses on Big Data Analytics in the retail industry. The company has developed a “plug and play” [1] BA platform for analysing transactional and loyalty data within the grocery retail sector. The solution integrates with retailers’ data marts, aiming to address predefined queries and visualise insights related to shopping behaviours to support demand and supply chain decisions. Its goal is to use advanced data analytics to provide retailers with valuable insights into shopper behaviour in physical and digital stores from point-of-sales (POS) data. In more detail, the company has developed three modules:

(A) “Shopping segmentation”: Identifying the shopping missions prompting customers to visit the store, such as purchasing products for breakfast, a party, pastry making, sushi preparation, etc. By identifying the shopping missions, the retailer can design bundle promotions (e.g., meal-deal-like offerings), adjust orders, determine secondary in-store placements, or even redesign the online or physical store layout to accommodate customer needs.

(B) “Customer segmentation”: Segmenting shoppers by behaviour involves analysing purchasing history and creating segments based on shopping missions. This helps identify patterns in store visits and highlight potential selling gaps. For example, if customers primarily visit for food but not for non-food items, this indicates a selling gap to address.

Apart from these advanced analytics results, for each shopping and customer segment, the end-users have access to descriptive statistics including peak days/hours, high-selling products, store Key Performance Indicators (KPIs), customer demographics, etc.

(C) “Product categories”: Users can obtain information on specific product categories and brands related to shopping missions and preferred shopper segments. They can monitor which missions involve these products/brands and their associations with other products, and identify shopper segments and characteristics (e.g., demographics and preferences) for targeted marketing.
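For intuition only, the sketch below illustrates how mission-style basket segmentation might work in principle. It is a generic clustering illustration under our own assumptions (baskets encoded as product-category vectors, grouped with k-means); ShopSights’ actual models and features are not disclosed in the paper.

```python
# Generic illustration of shopping-mission discovery as basket clustering.
# Assumption for illustration only: each POS basket is a vector of item
# counts per product category, and k-means groups similar baskets into
# candidate "missions". This is not ShopSights' actual (undisclosed) method.

import numpy as np
from sklearn.cluster import KMeans

CATEGORIES = ["bread", "milk", "eggs", "snacks", "beer", "rice", "fish"]

baskets = np.array([
    [1, 1, 1, 0, 0, 0, 0],   # breakfast-like basket
    [1, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 3, 2, 0, 0],   # party-like basket
    [0, 0, 0, 2, 3, 0, 0],
    [0, 0, 0, 0, 0, 1, 2],   # sushi-like basket
    [0, 0, 1, 0, 0, 1, 1],
])

missions = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(baskets)
print(missions)  # one candidate shopping-mission label per basket
```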

The BA platform uses retailer data and, with retailer agreements, ShopSights shares insights with other parties, such as suppliers, about products and brands. ShopSights’ clients are mainly grocery retailers or suppliers, and end-users typically have business and marketing backgrounds with limited BA knowledge. ShopSights developed a proof-of-concept (PoC) for its services, but the founders found that the outdated design negatively affected the visualisation, understanding and acceptance of the insights.

Like many start-ups struggling with time constraints, ShopSights faced challenges in swiftly improving and launching its platform. Its goal was to perform some quick modifications to the aesthetics and infovis elements to improve UI, UX and insights understanding. This happened in two phases until the platform was finalised. Throughout our interaction, we guided the startup on evaluating UI, UX and insights understanding.

3.2 Two iterations of redesigning the BA platform

The transition from the initial version of the user interface in the BA platform (Version A) to the final version (Version C) happened through a two-phase redesign process. Each phase involved design iterations, feedback gathering and user testing to ensure that the final version (Version C) addressed the inefficiencies of the initial design. Below we present the changes made in each redesign phase, while in the next section we present in detail the evaluation approach and methods used.

Phase A: Phase A involved the implementation of infovis and aesthetic changes, guided by feedback from actual customers, the startup founders’ own assessment and the evaluation described below. During this design process, specific visual elements of ShopSights were altered while retaining the same information. In Version B, information was grouped into tabs to reduce visual noise; new visuals/charts were introduced to represent data; and the colours, shapes and layout of tables and diagrams were changed. Figure 2 illustrates an example showcasing a comparison between the old and new interface screenshots.

Phase B: After receiving feedback on Version B during Phase A, this phase focused on refining some infovis and aesthetic elements again, leading to a new version of the BA platform (Version C). The changes made cover various aspects, such as converting text and tables into graphical representations, creating interactive data visualisations, altering charts, adding symbols, changing the colour palette, reducing unnecessary colours, creating symmetries, using more creative diagrams for data mining results and grouping visualised information differently. Figures 3 and 4 provide a comparison of a screenshot from Version B and Version C, both containing the same amount of information.

3.3 Evaluation approach

3.3.1 Test users’ profiles

To evaluate the changes during Phase A, we recruited 10 users who were young entrepreneurs and members of the incubation centre where ShopSights was incubated. The participants had diverse backgrounds and their expertise spanned various domains (see Table 1 for details). During Phase B, we recruited 20 domain experts who served as executives for a grocery supplier (a ShopSights client). These individuals were intended to be the actual end-users of the platform, utilising its content to aid decision-making. The recruited participants exhibited diverse backgrounds and held various roles, from marketing managers to business analysts. Table 1 summarises the profiles of the test users who evaluated the BA platform.

3.3.2 Interaction with the BA platform

During both phases, the test users were asked to perform specific tasks. Moreover, to support a realistic evaluation, we invited participants to browse the system for some time prior to starting the task execution. The tasks included the following:

(A) Finding the shopping mission that has the highest average basket value in store “X”.

(B) Finding the top-selling product in this shopping mission and the highest-selling day.

(C) Finding the top-selling product of brand “Y” in store “Z”.

(D) Finding the customer segment that includes the most shoppers.

Also, after task execution, users kept browsing the platform to share any additional thoughts and to answer some specific questions assessing whether or not they understood the insights, as explained below.

3.3.3 Evaluation methods

During users’ interaction with the three versions of the platform, we conducted the actual evaluation and collected the data. For this purpose, a combination of quantitative and qualitative techniques was used. The methods used to evaluate UI, UX and insights understanding in all three versions of the platform are detailed below (and in Table 1).

UI: Participants assessed usability elements through a combination of quantitative and qualitative methods conducted in a laboratory setting. Quantitatively, users completed a heuristic evaluation survey (as detailed in Table A1 in the Appendix) to gauge the system’s usability. This survey utilised Likert scale questions (ranging from 1 for “totally disagree” to 5 for “absolutely agree”) aligned with Nielsen’s principles (see section 2.2.1). Users also rated the importance of each factor for the BA platform using the Likert scale. Task run times and the frequency of errors were recorded for each participant. Qualitatively, we employed observation techniques and debriefing semi-structured interviews after the interaction with the platform to understand users’ sources of confusion and their overall experience. On-task evaluation also included think-aloud techniques.
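To make the quantitative part of this protocol more concrete, the sketch below shows one way such session data could be logged and summarised. The structure and names are our own illustrative assumptions, not the study’s actual instrument.

```python
# Illustrative logger for the quantitative usability measures described
# above: run time and number of errors per participant and task.

from statistics import mean

TASKS = ["A", "B", "C", "D"]  # the four tasks of section 3.3.2

class TaskLog:
    def __init__(self):
        self.records = []  # one dict per (participant, task) observation

    def record(self, participant, task, seconds, errors):
        self.records.append({"participant": participant, "task": task,
                             "seconds": seconds, "errors": errors})

    def task_averages(self, task):
        """Average run time (s) and error count across participants for a task."""
        rows = [r for r in self.records if r["task"] == task]
        return mean(r["seconds"] for r in rows), mean(r["errors"] for r in rows)

log = TaskLog()
log.record("P01", "A", 95.0, 2)   # hypothetical observations
log.record("P02", "A", 142.0, 3)
print(log.task_averages("A"))     # (118.5, 2.5)
```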

UX: Perceived UX was assessed by observing and recording users’ expressions and comments during task execution. Additionally, specialised software was utilised to identify emotions through more pronounced facial expressions, such as anger, sadness, neutrality, surprise and happiness. The on-task evaluation also included think-aloud techniques. To gain further insights into UX, participants were prompted during semi-structured interviews at the conclusion of the evaluation. They were asked to describe their experience with the platform using a single word and to justify their choice. This additional step aimed to extract further insights into users’ subjective experiences and perceptions of the platform’s usability and overall effectiveness.

Insights understanding: After completing the tasks, as part of the semi-structured interviews, participants were questioned about their comprehension of the analysis results. Some sample questions aimed at assessing insight understanding included explaining the content of certain screens, interpreting specific visualisations, explaining how these insights contribute to decision-making, etc.

In summary, it is important to highlight that during Phase B, adjustments were made to the evaluation methods due to COVID-19 restrictions. Firstly, users measured their own task runtimes, and the same happened for the number of errors. Secondly, the observational component, which included thinking out loud, was replaced with an instruction for users to take notes on their observations. Furthermore, emotions were not monitored by software; instead, users were asked to note how they were feeling and to elaborate on it during the interviews.

4. Evidence from the case study

In this section, we present the impact of the alterations related to aesthetics and visualisation on usability, UX and insights understanding during Phases A and B.

4.1 UI evaluation

In the usability evaluation, we employed a heuristic evaluation survey, recorded task execution metrics (run time and number of errors) and complemented these with brief interviews. Table 2 provides a summary of the heuristic evaluation survey results for Phases A and B, helping identify the more usable platform version.

The variable “r” represents the average compliance degree of the platform with each rule, while variable “w” represents the average significance level. For example, during Phase A, participants who evaluated Version B answered that they “absolutely agree” (r = 5) with Nielsen rule 4, and at the same time declared that this aspect is “really significant” (w = 5). The degree of compliance “e” for each rule is calculated by multiplying “r” and “w” (ei = ri × wi). Similarly, the overall degree of compliance, i.e. the overall usability score of the system, is calculated as Σ(ei), where i indexes each rule. Hence, for example, “ΣeB” (Version B) equals 201.40, whereas “ΣeA” (Version A) equals 126.45. Since the overall degree of compliance of Version B is higher than that of the old Version A, it seems that the new design corresponds better to Nielsen’s rules and is, therefore, more usable.
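The computation can be illustrated with a short sketch (the per-rule ratings below are hypothetical, chosen only to mirror the direction of the reported totals; they are not the study’s raw survey data):

```python
# Worked sketch of the usability score described above: ei = ri * wi per
# Nielsen rule, and the overall score is the sum over the ten rules.
# All ratings below are hypothetical, not the study's raw survey data.

def usability_score(responses):
    """responses: one (r, w) pair per rule, each on a 1-5 Likert scale."""
    return sum(r * w for r, w in responses)

version_a = [(2.5, 4.8), (3.0, 4.5), (2.0, 4.9), (3.1, 5.0), (2.4, 4.2),
             (2.6, 4.6), (2.2, 4.7), (3.0, 4.0), (2.3, 4.9), (2.8, 3.5)]
version_b = [(4.2, 4.8), (4.0, 4.5), (4.1, 4.9), (5.0, 5.0), (3.8, 4.2),
             (4.0, 4.6), (4.3, 4.7), (3.9, 4.0), (4.4, 4.9), (3.6, 3.5)]

print(usability_score(version_a))  # lower total: weaker compliance (cf. SigmaEA)
print(usability_score(version_b))  # higher total: more usable version (cf. SigmaEB)
```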

In Phase B, expert users assessed both Version B and the newly designed Version C. A comparison between Phases A and B in Table 2 reveals that the non-domain-expert users evaluating Version B in Phase A provided higher scores than the domain experts who assessed the same Version B in Phase B. This discrepancy may be attributed to the fact that our Phase A test users were young entrepreneurs who might be more familiar with BA platforms. However, when comparing the usability scores of Version B and Version C, it is evident that the latter received a higher usability rating from domain experts. This suggests that the changes in aesthetics and visualisation positively impacted the overall usability of the platform.

Table 3 summarises the results for task run times in both phases. In each phase, the new version exhibited shorter execution times. For instance, the total average run time in Phase A for Version A was nearly 2.5 times longer than for Version B. Additionally, users evaluating Version A provided an average of 2.2 incorrect answers before reaching the correct one, while users assessing Version B averaged only 0.8 wrong answers. Qualitative insights were gathered through semi-structured interviews. Users evaluating Version A attributed the confusion to information overload on each screen. In contrast, during Phase A, users assessing Version B did not identify significant factors causing confusion.

In Phase B, due to COVID-19 restrictions, we conducted an asynchronous evaluation, since we could not visit the supplier. This approach presented challenges, particularly in tracking errors and execution times. Users were requested to monitor these aspects on their own and provide comments on task execution time and errors. According to users’ feedback, navigating and completing tasks in Version B posed difficulties, leading to multiple errors before finding the correct answer. Also, two of them mentioned that they thought they had found the correct answer for task D; however, after looking at the given notes, they realised that their answers were wrong. Two others mentioned that they quit task D. When asked about the reasons for the confusion, they mentioned that they found Version B boring and not attractive enough to spend more time on. One of them mentioned: “I was confused as the platform design was boring, it had many tables, and it didn’t attract me to find the answers quickly”. Some users reported encountering challenges stemming from information overload, while others highlighted spending considerable time navigating through various screens to locate the correct answer. A comparison between users in Phases A and B, both assessing Version B, suggests that the latter group may be more demanding, possibly due to their limited interaction with BA platforms. Despite Phase A users being young entrepreneurs rather than domain experts, they found certain aspects of the BA platform more accessible.

Most users who evaluated Version C during Phase B mentioned that they do not think they encountered many errors. This is highly subjective, though it is still an indication. One of them mentioned that they had issues with task D and spent more time on it. Also, the majority of users mentioned that they completed the tasks relatively easily without spending much time. One mentioned that “the time that I spent on the tasks is quite similar to the time that I would spend when navigating in our CRM platform to complete my daily tasks”. Another mentioned that they found the desired information relatively easily and was “satisfied because I can see all the available info gathered in one screen”. Regarding Version C, users could not spot anything specific that caused confusion. Some of them mentioned something abstract, such as “my background” or “my age”. The above are some indications that the changes performed to the BA platform’s aesthetics and data visualisation positively affected the usability of Version C.

4.2 UX evaluation

During Phase A, the users who evaluated Version A spent much time on navigation and seemed frustrated and stressed based on their expressions. Similarly, based on their comments, they found it difficult to understand which page contained the requested information. Some users spent a lot of time scrolling on the same page without being able to find the answer they were asked for. They also reported that the bright colours distracted their attention. It is remarkable that one user, while executing the task-based evaluation of Version A and thinking out loud, mentioned “this visual is crazy”, and another one said, “so much noise to find a simple answer”. When asked to describe the BA platform in a few words, the most popular answers were “chaotic”, “complicated” and “unpredictable”. Users who evaluated Version B, although they spent less time completing the tasks, encountered some issues as well. For example, several users tried to interact with the visuals, although these visuals were not interactive. Others tried to sort the information in the tables, but they did not have this capability. During the evaluation, they showed neither strong negative nor strong positive emotions. When asked to describe the platform, they used words such as “pleasurable”, “enjoyable” and “presentable”.

During Phase B, the domain experts who evaluated Version B reported that they were anxious while completing the given tasks. Also, although they did not report any strong feelings during the execution of tasks A–C, they noted that they were frustrated and nervous because they spent a lot of time searching and scrolling to complete task D. Indicatively, a user mentioned that “this” (meaning task D) “seems like a riddle”. Some of them described the platform as “ok”, “a platform like all the rest”, “decent”, “a nice one”, “indifferent”, “mad”, “confused”, “nice but confusing”, “many numbers, difficult to spot patterns” and “busy”. Users who evaluated Version C did not express any strong emotions; in their notes they reported that they were “calm”, “confident about their answers” and “relieved”. Moreover, users who evaluated Version C described it as “insightful”, “actionable”, “visually beautiful”, “seems you can find fast what you look for”, “I can easily understand the information”, “eye-catching”, “clear”, “well-put” and “modern”. Examining all the above based on users’ comments and emotions, we may say that Version C of the BA platform and the alterations in aesthetics and infovis improved the overall UX of the platform.

4.3 Insights understanding evaluation

During Phase A, the users who evaluated Version A claimed that it was difficult to understand the insights and the business value of the BA results. During an interview, a business analyst mentioned: “If I was the decision maker of this company and I had to make a decision based on this (Version A), they would probably fire me”. Moreover, almost all the users reported that, due to the information overload on each screen, they had difficulty understanding the insights. They also mentioned that understanding the insights was difficult due to the existence of multiple tables and the lack of visuals: “The lack of visuals made it difficult to spot the differences in the data and understand the insights displayed”. One of their main problems was the absence of symbols to immediately understand what information was displayed in each visual. When the users evaluating Version A were asked to describe the information displayed, they spent more time describing what they were viewing and made many assumptions. Some of them also waited for confirmation of their answers, which indicates uncertainty. Conversely, users assessing Version B offered more confident answers and provided more accurate descriptions of the analysis results. Nevertheless, their lack of expertise prevented them from fully comprehending all insights, as well as from translating them into actionable decisions. They appeared to understand some of the content presented and made assumptions about the remainder.

The domain experts who evaluated Version B (in Phase B) were able to understand most of the content and the insights displayed. The majority gave accurate descriptions. However, not all of them were able to deeply comprehend the insights. Some users appeared somewhat nervous, expressing frustration with the platform’s perceived difficulty in comprehending insights. One user remarked, “It is not my job to think that much when we invest in a platform. I am not going to guess; I want the platform to tell me what is displayed here”. Another user stated, “I am not a business analyst, and it is not my job to solve riddles. This platform is supposed to help me understand the insights without trying that much”. In a similar vein, another user commented, “Too much info but nicely viewed. I need to find a way of using it to exploit the insights”, while another agreed, stating, “It contains useful info, but it does not help me find the insights”. Notably, some users evaluating Version B acknowledged the absence of certain visuals but, being familiar with tools like Excel, expressed satisfaction with the presented tables and their ability to understand the insights.

On the other hand, the users who evaluated Version C gave correct answers about almost all the screens they were asked to explain. They seemed to understand almost every aspect of the platform, and some of them were happy, energetic and proud when describing the information displayed and the respective insights. Some of them not only fully comprehended the insights displayed but also linked these insights with the actions and decisions they could support. The business and strategy manager of the grocery supplier mentioned: “I am certain that the knowledge and insights derived from this platform as-is are attainable by most of the marketing staff”. Others mentioned that “the insights are intelligently visualised by the platform, and I don’t need to put a lot of effort”, and that “these visuals add value to my work and my daily tasks”. Of course, as expected, not everything was that clear for every user. For example, two people mentioned that they needed to put a lot of effort into understanding the insights and that more modifications were needed to connect the insights with actions. Also, another mentioned that they understood the insights but needed to “deep dive in the charts and spend time to find the value”. Overall, the visual alterations to the BA platform appear to assist users in better comprehending insights and, at times, translating them into actions.

5. Discussion

We are in the era of big data, and as such, BA platforms are important to assist companies in supporting demand and supply chain operations (Raza et al., 2023). However, if a BA platform is not user-friendly and does not provide actionable insights, it may fail to effectively support these operations, resulting in lower-quality decision-making (Li et al., 2022). Existing literature primarily focuses on different aspects of the design of analytics platforms, including UI, UX and visualisation aspects (González-Torres et al., 2013; Gutiérrez et al., 2020; Pohl et al., 2012).

Recognising the importance of UX/UI in platforms, existing literature includes several studies that focus on models, principles, and rules for evaluating usability and UX (e.g., Mazumder and Das, 2014), or on the application of these principles (e.g., Lavie and Tractinsky, 2004; Mazumder and Das, 2014). However, there is only a limited subset of literature specifically addressing the evaluation of these elements in BA platforms (e.g., Pohl et al., 2012).

Apart from the lack of UX/UI research on BA platforms, another gap relates to the visualisation and understanding of insights. Regarding visualisation, some studies examine visualisation elements (i.e. aesthetics and information visualisation) and study how to evaluate them (e.g., Banissi et al., 2014), or focus on the factors influencing visualisation and provide guidelines (e.g., Franconeri et al., 2021; Luo, 2019). There are also some studies that highlight the importance of insights understanding by end-users (e.g., Vázquez-Ingelmo et al., 2024); however, they do not provide evidence of, for example, its impact on BA platforms. At the same time, academia underscores the need to explore the interplay between visualisation, aesthetics and insights understanding (Chen, 2005; Vázquez-Ingelmo et al., 2024), particularly in the context of BA platforms, given their role in visualising diverse data. Despite this recognition, there is a notable scarcity of research addressing this relationship, highlighting a critical gap in the literature.

To address the above gaps and examine the impact of visualisation on UI and UX in BA platforms while emphasising the importance of insights understanding, this research provides empirical evidence by following the iterative process of a startup as it develops a retail BA platform. Specifically, the startup underwent two rounds of changes. As a result, we collaborated with the startup to conduct evaluations across three versions of the same BA platform. We used a combination of well-known evaluation methods such as heuristic evaluation surveys, computation of errors and runtimes, observations, think-aloud techniques, interviews and emotions recording.

5.1 Practical implications/learnings

Small ventures lack the time and resources to meticulously evaluate and change their BA platforms. To find guidelines on what to test and how, they must search multiple sources, delaying the launch of their product. This paper offers them practical knowledge on what they should test and how while designing their BA platforms. The background section overviews the evaluation methods they can use to assess the UX/UI and insights understanding of their BA platforms. The case described, together with Table 1, provides some more tangible recommendations on what to test and how, while the findings offer valuable insights for businesses undertaking similar redesign endeavours.

In more detail, our findings demonstrated that each phase of the redesigned platform interface performed better compared to its predecessor. Thus, alterations made to the aesthetics and information visualisation of this BA platform seem to positively impact the overall UI and UX of the platform, which is in line with what the literature suggests (Kashfi et al., 2019; Vázquez-Ingelmo et al., 2024). Additionally, our findings underscore the impact of these modifications on insights understanding among platform users, an element that is important in practice yet underexplored in research (Vázquez-Ingelmo et al., 2024). The per-phase evaluation process significantly improved our understanding of the impact of each change, making it a highly recommended practice. By evaluating changes in phases, potential issues can be identified early and necessary adjustments made before they escalate.

In our case, we identified that young entrepreneurs who were more familiar with BA platforms found it easier to use the BA platform. As such, it is advisable to assess the impact of modifications by engaging with test users and individuals from diverse backgrounds with different levels of familiarisation with BA platforms, as also suggested by others (e.g., Miniukovich and De Angeli, 2015; Vázquez-Ingelmo et al., 2024). This step should be taken before reaching out to customers, especially if direct evaluation is challenging.

To provide small-venture practitioners with some more tangible insights based on our case, we note the following:

First, we observed that each newly designed platform version was rated higher in terms of usability using Nielsen’s rules. Also, it seems that Nielsen rules such as “consistency and standards” (rule 4), “recognition rather than recall” (rule 7) and “user control and freedom” (rule 3) are important to consider when designing BA platforms. So, one quick key takeaway for practitioners is to pay attention to the colour uniformity of buttons, diagrams and images. This aligns with the literature suggesting that a plethora of colours, information overload and intricate shapes may hinder the usability of the platform (Pohl et al., 2016). In our case, these changes created better experiences for users, and their emotions seemed to be more positive. Overall, monitoring emotions can help evaluate the BA platform efficiently, a practice commonly suggested by psychologists and researchers (e.g., Thüring and Mahlke, 2007).

Second, practitioners should be aware that end-users struggle when they have to remember their previous actions while performing tasks. As such, they should prioritise providing clear navigation paths and minimising the need for users to recall past interactions. This finding is supported by existing literature; for instance, Nielsen (1994a) advocates for systems that do not force users to remember information from one section or screen to another, corresponding to the need for clear navigation paths and minimal memory load. Based on our findings, navigation and the ability to interact with the platform are important factors that practitioners should examine when designing BA platforms. Existing literature (Heer and Shneiderman, 2012; Luo, 2019) supports this, demonstrating that interactive dynamics in visual analysis significantly enhance user engagement and usability. Moreover, users should be able to easily find and access the features and information they need within the platform; for example, implementing a clear and logical menu structure, providing breadcrumbs for easy navigation back to previous pages and incorporating intuitive search functionality can all contribute to a smoother user experience. On top of this, we also suggest that interactive elements such as clickable charts, hovering over charts and drag-and-drop functionalities make the platform more engaging and user-friendly. For instance, users might appreciate the ability to interact directly with data visualisations to drill down into specific insights or adjust parameters dynamically.

In the same vein, incorporating features that offer users control and freedom within the platform can significantly enhance their experience. Users value being able to undo an action in case of an error, to navigate easily to the home screen from anywhere and to find the pages they are searching for quickly. Such elements can also reduce the time spent on each task. To assess this in our case, we monitored the runtime per task, and we recommend that companies measure the same as a crucial element in the redesign of a BA platform. Several studies highlight the significance of freedom and customisation in various contexts, such as health or educational platforms. For instance, Aljohani et al. (2019) demonstrate that personalised layouts, such as data filtering, draggable widgets and resizable elements, play a crucial role in enhancing UX by aligning with individual preferences. This principle is also highly relevant to BA platforms, which may have users with diverse backgrounds and varying levels of familiarity with analytics tools (Vázquez-Ingelmo et al., 2024). By allowing users to personalise their dashboards, choose their preferred data visualisation methods and adjust tool settings, these platforms accommodate a wide range of expertise levels and learning styles. In this regard, providing users with the freedom to personalise their dashboard layout is essential. Instead of a fixed arrangement of charts and graphs, BA platforms could allow users to drag and drop widgets, and to resize or hide elements according to their specific workflow needs and priorities. Additionally, offering customisable filtering and sorting options for data can further enhance UX. Although Version C of the platform did not fully integrate these features, they emerged as significant in the short interviews, underscoring the importance of such customisation for optimising user engagement and task efficiency.

According to our findings, the visual interventions and improvements aided users in better understanding the analysis results. They easily comprehended the insights and sometimes connected the visualised insights with actions and decisions. Several participants reported that “these visuals add value to my work and daily tasks”, aligning with the concept of “perceived usefulness” found in existing literature (Adagha et al., 2017). While we did not specifically measure perceived usefulness, it is crucial to consider when evaluating insight comprehension in BA platforms. Our research suggests that, to enhance perceived usefulness, it is essential to avoid bright and vibrant colours and to maintain visual consistency throughout the platform. This includes using consistent colour schemes, fonts and iconography across different visualisations and interface elements. For example, a standardised colour palette that avoids overwhelming or distracting colours can help maintain visual harmony and focus users’ attention on the most important information. In our case, the pale colour palette in Version C was reported to evoke more positive emotions in users. Finally, the inclusion of symbols on the charts and the creation of symmetries (see bottom left of Figure 4) are also important elements to consider. Gestalt principles of visual perception support this view, and other researchers agree that symmetry can produce visuals that are both aesthetically pleasing and easy to comprehend (Olshannikova et al., 2015).

Moreover, the conversion of text and tables into visuals, the separation of primary and secondary information, and the grouping of the visualised information are important factors that practitioners should scrutinise when designing a BA platform. For example, in Version B (see Figure 3), the tabular representation of the days and hours seemed boring to users, while the heatmap in Version C (see Figure 4) was perceived better, as users reported that they could easily spot patterns. Utilising heat maps with a consistent gradient is recommended by other researchers (Lobanova et al., 2024). They also suggest that assigned colours for trends can be applied across different visualisations, enabling users to quickly identify whether a trend is rapidly growing or well-established without needing to reference the matrix.
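As a toy illustration of this table-to-heatmap conversion (with fabricated data, not the platform’s actual chart), a day-by-hour sales matrix rendered with a single-hue gradient lets peak periods stand out at a glance:

```python
# Toy day-by-hour sales heatmap with a single-hue gradient, illustrating
# the table-to-heatmap conversion discussed above. All data is fabricated.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sales = rng.integers(10, 200, size=(7, 12))   # 7 days x 12 opening hours
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

fig, ax = plt.subplots()
im = ax.imshow(sales, cmap="Blues")           # one consistent colour gradient
ax.set_yticks(range(7))
ax.set_yticklabels(days)
ax.set_xticks(range(12))
ax.set_xticklabels([f"{h}:00" for h in range(9, 21)])
ax.set_xlabel("Hour of day")
fig.colorbar(im, label="Transactions")
plt.show()
```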

Contemporary graphics such as the packed bubble chart (see Figure 3) often pose challenges for certain user groups, particularly older individuals, who may find these visualisations less intuitive. In contrast, a simpler representation, such as a traditional bubble plot (see Figure 4), appears more accessible and straightforward to them. Feedback from test users highlighted a prevalent issue in design practices: the assumption that more elaborate or “fancier” visuals inherently lead to better UX. This assumption often leads companies to prioritise visually complex graphics under the belief that they enhance the appeal and effectiveness of data presentations. However, this belief can be counterproductive. Elaborate visualisations can sometimes hinder rather than help comprehension, particularly for users who may not be as familiar with or adept at interpreting complex visual formats. This aligns with existing research which suggests that overly complex graphics can create cognitive overload and confusion, especially for users who are not well-versed in data interpretation (Hollender et al., 2010).

On the other hand, the perspective held by the founders of ShopSights, that more “innovative or fancy visuals are beneficial for pitching and investor presentations”, reflects a different aspect of visual design. According to their viewpoint, sophisticated visuals can enhance the perceived value and attractiveness of their product during high-stakes presentations, even if they are not necessarily optimal for everyday user interaction. This dichotomy is partially supported by existing literature. For instance, Chouki et al. (2023) argue that innovative visualisations can indeed offer an enhanced UX by providing novel ways to engage with and interpret data. While more complex visuals might not always be practical for everyday use, they can still play a crucial role in specific contexts, such as marketing and investment scenarios, where demonstrating the potential of a product or idea in an impressive way is essential.

Overall, an important challenge that emerged from this example lies in balancing visual complexity with usability. Effective design should consider the context of use, the expertise level of the audience and the specific goals of the visualisation. Striking the right balance can ensure that visualisations are not only appealing but also functional and accessible to all users.

To help companies improve their performance using insights from the paper, Table 4 offers a structured overview of key areas to focus on and specific actions to take for benchmarking, evaluating and enhancing BA platforms. This table provides details on methods to assess BA platforms and outlines practical practices to follow. It serves as a practical guide for organisations to identify strengths and areas for improvement.

5.2 Limitations and future research

In this study, our primary aim was not to claim any methodological contribution or rigorously assess the visual changes made by the startup. Instead, we focused on tracing the startup’s visual redesign journey and aiding them in a quick and pragmatic evaluation, given their time constraints for a speedy platform launch. While acknowledging the potential for errors or limitations in our evaluation, our goal was to emphasise the significance of infovis and aesthetics in UI, UX and insights understanding. Future research could further delve into suggesting more rigorous visual changes and measuring their impact on UX/UI and insights.

Building on the above, we do not argue that visual changes alone are sufficient to ensure the acceptance of a BA platform; rather, we pinpoint the importance of visualisation in BA platforms. In addition, our goal is not to evaluate the visual changes made, and we do not claim to have created a visually optimal BA platform. This is why we do not present common aesthetic evaluation and information visualisation methods. Future research may, however, focus on deriving visual design principles for BA platforms and present methods for their evaluation.

Domain knowledge is considered one of the most critical factors in understanding insights and analysis results (Vázquez-Ingelmo et al., 2024). In our study, however, the users who evaluated Version B during Phase A, although not domain experts, rated the platform higher than the domain experts who evaluated the same Version B during Phase B, plausibly because of their prior experience as startuppers building BA platforms. We do not claim that this finding can be generalised; however, examining whether prior experience with similar BA platforms matters more than domain knowledge for the acceptance of a BA platform, or of systems in general, is a promising avenue for future research.

Another important aspect is that in the current study, the users interacted with the BA platform for a short period of time before evaluating it. However, examining the post-acceptance and post-adoption of such BA solutions is a fruitful area of research (Batziakoudi et al., 2020).

An additional limitation of this research is that due to COVID-19 restrictions, we were unable to visit the grocery supplier and observe users during the task-based evaluation. As a result, we had to extract their emotions based on their claims rather than using software, as was done in Phase A of this study. Additionally, some usability metrics (e.g., runtime and errors) were measured subjectively. Future research may address these limitations.

6. Conclusions

Our goal in this paper is three-fold. First, we underscore the pivotal role of aesthetics and information visualisation within BA platforms. Second, we advocate a holistic approach to evaluating BA platforms: rather than examining a single element such as UI or UX, evaluators should also pay attention to insights understanding. By considering insights understanding alongside traditional usability metrics, businesses can obtain a more nuanced picture of their platform’s effectiveness and user acceptance. Lastly, we provide learnings on what small companies should focus on when developing BA platforms. It is well known that start-ups often lack the luxury of time for extensive iterations and frequently opt for quick-and-dirty solutions to innovate, survive and go to market as fast as they can (Corvello et al., 2024; Karpinskaia, 2023; Zamani et al., 2022). Our case study illustrates a startup facing such time constraints while rapidly evolving its BA platform to meet market demands.

Figures

Figure 1. Overview of methods used in each redesign phase

Figure 2. A screenshot of Version A (left) and Version B (right) – Module B view

Figure 3. A screenshot of Version B – Module A view

Figure 4. A screenshot of Version C – Module A view

Heuristic evaluation results from Phase A (Version A vs Version B) and Phase B (Version B vs Version C). For each version, r is the mean user rating per Nielsen rule (1-5), w the corresponding weight (1-5) and e = r × w; Σe sums e over the ten rules

Phase A

Nielsen rule | rA | wA | eA | rB | wB | eB
1 | 3.00 | 4.81 | 14.43 | 4.33 | 4.86 | 21.04
2 | 2.87 | 4.63 | 13.29 | 4.17 | 4.66 | 19.43
3 | 3.17 | 4.91 | 15.56 | 4.22 | 4.89 | 20.64
4 | 3.25 | 5.00 | 16.25 | 5.00 | 5.00 | 25.00
5 | 2.75 | 4.86 | 13.37 | 4.50 | 4.82 | 21.69
6 | 1.50 | 4.66 | 6.99 | 2.67 | 4.64 | 12.39
7 | 1.75 | 4.10 | 7.18 | 4.33 | 4.00 | 17.32
8 | 2.25 | 4.66 | 10.49 | 4.33 | 4.64 | 20.09
9 | 3.25 | 5.00 | 16.25 | 5.00 | 5.00 | 25.00
10 | 2.75 | 4.60 | 12.65 | 4.00 | 4.70 | 18.80
Σe | | | 126.45 | | | 201.40

Phase B

Nielsen rule | rB | wB | eB | rC | wC | eC
1 | 3.39 | 4.93 | 16.73 | 4.38 | 4.90 | 21.46
2 | 3.81 | 4.42 | 16.86 | 4.56 | 4.79 | 21.84
3 | 3.28 | 4.96 | 16.27 | 4.01 | 5.00 | 20.05
4 | 4.00 | 4.96 | 19.84 | 4.92 | 5.00 | 24.60
5 | 3.19 | 4.81 | 15.36 | 4.54 | 4.93 | 22.38
6 | 3.85 | 4.79 | 18.44 | 3.88 | 4.89 | 18.97
7 | 3.69 | 4.45 | 16.42 | 4.43 | 4.12 | 18.25
8 | 3.54 | 4.78 | 16.92 | 4.21 | 4.73 | 19.91
9 | 4.00 | 4.96 | 19.84 | 4.92 | 5.00 | 24.60
10 | 3.38 | 4.55 | 15.40 | 4.69 | 4.64 | 21.76
Σe | | | 172.08 | | | 213.84

Source(s): Table created by authors
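
The version-level scores in the table above follow a simple rule that can be verified from the figures themselves: each e is the product of the rating r and the weight w, and Σe is the column sum. A minimal sketch in Python (the list values are the Phase A, Version A columns of the table; everything else is illustrative):

```python
# Reproducing the heuristic table's arithmetic: e = r * w per Nielsen rule,
# and a version's overall score is the sum of e over the ten rules.
# r and w below are the Phase A / Version A columns of the table above.
r_A = [3.00, 2.87, 3.17, 3.25, 2.75, 1.50, 1.75, 2.25, 3.25, 2.75]
w_A = [4.81, 4.63, 4.91, 5.00, 4.86, 4.66, 4.10, 4.66, 5.00, 4.60]

e_A = [round(r * w, 2) for r, w in zip(r_A, w_A)]
print(e_A)                 # per-rule scores, e.g. 14.43 for rule 1
print(round(sum(e_A), 2))  # ~126.45; tiny differences come from rounding r and w
```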

Run time for every task, per phase and platform version

Average run time in minutes

Task | Phase A: Platform A | Phase A: Platform B | Phase B: Platform B | Phase B: Platform C
A | 2.56 | 1.08 | 1.12 | 0.46
B | 1.28 | 0.45 | 0.53 | 0.34
C | 1.05 | 0.50 | 0.56 | 0.38
D | 3.45 | 1.26 | 1.33 | 0.58
Total | 8.34 | 3.29 | 3.54 | 1.61

Source(s): Table created by authors

Overview of recommendations for BA platform assessment and design

Area: Methods to assess UX/UI and insights understanding
Action: The following methods are recommended to assess each element:

  • UI evaluation: errors, user satisfaction score, time spent on tasks, heuristic evaluation, walkthroughs (with note-taking), scenario following, inspections. (Semi-)structured interviews and observations can also be helpful

  • UX evaluation: ethnographies, surveys, lab tests and field studies to monitor users' expressions and emotions. This can be facilitated via eye-tracking, prototyping tools and software. (Semi-)structured interviews can also be used

  • Insights understanding: (semi-)structured interviews based on scenarios/tasks

Area: Benchmark against previous versions and users
Action: Compare each redesign phase with its predecessor to measure improvements in UI, UX and insights understanding. Try to split the redesign phases into measurable steps. Find test users with diverse backgrounds

Area: Nielsen’s usability rules
Action: Focus on rules like “consistency and standards”, “recognition rather than recall” and “user control and freedom” to enhance platform design. Pay attention to the colour uniformity of buttons, diagrams and images

Area: Navigation and interactivity
Action: Simplify navigation, ensure clear navigation paths, reduce the need for users to recall past actions, and incorporate features like clickable charts, hover effects and drag-and-drop functionality

Area: User control and personalisation
Action: Provide user control and customisation options. Allow users to undo actions, navigate easily and personalise their dashboard layout with drag-and-drop widgets and customisable filters

Area: Visual consistency
Action: Maintain a consistent visual design. Use a standardised colour scheme, consistent fonts and iconography to avoid overwhelming users and to highlight important information

Area: Effective visualisation
Action: Convert text and tables into easy-to-understand visuals. Use simple and clear visualisations like heatmaps instead of complex graphics, and ensure the visual design is straightforward

Area: Intuitive design for non-expert users
Action: Ensure visualisations are intuitive by balancing visual appeal with ease of use. This helps avoid cognitive overload and enhances accessibility for all users, particularly those who are less familiar with BA platforms and may be less tech-savvy

Source(s): Table created by authors

Heuristic evaluation survey

Question | Nielsen rule
1. Words, descriptions and symbols were fully understood | 2
2. I could always go back or undo an action in case of an error | 3
3. The design was consistent: buttons, diagrams and images have schematic and colour uniformity | 4, 9
4. I was not confused and did not make any mistakes while performing the tasks | 6
5. I did not have to remember my previous actions while performing the tasks | 7
6. Being an inexperienced user, I found it easy to follow the steps I had to perform | 8
7. I could easily understand what the charts show | 2, 8
8. I could easily navigate to the home screen from anywhere | 3
9. I could correct any action taken by mistake quickly and easily | 5, 1
10. I could find every page I was requested quickly and easily | 3, 1
11. I was able to find more information about each platform element easily | 10

Source(s): Table created by authors

Notes

1. A non-customisable BA platform with predefined screens. Analytics, including model retraining and chart updates, occur automatically with new datasets (weekly in our case).

Appendix

Table 1. Test users’ profiles and platform evaluation methods overview

Users’ profiles

Phase A:

  • Users: 10 young entrepreneurs, members of an incubation centre

  • Mean age: 34

  • Gender: 6 males, 4 females

  • Roles: 3 business analysts, 2 marketing managers, 2 salespersons, 1 project manager, 1 software engineer, 1 business development manager

  • Industry expertise: Retail, e-commerce, gaming, advertising, banking, insurance

Phase B:

  • Users: 20 domain experts, executives of a grocery supplier (actual end-users of the platform)

  • Mean age: 41

  • Gender: 12 males, 8 females

  • Roles: 4 advertising managers, 3 marketing managers, 3 business analysts, 2 digital marketing managers, 2 account managers, 2 sales managers, 1 business and strategy manager, 1 merchandiser, 1 e-commerce manager, 1 consumer research manager

  • Industry expertise: Grocery retail

Evaluation methods

Phase A (Versions A and B):

  • UI: heuristic evaluation survey based on Nielsen’s principles; runtime per task (as measured by researchers); number of errors (as measured by researchers); observations; semi-structured interviews; think-aloud techniques

  • UX: users’ emotions (software-based); semi-structured interviews; think-aloud techniques

  • Insights understanding: semi-structured interviews

Phase B (Versions B and C):

  • UI: heuristic evaluation survey based on Nielsen’s principles; runtime per task (as measured by users)a; perception about errors (as declared by users)a; note-taking (from users)a; semi-structured interviews

  • UX: users’ emotions (as declared by users)a; note-taking (from users)a; semi-structured interviews

  • Insights understanding: semi-structured interviews

Note(s): aIn Phase B, due to COVID-19 restrictions, there were changes in data collection compared to Phase A

Source(s): Table created by authors

References

Adagha, O., Levy, R.M., Carpendale, S., Gates, C. and Lindquist, M. (2017), “Evaluation of a visual analytics decision support tool for wind farm placement planning in Alberta: findings from a focus group study”, Technological Forecasting and Social Change, Vol. 117, pp. 70-83, doi: 10.1016/j.techfore.2017.01.007.

Agrahari, A. and Srivastava, S.K. (2019), “A data visualization tool to benchmark government tendering process: insights from two public enterprises”, Benchmarking: An International Journal, Vol. 26 No. 3, pp. 836-853, doi: 10.1108/bij-06-2017-0148.

Aljohani, N.R., Daud, A., Abbasi, R.A., Alowibdi, J.S., Basheri, M. and Aslam, M.A. (2019), “An integrated framework for course adapted student learning analytics dashboard”, Computers in Human Behavior, Vol. 92, pp. 679-690, doi: 10.1016/j.chb.2018.03.035.

Alves, R., Valente, P. and Nunes, N.J. (2014), “The state of user experience evaluation practice”, in Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, pp. 93-102.

Banissi, E., Forsell, C., Marchese, F.T. and Johansson, J. (2014), Information Visualisation: Techniques, Usability and Evaluation, Cambridge Scholars Publishing, Newcastle upon Tyne.

Basole, R.C., Park, H. and Chao, R.O. (2019), “Visual analysis of venture similarity in entrepreneurial ecosystems”, IEEE Transactions on Engineering Management, Vol. 66 No. 4, pp. 568-582, doi: 10.1109/tem.2018.2855435.

Batziakoudi, K., Griva, A., Karagiannaki, A. and Pramatari, K. (2020), “Human computer interaction in business analytics: the case of a retail analytics platform”, Twenty-Eighth European Conference on Information Systems (ECIS).

Benbasat, I., Goldstein, D. and Mead, M. (1987), “The case research strategy in studies of information systems case research definition”, MIS Quarterly, Vol. 11 No. 3, pp. 369-386, doi: 10.2307/248684.

Bhandari, U., Chang, K. and Neben, T. (2019), “Understanding the impact of perceived visual aesthetics on user evaluations: an emotional perspective”, Information and Management, Vol. 56 No. 1, pp. 85-93, doi: 10.1016/j.im.2018.07.003.

Card, S.K., Mackinlay, J. and Shneiderman, B. (Eds) (1999), Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann, Burlington, MA.

Chen, C. (2005), “Top 10 unsolved information visualization problems”, IEEE Computer Graphics and Applications, Vol. 25 No. 4, pp. 12-16, doi: 10.1109/mcg.2005.91.

Chouki, M., Borja De Mozota, B., Kallmuenzer, A., Kraus, S. and Dabic, M. (2023), “Design thinking and agility in digital production: the key role of user experience design”, IEEE Transactions on Engineering Management, Vol. 70 No. 12, pp. 4207-4221, doi: 10.1109/tem.2021.3099094.

Corvello, V., Troise, C., Schiuma, G. and Jones, P. (2024), “How start-ups translate learning from innovation failure into strategies for growth”, Technovation, Vol. 134, 103051, doi: 10.1016/j.technovation.2024.103051.

De Angeli, A., Sutcliffe, A. and Hartmann, J. (2006), “Interaction, usability and aesthetics: what influences users' preferences?”, Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS, Vol. 2006, pp. 271-280.

Delen, D. and Zolbanin, H.M. (2018), “The analytics paradigm in business research”, Journal of Business Research, Vol. 90, pp. 186-195, doi: 10.1016/j.jbusres.2018.05.013.

Dix, A., Finlay, J., Abowd, G.D. and Beale, R. (2004), “Ch.9 evaluation techniques”, in Human-Computer Interaction, 3rd ed., Pearson Education.

Eisenhardt, K.M. and Graebner, M.E. (2007), “Theory building from cases: opportunities and challenges”, Academy of Management Journal, Vol. 50 No. 1, pp. 25-32, doi: 10.5465/amj.2007.24160888.

Franconeri, S.L., Padilla, L.M., Shah, P., Zacks, J.M. and Hullman, J. (2021), “The science of visual data communication: what works”, Psychological Science in the Public Interest, Vol. 22 No. 3, pp. 110-161, doi: 10.1177/15291006211051956.

González-Torres, A., García-Peñalvo, F.J. and Therón, R. (2013), “Human-computer interaction in evolutionary visual software analytics”, Computers in Human Behavior, Vol. 29 No. 2, pp. 486-495, doi: 10.1016/j.chb.2012.01.013.

Gray, W.D. and Salzman, M.C. (1998), “Damaged merchandise? A review of experiments that compare usability evaluation methods”, Human-Computer Interaction, Vol. 13 No. 3, pp. 203-261, doi: 10.1207/s15327051hci1303_2.

Griva, A., Bardaki, C., Pramatari, K. and Papakiriakopoulos, D. (2018), “Retail business analytics: customer visit segmentation using market basket data”, Expert Systems with Applications, Vol. 100, pp. 1-16, doi: 10.1016/j.eswa.2018.01.029.

Gutiérrez, F., Seipp, K., Ochoa, X., Chiluiza, K., De Laet, T. and Verbert, K. (2020), “LADA: a learning analytics dashboard for academic advising”, Computers in Human Behavior, Vol. 107, 105826, doi: 10.1016/j.chb.2018.12.004.

Hartson, H.R., Andre, T.S. and Williges, R.C. (2003), “Criteria for evaluating usability evaluation methods”, International Journal of Human-Computer Interaction, Vol. 15 No. 1, pp. 145-181, doi: 10.1207/s15327590ijhc1501_13.

Heer, J. and Shneiderman, B. (2012), “Interactive dynamics for visual analysis”, Communications of the ACM, Vol. 55 No. 4, pp. 45-54, doi: 10.1145/2133806.2133821.

Hollender, N., Hofmann, C., Deneke, M. and Schmitz, B. (2010), “Integrating cognitive load theory and concepts of human-computer interaction”, Computers in Human Behavior, Vol. 26 No. 6, pp. 1278-1288, doi: 10.1016/j.chb.2010.05.031.

ISO 9241-11:2018 (2018), Ergonomics of Human-System Interaction - Part 11: Usability: Definitions and Concepts, International Organization for Standardization (ISO), Geneva.

Karpinskaia, E. (2023), “Make me act rapidly: identity perspective to the dynamics of start-up creation process”, Journal of Entrepreneurship in Emerging Economies, Vol. 15 No. 6, pp. 1612-1633, doi: 10.1108/jeee-11-2021-0450.

Kashfi, P., Feldt, R. and Nilsson, A. (2019), “Integrating UX principles and practices into software development organizations: a case study of influencing events”, Journal of Systems and Software, Vol. 154, pp. 37-58, doi: 10.1016/j.jss.2019.03.066.

Lallemand, C. and Koenig, V. (2017), “Lab testing beyond usability: challenges and recommendations for assessing user experiences”, Journal of Usability Studies, Vol. 12 No. 3, pp. 133-154.

Lavie, T. and Tractinsky, N. (2004), “Assessing dimensions of perceived visual aesthetics of web sites”, International Journal of Human-Computer Studies, Vol. 60 No. 3, pp. 269-298, doi: 10.1016/j.ijhcs.2003.09.002.

Law, E.L.C., Van Schaik, P. and Roto, V. (2014), “Attitudes towards user experience (UX) measurement”, International Journal of Human-Computer Studies, Vol. 72 No. 6, pp. 526-541, doi: 10.1016/j.ijhcs.2013.09.006.

Li, L., Lin, J., Ouyang, Y. and Luo, X. (Robert) (2022), “Evaluating the impact of big data analytics usage on the decision-making quality of organizations”, Technological Forecasting and Social Change, Vol. 175, 121355, doi: 10.1016/j.techfore.2021.121355.

Lobanova, P., Bakhtin, P. and Sergienko, Y. (2024), “Identifying and visualizing trends in science, technology, and innovation using SciBERT”, IEEE Transactions on Engineering Management, Vol. 71, pp. 1-9, doi: 10.1109/tem.2023.3306569.

Luo, W. (2019), “User choice of interactive data visualization format: the effects of cognitive style and spatial ability”, Decision Support Systems, Vol. 122, 113061, doi: 10.1016/j.dss.2019.05.001.

Mazumder, F.K. and Das, U.K. (2014), “Usability guidelines for usable user interface”, International Journal of Renewable Energy Technology, Vol. 3, pp. 279-282.

Mei, H., Guan, H., Xin, C., Wen, X. and Chen, W. (2020), “DataV: data visualization on large high-resolution displays”, Visual Informatics, Vol. 4 No. 3, pp. 12-23, doi: 10.1016/j.visinf.2020.07.001.

Miniukovich, A. and De Angeli, A. (2015), “Computation of interface aesthetics”, Annual ACM Conference on Human Factors in Computing Systems, CHI’15, pp. 1163-1172.

Nielsen, J. (1994a), “Heuristic evaluation”, in Usability Inspection Methods, John Wiley and Sons, pp. 25-62.

Nielsen, J. (1994b), Usability Engineering, Morgan Kaufmann, Burlington, MA.

Nielsen, J. (1994c), “Usability inspection methods”, ACM Conference on Human Factors in Computer Systems, CHI’94, pp. 413-414.

Olshannikova, E., Ometov, A., Koucheryavy, Y. and Olsson, T. (2015), “Visualizing Big Data with augmented and virtual reality: challenges and research agenda”, Journal of Big Data, Vol. 2 No. 22, pp. 1-27, doi: 10.1186/s40537-015-0031-2.

Pohl, M., Smuc, M. and Mayr, E. (2012), “The user puzzle—explaining the interaction with visual analytics systems”, IEEE Transactions on Visualization and Computer Graphics, Vol. 18 No. 12, pp. 2908-2916, doi: 10.1109/tvcg.2012.273.

Pohl, M., Wallner, G. and Kriglstein, S. (2016), “Using lag-sequential analysis for understanding interaction sequences in visualizations”, International Journal of Human-Computer Studies, Vol. 96, pp. 54-66, doi: 10.1016/j.ijhcs.2016.07.006.

Pousman, Z., Stasko, J.T. and Mateas, M. (2007), “Casual information visualization: depictions of data in everyday life”, IEEE Transactions on Visualization and Computer Graphics, Vol. 13 No. 6, pp. 1145-1152, doi: 10.1109/tvcg.2007.70541.

Raman, R., Bhattacharya, S. and Pramod, D. (2019), “Predict employee attrition by using predictive analytics”, Benchmarking: An International Journal, Vol. 26 No. 1, pp. 2-18, doi: 10.1108/bij-03-2018-0083.

Raman, R., Bhattacharya, S., Goswami, A.K., Sinha, A., Goswami, M. and Kumar, P. (2024), “Explicating the mapping between big data and knowledge management: a systematic literature review and future directions”, Benchmarking: An International Journal, Vol. ahead-of-print No. ahead-of-print, doi: 10.1108/bij-09-2022-0550.

Raza, S.A., Govindaluri, S.M. and Bhutta, M.K. (2023), “Research themes in machine learning applications in supply chain management using bibliometric analysis tools”, Benchmarking: An International Journal, Vol. 30 No. 3, pp. 834-867, doi: 10.1108/bij-12-2021-0755.

Saket, B., Endert, A. and Stasko, J. (2016), “Beyond usability and performance: a review of user experience-focused evaluations in visualization”, Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization (BELIV ’16), pp. 133-142.

Sorapure, M. (2019), “Text, image, data, interaction: understanding information visualization”, Computers and Composition, Vol. 54, 102519, doi: 10.1016/j.compcom.2019.102519.

Stake, R. (1995), “Case researcher roles”, in The Art of Case Study Research, Sage Publications, Thousand Oaks, CA.

Hassenzahl, M. and Tractinsky, N. (2006), “User experience - a research agenda”, Behaviour and Information Technology, Vol. 25 No. 2, pp. 91-97.

Thüring, M. and Mahlke, S. (2007), “Usability, aesthetics and emotions in human-technology interaction”, International Journal of Psychology, Vol. 42 No. 4, pp. 253-264, doi: 10.1080/00207590701396674.

Tipu, S.A.A. and Fantazy, K. (2023), “Examining the relationships between big data analytics capability, entrepreneurial orientation and sustainable supply chain performance: moderating role of trust”, Benchmarking: An International Journal, Vol. ahead-of-print No. ahead-of-print, doi: 10.1108/bij-04-2023-0206.

Tuch, A.N., Roth, S.P., Hornbæk, K., Opwis, K. and Bargas-Avila, J.A. (2012), “Is beautiful really usable? Toward understanding the relation between usability, aesthetics, and affect in HCI”, Computers in Human Behavior, Vol. 28 No. 5, pp. 1596-1607, doi: 10.1016/j.chb.2012.03.024.

Vázquez-Ingelmo, A., García-Holgado, A., Verdugo-Castro, S., Therón, R. and García-Peñalvo, F.J. (2024), “Data visualization and domain knowledge: insights through focus groups of researchers in Spain”, Computers in Human Behavior, Vol. 155, 108162, doi: 10.1016/j.chb.2024.108162.

Viégas, F.B. and Wattenberg, M. (2006), “Communication-minded visualization: a call to action”, IBM Systems Journal, Vol. 45 No. 4, pp. 801-812, doi: 10.1147/sj.454.0801.

Yin, R.K. (2009), Case Study Research: Design and Methods, Sage Publications, Thousand Oaks, CA.

Zamani, E.D., Griva, A. and Conboy, K. (2022), “Using business analytics for SME business model transformation under pandemic time pressure”, Information Systems Frontiers, Vol. 24 No. 4, pp. 1145-1166, doi: 10.1007/s10796-022-10255-8.

Acknowledgements

Dr Griva received funding for this research from the Science Foundation Ireland grant 13/RC/2094_2. Open access funding was provided by IReL.

Corresponding author

Anastasia Griva is the corresponding author and can be contacted at: anastasia.griva@universityofgalway.ie

About the authors

Dr Anastasia Griva is an Assistant Professor in Business Information Systems at the University of Galway, Ireland. She has worked on research commercialisation by establishing two AI and analytics start-ups which have raised more than 1.3M from VC funds. Her research interests lie in the areas of Business Analytics, Responsible AI and innovation. She has published in edited books, academic journals (e.g., Information Systems Journal, Information Systems Frontiers, IT and People, International Journal of Information Management, Journal of Retailing and Consumer Services, Expert Systems with Applications, IEEE Software, Journal of Decision Systems), and proceedings of international conferences (e.g., ECIS, MCIS, EGOS).

Dr Angeliki Karagiannaki is a part-time Lecturer at the Departments of Management Science and Technology, and Informatics of the Athens University of Economics and Business (AUEB). She is also a founding member and the managing director of ACEin, the incubation centre of AUEB. She holds a PhD from AUEB and a Master’s in Management Science and Operational Research from Warwick Business School. Her research interests lie in innovation and entrepreneurship, research commercialisation, and supply chain management innovation. Angeliki has participated actively in several national and European research projects and has published articles in many academic journals and conferences.
