The application of AI in digital HRM – an experiment on human decision-making in personnel selection

Christine Dagmar Malin (Business Analytics and Data Science-Center, University of Graz, Graz, Austria)
Jürgen Fleiß (Business Analytics and Data Science-Center, University of Graz, Graz, Austria)
Isabella Seeber (Department of Management, Technology and Strategy, Grenoble Ecole de Management, Grenoble, France)
Bettina Kubicek (Institute of Psychology, Work and Organizational Psychology, University of Graz, Graz, Austria)
Cordula Kupfer (Institute of Psychology, Work and Organizational Psychology, University of Graz, Graz, Austria)
Stefan Thalmann (Business Analytics and Data Science-Center, University of Graz, Graz, Austria)

Business Process Management Journal

ISSN: 1463-7154

Article publication date: 18 July 2024

Issue publication date: 16 December 2024


Abstract

Purpose

How to embed artificial intelligence (AI) in human resource management (HRM) is one of the core challenges of digital HRM. Despite regulations demanding humans in the loop to ensure human oversight of AI-based decisions, it is still unknown how much decision-makers rely on information provided by AI and how this affects (personnel) selection quality.

Design/methodology/approach

This paper presents an experimental study using vignettes of dashboard prototypes to investigate the effect of AI on decision-makers’ overreliance in personnel selection, particularly the impact of decision-makers’ information search behavior on selection quality.

Findings

Our study revealed decision-makers’ tendency towards status quo bias when using an AI-based ranking system, meaning that they paid more attention to applicants that were ranked higher than to those ranked lower. We identified three information search strategies that have different effects on selection quality: (1) homogeneous search coverage, (2) heterogeneous search coverage, and (3) no information search. The more equally often applicants were searched (i.e. homogeneous coverage) rather than certain applicants receiving more search views than others (i.e. heterogeneous coverage), the higher the search intensity was, resulting in higher selection quality. No information search is characterized by low search intensity and low selection quality. Priming decision-makers towards carrying responsibility for their decisions or explaining potential AI shortcomings had no moderating effect on the relationship between search coverage and selection quality.

Originality/value

Our study highlights the presence of status quo bias in personnel selection given AI-based applicant rankings, emphasizing the danger that decision-makers over-rely on AI-based recommendations.

Citation

Malin, C.D., Fleiß, J., Seeber, I., Kubicek, B., Kupfer, C. and Thalmann, S. (2024), "The application of AI in digital HRM – an experiment on human decision-making in personnel selection", Business Process Management Journal, Vol. 30 No. 8, pp. 284-312. https://doi.org/10.1108/BPMJ-11-2023-0884

Publisher: Emerald Publishing Limited

Copyright © 2024, Christine Dagmar Malin, Jürgen Fleiß, Isabella Seeber, Bettina Kubicek, Cordula Kupfer and Stefan Thalmann

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The rapid growth of digital technologies has transformed traditional human resource management (HRM) into digital HRM (Nicolás-Agustín et al., 2022; Jackson and Dunn-Jensen, 2021). The most promising and heavily debated technology that is currently on the rise in personnel selection is artificial intelligence (AI) (Black and van Esch, 2020). AI refers to an intelligent technology that uses natural language processing (NLP) and machine learning (ML) algorithms to automatically learn from a given data set and apply the learned hiring rules to future personnel selection decisions (Huang and Rust, 2018). It is able to collect applicant information from various sources, process, evaluate, and present it to decision-makers, for example, in dashboards (Yigitbasioglu and Velcu, 2012). Despite the high potential ascribed to AI use in organizations (Bongard, 2019; Oberst et al., 2021; Langer et al., 2021), the question of how AI can be successfully integrated into personnel selection is one of the main challenges in the digital transformation of HRM (Prikshat et al., 2023).

One reason for this challenge is that when AI is used as decision support, the interaction between humans and AI can lead to biases in the decision-making process (Skitka et al., 2000; Charlwood and Guenole, 2022; Soleimani et al., 2021). Decision-makers may adopt an AI-based ranking as a default option without adequately scrutinizing it, which is a form of status quo bias (Geng, 2016) and can be indicative of human overreliance on AI-based recommendations. However, relying on AI-based recommendations without critically checking their suitability is problematic, since, for example, incomplete data sets can lead to erroneous AI-based recommendations (Soleimani et al., 2021). Consequently, status quo bias and decision-makers’ overreliance on AI suggestions can have a negative impact on selection quality (Green, 2022). Status quo bias can thus have far-reaching negative consequences, ranging from ethical concerns regarding fairness and equal opportunities, to legal concerns, e.g. under Art. 22 GDPR (automated decision-making), discrimination laws, or (upcoming) AI regulations, to organizational challenges in the form of dissatisfaction and high opportunity costs due to not hiring the most suitable candidates (Hunt, 2007).

To ensure that AI meets ethical, legal, and organizational standards, it is of great relevance to investigate whether there is a status quo bias in the context of AI-based personnel selection. One way to verify its existence is by investigating decision-makers’ information search behavior during AI use. The search for information can be described based on several parameters, such as the amount of available information considered or how equally the information about each option is considered (Redlawsk, 2004; Lau, 1995, 2003). Depending on these parameters, the tendency towards status quo bias and selection quality varies. However, to the best of our knowledge, there is a lack of studies that observe how decision-makers navigate through AI-based decision support tools, which generally offer different levels of detail (Lai et al., 2021). It is unknown whether decision-makers in personnel selection rigorously evaluate or blindly rely on the information provided by AI. To prevent status quo bias, and thus overreliance on AI, we need to understand how decision-makers search for available information when making personnel selection decisions with the assistance of AI.

This paper investigates decision-makers’ information search behavior when using AI in personnel selection and its impact on their selection quality. We are also interested in whether the visualization of (applicant) information in the form of a ranking in an AI-based dashboard, as well as prompts, influences decision-makers’ interaction and overreliance. To achieve this goal, we designed a prototype of a clickable dashboard that mimics an AI-based dashboard and used it in an experimental vignette study. Our design represents an AI-based system that visualizes job applicant information on different levels of aggregation, including an AI-based ranking list of applicants. Decision-makers could freely navigate between applicants and aggregation levels with the goal of adjusting the ranking based on their assessment and the decision support given by the AI.

Our study contributes to our understanding of status quo bias by showing that an AI-based ranking system for personnel selection can trigger status quo bias among human decision-makers, indicating overreliance on AI-based recommendations. Our findings also contribute to information search theory by demonstrating that the way decision-makers search for information affects their personnel selection quality. We identified three information search strategies that decision-makers adopt when searching and processing (applicant) information and show how each strategy impacts (personnel) selection quality. Our findings also serve as a basis for organizations and decision-makers to consciously guide the choice of information search strategy through the way AI visualizes and provides information, ensuring high selection quality while at the same time preventing status quo bias and thus overreliance on AI. Our study therefore provides implications for a suitable AI design to increase selection quality as well as recommendations for practitioners for mitigating status quo bias. We provide relevant insights for organizations and researchers in the context of the current debate about AI regulations that require human oversight in high-risk areas such as personnel selection.

2. Background

2.1 Human oversight and status quo bias

Despite organizational decision-making being one of the most important applications of AI (Cao et al., 2021) and AI being increasingly used in personnel selection to improve selection quality, its use can at the same time lead to biases arising from the interaction between humans and AI (Skitka et al., 2000; Charlwood and Guenole, 2022). These biases in the decision-making process can arise from algorithm design (Langer and König, 2023) and from (human) biases reflected in training data. To protect users from negative consequences caused by biased AI decisions (Mikalef et al., 2022), legislators and societal interest groups define human oversight as a key requirement when using AI in sensitive decision-making processes such as personnel selection (Hunkenschroer and Luetge, 2022). Several regulations, such as Article 22 of the General Data Protection Regulation (GDPR), the EU AI Act, or the AI Bill of Rights, demand that humans make the final decision (Mikalef et al., 2022; Hunkenschroer and Luetge, 2022; Charlwood and Guenole, 2022). AI in personnel selection has to support decision-makers appropriately, but it must be ensured that decision-makers rigorously check recommendations and decide themselves (Green, 2022). By involving people in the decision-making process (i.e. humans in the loop), the objective is to ensure the maintenance of human competence and accountability, legal safeguards, and quality controls (Enarsson et al., 2022), as well as fairness and transparency in the decision-making process. AI regulations currently leave room for interpretation on how they can be implemented in practice (Langer and König, 2023). It is unclear how effective human oversight can be implemented and ensured, e.g. through the design of AI, to thereby enable optimal or unbiased personnel selection decisions (Fleiß et al., 2024).

High-quality decision-making requires that the decision-maker systematically and critically investigates applicant information (Malin et al., 2023). Decision-makers must rely on AI decisions to an appropriate extent (i.e. distinguish between correct and incorrect AI-based recommendations) to benefit from AI’s advantages (Schemmer et al., 2022). From prior research, it is known that (inexperienced) human decision-makers tend to over-rely on system recommendations (e.g. Jakubik et al., 2023), indicating insufficient human oversight. Especially when using such automated systems, decision-makers tend to trust them excessively regarding reliability (Lee and See, 2004) and accept automated recommendations without questioning them, which can reduce selection quality (Goddard et al., 2012). Consequently, decision-makers may forego their own deep search and processing of information (Mosier and Skitka, 1999). This overreliance on algorithms and its negative consequences can be amplified or caused by status quo bias. Status quo bias states that people tend to prefer a pre-selected option. They do not include all available information in their decision but instead rely on the current status quo or what someone else has chosen for them (Debarliev et al., 2020). The tendency towards status quo bias can be ascribed to several well-established concepts, including default bias (i.e. the tendency to stick with the default option), inertia (i.e. resilience to change, increased by the size and complexity of the organization), loss aversion (i.e. avoidance of change, as losses are weighted more heavily than gains) and sunk cost (i.e. innovation resistance, including the concept of status quo satisfaction) (Godefroid et al., 2023). The status quo bias is a widespread phenomenon that has already been observed in many disciplines, such as price management (Bergers, 2022) or investment decisions (Freiburg and Grichnik, 2013). The relevance of the status quo bias issue is also reflected in the increasing research interest, which has so far focused on investigating mechanisms favoring the bias, measurement options for status quo bias (Godefroid et al., 2023), and the susceptibility of certain groups of people to the status quo bias (Burmeister and Schade, 2007). Specifically, in information systems research, the focus has been on investigating the effects of status quo bias on various factors such as user resistance to and adoption of new technologies (Lee and Joshi, 2017) like AI-based voice assistants (Balakrishnan et al., 2024), with evidence of the effect of status quo bias also observed in the AI context (Chatfield and Reddick, 2019). Similarly, recent research on HRM has focused on the effects of status quo bias on resistance to mobile HRM applications (Shankar and Nigam, 2022) or its occurrence in conventional personnel selection decisions. Given this background, the status quo bias in conventional personnel selection is a valid concern that could gain further significance with the increasing use of AI.

In personnel selection, the outcomes of AI visualized in a dashboard can present a status quo in the form of a ranking of suitable applicants. Options ranked higher or recommended by an AI can tempt decision-makers to treat them as pre-selected or default options. When AI is used in personnel selection, it can be assumed that decision-makers tend to focus on a limited number of applicant profiles rather than consider all profiles equally when making the personnel selection decision. That is, they concentrate their information search on a subset of applicants rather than considering the information of the full set of applicants. We term these two kinds of information search subset-focused information search and full set information search.

The assumption that a predefined set of options shifts the focus of the information search to those options is consistent with research by Geng (2016). In several experiments, Geng showed that one underlying mechanism through which the status quo bias comes into effect is that decision-makers focus their attention for a longer time on the default option and spend less time evaluating non-default options. This form of skewed attention distribution, which we term subset-focused information search, means that information about the default options is better processed and understood by decision-makers (Geng, 2016). We formulate the following hypothesis:

H1.

Decision-makers are more likely to show a subset-focused information search in the aggregate than a full set information search.

2.2 Search coverage and selection quality

In personnel selection, the decision-making process is the choice between several options (Pomerol, 1997). It involves three components: information search, evaluation/action, and feedback/learning (Einhorn and Hogarth, 1981; Payne, 1982). Decision-makers differ in the amount of effort they put into each phase and in the way they search and process information. When it comes to making effective decisions, the way in which information is searched is crucial (Mishra et al., 2015), making information search an important determinant of selection quality (Jelokhani-Niaraki and Malczewski, 2015). Information search, as within personnel selection, means that a decision-maker gathers information to select one or more available alternatives (Browne and Pitts, 2004).

Decision-makers either conduct an active and intensive information search (i.e. information searching) or a non-intensive or average information search (i.e. no information searching) (Maser and Weiermair, 1998). In doing so, decision-makers follow certain decision rules that determine the way they search and process information. These decision rules are reflected in compensatory and non-compensatory (information) search strategies, which differ, for example, regarding decision-makers’ search coverage, i.e. the equality of the distribution of considered alternatives (Redlawsk, 2004). Compensatory search strategies are, for example, characterized by the decision-maker fully searching, considering, and reviewing all available information to make the decision (Cook, 1993), meaning that the same information is considered for each available alternative (Lau and Redlawsk, 2006). For personnel selection, this would mean that the decision-maker searches for the same information for each applicant and considers it when making a selection decision, thus comparing the individual applicant profiles homogeneously with each other. In contrast, decision-makers with a non-compensatory search strategy search unequally or heterogeneously for information across the alternatives (Lau and Redlawsk, 2006), which means that not all available information is considered for making the decision (Cook, 1993). Decision-makers with a non-compensatory search strategy tend to have preferences regarding the alternatives (Cook, 1986). For personnel selection, this would mean that the decision-maker, as a result of the unequal or heterogeneous search across the applicants, considers certain applicants more strongly for the final selection decision than others. Thus, the compensatory search strategy is characterized by a homogeneous search coverage, while a non-compensatory search strategy is characterized by a rather heterogeneous search coverage.

Even if the decision-making situation is identical, decision-makers’ selection decisions vary depending on whether their information search has compensatory or non-compensatory characteristics (i.e. homogeneous or heterogeneous search coverage) (Cook, 1993). One reason for the varying selection quality is that in compensatory information search decision-makers tend to use more complex and elaborate decision-making heuristics, while in non-compensatory information search, they employ more simplifying decision-making heuristics (Takemura and Selart, 2007). Particularly in the latter case, to reduce cognitive effort, cross-dimension comparisons and combinations of information are avoided and available information is not used, which in turn can lead to lower selection quality (Cook, 1993; Redlawsk, 2004). It can be assumed that the way information is searched by decision-makers has an influence on selection quality.

Various factors, such as status quo bias, can in turn influence how equally decision-makers search for information (Geng, 2016). It can be assumed that when options are searched and processed heuristically, it is more likely that cognitive biases affect the decision. If the AI-based ranking creates a status quo bias (see above) and the decision-maker conducts a more heuristic search with heterogeneous search coverage, good options are more likely to be overlooked. In contrast, when a decision-maker engages in comparative and more deliberate searching with homogeneous search coverage, status quo bias is less likely to occur, because the decision-maker compares the first option equally with the later options and may thereby become aware of potential errors in the AI recommendations. Heuristic information searching with heterogeneous search coverage is less likely to uncover errors in the AI output, because the decision-maker will tend to search and process information less intensively and less completely, resulting in lower selection quality. We formulated the following hypothesis:

H2.

Decision-makers with a homogeneous search coverage achieve higher selection quality than those with a heterogeneous search coverage.

Besides search coverage, the amount of information considered for making the selection decision is also an important determinant of selection quality (Lau and Redlawsk, 2006). Information behavior models, such as rational choice theory, state that the more information is considered, the better the decisions that can be made (i.e. higher selection quality) (Redlawsk, 2004). The consideration of a large amount of information requires an intensive and extensive information search by the decision-maker (i.e. search intensity). This means that an information search can either be intensive, i.e. all attributes or information of each applicant are investigated, or shallow, i.e. the decision-maker’s attention rests only on a limited set of attributes and/or applicants (Redlawsk, 2004). A high search intensity is associated with a compensatory search strategy, as more complex decision heuristics and homogeneous search coverage mean that a larger amount of available information is considered (Jelokhani-Niaraki and Malczewski, 2015; Redlawsk, 2004). On the other hand, a shallow search intensity is associated with a non-compensatory search strategy, as the decision-maker spends less effort searching for information (Redlawsk, 2004). It can be assumed that with increasingly homogeneous search coverage, information is searched more intensively, which in turn positively influences selection quality. We investigate whether the relationship between search coverage and selection quality can be explained by search intensity:

H3.

Search intensity mediates the relationship between search coverage and selection quality.

2.3 Moderation of priming

Decision-makers generally tend towards heuristic information searching and processing, as humans prefer low processing effort (Davis and Tuttle, 2013). However, the presence of various conditions, such as accuracy, defense, and impression motivation, can activate a more thorough way of searching and processing information (Zhang et al., 2014). The artificial creation of these conditions can be understood as priming. Priming means that behavior is unconsciously activated for a short time (Bargh and Chartrand, 2000), which makes it possible to measure the psychological effects of the primed concepts on behavior (Cohn and Maréchal, 2016).

Priming towards organizational factors such as responsibility and the occurrence of possible system errors is a promising way to influence decision-makers’ information search behavior. Decision-makers in personnel selection who are primed for responsibility are sensitized to the need to justify their selections (Skitka et al., 2000), which makes them aware of the risks of possible negative (legal) consequences for themselves. If decision-makers feel responsible for negative outcomes, this may increase their sufficiency threshold, which in turn increases the likelihood that they will practice systematic processing (Chaiken et al., 1989). Decision-makers who have been primed for responsibility before the decision-making process will exert considerable cognitive effort in making a decision. They tend to use complex rules when selecting options and to search and process information in a detailed and critical way (Skitka et al., 2000). Taking responsibility for the correctness of a decision can mitigate cognitive biases by prompting greater scrutiny of automated decisions and the incorporation of multiple sources of information (Mosier et al., 1996). The tendency for errors of omission and commission can be reduced, leading to improved selection quality (Skitka et al., 2000; Mosier et al., 1996).

When using decision support systems, such as dashboards, decision-makers may tend to overestimate a system’s capabilities, thereby placing undue trust in its recommendations and, as a result, adopting them unquestioningly (i.e. automation bias) (Goddard et al., 2012; Buçinca et al., 2021). Previous research indicates that when users are sensitized to possible system errors, they engage in more checking behavior (e.g. searching for and matching the automated recommendation with other available information), resulting in lower automation bias (Manzey et al., 2006; Bahner et al., 2008). The impact of decision-makers’ sensitization to possible system failures on the occurrence of status quo bias has not been investigated so far, but from the research on automation bias, it can be assumed that it also reduces status quo bias. Similar to responsibility, it can be argued that decision-makers who are primed for the occurrence of possible system errors will invest greater cognitive effort in searching and processing information, thereby reducing the occurrence of biases such as status quo bias, resulting in higher selection quality.

We assume that priming regarding responsibility for decisions or potential shortcomings of AI affects the strength of the relationship between search coverage and selection quality:

H4a.

The relationship between search coverage and selection quality is moderated by whether decision-makers were primed on responsibility.

H4b.

The relationship between search coverage and selection quality is moderated by whether decision-makers were primed on the occurrence of system errors.

H4c.

The relationship between search coverage and search intensity is stronger for decision-makers who are primed on responsibility and/or for the occurrence of system errors, which in turn leads to a stronger indirect effect on selection quality.

3. Method

This study followed a 3 × 2 experimental design. We used vignettes of dashboard prototypes to investigate the impact of decision-makers’ information search behavior on selection quality. This method was chosen as vignettes present the study participants with realistic scenarios representing decision-makers’ typical contexts of use, allowing us to manipulate independent variables and assess the study participants’ behaviors (Aguinis and Bradley, 2014). We followed the best practice recommendations of Aguinis and Bradley (2014) for designing and implementing the vignettes.

We manipulated priming (control vs. responsibility vs. occurrence of system errors) and the AI-based rating score visualization (matching score vs. 5-point rating). The latter is not the focus of the study, but as it was part of the overall manipulation, we retained it as a control variable in the research model and analysis.

Decision-makers assigned to the control group were informed about the basics of a dashboard and its functions. In addition to this basic information, decision-makers in the occurrence-of-system-errors group were given more detailed information about the dashboard and a warning about the possible occurrence of system errors. Decision-makers in the responsibility group, after receiving basic information about the dashboard and its functions, were informed that according to Art. 22 GDPR they are responsible for their selection decision and would have to justify it after the experiment. At the end of the experiment, however, the justification was waived. The dashboard provided to all decision-maker groups included a visual assessment of each applicant’s suitability for the open position consisting of three key indicators: education, abilities, and personality.

3.1 Participants

Our study sample consisted of 93 decision-makers. The majority of decision-makers (90%) were full-time students (82% studied psychology and 12% studied business). The remaining 10% of decision-makers were full-time employees. Psychology students received course credits for participation. Ten decision-makers of the student subsample had already completed seminars or internships in HRM. The average age was 23 years (SD = 3.89). Overall, 68% of the decision-makers were female, 30% male, and 2% diverse.

3.2 Dashboard

To design the dashboard’s vignettes as realistically as possible, we reviewed visual analytics research for design requirements for creating a dashboard. In doing so, we identified several requirements, including adherence to design standards such as context and readability (Sosulski, 2018) and the role of human intervention in the context of statistical storytelling (Yau, 2013). Based on these requirements, we created a dashboard prototype representing an AI as decision support for personnel selection in Preely (Testlab ApS, 2020) (see Figure 1). We deliberately designed a dashboard prototype representing an AI, as this ensured that all decision-makers made their personnel selection decisions based on identical information provided by the dashboard, allowing us to compare their individual information search behavior in the most effective way. Using an actual AI-based dashboard would have provided different information to each decision-maker as a basis for decision-making, as AI, trained using ML algorithms, learns from each interaction with the system and automatically adjusts its outcome accordingly (Das et al., 2015).

Decision-makers assumed that the information presented in the dashboard was captured, processed, and visualized by an AI, indicating that their information search behavior would not differ if we used an actual AI-based decision support system instead of dashboard prototypes. The dashboard prototype was reviewed by external experts from an AI provider and evaluated as corresponding to a real AI tool in practice.

The dashboard prototype visualized the assessment of the suitability of applicants for a position on three levels: On the first dashboard level, i.e. the main interface, an overview of all 10 applicants for the job was presented (see Figure 2). Additionally, the fit of each applicant to the job and several keywords from the CV were displayed. Additional features of the first dashboard level included a filter function that decision-makers could use to rank the applicants according to their scores. Clicking on an applicant called up the second level, which displayed information about the applicant’s professional background focusing on three key indicators (education, abilities, and personality) (see Figure 3). By clicking on one of the key indicators, more detailed information about the applicant was displayed in the form of a spider diagram, which compared the requirement profile with the qualification profile (see Figure 4). The second dashboard level also contained the option to call up the third dashboard level, showing the chat history of the applicant with a chatbot. The dashboard levels and applicants could be called up as often as desired.

3.3 Procedure

The study was conducted in a computer room and data were collected in 17 sessions. At the beginning of the experiment, all decision-makers received a written introduction to the experiment, in which they were asked to imagine themselves as recruiters. The decision-makers were randomly assigned to one of six conditions. In accordance with the assigned priming group, they received a further written introduction containing relevant information about the dashboard and its functions. Decision-makers were informed, and assumed, that all applicant information had been processed by an AI using intelligent language analysis, assigned numerical values, and aggregated into viewable charts and summary values.

The introduction included a request to perform two personnel selection tasks for two different positions using the dashboard. First, each decision-maker had to select the five most suitable applicants out of 10 for the position of head of the marketing department and rank them according to their qualifications. Afterward, they performed a second personnel selection task, this time selecting the five most suitable applicants for the position of branch manager of a psychosocial facility. For both tasks, decision-makers were provided with a job description before the experiment, which included an overview of the requirements for each open position. Decision-makers completed both tasks using the dashboard, which presented the key indicator scores for each applicant at the highest aggregation level, which was also the starting page of the dashboard (see Figure 2). For each task, the dashboard intentionally ranked three applicants incorrectly, two too high and one too low. The authenticity of these integrated errors was discussed with two industry experts working for a company specializing in AI-based personnel selection solutions. The dashboard was designed in such a way that decision-makers had to access at least the second dashboard level to have enough information to evaluate the applicants sufficiently and to detect the integrated errors.

The decision-makers had a maximum of 10 minutes to complete each task. However, they were free to finish the tasks at any time and make the selection decision faster. After decision-makers had completed the selection for both tasks, they answered a series of questionnaires.

3.4 Measures

Our unit of observation is a decision path, defined as a decision-maker’s click behavior. It consists of all steps that a decision-maker has taken up to his or her final selection decision. A decision path shows in which order and at what time the decision-maker viewed which applicants, their information, and the individual dashboard levels. It contains all the information needed to measure and calculate the information search variables (i.e. equality of information search, search coverage, and search intensity) and the selection quality of a decision-maker within a personnel selection task. As each decision-maker completed two scenarios, we observed two decision paths per decision-maker.
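
To make this unit of observation concrete, the following minimal sketch (our illustration, not the authors’ logging code; all names and values are hypothetical) represents a decision path as an ordered click log:

```python
from dataclasses import dataclass

@dataclass
class Click:
    timestamp: float  # seconds since task start
    applicant: int    # applicant id (1-10)
    level: int        # dashboard level viewed (1, 2, or 3)

# One hypothetical decision path: the ordered clicks of a decision-maker
# up to his or her final selection decision.
decision_path = [
    Click(2.1, applicant=1, level=1), Click(4.8, applicant=1, level=2),
    Click(9.3, applicant=2, level=1), Click(12.0, applicant=2, level=2),
    Click(17.5, applicant=2, level=3), Click(21.4, applicant=3, level=2),
]
```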

Based on the literature described in the background section, we identified three parameters of information search that impact selection quality: (1) equality of information search, (2) search coverage, and (3) search intensity. These parameters were assessed via three eponymous indicators:

Equality of information search describes the extent to which decision-makers’ most frequently processed applicant profiles are equally distributed. To operationalize this measure, we assessed, for each decision-maker, which applicant profile received the most dashboard level 2 and 3 views. The most viewed applicant profile for each decision-maker received a 1. Aggregating these numbers across all decision-makers resulted in a distribution of most views across applicant profiles. The more this measure was concentrated on individual applicant profiles (i.e. an unequal distribution of considered applicant profiles), the more the decision-makers focused on certain applicant profiles, thus conducting a subset-focused information search; the more applicant profiles shared the same measure (i.e. an equal distribution of considered applicant profiles), the more equally the decision-makers considered all applicant profiles, thus conducting a full set information search.
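
A sketch of this aggregation under hypothetical per-decision-maker view counts (our illustration):

```python
from collections import Counter

# Level-2/3 view counts per applicant, one dict per decision-maker (hypothetical).
views_per_dm = [
    {1: 5, 2: 3, 7: 1},  # decision-maker A focused on applicant 1
    {1: 4, 3: 2},        # decision-maker B also focused on applicant 1
    {2: 6, 5: 1},        # decision-maker C focused on applicant 2
]

# Mark the most viewed applicant per decision-maker, then aggregate.
focus_counts = Counter(max(v, key=v.get) for v in views_per_dm)
print(focus_counts)  # Counter({1: 2, 2: 1}); the more unequal, the more subset-focused
```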

Search coverage describes how frequently and homogeneously a decision-maker searched for information. The variable was operationalized by the coverage measure, i.e. how often each applicant was viewed per decision path of a decision-maker. First, we calculated, for each decision path, how often each applicant was viewed and determined the mean of these view counts. Second, we calculated the variable search coverage as the standard deviation of the per-applicant view counts around this mean for each decision path. The higher the coverage measure, the more unequally the views were distributed across the applicants (i.e. heterogeneous search coverage); the lower the coverage measure, the more applicants were viewed the same number of times (i.e. homogeneous search coverage).
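
A minimal sketch of this coverage measure for a single decision path (view counts are illustrative):

```python
import numpy as np

# Per-applicant view counts within one decision path (10 applicants, hypothetical).
homogeneous = np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3])
heterogeneous = np.array([9, 6, 4, 3, 2, 2, 1, 1, 1, 1])

# Search coverage: standard deviation of the view counts around their mean.
print(np.std(homogeneous))    # 0.0   -> homogeneous search coverage
print(np.std(heterogeneous))  # ~2.53 -> heterogeneous search coverage
```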

Search intensity describes both the amount of available information that a decision-maker considered for making the selection decision and the effort invested in elaborating it. It was operationalized by the number of views of the second and third dashboard levels, whereby the variable was assessed via two indicators:

Search intensity: dashboard level 2 was measured by how often the second dashboard level was accessed, i.e. how often an applicant’s overview of his/her professional background focusing on three key factors (education, abilities, and personality) and/or the detail levels of each key factor were viewed.

Search intensity: dashboard level 3 was operationalized by how often the third dashboard level was accessed, i.e. how often applicants’ chat histories with a chatbot were viewed.

Selection quality describes the extent to which a decision-maker selected the most suitable applicants for the open job position. It was measured via the number of incorrect selection decisions. A selection decision was considered incorrect if an insufficiently qualified applicant was selected for the open position. The variable was calculated by subtracting the number of applicants who were incorrectly ranked higher (by the decision-maker) from the number of applicants the decision-maker selected and dividing the result by the number of applicants selected.
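
On this reading of the measure, it reduces to the following sketch (a hypothetical helper, not the authors’ code):

```python
def selection_quality(n_selected: int, n_incorrectly_higher: int) -> float:
    """Share of correct choices among the selected applicants."""
    return (n_selected - n_incorrectly_higher) / n_selected

print(selection_quality(5, 0))  # 1.0 -> error-free selection
print(selection_quality(5, 2))  # 0.6 -> two unsuitable applicants selected
```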

To control for the effects of other predictors in the statistical model, rating score visualization, decision-makers’ gender, and priming were set as control variables. Gender was recorded in three categories: (1) female, (2) male, and (3) diverse. Priming was also considered as a moderator in the analysis.

4. Results

4.1 Human oversight and status quo bias

Hypothesis 1 stated that decision-makers are more likely to perform a subset-focused information search than a full set information search. Since both types of information searches are characterized by the extent of equal distribution of the considered candidate profiles, we tested Hypothesis 1 by looking at the aggregate distribution of views that each applicant profile received from decision-makers. Specifically, we conducted Chi-square goodness-of-fit tests, as these statistical tests can be used to examine whether a data set may have been generated by a certain distribution (Rolke and Gongora, 2021). Figure 5 displays the distribution of how often each applicant was in focus in task 1 and task 2, separately.

Chi-square goodness-of-fit tests reject the null hypotheses that all applicants are in focus equally often (i.e. equal distribution of processed applicant profiles) in both the first and second task (χ2 = 115.64, df = 9 with p < 0.001 for task 1, and χ2 = 43.167, df = 9 with p < 0.001 for task 2, respectively), indicating a subset-focused information search.
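
For illustration, such a goodness-of-fit test against a uniform distribution can be run with scipy; the counts below are hypothetical stand-ins (only the 36 views for the top-ranked applicant in task 1 matches the figure reported in the text), so the statistic will not exactly reproduce the reported values:

```python
from scipy.stats import chisquare

# Hypothetical focus counts for the 10 applicants, ordered by AI rank
# (93 decision-makers; the paper reports chi2 = 115.64, df = 9, p < 0.001 for task 1).
observed = [36, 15, 11, 8, 6, 5, 4, 4, 2, 2]
stat, p = chisquare(observed)  # expected frequencies default to a uniform distribution
print(stat, p)
```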

For both the first and second task, the AI-generated ranking, in which applicants are initially ordered in the dashboard, results in an aggregate unequal information search pattern consistent with the presence of a status quo bias. Overall, decision-makers paid the most attention to applicants ranked number 1 (36 views in task 1, 20 views in task 2) compared to those ranked lower. Those ranked lowest (ranks 8 to 10) in turn received less attention than those in the middle ranks. Thus, when decision-makers searched for information to make the personnel selection decision, they paid more attention to applicant profiles ranked higher by the AI than to those ranked lower. In sum, our findings show an unequal distribution of considered applicant profiles (i.e. subset-focused information search), which is consistent with a status quo bias. Consequently, our data support H1.

4.2 Search coverage and selection quality

To better understand the underlying decision behavior that produces this status quo bias, we test in Hypothesis 2 how a decision-maker’s search coverage affects selection quality. We also looked at the effect of search coverage on selection quality at the individual level. The investigation of Hypothesis 2 consisted of qualitative and quantitative analysis. First, we used a cross-case qualitative thematic analysis (Braun and Clarke, 2006; Miles and Huberman, 1994) to classify different patterns of information search. Patterns in the respective decision-makers’ decision path, i.e. the order of the viewed applicants and the viewed dashboard levels, were identified and assigned to superordinate groups and topics. For example, a decision path was characterized by the following click order: Applicant 1 (first dashboard level) - Applicant 1 (second dashboard level (overview)) - Applicant 2 (first dashboard level) - Applicant 2 (second dashboard level (overview)) - Applicant 3 (first dashboard level) - Applicant 3 (second dashboard level (overview)) - Applicant 3 (first dashboard level).

We manually coded decision-makers’ click behavior while navigating across applicants to identify concrete information search behavior when interacting with an AI. We identified three distinct information search strategies: (1) no information search, (2) heterogeneous search coverage, and (3) homogeneous search coverage.

No information search describes an approach in which decision-makers make a selection decision without considering any detailed applicant information. Decision-makers only viewed the first dashboard level, i.e. the superficial overview of all applicants, before making their decision. Decision-makers using the other information search strategies viewed at least the second dashboard level, where more detailed information on an individual applicant was displayed. The typical sequence of detail-level views differed between these two remaining approaches. In homogeneous search coverage, detail-level views were distributed equally over all applicants, while in heterogeneous search coverage, one or a few applicants were in focus, i.e. received more views compared to others.

In 31 of a total of 186 decision paths, decision-makers did not search for detailed information for any applicant and only viewed the first-level overview page containing the assigned rating score visualization. Thirteen decision-makers chose this strategy for both personnel selection tasks, while five decision-makers chose this strategy exclusively for the first personnel selection task. None of the decision-makers who followed the no information search approach were able to make an error-free selection. We excluded those decision paths from further analyses. The remaining 155 decision paths were characterized by a varying degree of homogeneous and heterogeneous search coverage.

To quantitatively validate the results of the qualitative analysis and to test Hypothesis 2, stating that decision-makers with a homogeneous search coverage achieve higher selection quality than decision-makers with a heterogeneous search coverage, we conducted an OLS regression analysis. An OLS regression analysis minimizes the sum of squares of the differences between observed and predicted values, allowing us to model the relationship between a dependent and one or more independent variables (Burton, 2021), which makes this method appropriate for investigating the influence of decision-makers’ search coverage on their selection quality. Table 1 (see model nr. 1) displays the results of the OLS regression of search coverage on selection quality with gender, priming, and rating score visualization as control variables.

A significant F-test (F (5, 149) = 3.809, p = 0.003), together with an R2 of 0.113 (adjusted R2 = 0.084), indicates adequate model fit. For search coverage, we find a statistically significant regression coefficient of −1.039 (t = −2.945; p = 0.004): Selection quality decreases with higher values of search coverage. An increase in heterogeneous search coverage thus decreases selection quality, which is in line with H2. Of the control variables, only the priming dummy for the occurrence-of-system-errors condition had a significant effect.
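
A sketch of such a regression with the statsmodels formula API; the data frame, its column names, and the simulated values are our assumptions for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 155  # decision paths remaining after excluding no-information-search paths

# Simulated stand-in for the real decision-path data (one row per path).
paths = pd.DataFrame({
    "search_coverage": rng.uniform(0.0, 1.5, n),
    "gender": rng.choice(["female", "male", "diverse"], n),
    "priming": rng.choice(["control", "responsibility", "system_errors"], n),
    "rating_viz": rng.choice(["matching_score", "five_point"], n),
})
paths["selection_quality"] = (1.0 - 0.3 * paths["search_coverage"]
                              + rng.normal(0, 0.2, n)).clip(0, 1)

model = smf.ols(
    "selection_quality ~ search_coverage + C(gender) + C(priming) + C(rating_viz)",
    data=paths,
).fit()
print(model.summary())  # paper reports b = -1.039, t = -2.945, p = 0.004 for coverage
```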

4.3 Mediation of search intensity

To test Hypothesis 3, stating that search intensity mediates the relationship between search coverage and selection quality, we conducted a mediation analysis. We chose a mediation analysis as it investigates whether the relationship between an independent variable (search coverage) and a dependent variable (selection quality) can be explained by a third variable (search intensity) by analyzing the direct and indirect effects of these variables on each other. Specifically, the assumptions used in the mediation analysis are causal, as the literature on information search behavior indicates that increased (homogeneous) search coverage increases search intensity and thus increases selection quality. The mediator variable search intensity is represented by the number of views of the second and third dashboard levels, whereby the influence of the respective dashboard level was investigated separately in one of two models. Gender, priming, and rating score visualization were included as control variables in both models. Considering that the sampling distribution of indirect effects is generally skewed, we ensured a robust estimation of the effects and quantification of the uncertainties by bootstrapping with a 95% confidence interval.

Model 1.

Mediation of search intensity represented by accesses to the second dashboard level (Search intensity: level 2)

First, we conducted a mediation analysis to test Hypothesis 3, stating that search intensity regarding the second dashboard level mediates the relationship between search coverage and selection quality. The mediation analysis (models 1a and 1b in Table 1) for the second dashboard level clicks shows adequate model fit, explaining 13.51% of total variance (adjusted R2 = 0.106, F (5, 149) = 4.656, p = 0.001). The negative coefficient of −277.867 (t = −3.598; p = 0.0004) indicates that search intensity decreases with an increase in heterogeneous search coverage. The more homogeneously information is searched, the more intensively it is searched. Of the control variables, only the dummy variable capturing the occurrence of system errors-priming has a significant (positive) effect. Models 1a and 1b (see Table 1) show a significant negative effect of search coverage (b = −0.744, t = −2.073; p = 0.040) and a significant positive effect of search intensity (b = 0.001, t = 2.903; p = 0.004) on selection quality. Increased views of the second dashboard level increase selection quality. This model provides a significant explanatory contribution (F (6, 148) = 4.737, p = 0.0002), explaining 16.11% (adjusted R2 = 0.127) of the total variance.

The mediation analysis revealed a significant negative effect of search coverage (IV) on search intensity (M), B = −277.867, p = 0.0004 (a-path), and a significant positive effect of the mediator on selection quality (DV), B = 0.001, p = 0.004 (b-path). Under control of the mediator search intensity, there remains a weak significant negative (direct) effect of search coverage (IV) on the dependent variable selection quality (c’-path), B = −0.744, p = 0.05. The relationship between search coverage and selection quality is mediated by search intensity, indirect effect ab = −0.295, CI [−0.618, −0.076]. Consequently, H3 was supported for model 1. Table 2 (models 1c and 1d and models 1e and 1f) and Figure 6 display an overview of the results of the mediation analysis.
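
The indirect effect and its bootstrap confidence interval can be sketched as follows (a simplified percentile bootstrap without the control variables; `paths` is the hypothetical data frame from the regression sketch above, extended by a `search_intensity` column of level-2 view counts):

```python
import numpy as np
import statsmodels.formula.api as smf

def indirect_effect(df):
    # a-path: search coverage -> search intensity
    a_model = smf.ols("search_intensity ~ search_coverage", df).fit()
    # b-path: search intensity -> selection quality, controlling for coverage
    b_model = smf.ols("selection_quality ~ search_intensity + search_coverage", df).fit()
    return a_model.params["search_coverage"] * b_model.params["search_intensity"]

# Percentile bootstrap of the indirect effect a*b.
boot = [indirect_effect(paths.sample(frac=1, replace=True)) for _ in range(5000)]
ci = np.percentile(boot, [2.5, 97.5])
print(ci)  # paper reports ab = -0.295, CI [-0.618, -0.076] for level-2 intensity
```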

Model 2.

Mediation of search intensity represented by accesses to the third dashboard level (Search intensity: level 3)

Second, we conducted a mediation analysis to test Hypothesis 3, stating that search intensity regarding the third dashboard level mediates the relationship between search coverage and selection quality. Similar to model 1, the mediation analysis revealed that the more the applicants were compared with each other, the lower the number of views of the third dashboard level (see models 2a and 2b in Table 1). The regression coefficient of the search coverage variable is −16.049 and significant (t = −2.167; p = 0.032). Priming (occurrence of system errors) as a control variable has a significant positive effect on the number of levels viewed as well, B = 1.546 (t = 2.813; p = 0.006). Contrary to model 1, the number of views of the third dashboard level has no significant effect on selection quality. The mediation analysis revealed a significant negative effect of search coverage (IV) on search intensity (M), B = −1.082, p = 0.0030 (a-path), but no significant effect of the mediator on selection quality (DV), B = −0.003, p = 0.496 (b-path). There is no significant indirect effect, so no mediation effect was found. The relationship between search coverage and selection quality is not mediated by search intensity, indirect effect ab = 0.043, CI [−0.148, 0.2999]. Consequently, H2 and H3 were not supported for model 2. An overview of the results of the mediation analysis is shown in Table 2 (see models 2c, 2d, 2e, and 2f).

Overall, the search intensity mediates the relationship between search coverage and selection quality for model 1 (see models 1e/f in Table 2), i.e. number of views of the second dashboard level, but not for model 2 (see models 2e/f in Table 2), i.e. number of views of the third dashboard level. Consequently, H3 was supported for model 1 (Search intensity: level 2), but not for model 2 (Search intensity: level 3).

4.4 Moderation of priming

To investigate whether the relationship between search coverage and selection quality is moderated by a responsibility priming (Hypothesis 4a) or by an occurrence-of-system-errors priming (Hypothesis 4b), we conducted a moderated mediation analysis (see Table 3 and Table 4). Furthermore, our moderated mediation analysis included the investigation of whether the relationship between search coverage and search intensity is stronger for decision-makers who are primed on responsibility and/or the occurrence of system errors, which in turn leads to a stronger indirect effect on selection quality (Hypothesis 4c). This method is suitable for testing Hypotheses 4a, 4b, and 4c, as a moderated mediation analysis uses regression analyses and bootstrapping to investigate whether the indirect effect of an independent variable on a dependent variable via a mediator depends on a moderator variable (Hayes, 2021). The calculation included the following control variables: (1) gender and (2) rating score visualization.

Contrary to what we assumed, for both models, the index of moderated mediation was not significant. Our analysis revealed that priming regarding responsibility and the occurrence of system errors does not moderate the relationship between search coverage and selection quality, indicating that the strength of the relationship between search coverage (IV) and selection quality (DV) via search intensity does not depend on the moderator priming. Consequently, H4a, H4b, and H4c were not supported.
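
For a first-stage moderation of the a-path, the index of moderated mediation can be sketched as follows (a Hayes-style simplification; `paths` as in the sketches above, with a hypothetical 0/1 dummy `prime_resp` for responsibility priming):

```python
import statsmodels.formula.api as smf

# a-path with moderator: does priming change the coverage -> intensity effect?
a_model = smf.ols("search_intensity ~ search_coverage * prime_resp", paths).fit()
# b-path: intensity -> selection quality, controlling for coverage
b_model = smf.ols("selection_quality ~ search_intensity + search_coverage", paths).fit()

# Index of moderated mediation: interaction coefficient times the b-path coefficient.
index = (a_model.params["search_coverage:prime_resp"]
         * b_model.params["search_intensity"])
# Bootstrapping this index as in the mediation sketch yields its CI; a CI
# containing zero (as found here) indicates no moderated mediation.
```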

5. Discussion

This paper addresses one of the core challenges of digital HRM: how to embed AI in HRM so that decision-makers do not over-rely on AI-based decisions. We investigated decision-makers’ information search behavior when using AI in personnel selection and its impact on selection quality. To this end, we conducted an experiment using vignettes of an AI-based dashboard with multiple levels designed to support decision-makers in personnel selection.

Our study revealed decision-makers’ tendency towards status quo bias when using AI, indicating overreliance on AI-based decisions. Applicants who were ranked high by the AI were in the focus of the decision-makers and received more attention than those listed lower in the ranking. Notably, those ranked lowest in turn received less attention than those in the middle ranks. Thus, our results support Hypothesis 1, showing an unequal distribution of considered candidate profiles (i.e. subset-focused information search) and indicating the presence of status quo bias in AI-based personnel selection. Our findings are consistent with previous research, such as Basu and Savani (2017) and Chun and Larrick (2022), suggesting that decision-makers tend to pay disproportionate attention to higher ranked options. An unequal distribution of attention is an indication of status quo bias. Thus, our results support existing studies indicating the presence of status quo bias in various decision scenarios such as investment decisions (Freiburg and Grichnik, 2013) or conventional personnel selection (Thomas and Reimann, 2023). Previous research offers potential explanations for the occurrence of status quo bias, ranging from overestimation or unrealistic perception of the potential loss from change (i.e. cognitive misperception), to the avoidance of uncertainty and transition costs (rational decision-making), to the tendency to stick with the status quo if money, effort or time has been invested in it (psychological commitment) (Samuelson and Zeckhauser, 1988). In the context of our study, cognitive misperception may have favored status quo bias in that our decision-makers held on to the AI judgments instead of making an objective reassessment, while rational decision-makers avoided the complexity and uncertainty of a reassessment. Psychological commitment could explain the adherence of participating decision-makers to the ranking order as a way of justifying their prior investment in the personnel selection process. Research on status quo bias offers several countermeasures for each explanatory approach, ranging from manipulation of the default option for cognitive misperception, to the use of mental simulation for rational decision-making, to the provision of adequate information and resources for psychological commitment (Godefroid et al., 2023).

Furthermore, we observed that homogeneous search coverage was positively related to selection quality (Hypothesis 2). We identified three information search strategies that had different effects on selection quality: (1) no information search, (2) heterogeneous search coverage, and (3) homogeneous search coverage. No information search describes an approach with a low or absent search intensity and a low selection quality. In the homogeneous search coverage approach, applicants were viewed equally often, while in the heterogeneous search coverage approach, certain applicants were in focus, receiving more views compared to others. The three identified information search strategies reflect features of well-known concepts of decision theory, such as compensatory and non-compensatory (information) search strategies (Redlawsk, 2004). Previous research indicates that in high-stakes decisions such as personnel selection, decision-makers tend to use their own judgment to assess whether to apply an AI-based recommendation (Wang et al., 2022), which hints at homogeneous search coverage. At the same time, decision-makers in personnel selection often have to deal with a large number of evaluations under time and resource pressure, which makes them prone to heuristic decisions (i.e. heterogeneous search coverage). In our study, the fact that decision-makers used different information search strategies despite the same decision environment may be attributed to personal preferences, experiences, and cognitive styles.

Our analysis revealed that the more equally the applicant information was processed, the more often the second dashboard level was viewed, resulting in higher selection quality (Hypothesis 3). To our surprise, however, this result did not hold when search intensity referred to the number of views of the third dashboard level. This result is contrary to the common assumption that a compensatory search strategy (i.e. homogeneous search coverage) is associated with a greater amount of available information for decision-making and that higher information consideration leads to better decisions (Jelokhani-Niaraki and Malczewski, 2015; Redlawsk, 2004). It also stands in contrast to related studies, which found that providing additional information about alternative options can reduce status quo bias (e.g. Lorenc et al., 2013). Possible explanations for our observations are time pressure, cognitive load due to the detailed information on the third dashboard level, the tendency to rely on simplified decision heuristics, or the categorization of the third dashboard level’s information as lower priority.

Finally, our findings show that priming on responsibility (Hypothesis 4a) or on the occurrence of system errors (Hypothesis 4b) had no moderating effect. Furthermore, the strength of the relationship between search coverage and selection quality did not depend on the moderator priming (Hypothesis 4c). Surprisingly, our results differ from previous research suggesting an influence of priming on decision behavior. For example, previous research indicates that decision-makers who receive information about the performance of an AI system have less trust in it, which affects their decision-making behavior (Kim and Song, 2023). In particular, when decision-makers are informed about potential system errors of AI-based decision support systems, they are less likely to thoughtlessly adopt AI-based personnel selection recommendations, which has a positive impact on decision quality (Kupfer et al., 2023). The non-significant effect of priming on the occurrence of system errors could be explained by overconfidence in the AI system or by time constraints on the execution of the decision task. Previous research also offers a possible explanation for why we did not observe significant effects of priming on responsibility: studies indicate that performance increases when decision-makers are responsible for the decision process, but not when they are responsible for the decision outcome (Doney and Armstrong, 1996; Siegel-Jacobs and Yates, 1996).

5.1 Theoretical implications

Our study has four theoretical implications. First, we contribute to the literature on overreliance on AI-based recommendations by demonstrating the emergence of status quo bias when decision-makers use AI in personnel selection. Specifically, our study revealed that decision-makers tend to perform subset-focused information searches, which indicates status quo bias (Hypothesis 1). Previous studies have observed decision-makers’ tendency to over-rely on AI-based decisions (e.g. Jakubik et al., 2023; Jones-Jang and Park, 2022). Since the technology is not error-free and AI-based decisions may contain biases (Jakubik et al., 2023; Green, 2022), overreliance is particularly critical in high-risk use cases such as personnel selection. To address this issue, legislators and societal interest groups have identified human oversight as a key mechanism of AI governance, although its implementation in practice remains a challenge (Laux, 2023). Human oversight legally requires human investigation of AI-based decisions, but this prerequisite is often not fulfilled, as existing human oversight measures show shortcomings (Green, 2022). These shortcomings arise because humans are often unable to perform the desired monitoring tasks, increasing the risk of adopting flawed algorithms for decision-making (Green, 2022). Ineffective human oversight gives decision-makers a false sense of safety: they assume that humans are in the loop, which causes them to trust the AI more (Laux, 2023; Green, 2022). Implementing human oversight in the form of humans in the loop therefore does not eliminate the risk of overreliance; on the contrary, it may even encourage decision-makers’ tendency to over-rely on AI-based decisions. Our study indicates that whether human oversight prevents overreliance in personnel selection decisions depends on the way the AI visualizes applicant information. Since presenting results in the form of an AI-based ranking may lead decision-makers to adopt information search strategies that promote status quo bias (Hypothesis 1), we recommend presenting unranked AI-provided information, e.g. in the form of a bag of preferred choices, to encourage information search strategies characterized by homogeneous search coverage and high search intensity. Our research findings provide first insights into AI design for digital HRM. Proper AI design has the potential to overcome the current shortcomings in implementing effective human oversight over AI and to reduce overreliance.

Second, we contribute to research on status quo bias by demonstrating that visualizations in AI-based personnel selection decision support systems can trigger biases among decision-makers. To the best of our knowledge, we are among the first to investigate status quo bias in the context of AI in personnel selection. We show that status quo bias is triggered by the presentation of applicant information in an AI-based dashboard, particularly by the ranking of applicants. Depending on the ranking order, certain applicants came into the decision-maker’s focus while others fell out of it (Hypothesis 1). Our findings are in line with the literature stating that people tend to pay more attention to options presented first in a list due to the primacy effect (Basu and Savani, 2017). Since respondents were tempted to neglect the last-ranked applicants, it can be assumed that the applicants whom the system ranked lowest are most likely to “fall off the grid”. Our findings show that despite global efforts by legislators to ensure human oversight of AI (Hunkenschroer and Luetge, 2022), the way it is implemented matters. We conclude that more detailed regulatory specifications are needed on how to interpret and audit human oversight in an AI context.

Our study and the applied metrics can be used to audit the implementation of human oversight in digital HRM. Even more important from an information systems perspective, we showed that a suitable design of AI is necessary, rather than merely communicating regulatory requirements to decision-makers (as we simulated with priming). Organizations should consider redesigning AI systems so that results are not presented as a ranked list but, for example, as a randomized presentation of options. We are confident that this recommendation can mitigate status quo bias.
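As a minimal sketch of this recommendation, assuming a hypothetical per-candidate screening score (the function name, score, and threshold are ours, not part of the study materials), an AI front end could return shortlisted candidates in randomized order instead of as a ranked list:

```python
import random

def present_candidates(candidates, scores, threshold=0.7, seed=None):
    """Return AI-screened candidates above a suitability threshold in
    randomized order (a 'bag of preferred choices'), so that no applicant
    benefits from a primacy effect induced by ranking."""
    shortlisted = [c for c, s in zip(candidates, scores) if s >= threshold]
    random.Random(seed).shuffle(shortlisted)
    return shortlisted

print(present_candidates(["Ada", "Ben", "Cem", "Dia"], [0.9, 0.6, 0.8, 0.75]))
```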

Third, we contribute to information search behavior theory by identifying the information search strategies of decision-makers that lead to high (personnel) selection quality. The literature highlights that when humans make a selection decision, they apply different strategies to search and process the available information, ranging from compensatory to non-compensatory (information) search strategies (Redlawsk, 2004). In particular, the way in which different candidates are searched and processed by decision-makers influences selection quality.

The three information search strategies identified in this research are characterized by different degrees of search intensity. We show that the more equally often the applicants were viewed, the more often the second dashboard level was viewed (search intensity) and the better the achieved selection quality (Hypothesis 3). We provide first evidence that AI design might influence decision-makers’ information search strategy and thus determine selection quality.

Fourth, we contribute to research on decision-making behavior by highlighting specific design components of AI that influence decision-makers’ information search behavior in the context of personnel selection. On the one hand, by testing Hypothesis 1, we showed that decision-makers tend to focus their information search on a subset of applicant profiles (i.e. subset-focused information search) rather than considering the full set of applicant profiles (i.e. full set information search). When searching for information to make a personnel selection decision, they paid more attention to applicant profiles that were ranked higher by the AI than to those ranked lower. On the other hand, by testing Hypothesis 3, we showed that the relationship between search coverage and selection quality is mediated by search intensity only when it refers to the number of views of the second dashboard level, not the third. We conclude from these two findings that design components such as (1) the visualization of results in the form of a candidate ranking and (2) the level of data aggregation may influence the observed effects. Thus, in the context of personnel selection, AI design affects the occurrence of status quo bias and its far-reaching ethical, legal, and organizational consequences, including violations of equal opportunities, discrimination laws or (future) AI regulations as well as high opposition costs (Hunt, 2007).

5.2 Practical implications

Humans play a fundamental role in the success of digital transformation in HRM (Barišić et al., 2021), which is why we need to understand their behavior when interacting with AI (Jones-Jang and Park, 2022). To achieve AI’s full potential, companies need to create an environment in which decision-makers and AI can work together (Tian et al., 2023). Our findings shed light on decision-makers’ information search behavior when using AI in personnel selection. Specifically, we show that decision-makers interacting with an AI tend to pay more attention to candidate profiles that were ranked higher by the AI than those that were ranked lower, indicating status quo bias (Hypothesis 1). In addition, we identified three information search strategies that decision-makers use when interacting with an AI in personnel selection, which influence selection quality to different degrees (Hypothesis 2) and are characterized by different levels of search intensity (Hypothesis 3).

User guidance in AI might be a suitable approach for dealing with the detected information search strategies in the context of personnel selection. We found that the relationship between search coverage and selection quality is mediated by search intensity only when it relates to the second dashboard level, not to the third dashboard level showing the raw data (Hypothesis 3). We conclude that a certain level of data preparation (which is the goal of AI) is needed to increase selection quality, but also that a certain level of detail is needed to fulfill the requirements of human oversight. Regarding AI design, previous research has already indicated that the order in which choice options are presented to decision-makers has an impact on their decision (Rubinstein and Salant, 2006). Our study hints at the influence of AI design on information searching by showing that presenting the applicants in a ranking influenced both the distribution of decision-makers’ attention (Hypothesis 1) and their choice of information search strategy (Hypothesis 2), factors that determine selection quality. Based on our findings, we recommend designing AI-based personnel selection decision support systems with targeted user guidance that actively mitigates status quo bias by prompting decision-makers to seek, verify, and evaluate alternative information. For example, feedback mechanisms that inform decision-makers about their verification efforts, or regularly deactivating AI-based recommendations, would push them away from heuristic decisions (Yetgin et al., 2015); a sketch of such a feedback mechanism follows below. Likewise, presenting AI-based personnel selection recommendations in the form of side-by-side charts or before-and-after diagrams would encourage decision-makers to distribute their attention equally across candidate profiles, contributing to overcoming status quo bias.
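To make this kind of user guidance concrete, the following hypothetical feedback mechanism checks the logged profile views before a selection is confirmed and nudges the decision-maker towards homogeneous search coverage; the function and its threshold are illustrative assumptions, not a tested implementation from the study.

```python
def verification_feedback(view_counts, min_views_per_profile=1):
    """Report unverified applicant profiles before the decision is submitted
    (illustrative user-guidance mechanism, not the study's dashboard)."""
    unviewed = [applicant for applicant, views in view_counts.items()
                if views < min_views_per_profile]
    if unviewed:
        return ("Profiles not yet reviewed: " + ", ".join(unviewed)
                + ". Consider opening them before confirming your selection.")
    return "All applicant profiles reviewed. You may confirm your selection."

print(verification_feedback({"Applicant 1": 3, "Applicant 2": 1,
                             "Applicant 3": 0, "Applicant 4": 0}))
```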

Our results serve as a basis for organizations and decision-makers to consciously guide the choice of information search strategy in personnel selection through AI design, ensuring high selection quality and preventing overreliance on AI. By identifying concrete factors that organizations need to consider when using AI, such as the way applicant information is visualized, our study supports organizations in efficiently embedding AI in HRM while preventing overreliance. Furthermore, our research findings provide relevant insights for organizations and researchers in the current debate about AI regulations that require human oversight in high-risk areas such as personnel selection. For example, an AI presenting unranked information, e.g. in the form of a bag of preferred choices or a randomized presentation of options, could lead to equal consideration of the applicant profiles, thereby encouraging a homogeneous search coverage strategy and thus high selection quality.

Our study contributes to the education of HR managers and AI practitioners by highlighting factors that foster the occurrence of status quo bias in AI-based personnel selection. Our findings thus create awareness of the risk of HR managers’ overreliance on AI-based personnel selection recommendations. Well-designed user training is considered in the literature a promising approach for mitigating status quo bias (Godefroid et al., 2023). To the best of our knowledge, ours is one of the first studies to investigate status quo bias in the context of AI in personnel selection; it can therefore be assumed that previous training approaches have been only partially tailored to this context. Incorporating our research findings into curricula can help create awareness among HR managers of status quo bias and thus of overreliance on AI. Specifically, our study showed that (1) the order of applicants within the AI-based ranking determines the distribution of decision-makers’ attention (Hypothesis 1), (2) certain information search strategies lead decision-makers to high selection quality (Hypothesis 2), and (3) presenting results in the form of an AI-based ranking leads decision-makers to adopt information search strategies that cause status quo bias (Hypotheses 1, 2 and 3). Educational curricula can build on these results, sensitizing (future) HR managers and AI practitioners to status quo bias and guiding them to adopt information search strategies that lead to high selection quality and thus prevent overreliance on AI.

6. Limitations

Our study has two main limitations. First, the experiment was conducted with students rather than professional decision-makers. Since the use of AI in personnel selection can particularly improve the selection quality of less experienced decision-makers (Goddard et al., 2012; Langer et al., 2021), students nevertheless represent a suitable target group for investigating the information search behavior of less experienced decision-makers. The relevance of our findings to experienced decision-makers is supported by previous research indicating that the more experience a person has, the higher their susceptibility to status quo bias (Burmeister and Schade, 2007), and that the majority of HR managers hold degrees in business and/or psychology (Demir and Uyargil, 2017). Since our sample represents a group of less experienced decision-makers, it can be assumed that experienced decision-makers are even more susceptible to status quo bias. Furthermore, our sample consists mainly of students from business and psychology disciplines, who thus have both a relevant academic background and the potential to become future decision-makers in personnel selection.

Second, the experiment was not conducted in decision-makers’ real work setting. The respondents performed two personnel selection tasks in an isolated environment, each time selecting five out of ten applicants. In practice, decision-makers face more complex and labor-intensive working conditions. Digital recruiting has led to a rapid increase in the number of applications (Black and van Esch, 2020), with international companies receiving an average of 250 applications per position (Glassdoor, 2015). The increasing number of applicants can raise the effort required for information search and processing and complicate decision-making (Black and van Esch, 2020). In such multitasking environments with high workloads, decision-makers in areas such as personnel selection are often affected by automation bias as they try to save time and cognitive effort (Parasuraman and Manzey, 2010). To simulate these working conditions as closely as possible, respondents had only a limited amount of time to complete the tasks. Our experiment shows that even though decision-makers’ real-life work conditions were only artificially approximated, (status quo) biases occurred even under these comparatively favorable conditions, which indicates the generalizability of our findings.

7. Conclusion

Designing AI and integrating it into HRM processes in a way that prevents decision-makers from over-relying on AI-based decisions is a main challenge of the digital transformation. As this experimental study indicates, one reason is the existence of status quo bias triggered by AI-based rankings. Our findings shed light on the influence of AI visualizations on decision-makers’ information search behavior. We found that when AI is used in personnel selection, decision-makers’ information search strategy determines selection quality, and this relationship is mediated by search intensity. Priming decision-makers on their responsibility or on the occurrence of system errors had no moderating effect on the relationship between search coverage and selection quality. Our study offers implications for AI design in personnel selection regarding effective human oversight for fair and ethical decision-making, serving as a basis for organizations to adopt AI to improve their HR processes and systems.

Figures

Figure 1: Overview of the layout of the dashboard prototype

Figure 2: Rating score visualization in the form of an overall percentage score and a 5-point rating

Figure 3: Second dashboard level (overview)

Figure 4: Second dashboard level (detail)

Figure 5: Equality of information search

Figure 6: Mediation model (Search intensity: level 2)

Results of regression and mediator analysis

| Analysis | Regression | Mediator analysis | | | |
| Dependent | Selection quality | Search intensity: Level 2 | Selection quality | Search intensity: Level 3 | Selection quality |
| Model no. | 1 | 1a | 1b | 2a | 2b |
| Search coverage | −1.039*** (0.353) | −277.867*** (77.229) | −0.744** (0.359) | −16.049** (7.405) | −1.082*** (0.359) |
| Search intensity: level 2 | | | 0.001*** (0.001) | | |
| Search intensity: level 3 | | | | | −0.003 (0.004) |
| Gender | 0.018 (0.021) | 1.899 (4.551) | 0.016 (0.020) | 0.015 (0.436) | 0.018 (0.021) |
| Priming (occurrence of system errors) | 0.054** (0.026) | 13.794** (5.731) | 0.039 (0.026) | 1.546*** (0.549) | 0.058** (0.027) |
| Priming (responsibility) | −0.012 (0.027) | 8.366 (5.873) | −0.021 (0.026) | 0.534 (0.563) | −0.011 (0.027) |
| Rating score visualization | −0.015 (0.021) | −8.219* (4.643) | −0.007 (0.021) | −0.255 (0.445) | −0.016 (0.021) |
| Constant | 0.905*** (0.046) | 64.600*** (10.106) | 0.836*** (0.051) | 2.731*** (0.969) | 0.912*** (0.047) |
| Observations | 155 decision paths nested in 80 study participants (all models) | | | | |
| R2 | 0.113 | 0.135 | 0.161 | 0.087 | 0.116 |
| Adjusted R2 | 0.084 | 0.106 | 0.127 | 0.057 | 0.080 |
| F statistic | 3.809*** (df = 5; 149) | 4.656*** (df = 5; 149) | 4.737*** (df = 6; 148) | 2.852** (df = 5; 149) | 3.240*** (df = 6; 148) |

Source(s): Table by authors
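For readers who want to estimate models of this type, the sketch below fits a regression analogous to Model 1 (selection quality on search coverage and controls) on synthetic stand-in data, using cluster-robust standard errors to respect the nesting of 155 decision paths within 80 participants; all variable names and the generated data are our assumptions, not the authors’ materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: 155 decision paths nested in 80 participants
rng = np.random.default_rng(0)
n = 155
df = pd.DataFrame({
    "participant_id": rng.integers(0, 80, n),
    "search_coverage": rng.uniform(0.0, 0.2, n),
    "gender": rng.integers(0, 2, n),
    "priming_errors": rng.integers(0, 2, n),
    "priming_responsibility": rng.integers(0, 2, n),
    "rating_visualization": rng.integers(0, 2, n),
})
df["selection_quality"] = 0.9 - 1.0 * df["search_coverage"] + rng.normal(0, 0.1, n)

# OLS with cluster-robust standard errors for the nesting structure
model = smf.ols(
    "selection_quality ~ search_coverage + gender + priming_errors"
    " + priming_responsibility + rating_visualization",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
print(model.summary())
```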

Results of mediation analysis

| Analysis | Mediation analysis | | | |
| Dependent | Search intensity: Level 2 | Selection quality | Search intensity: Level 3 | Selection quality |
| Model no. | 1c | 1d | 2c | 2d |
| Search coverage | −277.867 (77.229) | −0.744 (0.359) | −16.049 (7.405) | −1.0817 (0.3589) |
| Search intensity: level 2 | | 0.001 (0.001) | | |
| Search intensity: level 3 | | | | −0.0027 (0.0039) |
| Gender | 1.899 (4.551) | 0.016 (0.020) | 0.0152 (0.436) | 0.0180 (0.0208) |
| Priming (occurrence of system errors) | −8.366 (5.873) | 0.021 (0.026) | −0.534 (0.563) | 0.0105 (0.0270) |
| Priming (responsibility) | 5.427 (5.542) | 0.0598 (0.025) | 1.0121 (0.5313) | 0.0682 (0.0257) |
| Rating score visualization | −8.219 (4.643) | −0.007 (0.021) | −0.2548 (0.4452) | −0.0161 (0.0213) |

| Effect | Selection quality (models 1e/f: Search intensity: Level 2) | | Selection quality (models 2e/f: Search intensity: Level 3) | |
| | Index (SE) | Boot (95% CI) | Index (SE) | Boot (95% CI) |
| Direct effect | −0.744 (0.359) | [−1.454, −0.035] | −1.082 (0.359) | [−1.791, −0.372] |
| Indirect effect | −0.295 (0.136) | [−0.618, −0.076] | 0.043 (0.114) | [−0.148, 0.2999] |

Source(s): Table by authors

Results of moderated mediation

| Effect | Priming | Selection quality (model 1g/h: Search intensity: Level 2) | | Selection quality (model 2g/h: Search intensity: Level 3) | |
| | | Index (SE) | Boot (95% CI) | Index (SE) | Boot (95% CI) |
| Conditional direct effect | occurrence of system errors | −0.742 (0.407) | [−1.545, 0.062] | −1.082 (0.412) | [−1.896, −0.269] |
| Conditional direct effect | responsibility | −0.885 (0.782) | [−2.430, 0.6597] | −1.206 (0.7997) | [−2.786, 0.375] |
| Indirect effect | occurrence of system errors | 0.321 (0.161) | [−0.716, −0.090] | 0.0197 (0.141) | [−0.241, 0.344] |
| Indirect effect | responsibility | −0.311 (0.241) | [−0.827, 0.149] | 0.009 (0.088) | [−0.164, 0.197] |
| Index of moderated mediation | | 0.010 (0.245) | [−0.445, −0.557] | −0.011 (0.134) | [−0.318, 0.266] |

Source(s): Table by authors

Results of moderation effects on priming

| Dependent variable | Search intensity: Level 2 (model 1i/j) | Search intensity: Level 3 (model 2i/j) |
| Search coverage | −274.758 (169.004) | −8.810 (16.039) |
| Priming (1 = occurrence of system errors) | 26.328** (12.142) | 3.468*** (1.152) |
| Priming (1 = responsibility) | 1.911 (11.367) | 0.181 (1.079) |
| Gender | 1.623 (4.511) | −0.022 (0.428) |
| Rating score visualization | −9.109* (4.619) | −0.359 (0.438) |
| Search coverage × priming (occurrence of system errors) | −273.031 (225.991) | −41.447* (21.448) |
| Search coverage × priming (responsibility) | 128.332 (198.861) | 6.787 (18.873) |
| Constant | 65.384*** (12.364) | 2.495** (1.173) |
| Observations | 155 decision paths nested in 80 study participants | |
| R2 | 0.162 | 0.134 |
| Adjusted R2 | 0.123 | 0.093 |
| F statistic (df = 7; 147) | 4.073*** | 3.252*** |

Note(s): *p < 0.1, **p < 0.05, ***p < 0.01

Data availability: The data supporting the results of this study are available from the first author upon request.

References

Aguinis, H. and Bradley, K.J. (2014), “Best practice recommendations for designing and implementing experimental vignette methodology studies”, Organizational Research Methods, Vol. 17 No. 4, pp. 351-371, doi: 10.1177/1094428114547952.

Bahner, J.E., Hüper, A.-D. and Manzey, D. (2008), “Misuse of automated decision aids: complacency, automation bias and the impact of training experience”, International Journal of Human-Computer Studies, Vol. 66 No. 9, pp. 688-699, doi: 10.1016/j.ijhcs.2008.06.001.

Balakrishnan, J., Dwivedi, Y.K., Hughes, L. and Boy, F. (2024), “Enablers and inhibitors of AI-powered voice assistants: a dual-factor approach by integrating the status quo bias and technology acceptance model”, Information Systems Frontiers, Vol. 26 No. 3, pp. 921-942, doi: 10.1007/s10796-021-10203-y.

Bargh, J.A. and Chartrand, T.L. (2000), “The mind in the middle. A practical guide to priming and automaticity research”, in Reis, H.T. and Judd, C.M. (Eds), Handbook of Research Methods in Social and Personality Psychology, Cambridge University Press, Cambridge, pp. 253-285.

Barišić, A.F., Barišić, J.R. and Miloloža, I. (2021), “Digital transformation: challenges for human resources management”, ENTRENOVA-ENTerprise REsearch InNOVAtion, Vol. 7 No. 1, pp. 357-366.

Basu, S. and Savani, K. (2017), “Choosing one at a time? Presenting options simultaneously helps people make more optimal decisions than presenting options sequentially”, Organizational Behavior and Human Decision Processes, Vol. 139, pp. 76-91, doi: 10.1016/j.obhdp.2017.01.004.

Bergers, D. (2022), “The status quo bias and its individual differences from a price management perspective”, Journal of Retailing and Consumer Services, Vol. 64, 102793, doi: 10.1016/j.jretconser.2021.102793.

Black, J.S. and van Esch, P. (2020), “AI-enabled recruiting: what is it and how should a manager use it?”, Business Horizons, Vol. 63 No. 2, pp. 215-226, doi: 10.1016/j.bushor.2019.12.001.

Bongard, A. (2019), “Automating talent acquisition: smart recruitment, predictive hiring algorithms, and the data-driven nature of artificial intelligence”, Psychosociological Issues in Human Resource Management, Vol. 7 No. 1, pp. 36-41.

Braun, V. and Clarke, V. (2006), “Using thematic analysis in psychology”, Qualitative Research in Psychology, Vol. 3 No. 2, pp. 77-101, doi: 10.1191/1478088706qp063oa.

Browne, G.J. and Pitts, M.G. (2004), “Stopping rule use during information search in design problems”, Organizational Behavior and Human Decision Processes, Vol. 95 No. 2, pp. 208-224, doi: 10.1016/j.obhdp.2004.05.001.

Burmeister, K. and Schade, C. (2007), “Are entrepreneurs' decisions more biased? An experimental investigation of the susceptibility to status quo bias”, Journal of Business Venturing, Vol. 22 No. 3, pp. 340-362, doi: 10.1016/j.jbusvent.2006.04.002.

Burton, A.L. (2021), “OLS (linear) regression”, in Barnes, J.C. and Forde, D.R. (Eds), The Encyclopedia of Research Methods and Statistical Techniques in Criminology and Criminal Justice, Wiley, New York, pp. 509-514.

Buçinca, Z., Malaya, M.B. and Gajos, K.Z. (2021), “To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making”, Proceedings of the ACM on Human-Computer Interaction, pp. 1-21.

Cao, G., Duan, Y., Edwards, J.S. and Dwivedi, Y.K. (2021), “Understanding managers' attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making”, Technovation, Vol. 106, 102312, doi: 10.1016/j.technovation.2021.102312.

Chaiken, S., Liberman, A. and Eagly, A.H. (1989), “Heuristic and systematic information processing within and beyond the persuasion context”, in Uleman, J.S. and Bargh, J.A. (Eds), Unintended Thought, The Guilford Press, New York, pp. 212-252.

Charlwood, A. and Guenole, N. (2022), “Can HR adapt to the paradoxes of artificial intelligence?”, Human Resoure Management Journal, Vol. 32 No. 4, pp. 729-742, doi: 10.1111/1748-8583.12433.

Chatfield, A.T. and Reddick, C.G. (2019), “Blockchain investment decision making in central banks: a status quo bias theory perspective”, Twenty-fifth Americas Conference on Information Systems.

Chun, J.S. and Larrick, R.P. (2022), “The power of rank information”, Journal of Personality and Social Psychology: Attitudes and Social Cognition, Vol. 122 No. 6, pp. 983-1003, doi: 10.1037/pspa0000289.

Cohn, A. and Maréchal, M.A. (2016), “Priming in economics”, Current Opinion in Psychology, Vol. 12, pp. 17-21, doi: 10.1016/j.copsyc.2016.04.019.

Cook, G.J. (1986), “An analysis of information search strategies for decision making”, ACM SIGMIS Database: The database for Advances in Information Systems, Vol. 17 No. 4, p. 37, doi: 10.1145/1113523.1113529.

Cook, G.J. (1993), “An empirical investigation of information search strategies with implications for decision support system design”, Decision Sciences, Vol. 24 No. 3, pp. 683-698, doi: 10.1111/j.1540-5915.1993.tb01298.x.

Das, S., Dey, A., Pal, A. and Roy, N. (2015), “Applications of artificial intelligence in machine learning: review and prospect”, International Journal of Computer Applications, Vol. 115 No. 9, pp. 31-41, doi: 10.5120/20182-2402.

Davis, J.M. and Tuttle, B.M. (2013), “A heuristic–systematic model of end-user information processing when encountering IS exceptions”, Information & Management, Vol. 50 Nos 2-3, pp. 125-133, doi: 10.1016/j.im.2012.09.004.

Debarliev, S., Janeska-Iliev, A. and Ilieva, V. (2020), “The status quo bias of students and reframing as an educational intervention towards entrepreneurial thinking and change adoption”, Economic and Business Review, Vol. 22 No. 3, pp. 363-381, doi: 10.15458/ebr105.

Demir, R. and Uyargil, C. (2017), “A study on the relation between the demographics of human resource managers in Turkey and characteristics of their companies”, International Journal of Human Resource Studies, Vol. 7 No. 4, pp. 212-230, doi: 10.5296/ijhrs.v7i4.11977.

Doney, P.M. and Armstrong, G.M. (1996), “Effects of accountability on symbolic information search and information analysis by organizational buyers”, Journal of the Academy of Marketing Science, Vol. 24 No. 1, pp. 57-65, doi: 10.1007/bf02893937.

Einhorn, H.J. and Hogarth, R.M. (1981), “Behavioral decision theory: processes of judgment and choice”, Annual Review of Psychology, Vol. 32 No. 1, pp. 53-88, doi: 10.1146/annurev.ps.32.020181.000413.

Enarsson, T., Enqvist, L. and Naarttijärvi, M. (2022), “Approaching the human in the loop–legal perspectives on hybrid human/algorithmic decision-making in three contexts”, Information & Communications Technology Law, Vol. 31 No. 1, pp. 123-153, doi: 10.1080/13600834.2021.1958860.

Fleiß, J., Bäck, E. and Thalmann, S. (2024), “Mitigating algorithm aversion in recruiting: a study on explainable AI for conversational agents”, ACM SIGMIS Database: The DATABASE for Advances in Information Systems, Vol. 55 No. 1, pp. 56-87.

Freiburg, M. and Grichnik, D. (2013), “Institutional reinvestments in private equity funds as a double-edged sword: the role of the status quo bias”, Journal of Behavioral Finance, Vol. 14 No. 2, pp. 134-148, doi: 10.1080/15427560.2013.791295.

Geng, S. (2016), “Decision time, consideration time, and status quo bias”, Economic Inquiry, Vol. 54 No. 1, pp. 433-449, doi: 10.1111/ecin.12239.

Glassdoor (2015), “50 HR & recruiting stats that make you think”, available at: https://www.glassdoor.com/employers/blog/50-hr-recruiting-stats-make-think/ (accessed 20 June 2023).

Goddard, K., Roudsari, A. and Wyatt, J.C. (2012), “Automation bias: a systematic review of frequency, effect mediators, and mitigators”, Journal of the American Medical Informatics Association, Vol. 19 No. 1, pp. 121-127, doi: 10.1136/amiajnl-2011-000089.

Godefroid, M., Plattfaut, R. and Niehaves, B. (2023), “How to measure the status quo bias? A review of current literature”, Management Review Quarterly, Vol. 73 No. 4, pp. 1667-1711, doi: 10.1007/s11301-022-00283-8.

Green, B. (2022), “The flaws of policies requiring human oversight of government algorithms”, Computer Law & Security Review, Vol. 45, 105681, doi: 10.1016/j.clsr.2022.105681.

Hayes, A.F. (2021), Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, The Guilford Press, New York, NY.

Huang, M.-H. and Rust, R.T. (2018), “Artificial intelligence in service”, Journal of Service Research, Vol. 21 No. 2, pp. 155-172, doi: 10.1177/1094670517752459.

Hunkenschroer, A.L. and Luetge, C. (2022), “Ethics of AI-enabled recruiting and selection: a review and research agenda”, Journal of Business Ethics, Vol. 178 No. 4, pp. 977-1007, doi: 10.1007/s10551-022-05049-6.

Hunt, S.T. (2007), Hiring Success: the Art and Science of Staffing Assessment and Employee Selection, Pfeiffer, San Francisco, CA.

Jackson, N.C. and Dunn-Jensen, L.M. (2021), “Leadership succession planning for today's digital transformation economy: key factors to build for competency and innovation”, Business Horizons, Vol. 64 No. 2, pp. 273-284, doi: 10.1016/j.bushor.2020.11.008.

Jakubik, J., Schöffer, J., Hoge, V., Vössing, M. and Kühl, N. (2023), “An empirical evaluation of predicted outcomes as explanations in human-AI decision-making”, in Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Springer, Cham, pp. 353-368.

Jelokhani-Niaraki, M. and Malczewski, J. (2015), “The decision task complexity and information acquisition strategies in GIS-MCDA”, International Journal of Geographical Information Science, Vol. 29 No. 2, pp. 327-344, doi: 10.1080/13658816.2014.947614.

Jones-Jang, S.M. and Park, Y.J. (2022), “How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability”, Journal of Computer-Mediated Communication, Vol. 28 No. 1, pp. 1-8, doi: 10.1093/jcmc/zmac029.

Kim, T. and Song, H. (2023), “Communicating the limitations of AI: the effect of message framing and ownership on trust in artificial intelligence”, International Journal of Human–Computer Interaction, Vol. 39 No. 4, pp. 790-800, doi: 10.1080/10447318.2022.2049134.

Kupfer, C., Prassl, R., Fleiß, J., Malin, C., Thalmann, S. and Kubicek, B. (2023), “Check the box! How to deal with automation bias in AI-based personnel selection”, Frontiers in Psychology, Vol. 14, 1118723, doi: 10.3389/fpsyg.2023.1118723.

Lai, V., Chen, C.Q., Liao, V.Q., Smith-Renner, A. and Tan, C. (2021), “Towards a science of human-AI decision making: a survey of empirical studies”.

Langer, M. and König, C.J. (2023), “Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management”, Human Resource Management Review, Vol. 33 No. 1, 100881, doi: 10.1016/j.hrmr.2021.100881.

Langer, M., König, C.J. and Busch, V. (2021), “Changing the means of managerial work: effects of automated decision support systems on personnel selection tasks”, Journal of Business and Psychology, Vol. 36 No. 5, pp. 751-769, doi: 10.1007/s10869-020-09711-6.

Lau, R.R. (1995), “Information search during an election campaign: introducing a process tracing methodology for political scientists”, Political Judgment: Structure and Process, Cambridge University, Cambridge, pp. 179-206.

Lau, R.R. (2003), “Models of decision-making”, in Sears, D.O., Huddy, L. and Jervis, R. (Eds), Oxford Handbook of Political Psychology, Oxford University Press, New York, pp. 19-59.

Lau, R.R. and Redlawsk, D.P. (2006), How Voters Decide: Information Processing in Election Campaigns, Cambridge University Press, New York.

Laux, J. (2023), “Institutionalised distrust and human oversight of artificial intelligence: toward a democratic design of AI governance under the European Union AI Act”, Working Paper, Oxford Internet Institute, Oxford, 3 March, doi: 10.2139/ssrn.4377481.

Lee, K. and Joshi, K. (2017), “Examining the use of status quo bias perspective in IS research: need for re-conceptualizing and incorporating biases”, Information Systems Journal, Vol. 27 No. 6, pp. 733-752, doi: 10.1111/isj.12118.

Lee, J.D. and See, K.A. (2004), “Trust in automation: designing for appropriate reliance”, Human Factors, Vol. 46 No. 1, pp. 50-80.

Lorenc, A., Pedro, L., Badesha, B., Dize, C., Fernow, I. and Dias, L. (2013), “Tackling fuel poverty through facilitating energy tariff switching: a participatory action research study in vulnerable groups”, Public Health, Vol. 127 No. 10, pp. 894-901, doi: 10.1016/j.puhe.2013.07.004.

Malin, C., Kupfer, C., Fleiß, J., Kubicek, B. and Thalmann, S. (2023), “In the AI of the beholder—a qualitative study of HR professionals’ beliefs about AI-based chatbots and decision support in candidate pre-selection”, Administrative Sciences, Vol. 13 No. 11, p. 231.

Manzey, D.J., Bahner, E. and Hüper, A.-D. (2006), “Misuse of automated aids in process control: complacency, automation bias and possible training interventions”, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Sage CA, Los Angeles, pp. 220-224.

Maser, B. and Weiermair, K. (1998), “Travel decision-making: from the vantage point of perceived risk and information preferences”, Journal of Travel & Tourism Marketing, Vol. 7 No. 4, pp. 107-121, doi: 10.1300/j073v07n04_06.

Mikalef, P., Conboy, K., Lundström, J.E. and Popovic, A. (2022), “Thinking responsibly about responsible AI and ‘the dark side’ of AI”, European Journal of Information Systems, Vol. 31 No. 3, pp. 257-268, doi: 10.1080/0960085x.2022.2026621.

Miles, M.B. and Huberman, A.M. (1994), Qualitative Data Analysis: an Expanded Sourcebook, Sage, Thousand Oaks, CA.

Mishra, J.L., Allen, D. and Pearman, A. (2015), “Information seeking, use, and decision making”, Journal of the Association for Information Science and Technology, Vol. 66 No. 4, pp. 662-673, doi: 10.1002/asi.23204.

Mosier, K.L. and Skitka, L.J. (1999), “Automation use and automation bias”, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Sage, Los Angeles, CA, pp. 344-348.

Mosier, K.L., Skitka, L.J., Burdick, M.D. and Heers, S.T. (1996), “Automation bias, accountability, and verification behaviors”, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Sage, Los Angeles, CA, pp. 204-208.

Nicolás-Agustín, A., Jiménez-Jiménez, D. and Maeso-Fernandez, F. (2022), “The role of human resource practices in the implementation of digital transformation”, International Journal of Manpower, Vol. 43 No. 2, pp. 395-410, doi: 10.1108/ijm-03-2021-0176.

Oberst, U., De Quintana, M., Del Cerro, S. and Chamarro, A. (2021), “Recruiters prefer expert recommendations over digital hiring algorithm: a choice-based conjoint study in a pre-employment screening scenario”, Management Research Review, Vol. 44 No. 4, pp. 625-641, doi: 10.1108/mrr-06-2020-0356.

Parasuraman, R. and Manzey, D.H. (2010), “Complacency and bias in human use of automation: an attentional integration”, Human Factors, Vol. 52 No. 3, pp. 381-410, doi: 10.1177/0018720810376055.

Payne, J.W. (1982), “Contingent decision behavior”, Psychological Bulletin, Vol. 92 No. 2, pp. 382-402, doi: 10.1037//0033-2909.92.2.382.

Pomerol, J.-C. (1997), “Artificial intelligence and human decision making”, European Journal of Operational Research, Vol. 99 No. 1, pp. 3-25, doi: 10.1016/s0377-2217(96)00378-5.

Prikshat, V., Malik, A. and Budhwar, P. (2023), “AI-augmented HRM: antecedents, assimilation and multilevel consequences”, Human Resource Management Review, Vol. 33 No. 1, 100860, doi: 10.1016/j.hrmr.2021.100860.

Redlawsk, D.P. (2004), “What voters do: information search during election campaigns”, Political Psychology, Vol. 25 No. 4, pp. 501-677, doi: 10.1111/j.1467-9221.2004.00389.x.

Rolke, W. and Gongora, C.G. (2021), “A chi-square goodness-of-fit test for continuous distributions against a known alternative”, Computational Statistics, Vol. 36 No. 3, pp. 1885-1900, doi: 10.1007/s00180-020-00997-x.

Rubinstein, A. and Salant, Y. (2006), “A model of choice from lists”, Theoretical Economics, Vol. 1 No. 1, pp. 3-17.

Samuelson, W. and Zeckhauser, R. (1988), “Status quo bias in decision making”, Journal of Risk and Uncertainty, Vol. 1 No. 1, pp. 7-59, doi: 10.1007/bf00055564.

Schemmer, M., Hemmer, P., Kühl, N., Benz, C. and Satzger, G. (2022), “Should I follow AI-based advice? Measuring appropriate reliance in human-AI decision-making”, CHI ’22 TRAIT.

Shankar, A. and Nigam, A. (2022), “Explaining resistance intention towards mobile HRM application: the dark side of technology adoption”, International Journal of Manpower, Vol. 43 No. 1, pp. 206-225, doi: 10.1108/ijm-03-2021-0198.

Siegel-Jacobs, K. and Yates, J.F. (1996), “Effects of procedural and outcome accountability on judgment quality”, Organizational Behavior and Human Decision Processes, Vol. 65, pp. 1-17, doi: 10.1006/obhd.1996.0001.

Skitka, L.J., Mosier, K. and Burdick, M.D. (2000), “Accountability and automation bias”, International Journal of Human-Computer Studies, Vol. 52 No. 4, pp. 701-717, doi: 10.1006/ijhc.1999.0349.

Soleimani, M., Intezari, A., Taskin, N. and Pauleen, D. (2021), “Cognitive biases in developing biased Artificial Intelligence recruitment system”, HICSS, Hawaii, pp. 5091-5099.

Sosulski, K. (2018), Data Visualization Made Simple. Insights into Becoming Visual, Routledge, New York, NY.

Takemura, K. and Selart, M. (2007), “Decision making with information search constraints: a process tracing study”, Behaviormetrika, Vol. 34 No. 2, pp. 111-130, doi: 10.2333/bhmk.34.111.

Testlab ApS (2020), “Preely”, available at: https://preely.com (accessed 20 June 2023).

Thomas, O. and Reimann, O. (2023), “The bias blind spot among HR employees in hiring decisions”, German Journal of Human Resource Management, Vol. 37 No. 1, pp. 5-21, doi: 10.1177/23970022221094523.

Tian, X., Pavur, R., Han, H. and Zhang, L. (2023), “A machine learning-based human resources recruitment system for business process management: using LSA, BERT and SVM”, Business Process Management Journal, Vol. 29 No. 1, pp. 202-222, doi: 10.1108/bpmj-08-2022-0389.

Wang, X., Lu, Z. and Yin, M. (2022), “Will you accept the AI recommendation? Predicting human behavior in AI-assisted decision making”, Proceedings of the ACM Web Conference.

Yau, N. (2013), Data Points: Visualization that Means Something, WILEY, Indianapolis, IN.

Yetgin, E., Jensen, M. and Shaft, T. (2015), “Complacency and intentionality in IT use and continuance”, Transactions on Human-Computer Interaction, Vol. 7 No. 1, pp. 17-42, doi: 10.17705/1thci.00064.

Yigitbasioglu, O.M. and Velcu, O. (2012), “A review of dashboards in performance management: implications for design and research”, International Journal of Accounting Information Systems, Vol. 13 No. 1, pp. 41-59, doi: 10.1016/j.accinf.2011.08.002.

Zhang, K.Z.K., Zhao, S.J., Cheung, C.M.K. and Lee, M.K.O. (2014), “Examining the influence of online reviews on consumers' decision-making: a heuristic–systematic model”, Decision Support Systems, Vol. 67, pp. 78-89, doi: 10.1016/j.dss.2014.08.005.

Acknowledgements

This research was partly funded by the Field of Excellence Smart Regulation of the University of Graz. The authors acknowledge the financial support by the University of Graz.

Corresponding author

Christine Dagmar Malin can be contacted at: christine.malin@uni-graz.at
