Abstract
Purpose
This study aims to investigate some individual factors that may positively/negatively impact upon the willingness to use AI-assisted hiring procedures (AI-WtU). Specifically, the authors contribute to the ongoing discussion by testing the specific role of individuals’ personality traits and their attitude toward technology acceptance.
Design/methodology/approach
Data have been collected from a cohort of workers (n = 157) to explore their individual level of AI-WtU, their personality traits and level of technology acceptance, along with a series of control variables including age, gender, education, employment status, knowledge and previous experience of AI-assisted hiring.
Findings
The results obtained show the significant role played by a specific personality trait – conscientiousness – and by technology acceptance in shaping the level of AI-WtU. Importantly, technology acceptance also mediates the relationship between conscientiousness and AI-WtU, thus suggesting that conscientious people may be more willing to engage in AI-assisted practices, as they see technologies as means of improving reliability and efficiency. Further, the study also shows that previous experience with AI-assisted hiring in the role of job applicants has a negative effect on AI-WtU, suggesting a prevailing negative experience with such tools and a consequent need for their improvement.
Originality/value
This study, to the best of the authors’ knowledge, is the first to test the potential role of personality traits in shaping employees’ AI-WtU and to provide a comprehensive understanding of the issue by additionally testing the joint effect of technology acceptance, age, gender, education, employment status, and knowledge and previous experience of AI-assisted hiring in shaping individual AI-WtU.
Citation
Calluso, C. and Devetag, M.G. (2024), "The impact of technology acceptance and personality traits on the willingness to use AI-assisted hiring practices", International Journal of Organizational Analysis, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/IJOA-06-2024-4562
Publisher
Emerald Publishing Limited
Copyright © 2024, Emerald Publishing Limited
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
In the past decade, technological innovations and their applications to recruitment systems have seen considerable growth. Indeed, in many big companies, AI-assisted applications – ranging from résumé screening to automated video interviews – are consistently being implemented as an aid to the recruitment process. These phenomena have favored the rise of the so-called e-HRM, namely, “a way of implementing HR strategies, policies, and practices in organizations through a conscious and directed support of and/or with the full use of web-technology-based channels” (Ruël et al., 2004, pp. 365–366). Nonetheless, the implementation of AI in hiring practices is still in its infancy; hence, a better understanding of the opportunities and challenges represented by e-HRM is both necessary and desirable.
Furthermore, the “digital revolution”, with the introduction of AI in HR practices, brings about a more general societal change that inevitably affects employees, shaping their experiences even before formally joining the organization, especially in the case of technology-based hiring practices.
In this study, we contribute to the discussion by investigating the specific role that technology acceptance and personality traits play in determining the level of acceptance of AI integration in hiring practices. To this aim, data have been collected from a cohort of workers to explore their individual level of willingness to use AI-mediated hiring procedures, along with their general level of technology acceptance and their personality traits. The results obtained point to the role played by both technology acceptance and a specific personality trait – conscientiousness – in shaping the level of willingness to use AI-mediated practices. Importantly, technology acceptance also mediates the relationship between the personality trait “conscientiousness” and the willingness to use AI-mediated practices, thus suggesting that conscientious people may be better inclined toward technology in general, and this, in turn, may increase their willingness to use AI-assisted practices, possibly because they see technologies as means of improving the reliability and efficiency of HRM practices.
Literature review
AI advances are moving tremendously fast, promising to reshape the face of many businesses. Such remarkable changes are urging organizations across the globe to take immediate advantage of these technologies in processes traditionally carried out by humans. The COVID-19 pandemic has dramatically contributed to the progressive introduction of Web-technology-assisted recruiting channels; indeed, during the pandemic, 86% of 334 interviewed HR leaders were using virtual interviews to hire candidates and believed that this may become a standard (Gartner, 2020).
The reason for such predictions lies in the evidence showing that the use of algorithms and AI solutions in hiring procedures can increase recruitment efficiency by cutting costs and reducing time spent in candidates’ screening, further allowing companies to reach a broader talent pool, as indicated in the Global Recruiting Trends 2018 LinkedIn report (Spar et al., 2018). Further, for international companies, it aids standardization of HR practices (Parry and Tyson, 2011; Ruël et al., 2004). Additionally, Howardson and Behrend (2014) suggest that those organizations that use AI for recruiting purposes may even leverage such innovativeness in their hiring approach to improve their image and attract better candidates.
Another important element fostering the implementation of AI-assisted hiring is evidence showing that such technologies may also support D&I practices (Florentine, 2016; Jora et al., 2022), by providing more consistent evaluations (Kuncel et al., 2014) and by removing the traditional biases (i.e. gender, race, etc.) that often affect human-based hiring decisions (Avery et al., 2023; Houser, 2019; Hu, 2023; Polli, 2019; Walkowiak, 2023). Nonetheless, despite the common assumption that AI algorithms are impartial and fair, many researchers are skeptical and urge caution in their use, as such algorithms may end up replicating the exact same biases that affect humans (Barocas and Selbst, 2016) or even amplifying them (Ajunwa, 2020). On the bright side, many efforts are being made to identify existing gaps and, more importantly, to establish best practices for the design, development and deployment of reliable AI-assisted practices (Hunkenschroer and Luetge, 2022; Kelan, 2023; Shams et al., 2023).
Individual perceptions of artificial intelligence-mediated hiring practices
When considering the adoption of AI practices, people’s perception is a key element. Traditionally, people tend not to trust automated, AI-assisted decision systems (Glikson and Woolley, 2020) for many reasons: firstly, people feel that AI-assisted tools are less transparent and fair compared to traditional human-to-human interactions, possibly because of scarce familiarity with such tools (Johnson and Verdicchio, 2017; Mirowska and Mesnet, 2022). Indeed, Blacksmith et al. (2016), in their meta-analysis, show that technology in employee recruitment is associated with overall negative evaluations, owing to less control over the impression given to recruiters, lack of ongoing feedback and difficulties in communication. Nonetheless, a more recent study reported that applicants overall perceived AI-mediated hiring as useful and easy to use and highlighted reduced response times as the major advantage of such technology, despite also recognizing a series of drawbacks, such as low accuracy, low reliability and an immature technology (Horodyski, 2023a). Taken together, the studies conducted so far tend to show a negative perception of AI-assisted hiring procedures on the part of applicants, which also affects hiring-related organizational outcomes; for example, it has been shown that people are less willing to apply for a job position when they know that their data will be reviewed by an AI evaluator (Mirowska and Mesnet, 2022; Wesche and Sonderegger, 2021). Hence, recognizing the factors that can shape positive or negative evaluations of AI-assisted recruiting appears particularly relevant to understanding if and to what extent these practices can be implemented to substitute human intervention.
Therefore, previous literature has tried to disentangle the variables that may contribute to shaping applicants’ perception of AI-mediated hiring practices. For example, Köchling et al. (2023) found negative affective reactions to AI-assisted recruitment when it took place in the final stages of selection, but not in earlier stages. Interestingly, some studies have highlighted how candidates’ individual characteristics may play a role in AI acceptance; indeed, Zhang and Yencha (2022) found that women are less favorable than men, and that people with higher income and education appear overall more positive toward hiring algorithms. Race also seems to play a role: Bedemariam and Wessel (2023) found that black applicants tend to have more negative general justice perceptions of AI-assisted procedures. Finally, Wiechmann and Ryan (2003) found that technical computer experience and computer anxiety affect candidates’ evaluations. Along the same lines, Gonzalez et al. (2022) found that candidates’ evaluations crucially depend on their level of familiarity with AI.
Theoretical framework: HEXACO personality model
Personality traits can be described as a set of psychological features, arising from biological and environmental factors, that give rise to cognitive, emotional and behavioral patterns (Corr and Matthews, 2009). Hence, personality represents a set of features that are relatively stable over one’s lifetime and that determine, to a considerable extent, our way of thinking, feeling and behaving. Importantly, personality features can vary greatly within an otherwise homogeneous population (Roberts et al., 2007) and can have broad-ranging consequences for many domains of life, e.g. health-related behaviors (Bogg and Roberts, 2004), leadership and motivational preferences (Calluso and Devetag, 2024), and stress and work satisfaction (Petasis and Economides, 2020).
More importantly for the current investigation, previous studies have shown that personality traits – studied using the five-factor model of personality (FFM; Goldberg, 1992) as a theoretical framework – have an impact on technology adoption. For example, openness to experience is positively associated with the use of collaborative technologies (Devaraj et al., 2008), enhances team decision-making performance in computer-assisted communication (Colquitt et al., 2002) and increases willingness to use the metaverse for socializing (Sowmya et al., 2023). Neuroticism appears to be positively correlated with increased use of mobile instant messaging (Ehrenberg et al., 2008); however, it appears to negatively impact social network use (Sriyabhand and John, 2014); along similar lines, Barnett et al. (2015) found that neuroticism is also negatively associated with perceived and actual use of Web-based classroom devices. Agreeableness has been reported to be negatively associated with instant messaging use (Ehrenberg et al., 2008) and positively associated with social networking use (Sriyabhand and John, 2014) and with willingness to use the metaverse for socializing (Sowmya et al., 2023). Extraversion is positively associated with both instant messaging use (Ehrenberg et al., 2008) and social networking behavior (Sriyabhand and John, 2014), as well as with the willingness to use metaverse technologies (Sowmya et al., 2023). Finally, conscientiousness has been reported to be negatively associated with social networking behavior (Sriyabhand and John, 2014) and positively associated with perceived and actual use of Web-based classroom technological systems (Barnett et al., 2015).
Overall, the evidence reviewed above points to the relevance of personality traits in shaping individual openness to (and willingness to use) technologies. Nonetheless, to the best of our knowledge, no studies have explored the possible role exerted by personality traits in shaping reactions to AI-assisted hiring. Here, we seek to fill this gap in the extant literature by investigating the role of personality traits. We based our investigation on the HEXACO model of personality (Ashton et al., 2004, 2006). This model represents an important update of the original FFM, which resulted from lexical studies originally conducted only in English and which identified five dimensions of personality (i.e. openness to experience, conscientiousness, neuroticism, agreeableness and extraversion); when studies were extended to other languages, a sixth factor – honesty-humility – was identified, along with slight changes in the facets of neuroticism, which was consequently replaced by the factor emotionality. Hence, we decided to use the revised version of the original model, as it is considered more generalizable across languages and is extensively used in personality research (Ashton and Lee, 2007, 2008).
Along with personality traits, in the current study we also investigate the possible effect of technology acceptance in shaping individual willingness to use AI-mediated recruitment practices. According to previous research, technology acceptance can be decomposed into two dimensions, namely, attitude-acceptance and behavior-acceptance: the first refers to an affective dimension, entailing motivation and emotion, while the second describes actual behavior (Arning and Ziefle, 2007). The behavior-acceptance component has been extensively studied from multiple perspectives, leading to the formulation of several models of technology use; the most widely used is probably the Technology Acceptance Model (TAM; Davis, 1989), together with its later development and extension, the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003). These models explain the use of technology through the acceptance of technology, which in turn is determined by variables capturing the perceived performance and ease of use of technologies and, in the case of the UTAUT, also social norms and facilitating conditions. Hence, these models neglect the possible influence of dispositional and psychological characteristics, such as personality (Neyer et al., 2012), whereas alternative approaches in technology research have already included personality and psychological considerations (Arning and Ziefle, 2007), thus focusing on the dispositional elements that contribute to the attitude-acceptance component.

Therefore, because our theoretical framework is rooted in the psychology of personality, in this work we refrained from using such models (i.e. TAM, UTAUT) and focused instead on the dispositional and psychological features that can shape individual willingness to use AI in hiring practices. To this aim, we use an integrative model of technology commitment that includes personality traits (Neyer et al., 2012, 2016). In particular, we chose to focus on one dimension of the model, namely technology acceptance, intended as an attitudinal characteristic that reflects the subjective evaluation of technological progress and accounts for the personal connection to modern technologies (rather than the assessment of their importance for society), hence manifesting itself primarily in personal interest in technical innovations (Neyer et al., 2012). In our view, this dimension is the most correlated with personality traits and the most relevant for the willingness to use AI technologies in the context of hiring procedures, as compared to the other two subscales of technology competence and technology control beliefs. Furthermore, we also test the three-way relationship between AI-WtU, technology acceptance and personality traits, under the hypothesis that technology acceptance may mediate the relationship between personality traits and AI-WtU.
Finally, we also included in our model a series of additional control variables – age, gender, education, employment status and previous experience/knowledge of AI-assisted hiring practices – to provide a more comprehensive understanding of the joint role of these features in shaping the willingness to use AI-assisted hiring (AI-WtU).
Figure 1 displays the conceptual model adopted in the current study. Nonetheless, we refrained from formulating specific predictions with respect to the direction of the effects of personality traits on technology acceptance and AI-WtU. Indeed, as reported above, previous research has reported conflicting results on the influence exerted by the various personality traits; furthermore, most studies exploring personality effects on technology use were developed using the FFM as a theoretical framework. For these reasons, we remain agnostic on the direction of these effects, if any are detected. In contrast, with respect to the effect of technology acceptance on AI-WtU, in line with previous work (Balcioğlu and Artar, 2024; Chakraborty et al., 2023; Horodyski, 2023a, 2023b; Islam et al., 2024), we predict a positive effect of technology acceptance on AI-WtU.
Methods
Participants
A total of 157 participants took part in the study (90 F, 66 M, 1 non-binary; mean age: 33.03 ± 13.19 SD; mean years of education: 16.0 ± 2.42 SD), after providing informed consent in accordance with the 1964 Declaration of Helsinki and the APA ethical standards for the treatment of human participants. Participants were informed of their right to discontinue participation at any time. Participants were recruited through various channels, including social media (i.e. LinkedIn, Facebook and Instagram), as well as direct/indirect contact. The questionnaire, formulated in both Italian and English, was generated and administered online via Qualtrics. Data were collected between May and July 2022; participants completed the online questionnaire anonymously and voluntarily. The overall description of the sample is summarized in Table 1.
Procedure and questionnaires
All the survey questions required respondents to express their level of agreement on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). First, participants were asked whether they had ever applied for a job position and taken part in a recruitment procedure conducted with the aid of AI (i.e. AI experience), and whether they were aware of the application of AI to the recruitment process (i.e. AI awareness). Then, a general definition of the use of AI in HR practices, and in recruitment and selection specifically, was deliberately included to provide a concise description of this practice to participants declaring no or poor awareness of such tools. In particular, the definition provided highlighted the main uses and the main benefits in terms of efficiency, security, transparency and timesaving.
In the following section, participants were asked a series of questions aimed at collecting their individual opinions and judgments on the use of AI in the recruitment process, reflecting their willingness to use it (i.e. AI-WtU). This scale was developed in Italian and translated into English for the specific purpose of the current study. The scale includes six items that investigate the perceived usefulness and transparency of AI-mediated practices, as well as respondents’ willingness to take part in various hiring procedures using AI-based tools (e.g. recorded interviews, chatbots, etc.). The reliability of the scale appeared appropriate (α = 0.79).
In the next section, a technology acceptance scale was administered; in particular, the technology acceptance subscale of the Technology Commitment Questionnaire was used (Neyer et al., 2016). As mentioned in the Theoretical Framework section, we decided to focus on this specific subscale because it represents an attitudinal measure that reflects the subjective evaluation of technological progress, accounting for the personal connection to modern technologies. Because no validated Italian or English translation of the scale exists, it was translated in-house. Despite this limitation, the scale appeared to be reliable (α = 0.72).
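As an illustration of how the reliabilities reported above (α = 0.79 and α = 0.72) can be computed, the following is a minimal sketch of Cronbach’s alpha for an item-response matrix; the random data and array shapes are purely illustrative and are not the study’s materials.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each single item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative call on random five-point Likert responses (157 respondents, 6 items)
rng = np.random.default_rng(0)
print(round(cronbach_alpha(rng.integers(1, 6, size=(157, 6))), 2))
```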
Finally, the validated Italian and English versions of the 24-item Brief HEXACO Inventory (De Vries, 2013) were administered to measure six major dimensions of personality. Honesty-humility: people who score high on this scale avoid using others for their own benefit, feel little temptation to break rules, are uninterested in extravagant or luxury items and do not feel any special entitlement to high social standing. Emotionality: high scores on this scale indicate stress in response to life's pressures, fear of physical danger, desire for help and support from others, empathy and sentimental bonds with others. Extraversion: high extraversion scores are associated with good self-esteem, self-assurance when speaking to or leading groups of people, enjoyment of social interactions and positive feelings of enthusiasm and energy. Agreeableness: agreeable people are tolerant in their judgment of others, prepared to compromise and work with others, have good self-control and easily forgive wrongs they have experienced. Conscientiousness: conscientious people plan their time and their physical environment, work systematically toward their objectives, aim for precision and perfection in their duties and think things through thoroughly before acting. Openness to experience: high scores on this trait indicate people enthralled by the beauty of art and nature, curious about a wide range of topics, free with their creativity in daily life and drawn to unusual ideas or people.
As a final step, participants were asked some demographic questions such as age, gender, education and employment status.
Statistical testing
To measure each variable of interest, questionnaires were scored and z-transformed prior to statistical testing.
First, we investigated the variables that may impact upon AI willingness to use (AI-WtU). To this aim, a multiple linear regression was computed using the AI-WtU score as dependent variable, with technology acceptance, AI awareness, AI experience and the six personality traits as regressors, along with the control variables age, gender, years of education and employment status.
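To make the analytical steps concrete, a minimal sketch of the z-scoring and of this first regression is given below; the file name, column names and numeric coding of the categorical controls are hypothetical, and the sketch is not the authors’ original analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import zscore

# Hypothetical data frame with one row per respondent; column names are illustrative.
df = pd.read_csv("survey_scores.csv")

# z-transform the questionnaire scores prior to testing, as described above.
scales = ["ai_wtu", "tech_acceptance", "honesty_humility", "emotionality",
          "extraversion", "agreeableness", "conscientiousness", "openness"]
df[scales] = df[scales].apply(zscore)

# Multiple linear regression: AI-WtU regressed on technology acceptance, AI awareness,
# AI experience, the six personality traits and the control variables
# (categorical controls assumed to be numerically coded for simplicity).
model = smf.ols(
    "ai_wtu ~ tech_acceptance + ai_awareness + ai_experience"
    " + honesty_humility + emotionality + extraversion + agreeableness"
    " + conscientiousness + openness + gender + age + years_education + employment_status",
    data=df,
).fit()
print(model.summary())
```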
Second, we ran an additional analysis to inspect whether technology acceptance was in turn influenced by individual personality traits; hence, a multiple regression was computed using technology acceptance as dependent variable and the six personality traits as regressors, along with the control variables age, gender, years of education and employment status.
Finally, we investigated whether a three-way relationship exists between the variables of interest; more specifically, whether technology acceptance may play a mediating role in the relationship between individual personality traits and AI-WtU. In these analyses, we focused our attention on one specific personality trait, namely, conscientiousness, as it was the only one showing a significant impact on both AI-WtU and technology acceptance (see Results section for further details). Thus, a mediation analysis was conducted using the PROCESS package (Hayes, 2015; Hayes and Preacher, 2014) for SPSS (IBM SPSS, 2021). In particular, AI-WtU was used as dependent variable, conscientiousness was used as independent variable, and technology acceptance was used as mediator. Further, participants’ age, gender, years of education, employment status, AI awareness and AI experience were included as covariates.
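Since PROCESS is an SPSS macro, the following sketch only illustrates the same logic (a product-of-coefficients indirect effect with a percentile bootstrap) in general-purpose code; the data file and column names are hypothetical, and this is an approximation of the PROCESS output rather than the authors’ actual procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")  # hypothetical file and column names, as above
covariates = "ai_experience + ai_awareness + gender + age + years_education + employment_status"

def indirect_effect(data: pd.DataFrame) -> float:
    # Path a: conscientiousness -> technology acceptance (controlling for covariates)
    a = smf.ols(f"tech_acceptance ~ conscientiousness + {covariates}", data=data).fit()
    # Path b: technology acceptance -> AI-WtU (controlling for conscientiousness and covariates)
    b = smf.ols(f"ai_wtu ~ tech_acceptance + conscientiousness + {covariates}", data=data).fit()
    return a.params["conscientiousness"] * b.params["tech_acceptance"]

# Percentile bootstrap of the indirect (a x b) effect
rng = np.random.default_rng(1)
boot = []
for _ in range(5000):
    resample = df.iloc[rng.integers(0, len(df), size=len(df))]
    boot.append(indirect_effect(resample))

print("indirect effect:", indirect_effect(df))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```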
Results
Means and standard deviations of the raw scores collected across all the scales and subscales of the survey are displayed in Table 2.
Results of the regressions
The results of the regression analysis conducted using the AI-WtU as dependent variable are summarized in Table 3.
The results highlighted that the effect of technology acceptance was statistically significant (β = 0.430, t = 6.019, p < 0.001), indicating that higher technology acceptance brings about higher AI-WtU in the hiring process. We also found a specific personality trait, namely conscientiousness, to exert a significant effect on AI-WtU (β = 0.186, t = 2.654, p < 0.001), indicating that more conscientious people show higher acceptance of AI tools in the recruitment process. Further, previous experience with AI recruiting tools was found to yield a marginally significant effect (β = −0.292, t = −1.961, p = 0.052), indicating that people who had already experienced AI tools in recruiting had a lower acceptance of AI; this result may be due to previous negative experiences with the AI-assisted hiring process. Finally, we found employment status to have a marginally negative effect on AI-WtU (β = −0.096, t = −1.950, p = 0.053); this effect indicates that people employed full- or part-time showed a higher acceptance of AI compared to people looking for a job, interns/trainees or unemployed people.
A second multiple regression was computed to investigate whether technology acceptance was also influenced by personality traits and other control variables. The results of this analysis are summarized in Table 4.
The results indicated that, for technology acceptance as well, the personality trait exerting a significant positive impact was conscientiousness (β = 0.210, t = 2.637, p < 0.001), suggesting that conscientious people are characterized by higher technology acceptance. A marginally significant effect was found for honesty-humility (β = −0.136, t = −1.729, p = 0.086), indicating that people scoring high in this trait are more reluctant toward the use of technology. Finally, age was also found to have a marginal impact (β = 0.013, t = 1.870, p = 0.063), which counterintuitively suggests that older people are characterized by higher technology acceptance compared to younger ones.
Mediation results
Based on the results of the regression analyses, we tested whether more complex relationships exist between the variables of interest; more specifically, we focused on the personality trait “conscientiousness”, which was found to be significantly associated with both AI-WtU and technology acceptance. We thus tested whether technology acceptance exerted a mediating effect in the relationship between conscientiousness and AI-WtU.
Hence, a mediation analysis with AI-WtU as dependent variable, conscientiousness as independent variable, and technology acceptance as mediator was computed, while AI experience, AI awareness, age, gender, years of education, and employment status were included as covariates. The results are summarized in Table 5. The model was overall significant (R = 0.611, R² = 0.373, F(8, 148) = 11.020, p < 0.001); specifically, the direct effect of conscientiousness was significant (β = 0.176, t = 2.616, p < 0.01, LLCI = 0.043, ULCI = 0.310), along with the direct effect of technology acceptance (β = 0.427, t = 6.090, p < 0.001, LLCI = 0.288, ULCI = 0.565).
Regarding the covariates, the effects of AI experience (β = −0.298, t = −1.998, p < 0.05, LLCI = −0.593, ULCI = −0.003) and employment status (β = −0.102, t = −2.079, p < 0.05, LLCI = −0.198, ULCI = −0.005) were found to be significant, replicating the results observed in the regression analysis.
More importantly, the indirect effect of conscientiousness on AI-WtU (i.e. the effect mediated by technology acceptance) was indeed significant (β = 0.091, LLCI = 0.020, ULCI = 0.163) (Figure 2).
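As a rough consistency check, under the standard product-of-coefficients formulation of mediation the indirect effect equals the path from the independent variable to the mediator multiplied by the path from the mediator to the outcome; using the coefficients reported above as approximations (about 0.21 for conscientiousness on technology acceptance and 0.43 for technology acceptance on AI-WtU), the product is about 0.09, in line with the estimated indirect effect of 0.091.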
Discussion
The main aim of this study was to investigate the individual determinants of acceptance of AI-assisted recruitment processes. To this aim we focused on the HEXACO model of personality; secondly, we also investigated the possible effect of individuals’ level of technology acceptance, namely people’s general inclination and attitude toward the use of technologies in their everyday life.
The results of the analysis show a statistically significant effect of the personality trait “conscientiousness”, indicating that people high in this trait are better inclined toward the integration of AI tools in hiring. A possible explanation is that conscientious people are characterized by self-discipline, a sense of duty and a striving for achievement; they also tend to take tasks and people seriously and hence display planned behavior to ensure the proper realization of their objectives. Hence, people with these characteristics may find in the integration of AI into recruiting a source of reliability, accuracy and absence of judgment biases, thus showing better acceptance.
Regarding technology acceptance, it was indeed a predictor of AI-WtU in recruitment; hence, people who are generally more willing to integrate technologies in their everyday life are also better inclined toward their application to the hiring process. This is also in line with a previous study showing that favorable reactions toward an AI-assisted approach in hiring generally depended on participants’ familiarity with AI (Gonzalez et al., 2022). Additionally, we found that previous experience with AI-mediated recruitment exerted a marginal but negative impact upon its acceptance, suggesting that those who have already undergone such procedures may have had a negative experience of the process. Nonetheless, the quantitative nature of the study prevented us from investigating the ways in which respondents’ experience with AI-assisted hiring fell short. Nevertheless, some considerations are possible based on the current state of the art regarding AI implementation in recruitment: AI is still a novel element in most organizations, and gold-standard rules for its application are still missing. Furthermore, only a few technologically advanced companies use it extensively. Thus, complexities may still exist in the optimal use of such resources, as well as in the exploitation of their full potential.
Finally, this analysis also shows a marginal effect of employment status; specifically, people looking for employment, trainees/interns and unemployed people show lower AI-WtU in recruiting. This result can be understood considering that these people, as compared to part-time or full-time employees, are more likely to actually face such recruitment processes; hence, they may have more concerns about the actual impact of these procedures on their current ability to find employment. Conversely, our analysis shows no effect of either gender or education on the level of acceptance of AI-assisted hiring, in contrast with previous literature showing more negative evaluations from women and more positive evaluations from more educated people (Zhang and Yencha, 2022). The absence of a gender effect on the willingness to use AI-assisted hiring is somewhat surprising, as one would expect that people more exposed to discrimination (i.e. women) would show more positive evaluations of AI-assisted recruiting. Nonetheless, the lack of this effect may be due to the overall still high level of skepticism associated with the use of AI algorithms in HR practices.
In a second step, we conducted an additional regression analysis to verify whether personality traits may exert an impact upon the more general construct of technology acceptance. Understanding this impact is relevant given the pivotal role that technology acceptance plays in AI-WtU. The results reveal that, as in the case of AI-WtU, conscientiousness has a significant positive influence on technology acceptance, suggesting that people scoring high in this trait may find in technology an aid for efficiency, accuracy and reliability. Additionally, we also found a marginally negative effect of the honesty-humility trait, in line with previous studies showing that honesty-humility is negatively associated with technology acceptance (Sindermann et al., 2020; Weger et al., 2022). A possible interpretation of these findings is that people high in honesty-humility may see technologies as unnecessary symbols of luxury and wealth; hence, given their tendency to refrain from pursuing wealth and social status goals, they may develop a negative opinion of the massive use of technologies in sectors – such as hiring – traditionally dominated by human-to-human interactions.
Finally, in the case of technology acceptance, we also found a marginally significant effect of age, indicating that older people are, surprisingly, better inclined toward the use of technology. However, it is important to note that despite the wide age range of this study, most participants were 20 to 35 years old, and this may have affected the results.
Lastly, based on the results of the regression analyses, we aimed at further exploring the relationship between AI-WtU, technology acceptance and conscientiousness, as the latter was found to significantly impact both AI-WtU and technology acceptance. Hence, we hypothesized that technology acceptance may exert a mediating role between conscientiousness and AI-WtU. The results indeed confirm our prediction, showing that besides having a direct impact on AI-WtU, conscientiousness also had an indirect impact mediated by technology acceptance. Given their characteristics, conscientious people may perceive AI tools as a means of guaranteeing a trustworthy selection process, thus finding them more appealing than people with different personality profiles do. These findings are in line with a previous study reporting positive correlations between conscientiousness and both perceived usefulness and intention to use technology (Sindermann et al., 2020). Furthermore, conscientiousness was found to positively predict both perceived (i.e. self-reported) and actual use of technologies, as conscientious individuals tend to set goals and engage in behaviors, such as technology use, that help them succeed (Barnett et al., 2015). Along the same lines, conscientiousness has been linked with motivation to learn (Major et al., 2006) and willingness to participate in development training (Simmering et al., 2003), suggesting that conscientious people are willing to engage with tools that can help them improve and reach their goals.
Implications
From a theoretical standpoint, the study highlights the usefulness of alternative models for interpreting individuals’ willingness to engage with technologies. Indeed, most existing studies in the broader field of technology acceptance rely on models that focus specifically on the behavior-acceptance of technologies (e.g. TAM, UTAUT), often neglecting the affective component of attitude-acceptance, which is more strongly rooted in personality and psychological characteristics. The results of the current investigation highlight the relevance of these features for understanding the possible ways in which individuals shape their perception and representation of technology and its impact on our daily life.
Overall, the results reported in this study highlight the importance of considering individual personality traits in the adoption and integration of technologies, and specifically of artificial intelligence, in the recruitment process. More specifically, we found that conscientiousness appears to be a good predictor of AI-WtU, both directly and through the indirect effect of technology acceptance. This result, in line with previous literature, suggests that AI and tech tools for hiring should be designed and improved with the aim of, first and foremost, optimizing the efficiency and reliability of hiring procedures. Furthermore, investing in the efficiency and reliability of AI may help mitigate the negative evaluation of such technologies by people expressing ethical concerns about the use of AI in general and, in particular, by individuals scoring high in honesty-humility, which decreases their AI-WtU; thus, leveraging efficiency and reliability may help reframe the conception of technologies from symbols of luxury to useful and reliable tools for HR practices. Introducing reliable AI technology in hiring is particularly relevant in the current business scenario also in light of the ever-increasing importance of guaranteeing diversity and inclusion; indeed, AI may bring about the advantage of ensuring a much fairer hiring process, free from the widespread biases often reported in the empirical literature on hiring (see Lippens et al., 2022 for a review).
Another important point concerns the value of investing resources in technological and AI literacy; indeed, the general construct of technology acceptance plays a key role in the successful integration of AI in hiring practices, especially considering that this sector is traditionally based on human-to-human interactions.
Finally, it is worth mentioning that the results of this study suggest that previous experience with AI in recruitment in the role of job applicant is a negative predictor of AI-WtU, highlighting that those who have already experienced AI in hiring were not fully satisfied with its use. Hence, despite the marginal effect, this result points toward the importance of investigating the motives behind such dissatisfaction, with the aim of improving AI tools and adequately training the HR experts who will use them, thus making it possible to take advantage of all the possible benefits of AI while avoiding or minimizing the potential risks (HR Research Institute, 2019). In conclusion, it appears particularly important to consider the impact that technology integration in HR practices may have on employees, with particular attention to the possible – even unintended – negative consequences.
Limitations and future research
This study has some limitations. First, we acknowledge that in the current research we limited our focus to the personality dimensions that may impact the willingness to use AI-mediated hiring; with respect to the construct of technology acceptance, too, the current investigation only focuses on attitude-acceptance, which is the element of the construct shaped by affective reactions to technology and, as such, the one most affected by the personality and psychological characteristics of potential applicants. In doing so, this study neglected additional variables that may have an important impact on perceptions of AI-assisted recruitment practices. For example, Sowmya et al. (2024) highlighted the influential role played by privacy and security concerns in enhancing consumers’ intention to adopt matrimonial apps; similarly, privacy and security concerns may represent key variables in determining individuals’ willingness to use AI-mediated technologies in hiring; hence, future research should systematically investigate the possible impact of these variables. Furthermore, future research should also focus on how to improve the reliability and accuracy of AI systems, in line with the findings of Horodyski (2023a), who reported that the lack of the nuance typical of human judgment, low accuracy and reliability, and immature technology represented the major drawbacks identified by applicants with respect to AI in recruitment. More generally, our study focused on the determinants of individuals’ attitude toward AI; however, borrowing Chater and Loewenstein's (2022) distinction between I-frame and S-frame, we believe ours and others’ I-frame perspective should be complemented by urgent S-frame reflections on the consequences, both positive and negative, of AI-assisted organizational practices at the societal level before AI is massively introduced in organizational settings.
A second limitation of the current study is its reliance on self-reported data, which may introduce a potential for bias. Future research should therefore focus on replicating and expanding these findings by means of additional research methods. For example, in-depth interviews may help in understanding the reasons behind the negative perception of applicants who have already faced AI-assisted hiring practices, in order to identify the crucial aspects that lead to negative experiences. Furthermore, lab and field experiments may help in understanding the factors at play in the context of simulated hiring practices. For example, the HEXACO personality framework may also be applied in lab or field experiments involving the analysis of job openings disclosing (or not) the use of AI-mediated tools in the selection procedure, to understand how personality traits affect the evaluation process and the willingness to use AI.
A final limitation of the study resides in its sample size. Despite being in line with the sample sizes of many other comparable studies, the cohort may not adequately represent the diverse range of individuals in the workforce. Hence, future studies should engage a larger number of participants and investigate additional differences across them, such as cultural differences.
Figures
Demographic description of the sample
Participants’ characteristics | N | % |
---|---|---|
Gender | ||
F | 90 | 58 |
M | 66 | 41 |
NB | 1 | 1 |
Language | ||
Italian | 148 | 94 |
English | 9 | 6 |
Job | ||
Employed full-time | 68 | 43 |
Intern/trainee | 45 | 28 |
Employed part-time | 22 | 14 |
Looking for employment | 15 | 9 |
Currently unemployed (not looking for employment) | 8 | 5 |
Industry | ||
Consultancy | 51 | 32 |
Clerical | 23 | 15 |
Health | 14 | 9 |
Finance | 11 | 7 |
Research | 9 | 6 |
Education | 7 | 4 |
Marketing | 7 | 4 |
Public service | 5 | 3 |
Entertainment | 3 | 2 |
Other | 5 | 3 |
None (unemployed) | 23 | 15 |
Experience with AI recruitment | ||
No | 104 | 66 |
Yes | 54 | 34 |
Awareness of AI recruitment | ||
Yes | 89 | 56 |
No | 69 | 44 |
Source: Authors’ own production
Descriptive statistics – minimum and maximum value, average and standard deviation of all the variables of interest
Variables | Min. | Max. | Average | SD |
---|---|---|---|---|
Age | 18.00 | 64.00 | 33.03 | 13.19 |
Education | 13.00 | 21.00 | 16.06 | 2.42 |
AI-WtU | 6.00 | 26.00 | 15.35 | 4.47 |
Technology acceptance | 4.00 | 16.00 | 9.56 | 2.72 |
Honesty-Humility | 4.00 | 15.00 | 8.95 | 2.23 |
Emotionality | 6.00 | 20.00 | 12.13 | 2.74 |
Extraversion | 4.00 | 15.00 | 8.29 | 2.24 |
Agreeableness | 6.00 | 19.00 | 12.61 | 2.60 |
Conscientiousness | 4.00 | 15.00 | 8.80 | 2.32 |
Openness to experience | 4.00 | 15.00 | 9.00 | 2.58 |
Source: Authors’ own production
Results of the regression analysis conducted using AI-WtU as dependent variable
Effects | β | SE | t |
---|---|---|---|
Technology acceptance | 0.430 | 0.071 | 6.019*** |
AI experience | −0.292 | 0.149 | −1.961⸸ |
AI awareness | 0.112 | 0.140 | 0.804 |
Honesty-Humility | −0.012 | 0.068 | −0.174 |
Emotionality | −0.046 | 0.074 | −0.624 |
Extraversion | −0.080 | 0.070 | −1.137 |
Agreeableness | 0.064 | 0.073 | 0.882 |
Conscientiousness | 0.186 | 0.070 | 2.654*** |
Openness to experience | −0.125 | 0.069 | −1.820 |
Gender | 0.063 | 0.143 | 0.443 |
Age | 0.003 | 0.006 | 0.501 |
Years of education | −0.004 | 0.029 | −0.132 |
Employment status | −0.096 | 0.049 | −1.950⸸ |
***p < 0.001; ⸸p < 0.1
Source: Authors’ own production
Results of the regression analysis conducted using technology acceptance as dependent variable
Effects | β | SE | t |
---|---|---|---|
Honesty-Humility | −0.136 | 0.079 | −1.729⸸ |
Emotionality | 0.030 | 0.085 | 0.351 |
Extraversion | 0.104 | 0.081 | 1.273 |
Agreeableness | 0.131 | 0.084 | 1.555 |
Conscientiousness | 0.210 | 0.080 | 2.637*** |
Openness to experience | −0.010 | 0.080 | −0.131 |
Gender | −0.052 | 0.164 | −0.318 |
Age | 0.013 | 0.007 | 1.870⸸ |
Years of education | −0.041 | 0.033 | −1.228 |
Employment status | −0.014 | 0.056 | −0.256 |
***p < 0.001; ⸸p < 0.1
Source: Authors’ own production
Results of the mediation analysis conducted using AI-WtU as dependent variable, conscientiousness as independent variable and technology acceptance as mediator
Effects | β | SE | t | LLCI | ULCI |
---|---|---|---|---|---|
Direct effects | |||||
Conscientiousness | 0.18 | 0.07 | 2.62** | 0.04 | 0.31 |
Technology acceptance | 0.43 | 0.07 | 6.09*** | 0.29 | 0.56 |
AI experience | −0.30 | 0.15 | −2.00* | −0.59 | 0.00 |
AI awareness | 0.08 | 0.14 | 0.55 | −0.20 | 0.35 |
Gender | 0.05 | 0.13 | 0.38 | −0.21 | 0.31 |
Age | 0.00 | 0.01 | 0.66 | −0.01 | 0.02 |
Years of education | 0.00 | 0.03 | 0.04 | −0.06 | 0.06 |
Employment status | −0.10 | 0.05 | −2.08* | −0.20 | −0.01 |
Indirect effect | |||||
Technology acceptance | 0.09 | 0.04 | – | 0.02 | 0.16 |
***p < 0.001; **p < 0.01; *p < 0.05
Source: Authors’ own production
References
Ajunwa, I. (2020), “The paradox of automation as anti-bias intervention”, Cardozo Law Review, Vol. 41 No. 5, pp. 1671-1742.
Arning, K. and Ziefle, M. (2007), “Understanding age differences in PDA acceptance and performance”, Computers in Human Behavior, Vol. 23 No. 6, pp. 2904-2927, doi: 10.1016/j.chb.2006.06.005.
Ashton, M.C. and Lee, K. (2007), “Empirical, theoretical, and practical advantages of the HEXACO model of personality structure”, Personality and Social Psychology Review, Vol. 11 No. 2, pp. 150-166, doi: 10.1177/1088868306294907.
Ashton, M.C. and Lee, K. (2008), “The HEXACO model of personality structure and the importance of the H factor”, Social and Personality Psychology Compass, Vol. 2 No. 5, pp. 1952-1962, doi: 10.1111/j.1751-9004.2008.00134.x.
Ashton, M.C., Lee, K., de Vries, R.E., Perugini, M., Gnisci, A. and Sergi, I. (2006), “The HEXACO model of personality structure and indigenous lexical personality dimensions in Italian, Dutch, and English”, Journal of Research in Personality, Vol. 40 No. 6, pp. 851-875, doi: 10.1016/j.jrp.2005.06.003.
Ashton, M.C., Perugini, M., De Vries, R.E., Boies, K., Lee, K., Szarota, P., Di Blas, L. and De Raad, B. (2004), “A Six-Factor structure of Personality-Descriptive adjectives: Solutions from psycholexical studies in seven languages”, Journal of Personality and Social Psychology, Vol. 86 No. 2, pp. 356-366, doi: 10.1037/0022-3514.86.2.356.
Avery, M., Leibbrandt, A. and Vecci, J. (2023), “Does artificial intelligence help or hurt gender diversity? Evidence from two field experiments on recruitment in tech”, SSRN, doi: 10.2139/ssrn.4370805.
Balcioğlu, Y.S. and Artar, M. (2024), “Artificial intelligence in employee recruitment”, Global Business and Organizational Excellence, Vol. 43 No. 5, doi: 10.1002/joe.22248.
Barnett, T., Pearson, A.W., Pearson, R. and Kellermanns, F.W. (2015), “Five-factor model personality traits as predictors of perceived and actual usage of technology”, European Journal of Information Systems, Vol. 24 No. 4, pp. 374-390, doi: 10.1057/ejis.2014.10.
Barocas, S. and Selbst, A.D. (2016), “Big data’s disparate impact”, California Law Review, Vol. 104 No. 671, pp. 671-732.
Bedemariam, R. and Wessel, J.L. (2023), “The roles of outcome and race on applicant reactions to AI systems”, Computers in Human Behavior, Vol. 148, p. 107869, doi: 10.1016/j.chb.2023.107869.
Blacksmith, N., Willford, J.C. and Behrend, T.S. (2016), “Technology in the employment interview: a meta-analysis and future research agenda”, Personnel Assessment and Decisions, Vol. 2 No. 1, pp. 8-14, doi: 10.25035/pad.2016.002.
Bogg, T. and Roberts, B.W. (2004), “Conscientiousness and health-related behaviors: a meta-analysis of the leading behavioral contributors to mortality”, Psychological Bulletin, Vol. 130 No. 6, pp. 887-919, doi: 10.1037/0033-2909.130.6.887.
Calluso, C. and Devetag, M.G. (2024), “The impact of leadership preferences and personality traits on employees’ motivation”, Evidence-Based HRM, doi: 10.1108/EBHRM-01-2023-0023.
Chakraborty, D., Babu Singu, H., Kumar Kar, A. and Biswas, W. (2023), “From fear to faith in the adoption of medicine delivery application: an integration of SOR framework and IRT theory”, Journal of Business Research, Vol. 166, p. 114140, doi: 10.1016/j.jbusres.2023.114140.
Chater, N. and Loewenstein, G. (2022), “The i-frame and the s-frame: how focusing on individual-level solutions has led behavioral public policy astray”, Behavioral and Brain Sciences, Vol. 46 No. 2, pp. 1-45, doi: 10.1017/S0140525X22002023.
Colquitt, J.A., Hollenbeck, J.R., Ilgen, D.R., Lepine, J.A. and Sheppard, L. (2002), “Computer-assisted communication and team decision-making performance: the moderating effect of openness to experience”, Journal of Applied Psychology, Vol. 87 No. 2, pp. 402-410, doi: 10.1037/0021-9010.87.2.402.
Corr, P.J. and Matthews, G. (2009), The Cambridge Handbook of Personality Psychology, Cambridge University Press, Cambridge, UK.
Davis, F.D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13 No. 3, pp. 319-339, doi: 10.2307/249008.
De Vries, R.E. (2013), “The 24-item brief HEXACO inventory (BHI)”, Journal of Research in Personality, Vol. 47 No. 6, pp. 871-880, doi: 10.1016/j.jrp.2013.09.003.
Devaraj, U.S., Easley, R.F. and Michael Crant, J. (2008), “How does personality matter? Relating the five-factor model to technology acceptance and use”, Information Systems Research, Vol. 19 No. 1, pp. 93-105, doi: 10.1287/isre.1070.0153.
Ehrenberg, A., Juckes, S., White, K.M. and Walsh, S.P. (2008), “Personality and Self-Esteem as predictors of young people’s technology use”, CyberPsychology and Behavior, Vol. 11 No. 6, pp. 739-741, doi: 10.1089/cpb.2008.0030.
Florentine, S. (2016), “How artificial intelligence can eliminate bias in hiring”, Cio, pp. 1-20, available at: www.cio.com/article/236913/how-artificial-intelligence-can-eliminate-bias-in-hiring.html
Gartner (2020), “Gartner HR survey shows 86% of organizations are conducting virtual interviews to hire candidates During coronavirus pandemic”, available at: www.gartner.com/en/newsroom/press-releases/2020-04-30-gartner-hr-survey-shows-86–of-organizations-are-cond
Glikson, E. and Woolley, A.W. (2020), “Human trust in artificial intelligence: Review of empirical research”, Academy of Management Annals, Vol. 14 No. 2, pp. 627-660, doi: 10.5465/annals.2018.0057.
Goldberg, L.R. (1992), “Development of markers for the Big-Five factor structure”, Psychological Assessment, Vol. 4 No. 1, pp. 26-42.
Gonzalez, M.F., Liu, W., Shirase, L., Tomczak, D.L., Lobbe, C.E., Justenhoven, R. and Martin, N.R. (2022), “Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes”, Computers in Human Behavior, Vol. 130, p. 107179, doi: 10.1016/j.chb.2022.107179.
Hayes, A.F. (2015), “Hacking PROCESS for estimation and probing of linear moderation of quadratic effects and quadratic moderation of linear effects”, The Ohio State University, available at: www.afhayes.com/public/quadratichack.pdf
Hayes, A.F. and Preacher, K.J. (2014), “Statistical mediation analysis with a multicategorical independent variable”, British Journal of Mathematical and Statistical Psychology, Vol. 67 No. 3, pp. 451-470, doi: 10.1111/bmsp.12028.
Horodyski, P. (2023a), “Applicants’ perception of artificial intelligence in the recruitment process”, Computers in Human Behavior Reports, Vol. 11, p. 100303, doi: 10.1016/j.chbr.2023.100303.
Horodyski, P. (2023b), “Recruiter’s perception of artificial intelligence (AI)-based tools in recruitment”, Computers in Human Behavior Reports, Vol. 10, p. 100298, doi: 10.1016/j.chbr.2023.100298.
Houser, K. (2019), “What?’, ‘why?’, and ‘how?’: an argument for employing physiological techniques in research about visual response to light”, LEUKOS, Vol. 15 No. 1, pp. 1-2, doi: 10.1080/15502724.2019.1560722.
Howardson, G.N. and Behrend, T.S. (2014), “Using the internet to recruit employees: Comparing the effects of usability expectations and objective technological characteristics on internet recruitment outcomes”, Computers in Human Behavior, Vol. 31 No. 1, pp. 334-342, doi: 10.1016/j.chb.2013.10.057.
HR Research Institute (2019), “The 2019 state of artificial intelligence in talent acquisition - get ahead of the curve in an area that promises high impact and dramatic growth”, available at: www.hr.com/en/magazines/all_articles/research-summary---the-2019-state-of-artificial-in_jxbccezp.html
Hu, Q. (2023), “Unilever‘s practice on AI-based recruitment”, Highlights in Business, Economics and Management, Vol. 16, pp. 256-263, doi: 10.54097/hbem.v16i.10565.
Hunkenschroer, A.L. and Luetge, C. (2022), “Ethics of AI-enabled recruiting and selection: a review and research agenda”, Journal of Business Ethics, Vol. 178 No. 4, doi: 10.1007/s10551-022-05049-6.
IBM SPSS (2021), “SPSS statistics 28”.
Islam, M., Rahman, M.M., Taher, M.A., Quaosar, G.M.A.A. and Uddin, M.A. (2024), “Using artificial intelligence for hiring talents in a moderated mechanism”, Future Business Journal, Vol. 10 No. 1, doi: 10.1186/s43093-024-00303-x.
Johnson, D.G. and Verdicchio, M. (2017), “Reframing AI discourse”, Minds and Machines, Vol. 27 No. 4, pp. 575-590, doi: 10.1007/s11023-017-9417-6.
Jora, R.B., Sodhi, K.K., Mittal, P. and Saxena, P. (2022), “Role of artificial intelligence (AI) In meeting diversity, equality and inclusion (DEI) goals”, 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), IEEE, Coimbatore, India, pp. 1687-1690.
Kelan, E.K. (2023), “Algorithmic inclusion: Shaping the predictive algorithms of artificial intelligence in hiring”, Human Resource Management Journal, pp. 1-14, doi: 10.1111/1748-8583.12511.
Köchling, A., Wehner, M.C. and Warkocz, J. (2023), “Can I show my skills? Affective responses to artificial intelligence in the recruitment process”, Review of Managerial Science, Vol. 17 No. 6, pp. 2109-2138, doi: 10.1007/s11846-021-00514-4.
Kuncel, N.R., Klieger, D.M. and Ones, D.S. (2014), “In hiring, algorithms beat instinct”, Harvard Business Review, May, available at: https://hbr.org/2014/05/in-hiring-algorithms-beat-instinct
Lippens, L., Vermeiren, S. and Baert, S. (2022), “The state of hiring discrimination: a meta-analysis of (almost) all recent correspondence experiments”, European Economic Review, Vol. 151, p. 104315, doi: 10.1016/j.euroecorev.2022.104315.
Major, D.A., Turner, J.E. and Fletcher, T.D. (2006), “Linking proactive personality and the big five to motivation to learn and development activity”, Journal of Applied Psychology, Vol. 91 No. 4, pp. 927-935, doi: 10.1037/0021-9010.91.4.927.
Mirowska, A. and Mesnet, L. (2022), “Preferring the devil you know: Potential applicant reactions to artificial intelligence evaluation of interviews”, Human Resource Management Journal, Vol. 32 No. 2, pp. 364-383, doi: 10.1111/1748-8583.12393.
Neyer, F.J., Felber, J. and Gebhardt, C. (2012), “Entwicklung und validierung einer kurzskala zur erfassung von technikbereitschaft”, Diagnostica, Vol. 58 No. 2, pp. 87-99, doi: 10.1026/0012-1924/a000067.
Neyer, F.J., Felber, J. and Gebhardt, C. (2016), “Kurzskala zur Erfassung von Technikbereitschaft (technology commitment)”, Zusammenstellung Sozialwissenschaftlicher Items Und Skalen (ZIS).
Parry, E. and Tyson, S. (2011), “Desired goals and actual outcomes of e-HRM”, Human Resource Management Journal, Vol. 21 No. 3, pp. 335-354, doi: 10.1111/j.1748-8583.2010.00149.x.
Petasis, A. and Economides, O. (2020), “The big five personality traits, occupational stress, and job satisfaction”, European Journal of Business and Management Research, Vol. 5 No. 4, pp. 1-7, doi: 10.24018/ejbmr.2020.5.4.410.
Polli, F. (2019), “Using AI to eliminate bias from hiring”, Harvard Business Review, available at: https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring
Roberts, B.W., Kuncel, N.R., Shiner, R., Caspi, A. and Goldberg, L.R. (2007), “The power of personality. The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes”, Perspectives on Psychological Science, Vol. 2 No. 4, pp. 313-345.
Ruël, H., Bondarouk, T. and Looise, J.K. (2004), “E-HRM: Innovation or irritation. An explorative empirical study in five large companies on web-based HRM”, Management Revue, Vol. 15 No. 3, pp. 364-380, doi: 10.5771/0935-9915-2004-3-364.
Shams, R.A., Zowghi, D. and Bano, M. (2023), “AI and the quest for diversity and inclusion: a systematic literature review”, AI and Ethics, doi: 10.1007/s43681-023-00362-w.
Simmering, M.J., Colquitt, J.A., Noe, R.A. and Porter, C.O.L.H. (2003), “Conscientiousness, autonomy fit, and development: a longitudinal study”, Journal of Applied Psychology, Vol. 88 No. 5, pp. 954-963, doi: 10.1037/0021-9010.88.5.954.
Sindermann, C., Riedl, R. and Montag, C. (2020), “Investigating the relationship between personality and technology acceptance with a focus on the smartphone from a gender perspective: results of an exploratory survey study”, Future Internet, Vol. 12 No. 7, doi: 10.3390/FI12070110.
Sowmya, G., Chakraborty, D., Polisetty, A. and Jain, R.K. (2024), “Exploring the adoption patterns of matrimonial apps: an analysis of user gratifications”, Journal of Retailing and Consumer Services, Vol. 78, p. 103731, doi: 10.1016/j.jretconser.2024.103731.
Sowmya, G., Chakraborty, D., Polisetty, A., Khorana, S. and Buhalis, D. (2023), “Use of metaverse in socializing: application of the big five personality traits framework”, Psychology and Marketing, Vol. 40 No. 10, doi: 10.1002/mar.21863.
Spar, B., Pletenyuk, I., Reilly, K. and Ignatova, M. (2018), “Linkedin global recruiting trends 2018”, available at: https://business.linkedin.com/content/dam/me/business/en-us/talent-solutions/resources/pdfs/chapter2-diversity-global-recruiting-trends-2018.pdf
Sriyabhand, T. and John, S.P. (2014), “An empirical study about the role of personality traits in information technology adoption”, Silpakorn University Journal of Social Science, Humanities and Arts, Vol. 14 No. 2, pp. 67-90.
Venkatesh, V., Morris, M.G., Davis, G.B. and Davis, F.D. (2003), “User acceptance of information technology: toward a unified view”, MIS Quarterly, Vol. 27 No. 3, pp. 425-478.
Walkowiak, E. (2023), “Digitalization and inclusiveness of HRM practices: the example of neurodiversity initiatives”, Human Resource Management Journal, pp. 1-21, doi: 10.1111/1748-8583.12499.
Weger, K., Easley, T., Branham, N., Tenhundfeld, N. and Mesmer, B. (2022), “Individual differences in the acceptance and adoption of AI-enabled autonomous systems”, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 66 No. 1, pp. 241-245, doi: 10.1177/1071181322661154.
Wesche, J.S. and Sonderegger, A. (2021), “Repelled at first sight? Expectations and intentions of job-seekers reading about AI selection in job advertisements”, Computers in Human Behavior, Vol. 125, p. 106931, doi: 10.1016/j.chb.2021.106931.
Wiechmann, D. and Ryan, A.M. (2003), “Reactions to computerized testing in selection contexts”, International Journal of Selection and Assessment, Vol. 11 Nos 2/3, pp. 215-229, doi: 10.1111/1468-2389.00245.
Zhang, L. and Yencha, C. (2022), “Examining perceptions towards hiring algorithms”, Technology in Society, Vol. 68, p. 101848, doi: 10.1016/j.techsoc.2021.101848.
Acknowledgements
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The authors received no financial support for the research, authorship, and publication of this article. The authors thank Leonardo Manzi for his help in data collection. Data are available at: www.doi.org/10.17632/3r25j88v5m.1.