Fairness evaluation of marketing algorithms: a framework for equity distribution

Mengxi Yang (School of Economics and Management, University of the Chinese Academy of Sciences, Beijing, China) (MOE Social Science Laboratory of Digital Economic Forecasts and Policy Simulation, University of Chinese Academy of Sciences, Beijing, China)
Jie Guo (School of Economics and Management, University of the Chinese Academy of Sciences, Beijing, China)
Lei Zhu (Ant Group CO Ltd, Hangzhou, China)
Huijie Zhu (Ant Group CO Ltd, Hangzhou, China)
Xia Song (School of Economics and Management, University of the Chinese Academy of Sciences, Beijing, China)
Hui Zhang (Ant Group CO Ltd, Hangzhou, China)
Tianxiang Xu (Ant Group CO Ltd, Hangzhou, China)

Journal of Electronic Business & Digital Economics

ISSN: 2754-4214

Article publication date: 11 September 2024

Issue publication date: 22 October 2024


Abstract

Purpose

This paper aims to objectively evaluate the fairness of algorithms by exploring specific scenarios, combining their scenario characteristics and constructing an algorithm fairness evaluation index system for those scenarios.

Design/methodology/approach

This paper selects the marketing scenario and, following the logic of “theory construction – scenario feature extraction – enterprise practice,” summarizes the definitions and standards of fairness, maps the application process of marketing algorithms and establishes a fairness evaluation index system for marketing equity allocation algorithms. Taking simulated marketing data as an example, the fairness performance of marketing algorithms in selected feature fields is measured, verifying the effectiveness of the evaluation system proposed in this paper.

Findings

The study reached the following conclusions: (1) Different fairness evaluation criteria have different emphases and may produce different results; therefore, fairness definitions and standards should be selected in each field according to the characteristics of the scenario. (2) The fairness of the marketing equity distribution algorithm can be measured from three aspects: marketing coverage, marketing intensity and marketing frequency. Specifically, for coverage fairness, the two standards of equality of opportunity and disparate mistreatment are selected, while the standard of group fairness is selected for intensity and frequency. (3) Different feature fields should be subject to different degrees of fairness restrictions, and the interpretation of the calculation results and the means of subsequent intervention should also differ according to the marketing objectives and industry characteristics.

Research limitations/implications

First, the fairness sensitivity of different feature fields differs, but this paper does not classify the importance of feature fields. In the future, a classification table of sensitive attributes could be built according to their importance, assigning different evaluation and protection priorities. Second, only one set of simulated marketing data is used to measure overall algorithm fairness; subsequently, multiple marketing campaigns could be measured and compared to reflect the long-term fairness performance of marketing algorithms. Third, this paper does not explore interventions and measures to improve algorithmic fairness. Different feature fields should be subject to different degrees of fairness constraints, and therefore their subsequent interventions should differ, which needs to be explored in future research.

Practical implications

This paper combines the specific features of marketing scenarios and selects appropriate fairness evaluation criteria to build an index system for fairness evaluation of marketing algorithms, which provides a reference for assessing and managing the fairness of marketing algorithms.

Social implications

Algorithm governance and algorithmic fairness are important issues in the era of artificial intelligence. The algorithmic fairness evaluation index system for marketing scenarios constructed in this paper lays a safe foundation for the application of AI algorithms and technologies in marketing, provides tools and means of algorithm governance and supports the safe, efficient and orderly development of algorithms.

Originality/value

This paper first comprehensively sorts out the standards of fairness, clarifying the differences between the standards and their evaluation focuses. Second, focusing on the marketing scenario and combining its characteristics, it identifies the key fairness evaluation links, innovatively selects different standards to evaluate fairness across the application process of marketing algorithms and builds a corresponding index system, forming a systematic fairness evaluation tool for marketing algorithms.

Citation

Yang, M., Guo, J., Zhu, L., Zhu, H., Song, X., Zhang, H. and Xu, T. (2024), "Fairness evaluation of marketing algorithms: a framework for equity distribution", Journal of Electronic Business & Digital Economics, Vol. 3 No. 3, pp. 251-274. https://doi.org/10.1108/JEBDE-10-2023-0024

Publisher: Emerald Publishing Limited

Copyright © 2024, Mengxi Yang, Jie Guo, Lei Zhu, Huijie Zhu, Xia Song, Hui Zhang and Tianxiang Xu

License

Published in Journal of Electronic Business & Digital Economics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction and literature review

With the continuous growth of data availability and the development of information technology, artificial intelligence has deeply penetrated our lives and work. More and more choices and decisions about people’s employment, loans, education, transportation and travel are made with the participation of artificial intelligence based on big data and algorithms (Sharma, Yadav, & Chopra, 2020; Martin, 2019). The emergence of ChatGPT has once again refreshed people’s understanding of the pace and extent of artificial intelligence development, and new rules of algorithmic governance need to be established (Zhang, 2022; Xiao, 2022). Among the various governance objectives, algorithmic discrimination and fairness bear on consumer protection, fair competition among market players, labor protection and other issues, which need to be comprehensively and deeply clarified in theory and practice (Liu, 2019; Meng et al., 2022).

Although artificial intelligence is not as subjective as human beings, algorithms are still affected by the unintentional bias and intentional discrimination of data manipulation and algorithm designers (Miao, 2022; Kumar, Hines, & Dickerson, 2022), and can generate bias and discrimination at the design, learning and application stages (Yan, 2021; De-Arteaga, Feuerriegel, & Saar-Tsechansky, 2022). Such prejudice and discrimination make algorithmic decision-making ‘unfair’: individuals or groups receive different treatment according to their characteristics, resulting in discrimination along racial, gender, age and other dimensions (Wang & Ru, 2020). Typical examples are as follows: Google’s automatic image tagging software identified and labeled photos of black people as ‘gorillas’; some American courts introduced the crime risk intelligent assessment system COMPAS, which showed obvious discrimination against black defendants when assessing the risk of recidivism (Liu, 2019); Amazon developed an ‘algorithm screening system’ for resume screening in recruitment, but the algorithm clearly preferred male candidates and discriminated against women (Coen, Paul, Vanegas, Lange, & Hans, 2016). As these problems have become more prominent, issues such as algorithmic discrimination and big data-enabled price discrimination against existing customers have attracted wide attention from society and become a research hotspot in recent years. The main research directions include developing fair algorithms without bias, seeking the measurement basis and judgment standards of algorithm fairness, exploring the impact of algorithmic decision-making on consumer groups, and the governance and regulation of algorithm fairness. At the same time, scholars have pointed out that the definition, interpretation, evaluation and governance of algorithm fairness have different emphases in different scenarios. Therefore, research on algorithm fairness should be classified and contextualized, combined with the practical characteristics and needs of specific fields, and analyzed according to different logics (Liang, Yu, & Song, 2020; Ding, 2022). However, there are still few studies focusing on the fairness of algorithms in specific fields, and the relevant literature needs to be enriched.

Based on this, this paper focuses on the marketing scenario to explore algorithm fairness in this process. Philip Kotler described the concept of precision marketing, which marks the inception of Internet-based precision marketing theory. According to him, personalized communication at the opportune moment with the appropriate audience, conveying pertinent messages and actions, is of paramount importance in marketing. The emergence of big data technologies has significantly enhanced the potential of data-driven marketing by enabling instantaneous analysis of extensive data pools, which facilitates more intricate customer segmentation and predictive analytics (Muniswamaiah et al., 2019a, b). In recent years, precision marketing has therefore gradually become the practice of most companies’ marketing departments, that is, targeting the company’s marketing resources (e.g. red packets, consumer coupons, etc.) to users who have a higher probability of generating profits for the company (Gwinner, Gremler, & Jo Bitner, 1998; Homburg, Droll, & Totzek, 2008), and this differential treatment of different consumers is often considered unfair (Frow, Payne, Wilkinson, & Young, 2011).

In response to criticism, there is a growing trend towards management that is more focused on social and ethical considerations (Wu, Liu, Chen, & Wang, 2012). This shift emphasizes the importance of fairness and justice in business operations (Carr, 2007), aiming to enhance brand reputation (Woo Jin & Winterich, 2013), build goodwill (Cox, 2001) and foster consumer loyalty (Xia, Monroe, & Cox, 2004). However, there are still many gaps in the current research on algorithmic fairness in the marketing field. First, most current evaluations of algorithmic fairness discuss the concept and generalized definitions from a macroscopic point of view, without focusing on specific scenarios and domains, and there is a lack of in-depth discussion of the connotation of algorithmic fairness in specific scenarios and its evaluation criteria (Liang et al., 2020; Ding, 2022). Second, starting from a single fairness evaluation criterion does not help to understand algorithmic fairness as a multidimensional concept (Lee-Wingate & Stern, 2006), makes it difficult to form a systematic and comprehensive evaluation system, and leaves the relationships between the fairness evaluation criteria to be further clarified. Third, in marketing scenarios, research focuses mainly on differences in product price or service quality in a single transaction, and the dimensions of fairness evaluation have not been comprehensively summarized (Seiders & Berry, 1998). Finally, in enterprise practice, the measurement of algorithmic fairness lacks a clear handle: from which aspects to evaluate algorithmic fairness in marketing scenarios, which fairness criteria to use for measurement and evaluation, what process to use for evaluation and how to remedy algorithmic unfairness have yet to be solved (Greenberg & Tyler, 1987; Thibaut & Walker, 1975).

Based on the above background, we delve into the process of algorithm-assisted marketing equity distribution, focusing on three problems: First, what are the criteria for algorithm fairness, and how can each be quantified and formalized? Second, in what aspects should the fairness of algorithms be evaluated in marketing scenarios and in the process of marketing equity distribution? Third, what fairness measurement bases and judgment criteria should be used to evaluate the fairness of marketing algorithms? Specifically, the article first summarizes the fairness evaluation criteria based on previous literature, laying the foundation for the construction of the algorithmic fairness evaluation index system later in the article. Second, by sorting out the application links of marketing algorithms, we identify the fairness issues involved in this process and construct a multi-dimensional evaluation perspective. Then, on the basis of the first two parts and according to the characteristics of the marketing scenario, suitable algorithm fairness evaluation criteria are selected and an evaluation index system is established in accordance with the principles of systematicity, typicality, dynamics, concise scientificity and operability, providing a practical tool for companies’ self-checking of algorithmic fairness. Finally, the marketing data of Company A is used to calculate the fairness evaluation of selected user characteristic attributes, verifying the validity of the index system constructed in this paper. In addition, we discuss the theoretical and practical value of this paper and future research directions.

2. Fairness theory and evaluation criteria

In order to effectively address algorithmic discrimination caused by the unreasonable application of algorithms, it is necessary to understand what algorithmic fairness is. Algorithmic fairness is a complex problem with no unique, clear definition. Conclusions obtained from different evaluation perspectives often differ, and they also have different short-term and long-term effects. Clarifying its definitions and measurement criteria helps to quantify and formalize algorithm fairness, which has important theoretical significance for subsequent research on algorithm fairness in different scenarios. In summary, the common definitions and evaluation criteria of fairness mainly include: demographic parity, individual fairness, equality of opportunity and predictive rate parity. The definitions and formulas for each fairness criterion are shown in Table 1.

2.1 Demographic parity (DP), disparate impact (DI)

The definition of DP is that if the predicted value Ŷ satisfies P(Ŷ=1|S=0) = P(Ŷ=1|S=1), that is, the proportion of people receiving a positive classification is the same across the sensitive groups as in the whole population, then the algorithm achieves demographic parity, which aims to treat all groups equally (Kim, Korolova, Rothblum, & Yona, 2019; Liu & Chi, 2019). A related definition, disparate impact (DI), was given by Zafar, Valera, Gomez Rodriguez, and Gummadi (2017), who interpreted disparate impact as arising when the results given by a decision-making system are more beneficial or harmful to a group sharing a sensitive attribute. Compared with DP, DI is only a formal transformation of the calculation, using the ratio P(Ŷ=1|S=0)/P(Ŷ=1|S=1) instead of the difference. In the binary case, the two groups are compared directly; in the multi-group case, DI is computed for each pair of groups and the average of these values is taken as the final result. Another similar variant was proposed by Calders and Verwer (2010); it is analogous to DI but uses the difference rather than the ratio, and in the presence of multiple sensitive values the averaging method can likewise be used.
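As an illustration, both criteria can be computed directly from model predictions. The sketch below (with hypothetical toy data; the function names are our own) computes the DP gap as a difference and DI as a ratio of positive-prediction rates between two sensitive groups:

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Absolute difference in positive-prediction rates between groups s=0 and s=1."""
    p0 = y_pred[s == 0].mean()
    p1 = y_pred[s == 1].mean()
    return abs(p0 - p1)

def disparate_impact_ratio(y_pred, s):
    """DI as a ratio: P(Yhat=1 | S=0) / P(Yhat=1 | S=1)."""
    p0 = y_pred[s == 0].mean()
    p1 = y_pred[s == 1].mean()
    return p0 / p1

# Hypothetical predictions for eight users in two sensitive groups
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, s))  # |0.75 - 0.25| = 0.5
print(disparate_impact_ratio(y_pred, s))  # 0.75 / 0.25 = 3.0
```

A gap of 0 (or a ratio of 1) indicates that the algorithm satisfies demographic parity on this data.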

2.2 Individual fairness (IF)

This definition was proposed by Dwork et al. (2012): if an algorithm predicts the same results for similar individuals, it is said to achieve individual fairness. Formally, given a metric d(·,·), if individual i and individual j are similar under this metric, then the predictions for the two individuals should also be similar (Zemel, Wu, Swersky, Pitassi, & Dwork, 2013; Joseph, Kearns, Morgenstern, Neel, & Roth, 2016). Kim et al. (2019) improved on this and proposed preference-informed individual fairness (PIIF), which relaxes individual fairness and allows deviations from the results of IF, provided that the deviations are in line with individual preferences. Under this premise, PIIF can provide individuals with more favorable solutions.
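The pairwise nature of individual fairness, and the quadratic computational cost discussed below, can be shown with a minimal sketch. The Euclidean metric, the similarity threshold and the Lipschitz-style bound are illustrative assumptions of ours, since the literature fixes no single similarity metric:

```python
import numpy as np

def individual_fairness_violations(X, y_score, metric_threshold=0.1, lipschitz=1.0):
    """Count pairs (i, j) that are similar under Euclidean distance d(x_i, x_j)
    but whose predicted scores differ by more than lipschitz * d(x_i, x_j)."""
    n = len(X)
    violations = 0
    for i in range(n):          # O(n^2) pairwise comparison: the practical bottleneck
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j])
            if d <= metric_threshold and abs(y_score[i] - y_score[j]) > lipschitz * d:
                violations += 1
    return violations

# Hypothetical scores for three users; users 0 and 1 are near-identical in features
X = np.array([[0.0, 0.0], [0.0, 0.05], [1.0, 1.0]])
scores = np.array([0.2, 0.9, 0.5])
print(individual_fairness_violations(X, scores))  # 1: users 0 and 1 are treated very differently
```

The double loop makes the operability problem concrete: for n users, n(n−1)/2 similarity computations are needed.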

2.3 Equality of opportunity (EO)

Equal opportunity means that if the predicted value Ŷ satisfies P(Ŷ=1|S=0, Y=1) = P(Ŷ=1|S=1, Y=1), the algorithm achieves equality of opportunity. As its expression shows, its essence is to compare whether the probabilities of the predicted label are equal across different sensitive attributes S within the same actual category Y. Several similar forms and their derivatives are defined in the literature as follows:

S-Accuracy: P(Ŷ=y|S=s, Y=y)
S-TPR (True Positive Rate): P(Ŷ=1|S=s, Y=1) = TP/(TP+FN)
S-TNR (True Negative Rate): P(Ŷ=0|S=s, Y=0) = TN/(TN+FP)
S-FPR (False Positive Rate): P(Ŷ=1|S=s, Y=0) = FP/(TN+FP)
S-FNR (False Negative Rate): P(Ŷ=0|S=s, Y=1) = FN/(TP+FN)

The goal proposed by Chouldechova (2017) is to achieve equal 1−S-TPR and 1−S-TNR values across sensitive populations, that is, error rate balance. The goal proposed by Hardt, Price, and Srebro (2016) is to achieve the same S-TPR and 1−S-TNR across sensitive groups, that is, opportunity equilibrium. If S-TPR, S-TNR, S-FPR and S-FNR are all equal, Equality of Odds is considered to be realized: the classifier has the same prediction accuracy and error rates on each subgroup and can be considered to treat each subgroup in the same way, which is a stricter, higher and broader standard. If the sum of S-FPR and S-FNR is equal, the same misjudgment rate (Disparate Mistreatment) is achieved (Zafar et al., 2017). The formula can be expressed as follows:

P(Ŷ=1|S=0, Y=0) + P(Ŷ=0|S=0, Y=1) = P(Ŷ=1|S=1, Y=0) + P(Ŷ=0|S=1, Y=1)
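A minimal sketch of the disparate mistreatment check, using hypothetical labels and predictions: the sum S-FPR + S-FNR is computed per sensitive group and the gap between groups is reported (zero means equal misjudgment rates):

```python
import numpy as np

def confusion_rates(y_true, y_pred):
    """FPR and FNR from the binary confusion counts."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"FPR": fp / (tn + fp), "FNR": fn / (tp + fn)}

def disparate_mistreatment_gap(y_true, y_pred, s):
    """|(FPR+FNR)_group0 - (FPR+FNR)_group1|: zero means equal misjudgment rates."""
    r0 = confusion_rates(y_true[s == 0], y_pred[s == 0])
    r1 = confusion_rates(y_true[s == 1], y_pred[s == 1])
    return abs((r0["FPR"] + r0["FNR"]) - (r1["FPR"] + r1["FNR"]))

# Hypothetical data: group 0 is misclassified half the time, group 1 perfectly
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_mistreatment_gap(y_true, y_pred, s))  # (0.5 + 0.5) - (0 + 0) = 1.0
```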

In summary, the relationships between Equality of Opportunity, Equality of Odds, the same misjudgment rate, error rate balance and opportunity balance are shown in Figure 1.

2.4 Predictive rate parity (PRP)

Predictive rate parity can be considered the inverse process of Equality of Odds: the actual value of the target variable is observed on the basis of the predicted results. If this probability is unrelated to the subgroup, the model satisfies predictive rate parity. In general, we can relax the definition and only observe the target variable Y = 1 in the population predicted to be positive, that is, verify whether P(Y=1|S=s, Ŷ=1) is equal across groups.
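Under the relaxed definition, predictive rate parity reduces to comparing precision across the sensitive groups. A sketch with hypothetical data (the function name is our own):

```python
import numpy as np

def predictive_rate_parity_gap(y_true, y_pred, s):
    """|P(Y=1 | Yhat=1, S=0) - P(Y=1 | Yhat=1, S=1)|: the precision gap between groups."""
    prec0 = y_true[(s == 0) & (y_pred == 1)].mean()
    prec1 = y_true[(s == 1) & (y_pred == 1)].mean()
    return abs(prec0 - prec1)

# Hypothetical data: everyone predicted positive, actual outcomes differ by group
y_true = np.array([1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 1, 1])
s      = np.array([0, 0, 0, 1, 1])
print(predictive_rate_parity_gap(y_true, y_pred, s))  # |2/3 - 1/2| ≈ 0.167
```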

3. Marketing algorithm application link combing and fairness problem identification

Algorithms can accurately predict user behavior from users’ basic information, app behavior and consumer credit behavior characteristics, improving the efficiency of equity issuance and achieving precision marketing; they therefore play an important role in the process of marketing equity issuance. Mapping the basic application links of marketing algorithms helps to identify the links in which algorithm fairness needs to be evaluated, and thus to find the entry points and evaluation dimensions for the fairness evaluation of marketing algorithms. Combined with the characteristics of marketing equity distribution activities, the basic links of marketing algorithm application are sorted out as shown in Figure 2.

Fairness problems begin to appear in the algorithm training process. The quality of the training data samples is the basis of the accuracy and fairness of the algorithm: whether their distribution is biased affects the logic of algorithmic decision-making and in turn the results of algorithm training. In this regard, statistics already offers mature calculation methods and systems for judging whether data is biased; moreover, this belongs to the preparatory stage before algorithm training, so it is not included in the scope of the fairness evaluation discussed in this paper.

In the application of the algorithm, according to its purpose, the use of algorithms in marketing equity distribution can be roughly divided into two parts: delineating the marketing population, and predicting and allocating user equity. First of all, the delineation of the marketing crowd, also called customer segmentation and targeting, is the first link in the distribution of marketing rights after customer information gathering (Qin & Zhao, 2016). This process acts as a funnel for screening users: it judges which users in the user pool may use the product and become target users in the future, so as to include them in the scope of marketing equity distribution. After the algorithmic decision in this link, some users have qualified to obtain equity in this marketing activity, while others have been excluded from the distribution (Speicher et al., 2018; Stefanija, 2021). In this process, users with different characteristics may enter the marketing activity in different proportions, i.e. generating differences in marketing coverage among groups and leading to inequitable algorithmic decision results.

Secondly, after delineating the marketing crowd, because each user differs in the likelihood of subsequently using the product and becoming a target customer, that is, in user value, further decisions are needed on how much marketing intensity to give different users to ensure that marketing efficiency is maximized (Ali et al., 2019; Stefanija, 2021). This process is also known as marketing strategy development and marketing program design: looking for possible business opportunities in different customer groups and ultimately developing a personalized marketing strategy for each group (Ghose, Ipeirotis, & Li, 2011; Qin & Zhao, 2016), in order to improve customer engagement, increase conversion rates and enhance customer loyalty (Sousa, Pesqueira, Lemos, Sousa, & Rocha, 2019).

Specifically, the first step is to roughly divide the users in the marketing crowd into several clusters according to their basic characteristics. This process is often supported by clustering algorithms like K-means or hierarchical clustering (Muniswamaiah et al., 2019a, b) and decision tree algorithms like CART (Classification and Regression Trees) (Khan & Aziz, 2023); the crowd marketing budget is then allocated to each cluster on this basis. The second step is to use machine learning algorithms like random forests, neural networks, deep learning and reinforcement learning (Khan & Aziz, 2023) to determine and adjust the individual budget according to each user’s more complex unstructured behavioral information (Ghose & Han, 2011), on the basis of the cluster marketing budget (You et al., 2015), so as to achieve the optimal distribution of equity near the average of the cluster marketing budget. When the number of marketing targets is small, the two steps can be combined into one, i.e. assigning marketing equity directly to each individual user according to their information without the clustering step. Many companies use various models and algorithms for precision marketing following this logic: e-commerce giants Amazon and Alibaba use advanced clustering algorithms to segment their customers and are thus able to provide personalized recommendations, while retail giants such as Amazon and Walmart use predictive algorithms to recommend products to their customers, increasing sales and customer engagement (Khan & Aziz, 2023).
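The two-step logic described above (cluster-level budgeting followed by individual adjustment) can be sketched as follows. All data, the budget figure and the value-proportional allocation rule are hypothetical illustrations, not the method of any particular company:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Minimal K-means: returns a cluster label for each row of X."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Step 1: segment users and split the campaign budget across clusters by headcount
X = rng.normal(size=(200, 3))         # hypothetical user feature vectors
value_score = rng.uniform(size=200)   # hypothetical predicted user value
labels = kmeans(X, k=4)
total_budget = 10_000.0
cluster_budget = {c: total_budget * np.sum(labels == c) / len(X) for c in range(4)}

# Step 2: within each cluster, tilt individual coupon amounts around the cluster
# average in proportion to each user's predicted value
coupon = np.empty(len(X))
for c in range(4):
    idx = np.where(labels == c)[0]
    weights = value_score[idx] / value_score[idx].sum()
    coupon[idx] = cluster_budget[c] * weights

print(round(coupon.sum(), 2))  # 10000.0: the full budget is distributed
```

Fairness evaluation then asks whether the resulting per-group averages and distributions of `coupon` differ across sensitive attributes.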

After the above steps, one marketing equity distribution activity ends. Each user who enters the marketing activity is evaluated by the algorithm as having a different marketing value based on his/her own demographic information and behavioral data, and therefore receives different marketing equity tailored to each customer segment (Jun et al., 2021), such as the amount of red packets or the strength of discount coupons. In other words, user groups with different characteristics will be assigned different marketing intensities, so the distribution results may contain unfairness that we need to pay attention to. When evaluating the fairness of these steps, we can judge the fairness of the marketing equity obtained by different groups from the distribution results, such as the amount of red packets and the strength of discount coupons.

In addition, the above mapping of the marketing process only covers a single marketing activity; from a more macro and overall perspective, the time dimension should also be considered, that is, the fairness evaluation of marketing frequency among different groups. In fact, the time dimension of fairness is often overlooked by scholars, i.e. fairness is not understood as a multidimensional concept (Lee-Wingate & Stern, 2006). If the time dimension is taken into account, differences between groups in a single marketing activity do not necessarily imply unfairness. From a long-term perspective, if the frequency and average value of marketing equity obtained by different groups are roughly the same over a certain period, the algorithm should be considered fair. Therefore, it is necessary to evaluate the fairness of frequency.

To sum up, three dimensions of marketing algorithm fairness evaluation can be abstracted from the above marketing links: marketing coverage, marketing intensity and marketing frequency. Specifically, the fairness of marketing coverage focuses on the link delineating the access population and the results of that division. The fairness of marketing intensity focuses on the result stage of predicting the distribution of equity according to the model. The fairness of marketing frequency is examined from a longer-term perspective.

4. Construction of marketing algorithm fairness index system

4.1 The construction of algorithm fairness framework in marketing field

Through the review of previous studies, it is found that the evaluation of fairness includes multiple criteria including demographic parity or group fairness, individual fairness, equality of opportunity, equality of odds, disparate mistreatment and predictive rate parity. Because it is mathematically impossible to realize all the algorithmic fairness criteria at the same time (Chouldechova, 2017; Kleinberg, Mullainathan, & Raghavan, 2017), it is necessary to further think about what kind of criteria should be chosen for evaluating the marketing algorithms in terms of the three dimensions of fairness evaluation, namely, marketing coverage, marketing intensity, and marketing frequency. The relationship between different fairness evaluation criteria and the focus of evaluation are shown in Figure 3.

First of all, demographic parity is a concept of group fairness, taking the whole population with similar characteristics as the comparison object (Liu & Chi, 2019). However, fairness between groups does not mean that similar people within a group are treated fairly; different individuals in the same group may still be treated differently (Zemel et al., 2013). On this basis, the concept of individual fairness was developed, which requires similar individuals to be treated equally (Dwork, Hardt, Pitassi, Reingold, & Zemel, 2012; Joseph et al., 2016). Its measurement presupposes a standard that reflects the degree of similarity between people, which is also the difficulty in applying individual fairness standards. On the one hand, the evaluation criteria for similarity differ across scenarios (Zemel et al., 2013; Dolata, Feuerriegel, & Schwabe, 2022), so a way of evaluating similarity in different scenarios is needed, and scholars have not yet formed a uniform, clear formula. On the other hand, focusing on the individual means calculating and comparing the similarity between every pair of people; the amount of calculation is large and the application efficiency is low. Therefore, individual fairness standards are not operable in practice.

Equality of opportunity, disparate mistreatment, equality of odds and predictive rate parity can be grouped into one evaluation logic. They are all based on the measures TPR (True Positive Rate), TNR (True Negative Rate), FPR (False Positive Rate) and FNR (False Negative Rate). The differences are as follows: equality of opportunity only requires that different groups have the same TPR, i.e. the same prediction correctness rate; disparate mistreatment requires that the values of FNR + FPR are the same among groups, i.e. the same prediction error rate; equality of odds requires that TPR, TNR, FPR and FNR are all equal among groups; and predictive rate parity is the inverse process of equality of odds: equality of odds evaluates the correct and incorrect prediction rates based on the actual outcomes, whereas predictive rate parity calculates the proportion of actual outcomes based on the predicted results and focuses more on measuring model accuracy. Therefore, to avoid duplication of evaluation, we select two of these four indicators, equality of opportunity and disparate mistreatment, to evaluate the fairness of algorithms from two perspectives, one positive and one negative.

4.2 Construction of algorithm fairness evaluation index system in marketing field

Marketing coverage, marketing intensity and marketing frequency are the dimensions of marketing algorithm fairness evaluation, while the fairness evaluation criteria are different scales for measuring fairness, i.e. different perspectives on the same problem. Combined with the characteristics of marketing activities, this paper selects different fairness evaluation criteria to calculate and evaluate the fairness of marketing algorithms along the three dimensions of marketing coverage, marketing intensity and marketing frequency, and establishes the evaluation index system shown in Table 2.

The evaluation of marketing coverage fairness asks whether groups divided by a given feature field have the same opportunity to be selected into the marketing population. Whether a user is selected is a binary classification problem: "0" denotes the non-marketing group and "1" the marketing group. The two indicators of equality of opportunity and disparate mistreatment are therefore well suited to describing the ratios of correct and incorrect predictions for different groups in this process; by comparing whether these ratios are the same across groups with different attribute values, we can check whether the algorithm's predictions are independent of the sensitive attribute.

The evaluation of marketing intensity fairness asks whether the average size of the marketing rights received by groups divided by a given feature field is the same. Because this involves comparing the value of marketing equity across groups, rather than the binary decision of whether a user joins the marketing population, the perspectives of equal opportunity and disparate mistreatment no longer apply, and demographic parity is chosen instead. Specifically, on the one hand the difference in the average equity value between groups is compared; on the other hand, the Jensen-Shannon (JS) divergence is introduced to compare the difference in the distribution of equity between groups from a more specific and detailed perspective.

The evaluation of marketing frequency fairness asks whether the average number of marketing rights received by groups divided by a given feature field is the same. Its evaluation logic is the same as that of intensity fairness; it is essentially a comparison of how often different groups receive marketing. The average number of rights received between groups and the difference in its distribution are therefore also calculated from the perspective of demographic parity.
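The JS divergence used for the distributional comparison can be sketched as follows; this is a minimal implementation over discrete probability vectors, with illustrative names:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions p and q given as equal-length probability vectors.
    It is symmetric and, with base-2 logarithms, bounded in [0, 1]."""
    # Mixture distribution M = (P + Q) / 2
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical equity distributions give 0 and disjoint ones give 1, so a larger value indicates a larger between-group difference.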

Marketing coverage, marketing intensity and marketing frequency constitute a three-dimensional system for evaluating the fairness of marketing-activity algorithms, composed of three first-level indicators and six second-level indicators. For each feature field, the value of each second-level indicator is calculated, and the average is taken as the value of the corresponding first-level indicator. The point given by the coverage, intensity and frequency fairness values is then located in three-dimensional coordinates, and its Euclidean distance to the origin represents the overall fairness of the marketing activity. The formula is as follows.

$$FairValue=\frac{1}{n\sqrt{m}}\sum_{i=1}^{n}\sqrt{\sum_{j=1}^{m}x_{ij}^{2}}$$

Here $n$ is the number of feature fields, $m$ is the number of fairness dimensions, $x_{ij}$ is the performance value of feature field $i$ on fairness dimension $j$, and $\sum_{i=1}^{n}\sqrt{\sum_{j=1}^{m}x_{ij}^{2}}$ is the sum of the Euclidean distances from each feature field to the origin. Because different marketing activities involve different numbers of feature fields, the result is normalized by $1/(n\sqrt{m})$ so that the algorithmic fairness of different marketing activities is comparable: each $x_{ij}$ lies in $[0,1]$, so each field's distance is at most $\sqrt{m}$ and FairValue lies in $[0,1]$.

When evaluating the algorithmic fairness of a single marketing activity, only the two dimensions of marketing coverage and marketing intensity need to be calculated, together with the Euclidean distance to the origin, so m = 2; when evaluating the overall algorithmic fairness of marketing activities over a period of time (month, quarter, year, etc.), all three dimensions of marketing coverage, marketing intensity and marketing frequency are calculated, together with each feature field's Euclidean distance to the origin, so m = 3.
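The aggregation above can be sketched as follows; the function name is ours, and the input is the matrix of per-field, per-dimension fairness values, each in [0, 1]:

```python
import math

def fair_value(x):
    """FairValue = (1 / (n * sqrt(m))) * sum_i sqrt(sum_j x[i][j]^2),
    where n is the number of feature fields and m the number of
    fairness dimensions.  Normalizing by n * sqrt(m) keeps the result
    in [0, 1], since each field's distance to the origin is at most sqrt(m)."""
    n, m = len(x), len(x[0])
    total = sum(math.sqrt(sum(v * v for v in row)) for row in x)
    return total / (n * math.sqrt(m))

# The nine (coverage, intensity, frequency) triples of Table 4 (fields A-I):
table4 = [
    (0.0931, 0.0049, 0.0000), (1.0000, 0.0714, 0.1997),
    (0.8605, 0.2513, 0.4694), (0.8184, 0.0900, 0.1636),
    (0.8092, 0.1935, 0.1914), (0.5075, 0.9831, 0.8118),
    (0.6052, 0.9916, 1.0000), (0.4384, 0.2250, 0.4360),
    (0.4182, 0.3518, 0.5778),
]
print(round(fair_value(table4), 4))  # 0.5247, the overall value reported in Section 5.2
```

Run on the paper's own Table 4 values, this reproduces the reported overall fairness value of 0.5247, which also confirms the normalization constant.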

To verify the effectiveness of the indicator system constructed in this paper, we invited 10 experts and scholars in the fields of marketing, the digital economy and computer science to rate, on a scale of 0–100, the systematicity of the indicator system, the typicality of the indicator selection, its dynamics, its concise scientificity and its operability. The mean scores on these five aspects are 84.7, 83.5, 81.2, 76.9 and 84.4, respectively, all above 75 points, indicating that the experts recognize the validity of the indicator system constructed in this paper; the standard deviations of the scores on the five aspects all lie between 14.8 and 19.5, indicating that the experts' evaluations are reasonably consistent, which further supports the validity of the indicator system.

5. Case analysis and discussion

In this section, we calculate and analyze algorithmic fairness using Company A's data according to the index system above. The simulation data come from real data sampled from Company A's marketing-rights issuance activities in the second quarter of 2023. To protect user privacy and commercial data, Company A opened to us only the nine feature fields involved in this study; the data and calculation results could be viewed only within Company A's internal network and could not be downloaded or transmitted, ensuring security throughout the data-use process. During sampling, records with missing data were removed, and Company A pre-processed the six characteristic fields other than demographic information (gender, age and city of residence): the data in each field were linearly mapped, which preserves the distribution and distance characteristics of the original data while hiding their true absolute values. In addition, the cumulative distributions of the sample and of the overall data are essentially consistent across the nine feature fields, indicating that the sample and the population have essentially the same distribution, which ensures that the simulation data used in this paper are representative.

Typical marketing recommendation and marketing-rights distribution algorithms process hundreds or even thousands of feature fields to assist their calculations, but only a limited number of these can be perceived and meaningfully analyzed. Based on the simulation data of an online-payment marketing activity, this paper calculates the algorithmic fairness of the nine representative feature fields shown in Table 3: the basic demographic fields of gender, age and resident city level; the online-consumption-behavior fields of 30-day online consumption number, 30-day online consumption amount, 30-day online transfer number and 30-day online transfer amount; and the APP-behavior fields of 30-day APP login times and 30-day payment page access times. The letters A–I denote these nine feature fields in turn, and every field is evaluated from the three aspects of coverage fairness, intensity fairness and frequency fairness.

Specifically, for marketing coverage fairness, a user's value is 1 if the user received a red envelope during the period and 0 otherwise; for marketing intensity fairness, the total amount of red envelopes a user received during the period is taken as that user's intensity value; and for marketing frequency fairness, the number of times a user received red envelopes during the period is taken as that user's frequency value. The calculation results are shown in Table 4.
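A minimal sketch of how the three per-user values might be derived from raw red-envelope grant records; the record format and function name are assumptions for illustration, not Company A's actual pipeline:

```python
from collections import defaultdict

def user_metrics(grants, all_users):
    """grants: iterable of (user_id, amount) red-envelope records
    within the period; all_users: every user in scope.
    Returns {user_id: (coverage, intensity, frequency)}, where
    coverage is the 0/1 received flag, intensity the total amount
    received, and frequency the number of red envelopes received."""
    total = defaultdict(float)
    count = defaultdict(int)
    for uid, amount in grants:
        total[uid] += amount
        count[uid] += 1
    return {uid: (1 if count[uid] else 0, total[uid], count[uid])
            for uid in all_users}
```

Users with no grant record in the period get (0, 0.0, 0), matching the "otherwise 0" coverage rule above.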

5.1 Analysis of fairness calculation results of feature field algorithm

A horizontal comparison of the feature fields (Figure 4) shows that the mean fairness value of gender is 0.0327 with a standard deviation of 0.0524, both significantly lower than those of the other feature fields, indicating that its performance across the three dimensions of marketing coverage, intensity and frequency is both good and stable: its fairness values in all three dimensions are better (lower) than those of the other feature fields.

The mean fairness values of several fields, namely age, 30-day online consumption number, 30-day online consumption amount, 30-day APP login times and 30-day payment page access times, are below 0.5, indicating good performance. Among them, age has a mean of 0.4237 and a standard deviation of 0.5032: overall performance is good, but the three dimensions differ. Its intensity and frequency fairness perform well, while its coverage fairness value is the worst among all feature fields. The means of 30-day online consumption number and 30-day online consumption amount are below 0.4, but their standard deviations are high, indicating clear differences across the three dimensions: coverage fairness is poor for these two fields, while intensity and frequency fairness are good. The means of 30-day APP login times and 30-day payment page access times are also moderate, with low standard deviations, indicating stable and good performance across the three dimensions.

The mean fairness values of resident city level, 30-day online transfer number and 30-day online transfer amount are higher than those of the other feature fields, with lower standard deviations, and their overall performance needs improvement. The resident city level has a mean of 0.5270 and a standard deviation of 0.3087: its coverage fairness is poor compared with most other feature fields, its intensity fairness performs well, and its frequency fairness is worse than that of other fields. For 30-day online transfer number and 30-day online transfer amount, coverage fairness performs poorly, while intensity and frequency fairness are the worst among all feature fields.

A vertical comparison across the fairness dimensions (Figure 5) shows that marketing coverage fairness has the largest mean of the three dimensions, 0.6167, and the smallest standard deviation, 0.2670, indicating that coverage fairness is uniformly poor and needs optimization overall. The means of marketing intensity and marketing frequency fairness are below 0.5, with standard deviations of 0.3538 and 0.3091 respectively, showing that these two dimensions perform well overall and with some stability.

5.2 Analysis and discussion of the marketing algorithm fairness measurement results

The fairness calculation results of the above feature fields are plotted in the three-dimensional coordinate diagram shown in Figure 6(1). The remaining panels of Figure 6 map the performance values of each feature field onto the two-dimensional coordinate planes, allowing a more detailed comparison of each field's fairness performance across the different dimensions.

The overall fairness performance value of the marketing-activity simulation data is calculated as follows (where n = 9 is the number of feature fields and m = 3 the number of fairness dimensions):

$$FairValue=\frac{1}{n\sqrt{m}}\sum_{i=1}^{n}\sqrt{\sum_{j=1}^{m}x_{ij}^{2}}=0.5247$$

Based on the fairness calculations above for the simulated online-payment marketing activity, and considering the business intent of the general scenario, the following conclusions can be drawn:

First, among the three demographic feature fields, fairness is best at the gender level: men and women are treated essentially equally across marketing coverage, intensity and frequency, with no significant unfair differences. Although the fairness of age is not outstanding, the result is consistent with practice. Compared with the 20–30 and 50–60 age groups, the 30–50 age group has higher acceptance and awareness of online consumption, a better overall financial situation and more mature consumption habits. Over-marketing to the 20–30 and 50–60 age groups can easily induce blind and excessive consumption, which on the one hand harms their personal finances and produces a negative social value orientation, and on the other hand increases financial risk for the enterprise. It is therefore reasonable to restrict the age of marketing targets, and this degree of "unfairness" can and should be accepted to a certain extent. By contrast, fairness at the resident-city level needs improvement; the result reflects differences in marketing strategy caused by the uneven distribution of users. The number of users in level 3, 4 and 5 cities is small, so marketing there tends to be precise; in level 1 and 2 cities the user base is stronger, with high willingness and ability to consume online, so marketing rights tend to be distributed in a "carpet" fashion. Accordingly, marketing in level 1 and 2 cities should become more precise to improve conversion rates, while the user base and marketing intensity in level 3, 4 and 5 cities should be steadily expanded, under risk control, to further stimulate demand there.

Second, the 30-day online consumption number, 30-day online consumption amount, 30-day online transfer number and 30-day online transfer amount reflect a user's consumption behavior, consumption habits and activity with online payment products. The higher the number and amount of online consumption and transfers, the stronger the user's spending power and the more mature the consumption habits, making such users the ideal targets of the marketing activity. These four feature fields therefore strongly influence marketing decisions, and groups with different values receive noticeably different marketing rights; this is a degree of "unfairness", but one with a certain rationality.

Third, there is no significant unfairness in the two APP-behavior fields, 30-day APP login times and 30-day payment page access times: their fairness values in all three dimensions are at a good level among the feature fields, indicating that APP behavior has little impact on marketing decisions. In addition, the vertical comparison of the three measurement dimensions shows that coverage fairness performs poorly, indicating that many factors are weighed in distributing marketing rights and that admission to the marketing population is strict, reflecting the principle of prudence in risk control. Nevertheless, the strictness of restrictions should still differ across feature fields, and the boundary conditions among risk, efficiency and fairness should be explored to maximize overall value.

After we provided the analysis results to Company A, the company carried out self-examination, intervention and governance of algorithmic fairness according to the above results and evaluation process, and found that applying the index system and conclusions of this paper can detect algorithmic unfairness in a timely manner while preserving predictive accuracy, providing practical validation of the validity of the index system constructed in this paper and of the study's conclusions.

6. Summary and prospect

This paper discusses algorithmic fairness with a focus on the marketing scenario. First, building on previous studies, it systematically sorts out the definitions and measurement standards of fairness, including demographic parity, individual fairness, equality of opportunity and predictive rate parity, and clarifies the relationships among the standards and their measurement emphases. Second, by mapping the process of algorithm-assisted marketing-equity issuance in practice, it extracts three entry points for fairness evaluation: marketing coverage, marketing intensity and marketing frequency. These describe the fairness problem from different perspectives, and the time dimension is creatively added to the static coverage and intensity dimensions to ensure that the subsequent indicator system is comprehensive and systematic. On this basis, combining theory and practice, appropriate fairness standards are selected according to the characteristics of the marketing process and, following the principle of concise scientificity, an algorithmic fairness evaluation system suited to marketing scenarios is constructed, comprising three first-level indicators and six second-level indicators. Finally, the index is calculated and evaluated on Company A's data for nine selected feature fields (gender, age, resident city level, 30-day online consumption number, 30-day online consumption amount, 30-day online transfer number, 30-day online transfer amount, 30-day APP login times and 30-day payment page access times), verifying the operability of the index system; Company A's use of the indicator system in practice further supports its effectiveness.

In summary, the conclusions of this paper are as follows: (1) Different fairness evaluation standards have different emphases and may produce different results; different fairness definitions and standards should therefore be selected in different fields according to the characteristics of the scenario. (2) The fairness of the marketing-rights distribution algorithm can be measured from three aspects: marketing coverage, marketing intensity and marketing frequency. Specifically, the two criteria of equality of opportunity and disparate mistreatment are selected for coverage fairness, and the criterion of demographic parity for intensity and frequency. (3) Different feature fields should be subject to different degrees of fairness restriction, and both the interpretation of the calculation results and the means of subsequent intervention should differ, judged in combination with marketing objectives and industry characteristics.

This paper has both theoretical and managerial significance. Theoretically, first, it focuses on algorithmic unfairness in the specific field of marketing, analyzing the precision-marketing process in which algorithms distribute marketing rights, responding to scholars' call that "algorithmic fairness research should be more scenario-based" (Liang et al., 2020; Ding, 2022). Second, it clarifies the emphases of different fairness evaluation criteria and the relationships among them, laying a foundation for constructing a multidimensional, systematic algorithmic fairness evaluation system. Third, building on this sorting of evaluation standards, it works through the precision-marketing process of rights issuance, summarizes the evaluation points that require attention in this process, selects appropriate fairness standards according to their characteristics, forms a multidimensional and systematic evaluation system with the comprehensive FV evaluation index, and pushes theoretical research on algorithmic fairness deeper into the marketing-rights issuance scenario, promoting the research and application of algorithmic fairness evaluation and algorithmic ethics.

In terms of practical significance, this paper constructs a multidimensional, systematic algorithmic fairness evaluation index system and the FV comprehensive index to measure the degree of algorithmic fairness in marketing-equity issuance. The validity of the system is verified through trial calculation and analysis on Company A's real data, providing enterprises with a practicable handle for evaluating and governing algorithmic fairness and guiding them to become fairer and more ethical in daily business practice. Enterprises can use this process to self-examine and self-govern algorithmic fairness in their marketing business, and intervene in the links that generate unfairness according to the attribution results, thereby supporting sustainable ESG development.

This paper constructs an indicator system for evaluating algorithmic fairness in algorithm-assisted marketing-entitlement issuance, but many issues remain to be discussed in depth. First, feature fields differ in fairness sensitivity, so the degree of concern and protection should differ across sensitive attributes. Attributes strictly regulated and mandatorily protected by laws and regulations should be classed as highly sensitive, and their data should not serve as a basis for algorithmic decision-making. Attributes constrained by general ethical norms or industry self-regulatory standards can be classed as moderately sensitive; algorithms should be strictly evaluated for unfairness in decisions involving them, with intervention whenever the Fairness Value exceeds a reasonable range. Data of low sensitivity, whose use has no negative impact on individuals or society, can be used reasonably while safeguarding privacy and data security, without strictly requiring fairness. Which attributes belong to the high, moderate and low sensitivity classes must be judged in light of the specific industry and the relevant laws and regulations; the criteria and rules for this classification can be explored further in future research, and industry-specific categorization tables of fairness-sensitive attributes could be developed to guide practice.

Second, intervention and governance of the unfairness problem should be matched to the sensitivity of the attribute involved. If algorithmic unfairness is found on a highly sensitive attribute, the relevant data should be removed from the decision-making database; when it occurs on a moderately sensitive attribute, attribution and intervention can address the training data, the model and the algorithmic mechanism, with fairness re-evaluated after adjustment; for attributes of low sensitivity, the company may combine specific marketing objectives with categorized marketing, such as giving college students rental-marketing preferences during the graduate employment season. Future research can explore and construct more detailed intervention and governance principles and processes. It should also be noted that the unfairness of the algorithm itself differs from the unfairness perceived by users: scholars have pointed out that users' fairness preferences in different cultures largely predict their fairness perceptions of differential customer treatment (Mayser & von Wangenheim, 2013). When designing intervention and governance principles and processes for algorithmic unfairness, users' perceptions in specific situations should therefore be fully considered; these are questions for future in-depth study.

Finally, this paper measures overall algorithmic fairness on only one set of simulated marketing data and does not measure or compare multiple marketing activities; the calculation and analysis are therefore relatively general, and without long-term data tracking they cannot reflect the long-term fairness performance of marketing algorithms. Future research could launch a long-term tracking study at a company, apply the indicator system constructed here to observe long-run algorithmic fairness performance, and explore practical issues of algorithmic fairness governance in that process.

Figures

Figure 1. The relationship between equality of opportunity, equality of odds, the same misjudgment rate, error rate balance and opportunity balance

Figure 2. Marketing link combing and fairness evaluation link identification

Figure 3. The relationship of fairness standards combing

Figure 4. Feature field fairness performance horizontal comparison histogram

Figure 5. Vertical comparison histogram of three fairness measurement dimensions

Figure 6. Visualization diagram of fairness performance value of each feature field

Summary table of fairness standard definitions

Demographic parity: in essence, a comparison between $P(\hat{Y}=1|S=0)$ and $P(\hat{Y}=1|S=1)$.
(1) If the prediction $\hat{Y}$ satisfies $P(\hat{Y}=1|S=0)=P(\hat{Y}=1|S=1)$, the algorithm achieves demographic parity.
(2) Alternatively, the ratio of the two groups, $P(\hat{Y}=1|S=0)\,/\,P(\hat{Y}=1|S=1)$, can be compared.

Individual fairness: an algorithm achieves individual fairness if it predicts the same results for similar individuals. The calculation is the same as for demographic parity, but refined to the level of each person.

Equality of opportunity: achieved if the prediction $\hat{Y}$ satisfies $P(\hat{Y}=1|S=0,Y=1)=P(\hat{Y}=1|S=1,Y=1)$; concretely, the group-conditional true positive rate $S\text{-TPR}=P(\hat{Y}=1|S=s,Y=1)=\frac{TP}{TP+FN}$ is equal across groups.

Equality of odds: on the basis of equality of opportunity, TNR, FPR and FNR are also required to be equal across groups:
$S\text{-TPR}=P(\hat{Y}=1|S=s,Y=1)=\frac{TP}{TP+FN}$
$S\text{-TNR}=P(\hat{Y}=0|S=s,Y=0)=\frac{TN}{TN+FP}$
$S\text{-FPR}=P(\hat{Y}=1|S=s,Y=0)=\frac{FP}{TN+FP}$
$S\text{-FNR}=P(\hat{Y}=0|S=s,Y=1)=\frac{FN}{TP+FN}$

Disparate mistreatment: $S\text{-FPR}+S\text{-FNR}=P(\hat{Y}=1|S=s,Y=0)+P(\hat{Y}=0|S=s,Y=1)$ is equal across groups.

Predictive rate parity: $P(Y=1|S=s,\hat{Y}=1)=\frac{TP}{TP+FP}$ is equal across groups.

Source(s): Table by authors

Fairness evaluation index system of marketing algorithms

Marketing coverage fairness (criteria: equality of opportunity and disparate mistreatment)
Equality of opportunity: $CEOA_i=S\text{-TPR}=P(\hat{Y}=1|S=i,Y=1)=\frac{TP}{TP+FN}$; $CEOD=\sqrt{\sum_{i=1}^{n}(CEOA_i-\overline{CEOA_i})^{2}/n}$
Disparate mistreatment: $CDMA_i=\frac{S\text{-FPR}+S\text{-FNR}}{2}=\frac{1}{2}\big[P(\hat{Y}=1|S=i,Y=0)+P(\hat{Y}=0|S=i,Y=1)\big]=\frac{1}{2}\Big(\frac{FP}{TN+FP}+\frac{FN}{TP+FN}\Big)$; $CDMD=\sqrt{\sum_{i=1}^{n}(CDMA_i-\overline{CDMA_i})^{2}/n}$

Marketing intensity fairness (criterion: demographic parity)
$EDPA_i=\overline{Y(S=i)}/\overline{Y}$, where $Y$ is the amount of red envelopes
$EDPD_j=\sqrt{\sum_{i=1}^{n}(EDPA_i-\overline{EDPA_i})^{2}/n}$; normalized as $EDPD=\frac{EDPD_j-EDPD_{\min}}{EDPD_{\max}-EDPD_{\min}}$
$DPJS_{ij}=JS(S_i\|S_j)=\frac{1}{2}KL\big(S_i\big\|\tfrac{S_i+S_j}{2}\big)+\frac{1}{2}KL\big(S_j\big\|\tfrac{S_i+S_j}{2}\big)=\frac{1}{2}\sum_{x}s_i(x)\log_{2}\frac{2s_i(x)}{s_i(x)+s_j(x)}+\frac{1}{2}\sum_{x}s_j(x)\log_{2}\frac{2s_j(x)}{s_i(x)+s_j(x)}$; $DPJS=\frac{1}{n(n-1)}\sum_{j\neq i}DPJS_{ij}$

Marketing frequency fairness (criterion: demographic parity)
$FDPA_i=\overline{Y(S=i)}/\overline{Y}$, where $Y$ is the number of times the user receives a red envelope within the period
$FDPD_j=\sqrt{\sum_{i=1}^{n}(FDPA_i-\overline{FDPA_i})^{2}/n}$; normalized as $FDPD=\frac{FDPD_j-FDPD_{\min}}{FDPD_{\max}-FDPD_{\min}}$
$DPJS_{ij}$ and $DPJS$ are calculated as above

Note(s): n is the number of categories of the sensitive attribute S, i indexes the population groups under a given sensitive attribute, and j indexes the sensitive attributes

Source(s): Table by authors
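The dispersion statistics in the table (CEOD, CDMD, and the per-attribute EDPD and FDPD) all share one form: a population standard deviation over the per-group metric values. A minimal sketch, with an illustrative function name:

```python
import math

def dispersion(values):
    """sqrt( sum_i (v_i - mean)^2 / n ): the population standard
    deviation of the per-group metric values.  0 means all groups are
    treated identically; larger values indicate less fairness."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((v - mean) ** 2 for v in values) / n)
```

For example, `dispersion` applied to the per-group CEOA values yields CEOD, and applied to the per-group EDPA values yields the un-normalized EDPD for one sensitive attribute.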

Feature field selection

Basic demographic characteristic fields:
A. gender
B. age
C. resident city level

Feature fields reflecting online consumption behavior:
D. 30-day online consumption number
E. 30-day online consumption amount
F. 30-day online transfer number
G. 30-day online transfer amount

Feature fields reflecting APP behavior:
H. 30-day APP login times
I. 30-day payment page access times

Source(s): Table by authors

Feature field algorithm fairness calculation results

Field   CEOD    CDMD    MCF     EDPD    DPJS    MIF     FDPD    DPJS    MFF     Mean    SD
A       0.0000  0.1862  0.0931  0.0099  0.0000  0.0049  0.0000  0.0000  0.0000  0.0327  0.0524
B       1.0000  1.0000  1.0000  0.0716  0.0711  0.0714  0.3062  0.0932  0.1997  0.4237  0.5032
C       0.7967  0.9242  0.8605  0.1300  0.3725  0.2513  0.4558  0.4830  0.4694  0.5270  0.3087
D       0.7849  0.8519  0.8184  0.0000  0.1800  0.0900  0.2196  0.1076  0.1636  0.3573  0.4010
E       0.7119  0.9065  0.8092  0.0099  0.3771  0.1935  0.2496  0.1331  0.1914  0.3980  0.3561
F       0.8007  0.2142  0.5075  1.0000  0.9661  0.9831  0.8753  0.7482  0.8118  0.7674  0.2409
G       0.8961  0.3143  0.6052  0.9832  1.0000  0.9916  1.0000  1.0000  1.0000  0.8656  0.2255
H       0.0601  0.8167  0.4384  0.1551  0.2950  0.2250  0.4821  0.3898  0.4360  0.3665  0.1225
I       0.8364  0.0000  0.4182  0.1390  0.5646  0.3518  0.4750  0.6806  0.5778  0.4493  0.1161
Mean    0.6541  0.5794  0.6167  0.2776  0.4252  0.3514  0.4515  0.4039  0.4277  /       /
SD      0.3423  0.3693  0.2670  0.3857  0.3384  0.3538  0.2984  0.3296  0.3091  /       /

Note(s): Letters A–I represent gender, age, resident city level, 30-day online consumption number, 30-day online consumption amount, 30-day online transfer number, 30-day online transfer amount, 30-day APP login times, and 30-day payment page access times. CEOD and CDMD are the coverage indicators and MCF (marketing coverage fairness) is their mean; EDPD and DPJS are the intensity indicators and MIF (marketing intensity fairness) is their mean; FDPD and DPJS are the frequency indicators and MFF (marketing frequency fairness) is their mean. Mean and SD are taken over MCF, MIF and MFF for each field

Source(s): Table by authors


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant numbers: 72102220, 72192843), by Ant Group and by a grant from the MOE Social Science Laboratory of Digital Economic Forecasts and Policy Simulation at UCAS.

Corresponding author

Mengxi Yang is the corresponding author and can be contacted at: yangmengxi@ucas.ac.cn

About the authors

Mengxi Yang is Associate Professor at the School of Economics and Management and the MOE Social Science Laboratory of Digital Economic Forecasts and Policy Simulation, University of Chinese Academy of Sciences. His research interests include leadership and ethics, and AI and creativity; his work has been published in academic journals such as Journal of Applied Psychology, Human Relations, Human Resource Management, Journal of Organizational Behavior and Journal of Business Ethics.

Jie Guo received her Bachelor’s degree in Business Administration from China University of Political Science and Law in 2023 and is currently a Master’s degree candidate in Business Management at the School of Economics and Management, University of Chinese Academy of Sciences. Her main research interests are digital transformation and management of enterprises, artificial intelligence and leadership ethics.

Lei Zhu holds a Ph.D. in Systems Engineering and has served as a visiting scholar at the University of California San Diego (USA) and Newcastle University (UK). His research focuses on machine learning and statistical inference, large language models, artificial general intelligence and financial technology. He has held positions including Director of the Mathematics Department at Harbin Engineering University and AI Director at a technology subsidiary of one of China’s top two real estate companies. He currently works as a Senior Algorithm Expert at Ant Group, China’s largest internet finance technology company. He has led and participated in various projects, including a sub-project of the Chinese National Programs for High Technology Research and Development and a Youth Fund project of Heilongjiang Province, and has published over 20 papers in conferences and journals (including NeurIPS and SCI/EI-indexed venues). He is a member of the expert group for the National Graduate Mathematical Modeling Competition and the Heilongjiang Provincial Mathematical Modeling Committee, a member of the China Society for Industrial and Applied Mathematics (CSIAM) and a senior member of the China Computer Federation (CCF).

Huijie Zhu is a technical expert in the field of artificial intelligence at Ant Group. He specializes in the abstract modeling of recommendation, marketing and other scenarios, applying AI techniques to solve the resulting problems. He has extensive experience in consumer finance, particularly in the marketing and pricing of consumer credit products. He graduated from the Viterbi School of Engineering at the University of Southern California (USC) in 2017 with a master’s degree; his research directions were electrical engineering and computer engineering.

Xia Song obtained her Bachelor’s degree in Economic Statistics from Shandong University of Finance and Economics in 2019, and now she is a Ph.D. student in Innovation Management at School of Economics and Management of the University of Chinese Academy of Sciences. Her main research interests are organizational ecology, platform economy and enterprise habitats.

Hui Zhang studied at Beihang University from 2007 to 2011 and received his Bachelor of Science degree in Mathematics and Applied Mathematics in 2011. He then studied Control Science and Engineering at Beihang University from 2011 to 2014 and obtained his Master of Engineering degree in 2014. Since graduation, he has been engaged in the research and application of artificial intelligence in recommendation systems, advertising systems, natural language processing and related fields. At present, he works on the research and application of algorithms in Ant Group’s marketing business.

Tianxiang Xu earned a Master’s degree in Operations Research at Cornell University and a Bachelor’s degree in Control Science and Engineering at Zhejiang University. His research focuses on advertising, recommendation, forecasting, optimization and revenue management, and he currently leads a machine learning team at a large internet enterprise.
