Search results
1 – 10 of over 2,000 results
Mengxi Yang, Jie Guo, Lei Zhu, Huijie Zhu, Xia Song, Hui Zhang and Tianxiang Xu
Objectively evaluating the fairness of the algorithm, exploring in specific scenarios combined with scenario characteristics and constructing the algorithm fairness evaluation…
Abstract
Purpose
This paper aims to objectively evaluate algorithmic fairness by exploring specific scenarios in combination with their characteristics and by constructing an algorithm fairness evaluation index system for those scenarios.
Design/methodology/approach
This paper selects marketing scenarios and, following the approach of "theory construction - scene feature extraction - enterprise practice," summarizes the definitions and standards of fairness, maps the application process of marketing algorithms and establishes a fairness evaluation index system for marketing equity allocation algorithms. Taking simulated marketing data as an example, the fairness of marketing algorithms across selected feature fields is measured, verifying the effectiveness of the evaluation system proposed in this paper.
Findings
The study reached the following conclusions: (1) Different fairness evaluation criteria have different emphases, and may produce different results. Therefore, different fairness definitions and standards should be selected in different fields according to the characteristics of the scene. (2) The fairness of the marketing equity distribution algorithm can be measured from three aspects: marketing coverage, marketing intensity and marketing frequency. Specifically, for the fairness of coverage, two standards of equal opportunity and different misjudgment rates are selected, and the standard of group fairness is selected for intensity and frequency. (3) For different characteristic fields, different degrees of fairness restrictions should be imposed, and the interpretation of their calculation results and the means of subsequent intervention should also be different according to the marketing objectives and industry characteristics.
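The three criteria above can be made concrete with a small sketch (not the paper's code): for a binary marketing-eligibility classifier, equal opportunity compares true-positive rates across groups, the misjudgment-rate criterion compares false-positive rates and group fairness compares positive-prediction rates. The group labels and data below are hypothetical.

```python
def rates(y_true, y_pred):
    """Return (true-positive rate, false-positive rate) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def fairness_gaps(groups):
    """groups: {name: (y_true, y_pred)}. Returns between-group gaps for
    equal opportunity (TPR parity), misjudgment rate (FPR parity) and
    group fairness (positive-prediction-rate parity)."""
    tprs, fprs, pprs = {}, {}, {}
    for g, (yt, yp) in groups.items():
        tprs[g], fprs[g] = rates(yt, yp)
        pprs[g] = sum(yp) / len(yp)
    gap = lambda d: max(d.values()) - min(d.values())
    return {"equal_opportunity_gap": gap(tprs),
            "misjudgment_gap": gap(fprs),
            "group_fairness_gap": gap(pprs)}
```

A gap near zero on a given criterion indicates parity across groups on that criterion; as the findings note, the same predictions can score well on one criterion and poorly on another.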
Research limitations/implications
First, the fairness sensitivity of different feature fields differs, but this paper does not classify feature fields by importance. In the future, a classification table of sensitive attributes could be built according to their importance, giving different evaluation and protection priorities. Second, only one set of simulated marketing data is used to measure overall algorithm fairness; subsequent work could measure and compare multiple marketing campaigns to reflect the long-term fairness performance of marketing algorithms. Third, this paper does not explore interventions and measures to improve algorithmic fairness. Different feature fields should be subject to different degrees of fairness constraints, so their subsequent interventions should also differ, which needs continued exploration in future research.
Practical implications
This paper combines the specific features of marketing scenarios and selects appropriate fairness evaluation criteria to build an index system for fairness evaluation of marketing algorithms, which provides a reference for assessing and managing the fairness of marketing algorithms.
Social implications
Algorithm governance and algorithmic fairness are important issues in the era of artificial intelligence. The algorithmic fairness evaluation index system for marketing scenarios constructed in this paper lays a secure foundation for applying AI algorithms and technologies in marketing scenarios, provides tools and means for algorithm governance and supports the safe, efficient and orderly development of algorithms.
Originality/value
This paper first comprehensively sorts out the standards of fairness and clarifies the differences between standards and their evaluation focuses. Second, focusing on the marketing scenario and its characteristics, it identifies the key fairness evaluation links, innovatively selects different standards to evaluate fairness across the marketing algorithm application process and builds the corresponding index system, forming a systematic fairness evaluation tool for marketing algorithms.
Details
Keywords
Jaya Choudhary, Mangey Ram and Ashok Singh Bhandari
This research introduces an innovation strategy aimed at bolstering the reliability of a renewable energy resource, which is hybrid energy systems, through the application of a…
Abstract
Purpose
This research introduces an innovation strategy aimed at bolstering the reliability of a renewable energy resource, which is hybrid energy systems, through the application of a metaheuristic algorithm. The growing need for sustainable energy solutions underscores the importance of integrating various energy sources effectively. Concentrating on the intermittent characteristics of renewable sources, this study seeks to create a highly reliable hybrid energy system by combining photovoltaic (PV) and wind power.
Design/methodology/approach
To obtain efficient renewable energy resources, system designers aim to enhance the system’s reliability. Generally, for this purpose, the reliability redundancy allocation problem (RRAP) method is utilized. The authors have also introduced a new methodology, named Reliability Redundancy Allocation Problem with Component Mixing (RRAP-CM), for optimizing systems’ reliability. This method incorporates heterogeneous components to create a nonlinear mixed-integer mathematical model, classified as NP-hard problems. We employ specially crafted metaheuristic algorithms as optimization strategies to address these challenges and boost the overall system performance.
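The classical RRAP objective the authors extend can be illustrated with a short sketch: a series system of parallel-redundant subsystems whose reliability is maximized under a cost budget. A plain random search stands in for the paper's six metaheuristics, and all reliabilities, costs and the budget below are hypothetical.

```python
import random

def system_reliability(n, r):
    """Series system of subsystems, each with n_i identical redundant
    components of reliability r_i: R = prod_i (1 - (1 - r_i)**n_i)."""
    R = 1.0
    for ni, ri in zip(n, r):
        R *= 1.0 - (1.0 - ri) ** ni
    return R

def random_search(r, cost, budget, trials=5000, seed=0):
    """Search redundancy levels n_i in 1..4 maximizing reliability
    subject to a total cost budget (a metaheuristic would replace this)."""
    rng = random.Random(seed)
    best_n, best_R = None, -1.0
    for _ in range(trials):
        n = [rng.randint(1, 4) for _ in r]
        if sum(ci * ni for ci, ni in zip(cost, n)) > budget:
            continue  # infeasible allocation, skip
        R = system_reliability(n, r)
        if R > best_R:
            best_n, best_R = n, R
    return best_n, best_R

r = [0.80, 0.90, 0.85]   # hypothetical per-component reliabilities
cost = [2.0, 3.0, 1.5]   # hypothetical per-component costs
n, R = random_search(r, cost, budget=15.0)
```

The RRAP-CM extension additionally allows mixing heterogeneous component types within a subsystem, which enlarges the (already NP-hard) discrete search space.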
Findings
The study introduces six newly designed metaheuristic algorithms to solve the optimization problem. Comparing results between the traditional RRAP method and the innovative RRAP-CM method shows that enhanced reliability is achieved through the blending of diverse components. The use of metaheuristic algorithms proves advantageous in identifying optimal configurations, ensuring resource efficiency and maximizing energy output in a hybrid energy system.
Research limitations/implications
The study’s findings have significant social implications because they contribute to the renewable energy field. The proposed methodologies offer a flexible and reliable mechanism for enhancing the efficiency of hybrid energy systems. By addressing the intermittent nature of renewable sources, this research promotes the design of highly reliable sustainable energy solutions, potentially influencing global efforts towards a more environmentally friendly and reliable energy landscape.
Practical implications
The research provides practical insights by delivering a comprehensive analysis of a hybrid energy system incorporating both PV and wind components. Also, the use of metaheuristic algorithms aids in identifying optimal configurations, promoting resource efficiency and maximizing reliability. These practical insights contribute to advancing sustainable energy solutions and designing efficient, reliable hybrid energy systems.
Originality/value
This work is original as it combines the RRAP-CM methodology with six new robust metaheuristics, integrating diverse components to enhance system reliability. The formulation of a nonlinear mixed-integer mathematical model adds complexity, categorizing it as an NP-hard problem. The six new metaheuristic algorithms, designed specifically for optimization in hybrid energy systems, further highlight the uniqueness of this research approach.
Details
Keywords
Dukun Xu, Yimin Deng and Haibin Duan
This paper aims to develop a method for tuning the parameters of the active disturbance rejection controller (ADRC) for fixed-wing unmanned aerial vehicles (UAVs). The bald eagle…
Abstract
Purpose
This paper aims to develop a method for tuning the parameters of the active disturbance rejection controller (ADRC) for fixed-wing unmanned aerial vehicles (UAVs). The bald eagle search (BES) algorithm has been improved, and a cost function has been designed to enhance the optimization efficiency of ADRC parameters.
Design/methodology/approach
A six-degree-of-freedom nonlinear model for a fixed-wing UAV has been developed, and its attitude controller has been formulated using the active disturbance rejection control method. The parameters of the disturbance rejection controller have been fine-tuned using the collaborative mutual promotion bald eagle search (CMP-BES) algorithm. The pitch and roll controllers for the UAV have been individually optimized to obtain the most effective controller parameters.
Findings
Inspired by the salp swarm algorithm (SSA), the interaction among individual eagles has been incorporated into the CMP-BES algorithm, thereby enhancing the algorithm's exploration capability. The efficient and accurate optimization ability of the proposed algorithm has been demonstrated through comparative experiments with the genetic algorithm, particle swarm optimization, Harris hawks optimization (HHO), BES and modified bald eagle search algorithms. The algorithm's capability to solve complex optimization problems has been further proven by testing on the CEC2017 test function suite. A transitional function for fitness calculation has been introduced to accelerate the algorithm's search for optimal ADRC controller parameters. The tuned ADRC controller has been compared with the classical proportional-integral-derivative (PID) controller, with gust disturbances introduced to the UAV body axis. The results have shown that the tuned ADRC controller has faster response times and stronger disturbance rejection capabilities than the PID controller.
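The fitness-driven tuning loop described above follows a common pattern that can be sketched generically: each candidate controller parameter is scored by simulating a step response and accumulating a time-weighted tracking error (ITAE-style), and an optimizer searches the parameter space. The toy second-order plant, proportional controller and grid search below are illustrative stand-ins, not the paper's ADRC model, transitional fitness function or CMP-BES algorithm.

```python
def itae_cost(kp, dt=0.01, t_end=10.0, ref=1.0):
    """Score one candidate gain by simulating a unit-step response of a
    toy damped plant (x'' = u - x') under proportional control, and
    accumulating the ITAE criterion: integral of t * |error| dt."""
    x, v, t, cost = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        u = kp * (ref - x)            # proportional control law
        v += (u - v) * dt             # semi-implicit Euler on velocity
        x += v * dt                   # then position
        t += dt
        cost += t * abs(ref - x) * dt # time-weighted absolute error
    return cost

# Crude grid search over candidate gains; a metaheuristic such as the
# paper's CMP-BES would replace this enumeration on real problems.
gains = [0.5 * k for k in range(1, 41)]
best_kp = min(gains, key=itae_cost)
```

Weighting the error by elapsed time penalizes slow settling and lingering oscillation more than the initial transient, which is why ITAE-style costs are popular for controller-parameter search.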
Practical implications
The proposed CMP-BES algorithm, combined with a fitness function composed of transition functions, can be used to optimize the ADRC controller parameters for fixed-wing UAVs more quickly and effectively. The tuned ADRC controller has exhibited excellent robustness and disturbance rejection capabilities.
Originality/value
The CMP-BES algorithm and transitional function have been proposed for the parameter optimization of the active disturbance rejection controller for fixed-wing UAVs.
Details
Keywords
Yingjie Yu, Shuai Chen, Xinpeng Yang, Changzhen Xu, Sen Zhang and Wendong Xiao
This paper proposes a self-supervised monocular depth estimation algorithm under multiple constraints, which can generate the corresponding depth map end-to-end based on RGB…
Abstract
Purpose
This paper proposes a self-supervised monocular depth estimation algorithm under multiple constraints, which generates the corresponding depth map end-to-end from RGB images. Building on the traditional visual simultaneous localisation and mapping (VSLAM) framework, a dynamic object detection framework based on deep learning is then introduced, and dynamic objects in the scene are culled during mapping.
Design/methodology/approach
Typical SLAM algorithms or data sets assume a static environment and do not consider the potential consequences of accidentally adding dynamic objects to a 3D map. This shortcoming limits the applicability of VSLAM in many practical cases, such as long-term mapping. In light of these considerations, this paper presents a self-supervised monocular depth estimation algorithm based on deep learning. Furthermore, it introduces the YOLOv5 dynamic detection framework into the traditional ORB-SLAM2 algorithm to remove dynamic objects.
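The dynamic-object culling step can be sketched as follows: feature points that fall inside any detector bounding box (as a YOLOv5-style detector would produce) are discarded before they enter tracking and mapping. The box format and coordinates are hypothetical, not taken from ORB-SLAM2 or the paper.

```python
def inside(pt, box):
    """True if point (x, y) lies within an axis-aligned box (x1, y1, x2, y2)."""
    (x, y), (x1, y1, x2, y2) = pt, box
    return x1 <= x <= x2 and y1 <= y <= y2

def cull_dynamic(keypoints, boxes):
    """Keep only keypoints lying outside every dynamic-object box,
    so moving objects do not contaminate the 3D map."""
    return [p for p in keypoints if not any(inside(p, b) for b in boxes)]

kps = [(10, 10), (50, 60), (200, 120)]   # hypothetical feature points
boxes = [(40, 40, 100, 100)]             # one detected dynamic object, say
static_kps = cull_dynamic(kps, boxes)
```

Only the surviving static keypoints would then feed pose estimation and map construction.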
Findings
Compared with Dyna-SLAM, the algorithm proposed in this paper reduces the error by about 13%, and compared with ORB-SLAM2 by about 54.9%. In addition, the algorithm in this paper can process a single frame of image at a speed of 15–20 FPS on GeForce RTX 2080s, far exceeding Dyna-SLAM in real-time performance.
Originality/value
This paper proposes a VSLAM algorithm that can be applied to dynamic environments. The algorithm consists of a self-supervised monocular depth estimation part under multiple constraints and the introduction of a dynamic object detection framework based on YOLOv5.
Details
Keywords
The energy generation process through photovoltaic (PV) panels is contingent upon uncontrollable variables such as wind patterns, cloud cover, temperatures, solar irradiance…
Abstract
Purpose
The energy generation process through photovoltaic (PV) panels is contingent upon uncontrollable variables such as wind patterns, cloud cover, temperatures, solar irradiance intensity and duration of exposure. Fluctuations in these variables can lead to interruptions in power generation and losses in output. This study aims to establish a measurement setup that enables monitoring, tracking and prediction of the generated energy in a PV energy system to ensure overall system security and stability. Toward this goal, data pertaining to the PV energy system is measured and recorded in real-time independently of location. Subsequently, the recorded data is used for power prediction.
Design/methodology/approach
Data obtained from the experimental setup include voltage and current values of the PV panel, battery and load; temperature readings of the solar panel surface, environment and the battery; and measurements of humidity, pressure and radiation values in the panel’s environment. These data were monitored and recorded in real-time through a computer interface and mobile interface enabling remote access. For prediction purposes, machine learning methods, including the gradient boosting regressor (GBR), support vector machine (SVM) and k-nearest neighbors (k-NN) algorithms, have been selected. The resulting outputs have been interpreted through graphical representations. For the numerical interpretation of the obtained predictive data, performance measurement criteria such as mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE) and R-squared (R2) have been used.
Findings
It has been determined that the most successful prediction model is k-NN, whereas the prediction model with the lowest performance is SVM. According to the accuracy performance comparison conducted on the test data, k-NN exhibits the highest accuracy rate of 82%, whereas the accuracy rate for the GBR algorithm is 80%, and the accuracy rate for the SVM algorithm is 72%.
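The evaluation metrics named above, together with a minimal k-nearest-neighbours regressor, can be sketched in a few lines; the data and feature layout are made up and the study's actual models and measurements are not reproduced.

```python
import math

def metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R^2 for a regression prediction."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - mse * n / ss_tot if ss_tot else float("nan")
    return {"MAE": mae, "MSE": mse, "RMSE": math.sqrt(mse), "R2": r2}

def knn_predict(X_train, y_train, x, k=3):
    """k-NN regression: average the targets of the k training points
    nearest to x (squared Euclidean distance)."""
    order = sorted(range(len(X_train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)))
    return sum(y_train[i] for i in order[:k]) / k
```

R^2 close to 1 indicates the model explains most of the variance in the measured power, while MAE and RMSE express the error in the units of the target quantity.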
Originality/value
The experimental setup used in this study, including the measurement and monitoring apparatus, has been specifically designed for this research. The system is capable of remote monitoring both through a computer interface and a custom-developed mobile application. Measurements were conducted on the Karabük University campus, thereby revealing the energy potential of the Karabük province. This system serves as an exemplary study and can be deployed to any desired location for remote monitoring. Numerous methods and techniques exist for power prediction. In this study, contemporary machine learning techniques, which are pertinent to power prediction, have been used, and their performances are presented comparatively.
Details
Keywords
Allison Starks and Stephanie Michelle Reich
This study aims to explore children’s cognitions about data flows online and their understandings of algorithms, often referred to as algorithmic literacy or algorithmic folk…
Abstract
Purpose
This study aims to explore children’s cognitions about data flows online and their understandings of algorithms, often referred to as algorithmic literacy or algorithmic folk theories, in their everyday uses of social media and YouTube. The authors focused on children ages 8 to 11, as these are the ages when most youth acquire their own device and use social media and YouTube, despite platform age requirements.
Design/methodology/approach
Nine focus groups with 34 socioeconomically, racially and ethnically diverse children (8–11 years) were conducted in California. Groups discussed data flows online, digital privacy, algorithms and personalization across platforms.
Findings
Children had several misconceptions about privacy risks, privacy policies, what kinds of data are collected about them online and how algorithms work. Older children had more complex and partially accurate theories about how algorithms determine the content they see online, compared to younger children. All children were using YouTube and/or social media despite age gates and children used few strategies to manage the flow of their personal information online.
Practical implications
The paper includes implications for digital and algorithmic literacy efforts, improving the design of privacy consent practices and user controls, and regulation for protecting children’s privacy online.
Originality/value
Research has yet to explore what socioeconomically, racially and ethnically diverse children understand about datafication and algorithms online, especially in middle childhood.
Details
Keywords
Sijie Tong, Qingchen Liu, Qichao Ma and Jiahu Qin
This paper aims to address the safety concerns of path-planning algorithms in dynamic obstacle warehouse environments. It proposes a method that uses improved artificial potential…
Abstract
Purpose
This paper aims to address the safety concerns of path-planning algorithms in dynamic obstacle warehouse environments. It proposes a method that uses improved artificial potential fields (IAPF) as expert knowledge for an improved deep deterministic policy gradient (IDDPG) and designs a hierarchical strategy for robots through obstacle detection methods.
Design/methodology/approach
The IAPF algorithm is used as the expert experience of reinforcement learning (RL) to reduce the useless exploration in the early stage of RL training. A strategy-switching mechanism is introduced during training to adapt to various scenarios and overcome challenges related to sparse rewards. Sensor inputs, including light detection and ranging data, are integrated to detect obstacles around waypoints, guiding the robot toward the target point.
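A minimal artificial potential field computation, the building block that the IAPF method improves, can be sketched as follows: an attractive force pulls the robot toward the goal and a repulsive force pushes it away from obstacles within an influence radius d0 (the classical Khatib formulation). Gains and geometry are illustrative; the paper's safety-domain judgment rules are not reproduced.

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Resultant 2D force: attractive term -grad(0.5*k_att*d_goal^2)
    plus a repulsive term for each obstacle closer than d0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:  # repulsion acts only inside the influence radius
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    return fx, fy
```

In the hybrid scheme described above, forces like these would serve as expert guidance for the RL policy rather than directly driving the robot.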
Findings
Simulation experiments demonstrate that the integrated use of IDDPG and the IAPF method significantly enhances the safety and training efficiency of path planning for mobile robots.
Originality/value
This method enhances safety by applying safety domain judgment rules to improve the APF's security and by designing an obstacle detection method for better danger anticipation. It also boosts training efficiency by using IAPF as expert experience for DDPG and by the classification storage and sampling design for the RL experience pool. Additionally, adjustments to the actor network's update frequency expedite convergence.
Details
Keywords
This study aims to propose a force control algorithm based on neural networks, which enables a robot to follow a changing reference force trajectory when in contact with human…
Abstract
Purpose
This study aims to propose a force control algorithm based on neural networks, which enables a robot to follow a changing reference force trajectory when in contact with human skin while maintaining a stable tracking force.
Design/methodology/approach
To address the challenge of robots having difficulty tracking changing force trajectories in skin contact scenarios, a single-neuron adaptive proportional-integral-derivative (PID) algorithm provides online compensation on top of traditional impedance control. At the same time, to better adapt to changes in the skin contact environment, a gated recurrent unit (GRU) network is used to model and predict skin elasticity coefficients, thus adjusting to the uncertainty of skin environments.
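The force-tracking problem described above can be illustrated with a toy one-dimensional sketch: the "skin" is modelled as a spring whose contact force is proportional to penetration, and an integral-style adjustment of the position command stands in for the paper's single-neuron adaptive PID compensation. The stiffness values and gain below are hypothetical; they merely show why online compensation is needed when the elasticity coefficient is uncertain.

```python
def track_force(f_ref, k_e, steps=2000, dt=0.001, gain=5.0):
    """Drive the measured contact force toward f_ref against a spring-like
    environment f = k_e * penetration, by integrating the force error
    into the position command. Returns the final measured force."""
    x_cmd, f = 0.0, 0.0
    for _ in range(steps):
        f = max(0.0, k_e * x_cmd)   # contact force from skin stiffness
        e = f_ref - f               # force tracking error
        x_cmd += gain * e * dt      # integral-style command adjustment
    return f
```

Note that the loop converges for both stiffness values tested below; in practice the convergence speed and stability margin depend on k_e, which is exactly why the paper predicts the elasticity coefficient online with a GRU.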
Findings
In two robot–skin interaction experiments, compared with the traditional impedance control and robot force control algorithm based on the radial basis function model and iterative algorithm, the maximum absolute force error, the average absolute force error and the standard deviation of the force error are all decreased.
Research limitations/implications
As the training process of the GRU network is currently conducted offline, the focus in the subsequent phase is to refine the network to facilitate real-time computation of the algorithm.
Practical implications
This algorithm can be applied to robot massage, robot B-ultrasound and other robot-assisted treatment scenarios.
Originality/value
As the proposed approach obtains effective force tracking during robot–skin contact and is verified by the experiment, this approach can be used in robot–skin contact scenarios to enhance the accuracy of force application by a robot.
Details
Keywords
Anna R. Oliveri and Jeffrey Paul Carpenter
The purpose of this conceptual paper is to describe how the affinity space concept has been used to frame learning via social media, and call for and discuss a refresh of the…
Abstract
Purpose
The purpose of this conceptual paper is to describe how the affinity space concept has been used to frame learning via social media, and call for and discuss a refresh of the affinity space concept to accommodate changes in social media platforms and algorithms.
Design/methodology/approach
Guided by a sociocultural perspective, this paper reviews and discusses some ways the affinity space concept has been used to frame studies across various contexts, its benefits and disadvantages and how it has already evolved. It then calls for and describes a refresh of the affinity space concept.
Findings
Although conceptualized 20 years ago, the affinity space concept remains relevant to understanding social media use for learning. However, a refresh is needed to accommodate how platforms have changed, algorithms’ evolving role in social media participation and how these technologies influence users’ interactions and experiences. This paper offers three perspectives to expand the affinity space concept’s usefulness in an increasingly platformized and algorithmically mediated world.
Practical implications
This paper underscores the importance of algorithmic literacy for learners and educators, as well as regulations and guidance for social media platforms.
Originality/value
This conceptual paper revisits and updates a widely utilized conceptual framing with consideration for how social media platform design and algorithms impact interactions and shape user experiences.
Details
Keywords
Wiput Tuvayanond, Viroon Kamchoom and Lapyote Prasittisopin
This paper aims to clarify the efficient process of the machine learning algorithms implemented in the ready-mix concrete (RMC) onsite. It proposes innovative machine learning…
Abstract
Purpose
This paper aims to clarify the efficient process of the machine learning algorithms implemented in the ready-mix concrete (RMC) onsite. It proposes innovative machine learning algorithms in terms of preciseness and computation time for the RMC strength prediction.
Design/methodology/approach
This paper presents an investigation of five different machine learning algorithms, namely, multilinear regression, support vector regression, k-nearest neighbors, extreme gradient boosting (XGBOOST) and deep neural network (DNN), that can be used to predict the 28- and 56-day compressive strengths of nine mix designs and four mixing conditions. Two algorithms were designated for fitting the actual and predicted 28- and 56-day compressive strength data. Moreover, the 28-day compressive strength data were implemented to predict 56-day compressive strength.
Findings
The compressive strength was predicted most effectively by the DNN and XGBOOST algorithms. The computation time of the XGBOOST algorithm was considerably faster than that of the DNN, making it the most suitable strength prediction tool for RMC.
Research limitations/implications
Since machine learning has not yet been practically adopted for strength prediction in RMC, the scope of this work focuses on commercially available algorithms. The adoption of modified methods to fit the RMC data should be determined thereafter.
Practical implications
The selected algorithms offer efficient prediction, promoting sustainability in the RMC industry. A standard adopting such algorithms can be established, replacing traditional labor-intensive testing. Manufacturers can conduct research to introduce machine learning into the quality control process of their plants.
Originality/value
In the literature, machine learning has been assessed for laboratory concrete mix design and concrete performance. A study based on on-site production and prolonged mixing parameters is lacking.
Details