Search results

1 – 10 of over 1000
Article
Publication date: 22 June 2023

Cristiano Busco, Fabrizio Granà and Maria Federica Izzo

Although accounting and reporting visualisations (i.e. graphs, maps and grids) are often used to veil organisations’ non-transparent actions, these practices perform irrespective…

Abstract

Purpose

Although accounting and reporting visualisations (i.e. graphs, maps and grids) are often used to veil organisations’ non-transparent actions, these practices perform irrespective of their ability to represent facts. In this research, the authors explore accounting and reporting visualisations beyond their persuasive and representational purpose.

Design/methodology/approach

By building on previous research on the rhetoric of visualisations, the authors illustrate how the design of accounting visualisations within integrated reports engages managers in a recursive process of knowledge construction, interrogation, reflection and speculation on what sustainable value creation means. The authors articulate the theoretical framework by developing a longitudinal field study at International Fashion Company, a medium-sized company operating in the fashion industry.

Findings

This research shows that accounting and reporting visualisations not only contribute to creating unclear and often contradictory representations of organisations’ sustainable performance but also, at the same time, “open up” and support managers’ unfolding search for “sustainable value” by reducing its unknown meaning to known and understandable categories. The inconsistencies and imperfections that accounting and reporting visualisations leave behind constitute the conditions of possibility for the interrogation of the unknown to happen in practice, thus augmenting managers’ questioning, reflection and speculation on what sustainable value means.

Originality/value

This study shows that accounting and reporting visualisations can represent good practices (the authors are not saying a “solution”) through which managers can re-appreciate the complexities of measuring and defining something that is intrinsically unknown and unknowable, especially in contexts where best practices have not yet consolidated into a norm. Topics such as climate change and sustainable development are out there and cannot be ignored, cannot be reduced through persuasive accounts and, therefore, need to be embraced.

Details

Accounting, Auditing & Accountability Journal, vol. 37 no. 1
Type: Research Article
ISSN: 0951-3574


Open Access
Article
Publication date: 25 January 2024

Atef Gharbi

The purpose of the paper is to propose and demonstrate a novel approach for addressing the challenges of path planning and obstacle avoidance in the context of mobile robots (MR)…

Abstract

Purpose

The purpose of the paper is to propose and demonstrate a novel approach for addressing the challenges of path planning and obstacle avoidance in the context of mobile robots (MR). The specific objectives outlined in the paper are: (1) introducing a new methodology that combines Q-learning with dynamic reward to improve the efficiency of path planning and obstacle avoidance; (2) enhancing the navigation of MR through unfamiliar environments by reducing blind exploration and accelerating convergence to optimal solutions; and (3) demonstrating through simulation results that the proposed method, dynamic reward-enhanced Q-learning (DRQL), outperforms existing approaches by converging to an optimal action strategy more efficiently, requiring less time and improving path exploration with fewer steps and higher average rewards.

Design/methodology/approach

The design adopted in this paper to achieve its purposes involves the following key components: (1) Combination of Q-learning and dynamic reward: the paper’s design integrates Q-learning, a popular reinforcement learning technique, with dynamic reward mechanisms. This combination forms the foundation of the approach. Q-learning is used to learn and update the robot’s action-value function, while dynamic rewards are introduced to guide the robot’s actions effectively. (2) Data accumulation during navigation: when an MR navigates through an unfamiliar environment, it accumulates experience data. This data collection is a crucial part of the design, as it enables the robot to learn from its interactions with the environment. (3) Dynamic reward integration: dynamic reward mechanisms are integrated into the Q-learning process. These mechanisms provide feedback to the robot based on its actions, guiding it towards decisions that lead to better outcomes. Dynamic rewards help reduce blind exploration, which can be time-consuming and inefficient, and promote faster convergence to optimal solutions. (4) Simulation-based evaluation: to assess the effectiveness of the proposed approach, the design includes a simulation-based evaluation. This evaluation uses simulated environments and scenarios to test the performance of the DRQL method. (5) Performance metrics: the design incorporates performance metrics to measure the success of the approach, including convergence speed, exploration efficiency, the number of steps taken and the average rewards obtained during the robot’s navigation.
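As a rough illustration of how tabular Q-learning can be paired with a dynamic (shaped) reward, the following Python sketch runs the standard Q-update on a small grid world in which the reward includes a progress term toward the goal. The grid layout, reward values and hyperparameters are illustrative assumptions, not the environment or parameters reported in the paper.

```python
import numpy as np

# Minimal tabular Q-learning with a distance-based dynamic reward term.
# Grid, reward shaping and hyperparameters are illustrative assumptions.

GRID = 10                                      # 10 x 10 grid world
GOAL = (9, 9)
OBSTACLES = {(3, 3), (3, 4), (6, 7), (7, 2)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

q_table = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.2

def dynamic_reward(state, attempted):
    """Static goal/collision reward plus a shaping term that rewards progress."""
    if attempted == GOAL:
        return 100.0
    if attempted in OBSTACLES:
        return -50.0
    d_old = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    d_new = abs(attempted[0] - GOAL[0]) + abs(attempted[1] - GOAL[1])
    return -1.0 + 2.0 * (d_old - d_new)        # progress toward the goal is rewarded

def step(state, action_idx):
    dr, dc = ACTIONS[action_idx]
    r = min(max(state[0] + dr, 0), GRID - 1)
    c = min(max(state[1] + dc, 0), GRID - 1)
    attempted = (r, c)
    reward = dynamic_reward(state, attempted)  # penalise attempted collisions
    next_state = state if attempted in OBSTACLES else attempted
    return next_state, reward

for episode in range(500):
    state = (0, 0)
    for _ in range(200):
        if np.random.rand() < epsilon:                       # explore
            a = np.random.randint(len(ACTIONS))
        else:                                                # exploit
            a = int(np.argmax(q_table[state[0], state[1]]))
        next_state, reward = step(state, a)
        best_next = np.max(q_table[next_state[0], next_state[1]])
        td_target = reward + gamma * best_next
        q_table[state[0], state[1], a] += alpha * (td_target - q_table[state[0], state[1], a])
        state = next_state
        if state == GOAL:
            break
```

The shaping term is what makes the reward "dynamic" here: it depends on the robot's progress at each step rather than only on terminal events, which is one simple way to reduce blind exploration.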

Findings

The findings of the paper can be summarized as follows: (1) Efficient path planning and obstacle avoidance: the paper’s proposed approach, DRQL, leads to more efficient path planning and obstacle avoidance for MR. This is achieved through the combination of Q-learning and dynamic reward mechanisms, which guide the robot’s actions effectively. (2) Faster convergence to optimal solutions: DRQL accelerates the convergence of the MR to optimal action strategies. Dynamic rewards reduce the need for blind exploration, which typically consumes time, resulting in quicker attainment of optimal solutions. (3) Reduced exploration time: the integration of dynamic reward mechanisms significantly reduces the time required for exploration during navigation. This reduction in exploration time contributes to more efficient and quicker path planning. (4) Improved path exploration: the results from the simulations indicate that the DRQL method leads to improved path exploration in unknown environments. The robot takes fewer steps to reach its destination, which is a crucial indicator of efficiency. (5) Higher average rewards: the paper’s findings reveal that MR using DRQL receive higher average rewards during navigation. This suggests that the proposed approach results in better decision-making and more successful navigation.

Originality/value

The paper’s originality stems from its unique combination of Q-learning and dynamic rewards, its focus on efficiency and speed in MR navigation and its ability to enhance path exploration and average rewards. These original contributions have the potential to advance the field of mobile robotics by addressing critical challenges in path planning and obstacle avoidance.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Article
Publication date: 12 September 2023

Yang Zhou, Long Wang, Yongbin Lai and Xiaolong Wang

The coupling process between the loading mechanism and the tank car mouth is a crucial step in the tank car loading process. The purpose of this paper is to design a method to…

Abstract

Purpose

The coupling process between the loading mechanism and the tank car mouth is a crucial step in the tank car loading process. The purpose of this paper is to design a method to accurately measure the pose of the tank car mouth.

Design/methodology/approach

The collected image is first subjected to a gray enhancement operation, and the black parts of the image are extracted using Otsu’s threshold segmentation and morphological processing. The edge pixels are then filtered to remove outliers and noise, and the remaining effective points are used to fit the contour information of the tank car mouth. Using the successfully extracted contour information, the pose information of the tank car mouth in the camera coordinate system is obtained by establishing a binocular projection elliptical cone model, and the pixel position of the real circle center is obtained through the projection section. Finally, the binocular triangulation method is used to determine the position information of the tank car mouth in space.
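A minimal sketch of an image-processing chain of this general shape (grey-level enhancement, Otsu segmentation, morphological cleaning, outlier filtering, ellipse fitting and binocular triangulation) is given below using OpenCV. The use of OpenCV, the kernel sizes, the outlier rule and the camera projection matrices are assumptions made for illustration; they are not the paper’s actual pipeline or parameters.

```python
import cv2
import numpy as np

def extract_mouth_ellipse(image_bgr):
    """Fit an ellipse to the dominant dark contour in a single camera image.

    Kernel sizes, thresholds and the outlier rule are illustrative assumptions.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                       # grey-level enhancement
    # Otsu segmentation: the mouth region is assumed darker than its surroundings.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)        # keep the dominant contour
    pts = contour.reshape(-1, 2).astype(np.float32)

    # Crude outlier rejection: drop edge points far from the median centroid radius.
    centroid = pts.mean(axis=0)
    radii = np.linalg.norm(pts - centroid, axis=1)
    keep = np.abs(radii - np.median(radii)) < 2.0 * radii.std() + 1e-6
    pts = pts[keep]
    if len(pts) < 5:                                    # fitEllipse needs >= 5 points
        return None
    return cv2.fitEllipse(pts)                          # ((cx, cy), (major, minor), angle)

def triangulate_center(center_left, center_right, proj_left, proj_right):
    """Recover the 3-D mouth centre from its pixel positions in both cameras."""
    pl = np.array(center_left, dtype=np.float64).reshape(2, 1)
    pr = np.array(center_right, dtype=np.float64).reshape(2, 1)
    point_h = cv2.triangulatePoints(proj_left, proj_right, pl, pr)  # homogeneous 4x1
    return (point_h[:3] / point_h[3]).ravel()           # (X, Y, Z) in camera frame
```

The binocular projection elliptical cone model described in the abstract is not reproduced here; this sketch only covers the contour extraction and the final triangulation step.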

Findings

Experimental results have shown that this method for measuring the position and orientation of the tank car mouth is highly accurate and can meet the requirements for industrial loading accuracy.

Originality/value

A method for extracting the contours of various types of complex tank car mouths is proposed. This method can accurately extract the contour of the tank car mouth even when the contour is occluded or disturbed. Based on the binocular elliptical cone model and perspective projection theory, an innovative method for measuring the pose of the tank car mouth is proposed, and the pose ambiguity is resolved using the spatial characteristics of the tank car mouth itself. This provides a new idea for the automatic loading of ash tank cars.

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 1 April 2024

Tao Pang, Wenwen Xiao, Yilin Liu, Tao Wang, Jie Liu and Mingke Gao

This paper aims to study agent learning from expert demonstration data while incorporating reinforcement learning (RL), which enables the agent to break through the…

Abstract

Purpose

This paper aims to study agent learning from expert demonstration data while incorporating reinforcement learning (RL), which enables the agent to break through the limitations of the expert demonstration data and reduces the dimensionality of the agent’s exploration space, speeding up the training convergence rate.

Design/methodology/approach

First, a decay weight function is set in the objective function of the agent’s training to combine both types of methods, so that both RL and imitation learning (IL) guide the agent’s behavior when updating the policy. Second, this study designs a coupling utilization method between the demonstration trajectory and the training experience, so that samples from both sources can be combined during the agent’s learning process, improving the data utilization rate and the agent’s learning speed.
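The following Python sketch illustrates the two ideas above in their simplest form: an exponentially decaying weight that blends the IL and RL terms of the objective, and a coupled sampling routine that mixes demonstration data with the agent’s own experience. The decay schedule, mixing rule and placeholder losses are assumptions for illustration, not the paper’s formulation.

```python
import math
import random

def decay_weight(step, w0=1.0, decay_rate=1e-3):
    """Weight on the IL term; decays toward zero as training progresses."""
    return w0 * math.exp(-decay_rate * step)

def combined_loss(il_loss, rl_loss, step):
    """Objective that blends IL and RL guidance according to the decay weight."""
    w = decay_weight(step)
    return w * il_loss + (1.0 - w) * rl_loss

def sample_coupled_batch(demo_buffer, experience_buffer, batch_size, step):
    """Draw a mini-batch mixing expert demonstrations with the agent's experience.

    Early in training most samples come from demonstrations; later the agent's
    own experience dominates, mirroring the decay of the IL weight.
    """
    demo_fraction = decay_weight(step)
    n_demo = min(len(demo_buffer), int(round(batch_size * demo_fraction)))
    n_exp = min(len(experience_buffer), batch_size - n_demo)
    batch = random.sample(demo_buffer, n_demo) + random.sample(experience_buffer, n_exp)
    random.shuffle(batch)
    return batch

# Example usage with dummy transitions (state, action, reward, next_state):
demos = [((0, 0), 1, 0.0, (0, 1))] * 100
experience = [((0, 1), 0, -1.0, (0, 0))] * 100
batch = sample_coupled_batch(demos, experience, batch_size=32, step=500)
loss = combined_loss(il_loss=0.8, rl_loss=1.2, step=500)
```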

Findings

The method is superior to other algorithms in terms of convergence speed and decision stability, avoiding training from scratch on reward values and breaking through the restrictions imposed by the demonstration data.

Originality/value

The agent can adapt to dynamic scenes through exploration and trial-and-error mechanisms based on the experience of demonstrating trajectories. The demonstration data set used in IL and the experience samples obtained in the process of RL are coupled and used to improve the data utilization efficiency and the generalization ability of the agent.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 12 December 2023

Qing Zhou, Yuanqing Liu, Xiaofeng Liu and Guoping Cai

In the post-capture stage, the tumbling target sets the combined spacecraft system into rotation, and a detumbling operation performed by the space robot is required. To save the costly…

Abstract

Purpose

In the post-capture stage, the tumbling target sets the combined spacecraft system into rotation, and a detumbling operation performed by the space robot is required. To save the costly onboard fuel of the space robot, this paper aims to present a novel post-capture detumbling strategy.

Design/methodology/approach

Actuated by the joint rotations of the manipulator, the combined system is driven from a three-axis tumbling state to uniaxial rotation about its maximum principal axis. Only a unidirectional thrust perpendicular to this axis is then needed to slow down the uniaxial rotation, thus saving thruster fuel. The optimization problem for the collision-free detumbling trajectory of the space robot is formulated and solved with the particle swarm optimization algorithm.
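For readers unfamiliar with particle swarm optimization, the sketch below shows a bare-bones PSO loop of the kind that could search over joint-trajectory parameters. The placeholder cost function stands in for a full multibody simulation with collision and angular-momentum terms; the dimension, bounds and coefficients are illustrative assumptions rather than the paper’s formulation.

```python
import numpy as np

def cost(params):
    """Placeholder cost: distance from an (assumed) reference parameter vector.

    A real detumbling implementation would simulate the combined-system dynamics
    and penalise collisions and residual off-axis angular momentum instead.
    """
    reference = np.linspace(0.0, 1.0, params.size)
    return float(np.sum((params - reference) ** 2))

def pso(dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-2.0, 2.0)):
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, float(np.min(pbest_cost))

best_params, best_cost = pso(dim=6)   # e.g. six joint-trajectory parameters
```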

Findings

The numerical simulation results show that, along the trajectory planned by the detumbling strategy, the maneuver of the manipulator can precisely drive the combined system to rotate around its maximum principal axis, and the final kinetic energy of the combined system is smaller than the initial value. The unidirectional thrust and the lower kinetic energy ensure fuel savings in the subsequent detumbling stage.

Originality/value

This paper presents a post-capture detumbling strategy that drives the combined system from a three-axis tumbling state to uniaxial rotation about its maximum principal axis by redistributing the angular momentum among the parts of the combined system. The strategy reduces the thrust torque required for detumbling, effectively saving thruster fuel.

Details

Aircraft Engineering and Aerospace Technology, vol. 96 no. 1
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 30 October 2023

Li He, Shuai Zhang, Heng Zhang and Liang Yuan

The purpose of this paper is to address the problem that mobile robots in unknown dynamic environments still rely on reactive collision avoidance, which leads to a lack of…

Abstract

Purpose

The purpose of this paper is to address the problem that mobile robots in unknown dynamic environments still rely on reactive collision avoidance, which leads to a lack of interaction with obstacles and limits the robots’ comprehensive performance. A dynamic window approach with multiple interaction strategies (DWA-MIS) is proposed to solve this problem.

Design/methodology/approach

The algorithm first classifies the movement intention of moving obstacles, based on which a rule function is designed that incorporates positive incentives to motivate the robot to make correct avoidance actions. Then, the evaluation mechanism is improved by considering the time cost and future information about the environment to increase the motion states. Finally, the optimal objective function is designed based on a genetic algorithm to adapt to different environments through time-varying multiparameter optimization.
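As a simplified illustration of a dynamic-window-style evaluation with an added interaction term, the Python sketch below scores candidate velocity pairs using the classic heading, clearance and velocity terms plus a bonus for keeping distance from obstacles classified as approaching. The motion model, the rule and the weights (which the paper tunes with a genetic algorithm) are assumptions made for illustration, not the DWA-MIS evaluation function itself.

```python
import math

def predict_pose(x, y, theta, v, w, dt=0.1, steps=20):
    """Roll out a constant (v, w) command with a simple unicycle model."""
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

def evaluate(v, w, pose, goal, obstacles, weights=(0.8, 0.2, 0.1, 0.3)):
    """Score one candidate velocity; higher is better."""
    wh, wc, wv, wi = weights
    x, y, theta = predict_pose(*pose, v, w)
    heading = -abs(math.atan2(goal[1] - y, goal[0] - x) - theta)       # face the goal
    clearance = min(math.hypot(ox - x, oy - y) for ox, oy, _ in obstacles)
    velocity = v                                                        # prefer speed
    # Interaction term: reward keeping distance from obstacles moving toward the robot.
    interaction = sum(1.0 for ox, oy, approaching in obstacles
                      if approaching and math.hypot(ox - x, oy - y) > 1.0)
    return wh * heading + wc * clearance + wv * velocity + wi * interaction

pose, goal = (0.0, 0.0, 0.0), (5.0, 5.0)
obstacles = [(2.0, 2.0, True), (4.0, 1.0, False)]   # (x, y, moving toward robot?)
candidates = [(v * 0.1, w * 0.1) for v in range(1, 11) for w in range(-10, 11)]
best_v, best_w = max(candidates, key=lambda c: evaluate(c[0], c[1], pose, goal, obstacles))
```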

Findings

Faced with obstacles in different states, the mobile robot can choose a suitable interaction strategy, which overcomes the limitations of the original DWA evaluation function and avoids the defects of reactive collision avoidance. Simulation results show that the algorithm efficiently adapts to unknown dynamic environments, produces shorter paths with fewer iterations and achieves high comprehensive performance.

Originality/value

A DWA-MIS is proposed that increases the interaction capability between mobile robots and obstacles by improving the evaluation function mechanism and broadens the navigation strategy of DWA at a lower computational cost. Verification on a real robot shows that the algorithm achieves high comprehensive performance in real environments and provides a new idea for local path planning methods.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 1
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 8 December 2022

Chunming Tong, Zhenbao Liu, Wen Zhao, Baodong Wang, Yao Cheng and Jingyan Wang

This paper aims to propose an online local trajectory planner for safe and fast trajectory generation that combines the jerk-limited trajectory (JLT) generation algorithm and the…

Abstract

Purpose

This paper aims to propose an online local trajectory planner for safe and fast trajectory generation that combines the jerk-limited trajectory (JLT) generation algorithm and the particle swarm optimization (PSO) algorithm. A trajectory switching algorithm is proposed to improve the trajectory tracking performance. The proposed system generates smooth and safe flight trajectories online for quadrotors.

Design/methodology/approach

First, the PSO algorithm obtains an optimal set of target points near the path points produced by the global path search. The JLT generation algorithm then generates multiple trajectories, from the current position to the target points, that conform to the kinetic constraints. The generated trajectories are evaluated to pick the obstacle-free trajectory with the least cost. A trajectory switching strategy is proposed to switch the unmanned aerial vehicle (UAV) to a new trajectory before it reaches the final hovering state of the current trajectory, so that the UAV can fly smoothly and quickly.
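The sketch below illustrates, in deliberately simplified form, the candidate-evaluation and trajectory-switching logic described above: target points are sampled around a global waypoint (standing in for the PSO search), straight-line segments stand in for jerk-limited trajectories, the cheapest collision-free candidate is picked, and replanning is triggered before the current trajectory reaches its hovering end-state. All cost terms, thresholds and geometry are assumptions for illustration.

```python
import numpy as np

def candidate_targets(waypoint, n=8, radius=0.5):
    """Sample target points on a circle around the global waypoint (PSO stand-in)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return waypoint + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def collision_free(start, target, obstacles, margin=0.4, samples=20):
    """Check a straight-line segment (stand-in for a JLT trajectory) for clearance."""
    pts = np.linspace(start, target, samples)
    return all(np.linalg.norm(pts - obs, axis=1).min() > margin for obs in obstacles)

def pick_trajectory(position, waypoint, obstacles):
    """Return the cheapest collision-free target, or None if all candidates are blocked."""
    best, best_cost = None, np.inf
    for target in candidate_targets(waypoint):
        if not collision_free(position, target, obstacles):
            continue
        cost = np.linalg.norm(target - position) + np.linalg.norm(target - waypoint)
        if cost < best_cost:
            best, best_cost = target, cost
    return best

def should_switch(progress_along_trajectory, switch_ratio=0.8):
    """Replan before the vehicle reaches the hovering end-state of the current leg."""
    return progress_along_trajectory >= switch_ratio

position = np.array([0.0, 0.0])
waypoint = np.array([3.0, 1.0])
obstacles = [np.array([1.5, 1.8])]
target = pick_trajectory(position, waypoint, obstacles)
replan = should_switch(progress_along_trajectory=0.85)   # True: start the next leg early
```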

Findings

The feasibility of the designed system is validated through online flight experiments in indoor environments with obstacles.

Practical implications

The proposed trajectory planning system is integrated into a quadrotor platform. It is easily implementable onboard and computationally efficient.

Originality/value

The proposed local planner for trajectory generation and evaluation combines PSO and JLT generation algorithms. The proposed method can provide a collision-free and continuous trajectory, significantly reducing the required computing resources. The PSO algorithm locally searches for feasible target points near the global waypoint obtained by the global path search. The JLT generation algorithm generates trajectories from the current state toward each point contained by the target point set. The proposed trajectory switching strategy can avoid unnecessary hovering states in flight and ensure a continuous and safe flight trajectory. It is especially suitable for micro quadrotors with a small payload and limited onboard computing power.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 5
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 11 April 2023

Xiangda Yan, Jie Huang, Keyan He, Huajie Hong and Dasheng Xu

Robots equipped with LiDAR sensors can continuously perform efficient actions for mapping tasks to gradually build maps. However, with the complexity and scale of the environment…

Abstract

Purpose

Robots equipped with LiDAR sensors can continuously perform efficient actions for mapping tasks to gradually build maps. However, as the complexity and scale of the environment increase, the computational cost becomes extremely steep. This study aims to propose a hybrid autonomous exploration method that makes full use of LiDAR data, shortens the computation time in the decision-making process and improves efficiency. The experiments prove that this method is feasible.

Design/methodology/approach

This study improves the mapping update module and proposes a full-mapping approach that fully exploits the LiDAR data. Under the same hardware configuration, the scope of the mapping is expanded and the information obtained is increased. In addition, a decision-making module based on a reinforcement learning method is proposed, which can select the optimal or near-optimal perceptual action according to the learned policy. The decision-making module shortens the computation time of the decision-making process and improves decision-making efficiency.
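As a toy illustration of a learned decision-making module of this kind, the sketch below scores candidate frontier goals with a linear value model over simple features and selects one epsilon-greedily. The features, the linear scorer (standing in for a trained policy network) and all parameters are assumptions, not the module proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)        # stands in for a trained policy/value network

def features(robot_xy, frontier):
    """Per-frontier features: distance, expected information gain, heading change."""
    fx, fy, info_gain, heading_change = frontier
    dist = float(np.hypot(fx - robot_xy[0], fy - robot_xy[1]))
    return np.array([-dist, info_gain, -abs(heading_change)])

def choose_frontier(robot_xy, frontiers, epsilon=0.05):
    """Pick the frontier with the highest learned score (epsilon-greedy)."""
    if rng.random() < epsilon:
        return frontiers[rng.integers(len(frontiers))]
    scores = [float(weights @ features(robot_xy, f)) for f in frontiers]
    return frontiers[int(np.argmax(scores))]

# (x, y, estimated information gain, heading change needed)
frontiers = [(2.0, 1.0, 12.0, 0.3), (-1.0, 3.0, 25.0, 1.2), (4.0, -2.0, 8.0, 0.1)]
goal = choose_frontier((0.0, 0.0), frontiers)
```

Scoring a shortlist of frontiers with a learned model, rather than exhaustively evaluating every candidate, is one plausible way such a module shortens decision-making time.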

Findings

The results show that the hybrid autonomous exploration method, which combines a learning-based policy with a traditional frontier-based policy, offers good performance.

Originality/value

This study proposes a hybrid autonomous exploration method that combines a learning-based policy with a traditional frontier-based policy. Extensive experiments, including on real robots, are conducted to evaluate the performance of the approach and prove that the method is feasible.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 5
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 13 March 2024

Rong Jiang, Bin He, Zhipeng Wang, Xu Cheng, Hongrui Sang and Yanmin Zhou

Compared with traditional methods relying on manual teaching or system modeling, data-driven learning methods, such as deep reinforcement learning and imitation learning, show…

Abstract

Purpose

Compared with traditional methods relying on manual teaching or system modeling, data-driven learning methods, such as deep reinforcement learning and imitation learning, show more promising potential to cope with the challenges brought by increasingly complex tasks and environments, and have become a hot research topic in the field of robot skill learning. However, the contradiction between the difficulty of collecting robot–environment interaction data and the low data efficiency of these methods causes all of them to face a serious data dilemma, which has become one of the key issues restricting their development. Therefore, this paper aims to comprehensively sort out and analyze the causes of, and solutions for, the data dilemma in robot skill learning.

Design/methodology/approach

First, this review analyzes the causes of the data dilemma based on a classification and comparison of data-driven methods for robot skill learning. Then, the existing methods used to solve the data dilemma are introduced in detail. Finally, this review discusses the remaining open challenges and promising research topics for solving the data dilemma in the future.

Findings

This review shows that simulation–reality combination, state representation learning and knowledge sharing are crucial for overcoming the data dilemma of robot skill learning.

Originality/value

To the best of the authors’ knowledge, there are no surveys that systematically and comprehensively sort out and analyze the data dilemma in robot skill learning in the existing literature. It is hoped that this review can be helpful to better address the data dilemma in robot skill learning in the future.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 20 September 2023

Zhifang Wang, Quanzhen Huang and Jianguo Yu

In this paper, the authors take an amorphous flattened air-ground wireless self-assembling network system as the research object and focus on solving the wireless self-assembling…

Abstract

Purpose

In this paper, the authors take an amorphous flattened air-ground wireless self-assembling network system as the research object and focus on solving the wireless self-assembling network topology instability problem caused by unknown control and communication faults during the operation of this system.

Design/methodology/approach

In the paper, the authors propose a neural network-based direct robust adaptive non-fragile fault-tolerant control algorithm suitable for the integrated air-ground wireless ad hoc network system.

Findings

The simulation results show that the system eventually becomes asymptotically stable and that the estimation error asymptotically tends to zero under the feedback adjustment of the designed controller. The system as a whole has good fault-tolerance and autonomous learning approximation performance. The experimental results show that the wireless self-assembled network topology has good stability and can change flexibly and adaptively as the scene changes, with the topology’s stability performance improved by up to 66.7%.

Research limitations/implications

The research results may lack generalisability because of the chosen research approach. Therefore, researchers are encouraged to test the proposed propositions further.

Originality/value

This paper designs a direct, robust, non-fragile adaptive neural network fault-tolerant controller based on the Lyapunov stability principle and the learning capability of neural networks. By directly optimizing the feedback matrix K to approximate the robust fault-tolerant correction factor, the neural network adaptive adjustment factor enables the system as a whole to resist unknown control and communication failures during operation, thus achieving a stable wireless self-assembled network topology.
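The full controller is beyond a short snippet, but the toy simulation below illustrates the general fault-tolerant idea of estimating an unknown actuator-effectiveness factor online and rescaling the feedback law to compensate. It deliberately replaces the paper’s neural-network, non-fragile design with a simple least-squares update; the plant model, gains and update rule are illustrative assumptions, not the controller designed in the paper.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # discretised double integrator (assumed plant)
B = np.array([[0.005], [0.1]])
K = np.array([[10.0, 5.0]])                  # nominal stabilising state feedback

true_effectiveness = 0.5                     # unknown actuator fault (50% loss)
lam_hat = 1.0                                # online estimate, starts at "no fault"
x = np.array([[1.0], [0.0]])                 # initial state

for _ in range(300):
    u = -(K @ x).item() / max(lam_hat, 0.1)  # rescale feedback to compensate the fault
    x_next = A @ x + B * (true_effectiveness * u)

    residual = x_next - A @ x                # equals B * lam * u for this model
    if abs(u) > 1e-6:
        lam_ls = (B.T @ residual).item() / ((B.T @ B).item() * u)
        lam_hat += 0.2 * (lam_ls - lam_hat)  # low-pass filtered estimate update
    x = x_next

print(f"estimated effectiveness: {lam_hat:.3f}, final state norm: {np.linalg.norm(x):.4f}")
```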
