Search results
1 – 10 of 124

Chetan Jalendra, B.K. Rout and Amol Marathe
Abstract
Purpose
Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.
Design/methodology/approach
A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and to obtain its dimensions, tip deflection and velocity. The finite element method (FEM) is used to model the dynamic behaviour of the flexible beam. The measured dimensions, tip deflection and velocity of the flexible beam are fed to the FEM model to predict the maximum deflection. The difference between the maximum deflection and the static deflection of the beam is used to compute the maximum error. Subsequently, the maximum error is used in the proposed predictive maximum error-based second-stage controller to send the control signal for vibration suppression. The control signal, in the form of a trajectory, is communicated to the industrial robot controller, which accommodates the various types of delays present in the system.
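The FEM model itself is not given in the abstract; as a minimal single-mode sketch of how the "maximum error" could be computed from a measured tip deflection and velocity (assumed Euler-Bernoulli cantilever formulas and hypothetical parameters, not the authors' implementation):

```python
import math

def cantilever_props(E, I, rho_A, L):
    """First-mode properties of a cantilever beam under a lumped
    single-mode (Euler-Bernoulli) approximation."""
    k = 3.0 * E * I / L**3  # static tip stiffness, F = k * delta
    # First-mode natural frequency, (beta_1 * L)^2 = 1.875^2
    omega = (1.875**2) * math.sqrt(E * I / (rho_A * L**4))
    return k, omega

def predicted_max_error(delta_tip, v_tip, delta_static, omega):
    """Energy-based single-mode estimate of the peak tip deflection
    from one measured deflection/velocity pair, and the 'maximum
    error' (peak minus static deflection) fed to the second-stage
    controller."""
    delta_max = math.sqrt(delta_tip**2 + (v_tip / omega)**2)
    return delta_max - delta_static

# Hypothetical steel beam: E = 200 GPa, I = 1e-10 m^4,
# rho*A = 0.5 kg/m, L = 0.4 m
k, omega = cantilever_props(200e9, 1e-10, 0.5, 0.4)
err = predicted_max_error(0.003, 0.1, 0.001, omega)
```

With zero measured velocity the predicted peak equals the measured deflection, so the error reduces to the deflection minus the static offset, which is a useful sanity check on any richer FEM-based predictor.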
Findings
The effectiveness and robustness of the proposed controller have been validated using simulation and experimental implementation on an Asea Brown Boveri (ABB) IRB 1410 industrial robot with a standard low frame rate camera sensor. In this experiment, two metallic flexible beams of different dimensions with the same material properties have been considered. The robot vision method measures the dimensions within an acceptable error limit, i.e. ±3%. The controller can suppress vibration amplitude by up to approximately 97% in an average time of 4.2 s and reduces the stability time by up to approximately 93% compared with the suppression time without control. The vibration suppression performance is also compared with the results of a classical control method and some recent results available in the literature.
Originality/value
The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any reference such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and measure its dimensions while accommodating the perspective view; the developed robot vision-based controller works along with the FEM model of the flexible beam to predict the tip position and helps in handling different dimensions and material types; an approach is proposed to handle the different types of delays that are part of the implementation for effective suppression of vibration; the proposed method uses a low frame rate, low-cost camera for the second-stage controller, and the controller does not interfere with the internal controller of the industrial robot.
Ambica Ghai, Pradeep Kumar and Samrat Gupta
Abstract
Purpose
Web users rely heavily on online content to make decisions without assessing the veracity of the content. The online content comprising text, image, video or audio may be tampered with to influence public opinion. Since the consumers of online information (misinformation) tend to trust the content when images supplement the text, image manipulation software is increasingly being used to forge the images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.
Design/methodology/approach
The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. An image transformation technique aids the identification of relevant features for the network to train effectively. Thereafter, a pre-trained, customized convolutional neural network is trained on public benchmark datasets, and the performance is evaluated on the test dataset using various parameters.
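The specific image transformation is not named in this excerpt; as a hedged, minimal illustration of the idea (a transformation that emphasises the high-frequency residuals that copy-move and splicing edits tend to leave behind), a pure-Python Laplacian high-pass over a grayscale image:

```python
def highpass(img):
    """3x3 Laplacian high-pass over a grayscale image given as a list
    of rows of intensities. Smooth regions map to zero; abrupt local
    inconsistencies, such as those left by splicing, stand out.
    Borders are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                         - img[y][x - 1] - img[y][x + 1])
    return out

# A flat patch yields an all-zero response; a single anomalous pixel
# produces a strong local residual.
flat = [[5] * 4 for _ in range(4)]
spike = [[0] * 5 for _ in range(5)]
spike[2][2] = 3
```

In practice such residual maps (rather than raw pixels) would be the input channels on which the convolutional network is trained; the real framework's transformation may differ.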
Findings
The comparative analysis of image transformation techniques and experiments conducted on benchmark datasets from a variety of socio-cultural domains establishes the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.
Research limitations/implications
This study bears implications for several important aspects of research on image forgery detection. First, this research adds to the recent discussion on feature extraction and learning for image forgery detection. While prior research on image forgery detection hand-crafted the features, the proposed solution contributes to the stream of literature that automatically learns the features and classifies the images. Second, this research contributes to the ongoing effort to curtail the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms. The study addresses the call for greater emphasis on the development of robust image transformation techniques.
Practical implications
This study carries important practical implications for various domains such as forensic sciences, media and journalism where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of the article or post before it is shared over the Internet. The content shared over the Internet by the users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.
Social implications
In the current scenario, wherein most image forgery detection studies attempt to assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early detection of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the physical spread and psychological impact of forged images on social media.
Originality/value
This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little explored image transformation techniques and customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.
Krištof Kovačič, Jurij Gregorc and Božidar Šarler
Abstract
Purpose
This study aims to develop an experimentally validated three-dimensional numerical model for predicting different flow patterns produced with a gas dynamic virtual nozzle (GDVN).
Design/methodology/approach
The physical model is posed in the mixture formulation and copes with the unsteady, incompressible, isothermal, Newtonian, low turbulent two-phase flow. The computational fluid dynamics numerical solution is based on the half-space finite volume discretisation. The geo-reconstruct volume-of-fluid scheme tracks the interphase boundary between the gas and the liquid. To ensure numerical stability in the transition regime and adequately account for turbulent behaviour, the k-ω shear stress transport turbulence model is used. The model is validated by comparison with the experimental measurements on a vertical, downward-positioned GDVN configuration. Three different combinations of air and water volumetric flow rates have been solved numerically in the range of Reynolds numbers for airflow 1,009–2,596 and water 61–133, respectively, at Weber numbers 1.2–6.2.
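The geo-reconstruct volume-of-fluid scheme in 3D is well beyond a snippet; purely as an illustrative sketch of the kind of transport the interface tracking performs (a hypothetical first-order 1D discretisation, not the authors' solver), one explicit upwind step of volume-fraction advection:

```python
def advect_vof(alpha, u, dt, dx):
    """One explicit upwind step of 1D volume-fraction advection,
    d(alpha)/dt + u * d(alpha)/dx = 0 with u > 0. alpha is the liquid
    fraction per cell (1 = liquid, 0 = gas); the sharp front marks the
    gas-liquid interface. The real solver reconstructs the interface
    geometrically in 3D; this sketch only shows the bookkeeping."""
    c = u * dt / dx  # CFL number, must satisfy c <= 1 for stability
    assert 0.0 <= c <= 1.0, "unstable step"
    new = alpha[:]
    for i in range(1, len(alpha)):
        new[i] = alpha[i] - c * (alpha[i] - alpha[i - 1])
    return new

# Liquid occupying the left half of a 4-cell line, advected rightward
# at CFL 0.5: the front moves half a cell per step.
front = advect_vof([1.0, 1.0, 0.0, 0.0], u=1.0, dt=0.5, dx=1.0)
```

First-order upwinding smears the front, which is exactly why production codes use geometric reconstruction schemes such as geo-reconstruct instead.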
Findings
The half-space symmetry allows the numerical reconstruction of the dripping and jetting modes and an indication of the whipping mode. The kinetic energy transfer from the gas to the liquid is analysed, and locations with locally increased gas kinetic energy are observed. The calculated jet shapes match the experimentally obtained high-speed camera videos reasonably well.
Practical implications
The model is used for the virtual studies of new GDVN nozzle designs and optimisation of their operation.
Originality/value
To the best of the authors’ knowledge, the developed model numerically reconstructs all three GDVN flow regimes for the first time.
Feng Shuang, Yang Du, Shaodong Li and Mingqi Chen
Abstract
Purpose
This study aims to introduce a multi-configuration, three-finger dexterous hand with integrated high-dimensional sensors and provides an analysis of its design, modeling and kinematics.
Design/methodology/approach
A mechanical design scheme for the three-finger dexterous hand with a reconfigurable palm is proposed based on the existing research on dexterous hands. The reconfigurable palm design enables the dexterous hand to achieve four grasping modes to adapt to multiple grasping tasks. To further enhance perception, two six-axis force and torque sensors are integrated into each finger. The forward and inverse kinematics equations of the dexterous hand are derived using the D-H method for kinematics modeling, providing a theoretical model for index analysis. The performance is evaluated using three widely applied indicators: workspace, interactivity of fingers and manipulability.
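The hand's actual D-H parameters are not listed in the abstract; a minimal sketch of forward kinematics via the standard D-H convention, with hypothetical two-link finger parameters in the usage example:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 nested list."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def fingertip_position(joints, dh_params):
    """Forward kinematics: chain the per-joint D-H transforms and read
    the fingertip position from the last column of the product."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, (d, a, alpha) in zip(joints, dh_params):
        T = matmul(T, dh_matrix(theta, d, a, alpha))
    return T[0][3], T[1][3], T[2][3]

# Hypothetical planar two-link finger: link lengths 40 mm and 30 mm.
links = [(0.0, 0.04, 0.0), (0.0, 0.03, 0.0)]  # (d, a, alpha) per joint
tip = fingertip_position([0.0, 0.0], links)    # fully extended
```

With both joints at zero the fingertip lies at the sum of the link lengths along x; the inverse kinematics and the workspace/manipulability indices would be built on the same chained transforms.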
Findings
The results of the kinematics analysis show that the proposed hand has excellent dexterity. Additionally, three different experiments are conducted based on the proposed hand. The performance of the dexterous hand is verified by fingertip force and motion accuracy tests, as well as grasping and in-hand manipulation experiments based on the Feix taxonomy. The results show that the dexterous hand has good grasping ability, reproducing 82% of the natural movement of the human hand in daily grasping activities and achieving in-hand manipulations such as translation and rotation.
Originality/value
A novel three-finger dexterous hand with multi-configuration and integrated high-dimensional sensors is proposed. It performs better than the previously designed dexterous hand in actual experiments and kinematic performance analysis.
Wenzhen Yang, Johan K. Crone, Claus R. Lønkjær, Macarena Mendez Ribo, Shuo Shan, Flavia Dalia Frumosu, Dimitrios Papageorgiou, Yu Liu, Lazaros Nalpantidis and Yang Zhang
Abstract
Purpose
This study aims to present a vision-guided robotic system design for application in vat photopolymerization additive manufacturing (AM), enabling a hybrid of vat photopolymerization AM and the injection molding process.
Design/methodology/approach
In the system, a robot equipped with a camera and a custom-made gripper, driven by a visual servoing (VS) controller, is expected to perceive the object, handle variation, connect the multiple process steps in the soft tooling process and realize automation of vat photopolymerization AM. Meanwhile, the vat photopolymerization AM printer is customized in both hardware and software to interact with the robotic system.
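The VS control law is not given in this excerpt; as a minimal sketch of image-based visual servoing on a marker centre (the interaction matrix is taken as identity, a strong simplifying assumption valid only for in-plane translation; gains and coordinates are hypothetical):

```python
def ibvs_step(feature, target, gain=0.5):
    """One proportional visual-servoing update: the commanded camera
    (robot) velocity is proportional to the image-space error of the
    tracked marker centre. A full IBVS law would multiply the error by
    the pseudo-inverse of the interaction matrix; identity is assumed
    here to keep the sketch minimal."""
    ex, ey = feature[0] - target[0], feature[1] - target[1]
    return -gain * ex, -gain * ey

def servo_to_target(feature, target, gain=0.5, steps=50):
    """Iterate the proportional law with a unit time step until the
    marker centre (in pixels) converges to the target position."""
    x, y = feature
    for _ in range(steps):
        vx, vy = ibvs_step((x, y), target, gain)
        x, y = x + vx, y + vy
    return x, y

# Hypothetical example: marker detected 100 px right and 40 px above
# the desired image position; the error decays geometrically.
final = servo_to_target((100.0, 40.0), (0.0, 0.0))
```

In the actual system the feature would come from ArUco marker detection on each camera frame, and the velocity command would go to the robot controller rather than a simulated integrator.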
Findings
By means of the ArUco marker-based vision-guided robotic system, the printing platform can be manipulated from an arbitrary initial position quickly and robustly, which constitutes the first step in exploring automation of vat photopolymerization AM hybridized with the soft tooling process.
Originality/value
The vision-guided robotic system monitors and controls vat photopolymerization AM process, which has potential for vat photopolymerization AM hybrid with other mass production methods, for instance, injection molding.
Faruk Bulut, Melike Bektaş and Abdullah Yavuz
Abstract
Purpose
In this study, a system is established for the supervision and control of possible incidents among people over a large area with a limited number of drone cameras and security staff.
Design/methodology/approach
These drones, namely unmanned aerial vehicles (UAVs), will be adaptively and automatically distributed over the crowds by the proposed system to control and track the communities. Since crowds are mobile, the arrangement of the drone clusters is simultaneously re-organized according to the densities and distributions of people. An adaptive and dynamic distribution and routing mechanism of UAV fleets for crowds is implemented to control a specific given region. Nine popular clustering algorithms have been used and tested in the presented mechanism to improve performance.
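The nine clustering algorithms are not named in this excerpt; as a hedged sketch of the mechanism's first two components (clustering crowd positions, then routing a UAV over the cluster centres), plain k-means plus a greedy nearest-neighbour tour:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 2D crowd positions: each centroid is a
    candidate UAV hover point covering one cluster of people."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centres[j]))
            groups[i].append(p)
        centres = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centres[i]
            for i, g in enumerate(groups)]
    return centres

def route(start, stops):
    """Greedy nearest-neighbour tour over the cluster centres: a simple
    stand-in for the optimum UAV route computed by the real mechanism."""
    tour, rest, pos = [], list(stops), start
    while rest:
        nxt = min(rest, key=lambda s: math.dist(pos, s))
        tour.append(nxt)
        rest.remove(nxt)
        pos = nxt
    return tour

# Hypothetical crowd with two dense groups; one UAV per cluster.
people = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centres = kmeans(people, 2)
tour = route((0, 0), centres)
```

As the crowd moves, re-running the clustering on fresh positions and re-routing the fleet gives the adaptive re-organization the abstract describes; the aggregated model in the paper combines several such clustering methods.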
Findings
The aggregated model outperformed any singular clustering method across five different test cases involving crowds of human distributions. This study has three basic components: the first is to divide the human crowds into clusters; the second is to determine an optimum route of UAVs over the clusters; the last is to direct the most appropriate security personnel to the events that occurred.
Originality/value
This study has three basic components. The first one is to divide the human crowds into clusters. The second one is to determine an optimum route of UAVs over clusters. The last one is to direct the most appropriate security personnel to the events that occurred.
Yufang Cheng, Meng-Han Lee, Chung-Sung Yang and Pei-Yu Wu
Abstract
Purpose
The purpose of this study was to develop an augmented reality (AR) educational program combined with instructional guidance for supportive learning, which enhanced the thinking process, cooperative discussion and problem-solving skills in the chemistry subject.
Design/methodology/approach
The study used a quasi-experimental research design. Of the 45 students who attended this experiment, only 25 with low achievement qualified to operate the AR learning system for the saponification and transesterification environment (ARLS-STE).
Findings
These results confirmed that the AR educational program could bring substantial benefits in improving students' knowledge and thinking-process ability for the participants with the lowest scores. In semi-structured interviews, most of the participants enjoyed manipulating the ARLS-STE system, which was realistic, motivating and interesting for learning science subjects.
Originality/value
Low-achieving students are often characterized by low learning capability and lag in developing constructional knowledge, despite being keen to learn. Given the educational concerns for this population, providing oriented learning and supportive materials could increase their learning effectiveness. Virtual worlds are an efficient learning tool in educational settings, and AR can offer visual concepts and physical interaction for students with low achievement in learning. Thus, this study investigates the acceptability of an educational program designed in the ARLS-STE, covering the learning effects on academic knowledge and the capability of the thinking process for students with low achievement. The ARLS-STE system was developed for this purpose, based upon marker-based AR technologies combined with hands-on manipulation.
Haolin Fei, Ziwei Wang, Stefano Tedeschi and Andrew Kennedy
Abstract
Purpose
This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.
Design/methodology/approach
The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments included different levels of complexity, including different numbers of distractors, varying lighting conditions and highly varied object geometry.
Findings
The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones.
Originality/value
This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.
Abstract
The facilitation of digital spaces, in lieu of urban material spaces, for social interaction through computer gaming and other play activities has become particularly important to children in the wake of the 2020–2021 Coronavirus pandemic, to combat the negative effects of physical lockdown restrictions. Pre-pandemic, autistic children living in urban areas may already experience exclusion from physical society and may consequently already be isolated from current imposed normative societal groupings due to their neuro-difference, sensory sensitivities to the surrounding environment, communication comprehension, and social understanding. However, an exploration into personally and independently chosen play activities by autistic youth has identified how such isolation can be overcome and positive social experiences created. A particular play practice, cosplay, and related companionable fandom activities are providing and creating digital spaces for autistic youth to be social. Character play is also enabling the use of limited physical spaces within urban contexts and as such combatting anxiety from sensory overstimulation. Thematic analysis of online content together with semi-structured interviews with autistic young people have indicated a positive connection between cosplay practice, increased social activity, and reduced levels of sensory overload, anxiety, and depression, with early findings suggesting transferrable elements that could inform more effective support for others with social, environmental, and communication challenges or restrictions.
Abstract
Purpose
This study aims to enhance the understanding of the current research landscape regarding the utilisation of telepresence robots (TPRs) in education.
Design/methodology/approach
The bibliometric and thematic analysis of research publications on TPRs was conducted using papers in the Scopus database up to 2023. The final analysis focused on 53 papers that adhered to the selection criteria. A qualitative analysis was performed on this set of papers.
Findings
The analysis found a rising trend in TPR publications, mostly from the USA as conference papers and journal articles. However, these publications lacked technology integration frameworks, acceptance models and specific learning design models. TPRs have proven effective in various learning environments, fostering accessible education, better communication, engagement and social presence. TPRs can bridge geographical gaps, facilitate knowledge sharing and promote collaboration. Obstacles to implementation include technical, physical, social and emotional challenges. Publications were grouped into four thematic categories: didactic methods of using TPRs, TPRs for educational inclusivity, TPR as a teacher mediator and challenges in using TPRs. Despite the significant potential of TPRs, their broader adoption in education is still facing challenges.
Research limitations/implications
This research solely analysed papers in the Scopus database, restricting TPR publications to those with the keywords "telepresence robots", "learning", "teaching" and "education", and excluding studies indexed under other keywords.
Originality/value
This study enhances understanding of TPR research in education, highlighting its pedagogical implications. It identifies a gap in the inclusion of technology integration frameworks, acceptance models and learning design models, indicating a need for further research and development.