Search results
21 – 30 of over 1000
J. Ahmad, H. Larijani, R. Emmanuel, M. Mannion and A. Javed
Abstract
Buildings use approximately 40% of global energy and are responsible for almost a third of the worldwide greenhouse gas emissions. They also utilise about 60% of the world's electricity. In the last decade, stringent building regulations have led to significant improvements in the quality of the thermal characteristics of many building envelopes. However, similar considerations have not been paid to the number and activities of occupants in a building, which play an increasingly important role in energy consumption, optimisation processes, and indoor air quality. More than 50% of the energy consumption could be saved in Demand Controlled Ventilation (DCV) if accurate information about the number of occupants is readily available (Mysen et al., 2005). But due to privacy concerns, designing a precise occupancy sensing/counting system is a highly challenging task. While several studies count the number of occupants in rooms/zones for the optimisation of energy consumption, insufficient information is available on the comparison, analysis and pros and cons of these occupancy estimation techniques. This paper provides a review of occupancy measurement techniques and also discusses research trends and challenges. Additionally, a novel privacy-preserving occupancy monitoring solution is proposed in this paper. Security analyses of the proposed scheme reveal that the new occupancy monitoring system preserves privacy better than traditional schemes.
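The scale of the DCV savings figure cited above can be illustrated with a toy calculation: ventilate in proportion to the detected occupant count instead of at a fixed rate sized for full occupancy. All rates and the occupancy profile below are hypothetical, not taken from the paper.

```python
# Toy demand-controlled ventilation (DCV) model: compare a fixed
# ventilation rate sized for full occupancy against a rate scaled
# to the actual occupant count. All figures are illustrative.

DESIGN_OCCUPANCY = 30          # occupants the zone is sized for
RATE_PER_PERSON = 10.0         # L/s of fresh air per occupant
BASE_RATE = 30.0               # L/s floor rate for the empty zone

def dcv_rate(occupants: int) -> float:
    """Ventilation rate when the real occupant count is known."""
    return BASE_RATE + RATE_PER_PERSON * occupants

def fixed_rate() -> float:
    """Ventilation rate without occupancy sensing (worst case)."""
    return BASE_RATE + RATE_PER_PERSON * DESIGN_OCCUPANCY

def savings_fraction(occupancy_profile) -> float:
    """Fraction of ventilation air (a proxy for fan and conditioning
    energy) saved by DCV over a profile of hourly head counts."""
    dcv = sum(dcv_rate(n) for n in occupancy_profile)
    fixed = fixed_rate() * len(occupancy_profile)
    return 1.0 - dcv / fixed

# A hypothetical working day: empty mornings/evenings, busy midday.
profile = [0, 0, 5, 20, 25, 30, 28, 10, 2, 0]
print(f"DCV saves {savings_fraction(profile):.0%} of ventilation air")
```

With this made-up profile the saving already exceeds half of the supplied air, which is consistent in magnitude with the >50% claim, though real savings depend on climate, equipment and counting accuracy.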
Abstract
Purpose
The purpose of this paper is to investigate the effect on completion of mobile‐robot tasks depending on how a human tele‐operator interacts with a sensor system and a mobile‐robot.
Design/methodology/approach
Interaction is investigated using two mobile‐robot systems, three different ways of interacting with the robots and several different environments of increasing complexity. In each case, the operation is investigated with and without sensor systems to assist an operator to move a robot through narrower and narrower gaps and in completing progressively more complicated driving tasks. Tele‐operators used a joystick and either watched the robot while operating it, or sat at a computer and viewed scenes remotely on a screen. Cameras are either mounted on the robot to view the space ahead of the robot or mounted remotely so that they viewed both the environment and robot. Every test is compared with sensor systems engaged and with them disconnected.
Findings
A main conclusion is that human tele‐operators perform better without the assistance of sensor systems in simple environments, and in those cases it may be better to switch off the sensor systems or reduce their effect. In addition, tele‐operators sometimes performed better with a camera mounted on the robot than with pre‐mounted cameras observing the environment (although that depended on the task being performed).
Research limitations/implications
Tele‐operators completed tests both with and without sensors. One robot system used an umbilical cable and one used a radio link.
Practical implications
The paper quantifies the difference between tele‐operation control and sensor‐assisted control when a robot passes through narrow passages. This could be useful information when system designers decide whether a system should be tele‐operated, automatic or sensor‐assisted. The paper suggests that in simple environments the amount of sensor support should be small, but in more complicated environments more sensor support needs to be provided.
Originality/value
The paper investigates the effect on completion of mobile‐robot tasks depending on whether a human tele‐operator uses a sensor system, and how they interact with the sensor system and the mobile‐robot. It presents results from investigations using two mobile‐robot systems, three different ways of interacting with the robots and several different environments of increasing complexity. The change in the ability of a human operator to complete progressively more complicated driving tasks with and without a sensor system is presented; the human tele‐operators performed better without the assistance of sensor systems in simple environments.
Saquib Rouf, Ankush Raina, Mir Irfan Ul Haq and Nida Naveed
Abstract
Purpose
The involvement of wear, friction and lubrication in engineering systems and industrial applications makes it imperative to study the various aspects of tribology in relation to advanced technologies and concepts. The implementation of Industry 4.0 also faces many barriers, particularly in developing economies. Reliable, real-time data is an important enabler of Industry 4.0, and its availability for various tribological systems is crucial in applying the concept. This paper attempts to highlight the role of sensors related to friction, wear and lubrication in implementing Industry 4.0 in various tribology-related industries and equipment.
Design/methodology/approach
A thorough literature review has been done to study the interrelationships between the availability of tribology-related data and the implementation of Industry 4.0. Relevant and recent research papers from prominent databases have been included. A detailed overview of the various types of sensors used in generating tribological data is also presented. Some studies related to the application of machine learning and artificial intelligence (AI) are also included in the paper. A discussion on fault diagnosis and cyber-physical systems in connection with tribology has also been included.
Findings
Industry 4.0 and tribology are interconnected through various means, and the pillars of Industry 4.0, such as big data and AI, can effectively be implemented in various tribological systems. Data is an important parameter in the effective application of Industry 4.0 concepts in the tribological environment. Sensors have a vital role to play in the implementation of Industry 4.0 in tribological systems. Determining machine health and carrying out maintenance on offshore and remote mechanical systems become possible with online, real-time data acquisition.
Originality/value
The paper relates the pillars of Industry 4.0 to various aspects of tribology and is a first of its kind wherein the interdisciplinary field of tribology has been linked with Industry 4.0. The paper also highlights the role of sensors in generating tribological data on critical parameters, such as wear rate, coefficient of friction and surface roughness, which are central to implementing the various pillars of Industry 4.0.
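As a rough illustration of how raw sensor readings reduce to two of the parameters named above, the standard textbook relations can be sketched as follows; the function names and all numerical readings are made up for illustration.

```python
# Illustrative reduction of raw tribometer readings into two of the
# critical parameters mentioned above: coefficient of friction
# (Amontons-Coulomb) and specific wear rate (Archard). All sensor
# values are hypothetical.

def coefficient_of_friction(friction_force_n: float,
                            normal_load_n: float) -> float:
    """mu = F_friction / F_normal."""
    return friction_force_n / normal_load_n

def archard_wear_rate(volume_loss_mm3: float,
                      normal_load_n: float,
                      sliding_distance_m: float) -> float:
    """Specific wear rate k = V / (F * s), in mm^3 / (N * m)."""
    return volume_loss_mm3 / (normal_load_n * sliding_distance_m)

# Example pin-on-disc reading (hypothetical):
mu = coefficient_of_friction(friction_force_n=4.2, normal_load_n=10.0)
k = archard_wear_rate(volume_loss_mm3=0.36,
                      normal_load_n=10.0,
                      sliding_distance_m=500.0)
print(f"mu = {mu:.2f}, wear rate = {k:.2e} mm^3/(N*m)")
```

Streaming such derived quantities, rather than raw voltages, is what makes tribological sensor data directly usable by the big-data and AI pillars discussed in the paper.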
Jitender Tanwar, Sanjay Kumar Sharma and Mandeep Mittal
Abstract
Purpose
Drones are used for several purposes, including examining areas, mapping surroundings and rescue missions. During these tasks, they can encounter complex surroundings with multiple obstacles, sharp edges and deadlocks. The purpose of this paper is to propose an obstacle-dodging technique that allows drones to move autonomously and to generate an obstacle map of an unknown place dynamically.
Design/methodology/approach
Automating drones usually requires complicated vision sensors and high computing power. In this research, a methodology is proposed that uses two basic ultrasonic proximity sensors placed at the centre of the drone and applies neural control with synaptic plasticity for dynamic obstacle avoidance. The neural control is established by a two-neuron recurrent system. Synaptic plasticity is used to find turning angles from different viewpoints with immediate memory, which helps the drone make decisions. Hence, the automaton is able to travel around and modify its turning angle to escape objects along the route in unknown surroundings with narrow junctions and dead ends. Furthermore, wherever an obstacle is detected along the route, its coordinates are communicated through a RESTful web service to an Android app, and an obstacle map is generated from the information sent by the drone. In this research, the drone is successfully designed and automated, and an obstacle map is generated in the V-REP simulation environment.
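A minimal sketch of the kind of two-neuron control loop described above can look like the following. This is not the authors' actual network: the gains, the leaky-integrator plasticity rule and the sensor range are all assumptions made for illustration.

```python
# Toy two-neuron obstacle avoider: two ultrasonic distance readings
# excite a left and a right neuron, and leaky-integrator synaptic
# weights give a short-term memory of which side has been seeing
# obstacles. Gains, range and plasticity rule are illustrative only.
import math

MAX_RANGE = 2.0  # metres: ultrasonic reading when the path is clear

class TwoNeuronAvoider:
    def __init__(self, gain=1.0, learn_rate=0.1):
        self.gain = gain
        self.learn_rate = learn_rate
        self.w_left = 1.0   # plastic weights: memory of which side
        self.w_right = 1.0  # has recently been seeing obstacles

    def step(self, dist_left: float, dist_right: float) -> float:
        """One control step; returns a turn command (+ = steer left)."""
        # Closer obstacle -> stronger neuron activation (0 when clear).
        a_left = math.tanh(self.gain * (MAX_RANGE - min(dist_left, MAX_RANGE)))
        a_right = math.tanh(self.gain * (MAX_RANGE - min(dist_right, MAX_RANGE)))
        # Plasticity: each weight leaks towards 1 + its own side's
        # recent activation, so repeated obstacles amplify the turn.
        self.w_left += self.learn_rate * (1.0 + a_left - self.w_left)
        self.w_right += self.learn_rate * (1.0 + a_right - self.w_right)
        # Steer away from the more strongly activated side.
        return self.w_right * a_right - self.w_left * a_left

avoider = TwoNeuronAvoider()
print(avoider.step(dist_left=2.0, dist_right=0.3))  # positive: steer left
```

The weight update is what lets the controller commit to a turning direction at narrow junctions instead of oscillating: a side that keeps reporting obstacles accumulates weight and wins the steering competition.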
Findings
Simulation results show that the drone effectively moves and turns around the obstacles, and the experiment using web services with the drone also succeeds in generating the obstacle map dynamically.
Originality/value
The obstacle map generated by the autonomous drone is useful in many applications, such as examining fields, mapping surroundings and rescue missions.
Ilesanmi Daniyan, Khumbulani Mpofu and Samuel Nwankwo
Abstract
Purpose
The need to examine the integrity of infrastructure in the rail industry, in order to improve its reliability and reduce the chance of breakdowns due to defects, has brought about the development of an inspection and diagnostic robot.
Design/methodology/approach
In this study, an inspection robot was designed for detecting cracks, corrosion, missing clips and wear on rail track facilities. The robot is designed to use infrared and ultrasonic sensors for obstacle avoidance and crack detection, two 3D profilometers for wear detection, high-resolution cameras to capture real-time images, and colour sensors for corrosion detection. The cameras are placed in front of the robot, with a colour sensor at each side to assist in detecting corrosion on the rail track. The image-processing capability of the robot permits analysis of the type and depth of the cracks and corrosion captured on the track. The computer-aided design and modelling of the robot was carried out using the SolidWorks software, version 2018, while the simulation of the proposed system was carried out in the MATLAB 2020b environment.
Findings
The results obtained present three frameworks: for wear, for corrosion and missing clips, and for crack detection. In addition, the design data for the development of the integrated robotic system are presented in the work. The confusion matrix resulting from the simulation of the proposed system indicates that the system is highly sensitive to the presence of faults and accurate in detecting them. Hence, the work provides a design framework for detecting and analysing defects on the rail track.
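The sensitivity and accuracy read off a confusion matrix follow from the standard definitions, which can be sketched as below; the segment counts are illustrative placeholders, not results from the paper.

```python
# Standard confusion-matrix metrics for a binary fault detector like
# the one described above. The counts are hypothetical, not the
# paper's results.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of actual faults that the system flags (TP rate)."""
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all classifications that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical evaluation over 200 inspected track segments:
tp, tn, fp, fn = 45, 140, 10, 5
print("sensitivity:", sensitivity(tp, fn))     # 45 / 50
print("accuracy:", accuracy(tp, tn, fp, fn))   # 185 / 200
```

Reporting both matters for rail inspection: because defective segments are rare, a detector can score high accuracy while missing most faults, so sensitivity is the safety-critical figure.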
Practical implications
The development and implementation of the designed robot will provide a more proactive way to monitor rail track conditions and detect rail track defects, so that effort can be geared towards restoration before a defect becomes a major problem, thus increasing rail network capacity and availability.
Originality/value
The novelty of this work is based on the fact that the system is designed to work autonomously to avoid obstacles and check for cracks, missing clips, wear and corrosion in the rail tracks with a system of integrated and coordinated components.
Abstract
Purpose
The purpose of this paper is to investigate the effect of time delay on the ability of a human operator to complete a task with a teleoperated mobile‐robot using two systems, two different ways of interacting with the mobile‐robots and several different environments.
Design/methodology/approach
Teleoperators are observed completing a series of tasks using a joystick to control a mobile‐robot while time delays are introduced to the system. They sit at a computer and view scenes remotely on a screen. Cameras are either mounted on the robot or mounted externally so that they view both the environment and robot. Teleoperators complete the tests both with and without sensors. One robot system uses an umbilical cable and one uses a radio link.
Findings
In simple environments, a teleoperator may perform better without a sensor system to assist them, but as time delays are introduced there are more failures. In more complicated environments, or when time delays are longer, teleoperators perform better with a sensor system to assist them. Teleoperators may also tend to perform better with a radio link than with an umbilical connection.
Research limitations/implications
Teleoperated systems rely heavily on visual feedback and experienced operators. This paper investigates the effect of introducing a delay to the delivery of that visual feedback.
Practical implications
The paper suggests that in simple environments with short time delays the amount of sensor support should be small, but in more complicated environments or with longer delays more sensor support needs to be provided.
Originality/value
Results from imposing time delays on a teleoperated mobile‐robot are presented. Effects on the task of different ways of viewing activity on a computer display are presented, that is with cameras mounted on the robot or cameras mounted externally to view both the environment and robot. Results from using sensors to assist teleoperators are presented. The paper suggests that the amount of sensor support should be varied depending on circumstances.
Xiaochun Guan, Sheng Lou, Han Li and Tinglong Tang
Abstract
Purpose
Deployment of deep neural networks on embedded devices is becoming increasingly popular because it can reduce latency and energy consumption for data communication. This paper aims to present a method for deploying deep neural networks on a quad-rotor aircraft to further expand its application scope.
Design/methodology/approach
In this paper, a design scheme is proposed to implement the flight mission of the quad-rotor aircraft based on multi-sensor fusion. It integrates an attitude acquisition module, a global positioning system position acquisition module, an optical flow sensor, an ultrasonic sensor and a Bluetooth communication module, among others. A 32-bit microcontroller is adopted as the main controller for the quad-rotor aircraft. To make the quad-rotor aircraft more intelligent, the study also proposes a method to deploy pre-trained deep neural network models on the microcontroller based on the software packages of the RT-Thread internet of things operating system.
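Fitting a pre-trained network onto a 32-bit microcontroller usually starts with shrinking its weights; a common step is post-training quantization. The sketch below shows generic symmetric int8 quantization of a weight vector; it is not the RT-Thread toolchain used in the paper, and the weight values are made up.

```python
# Minimal symmetric int8 post-training quantization of a weight
# vector: the kind of 4x size reduction (float32 -> int8) that helps
# a pre-trained model fit in microcontroller flash/RAM. Generic
# sketch, not the RT-Thread tooling; weights are illustrative.

def quantize_int8(weights):
    """Map float weights onto int8 codes with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for verification."""
    return [v * scale for v in q]

weights = [0.52, -0.91, 0.08, 1.27, -0.33]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Worst-case per-weight error is about half the scale factor.
max_err = max(abs(w, ) if False else abs(w - a) for w, a in zip(weights, approx))
print(f"quantized: {q}, max error: {max_err:.4f}")
```

On-device inference then runs mostly in integer arithmetic, multiplying by `scale` only where activations must return to float, which is what makes such models practical on a microcontroller without a floating-point-heavy budget.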
Findings
This design provides a simple and efficient scheme for further integrating artificial intelligence (AI) algorithms into the control system design of quad-rotor aircraft.
Originality/value
This method provides an application example and a design reference for the implementation of AI algorithms on unmanned aerial vehicles or terminal robots.