Search results

1 – 10 of 285
Article
Publication date: 4 January 2022

Satish Kumar, Tushar Kolekar, Ketan Kotecha, Shruti Patil and Arunkumar Bongale

Excessive tool wear is responsible for damage or breakage of the tool, workpiece, or machining center. Thus, it is crucial to examine tool conditions during the machining process…

Abstract

Purpose

Excessive tool wear is responsible for damage or breakage of the tool, the workpiece, or the machining center. It is therefore crucial to examine tool conditions during machining to extend the tool's useful functional life and improve the surface quality of the final product. AI-based tool wear prediction techniques have proven effective in estimating the Remaining Useful Life (RUL) of the cutting tool; however, their prediction accuracy still needs improvement.

Design/methodology/approach

This paper presents a methodology that fuses a feature selection technique with state-of-the-art deep learning models. The authors use the NASA milling data sets, along with vibration signals, for tool wear prediction and performance analysis across 15 different fault scenarios. Multiple steps are used for feature selection and ranking. Different Long Short-Term Memory (LSTM) approaches are applied to improve the overall prediction accuracy of the model for tool wear. The LSTM models' performance is evaluated using R-square, Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE).
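For illustration, a minimal Python sketch of this kind of feature-selection-plus-LSTM pipeline is given below. The library choices, layer sizes and the way the two rankings are combined are assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): variance filtering,
# Spearman / random-forest feature ranking, then a hybrid LSTM regressor.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestRegressor
from tensorflow import keras

def select_features(X, y, n_keep=8):
    X = VarianceThreshold(threshold=1e-3).fit_transform(X)  # drop near-constant features
    rho = np.array([abs(spearmanr(X[:, i], y)[0]) for i in range(X.shape[1])])
    imp = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y).feature_importances_
    rank = np.argsort(rho + imp)[::-1]  # crude combination of both rankings
    return X[:, rank[:n_keep]]

def build_hybrid_lstm(timesteps, n_features):
    # "Hybrid" here means stacked Bidirectional + plain LSTM layers:
    # one guess at what the paper's hybrid architecture could look like.
    model = keras.Sequential([
        keras.layers.Input(shape=(timesteps, n_features)),
        keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),  # predicted tool wear / RUL
    ])
    model.compile(optimizer='adam', loss='mse')
    return model
```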

Findings

The R-square accuracy of the hybrid model is consistently high, with low MAE, MAPE and RMSE values. The average R-square scores for the LSTM, Bidirectional, Encoder-Decoder and Hybrid LSTM models are 80.43, 84.74, 94.20 and 97.85%, respectively, and the corresponding average MAPE values are 23.46, 22.200, 9.5739 and 6.2124%. The hybrid model thus shows higher accuracy than the remaining LSTM models.

Originality/value

Low-variance filtering, the Spearman correlation coefficient and random forest regression are used to select the most significant feature vectors for training the various LSTM model versions and to highlight the best approach. The selected features are passed to different LSTM models (Bidirectional, Encoder-Decoder and Hybrid LSTM) for tool wear prediction. The Hybrid LSTM approach shows a significant improvement in tool wear prediction.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 11 December 2020

Hui Liu, Tinglong Tang, Jake Luo, Meng Zhao, Baole Zheng and Yirong Wu

This study aims to address the challenge of training a detection model for the robot to detect the abnormal samples in the industrial environment, while abnormal patterns are very…

Abstract

Purpose

This study aims to address the challenge of training a detection model for robots to detect abnormal samples in industrial environments, where abnormal patterns are very rare.

Design/methodology/approach

The authors propose a new model with double encoder-decoder (DED) generative adversarial networks to detect anomalies when the model is trained without any abnormal patterns. The DED approach maps high-dimensional input images to a low-dimensional space, from which the latent variables are obtained. Minimizing the change in the latent variables during training helps the model learn the data distribution. Anomaly detection is achieved by calculating the distance between the two low-dimensional vectors obtained from the two encoders.
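A minimal PyTorch sketch of the double encoder-decoder idea follows: an image and its reconstruction are each encoded, and the distance between the two latent codes serves as the anomaly score. The layer shapes and the 32 x 32 input size are illustrative assumptions, not the paper's architecture.

```python
# Illustrative DED sketch: encode -> decode -> re-encode, score by latent distance.
import torch
import torch.nn as nn

class DED(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
                nn.Flatten(), nn.Linear(32 * 8 * 8, latent_dim))
        self.enc1 = encoder()
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())
        self.enc2 = encoder()

    def forward(self, x):              # x: (B, 1, 32, 32)
        z1 = self.enc1(x)
        x_hat = self.dec(z1)
        z2 = self.enc2(x_hat)
        return x_hat, z1, z2

def anomaly_score(model, x):
    _, z1, z2 = model(x)
    return torch.norm(z1 - z2, dim=1)  # large distance => likely anomaly
```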

Findings

The proposed method achieves better accuracy and F1 scores than traditional anomaly detection models.

Originality/value

A new architecture with a DED pipeline is designed to capture the distribution of images in the training process so that anomalous samples are accurately identified. A new weight function is introduced to control the proportion of losses in the encoding reconstruction and adversarial phases to achieve better results. An anomaly detection model is proposed to achieve superior performance against prior state-of-the-art approaches.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 10 August 2021

Deepa S.N.

Limitations encountered with the models developed in the previous studies had occurrences of global minima; due to which this study developed a new intelligent ubiquitous…

Abstract

Purpose

Models developed in previous studies were limited by occurrences of global minima; to address this, this study developed a new intelligent ubiquitous computational model that learns with a gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. The ubiquitous machine learning computational model trains more effectively than regular supervised or unsupervised deep learning models, resulting in better learning and optimization for the considered problem domain of cloud-based Internet of Things (IoT). This study aims to improve network quality and the data accuracy rate during the network transmission process using the developed ubiquitous deep learning computational model.

Design/methodology/approach

In this research study, a novel intelligent ubiquitous machine learning computational model is designed and modelled to maintain the optimal energy level of cloud IoT in sensor network domains. The model learns with a gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. A new unified deterministic sine-cosine algorithm is developed for parameter optimization of the weight factors in the ubiquitous machine learning model.
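The abstract does not give the algorithm's exact formulation; the sketch below shows the standard sine-cosine algorithm (SCA) update rule on which such a "unified deterministic" variant would plausibly be built, with a deterministically decaying control parameter. All names and parameter values are illustrative.

```python
# Standard SCA sketch for minimizing a loss f over weight vectors.
import numpy as np

def sca_minimize(f, dim, n_agents=20, iters=200, lb=-1.0, ub=1.0, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))       # candidate weight vectors
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * (a / iters)                   # deterministic decay a -> 0
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.random(dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
        # The paper additionally mentions a deterministic Levy-flight step,
        # which would perturb `best` here; omitted for brevity.
    return best

# Usage idea: optimize auto-encoder weights w by minimizing reconstruction loss f(w).
```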

Findings

The newly developed ubiquitous model is used to find the network energy and optimize it in the considered sensor network model. During progressive simulation, residual energy, network overhead, end-to-end delay, network lifetime and the number of live nodes are evaluated. The results show that the ubiquitous deep learning model yields better metrics owing to its appropriate cluster selection and minimized route selection mechanism.

Research limitations/implications

In this research study, a novel ubiquitous computing model, derived from a new optimization algorithm called the unified deterministic sine-cosine algorithm and a deep learning technique, was applied to maintain the optimal energy level of cloud IoT in sensor networks. The deterministic Levy flight concept is applied in developing the new optimization technique, which determines the parametric weight values for the deep learning model. The ubiquitous deep learning model is designed with auto-encoders and decoders, and their corresponding layer weights are set to optimal values by the optimization algorithm. The modelled ubiquitous deep learning approach is applied to determine the network energy consumption rate and thereby optimize the energy level, increasing the lifetime of the considered sensor network model. For all the considered network metrics, the ubiquitous computing model proved more effective and versatile than the approaches of earlier research studies.

Practical implications

The developed ubiquitous computing model with deep learning techniques can be applied to any type of cloud-assisted IoT, including wireless sensor networks, ad hoc networks, radio access technology networks and heterogeneous networks. Practically, the developed model facilitates computing the optimal energy level of the cloud IoT for any considered network model, which helps maintain a better network lifetime and reduce the end-to-end delay of the networks.

Social implications

The social implication of the proposed research study is that it helps reduce energy consumption and increases the network lifetime of cloud IoT-based sensor network models. This approach helps users at large to achieve a better transmission rate with minimized energy consumption, and it also reduces transmission delay.

Originality/value

In this research study, the network optimization of cloud-assisted IoT sensor network models is modelled and analysed using machine learning models as a kind of ubiquitous computing system. Ubiquitous computing models with machine learning techniques yield intelligent systems and enable users to make better and faster decisions. In the communication domain, the use of predictive and optimization models created with machine learning accelerates new ways of finding solutions to problems. Considering the importance of learning techniques, the ubiquitous computing model is designed based on a deep learning strategy, and the learning mechanism adapts itself to attain a better network optimization model.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 5 April 2011

Christos Grecos and Qi Wang

The interdisciplinary nature of video networking, coupled with various recent developments in standards, proposals and applications, poses great challenges to the research and…

Abstract

Purpose

The interdisciplinary nature of video networking, coupled with various recent developments in standards, proposals and applications, poses great challenges to the research and industrial communities working in this area. The main purpose of this paper is to provide a tutorial and survey on recent advances in video networking from an integrated perspective of both video signal processing and networking.

Design/methodology/approach

Detailed technical descriptions and insightful analyses are presented for recent and emerging video coding standards, in particular the H.264 family. The applications of selected video coding standards in emerging wireless networks are then introduced, with an emphasis on scalable video streaming in multihomed mobile networks. Research challenges and potential solutions are discussed throughout, and numerical results from simulations or experiments are provided to reveal the performance of the selected coding standards and networking algorithms.

Findings

The tutorial helps to clarify the similarities and differences among the considered standards and networking applications. A number of research trends and challenges are identified, and selected promising solutions are discussed. This should provoke further thought on the development of this area and open up more research and application opportunities.

Research limitations/implications

Not all of the considered video coding standards are complemented with thorough studies of networking application scenarios.

Practical implications

The discussed video coding standards are either playing or going to play indispensable roles in the video industry; the introduced networking scenarios bring together these standards and various emerging wireless networking paradigms towards innovative application scenarios.

Originality/value

The comprehensive overview and critiques on existing standards and application approaches offer a valuable reference for researchers and system developers in related research and industrial communities.

Details

International Journal of Pervasive Computing and Communications, vol. 7 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 16 July 2021

Junfu Chen, Xiaodong Zhao and Dechang Pi

The purpose of this paper is to ensure the stable operation of satellites in orbit and to assist ground personnel in continuously monitoring the satellite telemetry data and…

Abstract

Purpose

The purpose of this paper is to ensure the stable operation of satellites in orbit and to assist ground personnel in continuously monitoring the satellite telemetry data and finding anomalies in advance, which can improve the reliability of satellite operation and prevent catastrophic losses.

Design/methodology/approach

This paper proposes a deep auto-encoder (DAE) satellite anomaly advance-warning framework for satellite telemetry data. First, the study performs grey correlation analysis, extracts important feature attributes to construct feature vectors and builds a variational auto-encoder with a bidirectional long short-term memory generative adversarial network discriminator (VAE/BLGAN). Then, the Mahalanobis distance is used to measure the reconstruction score between input and output. According to the periodic characteristics of satellite operation, a dynamic threshold method based on a periodic time window is proposed. Satellite health monitoring and advance warning are achieved using the reconstruction scores and dynamic thresholds.
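As a rough illustration of the scoring and thresholding steps, the following NumPy sketch computes Mahalanobis reconstruction scores and a per-phase dynamic threshold over a periodic window; the window length and the quantile are assumptions, not the paper's settings.

```python
# Mahalanobis reconstruction scores + periodic dynamic threshold (illustrative).
import numpy as np

def mahalanobis_scores(X, X_hat):
    E = X - X_hat                                           # reconstruction errors, (T, d)
    cov = np.cov(E, rowvar=False) + 1e-6 * np.eye(E.shape[1])  # regularized covariance
    inv = np.linalg.inv(cov)
    return np.einsum('td,de,te->t', E, inv, E) ** 0.5

def periodic_thresholds(scores, period, q=0.99):
    # One threshold per phase of the satellite's operating period.
    phases = np.arange(len(scores)) % period
    return np.array([np.quantile(scores[phases == p], q) for p in range(period)])

def warnings(scores, period, q=0.99):
    thr = periodic_thresholds(scores, period, q)
    return scores > thr[np.arange(len(scores)) % period]    # True => raise warning
```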

Findings

Experimental results indicate that DAE methods can detect abnormal satellite telemetry data and trigger a warning before the anomaly occurs, allowing enough time for troubleshooting. The paper further verifies that the proposed VAE/BLGAN model has a stronger data-learning ability than the other two auto-encoder models and is sensitive to satellite monitoring data.

Originality/value

This paper provides a DAE framework to apply in the field of satellite health monitoring and anomaly advance warning. To the best of the authors’ knowledge, this is the first paper to combine DAE methods with satellite anomaly detection, which can promote the application of artificial intelligence in spacecraft health monitoring.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 6
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 1 December 2023

Hao Wang, Hamzeh Al Shraida and Yu Jin

Limited geometric accuracy is one of the major challenges that hinder the wider application of additive manufacturing (AM). This paper aims to predict in-plane shape deviation for…

Abstract

Purpose

Limited geometric accuracy is one of the major challenges that hinder the wider application of additive manufacturing (AM). This paper aims to predict in-plane shape deviation for online inspection and compensation to prevent error accumulation and improve shape fidelity in AM.

Design/methodology/approach

A sequence-to-sequence model with an attention mechanism (Seq2Seq+Attention) is proposed and implemented to predict deviations for subsequent layers or occluded toolpaths after multiresolution alignment. A shape compensation plan can then be executed when a large deviation is predicted.
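A hedged Keras sketch of a Seq2Seq model with dot-product attention of the kind described is shown below; the sequence lengths, feature dimension and layer sizes are placeholders rather than the authors' configuration.

```python
# Illustrative Seq2Seq+Attention regressor for per-layer deviation profiles.
from tensorflow import keras
from tensorflow.keras import layers

def build_seq2seq_attention(src_len, tgt_len, n_feat, units=64):
    enc_in = layers.Input(shape=(src_len, n_feat))          # past layers' deviations
    enc_seq, h, c = layers.LSTM(units, return_sequences=True, return_state=True)(enc_in)

    dec_in = layers.Input(shape=(tgt_len, n_feat))          # teacher-forced decoder input
    dec_seq = layers.LSTM(units, return_sequences=True)(dec_in, initial_state=[h, c])

    context = layers.Attention()([dec_seq, enc_seq])        # dot-product attention
    out = layers.Dense(n_feat)(layers.Concatenate()([dec_seq, context]))
    return keras.Model([enc_in, dec_in], out)

model = build_seq2seq_attention(src_len=5, tgt_len=1, n_feat=128)
model.compile(optimizer='adam', loss='mse')
```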

Findings

The proposed Seq2Seq+Attention model is able to provide consistent prediction accuracy. The compensation plan proposed based on the predicted deviation can significantly improve the printing fidelity for those layers detected with large deviations.

Practical implications

Based on the experiments conducted on the knee joint samples, the proposed method outperforms the other three machine learning methods for both subsequent layer and occluded toolpath deviation prediction.

Originality/value

This work fills a research gap for predicting in-plane deviation not only for subsequent layers but also for occluded paths due to the missing scanning measurements. It is also combined with the multiresolution alignment and change point detection to determine the necessity of a compensation plan with updated G-code.

Details

Rapid Prototyping Journal, vol. 30 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 13 July 2022

Juan R. Jaramillo

This paper aims to present two different methods to speed up a test used in the sanitary ware industry that requires counting the number of granules that remain in the commodity…

Abstract

Purpose

This paper aims to present two different methods to speed up a test used in the sanitary ware industry that requires counting the number of granules that remain in the commodity after flushing. The test requires that 2,500 granules be added to the lavatory and that fewer than 125 remain.

Design/methodology/approach

The problem is approached using two deep learning computer vision (CV) models. The first is a Vision Transformer (ViT) classification approach and the second is a U-Net paired with a connected-components algorithm. Both models are trained and evaluated using a proprietary data set of 3,518 labeled images, and their performance is compared.
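To make the U-Net counting stage concrete, here is a short Python sketch that thresholds a predicted segmentation mask and counts granules as connected components; the unet model, the probability threshold and the minimum-size filter are hypothetical stand-ins, not the paper's settings.

```python
# Illustrative counting stage: segmentation mask -> connected components -> count.
import numpy as np
from scipy import ndimage

def count_granules(unet, image, prob_thresh=0.5, min_pixels=5):
    # `unet` is assumed to be a trained Keras-style model taking (1, H, W, 1) input.
    mask = unet.predict(image[None, ..., None])[0, ..., 0] > prob_thresh
    labels, n = ndimage.label(mask)                        # default 4-connectivity
    sizes = ndimage.sum(mask, labels, range(1, n + 1))     # pixels per component
    return int(np.sum(sizes >= min_pixels))                # ignore sub-granule specks

def test_passes(count):
    # The flush test passes when fewer than 125 of the 2,500 granules remain.
    return count < 125
```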

Findings

It was found that both algorithms produce competitive solutions. The U-Net algorithm achieves accuracy levels above 94% and the ViT model reaches accuracy levels above 97%. At this time, the U-Net algorithm is being piloted and the ViT pilot is at the planning stage.

Originality/value

To the best of the authors’ knowledge, this is the first approach using CV to solve the granules problem by applying ViT. In addition, this work updates the U-Net connected-components algorithm and compares the results of both algorithms.

Details

Journal of Modelling in Management, vol. 18 no. 5
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 1 August 2016

Chih-Ta Yen and Guan-Jie Huang

The purpose of this paper is to propose a new optical steganography framework that can be applied to public optical binary phase-shift keying (BPSK) systems by transmitting a…

Abstract

Purpose

The purpose of this paper is to propose a new optical steganography framework that can be applied to public optical binary phase-shift keying (BPSK) systems by transmitting a stealth spectrum-amplitude-coded optical code-division multiple-access signal through a BPSK link.

Design/methodology/approach

By using high-dispersion elements, the stealth data pulses are temporally stretched, and the signal amplitude decreases after stretching. Thus, the signal can be hidden underneath the public signal and system noise. At the receiver end, a polarizer is used to remove the public BPSK signal, and the stealth signal is successfully recovered by a balanced detector.
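The hiding mechanism can be illustrated numerically: applying a quadratic spectral phase (group-velocity dispersion) to a pulse stretches it in time and lowers its peak amplitude. The NumPy sketch below uses illustrative parameter values, not the paper's experimental settings.

```python
# Illustrative dispersion sketch: quadratic spectral phase stretches a pulse.
import numpy as np

t = np.linspace(-200e-12, 200e-12, 4096)          # time grid (s)
dt = t[1] - t[0]
pulse = np.exp(-(t / 10e-12) ** 2)                # ~10 ps Gaussian stealth pulse

omega = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)  # angular frequency grid (rad/s)
beta2_z = 2e-22                                   # accumulated GVD (s^2), e.g. ~20 ps^2/km over 10 km
H = np.exp(-0.5j * beta2_z * omega ** 2)          # dispersion transfer function

stretched = np.fft.ifft(np.fft.fft(pulse) * H)

print(f"peak before: {np.abs(pulse).max():.3f}")
print(f"peak after:  {np.abs(stretched).max():.3f}")  # lower peak => easier to hide under noise
```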

Findings

In a simulation, the bit-error rate (BER) performance improved when the stealth power increased.

Research limitations/implications

The BER performance worsens when the noise power becomes large. Future work will consider improving system performance in high-noise-power situations.

Practical implications

By properly adjusting the power of the amplified spontaneous emission noise, the stealth signal can be hidden well in the public channel while producing minimal influence on the public BPSK signal.

Originality/value

The proposed optical steganography framework makes it more difficult for eavesdroppers to detect and intercept the hidden stealth channel under public transmission, even when using a dispersion compensation scheme.

Details

Engineering Computations, vol. 33 no. 6
Type: Research Article
ISSN: 0264-4401

Open Access
Article
Publication date: 11 August 2021

Yang Zhao and Zhonglu Chen

This study explores whether a new machine learning method can more accurately predict the movement of stock prices.

Abstract

Purpose

This study explores whether a new machine learning method can more accurately predict the movement of stock prices.

Design/methodology/approach

This study presents a novel hybrid deep learning model, Residual-CNN-Seq2Seq (RCSNet), to predict the trend of stock price movement. RCSNet integrates the autoregressive integrated moving average (ARIMA) model, a convolutional neural network (CNN) and a sequence-to-sequence (Seq2Seq) long short-term memory (LSTM) model.
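A minimal sketch of this kind of hybrid is given below, assuming the common decomposition in which ARIMA captures the linear component and a CNN + Seq2Seq-LSTM network models the non-linear residuals; the exact RCSNet architecture is not specified in the abstract, so all sizes are placeholders.

```python
# Illustrative ARIMA + CNN/Seq2Seq-LSTM hybrid for price series (not RCSNet itself).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras
from tensorflow.keras import layers

def fit_linear(prices, order=(1, 1, 1)):
    fit = ARIMA(prices, order=order).fit()
    return fit, prices - fit.fittedvalues      # residuals = non-linear component

def build_residual_net(window=30):
    model = keras.Sequential([
        layers.Input(shape=(window, 1)),
        layers.Conv1D(32, 3, padding='causal', activation='relu'),  # local patterns
        layers.LSTM(64, return_sequences=True),                     # encoder
        layers.LSTM(64),                                            # decoder (1-step)
        layers.Dense(1),                                            # next residual
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# Final forecast = ARIMA forecast + residual-network prediction.
```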

Findings

The hybrid model is able to forecast both the linear and non-linear time-series components of a stock dataset. CNNs and Seq2Seq LSTMs can be effectively combined for dynamic modeling of short- and long-term-dependent patterns in non-linear time-series forecasting. Experimental results show that the proposed model outperforms baseline models on the S&P 500 index dataset from January 2000 to August 2016.

Originality/value

This study develops the RCSNet hybrid model to tackle the challenge by combining both linear and non-linear models. New evidence has been obtained in predicting the movement of stock market prices.

Details

Journal of Asian Business and Economic Studies, vol. 29 no. 2
Type: Research Article
ISSN: 2515-964X

Article
Publication date: 28 June 2021

Mingyan Zhang, Xu Du, Kerry Rice, Jui-Long Hung and Hao Li

This study aims to propose a learning pattern analysis method which can improve a predictive model’s performance, as well as discover hidden insights into micro-level learning…

Abstract

Purpose

This study aims to propose a learning pattern analysis method that can improve a predictive model’s performance, as well as discover hidden insights into micro-level learning patterns. Analyzing students’ learning patterns can help instructors understand how their course design or activities shape learning behaviors; depict students’ beliefs about learning and their motivation; and predict learning performance from individual students’ learning patterns. Although time-series analysis is one of the most feasible predictive methods for learning pattern analysis, the literature indicates that current approaches cannot provide holistic insights about learning patterns for personalized intervention. This study identified at-risk students by micro-level learning pattern analysis and detected pattern types, especially the at-risk patterns that existed in the case study. The connections among students’ learning patterns, corresponding self-regulated learning (SRL) strategies and learning performance were then revealed.

Design/methodology/approach

The method uses a long short-term memory (LSTM) encoder to process micro-level behavioral patterns for feature extraction and compression, so that the students’ behavior pattern information is saved into encoded series. The encoded time-series data are then used for pattern analysis and performance prediction. Time-series clustering was performed to demonstrate the unique strength of the proposed method.
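A brief Keras sketch of such a pipeline follows: an LSTM autoencoder compresses each student's behavior sequence into a fixed-length code, which can then be clustered into pattern types. The dimensions and the clustering choice (k-means) are illustrative assumptions, not the authors' configuration.

```python
# Illustrative LSTM autoencoder + clustering of the learned codes.
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.cluster import KMeans

def build_lstm_autoencoder(timesteps, n_behaviors, code_dim=16):
    inp = layers.Input(shape=(timesteps, n_behaviors))   # e.g. weekly activity counts
    code = layers.LSTM(code_dim)(inp)                    # encoder -> fixed-length code
    x = layers.RepeatVector(timesteps)(code)
    out = layers.LSTM(n_behaviors, return_sequences=True)(x)  # decoder
    return keras.Model(inp, out), keras.Model(inp, code)

autoenc, encoder = build_lstm_autoencoder(timesteps=15, n_behaviors=8)
autoenc.compile(optimizer='adam', loss='mse')
# After autoenc.fit(X, X, ...), cluster the codes to surface pattern types:
# labels = KMeans(n_clusters=4, n_init=10).fit_predict(encoder.predict(X))
```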

Findings

Successful students showed consistent participation levels and balanced behavioral frequency distributions, and they adjusted their learning behaviors to meet course requirements. The three at-risk pattern types showed low-engagement (R1), low-interaction (R2) and non-persistent (R3) characteristics. Successful students showed more complete SRL strategies than failed students. Political Science had higher at-risk chances in all three at-risk types. Computer Science, Earth Science and Economics showed higher chances of having R3 students.

Research limitations/implications

The study identified multiple learning patterns that can lead to the at-risk situation. However, more studies are needed to validate whether the same at-risk types can be found in other educational settings. In addition, this case study found that the distributions of at-risk types varied across subjects; the relationship between subjects and at-risk types is worth further investigation.

Originality/value

This study found that the proposed method can effectively extract micro-level behavioral information to generate better prediction outcomes and to depict students’ SRL strategies in online learning.

Details

Information Discovery and Delivery, vol. 50 no. 2
Type: Research Article
ISSN: 2398-6247
