Search results

1 – 10 of 11
Article
Publication date: 16 October 2023

Peng Wang and Renquan Dong

To improve the position tracking efficiency of the upper-limb rehabilitation robot for stroke hemiplegia patients, an optimized learning rate of the membership function based…

Abstract

Purpose

To improve the position tracking efficiency of the upper-limb rehabilitation robot for stroke hemiplegia patients, an optimized learning rate of the membership function for the fuzzy impedance controller of the rehabilitation robot is proposed.

Design/methodology/approach

First, the impaired limb’s damping and stiffness parameters, which characterise its physical recovery condition, are estimated online using a recursive weighted least-squares method. Second, a fuzzy impedance controller with its rule base is designed around the optimal impedance parameters. Finally, an online learning-rate optimization strategy for the membership functions, based on a Takagi-Sugeno (TS) fuzzy impedance model, is proposed to improve the position tracking speed of the fuzzy impedance control.
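
As a rough, illustrative sketch of the recursive weighted least-squares idea described above (not the authors' implementation; the force model F = B·v + K·x, the forgetting factor and all numeric values are assumptions), an update step in Python might look like:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam.

    theta : current estimate of [damping B, stiffness K]
    P     : covariance matrix of the estimate
    phi   : regressor vector [velocity, position]
    y     : measured interaction force
    """
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)        # update gain
    err = y - phi @ theta                    # prediction error
    theta = theta + gain * err               # parameter update
    P = (P - np.outer(gain, Pphi)) / lam     # covariance update
    return theta, P

# toy demonstration: recover B = 5 N·s/m, K = 120 N/m from noisy force samples
rng = np.random.default_rng(0)
theta, P = np.zeros(2), np.eye(2) * 1e3
t = np.linspace(0, 5, 500)
x = 0.05 * np.sin(2 * np.pi * 0.5 * t)       # assumed limb position trajectory
v = np.gradient(x, t)                         # limb velocity
f = 5.0 * v + 120.0 * x + rng.normal(0, 0.05, t.size)
for vi, xi, fi in zip(v, x, f):
    theta, P = rls_update(theta, P, np.array([vi, xi]), fi)
print("estimated [B, K]:", theta)
```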

Findings

This method provides a solution for improving the membership function learning rate of the fuzzy impedance controller of the upper-limb rehabilitation robot. Compared with a traditional TS fuzzy impedance controller in position control, the improved TS fuzzy impedance controller reduces the overshoot settling time by 0.025 s and reduces the position error caused by simulated thrust interference from the impaired limb by 8.4%. These results are verified by simulation and experiment.

Originality/value

The TS fuzzy impedance controller based on the online membership-function learning-rate optimization strategy can effectively optimize control parameters and improve the position tracking speed of upper-limb rehabilitation robots. This controller improves the efficiency of robot-assisted rehabilitation and ensures the stability of assisted rehabilitation training.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 1
Type: Research Article
ISSN: 0143-991X

Content available
Article
Publication date: 12 April 2022

Monica Puri Sikka, Alok Sarkar and Samridhi Garg

With the help of basic physics, the application of computer algorithms in the form of recent advances such as machine learning and neural networks in the textile industry has been…

Abstract

Purpose

With the help of basic physics, this review discusses the application of computer algorithms, in the form of recent advances such as machine learning and neural networks, in the textile industry. Scientists have linked the underlying structural and chemical science of textile materials and discovered several strategies for completing some of the most time-consuming tasks with ease and precision. Since the 1980s, computer algorithms and machine learning have been used to support the majority of the textile testing process. With the rise in demand for automation, deep learning and neural networks now handle the majority of testing and quality control operations in the form of image processing.

Design/methodology/approach

The state of the art of artificial intelligence (AI) applications in the textile sector is reviewed in this paper. The current literature is evaluated on the basis of several research problems and AI-based methods. The research issues are grouped into three categories based on the operational processes of the textile industry: yarn manufacturing, fabric manufacturing and coloration.

Findings

AI-assisted automation has improved not only machine efficiency but also overall industry operations. AI's fundamental concepts have been examined in the context of real-world challenges. Several scientists conducted the majority of the case studies, and they confirmed that image analysis, backpropagation and neural networks can be used specifically as techniques for textile material testing. AI can be used to automate processes in various circumstances.

Originality/value

This research conducts a thorough analysis of artificial neural network applications in the textile sector.

Details

Research Journal of Textile and Apparel, vol. 28 no. 1
Type: Research Article
ISSN: 1560-6074

Article
Publication date: 9 March 2022

G.L. Infant Cyril and J.P. Ananth

Banks are an integral part of the market economy. The failure or success of an institution relies on its ability to assess credit risk. The…

Abstract

Purpose

Banks are an integral part of the market economy. The failure or success of an institution relies on its ability to assess credit risk. Loan eligibility prediction models use analysis methods that draw on a credit user's past and current information to make predictions. However, precise loan prediction with risk and assessment analysis remains a major challenge in loan eligibility prediction.

Design/methodology/approach

The aim of this research is to present a new method, namely a Social Border Collie Optimization (SBCO)-based deep neuro-fuzzy network for loan eligibility prediction. In this method, a Box-Cox transformation is applied to the input loan data to make the data suitable for further processing. Wrapper-based feature selection is then applied to the transformed data to choose features that boost the performance of the loan eligibility calculation. Once the features are chosen, naive Bayes (NB) is adopted for feature fusion. During NB training, the classifier builds a probability index table from the input data features and group values. The NB classifier is tested using the posterior probability ratio, considering the conditional probability of the normalization constant with the class evidence. Finally, loan eligibility prediction is achieved by a deep neuro-fuzzy network trained with the designed SBCO. The SBCO is devised by combining the social ski driver (SSD) algorithm and Border Collie Optimization (BCO) to produce the most precise result.
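
For illustration only, the preprocessing pipeline described above (Box-Cox transformation, wrapper-based feature selection, naive Bayes) could be sketched as follows; the SBCO-trained deep neuro-fuzzy network itself is the paper's own contribution and is not reproduced, and the synthetic data and parameter choices here are assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# synthetic stand-in for loan data (the real dataset is not specified here)
X, y = make_classification(n_samples=600, n_features=12, n_informative=5, random_state=0)

# Box-Cox requires strictly positive inputs, so shift each feature first
X_pos = X - X.min(axis=0) + 1.0
X_bc = np.column_stack([stats.boxcox(X_pos[:, j])[0] for j in range(X_pos.shape[1])])

# wrapper-based feature selection: greedy forward search scored by a classifier
selector = SequentialFeatureSelector(GaussianNB(), n_features_to_select=5, direction="forward")
X_sel = selector.fit_transform(X_bc, y)

# naive Bayes on the selected features as a simple fusion/baseline classifier
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
nb = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", nb.score(X_te, y_te))
```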

Findings

The analysis is performed using the accuracy, sensitivity and specificity parameters. The designed method achieves the highest accuracy of 95%, with a sensitivity of 95.4% and a specificity of 97.3%, when compared to existing methods such as the fuzzy neural network (Fuzzy NN), multiple partial least squares regression model (Multi_PLS), instance-based entropy fuzzy support vector machine (IEFSVM), deep recurrent neural network (Deep RNN) and whale social optimization algorithm-based deep RNN (WSOA-based Deep RNN).
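
For reference, the reported metrics follow the standard confusion-matrix definitions; a minimal sketch with made-up labels (not the paper's data) is:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# illustrative labels only; the paper's own test data are not available here
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate
print(accuracy, sensitivity, specificity)
```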

Originality/value

This paper devises an SBCO-based deep neuro-fuzzy network for predicting loan eligibility. The deep neuro-fuzzy network is trained with the proposed SBCO, which combines SSD and BCO to produce the most precise results for loan eligibility prediction.

Details

Kybernetes, vol. 52 no. 8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 21 December 2023

Majid Rahi, Ali Ebrahimnejad and Homayun Motameni

Taking into consideration the current human need for agricultural produce such as rice that requires water for growth, the optimal consumption of this valuable liquid is…

Abstract

Purpose

Given the current human need for agricultural produce such as rice, which requires water for growth, the optimal consumption of this valuable liquid is important. Unfortunately, the traditional use of water for agricultural purposes contradicts the concept of optimal consumption. Therefore, designing and implementing a mechanized irrigation system is of the highest importance. Such a system includes hardware such as liquid altimeter sensors, valves and pumps, whose failures are an integral part of operation and cause faults in the system. These faults occur at probabilistic time intervals, and an exponential probability distribution is used to simulate these intervals. Thus, evaluating such high-cost systems during the design phase, before implementation, is essential.
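
As a small illustration of the exponential failure-interval assumption mentioned above (the mean time between failures used here is an arbitrary assumed value, not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
mtbf_hours = 500.0                       # assumed mean time between failures
rate = 1.0 / mtbf_hours                  # exponential rate parameter lambda

# sample 10 successive failure intervals and their cumulative failure times
intervals = rng.exponential(scale=1.0 / rate, size=10)
failure_times = np.cumsum(intervals)
print(np.round(failure_times, 1))
```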

Design/methodology/approach

The proposed approach includes two main phases: offline and online. The offline phase covers the simulation of the studied system (i.e. the irrigation system of paddy fields) and the acquisition of a data set for training machine learning algorithms, such as decision trees, to detect, locate (classify) and evaluate faults. In the online phase, the C5.0 decision trees trained in the offline phase are applied to a stream of data generated by the system.
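
A minimal sketch of this offline-training/online-classification pattern is shown below; it uses scikit-learn's CART decision tree as a stand-in for C5.0, and the simulated features, fault classes and parameters are assumptions rather than the authors' setup.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# --- offline phase: train a tree on simulated sensor data --------------------
# stand-in features (e.g. water level, valve state, pump current); 3 fault classes
X_sim, y_sim = make_classification(n_samples=2000, n_features=3, n_informative=3,
                                   n_redundant=0, n_classes=3, n_clusters_per_class=1,
                                   random_state=1)
tree = DecisionTreeClassifier(max_depth=6, random_state=1).fit(X_sim, y_sim)

# --- online phase: classify readings as they arrive from the running system --
def classify_stream(readings, model):
    for sample in readings:                     # one sensor snapshot at a time
        yield model.predict(sample.reshape(1, -1))[0]

for fault_class in classify_stream(X_sim[:5], tree):
    print("predicted fault class:", fault_class)
```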

Findings

The proposed approach is a comprehensive online, component-oriented method that combines supervised machine learning methods to investigate system faults. Each of these methods is considered a component determined by the dimensions and complexity of the case study (to discover, classify and evaluate fault tolerance). These components are placed together in a process framework so that the appropriate method for each component is obtained by comparison with other machine learning methods. As a result, depending on the conditions under study, the most efficient method is selected for each component. Before the system implementation phase, the system's reliability is checked by evaluating the predicted faults (in the design phase). Therefore, this approach avoids the construction of a high-risk system. Compared to existing methods, the proposed approach is more comprehensive and has greater flexibility.

Research limitations/implications

As the dimensions of the problem expand, the automata-based model verification space grows exponentially.

Originality/value

Unlike the existing methods that only examine one or two aspects of fault analysis such as fault detection, classification and fault-tolerance evaluation, this paper proposes a comprehensive process-oriented approach that investigates all three aspects of fault analysis concurrently.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 18 January 2024

Naraindra Kistamah

This chapter offers an overview of the applications of artificial intelligence (AI) in the textile industry and in particular, the textile colouration and finishing industry. The…

Abstract

This chapter offers an overview of the applications of artificial intelligence (AI) in the textile industry and, in particular, the textile colouration and finishing industry. The advent of new technologies such as AI and the Internet of Things (IoT) has changed many businesses, and one area in which AI is seeing growth is the textile industry. It is estimated that the AI software market will reach a new high of over US$60 billion by 2022, and the largest increase is projected to be in the area of machine learning (ML). This is the area of AI in which machines process and analyse the vast amounts of data they collect to perform tasks and processes. In the textile manufacturing industry, AI is applied to areas such as colour matching, colour recipe formulation, pattern recognition, garment manufacture, process optimisation, quality control and supply chain management for enhanced productivity, product quality and competitiveness, reduced environmental impact and an overall improved customer experience. The importance and success of AI is set to grow as ML algorithms become more sophisticated and smarter and computing power increases.

Details

Artificial Intelligence, Engineering Systems and Sustainable Development
Type: Book
ISBN: 978-1-83753-540-8

Article
Publication date: 19 April 2022

D. Divya, Bhasi Marath and M.B. Santosh Kumar

This study aims to raise awareness of the development of fault detection systems that use data collected from the sensor/physical devices of various systems for predictive…

Abstract

Purpose

This study aims to raise awareness of the development of fault detection systems that use data collected from the sensor/physical devices of various systems for predictive maintenance. Opportunities and challenges in developing anomaly detection algorithms for predictive maintenance, as well as unexplored areas in this context, are also discussed.

Design/methodology/approach

To conduct a systematic review of state-of-the-art algorithms in fault detection for predictive maintenance, review papers from 2017–2021 available in the Scopus database were selected. A total of 93 papers were chosen and classified under electrical and electronics, civil and construction, automobile, production and mechanical. In addition, the paper provides a detailed discussion of various fault-detection algorithms, categorised under supervised, semi-supervised and unsupervised learning and traditional statistical methods, along with an analysis of the forms of anomalies prevalent across different sectors of industry.

Findings

Based on the literature reviewed, seven propositions are presented, focusing on the following areas: the need for a uniform framework when scaling the number of sensors; the need for identification of erroneous parameters; the need for new algorithms based on unsupervised and semi-supervised learning; the importance of ensemble learning and data fusion algorithms; the necessity of automatic fault diagnostic systems; concerns about multiple fault detection; and cost-effective fault detection. These propositions shed light on the unsolved issues of predictive maintenance using fault detection algorithms. A novel architecture based on these methodologies and propositions gives the reader more clarity for further exploration of this area.

Originality/value

Papers for this study were selected from the Scopus database for predictive maintenance in the field of fault detection. Review papers published in this area deal only with methods used to detect anomalies, whereas this paper attempts to establish a link between different industrial domains and the methods used in each industry that uses fault detection for predictive maintenance.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 2
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 21 November 2023

Armin Mahmoodi, Leila Hashemi and Milad Jasemi

In this study, the central objective is to foresee stock market signals with the use of a proper structure to achieve the highest accuracy possible. For this purpose, three hybrid…

Abstract

Purpose

In this study, the central objective is to forecast stock market signals using a proper structure to achieve the highest accuracy possible. For this purpose, three hybrid models have been developed for the stock markets, each combining a support vector machine (SVM) with one of the meta-heuristic algorithms particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA). All the analyses are technical and are based on the Japanese candlestick model.

Design/methodology/approach

Based on the results achieved, the most suitable algorithm is then chosen to anticipate sell and buy signals. Moreover, the authors compare the validation results of the designed models with basic models from three articles published in past years. In the first model, SVM is combined with PSO, which searches the problem-solving space precisely and at a faster pace. In the second model, SVM and ICA are tested for stock market timing, with ICA used as an optimization agent for the SVM parameters. In the third model, SVM and GA are studied, where GA acts as an optimizer and feature selection agent.
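
As a hedged, minimal sketch of the SVM-PSO idea (PSO searching SVM hyperparameters scored by cross-validation), the following Python code illustrates the mechanism; the candlestick-derived features, swarm settings and search bounds are all assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# stand-in data; the papers use candlestick-based features, not reproduced here
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

def fitness(log_c, log_gamma):
    """Cross-validated accuracy of an SVC for given log10(C), log10(gamma)."""
    clf = SVC(C=10 ** log_c, gamma=10 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

# minimal particle swarm over the 2-D hyperparameter space
rng = np.random.default_rng(0)
n_particles, n_iter = 10, 15
pos = rng.uniform([-2, -4], [3, 1], size=(n_particles, 2))   # log10 C, log10 gamma
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -4], [3, 1])
    vals = np.array([fitness(*p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "CV accuracy:", pbest_val.max())
```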

Findings

As per the results, all new models can predict accurately for only six days; however, comparison of the confusion matrix results shows that the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA shows better performance than the other models when executing the implemented models.

Research limitations/implications

In this study, stock market data for the years 2013–2021 were analyzed; the length of this timeframe makes the input data analysis challenging, as the data must be adjusted with respect to the changing conditions under which they were generated.

Originality/value

In this study, two methods have been developed within a candlestick model: raw-based and signal-based approaches, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

EuroMed Journal of Business, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1450-2194

Article
Publication date: 26 January 2022

Rajashekhar U., Neelappa and Harish H.M.

This project is founded on the principles of natural control, feedback, stimuli and protection. Via properly conducted experiments, a multilayer computer rehabilitation…

Abstract

Purpose

This project is founded on the principles of natural control, feedback, stimuli and protection. Via properly conducted experiments, a multilayer computer rehabilitation system was created that integrates natural interaction assisted by electroencephalogram (EEG) signals, enabling movements in both a virtual environment and a real wheelchair. This paper expounds a suitable methodology for blind wheelchair operators. The outcomes show that virtual reality (VR) combined with EEG signals has the potential to improve the quality of life and independence of blind wheelchair users.

Design/methodology/approach

Individuals face numerous challenges with many disorders, particularly when multiple dysfunctions are diagnosed, and especially visually affected wheelchair users. This scenario creates a degree of incapacity on the part of the wheelchair user in performing simple activities. Confined patients are treated in a manner adapted to their specific medical needs. Independent navigation must be secured for individuals with vision and motor disabilities, and the need for communication justifies the use of VR in this navigation situation. For effective integration, locomotion must also be under natural guidance. EEG, which uses random brain impulses, has made significant progress in the field of health. This study demonstrates, through an experiment, the use of an automated audio announcement system adapted with the help of VR and EEG for locomotion training and individualized interaction of wheelchair users with visual disability. The aim was to establish efficient connections, enabling patients who were otherwise deemed incapacitated to participate in social activities.

Findings

To protect soldiers' lives directly and to report on these issues, a military system requires a high-speed, precise, portable prototype device for monitoring soldier health, recognising the soldier's location and sharing health reports with the concerned system. A field programmable gate array (FPGA)-based soldier health monitoring and position recognition system is proposed in this paper. The soldier's health is monitored on a systematic basis, relying on the heart rate derived from EEG signals. The whole work is carried out in the Vivado Design Suite by developing Verilog hardware description language (HDL) code and executing it on an Artix-7 FPGA development board (part XC7ACSG100t). The proposed architecture comprises classification of different abnormalities, cloud storage of the EEG along with the type of abnormality, artifact elimination, and abnormality identification based on feature extraction. Irregular conditions are detected by the developed prototype system, which alerts the physically challenged (PHC) individual via an audio announcement. A practical method for eradicating motion artifacts from EEG signals containing anomalies in the PHC person's brain has been established, and the resulting system is a portable device that can report differences in brain signal variation intensity. Artifact removal is completed in two stages: first the EEG signals are acquired and the undesirable artifacts are removed, and then features are extracted using the discrete wavelet transform. Anomalies in the signal are detected and recognised using machine learning algorithms known as multirate support vector machine classifiers, after the features have been extracted using a combination of a hidden Markov model (HMM) and a Gaussian mixture model (GMM). To enable reliable announcements about the actions taken by a blind person, the resulting signals are kept in storage devices and conveyed to the controller. Simulating daily motion schedules allows the affected EEG signals to be captured. For validation of the planned system, a database of numerous recorded EEG signals can be used and extended. The proposed strategy performs better in terms of restoring the theta, delta, alpha and beta complexes of the original EEG with less distortion and a higher signal-to-noise ratio (SNR), as shown in the quantitative analysis. The proposed method used Verilog HDL and MATLAB software for both implementation and validation of results. The achieved results show a 32% enhancement in SNR, a 14% improvement in mean squared error (MSE) and a 65% enhancement in the recognition of anomalies; hence, the design is effectively verified and proven on standard EEG signal data sets on FPGA.
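
For illustration of the wavelet-feature-plus-SVM stage mentioned above (the HMM/GMM stage and the FPGA implementation are not reproduced), a minimal Python sketch using the PyWavelets library is given below; the synthetic signals, wavelet choice and "abnormality" definition are assumptions for demonstration only.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def dwt_band_energies(signal, wavelet="db4", level=4):
    """Relative energy of each DWT sub-band, a common compact EEG feature set."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# synthetic stand-in epochs: "normal" noise vs. noise with a slow-wave burst
def make_epoch(abnormal):
    t = np.linspace(0, 1, 256)
    sig = rng.normal(0, 1, t.size)
    if abnormal:
        sig += 3 * np.sin(2 * np.pi * 3 * t)      # added low-frequency component
    return sig

labels = [0] * 50 + [1] * 50
X = np.array([dwt_band_energies(make_epoch(lbl)) for lbl in labels])
y = np.array(labels)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])           # train on even epochs
print("test accuracy:", clf.score(X[1::2], y[1::2]))  # evaluate on odd epochs
```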

Originality/value

The proposed system can be used in military applications, as it is high speed, portable and highly precise in identifying abnormalities. An FPGA-based soldier health monitoring and position recognition system is proposed in this paper. The soldier's health is monitored on a systematic basis, relying on the heart rate derived from EEG signals. The proposed system is developed using the Verilog HDL programming language, executed on an Artix-7 FPGA development board (part XC7ACSG100t) and synthesised using the Vivado Design Suite software tool.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 3
Type: Research Article
ISSN: 1742-7371

Open Access
Article
Publication date: 8 December 2023

Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi

The proposed model has been aimed to predict stock market signals by designing an accurate model. In this sense, the stock market is analysed by the technical analysis of Japanese…

Abstract

Purpose

The proposed model aims to predict stock market signals through an accurate model design. In this sense, the stock market is analysed by the technical analysis of Japanese candlesticks, combined with a support vector machine (SVM) and the following meta-heuristic algorithms: particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA).

Design/methodology/approach

In addition, among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors present comparative results to validate the designed model against the basic models of three past articles. In the first model, PSO is used to search the solution space precisely and at high running speed. In the second model, SVM and ICA are examined for market timing, where ICA improves the SVM parameters. Finally, in the third model, SVM and GA are studied, where GA acts as an optimizer and feature selection agent.
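
To illustrate the GA-as-feature-selector role mentioned above (complementing the PSO hyperparameter sketch given earlier in this list), a toy genetic algorithm that evolves a binary feature mask for an SVM is shown below; the data, population size, crossover and mutation rates are all assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# stand-in data; the paper's candlestick features are not reproduced here
X, y = make_classification(n_samples=300, n_features=15, n_informative=6, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated SVC accuracy using only the features where mask == 1."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1]))          # random binary chromosomes
for _ in range(10):                                       # a few GA generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(0, 6, 2)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "CV accuracy:", fitness(best))
```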

Findings

Results indicate that the prediction accuracy of all new models is high for only six days; however, with respect to the confusion matrix results, the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA shows better performance than the other models when executing the implemented models.

Research limitations/implications

In this study, the authors analyze data spanning the years 2013–2021; the length of this period makes the input data analysis challenging, as the data must be adjusted with respect to changing conditions.

Originality/value

In this study, two methods have been developed within a candlestick model: raw-based and signal-based approaches, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

Journal of Capital Markets Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-4774

Article
Publication date: 12 October 2023

R.L. Manogna and Aayush Anand

Deep learning (DL) is a new and relatively unexplored field that finds immense applications in many industries, especially ones that must make detailed observations, inferences…

Abstract

Purpose

Deep learning (DL) is a new and relatively unexplored field that finds immense applications in many industries, especially ones that must make detailed observations, inferences and predictions based on extensive and scattered datasets. The purpose of this paper is to answer the following questions: (1) To what extent has DL penetrated the research being done in finance? (2) What areas of financial research have applications of DL, and what quality of work has been done in the niches? (3) What areas still need to be explored and have scope for future research?

Design/methodology/approach

This paper employs bibliometric analysis, a potent yet simple methodology with numerous applications in literature reviews. The paper focuses on citation analysis, author impacts, relevant and vital journals, co-citation analysis, bibliographic coupling and co-occurrence analysis. The authors collected 693 articles published in 2000–2022 from journals indexed in the Scopus database. Multiple software tools (VOSviewer, RStudio (biblioshiny) and Excel) were employed to analyze the data.
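
The co-occurrence analysis mentioned above is performed in the study with dedicated tools (VOSviewer, biblioshiny); purely to illustrate the underlying idea, a minimal keyword co-occurrence count in Python could look like the sketch below, where the example records are invented rather than taken from the Scopus export.

```python
from collections import Counter
from itertools import combinations

# illustrative records; real inputs would be author keywords exported from Scopus
records = [
    ["deep learning", "stock market", "forecasting"],
    ["deep learning", "portfolio optimization", "risk management"],
    ["deep learning", "forecasting", "time series"],
    ["high-frequency trading", "deep learning"],
]

# count how often each pair of keywords appears together in one article
pair_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common(5):
    print(f"{a} <-> {b}: {n}")
```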

Findings

The findings reveal the impact of significant and renowned authors in the field. The analysis indicates that the application of DL in finance has been on an upward track since 2017. The authors find four broad research areas (neural networks and stock market simulations; portfolio optimization and risk management; time series analysis and forecasting; high-frequency trading) with different degrees of intertwining, as well as emerging research topics in the application of DL in finance. This article contributes to the literature by providing a systematic overview of DL developments, trajectories, objectives and potential future research topics in finance.

Research limitations/implications

The findings of this paper act as a guide for literature review for anyone interested in doing research in the intersection of finance and DL. The article also explores multiple areas of research that have yet to be studied to a great extent and have abundant scope.

Originality/value

Very few studies have explored the applications in finance of DL, a much more specialized subset of machine learning (ML). The authors look at the problem from the aspect of the different DL techniques that have been used in finance. This is the first qualitative (content analysis) and quantitative (bibliometric analysis) assessment of current research on DL in finance.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

1 – 10 of 11