Search results
1 – 10 of over 11,000

Ahmad Mozaffari, Nasser Lashgarian Azad and Alireza Fathi
Abstract
Purpose
The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques for regularized machine learning. Generally, by defining a proper penalty function, regularization laws are embedded into the structure of common least-squares solutions to increase the numerical stability, sparsity, accuracy and robustness of the regression weights. Several regularization techniques have been proposed so far, each with its own advantages and disadvantages, and several efforts have been made to find fast and accurate deterministic solvers for them. However, the proposed numerical and deterministic approaches require certain knowledge of mathematical programming and do not guarantee the global optimality of the obtained solution. In this research, the authors propose the use of constrained swarm and evolutionary techniques to cope with the demanding requirements of the regularized extreme learning machine (ELM).
Design/methodology/approach
To implement the required tools for the comparative numerical study, three steps are taken. The considered algorithms contain both classical and swarm and evolutionary approaches. For the classical regularization techniques, Lasso regularization, Tikhonov regularization, cascade Lasso-Tikhonov regularization and the elastic net are considered. For swarm and evolutionary-based regularization, an efficient constraint handling technique known as the self-adaptive penalty function is considered, and its algorithmic structure is modified so that it can efficiently perform regularized learning. Several well-known metaheuristics are considered to check the generalization capability of the proposed scheme. To test the efficacy of the proposed constrained evolutionary-based regularization technique, a wide range of regression problems is used. In addition, the proposed framework is applied to a real-life identification problem, namely identifying the dominant factors affecting the hydrocarbon emissions of an automotive engine, for further assurance of the performance of the proposed scheme.
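The abstract names Lasso (L1), Tikhonov/ridge (L2) and elastic-net penalties as the regularization laws embedded in the least-squares objective. As a minimal sketch of that idea, with all data and parameter names hypothetical, the penalized objective a metaheuristic would minimize over the regression weights can be written as:

```python
import numpy as np

def penalized_objective(w, X, y, lam=0.1, alpha=0.5):
    """Least-squares error plus an elastic-net penalty.

    alpha=1 recovers Lasso (pure L1), alpha=0 recovers Tikhonov/ridge
    (pure L2); intermediate values give the elastic-net mixture.
    """
    residual = y - X @ w
    l1 = np.sum(np.abs(w))   # Lasso term: promotes sparsity
    l2 = np.sum(w ** 2)      # Tikhonov term: promotes numerical stability
    return np.sum(residual ** 2) + lam * (alpha * l1 + (1 - alpha) * l2)

# A metaheuristic (PSO, GA, ...) would minimize this objective over w,
# here mimicked by keeping the best of a few random candidate vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, 0.0, -2.0])
y = X @ w_true
best = min((rng.normal(size=3) for _ in range(500)),
           key=lambda w: penalized_objective(w, X, y))
```

Because the penalty enters only through the objective value, any swarm or evolutionary optimizer can handle it without the gradient machinery deterministic solvers require, which is the point the abstract makes.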
Findings
Through an extensive numerical study, it is observed that the proposed scheme can easily be used for regularized machine learning. It is shown that, by defining a proper objective function and considering an appropriate penalty function, near-globally-optimal values of the regressors can be obtained. The results attest to the high potential of swarm and evolutionary techniques for fast, accurate and robust regularized machine learning.
Originality/value
The originality of the paper lies in the use of a novel constrained metaheuristic computing scheme which can be used for an effective regularized optimally pruned extreme learning machine (OP-ELM). The self-adaptation of the proposed method relieves the user of the need for detailed knowledge of the underlying system and also increases the degree of automation of OP-ELM. Moreover, by using different types of metaheuristics, it is demonstrated that the proposed methodology is a general, flexible scheme that can be combined with different types of swarm and evolutionary-based optimization techniques to form a regularized machine learning approach.
Abstract
Purpose
In recent decades, the development of effective methods for optimizing a set of conflicting objective functions has attracted increasing interest from researchers. This reflects the nature of real-life engineering systems and complex natural mechanisms, which are generally multi-modal, non-convex and multi-criterion. Until now, several deterministic and stochastic methods have been proposed to cope with such complex systems. Advanced soft computational methods such as evolutionary games (cooperative and non-cooperative), Pareto-based techniques, fuzzy evolutionary methods, cooperative bio-inspired algorithms and neuro-evolutionary systems have effectively come to the aid of researchers in building efficient paradigms with application to vector optimization. The paper aims to discuss this issue.
Design/methodology/approach
A novel hybrid algorithm called the synchronous self-learning Pareto strategy (SSLPS) is presented for vector optimization. The method is an ensemble of evolutionary algorithms (EAs), swarm intelligence (SI), an adaptive version of the self-organizing map (CSOM) and a data shuffling mechanism. EAs are powerful numerical optimization algorithms capable of finding a global extreme point over a wide exploration domain. SI techniques (a swarm of bees in this case) can improve both the intensification and the robustness of the exploration. The CSOM network is an unsupervised learning method which learns the characteristics of non-dominated solutions and thus enhances the quality of the Pareto front.
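The CSOM component is said to learn the characteristics of non-dominated solutions. The abstract gives no implementation details, but the underlying notion of Pareto non-domination that every such method rests on can be sketched as follows (minimization assumed, all names illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Keep only points that no other point dominates (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) is dominated by (2, 2); the remaining three points are mutually
# non-dominated and together form the front.
front = non_dominated([(1, 5), (2, 2), (4, 1), (3, 3)])
```

In SSLPS, such a front would be the training set the CSOM learns from; the learned map then proposes new candidates near the front to increase its intensity.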
Findings
To prove the effectiveness of the proposed method, the authors use a set of well-known benchmark functions and several well-known rival optimization methods. Additionally, SSLPS is employed for the optimal design of a shape memory alloy actuator, a nonlinear, multi-modal, real-world engineering problem. The experiments show that SSLPS performs acceptably on both numerical and engineering multi-objective problems.
Originality/value
To the author’s best knowledge, the proposed algorithm is among the few multi-objective methods that foster the use of automated unsupervised learning to increase the intensity of the Pareto front while preserving its diversity. The research also evaluates the power of hybridizing SI and EAs for efficient search.
Shrawan Kumar Trivedi and Shubhamoy Dey
Abstract
Purpose
Email is a rapid and cheap medium for sharing information, whereas unsolicited email (spam) is a constant source of trouble in email communication. The rapid growth of spam creates the need for a reliable and robust spam classifier. This paper presents a study of evolutionary classifiers (the genetic algorithm [GA] and genetic programming [GP]), with and without the help of an ensemble-of-classifiers method. In this research, the classifier ensemble has been developed with the adaptive boosting technique.
Design/methodology/approach
Text mining methods are applied to classify spam and legitimate emails. Two data sets (Enron and SpamAssassin) are used to test the classifiers concerned. Initially, pre-processing is performed to extract the features/words from the email files. An informative feature subset is then selected with the greedy stepwise feature subset search method. With the help of the informative features, a comparative study is performed, first within the evolutionary classifiers and then against other popular machine learning classifiers (Bayesian, naive Bayes and support vector machine).
Findings
This study reveals that evolutionary algorithms are promising for classification and prediction applications, and that genetic programming with adaptive boosting turns out to be not only an accurate but also a sensitive classifier. Results show that GA initially performs better than GP, but after ensembling (a large number of boosting iterations) GP overtakes GA with significantly higher accuracy. Amongst all the classifiers, boosted GP turns out to be good not only in terms of classification accuracy but also in its low false positive (FP) rate, which is considered an important criterion in email spam classification. Greedy stepwise feature search is also found to be an effective feature selection method in this application domain.
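The abstract does not describe the boosting variant in detail; a rough sketch of one round of the classic AdaBoost scheme, in which the evolved GA/GP classifier would act as the weak learner and misclassified emails gain weight, might look like this (all names hypothetical):

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost round: score the weak learner and reweight the samples.

    weights: current per-sample weights; correct: flags marking which
    samples the weak learner (here, an evolved GA/GP classifier) got right.
    """
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    alpha = 0.5 * math.log((1 - err) / err)   # the learner's vote weight
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)                          # renormalize to sum to 1
    return alpha, [w / total for w in new]

# With one of four samples misclassified, that sample's weight grows to 0.5,
# so the next weak learner concentrates on the hard email.
alpha, w = adaboost_round([0.25] * 4, [True, True, True, False])
```

The repeated reweighting is what lets GP, as the abstract reports, overtake GA after many iterations: each new GP classifier is evolved against the emails the ensemble so far gets wrong.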
Research limitations/implications
The research implication of this work is the reduction in the cost incurred because of spam/unsolicited bulk email. Email is a fundamental necessity for sharing information among the units of an organization that must remain competitive with its business rivals, and it is a continual challenge for internet service providers to offer the best email services to their customers. Although organizations and internet service providers continuously adopt novel spam filtering approaches to reduce the number of unwanted emails, the desired effect is often not achieved, because of installation costs, limited customizability and the threat of misclassifying important emails. This research addresses these issues and challenges faced by internet service providers and organizations.
Practical implications
In this research, the proposed models not only provide excellent accuracy and sensitivity with a low FP rate and customizable capability, but also help reduce the cost of spam. The same models may be used for other text mining applications, such as sentiment analysis, blog mining and news mining.
Originality/value
A comparison between GP and GA, with and without ensembling, is presented in the spam classification application domain.
Rajashree Dash, Rasmita Rautray and Rasmita Dash
Abstract
For the last few decades, Artificial Neural Networks have been a centre of attraction for a large number of researchers solving diversified problem domains. Due to distinguishing features such as generalization ability, robustness and a strong ability to tackle nonlinear problems, they have become particularly popular in financial time series modeling and prediction. In this paper, a Pi-Sigma Neural Network is designed for forecasting future currency exchange rates over different prediction horizons. The unknown parameters of the network are estimated by a hybrid learning algorithm termed Shuffled Differential Evolution (SDE). The main motivation of this study is to integrate the partitioning and random shuffling scheme of the Shuffled Frog Leaping algorithm with the evolutionary steps of a Differential Evolution technique to obtain an optimal solution with an accelerated convergence rate. The efficiency of the proposed predictor model is demonstrated by predicting the exchange rate of the US dollar against the Swiss Franc (CHF) and the Japanese Yen (JPY) over the same period of time.
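The paper's exact network configuration is not given; as an illustrative sketch, a Pi-Sigma network computes a product of linear summing units, and it is this structure's weights that SDE would tune (shapes and values below are hypothetical):

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-Sigma forward pass: the product of K linear (sigma) units.

    W has shape (K, n_inputs) and b shape (K,); the single output is the
    product of the K summing units (a squashing function may follow).
    """
    sums = W @ x + b        # K linear combinations of the inputs
    return np.prod(sums)    # the product (pi) unit

x = np.array([0.5, -1.0])                 # e.g. two lagged exchange rates
W = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([0.5, 3.0])
out = pi_sigma_forward(x, W, b)           # (0.5 + 0.5) * (-2.0 + 3.0) = 1.0
```

Because the product makes the output non-convex in the weights, a population-based optimizer such as SDE is a natural fit for training it.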
Abstract
Purpose
This paper aims to contribute to the formulation of a theory of consciousness based only on computational processes. In this manner, sound computational explanations of qualia and the “hard problem” of consciousness are provided in response to a lack of physical, chemical and psychological explanations.
Design/methodology/approach
The study analyses the little that can be objectively known about qualia and proposes a process that imitates the same effects. It then applies the process to a robot (in a thought experiment) to determine whether this would produce the same sensations that humans experience.
Findings
A computational explanation of qualia and the “hard problem” of consciousness is possible through computational processes.
Research limitations/implications
This is a proposal, subject to argumentation and proof. It is a falsifiable theory, meaning that it is possible to test or reject it, as its computational basis allows for a future implementation.
Practical implications
Subjective feeling emerges as an evolutionary by-product when there are no strong evolutionary pressures on the brain. Qualia do not involve magic. These aspects of consciousness in robots and in organisations are capable of being manufactured; one can choose whether to build robots and organisations with qualia and subjective experience.
Originality/value
To the best of the author’s knowledge, no other computational interpretation of these aspects of consciousness exists. However, it is compatible with the multiple draft model of Dennett (1991) and the attention schema theory of Webb and Graziano (2015).
Aleena Swetapadma, Tishya Manna and Maryam Samami
Abstract
Purpose
A novel method is proposed to reduce the false alarm rate for arrhythmia patients with life-threatening conditions in the intensive care unit. For this purpose, the arterial blood pressure, photoplethysmogram (PLETH), electrocardiogram (ECG) and respiratory (RESP) signals are considered as input signals.
Design/methodology/approach
Three machine learning approaches, a feed-forward artificial neural network (ANN), an ensemble learning method and k-nearest neighbours (k-NN) search, are used to detect false alarms. The proposed method has been implemented using Arduino and MATLAB/SIMULINK for real-time monitoring data of ICU arrhythmia patients.
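The abstract does not specify the classifiers' configurations; as an illustrative sketch of the k-NN component alone, a majority vote over the nearest training samples might look like this (features and labels are hypothetical):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training samples.

    Rows of X_train stand in for feature vectors extracted from the ABP,
    PLETH, ECG and RESP channels; y_train is 1 for a true alarm, 0 for false.
    """
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
    return int(np.round(nearest.mean()))      # majority vote

X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
y_train = np.array([0, 0, 1, 1])
pred = knn_predict(X_train, y_train, np.array([0.9, 1.0]), k=3)
```

In the paper's setting the ANN and ensemble learners would vote alongside such a k-NN decision before an alarm is suppressed as false.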
Findings
The proposed method detects false alarms with an accuracy of 99.4 per cent during asystole, 100 per cent during ventricular flutter, 98.5 per cent during ventricular tachycardia, 99.6 per cent during bradycardia and 100 per cent during tachycardia. The proposed framework is adaptive to many scenarios, easy to implement, computationally friendly, and highly accurate and robust without overfitting issues.
Originality/value
As ECG signals consist of the PQRST wave, any deviation from the normal pattern may signify an alarming condition. These deviations can be used directly as input to classifiers for the detection of false alarms; hence, there is no need for other feature extraction techniques. The feed-forward ANN with the Levenberg–Marquardt algorithm has shown a higher rate of convergence than other neural network algorithms, which helps provide better accuracy with no overfitting.
Abstract
Purpose
The purpose of this paper is to introduce the work of Mark Coeckelbergh into the field of management.
Design/methodology/approach
This is a conceptual paper with interviews.
Findings
The author suggests that Coeckelbergh's considerations of an anthropology of vulnerability have the potential to provide a rich and insightful exploration of the machine-human interface, which is not afforded by many of the current approaches in this field. His development of an anthropology of vulnerability suggests an approach that re-frames the machine-human interface in terms of human vulnerability rather than machine performance, and holds that this interface can be understood in terms of the transfer of human vulnerability.
Research limitations/implications
This paper reveals some of the possibilities inherent in Coeckelbergh’s theories by providing an analysis of a specific event, the recent introduction of robo-advisors in portfolio management, from a Coeckelberghian perspective and by exploring some of the implications of this type of approach for the machine-human interface.
Originality/value
As far as the author knows, there is no previous paper on this topic.
Sara Gusmao Brissi, Oscar Wong Chong, Luciana Debs and Jiansong Zhang
Abstract
Purpose
The purpose is two-fold: (1) to explore the interactions of robotic systems and lean construction in the context of offsite construction (OC) that were addressed in the literature published between 2008 and 2019 and (2) to identify the gaps in such interactions while discussing how addressing those gaps can benefit not only OC but the architecture, engineering and construction (AEC) industry as a whole.
Design/methodology/approach
First, a systematic literature review (SLR) identified journal papers addressing the interactions of automation and lean in OC. Then, the researchers focused the analysis on the under-researched subtopic of robotic systems. The focused analysis includes discussing the interactions identified in the SLR through a matrix of interactions and utilizing literature beyond the previously identified articles for future research directions on robotic systems and lean construction in OC.
Findings
The study found 35 journal papers that addressed automation and lean in the context of OC. Most of the identified literature focused on the interactions of building information modeling (BIM) and lean construction, while only nine papers focused on the interactions of robotic systems and lean construction. The literature related to robotic systems mainly addressed robots and automated equipment. Additional interactions were identified in the realm of wearable devices, unmanned aerial vehicles/automated guided vehicles and digital fabrication/computer numerical control (CNC) machines.
Originality/value
This is one of the first studies dedicated to exploring the interactions of robotic systems and lean construction in OC. Also, it proposes a categorization for construction automation and a matrix of interactions between construction automation and lean construction.
Nima Gerami Seresht, Rodolfo Lourenzutti, Ahmad Salah and Aminah Robinson Fayek
Abstract
Due to the increasing size and complexity of construction projects, construction engineering and management involves the coordination of many complex and dynamic processes and relies on the analysis of uncertain, imprecise and incomplete information, including subjective and linguistically expressed information. Various modelling and computing techniques have been used by construction researchers and applied to practical construction problems in order to overcome these challenges, including fuzzy hybrid techniques. Fuzzy hybrid techniques combine the human-like reasoning capabilities of fuzzy logic with the capabilities of other techniques, such as optimization, machine learning, multi-criteria decision-making (MCDM) and simulation, to capitalise on their strengths and overcome their limitations. Based on a review of construction literature, this chapter identifies the most common types of fuzzy hybrid techniques applied to construction problems and reviews selected papers in each category of fuzzy hybrid technique to illustrate their capabilities for addressing construction challenges. Finally, this chapter discusses areas for future development of fuzzy hybrid techniques that will increase their capabilities for solving construction-related problems. The contributions of this chapter are threefold: (1) the limitations of some standard techniques for solving construction problems are discussed, as are the ways that fuzzy methods have been hybridized with these techniques in order to address their limitations; (2) a review of existing applications of fuzzy hybrid techniques in construction is provided in order to illustrate the capabilities of these techniques for solving a variety of construction problems and (3) potential improvements in each category of fuzzy hybrid technique in construction are provided, as areas for future research.
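The chapter surveys fuzzy hybrid techniques without fixing a particular formulation; as a minimal illustration of the fuzzy-logic ingredient they all share, a triangular membership function maps a crisp measurement to a degree of truth that a hybrid method would then feed to an optimizer or classifier (all values below are hypothetical):

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a 12-day activity duration counts as "long", where "long"
# is modelled by the triangle (8, 15, 20); the answer is graded, not binary.
mu = tri_membership(12.0, 8.0, 15.0, 20.0)
```

Hybridization then means passing such graded degrees, rather than crisp numbers, into the optimization, machine learning, MCDM or simulation technique being combined with fuzzy logic.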
Walter Bataglia and Dimária Silva E. Meirelles
Abstract
The purpose of this paper is to identify complementarities between the approaches of population ecology and evolutionary economics in order to contribute to a synthesis of organizational evolutionary dynamics and its implications for a strategic management research model. Using the metatriangulation technique to construct theories, we attempt to entwine these two perspectives. The proposed model is structured in two dimensions: the environmental selective system and the corporate adaptation process. The environmental selective system gathers together the complementary factors presented by evolutionary economics and ecology: technological innovation, demographic processes, environmental dynamism, population density and other institutional processes, and interpopulation dynamics. As ecology does not encompass the corporate adaptation process (generation, selection, and propagation of variations), the proposed model adopts the theoretical grounds underpinning evolutionary economics. The model offers three main contributions for future research into strategic management. First, it allows the development of descriptive and normative studies of the relationship among the environmental selection factors and the different types of enterprise strategies. Second, the proposed conceptual framework may be very beneficial for studies of interorganizational learning. Third, the model has the advantage of responding to the criticism of strategy theories in terms of their inability to generalize.