Search results
1 – 10 of over 1,000
Abstract
Purpose
In this research, the authors demonstrate the advantage of reinforcement learning (RL) based intrusion detection systems (IDS) in solving very complex problems (e.g. selecting input features under scarce resources and constraints) that cannot be solved by classical machine learning. The authors include a comparative study of intrusion detection built on statistical machine learning and representational learning, using the knowledge discovery in databases (KDD) Cup 99 and Information Security Centre of Excellence (ISCX) 2012 datasets.
Design/methodology/approach
The methodology applies a data analytics approach, consisting of data exploration and machine learning model training and evaluation. To build a network-based intrusion detection system, the authors apply a dueling double deep Q-network architecture enabled with costly features, as well as k-nearest neighbors (K-NN), support-vector machines (SVM) and convolutional neural networks (CNN).
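The dueling aggregation at the heart of this architecture can be sketched in a few lines. The snippet below is an illustrative numpy stand-in, not the authors' model: the feature vector and head weights are random placeholders, the costly-features mechanism is omitted, and the four actions are an assumed toy action space.

```python
import numpy as np

def dueling_q_values(phi, w_v, w_a):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps the value and advantage
    streams identifiable."""
    v = phi @ w_v                           # state-value stream, shape (batch, 1)
    a = phi @ w_a                           # advantage stream, shape (batch, n_actions)
    return v + a - a.mean(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
phi = rng.normal(size=(1, 8))               # toy state features for one observation
w_v = rng.normal(size=(8, 1))               # stand-in weights for the value head
w_a = rng.normal(size=(8, 4))               # stand-in weights for 4 hypothetical actions
q = dueling_q_values(phi, w_v, w_a)         # one Q-value per action
```

In a real dueling DQN both streams sit on top of a shared trained network; here the linear maps merely show how the two heads are recombined.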
Findings
Machine learning-based intrusion detection systems are trained on historical datasets, which leads to model drift and a lack of generalization, whereas RL is trained on data collected through interactions. RL learns from its interactions with a stochastic environment in the absence of a training dataset, whereas supervised learning simply learns from collected data and requires fewer computational resources.
Research limitations/implications
All machine learning models achieved high accuracy values and performance. One potential reason is that both datasets are simulated rather than realistic. It was not clear whether a validation was ever performed to show that the data were collected from real network traffic.
Practical implications
The study provides guidelines to implement IDS with classical supervised learning, deep learning and RL.
Originality/value
The research applied the dueling double deep Q-network architecture enabled with costly features to build network-based intrusion detection from network traffic. This research presents a comparative study of reinforcement learning-based intrusion detection with counterparts built with statistical and representational machine learning.
Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv
Abstract
Purpose
The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.
Design/methodology/approach
Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.
Findings
Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on "data quality diagnosis" and "index classification and stratification", clarifying the classification standards and data quality characteristics of index data. A data-quality diagnosis system of "data review – data cleaning – data conversion – data inspection" is established. Using decision trees, interpretive structural models, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and stratification of the index system can be realized.
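As an illustration of the stratification step, the sketch below clusters synthetic indicator profiles with a plain K-means implementation. The data, the deterministic farthest-point seeding and the choice of two strata are assumptions for the example, not details from the paper.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start from the first indicator, then repeatedly
    pick the indicator farthest from the centres chosen so far."""
    idx = [0]
    for _ in range(k - 1):
        d = ((X[:, None, :] - X[idx]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d)))
    return X[idx].astype(float).copy()

def kmeans(X, k, iters=20):
    """Plain K-means: assign each indicator profile to its nearest centre,
    then move each centre to the mean of its members."""
    centers = farthest_point_init(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two synthetic groups of indicator profiles (e.g. correlation signatures).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 3)), rng.normal(3.0, 0.1, (5, 3))])
labels = kmeans(X, k=2)
```

In practice the number of strata would come from the paper's hierarchical analysis (e.g. an interpretive structural model) rather than being fixed in advance.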
Originality/value
The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed, based on systemic thinking about the whole and the individual. To ensure the accuracy, reliability and feasibility of the imputed data, a quality-inspection method for imputed data based on inversion thinking and a unified representation of data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.
David Bogataj, Valerija Rogelj, Marija Bogataj and Eneja Drobež
Abstract
Purpose
The purpose of this study is to develop a new type of reverse mortgage contract. How to provide adequate services and housing for an increasing number of people who depend on the help of others is a crucial question in the European Union (EU). The housing stock in Europe is not fit to support a shift from institutional care to home-based independent living. Some 90% of houses in the UK and 70%–80% in Germany are not adequately built, as they contain accessibility barriers for people with emerging functional impairments. The available reverse mortgage contracts do not allow homeowners to relocate to their own adapted facilities. How to finance the adaptation from housing equity is discussed.
Design/methodology/approach
The authors have extended the existing loan reverse mortgage model. Actuarial methods based on the equivalence of the actuarial present values and the multiple decrement approach are used to evaluate premiums for flexible longevity and lifetime long-term care (LTC) insurance for financing adequate facilities.
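The equivalence principle underlying this premium calculation sets the actuarial present value (APV) of premiums equal to the APV of benefits. The sketch below illustrates it for a single-benefit, two-decrement toy model; all rates, the interest assumption and the five-year horizon are invented for illustration and are not the paper's figures.

```python
import numpy as np

# Hypothetical yearly decrement rates (illustrative only, not the paper's data).
q_death = np.array([0.02, 0.03, 0.04, 0.06, 0.09])  # prob. of dying during year t
q_care  = np.array([0.03, 0.04, 0.06, 0.08, 0.10])  # prob. of entering LTC during year t
v = 1 / 1.03                                        # discount factor at an assumed 3% rate

# Probability of still being "active" (alive, not in care) at the start of each year.
p_active = np.concatenate([[1.0], np.cumprod(1 - q_death - q_care)])[:-1]

# APV of a unit benefit of 100 paid at the end of the year of entry into care,
# and APV of a level premium of 1 paid at the start of each active year.
apv_benefit = float(np.sum(p_active * q_care * v ** np.arange(1, 6))) * 100
apv_annuity = float(np.sum(p_active * v ** np.arange(5)))

# Equivalence principle: APV of premiums = APV of benefits.
P = apv_benefit / apv_annuity
```

The paper's model additionally embeds longevity insurance and multiple care categories, which would add further decrements and benefit streams to the same equivalence equation.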
Findings
Adequate, age-friendly housing that supports the independence and autonomy of seniors with declining functional capacities can lower the cost of health care and improve the well-being of older adults. To finance the development of such facilities for seniors, the authors developed a reverse mortgage scheme with embedded longevity and LTC insurance as a possible financial instrument for better LTC services and housing with care in assisted-living facilities. Such facilities should be available for the rapidly growing older cohorts.
Research limitations/implications
The numerical example is based on rather crude numbers because of a lack of data, as the developed reverse mortgage product with LTC insurance is a novelty. The intensity of care and the probabilities of care in certain categories of care will change after the introduction of this product.
Practical implications
The model results indicate that it is possible to successfully tie an insurance product to the insured and not to the object.
Social implications
The introduction of this insurance option will allow many older adults with low pension benefits and substantial home equity to safely opt for a reverse mortgage and benefit from better social care.
Originality/value
While currently available reverse mortgage contracts lapse when the homeowner moves to an assisted-living facility in any EU Member State, the paper develops a new method under which multiple adjustments of housing to the homeowner's functional capacities, including relocation, are possible under the same insurance and reverse mortgage contract. The case of Slovenia is presented as a numerical example. As a novelty, these insurance products are portable, so the homeowner can move into their own specialised housing unit in an assisted-living facility and keep the existing reverse mortgage contract at no additional cost, which is not possible with current insurance products. With some small modifications, the method is useful for any EU Member State.
Weifei Hu, Tongzhou Zhang, Xiaoyu Deng, Zhenyu Liu and Jianrong Tan
Abstract
Digital twin (DT) is an emerging technology that enables sophisticated interaction between physical objects and their virtual replicas. Although DT has recently gained significant attention in both industry and academia, there is no systematic understanding of DT from its development history to its different concepts and applications in disparate disciplines. The majority of DT literature focuses on the conceptual development of DT frameworks for a specific implementation area. Hence, this paper provides a state-of-the-art review of DT history, different definitions and models, and six types of key enabling technologies. The review also provides a comprehensive survey of DT applications from two perspectives: (1) applications in four product-lifecycle phases, i.e. product design, manufacturing, operation and maintenance, and recycling and (2) applications in four categorized engineering fields, including aerospace engineering, tunneling and underground engineering, wind engineering and Internet of things (IoT) applications. DT frameworks, characteristic components, key technologies and specific applications are extracted for each DT category in this paper. A comprehensive survey of the DT references reveals the following findings: (1) the majority of existing DT models only involve one-way data transfer from physical entities to virtual models and (2) there is a lack of consideration of environmental coupling, which results in inaccurate representation of the virtual components in existing DT models. Thus, this paper highlights the role of environmental factors in DT enabling technologies and in categorized engineering applications. In addition, the review discusses the key challenges and outlines future work for constructing DTs of complex engineering systems.
Annye Braca and Pierpaolo Dondio
Abstract
Purpose
Prediction is a critical task in targeted online advertising, where predictions better than random guessing can translate to real economic return. This study aims to use machine learning (ML) methods to identify individuals who respond well to certain linguistic styles/persuasion techniques based on Aristotle’s means of persuasion, rhetorical devices, cognitive theories and Cialdini’s principles, given their psychometric profile.
Design/methodology/approach
A total of 1,022 individuals took part in the survey; participants were asked to fill out the ten-item personality measure questionnaire to capture personality traits and the dysfunctional attitude scale (DAS) to measure dysfunctional beliefs and cognitive vulnerabilities. ML classification models using participant profiling information as input were developed to predict the extent to which an individual was influenced by statements that contained different linguistic styles/persuasion techniques. Several ML algorithms were used, including support vector machines, LightGBM and Auto-Sklearn, to predict the effect of each technique given each individual's profile (personality, belief system and demographic data).
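As a minimal illustration of this profile-to-response prediction setup, the sketch below trains a logistic regression (a simple stand-in for the SVM, LightGBM and Auto-Sklearn models the study uses) on synthetic profile data; the features and labels are fabricated for the example.

```python
import numpy as np

def sigmoid(z):
    # Clipping avoids overflow once the data become well separated.
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

def train_logreg(X, y, lr=0.5, steps=2000):
    """Gradient-descent logistic regression mapping a profile vector to the
    probability of being influenced by a given persuasion technique."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * float((p - y).mean())
    return w, b

# Fabricated profiles: three features; one synthetic trait drives the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)              # synthetic "influenced" label
w, b = train_logreg(X, y)
accuracy = float(((sigmoid(X @ w + b) > 0.5) == (y == 1)).mean())
```

Real profile data are far noisier than this separable toy set, which is why the study's accuracies land in the 53%–70% range rather than near 100%.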
Findings
The findings highlight the importance of incorporating emotion-based variables as model input in predicting the influence of textual statements with embedded persuasion techniques. Across all investigated models, the influence effect could be predicted with an accuracy ranging from 53% to 70%, indicating the importance of testing multiple ML algorithms in the development of a persuasive communication (PC) system. The classification ability of models was highest when predicting the response to statements using rhetorical devices and flattery persuasion techniques. By contrast, techniques such as authority or social proof were less predictable. Adding DAS scale features improved model performance, suggesting they may be important in modelling persuasion.
Research limitations/implications
In this study, the survey was limited to English-speaking countries and largely Western society values. More work is needed to ascertain the efficacy of models for other populations, cultures and languages. Most PC efforts are targeted at groups such as users, clients, shoppers and voters, whereas this study was set in the communication context of education; further research is required to explore the capability of predictive ML models in other contexts. Finally, long self-reported psychological questionnaires may not be suitable for real-world deployment and could be subject to bias, so a simpler method needs to be devised to gather user profile data, such as using a subset of the most predictive features.
Practical implications
The findings of this study indicate that leveraging richer profiling data in conjunction with ML approaches may assist in the development of enhanced persuasive systems. There are many applications such as online apps, digital advertising, recommendation systems, chatbots and e-commerce platforms which can benefit from integrating persuasion communication systems that tailor messaging to the individual – potentially translating into higher economic returns.
Originality/value
This study integrates sets of features that have heretofore not been used together in developing ML-based predictive models of PC. DAS scale data, which relate to dysfunctional beliefs and cognitive vulnerabilities, were assessed for their importance in identifying effective persuasion techniques. Additionally, the work compares a range of persuasion techniques that thus far have only been studied separately. This study also demonstrates the application of various ML methods in predicting the influence of linguistic styles/persuasion techniques within textual statements and shows that a robust methodology comparing a range of ML algorithms is important in the discovery of a performant model.
Manuel Rossetti, Juliana Bright, Andrew Freeman, Anna Lee and Anthony Parrish
Abstract
Purpose
This paper is motivated by the need to assess the risk profiles of the substantial number of items within military supply chains. The scale of supply chain management processes creates difficulties both in the complexity of the analysis and in performing risk assessments based on manual (human analyst) assessment methods. Thus, analysts require methods that can be automated and that can incorporate ongoing operational data on a regular basis.
Design/methodology/approach
The approach taken to address the identification of supply chain risk within an operational setting is based on aspects of multiobjective decision analysis (MODA). The approach constructs a risk and importance index for supply chain elements based on operational data. These indices are commensurate in value, leading to interpretable measures for decision-making.
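A common MODA construction of such commensurate indices is an additive value model over min-max-normalised attributes. The sketch below is a generic illustration under assumed item attributes and swing weights, not the authors' model.

```python
import numpy as np

# Hypothetical item attributes (rows = items): lead time in days, demand
# criticality score, and number of alternate suppliers (more = less risky).
raw = np.array([
    [30.0, 0.9, 1],
    [ 5.0, 0.2, 6],
    [14.0, 0.6, 3],
])

def value(col, higher_is_riskier=True):
    """Min-max normalise an attribute to [0, 1] so scores are commensurate
    across attributes measured in different units."""
    v = (col - col.min()) / (col.max() - col.min())
    return v if higher_is_riskier else 1 - v

weights = np.array([0.4, 0.4, 0.2])   # assumed swing weights, summing to 1

scores = np.column_stack([
    value(raw[:, 0]),                             # longer lead time -> riskier
    value(raw[:, 1]),                             # higher criticality -> riskier
    value(raw[:, 2], higher_is_riskier=False),    # fewer suppliers -> riskier
])
risk_index = scores @ weights          # additive multiattribute value per item
riskiest_item = int(risk_index.argmax())
```

Because each attribute is scaled to [0, 1] before weighting, the resulting indices are directly comparable across items, which is the property the paper emphasises for decision-making.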
Findings
Risk and importance indices were developed for the analysis of items within an example supply chain. Using the data on items, individual MODA models were formed and demonstrated using a prototype tool.
Originality/value
To better prepare risk mitigation strategies, analysts require the ability to identify potential sources of risk, especially in times of disruption such as natural disasters.
Armando Di Meglio, Nicola Massarotti and Perumal Nithiarasu
Abstract
Purpose
In this study, the authors propose a novel digital twinning approach specifically designed for controlling transient thermal systems. The purpose of this study is to harness the combined power of deep learning (DL) and physics-based methods (PBM) to create an active virtual replica of the physical system.
Design/methodology/approach
To achieve this goal, the authors introduce a deep neural network (DNN) as the digital twin and a finite element (FE) model as the physical system. This integrated approach is used to address the challenges of controlling an unsteady heat transfer problem with an integrated feedback loop.
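The feedback loop can be illustrated with a lumped-capacitance stand-in for the FE model and a closed-form stand-in for the trained DNN. Everything here (parameters, flux profile, and the steady-state rule playing the role of the "twin") is an assumption made for the sketch, not the paper's setup.

```python
import numpy as np

# Lumped-capacitance surrogate for the FE model: one body with heat capacity C
# receives a time-varying flux q(t) and loses heat h * A * (T - T_inf).
C, A, T_inf, T_max = 500.0, 0.1, 300.0, 350.0   # assumed parameters (SI-like units)
dt, steps = 1.0, 600

def twin_h(q):
    """Stand-in for the trained DNN: invert the steady-state balance
    q = h * A * (T_max - T_inf) to get the heat transfer coefficient
    that would hold the temperature at the threshold."""
    return q / (A * (T_max - T_inf))

T = T_inf
history = []
for t in range(steps):
    q = 20.0 + 15.0 * np.sin(2 * np.pi * t / 200)   # unsteady heat flux (always > 0)
    h = twin_h(q)                                    # feedback: twin picks the coefficient
    T += dt / C * (q - h * A * (T - T_inf))          # explicit Euler step of the model
    history.append(T)
peak = max(history)                                  # should stay below T_max
```

Because the coefficient is chosen so that the equilibrium temperature is always exactly the threshold, the temperature approaches T_max from below under any positive flux, which mirrors the paper's finding that steady-state-trained coefficients can regulate an unsteady problem.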
Findings
The results of our study demonstrate the effectiveness of the proposed digital twinning approach in regulating the maximum temperature within the system under varying and unsteady heat flux conditions. The DNN, trained on stationary data, plays a crucial role in determining the heat transfer coefficients necessary to maintain temperatures below a defined threshold value, such as the material’s melting point. The system is successfully controlled in 1D, 2D and 3D case studies. However, careful evaluations should be conducted if such a training approach, based on steady-state data, is applied to completely different transient heat transfer problems.
Originality/value
The present work represents one of the first examples of a comprehensive, data-driven digital twinning approach to transient thermal systems. One of the noteworthy features of this approach is its robustness: by adopting training based on dimensionless data, the approach can seamlessly accommodate changes in thermal capacity and thermal conductivity without the need for retraining.
Mohammed Y. Fattah, Mahmood R. Mahmood and Mohammed F. Aswad
Abstract
Purpose
The main objective of the present research is to investigate the benefits of using geogrid reinforcement in minimizing the rate of deterioration of ballasted rail track geometry resting on soft clay, and to explore the effect of load amplitude, load frequency, the presence of a geogrid layer in the ballast layer and ballast layer thickness on the behavior of the track system. These variables are studied both experimentally and numerically. This paper examines the effect of geogrid-reinforced ballast laid on a clayey subgrade layer; half-full-scale railway tests are conducted, and a theoretical analysis is performed.
Design/methodology/approach
The experimental work consists of laboratory model tests to investigate the reduction in compressibility and the stress distribution induced in soft clay under a geogrid-reinforced ballast railway subjected to dynamic load. An experimental model based on an approximate half scale of the rail track used in Iraqi railways is adopted in this study. The investigated parameters are load amplitude, load frequency and the presence of a geogrid reinforcement layer. A half-full-scale railway was constructed for carrying out the tests, consisting of two rails 800 mm in length with three wooden sleepers (900 mm × 90 mm × 90 mm). The ballast overlaid a 500 mm thick clay layer. The tests were carried out with and without geogrid reinforcement in a well-tied steel box of 1.5 m length × 1 m width × 1 m height. A series of laboratory tests was conducted to investigate the response of the ballast and clay layers when the ballast was reinforced by a geogrid. Settlement in the ballast and clay was measured in the reinforced and unreinforced ballast cases. In addition to the laboratory tests, numerical analysis was performed using the finite element program PLAXIS 3D 2013.
Findings
It was concluded that the settlement increased with increasing simulated train load amplitude. There is a sharp increase in settlement up to cycle 500, followed by a gradual increase that levels out between 2,500 and 4,500 cycles, depending on the load frequency. There is a small increase in the induced settlement when the load amplitude is increased from 0.5 to 1 ton, but a larger one when the load amplitude is increased to 2 tons; the increase in settlement depends on the presence of the geogrid and the other studied parameters. Both experimental and numerical results showed the same behavior. The effect of load frequency on the settlement ratio is almost constant after 500 cycles. In general, for the reinforced cases, the effect of load frequency on the settlement ratio is very small, ranging between 0.5% and 2% compared with the unreinforced case.
Originality/value
Increasing the ballast layer thickness from 20 cm to 30 cm decreases the settlement by about 50%. This confirms the efficiency of ballast in spreading the waves induced by the track.
Abstract
Purpose
What is the relationship between the land system with Chinese characteristics and the country's high-speed economic growth in the past decades? There is a lack of rigorous academic research on this issue based on the general equilibrium theory of macroeconomics.
Design/methodology/approach
By building a multisector dynamic general equilibrium framework with a land system, this paper explores how the land supply mode with Chinese characteristics affects China's economic growth, as well as its transmission mechanism.
Findings
This paper confirms the importance of the land system with Chinese characteristics in explaining the mystery of China's high-speed economic growth. Counterfactual analysis shows that if China adopted a land system similar to that of other developing countries, GDP would drop 36% from its current level under the baseline model.
Originality/value
As the industrial sector shrinks relatively and the output elasticity of infrastructure decreases, this inhibitory effect will become more apparent. China should improve its land supply mode, especially by expanding the supply of commercial and residential land and reducing the cost of land in the service sector. This can promote better economic development in the future, improve household welfare and the structure of aggregate demand, replace "land-based public finance" and thus inhibit the "high leverage" risks of local governments.
Piergiorgio Alotto, Paolo Di Barba, Alessandro Formisano, Gabriele Maria Lozito, Raffaele Martone, Maria Evelina Mognaschi, Maurizio Repetto, Alessandro Salvini and Antonio Savini
Abstract
Purpose
Inverse problems in electromagnetism, namely, the recovery of sources (currents or charges) or system data from measured effects, are usually ill-posed or, in the numerical formulation, ill-conditioned, and require suitable regularization to provide meaningful results. To test new regularization methods, benchmark problems whose numerical properties and solutions are well known are needed. Hence, this study aims to define a benchmark problem suitable for testing new regularization approaches and to solve it with different methods.
Design/methodology/approach
To assess the reliability and performance of different solution strategies for inverse source problems, a benchmark problem of current synthesis is defined and solved by means of several regularization methods in a comparative way; subsequently, an approach based on an artificial neural network (ANN) is considered as a viable alternative to classical regularization schemes. The solution of the underlying forward problem is based on finite element analysis.
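A classical regularization scheme of the kind compared here is Tikhonov regularization. The sketch below shows it on a deliberately ill-conditioned synthetic least-squares problem (a random matrix with near-collinear columns), which is an invented stand-in for the lead field matrix, not the benchmark itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=40)   # near-collinear columns -> ill-conditioned
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=40)       # noisy "measured effects"

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2,
    with the closed-form solution x = (A^T A + lam * I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Unregularized solution blows up along the nearly-null direction;
# the regularized one stays bounded.
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = tikhonov(A, b, lam=1e-2)
naive_norm = float(np.linalg.norm(x_naive))
reg_norm = float(np.linalg.norm(x_reg))
```

Choosing the penalty weight (here an arbitrary 1e-2) is itself part of the regularization problem; methods such as the L-curve or cross-validation are typically compared on benchmarks like the one the paper proposes.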
Findings
The paper provides a very detailed analysis of the proposed inverse problem in terms of numerical properties of the lead field matrix. The solutions found by different regularization approaches and an ANN method are provided, showing the performance of the applied methods and the numerical issues of the benchmark problem.
Originality/value
The value of the paper is to provide the numerical characteristics and issues of the proposed benchmark problem in a comprehensive way, by means of a wide variety of regularization methods and an ANN approach.