Search results
1 – 10 of 445
Hussein Y.H. Alnajjar and Osman Üçüncü
Abstract
Purpose
Artificial intelligence (AI) models are increasingly demonstrating that they can provide long-term solutions for improving wastewater treatment efficiency. Artificial neural networks (ANNs) are among the most important of these models, and they are increasingly being used to forecast water resource variables. The goal of this study was to create an ANN model to estimate the removal efficiency of biological oxygen demand (BOD), total nitrogen (TN), total phosphorus (TP) and total suspended solids (TSS) at the effluent of various primary and secondary treatment methods in a wastewater treatment plant (WWTP).
Design/methodology/approach
The MATLAB App Designer model was used to generate the data set. Various combinations of wastewater quality data, such as temperature (T), TN, TP and hydraulic retention time (HRT), are used as inputs to the ANN to assess the degree of effect of each of these variables on BOD, TN, TP and TSS removal efficiency. Two of the models reflect two different types of primary treatment, while the other nine models represent different types of subsequent treatment. The ANN model's findings are compared to those of the MATLAB App Designer model. For evaluating model performance, the mean square error (MSE) and the coefficient of determination (R²) are used as comparative metrics.
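The forward pass of such a feedforward ANN can be sketched in a few lines. The sketch below is illustrative only: the layer sizes, weights and scaled input values are placeholder assumptions, not the trained parameters of the study's model, which mapped temperature, TN, TP and HRT to the four removal efficiencies.

```python
import math

def forward(x, W1, b1, W2, b2):
    """One hidden-layer feedforward pass: tanh hidden units, linear outputs."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# Inputs: T, TN, TP, HRT (scaled to [0, 1]); outputs: removal efficiencies
# for BOD, TN, TP, TSS. Weights here are placeholders, not trained values.
W1 = [[0.5, -0.2, 0.1, 0.8], [0.3, 0.7, -0.4, 0.2]]   # 2 hidden units
b1 = [0.0, 0.1]
W2 = [[0.9, 0.4]] * 4                                  # 4 outputs
b2 = [0.5] * 4
y = forward([0.6, 0.3, 0.2, 0.9], W1, b1, W2, b2)
```

In practice the weights are fitted by backpropagation; only the wiring of the forward pass is shown here.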
Findings
For both training and testing, the R values for the ANN models were greater than 0.99. Based on the comparisons, it was found that the ANN model can be used to estimate the removal efficiency of BOD, TN, TP and TSS in a WWTP and that it produces very similar and satisfactory results to the App Designer model. The correlation coefficient (R) of 0.9909 and the MSE of 5.962 indicate that the model is accurate. Because of the many benefits of the ANN models used in this study, they have great potential as a general modeling tool for a range of other complicated process systems that are difficult to solve using conventional modeling techniques.
Originality/value
The objective of this study was to develop an ANN model that could be used to estimate the removal efficiency of pollutants such as BOD, TN, TP and TSS at the effluent of various primary and secondary treatment methods in a WWTP. In the future, the ANN could be used to design a new WWTP and forecast the removal efficiency of pollutants.
J. Anke M. van Eekelen, Justine A. Ellis, Craig E. Pennell, Richard Saffery, Eugen Mattes, Jeff Craig and Craig A. Olsson
Abstract
Genetic risk for depressive disorders is poorly understood despite consistent suggestions of a high heritable component. Most genetic studies have focused on risk associated with single variants, a strategy which has so far only yielded small (often non-replicable) risks for depressive disorders. In this paper we argue that more substantial risks are likely to emerge from genetic variants acting in synergy within and across larger neurobiological systems (polygenic risk factors). We show how knowledge of major integrated neurobiological systems provides a robust basis for defining and testing theoretically defensible polygenic risk factors. We do this by describing the architecture of the overall stress response. Maladaptation via impaired stress responsiveness is central to the aetiology of depression and anxiety and provides a framework for a systems biology approach to candidate gene selection. We propose principles for identifying genes and gene networks within the neurosystems involved in the stress response and for defining polygenic risk factors based on the neurobiology of stress-related behaviour. We conclude that knowledge of the neurobiology of the stress response system is likely to play a central role in future efforts to improve genetic prediction of depression and related disorders.
Alberto Antonio Agudelo Aguirre, Néstor Darío Duque Méndez and Ricardo Alfredo Rojas Medina
Abstract
Purpose
This study aims to determine whether, by applying genetic algorithms (GA) to traditional technical analysis (TA) based on moving average convergence/divergence (MACD), it is possible to achieve higher yields than those obtained using TA investment strategies under a traditional approach and the buy-and-hold (B&H) strategy.
Design/methodology/approach
The study was carried out based on the daily price records of the NASDAQ financial asset during 2013–2017. The TA approach was carried out under graphical analysis applying the standard MACD. The GA approach involved chromosome encoding, fitness evaluation and genetic operators. Traditional genetic operators (i.e. crossover and mutation) were adopted, based on chromosome customization and fitness evaluation. In the chromosome-encoding stage, the parameters of the MACD were encoded as the genes of each chromosome. For each chromosome, the buy and sell indexes of the strategy were considered. Fitness evaluation served to define the evaluation strategy of the chromosomes in the population according to the fitness function, using the returns gained by each chromosome.
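The encoding and operators described above can be sketched as follows. Everything here is a toy assumption: each chromosome holds the three MACD parameters (fast period, slow period, signal period), and the fitness function is a stand-in for the back-tested strategy return that the study computes from MACD buy/sell signals.

```python
import random

random.seed(42)

def fitness(chrom):
    # Stand-in for a back-tested return: in the study this would be the
    # realized return of the strategy using MACD(fast, slow, signal).
    fast, slow, signal = chrom
    if fast >= slow:                      # invalid MACD configuration
        return float("-inf")
    return -(fast - 12) ** 2 - (slow - 26) ** 2 - (signal - 9) ** 2

def crossover(a, b):
    cut = random.randint(1, 2)            # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.2):
    return [g + random.choice([-1, 0, 1]) if random.random() < rate else g
            for g in chrom]

# Random initial population of (fast, slow, signal) chromosomes.
pop = [[random.randint(2, 20), random.randint(21, 40), random.randint(2, 15)]
       for _ in range(30)]
for _ in range(60):                       # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # elitist selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = max(pop, key=fitness)
```

The toy fitness peaks at the conventional MACD(12, 26, 9) setting purely for illustration; the study instead searches for whichever parameters maximize realized returns.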
Findings
The paper provides empirical and theoretical insights into the effectiveness of GA in outperforming investment strategies based on MACD and B&H, achieving 5% and 11% higher returns per year, respectively. The GA-based approach was additionally capable of improving the return-to-risk ratio of the investment.
Research limitations/implications
Limitations stem from the fact that the study was carried out under US market conditions and data, which hampers, to some extent, its application to less developed markets.
Practical implications
The findings suggest that not only skilled but also amateur investors may opt for investment strategies based on GA aiming at refining profitable financial signals to their advantage.
Originality/value
This paper looks at machine learning as an up-to-date tool with great potential for increasing profit effectiveness when applied to TA investment approaches using MACD in well-developed stock markets.
V. Chowdary Boppana and Fahraz Ali
Abstract
Purpose
This paper presents an experimental investigation in establishing the relationship between FDM process parameters and tensile strength of polycarbonate (PC) samples using the I-Optimal design.
Design/methodology/approach
The I-optimal design methodology is used to plan the experiments by means of Minitab 17.1 software. Samples are manufactured using a Stratasys FDM 400mc and tested as per ISO standards. Additionally, an artificial neural network (ANN) model was developed and compared to the regression model in order to select an appropriate model for optimisation. Finally, the genetic algorithm (GA) solver is executed to improve the tensile strength of FDM-built PC components.
Findings
This study demonstrates that the selected process parameters (raster angle, raster to raster air gap, build orientation about Y axis and the number of contours) had significant effect on tensile strength with raster angle being the most influential factor. Increasing the build orientation about Y axis produced specimens with compact structures that resulted in improved fracture resistance.
Research limitations/implications
The fitted regression model has a p-value less than 0.05 which suggests that the model terms significantly represent the tensile strength of PC samples. Further, from the normal probability plot it was found that the residuals follow a straight line, thus the developed model provides adequate predictions. Furthermore, from the validation runs, a close agreement between the predicted and actual values was seen along the reference line which further supports satisfactory model predictions.
Practical implications
This study successfully investigated the effects of the selected process parameters - raster angle, raster to raster air gap, build orientation about Y axis and the number of contours - on tensile strength of PC samples utilising the I-optimal design and ANOVA. In addition, for prediction of the part strength, regression and ANN models were developed. The selected ANN model was optimised using the GA-solver for determination of optimal parameter settings.
Originality/value
The proposed ANN-GA approach is more appropriate to establish the non-linear relationship between the selected process parameters and tensile strength. Further, the proposed ANN-GA methodology can assist in manufacture of various industrial products with Nylon, polyethylene terephthalate glycol (PETG) and PET as new 3DP materials.
Ahmed Mohammed, Qian Wang and Xiaodong Li
Abstract
Purpose
The purpose of this paper is to investigate the economic feasibility of a three-echelon Halal Meat Supply Chain (HMSC) network monitored by a proposed radio frequency identification (RFID)-based management system for enhancing the integrity traceability of Halal meat products. The objectives are to maximize the average integrity number of Halal meat products, maximize the return on investment (ROI), maximize the capacity utilization of facilities and minimize the total investment cost of the proposed RFID-monitoring system. The location-allocation problem of facilities also needs to be resolved in conjunction with the quantity flow of Halal meat products from farms to abattoirs and from abattoirs to retailers.
Design/methodology/approach
First, a deterministic multi-objective mixed integer linear programming model was developed and used to optimize the proposed RFID-based HMSC network toward a compromise solution based on the four conflicting objectives described above. Second, a stochastic programming model was developed and used to examine the impact on the number of Halal meat products of altering the value of the integrity percentage. The ε-constraint approach and the modified weighted sum approach were proposed for acquiring non-inferior solutions from the developed models. Furthermore, the Max-Min approach was used to select the best solution among them.
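The Max-Min selection step can be illustrated compactly: each objective is normalized to [0, 1] with its sense (maximize or minimize) taken into account, and the solution whose worst normalized objective is largest is chosen. The candidate solutions and objective values below are hypothetical, not taken from the case study.

```python
def max_min_select(solutions, objectives, senses):
    """Pick the solution whose worst normalized objective is best (Max-Min).

    solutions:  labels of the non-inferior candidate solutions
    objectives: one tuple of objective values per solution
    senses:     'max' or 'min' for each objective
    """
    cols = list(zip(*objectives))
    norm = []
    for vals, sense in zip(cols, senses):
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0                     # avoid divide-by-zero
        if sense == "max":
            norm.append([(v - lo) / span for v in vals])
        else:                                       # smaller is better
            norm.append([(hi - v) / span for v in vals])
    scores = [min(col[i] for col in norm) for i in range(len(solutions))]
    best = max(range(len(solutions)), key=scores.__getitem__)
    return solutions[best], scores[best]

# Hypothetical candidates scored on (integrity, ROI, utilization, cost).
cands = ["A", "B", "C"]
objs = [(0.90, 1.8, 0.70, 120), (0.80, 2.1, 0.85, 100), (0.95, 1.5, 0.60, 140)]
pick, score = max_min_select(cands, objs, ("max", "max", "max", "min"))
```

Here candidate A wins because it is never the worst on any normalized objective, which is exactly the balance a compromise solution seeks.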
Findings
The research outcome shows the applicability of the developed models using a real case study. Based on the computational results, a reasonable ROI can be achievable by implementing RFID into the HMSC network.
Research limitations/implications
This work points to interesting avenues for further research exploring the HMSC network design under different types of uncertainties and transportation means. Also, environmental concerns have become an increasingly significant global problem in the present century; thus, the presented model could be extended to include environmental aspects as an objective function.
Practical implications
The model can be utilized by food supply chain designers. It could also be applied to realistic problems in the field of supply chain management.
Originality/value
Although a few studies have focused on the configuration of HMSC networks, this area remains largely overlooked by researchers. The study shows that the developed methodology can be a useful tool for designers to determine a cost-effective design of food supply chain networks.
Zheng Xu, Yihai Fang, Nan Zheng and Hai L. Vu
Abstract
Purpose
With the aid of naturalistic simulations, this paper aims to investigate human behavior during manual and autonomous driving modes in complex scenarios.
Design/methodology/approach
The simulation environment is established by integrating a virtual reality interface with a micro-simulation model. In the simulation, the vehicle autonomy is developed by a framework that integrates artificial neural networks and genetic algorithms. Human-subject experiments are carried out, and participants are asked to virtually sit in the developed autonomous vehicle (AV), which allows for both human driving and autopilot functions within a mixed traffic environment.
Findings
Not surprisingly, an inconsistency is identified between the two driving modes, in which the AV's driving maneuvers cause cognitive bias and make participants feel unsafe. Even though the AV ended up in an accident in only a small portion of cases during the testing stage, participants still frequently intervened during AV operation. On a similar note, even though the statistical results reflect that the AV drives under perceived high-risk conditions, an actual crash rarely happens. This suggests that classic safety surrogate measurements, e.g. time-to-collision, may require adjustment for mixed traffic flow.
Research limitations/implications
Understanding the behavior of AVs and the behavioral differences between AVs and human drivers is important; the developed platform is only a first effort to identify the critical scenarios in which AVs might fail to react.
Practical implications
This paper attempts to fill the existing research gap in preparing close-to-reality tools for AV experience and further understanding human behavior during high-level autonomous driving.
Social implications
This work aims to systematically analyze the inconsistency in driving patterns between manual and autopilot modes in various driving scenarios (i.e. multiple scenes and various traffic conditions) to facilitate user acceptance of AV technology.
Originality/value
A close-to-reality tool for AV experience and AV-related behavioral study; a systematic analysis of the inconsistency in driving patterns between manual and autonomous driving; and a foundation for identifying the critical scenarios in which AVs might fail to react.
Abstract
Purpose
In the literature there are numerous tests that compare the accuracy of automated valuation models (AVMs). These models first train themselves with price data and property characteristics, then they are tested by measuring their ability to predict prices. Most of them compare the effectiveness of traditional econometric models against the use of machine learning algorithms. Although the latter seem to offer better performance, there is not yet a complete survey of the literature to confirm the hypothesis.
Design/methodology/approach
All tests comparing regression analysis and machine learning AVMs on the same data set have been identified. The scores obtained in terms of accuracy were then compared with each other.
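A minimal version of such an accuracy comparison, using the mean absolute percentage error (MAPE) that many AVM tests report, might look like the sketch below; the sale prices and both sets of predictions are invented for illustration.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, a common AVM accuracy score."""
    return 100 * sum(abs(a - p) / a
                     for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical observed sale prices and two models' predictions.
prices  = [250_000, 310_000, 180_000]
hedonic = [240_000, 330_000, 170_000]   # stand-in regression predictions
ml      = [248_000, 305_000, 183_000]   # stand-in machine learning predictions
```

On these made-up numbers the machine learning predictions score a lower MAPE than the regression ones, mirroring the comparison the review performs across published tests.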
Findings
Machine learning models are more accurate than traditional regression analysis in their ability to predict value. Nevertheless, many authors point to their black-box nature and poor inferential ability as their main limitations.
Practical implications
Machine learning AVMs offer a huge advantage to all real estate operators who know them and can use them. Their use in public policy or litigation can be critical.
Originality/value
According to the author, this is the first systematic review that collects all the articles produced on the subject and compares the results obtained.
Anirut Kantasa-ard, Tarik Chargui, Abdelghani Bekrar, Abdessamad AitElCadi and Yves Sallez
Abstract
Purpose
This paper proposes an approach to solve the vehicle routing problem with simultaneous pickup and delivery (VRPSPD) in the context of the Physical Internet (PI) supply chain. The main objective is to minimize the total distribution costs (transportation cost and holding cost) to supply retailers from PI hubs.
Design/methodology/approach
Mixed integer programming (MIP) is proposed to solve the problem in smaller instances. A random local search (RLS) algorithm and a simulated annealing (SA) metaheuristic are proposed to solve larger instances of the problem.
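A simulated annealing sketch for a simplified routing subproblem is shown below: it minimizes the length of a single tour using 2-opt reversals and a geometric cooling schedule. The coordinates, initial temperature and cooling rate are illustrative assumptions; the study's actual model additionally handles simultaneous pickups and deliveries, multiple vehicles and holding costs.

```python
import math
import random

random.seed(7)

def tour_cost(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, t0=10.0, cooling=0.995, steps=4000):
    tour = list(range(len(pts)))
    cost = tour_cost(tour, pts)
    best_tour, best_cost = tour[:], cost
    t = t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        c = tour_cost(cand, pts)
        # Accept improvements always, worsenings with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / t):
            tour, cost = cand, c
            if cost < best_cost:
                best_tour, best_cost = tour[:], cost
        t *= cooling                                           # cool down
    return best_tour, best_cost

# Toy delivery locations (e.g. retailers around a PI hub).
pts = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 3), (3, 1)]
tour, cost = anneal(pts)
```

The high starting temperature lets the search escape poor initial orderings; as it cools, acceptance narrows to improvements only, which is why SA tends to beat a plain local search on such problems.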
Findings
The results show that SA provides the best solution in terms of total distribution cost and gives good results for holding cost and transportation cost compared to the other heuristic methods. Moreover, in terms of total carbon emissions, the PI concept provides a better solution than the classical supply chain.
Research limitations/implications
The sustainability of the route construction applied to the PI is validated through carbon emissions.
Practical implications
This approach also relates to the main objectives of transportation in the PI context: reducing empty trips and sharing transportation resources between PI hubs and retailers. The proposed approaches are validated through a case study of agricultural products in Thailand.
Social implications
This approach is also relevant to reducing driving hours on the road, thanks to shared transportation and shorter distances than classical route planning.
Originality/value
This paper addresses the VRPSPD problem in the PI context, which is based on sharing transportation and storage resources while considering sustainability.
Himanshu Goel and Bhupender Kumar Som
Abstract
Purpose
This study aims to predict the Indian stock market (Nifty 50) by employing macroeconomic variables as input variables identified from the literature for two sub periods, i.e. the pre-coronavirus disease 2019 (COVID-19) (June 2011–February 2020) and during the COVID-19 (March 2020–June 2021).
Design/methodology/approach
Secondary data on macroeconomic variables and the Nifty 50 index, spanning the last ten years from 2011 to 2021, have been collected from various government and regulatory websites. An artificial neural network (ANN) model was then trained with the scaled conjugate gradient algorithm to predict the National Stock Exchange's (NSE) flagship index, Nifty 50.
Findings
The findings of the study reveal that the scaled conjugate gradient (SCG) algorithm achieved 96.99% accuracy in predicting the Indian stock market in the pre-COVID-19 scenario. In contrast, the proposed ANN model achieved 99.85% accuracy during the COVID-19 period. The findings of this study have implications for investors, portfolio managers, and domestic and foreign institutional investors.
Originality/value
The novelty of this study lies in the fact that there are hardly any studies that forecast the Indian stock market using artificial neural networks in the pre- and during-COVID-19 periods.
Bingzi Jin and Xiaojie Xu
Abstract
Purpose
Agricultural commodity price forecasts have long been important for a variety of market players. Our study addresses this need by examining the weekly wholesale price index of green grams in the Chinese market. The index covers a ten-year period, from January 1, 2010, to January 3, 2020, and has significant economic implications.
Design/methodology/approach
In order to address the nonlinear patterns present in the price time series, we investigate the nonlinear auto-regressive neural network as the forecast model. This modeling technique is able to combine a variety of basic nonlinear functions to approximate more complex nonlinear characteristics. Specifically, we examine prediction performance that corresponds to several configurations across data splitting ratios, hidden neuron and delay counts, and model estimation approaches.
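The delayed-input construction behind a nonlinear autoregressive model, together with the data-splitting step, can be sketched as follows; the toy price series, the delay count of 3 and the 70/15/15 split ratios are illustrative assumptions, not the study's chosen configuration.

```python
def make_lagged(series, delays):
    """Build (inputs, target) pairs for a nonlinear autoregressive model:
    predict series[t] from the previous `delays` observations."""
    X, y = [], []
    for t in range(delays, len(series)):
        X.append(series[t - delays:t])
        y.append(series[t])
    return X, y

def split(X, y, ratios=(0.7, 0.15, 0.15)):
    """Chronological train/validation/test split by the given ratios."""
    n = len(X)
    a = int(ratios[0] * n)
    b = a + int(ratios[1] * n)
    return (X[:a], y[:a]), (X[a:b], y[a:b]), (X[b:], y[b:])

prices = [100 + 0.5 * t for t in range(20)]   # toy weekly price index
X, y = make_lagged(prices, delays=3)
train, val, test = split(X, y)
```

Varying the delay count, the hidden-neuron count of the network fed with these lagged inputs, and the split ratios is exactly the configuration search the study describes.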
Findings
Our model turns out to be rather simple and yields forecasts with good stability and accuracy. Relative root mean square errors for training, validation and testing are 4.34%, 4.71% and 3.98%, respectively. Benchmark comparisons show that the neural network performs statistically significantly better than other machine learning models and classic time-series econometric methods.
Originality/value
One use would be to take our findings as independent technical price forecasts. Alternatively, combining them with other (basic) prediction outputs might support policy research and yield fresh insights into price patterns.