Search results
1 – 8 of 8
Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam
Abstract
Purpose
Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we propose a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.
Design/methodology/approach
The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
Findings
The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.
Originality/value
Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.
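To make the wrapper idea concrete, the following is a minimal sketch of binary feature selection with a Grey Wolf Optimizer. It assumes the common scheme of continuous wolf positions binarized at 0.5 into feature masks and a user-supplied fitness function; the paper's actual GWOFS variant, its transfer function and its MLP-based fitness are not reproduced here.

```python
import random

def gwo_feature_select(fitness, n_features, n_wolves=8, n_iters=30, seed=0):
    """Simplified binary Grey Wolf Optimizer for feature selection.

    `fitness` maps a 0/1 mask (tuple) to a score to maximize. Wolves hold
    continuous positions in [0, 1]; a position is binarized at 0.5 to form
    a mask (one common transfer scheme -- an assumption, not necessarily
    the paper's exact GWOFS design).
    """
    rng = random.Random(seed)
    wolves = [[rng.random() for _ in range(n_features)]
              for _ in range(n_wolves)]
    mask = lambda pos: tuple(1 if x > 0.5 else 0 for x in pos)
    best = max((mask(w) for w in wolves), key=fitness)

    for t in range(n_iters):
        # Rank the pack: alpha, beta and delta wolves lead the hunt.
        ranked = sorted(wolves, key=lambda w: fitness(mask(w)), reverse=True)
        leaders = [list(w) for w in ranked[:3]]
        a = 2.0 * (1 - t / n_iters)  # exploration coefficient decays to 0
        for w in wolves:
            for j in range(n_features):
                pulls = 0.0
                for lead in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    pulls += lead[j] - A * abs(C * lead[j] - w[j])
                w[j] = min(1.0, max(0.0, pulls / 3))  # average of three pulls
            if fitness(mask(w)) > fitness(best):
                best = mask(w)
    return best

# Toy fitness: features 0 and 2 are informative; every extra feature costs 1.
toy = lambda m: 2 * (m[0] + m[2]) - sum(m)
selected = gwo_feature_select(toy, n_features=6)
```

In the full pipeline the fitness function would train the MLP on the candidate feature subset and return its validation accuracy; the toy fitness above merely stands in for that step.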
Abstract
Purpose
This paper seeks to explore the sensitivity of these parameters and their impact on fiscal policy outcomes. We use the existing literature to establish possible ranges for each parameter, and we examine how changes within these ranges can alter the outcomes of fiscal policy. In this way, we aim to highlight the importance of these parameters in the formulation and evaluation of fiscal policy.
Design/methodology/approach
The role of fiscal policy, its effects and multipliers continues to be a subject of intense debate in macroeconomics. Despite adopting a New Keynesian approach within a macroeconomic model, the reactions of macroeconomic variables to fiscal shocks can vary across different contexts and theoretical frameworks. This paper aims to investigate these diverse reactions by conducting a sensitivity analysis of parameters. Specifically, the study examines how key variables respond to fiscal shocks under different parameter settings. By analyzing the behavioral dynamics of these variables, this research contributes to the ongoing discussion on fiscal policy. The findings offer valuable insights to enrich the understanding of the complex relationship between fiscal shocks and macroeconomic outcomes, thus facilitating informed policy debates.
Findings
This paper aims to investigate key elements of New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models. The focus is on the calibration of parameters and their impact on macroeconomic variables, such as output and inflation. The study also examines how different parameter settings affect the response of monetary policy to fiscal measures. In conclusion, this study has relied on theoretical exploration and a comprehensive review of existing literature. The parameters and their relationships have been analyzed within a robust theoretical framework, offering valuable insights for further research on how these factors influence model forecasts and inform policy recommendations derived from New Keynesian DSGE models. Moving forward, it is recommended that future work includes empirical analyses to test the reliability and effectiveness of parameter calibrations in real-world conditions. This will contribute to enhancing the accuracy and relevance of DSGE models for economic policy decision-making.
Originality/value
This study is motivated by the aim to provide a deeper understanding of the roles that macroeconomic model parameters play in shaping responses to expansionary fiscal policies and the subsequent reactions of monetary authorities. Comprehensive reviews encompassing this breadth of relationships within a single text are rare in the literature, making this work a valuable contribution to discussions on macroeconomic policy.
Vinayambika S. Bhat, Thirunavukkarasu Indiran, Shanmuga Priya Selvanathan and Shreeranga Bhat
Abstract
Purpose
The purpose of this paper is to propose and validate a robust industrial control system. The aim is to design a multivariable proportional-integral (PI) controller that accommodates multiple responses while considering the process's control and noise parameters. In addition, this paper intends to develop a multidisciplinary approach by combining computational science, control engineering and statistical methodologies to ensure a resilient process with the best use of available resources.
Design/methodology/approach
Taguchi's robust design methodology and multi-response optimisation approaches are adopted to meet the research aims. A two-input-two-output (TITO) transfer function model of the distillation column system is investigated. In designing the control system, the steady-state gain matrix and process factors such as the time constant (τ) and time delay (θ) are also used. The unique methodology is implemented and validated using the pilot plant's distillation column. To determine the robustness of the proposed control system, a simulation study, statistical analysis and real-time experimentation are conducted. In addition, the outcomes are compared to different control algorithms.
Findings
Research indicates that integral control parameters (Ki) affect outputs substantially more than proportional control parameters (Kp). The results of this paper show that control and noise parameters must be considered to make the control system robust. In addition, Taguchi's approach, in conjunction with multi-response optimisation, ensures robust controller design with optimal use of resources. Eventually, this research shows that the best outcomes for all the performance indices are achieved when Kp11 = 1.6859, Kp12 = −2.061, Kp21 = 3.1846, Kp22 = −1.2176, Ki11 = 1.0628, Ki12 = −1.2989, Ki21 = 2.454 and Ki22 = −0.7676.
Originality/value
This paper provides a step-by-step strategy for designing and validating a multi-response control system that accommodates controllable and uncontrollable parameters (noise parameters). The methodology can be used in any industrial Multi-Input-Multi-Output system to ensure process robustness. In addition, this paper proposes a multidisciplinary approach to industrial controller design that academics and industry can refine and improve.
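As an illustration of how the reported gain matrices would enter a control law, here is a minimal sketch of a discrete-time multivariable PI controller using the Kp and Ki values from the Findings. The sampling time `dt` and the simple rectangular integration are assumptions; the paper's plant model and implementation details are not reproduced.

```python
# Gain matrices reported in the Findings (row i = output i, column j = error j).
KP = [[1.6859, -2.061], [3.1846, -1.2176]]
KI = [[1.0628, -1.2989], [2.454, -0.7676]]

def make_pi_controller(kp, ki, dt):
    """Discrete multivariable PI law: u = Kp*e + Ki*integral(e).

    Integration is plain rectangular accumulation -- an illustrative
    choice, not necessarily the scheme used in the paper.
    """
    integ = [0.0, 0.0]  # accumulated error per loop
    def step(e):
        for i in range(2):
            integ[i] += e[i] * dt
        # Each control action mixes both loop errors via the 2x2 gains.
        return [sum(kp[i][j] * e[j] + ki[i][j] * integ[j] for j in range(2))
                for i in range(2)]
    return step

ctrl = make_pi_controller(KP, KI, dt=0.1)
u = ctrl([1.0, 0.0])  # control action for a unit error on loop 1
```

The cross-coupling terms (Kp12, Kp21, etc.) are what make the controller multivariable: an error in one loop directly adjusts both manipulated variables.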
Alolote Amadi and Onaopepo Adeniyi
Abstract
Purpose
This paper aims to quantitatively assess the resilience of residential properties to urban flooding in Port Harcourt, Nigeria, and to examine whether resilience varies at spatially aggregated scales relative to the level of flood exposure.
Design/methodology/approach
The study synthesizes theoretical constructs/indicators for quantifying property-level resilience. Using a two-stage purposive/stratified randomized sampling approach, 407 questionnaires were sent to residents of 25 flood-prone areas to solicit information on the resilience constructs, as indicated by the adaptation behaviors of individual households and their property attributes. A principal component analysis (PCA) approach is used to weight the indicators, based on which aggregated spatial-scale resilience indices were computed for the 25 sampled areas relative to their levels of flood exposure.
Findings
Area 11, located in the moderate flood zone, has the lowest resilience index, while Area 20, located in the high flood zone, has the highest. The resilience indices for the low, moderate and high flood zones show only minimal and statistically insignificant differences, indicating maladaptation even with incremental levels of flood exposure.
Practical implications
The approach to resilience measurement exemplifies a reproducible lens through which the concept of “living with floods” can be holistically assessed at the property level while highlighting the nexus of the social and technical dimensions.
Originality/value
The study moves beyond theoretical conceptualization, to empirically quantify the complex concept of property-level flood resilience.
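The PCA-weighting step can be sketched as follows: standardize the indicator matrix, take the first principal component (here via power iteration), and use its normalized loadings as indicator weights in a composite index. This is a simplified stand-in under stated assumptions; the paper's full PCA scheme (number of retained components, rotation, etc.) is not specified in the abstract, and the indicator scores below are hypothetical.

```python
import math

def pca_weights(rows):
    """Indicator weights from the first principal component of
    standardized data (power iteration on the covariance matrix)."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    sds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / n) or 1.0
           for j in range(p)]
    z = [[(r[j] - means[j]) / sds[j] for j in range(p)] for r in rows]
    # Covariance matrix of the standardized indicators.
    cov = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(p)]
           for a in range(p)]
    # Power iteration converges to the dominant eigenvector (cov is PSD).
    v = [1.0] * p
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    total = sum(abs(x) for x in v)
    return [abs(x) / total for x in v]  # weights sum to 1

def resilience_index(indicators, weights):
    """Composite index: weighted sum of an area's indicator scores."""
    return sum(w * x for w, x in zip(weights, indicators))

# Hypothetical indicator scores for four areas (three indicators each).
areas = [[0.8, 0.2, 0.5], [0.4, 0.9, 0.3], [0.6, 0.5, 0.7], [0.2, 0.1, 0.9]]
w = pca_weights(areas)
indices = [resilience_index(a, w) for a in areas]
```

Using absolute loadings as weights is one common convention for composite indices; signed loadings would instead distinguish indicators that raise versus lower resilience.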
Florian Cramer and Christian Fikar
Abstract
Purpose
Short food supply chains have the potential to facilitate the transition to more sustainable food systems. Related distribution processes, however, can be challenging for smallholder and family farmers. To extend the market reach of farmers without the need for extensive investments, crowd logistics (CL) can be used. The purpose of this paper is to explore the benefits and trade-offs of implementing CL platforms in short food supply chains (SFSCs).
Design/methodology/approach
A decision support system (DSS) based on agent-based and discrete event simulation (DES) modelling is developed, which closely approximates the behaviour of customers and distribution processes at outlets. Different scenarios are explored to evaluate the potential of CL in rural and urban settings using the example of regions from Bavaria, Germany.
Findings
Results show that CL can be used to increase the reach of farmers in SFSCs at the cost of minor food quality losses. Moreover, a difference between urban and rural settings is noted: An urban scenario requires less investment in the driver base, whereas the rural scenario shows a higher potential to increase market reach.
Originality/value
Platform-based food delivery services are still largely unexplored in the context of SFSCs. This research shows that platform services such as CL can be used to support local agriculture and facilitate the distribution of perishable food items. By introducing a simulation-based DSS and providing detailed results on various application settings, it serves as a stepping stone towards successful real-world implementations and encourages further research.
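The discrete-event core of such a DSS can be illustrated with a minimal event-driven queue: orders arrive over time and wait for the earliest free crowd driver. This is an illustrative sketch only; the arrival and travel-time distributions are assumptions, and the paper's model additionally covers customer behaviour at outlets and food-quality decay, omitted here.

```python
import heapq
import random

def simulate_deliveries(n_orders, n_drivers, mean_travel, seed=0):
    """Minimal discrete-event sketch of crowd-logistics deliveries.

    Orders arrive as a Poisson-like stream and are served by the
    earliest-available driver; returns the mean order waiting time.
    """
    rng = random.Random(seed)
    free_at = [0.0] * n_drivers  # priority queue: time each driver is free
    heapq.heapify(free_at)
    waits = []
    t = 0.0
    for _ in range(n_orders):
        t += rng.expovariate(1.0)            # next order arrival
        driver_free = heapq.heappop(free_at)  # earliest-available driver
        start = max(t, driver_free)
        waits.append(start - t)               # time the order waited
        travel = rng.expovariate(1.0 / mean_travel)
        heapq.heappush(free_at, start + travel)
    return sum(waits) / len(waits)

urban = simulate_deliveries(500, n_drivers=5, mean_travel=1.0)
rural = simulate_deliveries(500, n_drivers=5, mean_travel=3.0)
```

Comparing runs with different driver counts and travel times mirrors, in miniature, the paper's urban-versus-rural scenario analysis.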
Abstract
Purpose
This research argues that manual geometric modeling is hindering the promotion of building information modeling (BIM) to small companies. It is therefore necessary to study automated modeling so as to reduce the dependence of BIM implementation on manpower. This paper presents such a system, proposing both its theory and a prototype.
Design/methodology/approach
This research adopted prototyping as its methodology, which consists of three steps: (1) proposing a theoretical framework to support the automated geometric modeling process; (2) developing a prototype system based on the framework; and (3) testing the prototype system's performance.
Findings
Previous research into automated geometric modeling focused on specific procedures for particular engineering domains; no general model had been abstracted to support generic geometric modeling. Taking a higher level of abstraction, this paper proposes such a model, which can describe the general geometric modeling process and thus serve generic automated geometric modeling systems.
Research limitations/implications
This paper focused only on geometric modeling, leaving aside the non-geometric information of BIM. A complete BIM model consists of both geometric and non-geometric data; a method for combining the two is therefore on the research agenda.
Originality/value
The model proposed in this paper provides a mechanism to translate engineering geometric objects into textual representations and can act as the kernel of generic automated geometric modeling systems, which are expected to boost BIM promotion in industry.
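The idea of translating geometric objects into textual representations can be sketched as a simple serializer: each object type maps to a textual modeling command that a downstream system could replay. The `Wall` type and the command format are purely hypothetical; the abstract does not specify the paper's actual textual representation.

```python
from dataclasses import dataclass

@dataclass
class Wall:
    """Hypothetical geometric object: a straight wall segment in plan."""
    start: tuple      # (x, y) of one end
    end: tuple        # (x, y) of the other end
    height: float
    thickness: float

def to_text(obj):
    """Translate a geometric object into a textual modeling command.

    Illustrative format only -- a real kernel would cover many object
    types and a formally defined command grammar.
    """
    if isinstance(obj, Wall):
        return (f"WALL {obj.start[0]} {obj.start[1]} -> "
                f"{obj.end[0]} {obj.end[1]} H={obj.height} T={obj.thickness}")
    raise TypeError(f"no textual rule for {type(obj).__name__}")

print(to_text(Wall((0, 0), (5, 0), 3.0, 0.2)))
# WALL 0 0 -> 5 0 H=3.0 T=0.2
```

Because the output is plain text, such commands can be generated by scripts rather than manual drafting, which is the labor-saving point the paper makes.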
Fariba Ramezani, Amir Arjomandi and Charles Harvie
Abstract
Purpose
As a by-product of the production process, emissions can follow output fluctuations. Hence, disregarding the relationship between economic fluctuations and emissions could result in undesirable environmental outcomes. This study aims to investigate the environmental and economic effects of abatement subsidies on overall emissions during business cycles in Australia.
Design/methodology/approach
A real business cycle (RBC) model is devised and parameterised in this paper. RBC models have been recently introduced to environmental policy analysis, and this study contributes to the literature by investigating the effects of a potential subsidy policy in an RBC framework. The model is also calibrated and provides solutions for the Australian economy.
Findings
The authors find that, in the steady state, supporting abatement can reduce emissions by 6.45% while imposing welfare costs on the economy (of 0.61%). Simulation results show that an optimal abatement policy should be pro-cyclical, with the abatement subsidy increasing during expansions and decreasing during recessions. Moreover, in a subsidy policy setting, emissions would react pro-cyclically, i.e. emissions increase (decrease) when gross domestic product increases (decreases). The abatement reaction by firms, however, is different: when a positive productivity shock occurs, firms reduce abatement and allocate resources to production. Nonetheless, as time passes, the increased subsidy provides a strong enough incentive to allocate resources to abatement and, subsequently, abatement increases.
Originality/value
This paper investigates how an emission reduction subsidy should be adapted to macroeconomic fluctuations so that it can limit variations in emissions.
Ana Carolina Franco De Oliveira, Cristiano Saad Travassos do Carmo, Alexandre Santana Cruz and Renata Gonçalves Faisca
Abstract
Purpose
In developing countries such as Brazil, the construction sector consistently focuses on constructing new buildings, and the preservation, restoration and maintenance of historic buildings are not widely practiced. Idle buildings, owing to use and lack of maintenance, present pathological manifestations, such as moisture problems, that especially compromise their thermal and energy performance. With this in mind, the purpose of this work is to create a digital model using terrestrial photogrammetry and to suggest retrofit interventions, based on computer simulation, to improve the thermal and energy performance of a historical building.
Design/methodology/approach
The proposed methodology combined terrestrial photogrammetry using common smartphones and commercial software for historical buildings with building information modeling (historic building information modeling (HBIM)) and building energy modeling (BEM). The approach follows five steps: planning, site visit, data processing, data modeling and results. Also, as a case study, the School of Architecture and Urbanism of the Fluminense Federal University, built in 1888, was chosen to validate the approach.
Findings
A digital map of pathological manifestations in the HBIM model was developed, and interventions considering the application of expanded polystyrene in the envelope to reduce energy consumption were outlined. From the synergy between HBIM and BEM, it was concluded that the information modeled using photogrammetry was fundamental to create the energy model, and simulations were needed to optimize the possible solutions in terms of energy consumption.
Originality/value
First, the work proposes a practical methodology that can be applied in developing countries without sophisticated technologies, yet with acceptable precision for the study's purpose. Second, the study shows that using HBIM for energy modeling is useful for simulating possible solutions that optimize thermal and energy performance.