Search results

1 – 10 of 183
Article
Publication date: 10 August 2021

Deepa S.N.

Downloads
17

Abstract

Purpose

Models developed in previous studies were limited by becoming trapped in local rather than global minima; to address this, the present study develops a new intelligent ubiquitous computational model that learns with the gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. The ubiquitous machine learning computational model performs training more effectively than regular supervised or unsupervised deep learning computational models, resulting in better learning and optimization for the considered problem domain of cloud-based Internet of Things (IoT). This study aims to improve network quality and the data accuracy rate during the network transmission process using the developed ubiquitous deep learning computational model.

Design/methodology/approach

In this research study, a novel intelligent ubiquitous machine learning computational model is designed and modelled to maintain the optimal energy level of cloud IoT in sensor network domains. The model learns with the gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. A new unified deterministic sine-cosine algorithm is developed in this study for optimizing the weight parameters of the ubiquitous machine learning model.
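
The abstract names the optimizer but not its update rule; purely as an illustration, the sketch below applies the standard sine-cosine algorithm (SCA) position update to a generic weight vector. The loss function, bounds and the paper's "unified deterministic" modification are assumptions not taken from the abstract.

```python
import numpy as np

def sca_optimize(loss, dim, n_agents=20, iters=200, lb=-1.0, ub=1.0, a=2.0, seed=0):
    # Standard sine-cosine algorithm (SCA) search over a weight vector.
    # `loss` maps a candidate weight vector to a scalar to be minimized
    # (in the paper's setting this would wrap the auto-encoder/decoder objective).
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_agents, dim))
    fit = np.array([loss(x) for x in pop])
    best, best_f = pop[fit.argmin()].copy(), fit.min()

    for t in range(iters):
        r1 = a - t * (a / iters)                      # shrinks: exploration -> exploitation
        for i in range(n_agents):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.uniform(0.0, 1.0, dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - pop[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - pop[i]))
            pop[i] = np.clip(pop[i] + step, lb, ub)
            fit[i] = loss(pop[i])
            if fit[i] < best_f:
                best, best_f = pop[i].copy(), fit[i]
    return best, best_f

# Toy usage: a quadratic stands in for the deep model's reconstruction/energy loss.
w_opt, _ = sca_optimize(lambda w: float(np.sum((w - 0.3) ** 2)), dim=8)
```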

Findings

The newly developed ubiquitous model is used for computing the network energy and performing its optimization in the considered sensor network model. During progressive simulation, residual energy, network overhead, end-to-end delay, network lifetime and the number of live nodes are evaluated. The results show that the ubiquitous deep learning model yields better metrics owing to its appropriate cluster selection and minimized route selection mechanism.

Research limitations/implications

In this research study, a novel ubiquitous computing model, combining a new optimization algorithm called the unified deterministic sine-cosine algorithm with a deep learning technique, was derived and applied to maintain the optimal energy level of cloud IoT in sensor networks. The deterministic Levy flight concept is applied in developing the new optimization technique, which determines the parametric weight values for the deep learning model. The ubiquitous deep learning model is designed with auto-encoders and decoders, and the weights of their corresponding layers are tuned to optimal values with the optimization algorithm. The modelled ubiquitous deep learning approach was applied in this study to determine the network energy consumption rate and thereby optimize the energy level by increasing the lifetime of the considered sensor network model. For all the considered network metrics, the ubiquitous computing model proved to be more effective and versatile than previous approaches from earlier research studies.

Practical implications

The developed ubiquitous computing model with deep learning techniques can be applied to any type of cloud-assisted IoT, including wireless sensor networks, ad hoc networks, radio access technology networks, heterogeneous networks, etc. Practically, the developed model facilitates computing the optimal energy level of the cloud IoT for any considered network model, which helps maintain a better network lifetime and reduce the end-to-end delay of the networks.

Social implications

The social implication of the proposed research study is that it helps reduce energy consumption and increase the network lifetime of cloud IoT-based sensor network models. This approach helps people at large to have a better transmission rate with minimized energy consumption and also reduces the delay in transmission.

Originality/value

In this research study, the network optimization of cloud-assisted IoT sensor network models is modelled and analysed using machine learning models as a kind of ubiquitous computing system. Ubiquitous computing models with machine learning techniques yield intelligent systems and enable users to make better and faster decisions. In the communication domain, the use of predictive and optimization models created with machine learning accelerates new ways of determining solutions to problems. Considering the importance of learning techniques, the ubiquitous computing model is designed based on a deep learning strategy, and the learning mechanism adapts itself to attain a better network optimization model.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 5 September 2016

Naraina Avudayappan and S.N. Deepa

Downloads
207

Abstract

Purpose

The loading and power variations in the power system, especially during peak hours, have a substantial impact on the loading patterns of the open-access transmission system. Under such abnormal loading conditions, the transmission line parameters and line voltages show a substandard profile, which calls for congestion management of the power line in such events. The purpose of this paper is to present an uncomplicated and economical model for congestion management using flexible AC transmission system (FACTS) devices.

Design/methodology/approach

The approach follows a two-step procedure: first, optimal placement of a thyristor-controlled series capacitor (TCSC) and a static VAR compensator (SVC) as FACTS devices in the network; second, tuning of their control parameters to optimized values. The optimal location and tuning of the TCSC and SVC represent a demanding optimization problem owing to its multi-objective and constrained nature. Hence, a promising heuristic optimization algorithm inspired by the behaviour of cats and fireflies is employed to find the optimal placement and tuning of the TCSC and SVC.
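
The abstract names a cat-and-firefly-inspired heuristic without giving its update equations; the sketch below shows only the standard firefly attraction move over candidate placement/tuning vectors. The variable encoding, the power-flow-based congestion objective and the cat-swarm component are assumptions for illustration.

```python
import numpy as np

def firefly_step(pop, brightness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    # One iteration of the classic firefly move: each candidate is attracted towards
    # brighter (better) candidates, with attractiveness decaying with distance plus
    # a small random walk. Discrete placement entries would be rounded before a
    # load-flow evaluation; that evaluation is not reproduced here.
    rng = rng or np.random.default_rng()
    new_pop = pop.copy()
    for i in range(len(pop)):
        for j in range(len(pop)):
            if brightness[j] > brightness[i]:            # move i towards brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)       # attractiveness
                new_pop[i] = (new_pop[i]
                              + beta * (pop[j] - new_pop[i])
                              + alpha * (rng.random(pop.shape[1]) - 0.5))
    return new_pop

# Toy usage: 6 candidates of [tcsc_line, svc_bus, tcsc_setting, svc_setting].
rng = np.random.default_rng(0)
cands = rng.random((6, 4))
bright = -np.array([np.sum(c ** 2) for c in cands])       # placeholder objective
cands = firefly_step(cands, bright, rng=rng)
```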

Findings

The effectiveness of the proposed model is tested through simulation on the standard IEEE 14-bus system. The proposed approach proves to be better than earlier approaches in the literature.

Research limitations/implications

The completed simulations and results show that the proposed scheme reduces line congestion, thereby increasing voltage stability and improving the loading capability of the congested lines.

Practical implications

The usefulness of the proposed scheme is justified by the computed results, making it convenient to implement on any practical transmission network.

Originality/value

This paper fulfills an identified need to study congestion management of the power line.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 35 no. 5
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 21 August 2018

Deepa N., P.K. Bhattacharya, Shantanu Ganguly and Anandajit Goswami

Abstract

Purpose

The purpose of the paper is to evaluate print and electronic resources of TERI’s Library and Information Centre (LIC) with an aim to maximize the net marginal benefits and minimize net marginal costs, without compromising the quality of the library resources.

Design/methodology/approach

The parameters considered for analyzing the value of the library resources were resource access costs, the strategic value of each resource based on subject area coverage, frequency of use, citations, and direct and indirect benefits to users. The data regarding these parameters were drawn from a wide range of sources (both tangible and intangible) to arrive at a qualitative and quantitative assessment through an optimization- and simulation-based model.

Findings

Out of the total holdings in TERI LIC that were analyzed, 85 percent of book collections and 63.5 percent of journals were found to be useful for the researchers. The least-used books and journals were identified for weeding to optimize the value of library for users and make space for new and topical library collections.

Research limitations/implications

A sample of data sources out of the total library collections was defined for the evaluation.

Practical implications

The paper demonstrates that valuing library resources is of critical importance to libraries for effective and efficient delivery of services and for generating future knowledge. Evaluating the value of library resources has implications for both librarians and library users.

Originality/value

The evaluation exercise established the efficacy of the TERI library holdings for research and academic purposes in the domain of sustainable development. The library collection was found to be cost effective and beneficial to meet the future demand from the user community.

Details

Library Management, vol. 40 no. 3/4
Type: Research Article
ISSN: 0143-5124

Article
Publication date: 30 August 2021

Mohamed L. Shaltout and Hesham A. Hegazi

Abstract

Purpose

In this work, the design problem of hydrodynamic plain journal bearings is formulated as a multi-objective optimization problem to improve bearing performance under different operating conditions.

Design/methodology/approach

The problem is solved using a hybrid approach combining genetic algorithm and sequential quadratic programming. The selected state variables are oil leakage flow rate, power loss and minimum oil film thickness. The selected design variables are the radial clearance, length-to-diameter ratio, oil viscosity, oil supply pressure and oil supply groove angular position. A validated empirical model is adopted to provide relatively accurate estimation of the bearing state variables with reduced computations. Pareto optimal solution sets are obtained for different operating conditions, and secondary selection criteria are proposed to choose a final optimum design.
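
The hybrid genetic-algorithm-plus-sequential-quadratic-programming scheme can be sketched as a coarse evolutionary search followed by a gradient-based refinement, as below. The bearing objective is a placeholder with plausible trends only, since the paper's validated empirical model is not given in the abstract, and the weights, bounds and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bearing_cost(x, w=(0.4, 0.4, 0.2)):
    # Hypothetical scalarized surrogate: minimize leakage and power loss,
    # maximize minimum film thickness. Stands in for the paper's empirical model.
    clearance, l_over_d, viscosity, supply_p, groove_angle = x
    leakage = supply_p * clearance ** 3 / viscosity * (1.0 + 0.05 * np.cos(groove_angle))
    power_loss = viscosity / clearance * (1.0 + l_over_d)
    film_thk = clearance * (0.2 + 0.1 * l_over_d)
    return w[0] * leakage + w[1] * power_loss - w[2] * film_thk

bounds = [(20e-6, 150e-6), (0.4, 1.2), (0.01, 0.08), (0.05e6, 0.4e6), (0.0, np.pi)]

def ga_then_sqp(pop_size=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(gens):
        fit = np.array([bearing_cost(x) for x in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = 0.5 * (a + b)                         # blend crossover
            child += rng.normal(0.0, 0.05 * (hi - lo))    # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([bearing_cost(x) for x in pop])]
    # Local refinement with sequential quadratic programming (SLSQP in SciPy).
    res = minimize(bearing_cost, best, method="SLSQP", bounds=bounds)
    return res.x, res.fun

x_opt, f_opt = ga_then_sqp()
```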

Findings

The adopted hybrid optimization approach is a random search algorithm that generates a different solution set for each run, thus a different bearing design. For a number of runs, it is found that the key design variables that significantly affect the optimum state variables are the bearing radial clearance, oil viscosity and oil supply pressure. Additionally, oil viscosity is found to represent the significant factor that distinguishes the optimum designs obtained using the implemented secondary selection criteria. Finally, the results of the proposed optimum design framework at different operating conditions are presented and compared.

Originality/value

The proposed multi-objective formulation of the bearing design problem can provide engineers with a systematic approach and an important degree of flexibility to choose the optimum design that best fits the application requirements.

Details

Industrial Lubrication and Tribology, vol. 73 no. 7
Type: Research Article
ISSN: 0036-8792

Article
Publication date: 26 August 2014

Nima Jafari Navimipour, Amir Masoud Rahmani, Ahmad Habibizad Navin and Mehdi Hosseinzadeh

Abstract

Purpose

Expert Cloud, as a new class of cloud computing system, enables its users to request the skill, knowledge and expertise of people by employing internet infrastructures and cloud computing concepts without any information about their location. Job scheduling is one of the most important issues in Expert Cloud and impacts its efficiency and customer satisfaction. The purpose of this paper is to propose an applicable method based on a genetic algorithm for job scheduling in Expert Cloud.

Design/methodology/approach

Because the scheduling issue is an NP-hard problem and genetic algorithms have proven successful on optimization and NP-hard problems, the authors used a genetic algorithm to schedule the jobs on human resources in Expert Cloud. In this method, each chromosome (candidate solution) is represented by a vector; the fitness function is calculated based on response time; and one-point crossover and swap mutation are used.
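
A minimal sketch of the described encoding follows: a vector chromosome assigning each job to a human resource, a response-time-based fitness, one-point crossover and swap mutation. The job durations, resource count and the makespan proxy for response time are illustrative assumptions, not figures from the paper.

```python
import random

def schedule_ga(job_times, n_resources, pop_size=50, gens=200, p_mut=0.2, seed=1):
    rng = random.Random(seed)
    n_jobs = len(job_times)

    def fitness(chrom):
        # Response-time proxy: makespan of the per-resource workloads.
        load = [0.0] * n_resources
        for job, res in enumerate(chrom):
            load[res] += job_times[job]
        return max(load)

    # Each chromosome is a vector: chrom[job] = index of the assigned human resource.
    pop = [[rng.randrange(n_resources) for _ in range(n_jobs)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) + len(survivors) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_jobs)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < p_mut:                      # swap mutation
                i, j = rng.sample(range(n_jobs), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = schedule_ga(job_times=[3, 7, 2, 5, 9, 4], n_resources=3)
```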

Findings

The results indicate that the proposed method can schedule the received jobs in appropriate time with high accuracy compared to common methods (First Come First Served, Shortest Process Next and Highest Response Ratio Next). The proposed method also performs better in terms of total execution time, service plus wait time, failure rate and human resource utilization rate in comparison to the common methods.

Originality/value

In this paper, the job scheduling issue in Expert Cloud is addressed and the proposed approach is applied to a practical example.

Details

Kybernetes, vol. 43 no. 8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 October 2018

Ka Yee Kok, Hieng Ho Lau, Thanh Duoc Phan and Tiina Chui Huon Ting

Abstract

Purpose

This paper aims to present a design optimisation using a genetic algorithm (GA) to achieve the highest strength-to-weight (S/W) ratio for cold-formed steel residential roof trusses.

Design/methodology/approach

The GA developed in this research simultaneously optimises the roof pitch, truss configuration, joint coordinates and applied loading of a typical dual-pitched symmetrical residential roof truss. The residential roof truss was considered under incremental uniformly distributed loading, in both gravitational and uplift directions. The structural analyses of the trusses were executed within the GA using a finite element toolbox. The ultimate strength and serviceability of the trusses were checked through the design formulation implemented in the GA, according to the Australian standard AS/NZS 4600 Cold-Formed Steel Structures.
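
One common way to fold such strength and serviceability checks into a GA is a penalized S/W-ratio fitness, sketched below. The finite-element run and the AS/NZS 4600 checks are represented by stand-in callables, since the abstract does not detail the implementation; all names and figures here are assumptions.

```python
def truss_fitness(candidate, analyse, capacity_ok, deflection_ok, penalty=1e6):
    # `analyse` stands in for the finite-element run returning
    # (ultimate_load, weight, member_forces, deflections); `capacity_ok` and
    # `deflection_ok` stand in for the AS/NZS 4600 strength and serviceability checks.
    ultimate_load, weight, member_forces, deflections = analyse(candidate)
    fitness = ultimate_load / weight                      # strength-to-weight ratio
    if not all(capacity_ok(f) for f in member_forces):    # strength limit state
        fitness -= penalty
    if not all(deflection_ok(d) for d in deflections):    # serviceability limit state
        fitness -= penalty
    return fitness                                        # GA maximizes this value

# Toy usage with dummy stand-ins for the FE model and code checks:
f = truss_fitness(
    candidate={"pitch_deg": 15},
    analyse=lambda c: (25.0, 1.2, [10.0, 8.0], [4.0, 6.0]),
    capacity_ok=lambda force: force <= 12.0,
    deflection_ok=lambda d: d <= 8.0,
)
```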

Findings

An optimum double-Fink roof truss possessing the highest S/W ratio was determined using the GA, with an optimum roof pitch of 15°. The optimised roof truss is suitable for industrial application owing to its higher S/W ratio and cost-effectiveness. The combined methodology of multi-level optimisation and simultaneous optimisation developed in this research could determine an optimum roof truss with a consistent S/W ratio despite the huge GA search space.

Research limitations/implications

The sizing of roof truss members is not optimised in this paper, and only a single type of cold-formed steel section is used throughout the whole optimisation. The design of truss connections is not considered, and the corresponding connection costs are not included in the proposed optimisation.

Practical implications

The optimum roof truss presented in this paper is suitable for industrial application, offering a higher S/W ratio and lower cost in either gravitational or uplift loading configurations.

Originality/value

This research demonstrates an approach that combines multi-level optimisation and simultaneous optimisation to handle a large number of variables and hence execute an efficient design optimisation. The GA designed in this research determines the optimum residential roof truss with the highest S/W ratio, rather than the lightest truss weight as in previous studies.

Details

World Journal of Engineering, vol. 15 no. 5
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 2 January 2018

Mahmoud M. Elkholy

Abstract

Purpose

The paper aims to present an application of the teaching-learning-based optimization (TLBO) algorithm and a static VAR compensator (SVC) to improve the steady-state and dynamic performance of self-excited induction generators (SEIG).

Design/methodology/approach

The TLBO algorithm is applied to generate the optimal capacitance to maintain rated voltage with different types of prime mover. For a constant-speed prime mover, the TLBO algorithm attains the optimal capacitance that yields rated load voltage at different loading conditions. In the case of a variable-speed prime mover, the TLBO methodology is used to obtain the optimal capacitance and prime mover speed that yield rated load voltage and frequency. An SVC comprising a fixed capacitor and a controlled reactor is used to fine-tune the capacitance value and control the reactive power. The parameters of the SVC are obtained using the TLBO algorithm.
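
TLBO is essentially parameter-free apart from population size and iteration count; the sketch below shows its teacher and learner phases for a two-variable search such as [excitation capacitance, prime mover speed]. The objective that would wrap the SEIG steady-state model and the bounds are assumptions, as the abstract names only the algorithm.

```python
import numpy as np

def tlbo_minimize(obj, bounds, pop_size=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([obj(x) for x in pop])

    for _ in range(iters):
        # Teacher phase: move learners towards the best solution, away from the mean.
        teacher, mean = pop[fit.argmin()], pop.mean(axis=0)
        tf = rng.integers(1, 3)                           # teaching factor, 1 or 2
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(len(bounds)) * (teacher - tf * mean), lo, hi)
            f = obj(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
        # Learner phase: each learner interacts with a random peer.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
            cand = np.clip(pop[i] + rng.random(len(bounds)) * direction, lo, hi)
            f = obj(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    return pop[fit.argmin()], fit.min()

# Toy usage: a quadratic stands in for the SEIG voltage/frequency error model.
best_x, best_f = tlbo_minimize(lambda x: (x[0] - 40e-6) ** 2 + (x[1] - 1500) ** 2,
                               bounds=[(10e-6, 100e-6), (1000, 1800)])
```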

Findings

The whole system of a three-phase induction generator and SVC is established in the MATLAB/Simulink environment. The performance of the SEIG is demonstrated on two different ratings (7.5 kW and 1.5 kW) using the TLBO algorithm and SVC. An experimental setup is built using a 1.5 kW three-phase induction machine to confirm the theoretical analysis. The TLBO results agree with those of other meta-heuristic optimization techniques.

Originality/value

The paper presents an application of meta-heuristic algorithms and an SVC to analyse the steady-state and dynamic performance of the SEIG and achieve optimal performance.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 12 June 2017

Shabia Shabir Khan and S.M.K. Quadri

Abstract

Purpose

As far as the treatment of the most complex design issues is concerned, approaches based on classical artificial intelligence are inferior to those based on computational intelligence, particularly when dealing with vagueness, multi-objectivity and a large number of possible solutions. In practical applications, computational intelligence techniques have given the best results, and research in this field is continuously growing. The purpose of this paper is to search for a general and effective intelligent tool for the prediction of patient survival after surgery. The present study constructs such intelligent computational models using different configurations, including data partitioning techniques, which are experimentally evaluated on a realistic medical data set for the prediction of survival in pancreatic cancer patients.

Design/methodology/approach

On the basis of experiments and research performed on data from various fields using different intelligent tools, the authors infer that combining the qualitative aspects of a fuzzy inference system with the quantitative aspects of an artificial neural network can yield an efficient and better prediction model. The authors constructed three soft computing-based adaptive neuro-fuzzy inference system (ANFIS) models with different configurations and data partitioning techniques, with the aim of identifying capable predictive tools that can deal with nonlinear and complex data. After evaluating the models over three shuffles of data (training set, test set and full set), the performances were compared to find the best design for prediction of patient survival after surgery. The models were constructed and implemented using the MATLAB simulator.
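
Of the three configurations, the FCM-partitioned model is reported as performing best; a minimal fuzzy C-means sketch of that partitioning step is shown below. The ANFIS network built on top of these clusters (implemented in MATLAB in the paper) is not reproduced, and the toy data are purely illustrative.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, iters=100, tol=1e-5, seed=0):
    # Classic FCM: alternate between weighted cluster centres and fuzzy memberships.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                     # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted centres
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2.0 / (m - 1.0)))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

# Toy usage on random "patient feature" rows; each cluster seeds one fuzzy rule.
centers, U = fuzzy_c_means(np.random.default_rng(1).random((50, 4)), n_clusters=3)
```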

Findings

On applying the hybrid intelligent neuro-fuzzy models with different configurations, the authors found an advantage in predicting the survival of patients with pancreatic cancer. Experimental results and comparisons between the constructed models show that the ANFIS model with fuzzy C-means (FCM) partitioning provides the best accuracy in predicting the class, with the lowest mean square error (MSE) value. Apart from the MSE value, the other evaluation measures for FCM partitioning also prove to be better than those of the rest of the models. Therefore, the results demonstrate that the model can be applied to other biomedicine and engineering fields dealing with complex issues related to imprecision and uncertainty.

Originality/value

The originality of the paper includes a framework showing the two-way flow of fuzzy system construction, which the authors further use in designing the three simulation models with different configurations, including the partitioning methods, for prediction of patient survival after surgery. Several experiments were carried out using different shuffles of data to validate the parameters of the model. The performances of the models were compared using various evaluation measures such as MSE.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 27 January 2021

Mohamed ElMenshawy and Mohamed Marzouk

Abstract

Purpose

Nowadays, building information modeling (BIM) represents an evolution in the architecture, engineering and construction (AEC) industries through its various applications. BIM can store huge amounts of information related to buildings, which can be leveraged in several areas such as quantity takeoff, scheduling, sustainability and facility management. The main objective of this research is to establish a model for automated schedule generation using BIM and to solve the time–cost trade-off problem (TCTP) resulting from the various scenarios offered to the user.

Design/methodology/approach

A model is developed that uses the quantities exported from a BIM platform, generates construction activities, calculates the duration of each activity and finally applies the logic/sequence to link the activities together. Multi-objective optimization is then performed using the nondominated sorting genetic algorithm (NSGA-II) to provide the most feasible solutions considering project duration and cost. The researchers opted for NSGA-II because it is one of the well-known and credible algorithms used in many applications, and its performance has been tested in several comparative studies.
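
At the heart of NSGA-II is nondominated sorting over the duration and cost objectives; the sketch below shows only that filtering step applied to already-generated schedule scenarios, not the full NSGA-II loop (crowding distance, selection, crossover, mutation) used in the paper. The scenario names and figures are invented for illustration.

```python
def pareto_front(scenarios):
    # Keep every scenario that no other scenario beats on both duration and cost.
    # `scenarios` maps a scenario name to (duration_days, cost).
    front = {}
    for name, (dur, cost) in scenarios.items():
        dominated = any(
            (d2 <= dur and c2 <= cost) and (d2 < dur or c2 < cost)
            for other, (d2, c2) in scenarios.items() if other != name
        )
        if not dominated:
            front[name] = (dur, cost)
    return front

# Toy usage with hypothetical crew-assignment scenarios:
print(pareto_front({"1 crew": (320, 1.00e6), "2 crews": (270, 1.08e6),
                    "3 crews": (255, 1.25e6), "2 crews, night shift": (280, 1.30e6)}))
```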

Findings

The proposed model is capable of selecting the near-optimum scenario for the project and exporting it to Primavera software. A case study is presented to demonstrate the use of the proposed model and illustrate its main features.

Originality/value

The proposed model provides a simple and user-friendly tool for automated schedule generation of construction projects. In addition, it enables an interface between the automated schedule generation model and Primavera, one of the most popular and common scheduling software solutions in the construction industry. Furthermore, it allows importing data from MS Excel, which is used to store activity data for the different scenarios. There are numerous solutions, each corresponding to a certain duration and cost according to the performance factor, which often reflects the number of crews assigned to the activity and/or the construction method.

Details

Engineering, Construction and Architectural Management, vol. 28 no. 10
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 25 June 2021

Amira Shalaby and A. Samer Ezeldin

Abstract

Purpose

In many developing countries, the sanitation sector constitutes a major part of strategic reform plans. Yet, with the very limited budget of the public treasury, countries turn to major lending institutions for funds. "Results-based finance" is a new funding mechanism that has proven its efficiency in achieving the necessary reform in sanitation sectors. Owing to the complexity of the funding tool, it is crucial to decompose the project into smaller packages so that it can be controlled effectively. The objective of this paper is to reach an optimum packaging scheme that enables the project to be successfully managed through better planning and cost control practices.

Design/methodology/approach

With the aid of the Unified Modelling Language (UML), an algorithm is developed to map the logic behind the suggested model, with detailed illustrations of its different modules. Object-oriented processes and operations are modelled using different diagrams of the language, which automatically generate the optimum packaging combination. The packaging model is then implemented via a number of computer-aided programs: Microsoft Excel 2019 is used for calculation purposes, the Visual Basic for Applications (VBA) programming language is used to make the model user-friendly for non-engineering stakeholders, and Palisade's Decision Tools Suite is used for the optimization process.
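
The paper implements the packaging optimization in Excel, VBA and Palisade's Decision Tools Suite; purely as an illustration of the idea, the sketch below enumerates contiguous package splits in Python and scores them by the worst cumulative cash deficit, a plausible but assumed stand-in for the paper's cash-flow objective. The milestone data are invented.

```python
from itertools import combinations

def best_packaging(milestones, n_packages):
    # Split an ordered list of results-based milestones (cost, disbursement) into
    # contiguous packages; costs are incurred during a package and the disbursement
    # is released only once its results are achieved. Pick the split that minimizes
    # the worst cumulative cash deficit.
    n = len(milestones)
    best_split, best_deficit = None, float("inf")
    for cuts in combinations(range(1, n), n_packages - 1):
        bounds = [0, *cuts, n]
        packages = [milestones[a:b] for a, b in zip(bounds, bounds[1:])]
        balance, worst = 0.0, 0.0
        for pkg in packages:
            balance -= sum(cost for cost, _ in pkg)       # spend during the package
            worst = min(worst, balance)
            balance += sum(pay for _, pay in pkg)         # payment on verified results
        if -worst < best_deficit:
            best_split, best_deficit = packages, -worst
    return best_split, best_deficit

# Toy usage: (cost, disbursement) per milestone, grouped into three packages.
split, deficit = best_packaging([(5, 6), (8, 9), (4, 4), (7, 8), (6, 7)], n_packages=3)
```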

Findings

The model is validated through a case study of a mega sanitation project located in Egypt. The model output is not only the content of the packages but also a complete management plan that presents much useful information to decision-makers and government officials.

Originality/value

The research aims to provide the construction industry with a tool that automates the packaging process of mega projects funded through the "results-based finance" mechanism. Moreover, the packages are selected in a way that optimizes the project cash flow. Having the optimum package size ensures better planning and more accurate cost control. Yet this is a challenging task, especially when the project cash flow is very sensitive and intolerant to delays, as in the "results-based finance" mechanism.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

1 – 10 of 183