Search results
1 – 10 of 117
Chongyi Chang, Gang Guo, Wen He and Zhendong Liu
Abstract
Purpose
The objective of this study is to investigate the impact of longitudinal forces on extreme-long heavy-haul trains, providing new insights and methods for their design and operation, thereby enhancing safety, operational efficiency and track system design.
Design/methodology/approach
A longitudinal dynamics simulation model of the extreme-long heavy-haul train was established and verified against braking test data from a 30,000 t heavy-haul combination train on the long and steep downgrade of the Daqing Line. The simulation model was then used to analyze how various factors influence the longitudinal forces in extreme-long heavy-haul trains.
Findings
Under normal conditions, the formation length of an extreme-long heavy-haul combination train has only a small effect on the maximum longitudinal coupler force under full service braking and emergency braking on straight track. The slope difference of a long and steep downgrade, however, has a great impact on the maximum longitudinal coupler force of extreme-long heavy-haul trains. Under the condition that the longitudinal force does not exceed the safety limit of 2,250 kN under full service braking at a speed of 60 km/h, the maximum allowable slope difference of a long and steep downgrade is 13‰ for a 40,000 t combination train and only 5‰ for a 100,000 t train.
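The lumped-mass longitudinal dynamics modelling this abstract refers to can be sketched minimally as below: each vehicle is a point mass, couplers are linear spring-dampers, and a brake force ramps up on each vehicle after a signal-propagation delay, which is what generates in-train forces. All parameters (masses, stiffness, brake levels, delays) are illustrative assumptions, not the study's validated values.

```python
def max_coupler_force(n=50, mass=100e3, k=5e6, c=5e4, brake=60e3,
                      signal_delay=0.05, ramp=4.0, dt=2e-3, t_end=10.0):
    """Peak coupler-force magnitude (N) during a braking run-in.

    Illustrative lumped-mass chain: n vehicles of `mass` kg joined by
    spring (k) / damper (c) couplers; per-vehicle brake force ramps up
    after a propagation delay, so rear vehicles brake later.
    """
    x = [0.0] * n            # displacement from nominal position (m)
    v = [20.0] * n           # initial speed (m/s)
    peak, t = 0.0, 0.0
    while t < t_end:
        # coupler force between vehicles i and i+1 (positive = tension)
        fc = [k * (x[i] - x[i + 1]) + c * (v[i] - v[i + 1])
              for i in range(n - 1)]
        peak = max(peak, max(abs(f) for f in fc))
        for i in range(n):
            # brake signal reaches vehicle i after i * signal_delay s
            level = min(max((t - i * signal_delay) / ramp, 0.0), 1.0)
            f = -brake * level if v[i] > 0 else 0.0
            if i > 0:
                f += fc[i - 1]          # force from coupler ahead
            if i < n - 1:
                f -= fc[i]              # force from coupler behind
            v[i] += f / mass * dt       # semi-implicit Euler
        for i in range(n):
            x[i] += v[i] * dt
        t += dt
    return peak
```

In such a model, run-in compression peaks grow with the brake-signal delay along the train, which is why formation length and grade profile matter for the coupler-force limit.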
Originality/value
The results provide an important theoretical basis and practical guidance for further improving the transportation efficiency and safety of extreme-long heavy-haul trains.
Sameer Kumar, Yogesh Marawar, Gunjan Soni, Vipul Jain, Anand Gurumurthy and Rambabu Kodali
Abstract
Purpose
Lean manufacturing (LM) is prevalent in the manufacturing industry; thus, focusing on fast and accurate lean tool implementation is the new paradigm in manufacturing. Value stream mapping (VSM) is one of the many LM tools. It is understood that combining LM implementation with VSM tools can generate better outcomes. This paper aims to develop an expert system for optimal sequencing of VSM tools for lean implementation.
Design/methodology/approach
The proposed artificial neural network (ANN) model is based on an analytic network process (ANP) devised for this study and facilitates the selection of VSM tools in an optimal sequence.
Findings
Considering different types of wastes and their level of occurrence, organizations need a set of specific tools that will be effective in the elimination of these wastes. The developed ANP model computes a level of interrelation between wastes and VSM tools. The ANN is designed and trained by data obtained from numerous case studies, so it can predict the accurate sequence of VSM tools for any new case data set.
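The core ANP step described above, computing interrelations between wastes and tools, can be sketched as follows: a pairwise-influence matrix is column-normalized into a stochastic supermatrix and power-iterated until every column converges to the limiting priority vector, which then orders the tools. The 4x4 influence weights below are illustrative, not the paper's data.

```python
def anp_priorities(matrix, iters=200):
    """Limiting priorities of a positive influence matrix (ANP-style)."""
    n = len(matrix)
    # Column-normalize so each column sums to 1 (stochastic supermatrix)
    cols = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    w = [[matrix[r][c] / cols[c] for c in range(n)] for r in range(n)]
    m = w
    for _ in range(iters):                      # power iteration: m <- m @ w
        m = [[sum(m[r][k] * w[k][c] for k in range(n)) for c in range(n)]
             for r in range(n)]
    return [m[r][0] for r in range(n)]          # any column of the limit

# Illustrative interrelations among four hypothetical VSM tools
A = [[1, 2, 4, 3],
     [1, 1, 2, 2],
     [0.5, 1, 1, 1],
     [0.5, 0.5, 1, 1]]
prios = anp_priorities(A)
order = sorted(range(4), key=lambda i: -prios[i])   # implementation sequence
```

The resulting ranking is what an ANN, trained on many such case matrices, would learn to predict directly from waste-occurrence data.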
Originality/value
The design and use of the ANN model provide an integrated result of both empirical and practical cases, which is more accurate because all viable aspects are then considered. The proposed modeling approach is validated through implementation in an automobile manufacturing company. It has resulted in benefits, namely, reduction in bias, time required, effort required and complexity of the decision process. More importantly, according to all performance criteria and subcriteria, the main goal of this research was satisfied by increasing the accuracy of selecting the appropriate VSM tools and their optimal sequence for lean implementation.
S. Thavasi and T. Revathi
Abstract
Purpose
With so many placement opportunities arising in their final or prefinal year, students start to feel the strain of the placement season. They need to be aware of where they stand and how to increase their chances of being hired. Hence, a system to guide their careers is one of the needs of the day.
Design/methodology/approach
The job role prediction system applies machine learning techniques, namely Naïve Bayes, k-nearest neighbor, support vector machines (SVM) and artificial neural networks (ANN), to suggest a student's job role based on their academic performance and course outcomes (CO); of these, ANN performs best. The system uses the Mepco Schlenk Engineering College curriculum, placement and student assessment data sets, in which the CO and syllabus are used to determine the skills a student has gained from their courses. The skills necessary for a job position are then extracted from job advertisements. The system compares the student's skills with the required skills for the job role based on the placement prediction result.
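Of the classifiers listed, k-nearest neighbor is the simplest to sketch. The toy example below matches a hypothetical student's CO-attainment vector against labelled past records; the feature dimensions, scores and role names are invented for illustration and are not the study's data.

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k training points nearest to `query`."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_x, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical CO-attainment vectors: [programming, maths, networks]
X = [[0.9, 0.6, 0.3], [0.8, 0.7, 0.2], [0.3, 0.4, 0.9],
     [0.2, 0.5, 0.8], [0.7, 0.9, 0.4], [0.6, 0.8, 0.3]]
y = ["developer", "developer", "network-eng",
     "network-eng", "data-analyst", "data-analyst"]

role = knn_predict(X, y, [0.85, 0.65, 0.25])   # strong programming profile
```

The same vector-of-skills representation also supports the skill-gap analysis: the difference between a student's vector and a job role's required-skill vector identifies strengths and weaknesses.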
Findings
The system predicts placement possibilities with an accuracy of 93.33% and a precision of 98%. The skill analysis also gives students information about the strengths and weaknesses of their skill sets.
Research limitations/implications
For skill-set analysis, only the direct assessment of the students is considered. Indirect assessment shall also be considered for future scope.
Practical implications
The model is adaptable and flexible (customizable) to any type of academic institution or university.
Social implications
The research will be useful to the student community in bridging the gap between academic and industrial needs.
Originality/value
Several works on career guidance for students exist. However, these methodologies are designed using only the curriculum and students' basic personal information. The proposed system also considers students' academic performance through direct assessment, along with their curriculum and basic personal information.
Kerim Koc, Ömer Ekmekcioğlu and Asli Pelin Gurgun
Abstract
Purpose
Central to the entire discipline of construction safety management is the concept of construction accidents. Although distinctive progress has been made in safety management applications over the last decades, construction industry still accounts for a considerable percentage of all workplace fatalities across the world. This study aims to predict occupational accident outcomes based on national data using machine learning (ML) methods coupled with several resampling strategies.
Design/methodology/approach
An occupational accident dataset recorded in Turkey was collected. To deal with the class imbalance between the numbers of nonfatal and fatal accidents, the dataset was pre-processed with random under-sampling (RUS), random over-sampling (ROS) and the synthetic minority over-sampling technique (SMOTE). In addition, random forest (RF), Naïve Bayes (NB), k-nearest neighbor (KNN) and artificial neural networks (ANNs) were employed as ML methods to predict accident outcomes.
Findings
The results highlighted that the RF outperformed other methods when the dataset was preprocessed with RUS. The permutation importance results obtained through the RF exhibited that the number of past accidents in the company, worker's age, material used, number of workers in the company, accident year, and time of the accident were the most significant attributes.
Practical implications
The proposed framework can be used in construction sites on a monthly-basis to detect workers who have a high probability to experience fatal accidents, which can be a valuable decision-making input for safety professionals to reduce the number of fatal accidents.
Social implications
Practitioners and occupational health and safety (OHS) departments of construction firms can focus on the most important attributes identified by analysis results to enhance the workers' quality of life and well-being.
Originality/value
The literature on accident outcome predictions is limited in terms of dealing with imbalanced dataset through integrated resampling techniques and ML methods in the construction safety domain. A novel utilization plan was proposed and enhanced by the analysis results.
Jasleen Kaur and Khushdeep Dharni
Abstract
Purpose
The stock market generates massive databases of various financial companies that are highly volatile and complex. To forecast daily stock values of these companies, investors frequently use technical analysis or fundamental analysis. Data mining techniques coupled with fundamental and technical analysis types have the potential to give satisfactory results for stock market prediction. In the current paper, an effort is made to investigate the accuracy of stock market predictions by using the combined approach of variables from technical and fundamental analysis for the creation of a data mining predictive model.
Design/methodology/approach
We chose 381 companies from the National Stock Exchange of India's CNX 500 index and conducted a two-stage data analysis. The first stage involved identifying key fundamental variables and constructing a portfolio based on that analysis. Artificial neural network (ANN), support vector machines (SVM) and decision tree J48 were used to build the models. The second stage entails applying technical analysis to forecast price movements in the companies included in the portfolios. ANN and SVM techniques were used to create predictive models for all companies in the portfolios. We also estimated returns using trading decisions based on the model's output and then compared them to buy-and-hold returns and the return of the NIFTY 50 index, which served as a benchmark.
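The return-comparison step at the end of this pipeline reduces to simple compounding arithmetic, sketched below: hold the stock only over days the model predicts "up", compound those daily returns, and compare with buy-and-hold over the same window. The price series and signal vector are made up for illustration.

```python
def strategy_return(prices, signals):
    """Compound return when holding only on predicted-up days.

    signals[i] = 1 means "hold over day i -> i+1", 0 means stay out.
    """
    r = 1.0
    for i in range(len(prices) - 1):
        if signals[i]:
            r *= prices[i + 1] / prices[i]
    return r - 1.0

def buy_and_hold_return(prices):
    return prices[-1] / prices[0] - 1.0

prices = [100, 103, 101, 104, 102, 107]   # illustrative daily closes
signals = [1, 0, 1, 0, 1]                 # hypothetical model output
strat = strategy_return(prices, signals)
bench = buy_and_hold_return(prices)
```

A perfect classifier sidesteps the down days, so its compounded return exceeds buy-and-hold; the study reports the same qualitative result for its model-driven portfolios against the NIFTY benchmark.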
Findings
The results show that the returns of both the portfolios are higher than the benchmark buy-and-hold strategy return. It can be concluded that data mining techniques give better results, irrespective of the type of stock, and have the ability to make up for poor stocks. The comparison of returns of portfolios with the return of NIFTY as a benchmark also indicates that both the portfolios are generating higher returns as compared to the return generated by NIFTY.
Originality/value
As stock prices are influenced by both technical and fundamental indicators, the current paper explored the combined effect of technical analysis and fundamental analysis variables for Indian stock market prediction. Further, the results obtained by individual analysis have also been compared. The proposed method under study can also be utilized to determine whether to hold stocks for the long or short term using trend-based research.
Argaw Gurmu and Mani Pourdadash Miri
Abstract
Purpose
Several factors influence the costs of buildings. Thus, identifying the cost significant factors can assist to improve the accuracy of project cost forecasts during the planning phase. This paper aims to identify the cost significant parameters and explore the potential for improving the accuracy of cost forecasts for buildings using machine learning techniques and large data sets.
Design/methodology/approach
The Australian State of Victoria Building Authority data sets, which comprise various parameters such as cost of the buildings, materials used, gross floor areas (GFA) and type of buildings, have been used. Five different machine learning regression models, such as decision tree, linear regression, random forest, gradient boosting and k-nearest neighbor were used.
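The simplest of the listed models, linear regression, together with the R² score the Findings report, can be sketched in a few lines: ordinary least squares on a single feature (GFA against cost) and the coefficient of determination of the fit. The data points are invented stand-ins, not Victorian Building Authority records.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r2_score(ys, preds):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

gfa = [120, 150, 200, 250, 300, 420]      # m2, illustrative
cost = [260, 300, 410, 480, 600, 790]     # $k, illustrative
m, b = fit_line(gfa, cost)
r2 = r2_score(cost, [m * x + b for x in gfa])
```

On real, noisier multi-feature data a single linear fit captures far less variance, which is consistent with linear regression scoring lowest (R² = 0.38) against the tree-based models in the study.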
Findings
The findings of the study showed that, among the chosen models, linear regression provided the worst outcome (R² = 0.38), while decision tree (R² = 0.66) and gradient boosting (R² = 0.62) provided the best outcomes. Among the analyzed features, the class of buildings explained about 34% of the variation, followed by GFA and walls, which both accounted for 26% of the variation.
Originality/value
The output of this research can provide important information regarding the factors that have major impacts on the costs of buildings in the Australian construction industry. The study revealed that the cost of buildings is highly influenced by their classes.
Meltem Aksoy, Seda Yanık and Mehmet Fatih Amasyali
Abstract
Purpose
When a large number of project proposals are evaluated to allocate available funds, grouping them based on their similarities is beneficial. Current approaches to group proposals are primarily based on manual matching of similar topics, discipline areas and keywords declared by project applicants. When the number of proposals increases, this task becomes complex and requires excessive time. This paper aims to demonstrate how to effectively use the rich information in the titles and abstracts of Turkish project proposals to group them automatically.
Design/methodology/approach
This study proposes a model that effectively groups Turkish project proposals by combining word embedding, clustering and classification techniques. The proposed model uses FastText, BERT and term frequency/inverse document frequency (TF/IDF) word-embedding techniques to extract terms from the titles and abstracts of project proposals in Turkish. The extracted terms were grouped using both the clustering and classification techniques. Natural groups contained within the corpus were discovered using k-means, k-means++, k-medoids and agglomerative clustering algorithms. Additionally, this study employs classification approaches to predict the target class for each document in the corpus. To classify project proposals, various classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), artificial neural networks (ANN), classification and regression trees (CART) and random forest (RF), are used. Empirical experiments were conducted to validate the effectiveness of the proposed method by using real data from the Istanbul Development Agency.
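The TF/IDF-plus-clustering route in this pipeline can be sketched on tiny English stand-in "proposal" texts (the study works on Turkish titles and abstracts): embed each document as a TF-IDF vector, then assign it to the nearest centroid by cosine similarity, a bare-bones single pass of k-means with hand-picked seeds.

```python
import math
from collections import Counter

docs = ["solar energy grid project", "wind energy storage project",
        "museum heritage tourism project", "cultural tourism museum plan"]
tokens = [d.split() for d in docs]
vocab = sorted({w for t in tokens for w in t})
df = Counter(w for t in tokens for w in set(t))   # document frequency

def tfidf(t):
    """TF-IDF vector of a token list over the shared vocabulary."""
    tf = Counter(t)
    return [tf[w] / len(t) * math.log(len(docs) / df[w]) for w in vocab]

vecs = [tfidf(t) for t in tokens]

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# One assignment pass, seeding a centroid from each apparent theme
centroids = [vecs[0], vecs[2]]
labels = [max(range(2), key=lambda c: cos(v, centroids[c])) for v in vecs]
```

A full implementation would re-estimate centroids until convergence (k-means/k-means++), or swap the TF-IDF embedding for FastText or BERT vectors as the study does; the grouping logic stays the same.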
Findings
The results show that the generated word embeddings can effectively represent proposal texts as vectors, and can be used as inputs for clustering or classification algorithms. Using clustering algorithms, the document corpus is divided into five groups. In addition, the results demonstrate that the proposals can easily be categorized into predefined categories using classification algorithms. SVM-Linear achieved the highest prediction accuracy (89.2%) with the FastText word embedding method. A comparison of manual grouping with automatic classification and clustering results revealed that both classification and clustering techniques have a high success rate.
Research limitations/implications
The proposed model automatically benefits from the rich information in project proposals and significantly reduces numerous time-consuming tasks that managers must perform manually. Thus, it eliminates the drawbacks of the current manual methods and yields significantly more accurate results. In the future, additional experiments should be conducted to validate the proposed method using data from other funding organizations.
Originality/value
This study presents the application of word embedding methods to effectively use the rich information in the titles and abstracts of Turkish project proposals. Existing research studies focus on the automatic grouping of proposals; traditional frequency-based word embedding methods are used for feature extraction methods to represent project proposals. Unlike previous research, this study employs two outperforming neural network-based textual feature extraction techniques to obtain terms representing the proposals: BERT as a contextual word embedding method and FastText as a static word embedding method. Moreover, to the best of our knowledge, there has been no research conducted on the grouping of project proposals in Turkish.
Khaled Halteh and Milind Tiwari
Abstract
Purpose
The prevention of fraudulent activities, particularly within a financial context, is of paramount significance in all spheres, as it not only impacts the sustainability of corporate entities but also has the potential to have a broader economy-wide impact. This paper aims to focus on the dual implications associated with financial distress: first, the temptation to launder funds due to financial distress; and second, the potential for illicit activities, such as fraud, money laundering or terror financing, to give rise to financial distress.
Design/methodology/approach
The paper examines the literature on financial distress and uses theories of financial crime to establish a link between financial distress and financial crime.
Findings
In recent years, there has been a surge in corporate financial distress, particularly in the aftermath of concurrent crises such as the COVID-19 pandemic and the Russia–Ukraine war. Through a comprehensive examination of literature pertaining to financial distress and financial crime, this study identifies a proclivity towards fraudulent conduct arising from instances of financial distress. Moreover, the engagement in such illicit activities subsequently exacerbates the financial distress. An analysis of the relationship between financial crime and financial distress reveals the existence of a vicious cycle between the two.
Originality/value
The results of this study have the potential to advance understanding of the relationship between financial distress and financial crime, which has been previously underexplored.
Ackmez Mudhoo, Gaurav Sharma, Khim Hoong Chu and Mika Sillanpää
Abstract
Adsorption parameters (e.g. Langmuir constant, mass transfer coefficient and Thomas rate constant) are involved in the design of aqueous-media adsorption treatment units. However, the classic approach to estimating such parameters is perceived to be imprecise. Herein, the essential features and performances of the ant colony, bee colony and elephant herd optimisation approaches are introduced to the experimental chemist and chemical engineer engaged in adsorption research for aqueous systems. Key research and development directions, believed to harness these algorithms for real-scale water treatment (which falls within the wide-ranging coverage of the Sustainable Development Goal 6 (SDG 6) 'Clean Water and Sanitation for All'), are also proposed. The ant colony, bee colony and elephant herd optimisations offer higher precision and accuracy than the classic estimation approach, and are particularly efficient at finding the global optimum solution. It is hoped that the discussions can stimulate both the experimental chemist and chemical engineer to delineate the progress achieved so far and collaborate further to devise strategies for integrating these intelligent optimisations in the design and operation of real multicomponent multi-complexity adsorption systems for water purification.
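A bee-colony-style search for an adsorption parameter fit can be sketched as below: a population of candidate (qm, b) pairs for the Langmuir isotherm q = qm·b·C/(1 + b·C) is refined by perturbing each candidate toward a random partner and keeping the change only if the sum of squared errors improves. This keeps only the employed-bee phase of a full artificial bee colony (no onlooker or scout phases), and the data, bounds and colony size are illustrative, not from the review.

```python
import random

rng = random.Random(1)

# Synthetic equilibrium data from known Langmuir parameters
C = [0.5, 1, 2, 4, 8, 16]                  # equilibrium concentrations
TRUE_QM, TRUE_B = 10.0, 0.8
Q = [TRUE_QM * TRUE_B * c / (1 + TRUE_B * c) for c in C]

def sse(p):
    """Sum of squared errors of a (qm, b) candidate against the data."""
    qm, b = p
    return sum((q - qm * b * c / (1 + b * c)) ** 2 for c, q in zip(C, Q))

lo, hi = [1.0, 0.01], [20.0, 5.0]          # search bounds for (qm, b)
food = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(20)]

for _ in range(300):
    for i, src in enumerate(food):         # employed-bee phase
        j = rng.randrange(2)               # perturb one dimension
        partner = rng.choice(food)
        trial = src[:]
        trial[j] += rng.uniform(-1, 1) * (src[j] - partner[j])
        trial[j] = min(max(trial[j], lo[j]), hi[j])
        if sse(trial) < sse(src):          # greedy selection
            food[i] = trial

best = min(food, key=sse)                  # fitted (qm, b) estimate
```

The appeal over classic linearized estimation is that such population searches need no derivatives or linearizing transforms and tend to locate the global optimum of the error surface.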
Cemalettin Akdoğan, Tolga Özer and Yüksel Oğuz
Abstract
Purpose
Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of agricultural products. Pesticides can be used to improve agricultural land products. This study aims to make the spraying of cherry trees more effective and efficient with the designed artificial intelligence (AI)-based agricultural unmanned aerial vehicle (UAV).
Design/methodology/approach
Two approaches have been adopted for the AI-based detection of cherry trees. In Approach 1, YOLOv5, YOLOv7 and YOLOv8 models are trained with 70, 100 and 150 epochs. In Approach 2, a new method is proposed to improve the performance metrics obtained in Approach 1: Gaussian, wavelet transform (WT) and histogram equalization (HE) preprocessing techniques were applied to the generated data set. The best-performing models from Approach 1 and Approach 2 were used in the real-time test application with the developed agricultural UAV.
Findings
In Approach 1, the best F1 score was 98% in 100 epochs with the YOLOv5s model. In Approach 2, the best F1 score and mAP values were obtained as 98.6% and 98.9% in 150 epochs, with the YOLOv5m model with an improvement of 0.6% in the F1 score. In real-time tests, the AI-based spraying drone system detected and sprayed cherry trees with an accuracy of 66% in Approach 1 and 77% in Approach 2. It was revealed that the use of pesticides could be reduced by 53% and the energy consumption of the spraying system by 47%.
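Of the preprocessing techniques in Approach 2, histogram equalization is easy to sketch: the cumulative histogram of pixel intensities is remapped so that the output spreads across the full 8-bit range, boosting contrast before detection. The tiny low-contrast image below is illustrative; the study applies HE to its cherry-tree data set.

```python
def equalize(img, levels=256):
    """Histogram-equalize a 2-D list of 8-bit grayscale pixels."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:                      # cumulative distribution
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    def remap(p):                       # classic HE remapping formula
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]

low_contrast = [[100, 102, 104], [101, 103, 105], [100, 104, 102]]
stretched = equalize(low_contrast)      # values now span 0..255
```

Contrast stretching of this kind is one plausible reason the preprocessed data set in Approach 2 yielded slightly better F1 and mAP scores.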
Originality/value
An original data set was created by designing an agricultural drone to detect and spray cherry trees using AI. YOLOv5, YOLOv7 and YOLOv8 models were used to detect and classify cherry trees. The results of the performance metrics of the models are compared. In Approach 2, a method including HE, Gaussian and WT is proposed, and the performance metrics are improved. The effect of the proposed method in a real-time experimental application is thoroughly analyzed.