Search results
1 – 10 of 74
Andreas Gschwentner, Manfred Kaltenbacher, Barbara Kaltenbacher and Klaus Roppert
Abstract
Purpose
For accurate numerical simulations of electrical drives, precise knowledge of the local magnetic material properties is of utmost importance. Due to the various manufacturing steps, e.g. heat treatment or cutting techniques, the magnetic material properties can vary strongly on a local level, and the assumption of homogenized global material parameters is no longer feasible. This paper aims to present the general methodology and two different solution strategies for determining the local magnetic material properties using reference and simulation data.
Design/methodology/approach
The general methodology combines measurement, numerical simulation and the solution of an inverse problem. A sensor-actuator system is used to characterize electrical steel sheets locally. Based on the measurement data and results from the finite element simulation, the inverse problem is solved with two different strategies: the first is a quasi-Newton method (QNM) using Broyden's update formula to approximate the Jacobian, and the second is an adjoint method. To compare both methods regarding convergence and efficiency, an artificial example with a linear material model is considered.
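Broyden's update avoids recomputing (and refactorizing) the Jacobian at every iteration by applying a rank-one correction to the previous approximation. The following is a minimal sketch of the idea on a small artificial 2-D root-finding problem; the function, starting point and tolerances are illustrative, not taken from the paper:

```python
def broyden_solve(F, x0, B0, tol=1e-10, max_iter=50):
    """Quasi-Newton root finding with Broyden's 'good' Jacobian update.

    F  : callable returning the 2-component residual vector F(x)
    x0 : initial guess; B0 : initial 2x2 Jacobian approximation.
    """
    x = list(x0)
    B = [row[:] for row in B0]
    for _ in range(max_iter):
        f = F(x)
        if max(abs(v) for v in f) < tol:
            break
        # Solve B * dx = -f for the 2x2 case via Cramer's rule
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        dx = [(-f[0] * B[1][1] + f[1] * B[0][1]) / det,
              (-f[1] * B[0][0] + f[0] * B[1][0]) / det]
        x_new = [x[0] + dx[0], x[1] + dx[1]]
        df = [a - b for a, b in zip(F(x_new), f)]
        # Broyden's rank-one update: B += ((df - B dx) dx^T) / (dx . dx)
        Bdx = [B[0][0] * dx[0] + B[0][1] * dx[1],
               B[1][0] * dx[0] + B[1][1] * dx[1]]
        denom = dx[0] ** 2 + dx[1] ** 2
        for i in range(2):
            for j in range(2):
                B[i][j] += (df[i] - Bdx[i]) * dx[j] / denom
        x = x_new
    return x

# Toy system with root at (sqrt(2), sqrt(2)); B0 is the exact Jacobian at x0
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]]
root = broyden_solve(F, [1.0, 2.0], [[2.0, 4.0], [1.0, -1.0]])
```

After the first step only residual evaluations are needed; the Jacobian approximation is carried along and cheaply corrected, which is the appeal of the QNM when each forward simulation is expensive.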
Findings
The QNM and the adjoint method show similar convergence behavior for two different cutting-edge effects. Furthermore, considering a priori information improved the convergence rate, but no impact on the stability or the remaining error was observed.
Originality/value
The presented methodology enables a fast and simple determination of the local magnetic material properties of electrical steel sheets without the need for a large number of samples or special preparation procedures.
Daniel Šandor and Marina Bagić Babac
Abstract
Purpose
Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using the approach of machine and deep learning.
Design/methodology/approach
For the purpose of sarcasm detection, machine and deep learning models were applied to a data set of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector classification and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
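As a toy illustration of the classical machine learning baselines, the sketch below trains a tiny bag-of-words logistic regression by stochastic gradient descent in pure Python. The vocabulary, data and hyperparameters are illustrative only; a real pipeline at the scale of 1.3 million comments would use a library such as scikit-learn:

```python
import math
from collections import Counter

def train_logreg(texts, labels, epochs=200, lr=0.5):
    """Tiny bag-of-words logistic regression trained by plain SGD."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    def vec(t):
        counts = Counter(t.lower().split())
        return [counts.get(w, 0) for w in vocab]  # unknown words are ignored
    X = [vec(t) for t in texts]
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(X, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of the log loss w.r.t. z
            b -= lr * g
            for i, xi in enumerate(x):
                if xi:
                    w[i] -= lr * g * xi
    def predict(text):
        z = b + sum(wi * xi for wi, xi in zip(w, vec(text)))
        return 1 if z > 0 else 0
    return predict

# Illustrative toy labels: 1 = sarcastic, 0 = not sarcastic
texts = ["oh great another monday", "wow i totally love waiting",
         "great movie", "i love this song"]
labels = [1, 1, 0, 0]
predict = train_logreg(texts, labels)
```

The limitation the abstract highlights is visible even here: "great" and "love" occur in both classes, so surface word counts alone carry little sarcasm signal, which is why context-aware models such as BERT fare better.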
Findings
The performance of machine and deep learning models was compared on the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely a BERT-based model, outperformed the other machine and deep learning models.
Originality/value
This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.
Mengyang Gao, Jun Wang and Ou Liu
Abstract
Purpose
Given the critical role of user-generated content (UGC) in e-commerce, exploring various aspects of UGC can aid in understanding user purchase intention and commodity recommendation. Therefore, this study investigates the impact of UGC on purchase decisions and proposes new recommendation models based on sentiment analysis, which are verified in Douban, one of the most popular UGC websites in China.
Design/methodology/approach
After verifying the relationship between various factors and product sales, this study proposes two models, a sentiment-based collaborative filtering recommendation model (SCF) and a sentiment-based hidden factors topics recommendation model (SHFT), by combining the traditional collaborative filtering (CF) and hidden factors topics (HFT) models with sentiment analysis.
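The abstract does not give the exact SCF formulation, but the core idea of sentiment-aware collaborative filtering can be sketched as follows: blend each star rating with a sentiment score extracted from the review text, then run a standard similarity-weighted CF prediction over the blended profiles. All names and the blending weight `alpha` below are illustrative assumptions, not the paper's parameters:

```python
def blend_sentiment(ratings, sentiments, alpha=0.7):
    """Blend explicit star ratings (1-5) with review sentiment scores (0-1).
    The sentiment score is rescaled to the 1-5 range before mixing."""
    return {item: alpha * ratings[item] + (1 - alpha) * (1 + 4 * sentiments[item])
            for item in ratings}

def cosine(u, v):
    """Cosine similarity between two sparse item->rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    du = sum(x * x for x in u.values()) ** 0.5
    dv = sum(x * x for x in v.values()) ** 0.5
    return num / (du * dv)

def predict_rating(target, item, profiles):
    """User-based CF prediction for `item` from sentiment-blended profiles."""
    num = den = 0.0
    for other, prof in profiles.items():
        if other == target or item not in prof:
            continue
        w = cosine(profiles[target], prof)
        num += w * prof[item]
        den += abs(w)
    return num / den if den else None

# Toy data: users, star ratings and per-review sentiment scores
ratings = {"u1": {"A": 5, "B": 3}, "u2": {"A": 4, "B": 2, "C": 5},
           "u3": {"A": 1, "B": 5, "C": 2}}
sentiments = {"u1": {"A": 0.9, "B": 0.4}, "u2": {"A": 0.8, "B": 0.3, "C": 0.9},
              "u3": {"A": 0.1, "B": 0.9, "C": 0.2}}
profiles = {u: blend_sentiment(ratings[u], sentiments[u]) for u in ratings}
estimate = predict_rating("u1", "C", profiles)
```

The design choice is that sentiment corrects cases where the star rating and the review text disagree, which is exactly the UGC signal the study argues influences purchase intention.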
Findings
The results indicate that sentiment significantly influences purchase intention. Furthermore, the proposed sentiment-based recommendation models outperform traditional CF and HFT in terms of mean absolute error (MAE) and root mean square error (RMSE). Moreover, the two models yield different outcomes for various product categories, providing actionable insights for organizers to implement more precise recommendation strategies.
Practical implications
The findings of this study advocate incorporating UGC sentiment factors into websites to heighten recommendation accuracy. Additionally, different recommendation strategies can be employed for different product types.
Originality/value
This study introduces a novel perspective to the recommendation algorithm field. It not only validates the impact of UGC sentiment on purchase intention but also evaluates the proposed models with real-world data. The study provides valuable insights for managerial decision-making aimed at enhancing recommendation systems.
Benna Hu, Laifu Wen and Xuemei Zhou
Abstract
Purpose
Vertical electrical sounding (VES) and Rayleigh wave exploration are widely used to explore near-surface structure, but both have limitations. This study aims to make full use of the advantages of the two methods, reduce the non-uniqueness of single-method inversion and improve inversion accuracy. To this end, a nonlinear joint inversion method for VES and Rayleigh wave data based on an improved differential evolution (DE) algorithm is proposed.
Design/methodology/approach
Based on the DE algorithm, a new initialization strategy was proposed. Taking an AK-type model with a high-velocity interlayer and an HA-type model with a low-velocity interlayer near the surface as examples, the inversion results of different methods were compared and analyzed. The proposed method was then applied to field data from Chengde, Hebei Province, China, where the stratum structure was accurately depicted and verified by drilling.
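The paper's specific initialization strategy is not detailed in the abstract, but a standard DE/rand/1/bin minimizer with a pluggable initialization hook can be sketched as follows; all parameter values here are illustrative defaults:

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.6, CR=0.9,
                           generations=200, init=None, seed=0):
    """Plain DE/rand/1/bin minimiser. `init` lets a caller plug in a custom
    initialisation strategy, the component the paper improves (its exact
    strategy differs from the uniform default used here)."""
    rng = random.Random(seed)
    dim = len(bounds)
    if init is None:  # default: uniform sampling within the bounds
        init = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [init() for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: three distinct donors, none equal to the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # forced crossover dimension
            trial = []
            for j in range(dim):
                if j == jr or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Sanity check on a 2-D sphere function (minimum 0 at the origin)
sphere = lambda x: sum(v * v for v in x)
best, best_cost = differential_evolution(sphere, [(-5.0, 5.0)] * 2)
```

In a joint-inversion setting, `f` would be a misfit combining the VES and Rayleigh wave residuals, and a smarter `init` seeds the population near geologically plausible layered models.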
Findings
The synthetic and field data results showed that joint inversion of VES and Rayleigh wave data based on the improved DE algorithm effectively improves the interpretation accuracy of single-method inversion and has strong stability and generalization ability in near-surface engineering problems.
Originality/value
A joint inversion method of VES and Rayleigh wave data based on improved DE algorithm is proposed, which can improve the accuracy of single-method inversion.
Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen
Abstract
Purpose
With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although existing models handle such problems reasonably well, they still have shortcomings in some respects. The purpose of this paper is to improve the accuracy of credit assessment models.
Design/methodology/approach
In this paper, three different stages are used to improve the classification performance of long short-term memory (LSTM) networks, so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to eliminate the class imbalance. In the second stage, ResNet is used for feature extraction, followed by a two-layer LSTM for learning, to strengthen the neural network's ability to mine and exploit deep information. Finally, model performance is improved by using the IDWPSO algorithm to optimize the neural network's hyperparameters.
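The first stage builds on SMOTE, which synthesizes new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. Below is a minimal sketch of plain SMOTE; the K-Means clustering step that K-Means-SMOTE adds (to oversample only within safe clusters) is omitted for brevity:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Plain SMOTE oversampling: each synthetic point lies on the segment
    between a random minority sample and one of its k nearest minority
    neighbours. (K-Means-SMOTE would first cluster and pick clusters.)"""
    rng = random.Random(seed)
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: dist2(p, q))[:k]
        q = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([x + gap * (y - x) for x, y in zip(p, q)])
    return synthetic

# Toy minority class in 2-D; generate 10 synthetic samples
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote(minority, n_new=10)
```

Because each new point is an interpolation, the synthetic samples stay inside the region spanned by the minority class, which is what makes SMOTE safer than simple duplication on ratios as extreme as 700:1.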
Findings
On two unbalanced datasets (category ratios of 700:1 and 3:1 respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. It was demonstrated that the multi-stage improved model showed a more significant advantage in evaluating the imbalanced credit dataset.
Originality/value
In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.
Mingke Gao, Zhenyu Zhang, Jinyuan Zhang, Shihao Tang, Han Zhang and Tao Pang
Abstract
Purpose
Because of the advantages of reinforcement learning (RL), this study uses RL to train unmanned aerial vehicles (UAVs) to perform two tasks: target search and cooperative obstacle avoidance.
Design/methodology/approach
This study draws inspiration from the recurrent state-space model and recurrent models (RPM) to propose a simpler yet highly effective model called the unmanned aerial vehicles prediction model (UAVPM). The main objective is to assist in training the UAV representation model with a recurrent neural network, using the soft actor-critic algorithm.
Findings
This study proposes a generalized actor-critic framework consisting of three modules: representation, policy and value. This architecture serves as the foundation for training the UAVPM, which is designed to aid in training the recurrent representation using a transition model, a reward recovery model and an observation recovery model. Unlike traditional approaches that rely solely on reward signals, this design also incorporates temporal information and allows extra knowledge or information from virtual training environments to be included. This study designs UAV target search and UAV cooperative obstacle avoidance tasks, and the algorithm outperforms the baselines in both environments.
Originality/value
It is important to note that UAVPM does not play a role in the inference phase. This means that the representation model and policy remain independent of UAVPM. Consequently, this study can introduce additional “cheating” information from virtual training environments to guide the UAV representation without concerns about its real-world existence. By leveraging historical information more effectively, this study enhances UAVs’ decision-making abilities, thus improving the performance of both tasks at hand.
Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao
Abstract
Purpose
The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents and providing accurate traffic information predictions. It also helps reduce overall carbon dioxide emissions and improves overall transportation quality.
Design/methodology/approach
This study offers a real-time traffic model based on the analysis of data from numerous sensors. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporates data from road sensors as well as a variety of other sources. Because it is difficult to capture and process large amounts of sensor data in real time, the sensor data are consumed by streaming analytics platforms built on big data technologies and then processed using a range of deep learning and machine learning techniques.
Findings
This study fills a gap in the data analytics sector by delivering a more accurate and trustworthy model that uses internet of things sensor data and other data sources. The method can also assist organizations such as transit agencies and public safety departments in making strategic decisions by being incorporated into their platforms.
Research limitations/implications
The model has a significant limitation: its predictions for the period after January 2020 are not particularly accurate. This, however, reflects not a flaw in the model but the disruption caused by the COVID-19 pandemic, which impacted the traffic scenario and produced erratic data for the period after February 2020. Once circumstances return to normal, the authors are confident in the model's ability to produce accurate forecasts.
Practical implications
To help users decide when to travel, this study aimed to pinpoint the causes of traffic congestion on Bay Area highways and to forecast real-time traffic speeds. The authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and used multiple models to determine the attributes that most influence traffic speed. The resulting model forecasts traffic speed while accounting for external variables such as weather and incident data, with good accuracy and generalizability. A graphical user interface lets users check congestion at a specific location on a specific day and has been designed to be readily extended as the project's scope and usefulness grow. The Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. Training the models on five years of data (2015–2019) and forecasting outcomes for 2020 produced excellent results, and the algorithm made highly accurate predictions when tested on data from January 2020. The model delivers accurate traffic speed forecasts for California's four main freeways (Freeway 101, I-680, 880 and 280) for a specific place on a certain date, and the scalable model outperforms the vast majority of earlier models created by other scholars in the field. Extending this programme across the entire state of California would help the government better plan and implement new transportation projects.
Social implications
To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” account for 25% and “work zones” account for 10% (Traffic Congestion Statistics). As a result, incident data must be considered for analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the results predicted by the proposed model would be more accurate, and commuters who need to schedule ahead of time for work would benefit greatly.
Originality/value
The proposed work allows users to choose the optimum time and mode of transportation. The underlying idea behind this model is that the longer a car spends on the road, the more it contributes to traffic congestion, so the proposed system encourages users to reach their destination quickly. Congestion is an indicator that public transportation needs to be expanded. Using this methodology, the optimum route is compared to other kinds of public transit (Greenfield, 2014): if the commute time is comparable to that of private car transportation during peak hours, consumers should take public transportation.
Lenka Papíková and Mário Papík
Abstract
Purpose
The European Parliament adopted a new directive on gender balance in corporate boards: by 2026, companies must fill 40% of non-executive director positions, or 33% of all director positions, with members of the underrepresented sex. Therefore, this study aims to analyze gender diversity (GD) on boards of directors and in shareholder structures, and its impact on the likelihood of company bankruptcy during the COVID-19 pandemic.
Design/methodology/approach
The data sample consists of 1,351 companies for 2019 and 2020, of which 173 were large, 351 medium-sized and 827 small companies. Three bankruptcy indicators were tested for each company size, and extreme gradient boosting (XGBoost) and logistic regression models were developed. These models were then cross-validated using a 10-fold approach.
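The evaluation pipeline rests on two standard building blocks, 10-fold splitting and the area under the ROC curve, both of which can be sketched in a few lines. This is a generic illustration, not the authors' code:

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds;
    each fold serves once as the held-out test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability that
    a random positive is scored above a random negative, ties counting 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because bankruptcy data are heavily imbalanced, the rank-based AUC is a more informative comparison metric here than raw accuracy, which is presumably why the study reports it for both XGBoost and logistic regression.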
Findings
XGBoost models achieved an area under the curve (AUC) of over 98%, which is 25% higher than the AUC achieved by logistic regression. Prediction models with GD features performed slightly better than those without them. This study also indicates the existence of a critical mass of between 30% and 50% female representation, which decreases the probability of bankruptcy for small and medium companies. Furthermore, female representation above 50% in ownership structures decreases bankruptcy likelihood.
Originality/value
This is a pioneering study exploring GD topics through the application of ensembled machine learning methods. Moreover, the study analyzes not only the GD of boards but also that of shareholders. A highly innovative aspect is the GD analysis by company size, performed in one study and considering the COVID-19 pandemic perspective.
Nehal Elshaboury, Tarek Zayed and Eslam Mohammed Abdelkader
Abstract
Purpose
Water pipes degrade over time due to a variety of pipe-related, soil-related, operational and environmental factors. Hence, municipalities need to implement effective maintenance and rehabilitation strategies for water pipes based on reliable deterioration models and cost-effective inspection programs. In light of the foregoing, the paramount objective of this research study is to develop condition assessment and deterioration prediction models for saltwater pipes in Hong Kong.
Design/methodology/approach
As a prerequisite to the development of condition assessment models, the spherical fuzzy analytic hierarchy process (SFAHP) is harnessed to analyze the relative importance weights of deterioration factors. Afterward, the relative importance weights of deterioration factors, coupled with their effective values, are leveraged using the measurement of alternatives and ranking according to the compromise solution (MARCOS) algorithm to analyze the performance condition of water pipes. A condition rating system is then designed based on the generalized entropy-based probabilistic fuzzy C-means (GEPFCM) algorithm. A set of fourth-order multiple regression functions is constructed to capture the degradation trends in the condition of pipelines over time, covering their disparate characteristics.
Findings
Analytical results demonstrated that the top five influential deterioration factors comprise age, material, traffic, soil corrosivity and material. In addition, it was derived that developed deterioration models accomplished correlation coefficient, mean absolute error and root mean squared error of 0.8, 1.33 and 1.39, respectively.
Originality/value
It can be argued that generated deterioration models can assist municipalities in formulating accurate and cost-effective maintenance, repair and rehabilitation programs.
Chong Wu, Xiaofang Chen and Yongjie Jiang
Abstract
Purpose
While the Chinese securities market is booming, the phenomenon of listed companies falling into financial distress is also emerging, which affects the operation and development of enterprises and also jeopardizes the interests of investors. Therefore, it is important to understand how to accurately and reasonably predict the financial distress of enterprises.
Design/methodology/approach
In the present study, ensemble feature selection (EFS) and improved stacking were used for financial distress prediction (FDP). Mutual information, analysis of variance (ANOVA), random forest (RF), genetic algorithms and recursive feature elimination (RFE) were chosen for EFS to select features. Since information may be lost when the base learners' results are fed directly into the meta-learner, the features with high importance were fed into the meta-learner as well. A screening layer was added to select the meta-learner with better performance. Finally, each learner was tuned to its optimal hyperparameters.
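One common way to combine several feature selectors, as EFS does, is majority voting over each selector's top-ranked features. A minimal sketch follows; the vote threshold, feature names and rankings are illustrative, and the paper's exact aggregation rule may differ:

```python
from collections import Counter

def ensemble_select(rankings, top_k, min_votes):
    """Majority-vote ensemble feature selection: each selector contributes
    its top_k features; keep any feature named by >= min_votes selectors."""
    votes = Counter()
    for ranking in rankings:
        votes.update(ranking[:top_k])
    return sorted(f for f, v in votes.items() if v >= min_votes)

# Hypothetical importance rankings from five selectors
# (stand-ins for mutual information, ANOVA, RF, GA and RFE)
rankings = [
    ["roa", "leverage", "cash_ratio"],
    ["roa", "cash_ratio", "growth"],
    ["leverage", "roa", "size"],
    ["cash_ratio", "size", "roa"],
    ["roa", "leverage", "turnover"],
]
selected = ensemble_select(rankings, top_k=3, min_votes=3)
```

Voting across heterogeneous selectors damps the bias of any single criterion, which is the usual rationale for ensembling the five methods the study lists.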
Findings
An empirical study was conducted with a sample of A-share listed companies in China. The F1-score of the model constructed using the features screened by EFS reached 84.55%, representing an improvement of 4.37% compared to the original features. To verify the effectiveness of improved stacking, benchmark model comparison experiments were conducted. Compared to the original stacking model, the accuracy of the improved stacking model was improved by 0.44%, and the F1-score was improved by 0.51%. In addition, the improved stacking model had the highest area under the curve (AUC) value (0.905) among all the compared models.
Originality/value
Compared to previous models, the proposed FDP model has better performance, thus bridging the research gap of feature selection. The present study provides new ideas for stacking improvement research and a reference for subsequent research in this field.