Search results
1 – 10 of 218
Abstract
Purpose
This study aims to construct a sentiment series generation method for danmu comments based on deep learning, and explore the features of sentiment series after clustering.
Design/methodology/approach
This study consisted of two main parts: danmu comment sentiment series generation and clustering. In the first part, the authors proposed a sentiment classification model based on BERT fine-tuning to quantify the sentiment polarity of danmu comments. To smooth the sentiment series, they applied methods such as comprehensive weighting. In the second part, the shape-based distance (SBD) K-shape method was used to cluster the actual collected data.
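The shape-based distance at the core of K-shape can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code, and the test series are hypothetical:

```python
import numpy as np

def sbd(x, y):
    """Shape-based distance: 1 minus the maximum normalized cross-correlation."""
    # z-normalize both series so the distance ignores scale and offset
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    # cross-correlate over all lags, normalized by the product of norms
    ncc = np.correlate(x, y, mode="full") / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 - ncc.max()
```

Because the maximum is taken over all lags, a time-shifted copy of a sentiment curve stays close to the original, which is why SBD-based clustering can tolerate the temporal phase shifts of danmu series.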
Findings
The filtered sentiment series, or curves, of the microfilms on the Bilibili website could be divided into four major categories. The first three types of sentiment curves each contain an apparently stable time interval, while the fourth type shows a clear overall trend of fluctuation. In addition, it was found that “disputed points” or “highlights” are likely to appear at the beginning and the climax of films, producing significant changes in the sentiment curves. The clustering results show a significant difference in user participation, with the second type prevailing over the others.
Originality/value
The authors' sentiment classification model based on BERT fine-tuning outperformed the traditional sentiment lexicon method, providing a reference for applying deep learning and transfer learning to danmu comment sentiment analysis. The BERT fine-tuning–SBD-K-shape algorithm can weaken the effect of non-regular noise and temporal phase shifts in danmu text.
B.V. Binoy, M.A. Naseer and P.P. Anil Kumar
Abstract
Purpose
Land value varies at a micro level depending on the location’s economic, geographical and political determinants. The purpose of this study is to present a comprehensive assessment of the determinants affecting land value in the Indian city of Thiruvananthapuram in the state of Kerala.
Design/methodology/approach
The global influence of the 20 identified explanatory variables on land value is measured using the traditional hedonic price modeling approach. The localized spatial variations of the influencing parameters are examined using a non-parametric regression method, geographically weighted regression (GWR). The study used advertised land prices collected from web sources and screened through field surveys.
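The two estimation steps can be illustrated in miniature: a global hedonic fit by ordinary least squares, then a GWR-style local fit that weights observations by a Gaussian kernel around a focal location. All variables, coordinates and coefficients below are synthetic assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
dist_cbd = rng.uniform(0, 10, n)   # hypothetical: distance to CBD, km
access = rng.uniform(0, 1, n)      # hypothetical accessibility score
# hypothetical hedonic data-generating process for log land price
log_price = 12.0 - 0.08 * dist_cbd + 0.50 * access + rng.normal(0, 0.05, n)

# Global hedonic model: ordinary least squares on log price
X = np.column_stack([np.ones(n), dist_cbd, access])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Local (GWR-style) fit at one focal location:
# Gaussian kernel weights by spatial distance, then weighted least squares
loc = rng.uniform(0, 10, (n, 2))   # hypothetical parcel coordinates
focal = np.array([5.0, 5.0])
h = 2.0                            # kernel bandwidth
w = np.exp(-((loc - focal) ** 2).sum(axis=1) / (2 * h ** 2))
wsq = np.sqrt(w)[:, None]
beta_local, *_ = np.linalg.lstsq(X * wsq, log_price * wsq[:, 0], rcond=None)
```

Repeating the weighted fit at every location yields a surface of local coefficients, which is how GWR exposes the non-stationarity the study reports.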
Findings
Global regression results indicate that access to transportation facilities, commercial establishments, crime sources, wetland classification and disaster history have the strongest influence on land value in the study area. Local regression results demonstrate that the factors influencing land value are not stationary across the study area. Most variables have a different influence in Kazhakootam and the residential areas than in the central business district region.
Originality/value
This study confirms findings from previous studies and provides additional evidence on the spatial dynamics of land value creation. Notably, the advanced modeling approaches used in this research have received little attention in Indian property valuation studies. The outcomes have important implications for property value fixation in urban Kerala. The regional variation of land value within an urban agglomeration shows the need for a localized method of land value calculation.
Laura Lucantoni, Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua
Abstract
Purpose
The Overall Equipment Effectiveness (OEE) is considered a standard for measuring equipment productivity in terms of efficiency. Still, Artificial Intelligence solutions are rarely used for analyzing OEE results and identifying corrective actions. Therefore, the approach proposed in this paper aims to provide a new rule-based Machine Learning (ML) framework for OEE enhancement and the selection of improvement actions.
Design/methodology/approach
Association Rules (ARs) are used as a rule-based ML method for extracting knowledge from large volumes of data. First, the dominant loss class is identified, and traditional methodologies are combined with ARs for anomaly classification and prioritization. Once priority anomalies are selected, a detailed analysis investigates their influence on the OEE loss factors using ARs and Network Analysis (NA). A Deming Cycle is then used as a roadmap for applying the proposed methodology, testing and implementing proactive actions while monitoring the OEE variation.
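Support and confidence, the two measures behind association rules, can be sketched over hypothetical OEE loss "transactions"; the item names below are invented for illustration and do not come from the paper:

```python
# Each transaction: the set of loss anomalies co-occurring in one production record
transactions = [
    {"speed_loss", "minor_stops"},
    {"speed_loss", "quality_defect"},
    {"speed_loss", "minor_stops"},
    {"minor_stops"},
]

def support(itemset, txns):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in txns) / len(txns)

def confidence(antecedent, consequent, txns):
    """How often the consequent appears given the antecedent appears."""
    return support(antecedent | consequent, txns) / support(antecedent, txns)
```

A rule such as {speed_loss} -> {minor_stops} with high support and confidence is the kind of co-occurrence the framework mines to link priority anomalies to OEE loss factors.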
Findings
The proposed method was tested in an automotive company to validate the framework and measure its impact. In particular, the results highlight that the rule-based ML methodology for OEE improvement addressed seven anomalies within a year through appropriate proactive actions: on average, each action ensured an OEE gain of 5.4%.
Originality/value
The originality is related to the dual application of association rules in two different ways for extracting knowledge from the overall OEE. In particular, the co-occurrences of priority anomalies and their impact on asset Availability, Performance and Quality are investigated.
Changhai Tian and Shoushuai Zhang
Abstract
Purpose
The design goal for the tracking interval of high-speed railway trains in China is 3 min, but this is difficult to achieve, and the tracking interval is widely believed to be limited mainly by the tracking interval of train arrivals. If the train arrival tracking interval can be compressed, it will help China's high-speed railways achieve a 3-min train tracking interval. The goal of this article is to study how to compress the train arrival tracking interval.
Design/methodology/approach
By simulating the process of dense groups of trains arriving at a station and stopping, the headway between train arrivals at the station was calculated and the pattern of train arrival headways was obtained, changing the traditional understanding that the arrival headway is the main factor limiting the train tracking headway.
Findings
When trains run at high speed, the headway between trains is short, the station approach throat area is long and trains arrive at the station frequently, the arrival headway for the first group or first several groups of trains will exceed the tracking headway, but subsequent trains will have a headway equal to the arrival headway. This convergence characteristic is obtained by appropriately increasing the running time.
Originality/value
Given this pattern, there is no need to overly emphasize the impact of the train arrival headway on the overall train headway. This finding plays an important role in compressing train headways and improving high-speed railway capacity.
Mohammed Ayoub Ledhem and Warda Moussaoui
Abstract
Purpose
This paper aims to apply several data mining techniques for predicting the daily precision improvement of Jakarta Islamic Index (JKII) prices based on big data of symmetric volatility in Indonesia’s Islamic stock market.
Design/methodology/approach
This research uses big data mining techniques to predict the daily precision improvement of JKII prices by applying AdaBoost, K-nearest neighbor, random forest and artificial neural network techniques. It uses big data with symmetric volatility as inputs to the prediction model, with the closing prices of JKII as the target outputs of daily precision improvement. To choose the optimal prediction performance according to the criterion of lowest prediction errors, this research uses four metrics: mean absolute error, mean squared error, root mean squared error and R-squared.
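The four evaluation metrics can be computed directly. A minimal NumPy sketch follows, with toy arrays standing in for actual JKII predictions:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (MAE, MSE, RMSE, R-squared) for a set of predictions."""
    err = y_true - y_pred
    mae = np.abs(err).mean()                         # mean absolute error
    mse = (err ** 2).mean()                          # mean squared error
    rmse = np.sqrt(mse)                              # root mean squared error
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    return mae, mse, rmse, r2
```

Candidate models (AdaBoost, KNN, random forest, neural network) would each be scored with these four numbers, and the one with the lowest errors selected.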
Findings
The experimental results show that the optimal technique for predicting the daily precision improvement of JKII prices in Indonesia's Islamic stock market is AdaBoost, which generates the best prediction performance with the lowest prediction errors and extracts the most knowledge from the big data of symmetric volatility. In addition, the random forest technique is another robust option, as it delivers values close to the optimal performance of AdaBoost.
Practical implications
This research fills a gap in the literature, namely the absence of big data mining techniques in the prediction of Islamic stock markets, by delivering new operational techniques for predicting daily stock precision improvement. It also helps investors to manage optimal portfolios and to decrease the risk of trading in global Islamic stock markets based on big data mining of symmetric volatility.
Originality/value
This research is a pioneer in using big data mining of symmetric volatility in the prediction of an Islamic stock market index.
Khadija Echefaj, Abdelkabir Charkaoui, Anass Cherrafi, Anil Kumar and Sunil Luthra
Abstract
Purpose
The purpose of this study is to identify and prioritize capabilities and practices to ensure a resilient supply chain during an unexpected disruption. In addition, this study ranks maturity factors that influence the main capabilities identified.
Design/methodology/approach
This paper is conducted in three stages. First, capabilities and practices are extracted through a literature review. Second, capabilities and practices are ranked using the analytical hierarchical process method. Third, a gray technique for order preference by similarity to ideal solution method is used to rank maturity factors influencing capabilities.
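The AHP ranking step can be sketched with the geometric-mean approximation of the priority vector. This is an illustrative implementation assuming a consistent pairwise comparison matrix; the criteria are left unnamed:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    via the geometric-mean (row geometric mean) method."""
    g = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return g / g.sum()
```

For a perfectly consistent matrix the geometric-mean method reproduces the principal-eigenvector weights exactly; in practice a consistency ratio check would accompany it.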
Findings
The findings indicate that responsiveness, readiness, flexibility and adaptability are the most important capabilities for supply chain resilience. Also, commitment and communication are the highest maturity factors influencing resilience capabilities.
Research limitations/implications
The findings provide a hierarchical view of capabilities and practices that industries can use to increase resilience. Limitations of the paper relate to the sets of capabilities and practices considered and the number of experts consulted.
Practical implications
This paper highlights the importance of high-maturity practices in resilience capability adoption. The findings of this study will encourage decision-makers to increase the maturity of practices to build resilience against disruption.
Originality/value
The paper reveals that developing powerful capabilities, good practices and a high level of maturity improve supply chain resilience.
Abdul-Manan Sadick, Argaw Gurmu and Chathuri Gunarathna
Abstract
Purpose
Developing a reliable cost estimate at the early stage of construction projects is challenging due to inadequate project information. Most of the information during this stage is qualitative, posing additional challenges to achieving accurate cost estimates. Additionally, there is a lack of tools that use qualitative project information to forecast the budgets required for project completion. This research, therefore, aims to develop a model for setting project budgets (excluding land) during the pre-conception stage of residential buildings, when project information is mainly qualitative.
Design/methodology/approach
Due to the qualitative nature of project information at the pre-conception stage, a natural language processing model, DistilBERT (Distilled Bidirectional Encoder Representations from Transformers), was trained to predict the cost range of residential buildings at the pre-conception stage. The training and evaluation data included 63,899 building permit activity records (2021–2022) from the Victorian State Building Authority, Australia. The input data comprised the project description of each record, which included project location and basic material types (floor, frame, roofing, and external wall).
Findings
This research designed a novel tool for predicting the project budget based on preliminary project information. The model achieved 79% accuracy in classifying residential buildings into three cost classes ($100,000-$300,000, $300,000-$500,000 and $500,000-$1,200,000), with F1-scores of 0.85, 0.73 and 0.74, respectively. Additionally, the results show that the model learnt the contextual relationship between qualitative data, such as project location, and cost.
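The three-way label construction implied by these cost classes can be sketched as follows. The boundary handling (half-open intervals) is an assumption, since how boundary values are assigned is not stated:

```python
def cost_class(estimated_cost):
    """Map a project cost (AUD) to one of the three cost classes.
    Half-open boundary handling is an assumption for illustration."""
    if 100_000 <= estimated_cost < 300_000:
        return "$100,000-$300,000"
    if 300_000 <= estimated_cost < 500_000:
        return "$300,000-$500,000"
    if 500_000 <= estimated_cost <= 1_200_000:
        return "$500,000-$1,200,000"
    return None  # outside the ranges the model was trained on
```

A fine-tuned DistilBERT classifier would then map each permit record's free-text project description to one of these three labels.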
Research limitations/implications
The current model was developed using data from the state of Victoria, Australia; hence, it would not return relevant outcomes for other contexts. However, future studies can adopt the methods to develop similar models for their own contexts.
Originality/value
This research is the first to leverage a deep learning model, DistilBERT, for cost estimation at the pre-conception stage using basic project information like location and material types. Therefore, the model would contribute to overcoming data limitations for cost estimation at the pre-conception stage. Residential building stakeholders, like clients, designers, and estimators, can use the model to forecast the project budget at the pre-conception stage to facilitate decision-making.
Ahmad Khodamipour, Hassan Yazdifar, Mahdi Askari Shahamabad and Parvin Khajavi
Abstract
Purpose
Today, as business units increasingly affect the environment and human beings, paying attention to fulfilling social responsibility obligations while making a profit has become necessary for achieving sustainable development goals. Organizations' attention to profit should not come at the expense of their social and environmental performance. Social responsibility accounting (SRA) is an approach that can pay more attention to the social and environmental performance of companies, but it faces many barriers. Therefore, the purpose of this study is to identify barriers to SRA implementation and provide strategies to overcome these barriers.
Design/methodology/approach
In this study, the authors identify barriers to social responsibility accounting implementation and provide strategies to overcome these barriers. Through a literature review, 12 barriers and seven strategies were identified and approved using the opinions of six academic experts. Interpretive structural modeling (ISM) has been used to identify significant barriers and uncover the contextual relationships between them. The fuzzy technique for order preference by similarity to ideal solution (TOPSIS) method has been used to identify and rank strategies for overcoming these barriers. This study was undertaken in Iran (an emerging market). The data were gathered from 18 experts selected using purposive sampling, including CEOs, senior accountants and active researchers well familiar with the field of social responsibility accounting.
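The closeness-coefficient ranking at the heart of TOPSIS can be sketched in crisp (non-fuzzy) form; the fuzzy variant used in the study replaces crisp scores with fuzzy numbers. The matrix below is hypothetical and assumes all criteria are benefit criteria:

```python
import numpy as np

def topsis_scores(matrix, weights):
    """Closeness coefficients for a crisp TOPSIS ranking.
    Rows = alternatives (strategies), columns = benefit criteria."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)       # ideal / anti-ideal solutions
    d_ideal = np.linalg.norm(v - ideal, axis=1)
    d_anti = np.linalg.norm(v - anti, axis=1)
    return d_anti / (d_ideal + d_anti)               # higher = closer to ideal
```

Strategies are ranked by descending closeness coefficient; the alternative that dominates on every criterion scores 1.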
Findings
Based on the results of this study, the cultural differences barrier was identified as the primary, underlying barrier in the model of social responsibility accounting barriers. At the next level, barriers such as “lack of public awareness of the importance of social responsibility accounting,” “lack of social responsibility accounting implementation regulations” and “organization size” are significant barriers to implementation; removing them will help remove the remaining barriers. In addition, the results of the TOPSIS method showed that “mandatory regulations, the introduction of guidelines and social responsibility accounting standards,” “regulatory developments and government incentive schemes to implement social responsibility accounting,” as well as “increasing public awareness of the benefits of social responsibility accounting” are among the most essential implementation strategies.
Practical implications
The findings of the study have implications for both professional accounting bodies for developing the necessary standards and for policymakers for adopting policies that facilitate the implementation of social responsibility accounting to achieve sustainability.
Social implications
This paper creates a new perspective on the practical implementation of social responsibility accounting, closely related to improving environmental performance and increasing social welfare through improving sustainability.
Originality/value
Experts believe that the strategies mentioned above will be very effective and helpful in removing the barriers at the lower levels of the model. To the best of the authors' knowledge, this study is the first to develop a model of social responsibility accounting barriers and rank the most critical implementation strategies.
Chi-Un Lei, Wincy Chan and Yuyue Wang
Abstract
Purpose
Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curricula. The purpose of this study is to investigate the connection of existing common core courses in a university to SDG education. In particular, this study examines how common core courses can be classified according to the SDGs using a machine-learning approach.
Design/methodology/approach
In this report, the authors used machine learning techniques to tag the 166 common core courses in a university with SDGs and then analyzed the results through visualizations. The training data set comes from the OSDG public community data set, which the community had verified, while key descriptions of the common core courses were used for the classification. The study used the multinomial logistic regression algorithm for the classification. Descriptive analyses at the course, theme and curriculum levels are included to illustrate the proposed approach's functions.
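Multinomial (softmax) logistic regression over course-description term counts can be sketched end to end. The corpus, labels and hyperparameters below are toy assumptions for illustration, not the OSDG data:

```python
import numpy as np

# Toy corpus: hypothetical course descriptions with SDG-like labels
# (0 = water-related, 1 = climate-related).
docs = ["clean water sanitation", "climate action emissions",
        "water quality rivers", "carbon emissions policy",
        "sanitation hygiene water", "climate emissions targets"]
labels = np.array([0, 1, 0, 1, 0, 1])

# Bag-of-words term-count features
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)
Y = np.eye(2)[labels]                      # one-hot targets

# Batch gradient descent on the softmax cross-entropy loss
W = np.zeros((X.shape[1], 2))
for _ in range(500):
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))   # stable softmax
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (P - Y) / len(X)

pred = (X @ W).argmax(axis=1)
```

The real pipeline would use richer text features and one class per SDG, but the training loop and prediction rule are the same.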
Findings
The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.
Research limitations/implications
More accurate model training could be achieved by adopting advanced machine learning algorithms (e.g. deep learning or multioutput multiclass algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) carrying SDG classifications. The performance of the algorithm should also be compared with other computer-based and human-based SDG classification approaches, within a systematic evaluation framework, to cross-check the results. The study could further be extended by circulating results to students to understand how they would interpret and use them when choosing courses. Finally, the study mainly focused on classifying the topics taught in courses; it cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in those courses. Analysis could also be conducted on courses' assessment tasks and rubrics to see whether those tasks help students understand and act on SDGs.
Originality/value
The proposed approach explores the possibility of using machine learning for SDG classification at scale.