Search results

1 – 10 of 106
Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Abstract

Purpose

The stock market has always been lucrative for investors but, because of its speculative nature, its price movements are difficult to predict. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study the structured data of a company, while technical analysis helps to study price trends; the increasing and easy availability of unstructured data has made it important to study market sentiment as well. Market sentiment has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.

Design/methodology/approach

The research includes text mining and the creation of various classification models. The accuracy of these models is checked using a confusion matrix.

Findings

Out of the six machine learning techniques used to create classification models, the kernel support vector machine gave the highest accuracy, at 68%. This model can now be used to analyse tweets, news and other unstructured data to predict price movement.
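
To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of a kernel SVM text classifier evaluated with a confusion matrix, assuming scikit-learn; the sample texts and labels are hypothetical placeholders, not the study's data.

```python
# Sketch: kernel SVM sentiment classifier checked with a confusion matrix.
# All texts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

texts = [
    "Stock surges after strong quarterly earnings",      # placeholder news/tweets
    "Shares tumble on weak growth outlook",
    "Market closes flat ahead of inflation data",
    "Record profits lift investor confidence",
    "Company warns of steep losses next quarter",
    "Index unchanged as traders await policy decision",
]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]             # placeholder labels

X = TfidfVectorizer().fit_transform(texts)               # text-mining step
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=42)

clf = SVC(kernel="rbf").fit(X_train, y_train)            # kernel SVM
pred = clf.predict(X_test)
print(confusion_matrix(y_test, pred, labels=["positive", "negative", "neutral"]))
print("accuracy:", accuracy_score(y_test, pred))
```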

Originality/value

This study will help investors quickly classify a news item or a tweet as “positive”, “negative” or “neutral” and thereby determine stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

Open Access
Article
Publication date: 12 January 2024

Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel

Abstract

Purpose

This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.

Design/methodology/approach

A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
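
As an illustration of this kind of test, here is a minimal sketch comparing logistic regression and random forest classifiers on synthetic schedule data; the feature meanings and data are invented placeholders, not the case company's variables.

```python
# Sketch: compare logistic regression and random forest on predicting whether
# a delivery schedule line turns out inaccurate. Features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))   # e.g. product complexity, order life cycle, horizon
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(type(model).__name__, "AUC:", round(auc, 3))

# Feature importances hint at which variables carry predictive difference
rf = RandomForestClassifier(random_state=0).fit(X, y)
print("importances:", rf.feature_importances_)
```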

Findings

The effects on delivery schedule inaccuracies are contingent on the decoupling point, and a variable may have a combined amplifying (complexity-generating) and stabilizing (complexity-absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable whose predictive difference varies with the planning horizon. Decoupling management is identified as a mechanism for generating complexity-absorption capabilities that contribute to delivery schedule accuracy.

Practical implications

The findings provide guidelines for exploring and finding patterns in specific variables in order to reduce material delivery schedule inaccuracies, and they provide input for predictive forecasting models.

Originality/value

The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects, and conceptualizing features that cause and moderate inaccuracies in relation to the decoupling management and complexity theory literature.

Details

International Journal of Operations & Production Management, vol. 44 no. 13
Type: Research Article
ISSN: 0144-3577

Open Access
Article
Publication date: 23 January 2024

Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista

Abstract

Purpose

This study aims to review recent advances towards the implementation of ANN and NLP applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though these results determine the company’s bid quality and are contractually binding.
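
For illustration only, the following is a hypothetical sketch of the kind of text classifier such an assessment could use, mapping free-text task descriptions to internal cost codes with scikit-learn; all task strings and codes are invented.

```python
# Hypothetical sketch of TC for budgeting: map free-text construction task
# descriptions to internal cost categories. Labels and examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tasks = ["supply and lay concrete slab 200mm",
         "install electrical conduit in ceiling void",
         "paint internal walls, two coats"]
codes = ["CONCRETE", "ELECTRICAL", "FINISHES"]        # internal cost codes

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(tasks, codes)
print(clf.predict(["pour reinforced concrete footing"]))  # likely ['CONCRETE']
```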

Design/methodology/approach

This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.

Findings

This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.

Originality/value

Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175

Open Access
Article
Publication date: 9 November 2023

Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani

Abstract

Purpose

The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimate so that the contract cost can be predicted precisely at the pre-tendering phase, avoiding issues that would otherwise arise during the construction phase.

Design/methodology/approach

This paper integrated artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
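
A minimal sketch of the general setup follows, assuming scikit-learn: an ANN regressor for the ratio R scored with MAPE and an RMSSE-style root mean square error. The features and data are synthetic placeholders, not the 94 King Saud University contracts.

```python
# Sketch: ANN regressor for R = low bid / OEC, scored with MAPE and an
# RMSSE-style error. Synthetic data, not the study's 94 contracts.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(94, 3))     # e.g. contract size, type, duration (encoded)
R = 0.9 + 0.2 * X[:, 0] + rng.normal(scale=0.02, size=94)  # placeholder ratios

X_tr, X_te, R_tr, R_te = train_test_split(X, R, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=1).fit(X_tr, R_tr)
pred = ann.predict(X_te)
print("MAPE :", mean_absolute_percentage_error(R_te, pred))
print("RMSSE:", np.sqrt(np.mean((R_te - pred) ** 2)))  # RMSE-style proxy
```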

Findings

The main finding is that the ANN provides high accuracy, with MAPE, MSSE and RMSSE values of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was also high, with an RMSSE of 0.15 on average.

Practical implications

The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and decrease the difference between the owner's estimate and the lowest submitted offer, enabling better decision-making.

Originality/value

This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecast the difference between a low bid and an OEC at the pre-tendering phase.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 8 February 2024

Joseph F. Hair, Pratyush N. Sharma, Marko Sarstedt, Christian M. Ringle and Benjamin D. Liengaard

Abstract

Purpose

The purpose of this paper is to assess the appropriateness of equal weights estimation (sumscores) and the application of the composite equivalence index (CEI) vis-à-vis differentiated indicator weights produced by partial least squares structural equation modeling (PLS-SEM).
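
A small numeric sketch of the contrast under discussion: an equal-weights (sumscore) composite versus a differentiated-weights composite for three standardized indicators. The weights are invented for illustration; actual PLS-SEM weights are estimated iteratively from the full model.

```python
# Sketch: equal-weights (sumscore) composite vs. differentiated-weights
# composite. Weights here are invented, not PLS-SEM estimates.
import numpy as np

rng = np.random.default_rng(2)
indicators = rng.normal(size=(100, 3))     # standardized indicator scores

equal_w = np.full(3, 1 / 3)                # sumscores: every indicator equal
diff_w = np.array([0.6, 0.3, 0.1])         # differentiated weights (illustrative)

sumscore = indicators @ equal_w
weighted = indicators @ diff_w
print("correlation of the two composites:",
      np.corrcoef(sumscore, weighted)[0, 1].round(3))
```

Even a high correlation between the two composites can mask the estimation and predictive differences the paper examines, which is why a single equivalence index can be misleading.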

Design/methodology/approach

The authors rely on prior literature as well as empirical illustrations and a simulation study to assess the efficacy of equal weights estimation and the CEI.

Findings

The results show that the CEI lacks discriminatory power, and that its use can lead to major differences in structural model estimates, conceal measurement model issues and almost always produce inferior out-of-sample predictive accuracy compared to the differentiated weights produced by PLS-SEM.

Research limitations/implications

In light of its manifold conceptual and empirical limitations, the authors advise against the use of the CEI. Its adoption and the routine use of equal weights estimation could adversely affect the validity of measurement and structural model results and understate structural model predictive accuracy. Although this study shows that the CEI is an unsuitable metric to decide between equal weights and differentiated weights, it does not propose another means for such a comparison.

Practical implications

The results suggest that researchers and practitioners should prefer differentiated indicator weights such as those produced by PLS-SEM over equal weights.

Originality/value

To the best of the authors’ knowledge, this study is the first to provide a comprehensive assessment of the CEI’s usefulness. The results provide guidance for researchers considering using equal indicator weights instead of PLS-SEM-based weighted indicators.

Details

European Journal of Marketing, vol. 58 no. 13
Type: Research Article
ISSN: 0309-0566

Open Access
Article
Publication date: 8 December 2023

Flaviana Calignano, Alessandro Bove, Vincenza Mercurio and Giovanni Marchiandi

Abstract

Purpose

Polymer laser powder bed fusion (PBF-LB/P) is an additive manufacturing technology that is sustainable due to the possibility of recycling the powder multiple times and of fabricating gears without the aid of support structures and subsequent assembly. However, there are constraints in the process that negatively affect its adoption for producing gears compared with other additive technologies such as material extrusion. This study aims to demonstrate that the problems arising from the physics of the process can be overcome to produce accurate mechanisms.

Design/methodology/approach

Technological aspects such as orientation, wheel-shaft thicknesses and degree of powder recycling were examined. Furthermore, the evolving tooth profile was considered as a design parameter to provide a manufacturability map of gear-based mechanisms.
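
For reference, assuming the tooth profile in question is the standard involute, the flank can be generated parametrically from the base-circle radius; a minimal sketch with illustrative dimensions, not the authors' gear geometry.

```python
# Sketch: parametric points of an involute tooth flank from a base circle of
# radius r_b: x = r_b(cos t + t sin t), y = r_b(sin t - t cos t).
# The radius and parameter range are illustrative, not the study's gears.
import numpy as np

r_b = 10.0                          # base-circle radius in mm (example value)
t = np.linspace(0.0, 0.6, 20)       # involute roll angle in radians
x = r_b * (np.cos(t) + t * np.sin(t))
y = r_b * (np.sin(t) - t * np.cos(t))
for xi, yi in zip(x[:3], y[:3]):    # first few profile points
    print(f"({xi:.3f}, {yi:.3f})")
```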

Findings

Results show that there are some differences in the functioning of the gear depending on the type of powder used: 100% virgin, or 50% virgin and 50% powder recycled over five cycles. The application of a groove on a gear produced with 100% virgin powder allows the mechanism to be easily unlocked regardless of the orientation and wheel-shaft thicknesses. The application of a specific evolving profile, independent of the diameter of the reference circle, on vertically oriented gears guarantees rotation continuity while preserving the functionality of the assembled mechanism.

Originality/value

In the literature, there are various studies on material aging and reuse in the PBF-LB/P process, mainly focused on the powder deterioration mechanism, powder fluidity, microstructure and mechanical properties of the parts and process parameters. This study, instead, was focused on the functioning of gears, which represent one of the applications in which this technology can have great success, by analyzing the two main effects that can compromise it: recycled powder and vertical orientation during construction.

Details

Rapid Prototyping Journal, vol. 30 no. 11
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 28 August 2023

Jonathan Passmore and David Tee

Abstract

Purpose

This study aimed to evaluate the potential of artificial intelligence (AI) as a tool for knowledge synthesis, the production of written content and the delivery of coaching conversations.

Design/methodology/approach

The research employed the use of experts to evaluate the outputs from ChatGPT's AI tool in blind tests to review the accuracy and value of outcomes for written content and for coaching conversations.

Findings

The results from these tasks indicate that there is a significant gap between the performance of GPT-4 and that of comparative search tools such as Google Scholar and specialist online discovery tools (EBSCO and PsycNet). GPT-4 lacks the accuracy and detail that can be found through these tools, although the material produced has strong face validity. The paper argues that organisations, academic institutions and training providers should put in place policies regarding the use of such tools, and that professional bodies should amend ethical codes of practice to reduce the risk of false claims being used in published work.

Originality/value

This is the first research paper to evaluate the current potential of generative AI tools for research, knowledge curation and coaching conversations.

Details

Journal of Work-Applied Management, vol. 16 no. 1
Type: Research Article
ISSN: 2205-2062

Open Access
Article
Publication date: 18 April 2024

Joseph Nockels, Paul Gooding and Melissa Terras

Abstract

Purpose

This paper focuses on image-to-text manuscript processing through Handwritten Text Recognition (HTR), a Machine Learning (ML) approach enabled by Artificial Intelligence (AI). With HTR now achieving high levels of accuracy, we consider its potential impact on our near-future information environment and knowledge of the past.
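
As a concrete instance of the image-to-text step, here is a minimal inference sketch using the publicly available TrOCR handwritten model through the Hugging Face transformers library; the image path is a placeholder, and this is not the tooling used by the authors.

```python
# Minimal HTR inference sketch with a public TrOCR model (Hugging Face
# transformers). "manuscript_line.png" is a placeholder image path.
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained(
    "microsoft/trocr-base-handwritten")

image = Image.open("manuscript_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```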

Design/methodology/approach

In undertaking a more constructivist analysis, we identified gaps in the current literature through a Grounded Theory Method (GTM). This guided an iterative process of concept mapping through writing sprints in workshop settings. We identified, explored and confirmed themes through group discussion and a further interrogation of relevant literature, until reaching saturation.

Findings

Catalogued as part of our GTM, 120 published texts underpin this paper. We found that HTR facilitates accurate transcription and dataset cleaning, while facilitating access to a variety of historical material. HTR contributes to a virtuous cycle of dataset production and can inform the development of online cataloguing. However, current limitations include dependency on digitisation pipelines, potential archival history omission and entrenchment of bias. We also cite near-future HTR considerations: encouraging open access; integrating advanced AI processes and metadata extraction; legal and moral issues surrounding copyright and data ethics; crediting individuals’ transcription contributions; and HTR’s environmental costs.

Originality/value

Our research produces a set of best practice recommendations for researchers, data providers and memory institutions, surrounding HTR use. This forms an initial, though not comprehensive, blueprint for directing future HTR research. In pursuing this, the narrative that HTR’s speed and efficiency will simply transform scholarship in archives is deconstructed.

Open Access
Article
Publication date: 15 December 2023

Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto

Abstract

Purpose

The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.

Design/methodology/approach

The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and assess a within-the-group level of productivity of the retailers.
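
A compact sketch of the two-step idea follows, assuming scikit-learn and SciPy: cluster units first, then score input-oriented CCR DEA efficiency within each cluster by linear programming. The inputs, outputs and cluster count are synthetic placeholders, not the case study's retailer data.

```python
# Sketch: cluster retailers, then score input-oriented CCR DEA efficiency
# within each cluster via linear programming. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import linprog
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
inputs = rng.uniform(1, 10, size=(60, 2))    # e.g. staff, floor space
outputs = rng.uniform(1, 10, size=(60, 1))   # e.g. sales

def dea_ccr(x, y, o):
    """Input-oriented CCR efficiency of unit o against peers (x, y)."""
    n = len(x)
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.c_[-x[o], x.T]                    # sum_j lam_j x_j <= theta * x_o
    A_out = np.c_[np.zeros(y.shape[1]), -y.T]   # sum_j lam_j y_j >= y_o
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(x.shape[1]), -y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

labels = KMeans(n_clusters=3, n_init=10, random_state=3).fit_predict(
    np.c_[inputs, outputs])
for k in range(3):
    idx = np.where(labels == k)[0]
    x, y = inputs[idx], outputs[idx]
    effs = [dea_ccr(x, y, o) for o in range(len(idx))]
    print(f"cluster {k}: mean efficiency {np.mean(effs):.3f}")
```

Scoring efficiency only against peers in the same cluster is what reduces the heterogeneity problem the paper highlights: each unit is benchmarked against structurally comparable units.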

Findings

The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity by reducing subjective bias and dimensionality, which is embedded with the use of Big Data.

Practical implications

The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.

Originality/value

This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.

Details

International Journal of Productivity and Performance Management, vol. 73 no. 11
Type: Research Article
ISSN: 1741-0401

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is literally said, making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using machine and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
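
As a minimal baseline in the spirit of the machine learning models compared here, a TF-IDF plus logistic regression sketch, assuming scikit-learn; the tiny comment sample is hypothetical, not the 1.3 million-comment data set.

```python
# Baseline sketch for sarcasm detection: TF-IDF + logistic regression.
# The tiny comment sample is hypothetical, not the study's data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = ["Oh great, another Monday. Just what I needed.",
            "Congrats on the well-deserved promotion!",
            "Sure, because waiting in line for hours is so much fun.",
            "The documentation was clear and easy to follow."]
labels = [1, 0, 1, 0]                        # 1 = sarcastic, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression()).fit(comments, labels)
print(model.predict(["Wow, I love being stuck in traffic."]))
```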

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247
