Search results
1 – 10 of over 1000
Madison B. Harvey, Heather L. Price and Kirk Luther
Abstract
Purpose
The purpose of this study was to explore potential witnesses' memories for a day that was experienced as unremarkable. There may be instances in an investigation in which all leads have been exhausted, and investigators issue a broad appeal for witnesses who may have seen something important. Investigators can benefit from knowing the types of information that may be recalled in such circumstances, as well as from identifying specific methods that are effective in eliciting useful information.
Design/methodology/approach
The present study explored how the delay to recall and the recall method influenced the recollection of a seemingly unremarkable day that later became important. Participants were asked to recall an experienced event that occurred either recently (a few weeks prior) or in the distant past (a year prior). Participants recalled via a written method, an in-person individual-spoken interview or a collaborative-spoken interview.
Findings
Results suggest an independent benefit for individual-spoken in-person recall (compared to written or collaborative-spoken recall) and for recall undertaken closely after an event (compared to delayed recall). Both individual-spoken interviews and more recent recollection resulted in a greater number of overall details recalled. The authors further examined the types of details recalled that might be important to progressing an investigation (e.g. other witnesses and records).
Originality/value
The present work provides important implications for interviewing witnesses about a seemingly unremarkable event that later became important.
Details
Keywords
Sooin Kim, Atefe Makhmalbaf and Mohsen Shahandashti
Abstract
Purpose
This research aims to forecast the ABI as a leading indicator of U.S. construction activities, applying multivariate machine learning predictive models over different horizons and utilizing the nonlinear and long-term dependencies between the ABI and macroeconomic and construction market variables. To assess the applicability of the machine learning models, six multivariate machine learning predictive models were developed considering the relationships between the ABI and other construction market and macroeconomic variables. The forecasting performances of the developed predictive models were evaluated in different forecasting scenarios, such as short-term, medium-term, and long-term horizons comparable to the actual timelines of construction projects.
Design/methodology/approach
The architecture billings index (ABI) as a macroeconomic indicator is published monthly by the American Institute of Architects (AIA) to evaluate business conditions and track construction market movements. The current research developed multivariate machine learning models to forecast ABI data for different time horizons. Different macroeconomic and construction market variables, including Gross Domestic Product (GDP), Total Nonresidential Construction Spending, Project Inquiries, and Design Contracts data were considered for predicting future ABI values. The forecasting accuracies of the machine learning models were validated and compared using the short-term (one-year-ahead), medium-term (three-year-ahead), and long-term (five-year-ahead) ABI testing datasets.
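As an illustration of the data preparation behind such multivariate forecasting, past months of the ABI and related variables can be flattened into supervised learning samples for a given horizon. This is a minimal sketch, not the authors' pipeline; the lag window, horizon and toy variables are assumptions.

```python
import numpy as np

def make_supervised(series: np.ndarray, n_lags: int, horizon: int):
    """Turn a multivariate series of shape (T, n_vars) into (X, y) pairs:
    X holds n_lags past months of every variable, and y is the ABI
    (assumed to be column 0) `horizon` months ahead."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t].ravel())   # flattened lag window
        y.append(series[t + horizon - 1, 0])     # future ABI value
    return np.array(X), np.array(y)

# toy monthly data standing in for ABI, GDP growth and construction spending
rng = np.random.default_rng(0)
data = rng.normal(size=(60, 3))
X, y = make_supervised(data, n_lags=12, horizon=12)  # one-year-ahead setup
print(X.shape, y.shape)
```

The same windowing, reshaped to (samples, timesteps, variables) instead of flattened, is what a recurrent model such as an LSTM would consume.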
Findings
The experimental results show that long short-term memory (LSTM) provides the highest accuracy in forecasting the ABI over all forecasting horizons, compared with the other machine learning models and with traditional time-series forecasting models such as the vector error correction model (VECM) and seasonal ARIMA. This is because of the strength of LSTM in forecasting time series: it mitigates vanishing and exploding gradient problems and learns long-term dependencies in the sequential ABI series. The findings of this research highlight the applicability of machine learning predictive models for forecasting the ABI as a leading indicator of construction activities, business conditions, and market movements.
Practical implications
The architecture, engineering, and construction (AEC) industry practitioners, investment groups, media outlets, and business leaders refer to ABI as a macroeconomic indicator to evaluate business conditions and track construction market movements. It is crucial to forecast the ABI accurately for strategic planning and preemptive risk management in fluctuating AEC business cycles. For example, cost estimators and engineers who forecast the ABI to predict future demand for architectural services and construction activities can prepare and price their bids more strategically to avoid a bid loss or profit loss.
Originality/value
The ABI data have been forecasted and modeled using linear time series models. However, linear time series models often fail to capture nonlinear patterns, interactions, and dependencies among variables, which can be handled by machine learning models in a more flexible manner. Despite the strength of machine learning models to capture nonlinear patterns and relationships between variables, the applicability and forecasting performance of multivariate machine learning models have not been investigated for ABI forecasting problems. This research first attempted to forecast ABI data for different time horizons using multivariate machine learning predictive models using different macroeconomic and construction market variables.
Saleh Abu Dabous, Fakhariya Ibrahim and Ahmad Alzghoul
Abstract
Purpose
Bridge deterioration is a critical risk to public safety, which mandates regular inspection and maintenance to ensure sustainable transport services. Many models have been developed to aid in understanding deterioration patterns and in planning maintenance actions and fund allocation. This study aims at developing a deep-learning model to predict the deterioration of concrete bridge decks.
Design/methodology/approach
Three long short-term memory (LSTM) models are formulated to predict the condition rating of bridge decks, namely vanilla LSTM (vLSTM), stacked LSTM (sLSTM), and convolutional neural networks combined with LSTM (CNN-LSTM). The models are developed by utilising the National Bridge Inventory (NBI) datasets spanning from 2001 to 2019 to predict the deck condition ratings in 2021.
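A minimal numpy sketch of what one step of a vanilla LSTM cell computes may help make the vLSTM building block concrete. This is not the authors' implementation; the toy ratings, weights and dimensions are assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a vanilla LSTM cell with hidden size n: four gates are
    computed from the current input x and the previous hidden state h."""
    n = h.shape[0]
    z = W @ x + U @ h + b                      # stacked gate pre-activations, shape (4n,)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2 * n]), sig(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c_new = f * c + i * g                      # cell state carries long-term memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# toy run: feed a short sequence of scaled yearly deck condition ratings
rng = np.random.default_rng(1)
n_in, n_hid = 1, 8
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for rating in [0.9, 0.9, 0.8, 0.8, 0.7]:       # hypothetical ratings, rescaled to [0, 1]
    h, c = lstm_step(np.array([rating]), h, c, W, U, b)
print(h.shape)
```

In the stacked and CNN-LSTM variants, this cell is respectively layered or fed features extracted by convolutions, but the per-step recurrence is the same.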
Findings
Results reveal that all three models have accuracies of 90% and above, with mean squared errors (MSE) between 0.081 and 0.103. Moreover, CNN-LSTM has the best performance, achieving an accuracy of 93%, a coefficient of correlation of 0.91, an R2 value of 0.83, and an MSE of 0.081.
Research limitations/implications
The study used the NBI bridge inventory databases to develop the bridge deterioration models. Future studies can extend the model to other bridge databases and other applications in the construction industry.
Originality/value
This study provides a detailed and extensive data cleansing process to address the shortcomings in the NBI database. This research presents a framework for implementing artificial intelligence-based models to enhance maintenance planning and a guideline for utilising the NBI or other bridge inventory databases to develop accurate bridge deterioration models.
Antonijo Marijić and Marina Bagić Babac
Abstract
Purpose
Genre classification of songs based on lyrics is a challenging task even for humans; however, state-of-the-art natural language processing has recently offered advanced solutions to this task. The purpose of this study is to advance the understanding and application of natural language processing and deep learning in the domain of music genre classification, while also contributing to the broader themes of global knowledge and communication, and sustainable preservation of cultural heritage.
Design/methodology/approach
The main contribution of this study is the development and evaluation of various machine and deep learning models for song genre classification. Additionally, we investigated the effect of different word embeddings, including Global Vectors for Word Representation (GloVe) and Word2Vec, on the classification performance. The tested models range from benchmarks such as logistic regression, support vector machine and random forest, to more complex neural network architectures and transformer-based models, such as recurrent neural network, long short-term memory, bidirectional long short-term memory and bidirectional encoder representations from transformers (BERT).
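One way to picture the embedding-based baselines is to represent each lyric as the mean of its word vectors and classify by nearest genre centroid. This is a toy sketch, not the study's models; the 4-dimensional vectors and the lyrics are invented stand-ins for pretrained GloVe or Word2Vec embeddings.

```python
import numpy as np

# toy 4-d "embeddings" standing in for pretrained GloVe/Word2Vec vectors
emb = {
    "fire":  np.array([1.0, 0.0, 0.0, 0.2]),
    "steel": np.array([0.9, 0.1, 0.0, 0.0]),
    "love":  np.array([0.0, 1.0, 0.1, 0.0]),
    "baby":  np.array([0.0, 0.9, 0.2, 0.0]),
}

def doc_vector(lyric: str) -> np.ndarray:
    """Mean of the word vectors: the simplest fixed-length lyric representation."""
    vecs = [emb[w] for w in lyric.split() if w in emb]
    return np.mean(vecs, axis=0)

# one invented training lyric per genre; its vector serves as the class centroid
train = {"metal": "fire steel fire", "pop": "love baby love"}
centroids = {genre: doc_vector(text) for genre, text in train.items()}

def predict(lyric: str) -> str:
    """Assign the genre whose centroid is nearest in embedding space."""
    v = doc_vector(lyric)
    return min(centroids, key=lambda g: np.linalg.norm(v - centroids[g]))

print(predict("steel fire"))
print(predict("baby love"))
```

The benchmark classifiers in the study (logistic regression, SVM, random forest) replace the nearest-centroid rule with a learned decision boundary over the same kind of averaged-embedding features, while the recurrent and transformer models consume the word sequence directly.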
Findings
The authors conducted experiments on both English and multilingual data sets for genre classification. The results show that the BERT model achieved the best accuracy on the English data set, whereas cross-lingual language model pretraining based on RoBERTa (XLM-RoBERTa) performed the best on the multilingual data set. This study found that songs in the metal genre were the most accurately labeled, as their text style and topics were the most distinct from other genres. In contrast, songs from the pop and rock genres were more challenging to differentiate. This study also compared the impact of different word embeddings on the classification task and found that models with GloVe word embeddings outperformed those with Word2Vec and with a learned embedding layer.
Originality/value
This study presents the implementation, testing and comparison of various machine and deep learning models for genre classification. The results demonstrate that transformer models, including BERT, robustly optimized BERT pretraining approach, distilled bidirectional encoder representations from transformers, bidirectional and auto-regressive transformers and XLM-RoBERTa, outperformed other models.
Abstract
Purpose
Enforcing employee compliance with information systems security policies (ISSP) is a herculean task for organizations as security breaches due to non-compliance continue to soar. To improve this situation, researchers have employed fear appeals that are based on protection motivation theory (PMT) to induce compliance behavior. However, extant research on fear appeals has yielded mixed findings. To help explain these mixed findings, the authors contend that efficacy formation is a cognitive process that is impacted by the cognitive load exerted by the design of fear appeal messages.
Design/methodology/approach
The study draws on cognitive load theory (CLT) to examine the effects of intrinsic cognitive load, extraneous cognitive load and germane cognitive load on stimulating an individual’s efficacy and coping appraisals. The authors designed a survey to collect data from 359 respondents and tested the model using partial least squares.
Findings
The analysis showed significant relationships between cognitive load (intrinsic, extraneous, and germane) and fear, maladaptive rewards, response costs, self-efficacy and response efficacy.
Originality/value
This provides support for the assertion that fear appeals impact the cognitive processes of individuals, which in turn can affect efficacy, fear and coping appraisals. These findings demonstrate the need to further investigate how individual cognition is impacted by fear appeal design and the resulting effects on compliance intention and behavior.
Yuzhuo Wang, Chengzhi Zhang, Min Song, Seongdeok Kim, Youngsoo Ko and Juhee Lee
Abstract
Purpose
In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers, making mention frequency a classical indicator of their popularity and influence. However, contemporary methods for evaluating influence tend to focus solely on individual algorithms, disregarding the collective impact resulting from the interconnectedness of these algorithms, which can provide a new way to reveal their roles and importance within algorithm clusters. This paper aims to build the co-occurrence network of algorithms in the natural language processing field based on the full-text content of academic papers and analyze the academic influence of algorithms in the group based on the features of the network.
Design/methodology/approach
We use deep learning models to extract algorithm entities from articles and construct the whole, cumulative and annual co-occurrence networks. We first analyze the characteristics of algorithm networks and then use various centrality metrics to obtain the score and ranking of group influence for each algorithm in the whole domain and each year. Finally, we analyze the influence evolution of different representative algorithms.
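The co-occurrence network construction described above can be sketched as follows. This is a toy example with invented papers and algorithm mentions, not the authors' data, and degree centrality stands in for the fuller set of centrality metrics they apply.

```python
from collections import Counter
from itertools import combinations

# assumed toy data: algorithm entities extracted from each paper's full text
papers = [
    ["LSTM", "CRF"],
    ["LSTM", "BERT"],
    ["BERT", "CRF", "LSTM"],
    ["HMM", "CRF"],
]

# build the co-occurrence network: one edge per pair mentioned in the same paper,
# weighted by how many papers mention both algorithms together
edges = Counter()
for algos in papers:
    for a, b in combinations(sorted(set(algos)), 2):
        edges[(a, b)] += 1

# degree centrality: fraction of the other algorithms each one co-occurs with
nodes = {n for e in edges for n in e}
degree = {n: sum(n in e for e in edges) / (len(nodes) - 1) for n in nodes}
print(max(degree, key=degree.get))  # the most central algorithm in the toy network
```

Cumulative and annual networks follow by restricting `papers` to publications up to, or within, a given year before rebuilding the edge counts.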
Findings
The results indicate that algorithm networks also have the characteristics of complex networks, with tight connections between nodes developing over approximately four decades. For different algorithms, algorithms that are classic, high-performing and appear at the junctions of different eras can possess high popularity, control, central position and balanced influence in the network. As an algorithm gradually diminishes its sway within the group, it typically loses its core position first, followed by a dwindling association with other algorithms.
Originality/value
To the best of the authors’ knowledge, this paper is the first large-scale analysis of algorithm networks. The extensive temporal coverage, spanning over four decades of academic publications, ensures the depth and integrity of the network. Our results serve as a cornerstone for constructing multifaceted networks interlinking algorithms, scholars and tasks, facilitating future exploration of their scientific roles and semantic relations.
Zengli Mao and Chong Wu
Abstract
Purpose
Because the dynamic characteristics of the stock market are nonlinear, it is unclear whether stock prices can be predicted. This paper aims to explore the predictability of the stock price index from a long-memory perspective. The authors propose hybrid models to predict the next-day closing price index and explore the policy effects behind stock prices.
Design/methodology/approach
The authors found long memory in the stock price index series using modified R/S and GPH tests, and propose an improved bi-directional gated recurrent unit (BiGRU) hybrid network framework to predict the next-day stock price index. The proposed framework integrates (1) a de-noising module, the singular spectrum analysis (SSA) algorithm; (2) a predictive module, the BiGRU model; and (3) an optimization module, the grid search cross-validation (GSCV) algorithm.
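A minimal sketch of the classic rescaled-range (R/S) idea behind such long-memory testing is shown below. This is not the modified R/S statistic the authors use; the window sizes and simulated data are toy assumptions.

```python
import numpy as np

def rescaled_range(x: np.ndarray) -> float:
    """Classic R/S statistic for one window: the range of the cumulative
    mean-adjusted series divided by its standard deviation."""
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

def hurst(x: np.ndarray, sizes=(8, 16, 32, 64)) -> float:
    """Slope of log(mean R/S) against log(window size); a slope well
    above 0.5 suggests long memory (persistence) in the series."""
    rs = []
    for n in sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs.append(np.mean([rescaled_range(c) for c in chunks]))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(42)
noise = rng.normal(size=512)   # i.i.d. noise: no long memory expected
walk = np.cumsum(noise)        # strongly persistent series
h_noise, h_walk = hurst(noise), hurst(walk)
print(h_noise, h_walk)         # the persistent series yields the higher exponent
```

An estimated exponent well above 0.5 for a price index series is what motivates treating it as predictable rather than a pure random process.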
Findings
Three critical findings are long memory, fit effectiveness and model optimization. There is long memory (predictability) in the stock price index series. The proposed framework yields predictions of optimum fit. Data de-noising and parameter optimization can improve the model fit.
Practical implications
The empirical data are obtained from the financial data of listed companies in the Wind Financial Terminal. The model can accurately predict stock price index series, guide investors to make reasonable investment decisions, and provide a basis for establishing individual industry stock investment strategies.
Social implications
If the index series in the stock market exhibits long-memory characteristics, the policy implication is that fractal markets, even in the nonlinear case, allow for a corresponding distribution pattern in the value of portfolio assets. The risk of stock price volatility in various sectors has expanded due to the effects of the COVID-19 pandemic and the Russia-Ukraine conflict on the stock market. Predicting future trends by forecasting stock prices is critical for minimizing financial risk. The ability to mitigate the pandemic's impact and stop losses promptly is relevant to market regulators, companies and other relevant stakeholders.
Originality/value
Although long memory exists, the stock price index series can be predicted. However, price fluctuations are unstable and chaotic, and traditional mathematical and statistical methods cannot provide precise predictions. The network framework proposed in this paper has robust horizontal connections between units, strong memory capability and stronger generalization ability than traditional network structures. The authors demonstrate significant performance improvements of SSA-BiGRU-GSCV over comparison models on Chinese stocks.
Ying Hu and Feng’e Zheng
Abstract
Purpose
The ancient town of Lijiang is a representative place of ethnic minorities in China’s southwest border area jointly built by many ethnic groups. Its rich and diversified history, culture and architecture as well as its artistic and spiritual values need to be better retained and explored.
Design/methodology/approach
The protection and inheritance of Lijiang’s cultural heritage will be improved through the construction of digital memory resources. To guide Lijiang’s digital memory construction, this study explores strategies of digital memory construction by analyzing four case studies of well-known memory projects from China and America.
Findings
From the case study analysis, factors of digital memory construction were identified and compared. These factors led to the discussion of strategies for constructing the digital memory of Lijiang within its design, construction and service phases.
Originality/value
The ancient town of Lijiang is a famous historical and cultural city in China, and it is also a representative place of ethnic minorities in the border area jointly built by many ethnic groups. Its rich culture should be preserved and digitalized to be of better use to the whole nation.
Biswajit Paul, Raktim Ghosh, Ashish Kumar Sana, Bhaskar Bagchi, Priyajit Kumar Ghosh and Swarup Saha
Abstract
Purpose
This study empirically investigates the interdependency of select Asian emerging economies along with the financial stress index during the times of the global financial crisis, the Euro crisis and the COVID-19 period. Moreover, it inspects the long-memory effects of the different crises during the study period.
Design/methodology/approach
To address the objectives of the study, the authors apply different statistical tools, namely the adjusted correlation coefficient, fractionally integrated generalised autoregressive conditional heteroskedasticity (FIGARCH) model and wavelet coherence model, along with descriptive statistics.
Findings
Financial stress has a prodigious effect on the economic growth of the select economies. From the data analysis, it is found that a long-memory effect is noted in the gross domestic product (GDP) for India and Korea only, which implies that the volatility in the GDP series for these two nations demonstrates persistence and dependency on previous values over a lengthy period.
Originality/value
The study is unique in considering multiple segments within the study period to get a clear idea of the effects of the financial stress index on select Asian emerging economies by applying different econometric tools.
Iwin Thanakumar Joseph Swamidason, Sravanthy Tatiparthi, Karunakaran Velswamy and S. Velliangiri
Abstract
Purpose
An intelligent personal assistant for personal computers (PCs) is a vital application for the current generation. Current personal assistant frameworks are not proficient at extracting significant data from PCs and social networking information.
Design/methodology/approach
The proposed verbalizers use long short-term memory to classify the user's task and give proper guidelines to the user. The outcomes show that the proposed method reliably handles heterogeneous information and improves precision. The main advantage of long short-term memory is that it handles long-term dependencies in the input data.
Findings
The proposed model gives a 22% mean absolute error. The proposed method achieves a lower mean squared error than support vector machine (SVM), convolutional neural network (CNN), multilayer perceptron (MLP) and K-nearest neighbors (KNN) models.
Originality/value
This paper fulfills the need for an intelligent personal assistant for PCs using a verbalizer.