Search results

1 – 10 of over 3000
Article
Publication date: 29 January 2024

Kai Wang

The identification of network user relationship in Fancircle contributes to quantifying the violence index of user text, mining the internal correlation of network behaviors among…

Abstract

Purpose

The identification of network user relationships in the Fancircle community contributes to quantifying the violence index of user text and mining the internal correlations of network behaviors among users, which provides the necessary data support for constructing a knowledge graph.

Design/methodology/approach

A correlation identification method based on sentiment analysis (CRDM-SA) is put forward by extracting user semantic information and introducing violent sentiment membership. Specifically, topics for topology mapping in the community are obtained from a self-built violent sentiment dictionary (VSD) by extracting user text information. Afterward, the violence index of each user text is calculated to quantify the fuzzy sentiment representation between the user and the topic. Finally, multi-granularity mining of violence association rules in user text is realized by constructing a violence fuzzy concept lattice.
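
As a toy illustration of the violence-index step (the dictionary terms and membership degrees below are invented for the sketch, not the authors' actual VSD), the index of a text can be taken as the fuzzy membership mass of its tokens normalized by text length:

```python
# Hypothetical violent sentiment dictionary (VSD): term -> fuzzy
# membership degree in [0, 1]. Entries are illustrative only.
VSD = {"attack": 0.9, "hate": 0.8, "stupid": 0.6, "ugly": 0.5}

def violence_index(text):
    """Sum the membership degrees of violent terms, normalized by length."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(VSD.get(t, 0.0) for t in tokens) / len(tokens)

score = violence_index("i hate this stupid post")  # (0.8 + 0.6) / 5 = 0.28
```

A real implementation would replace the whitespace tokenizer with proper Chinese/English text segmentation and draw membership degrees from the self-built VSD.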

Findings

The method helps reveal the internal relationships of online violence in a complex network environment, so that the sentiment dependence among users can be characterized from a granular perspective.

Originality/value

The membership degree of violent sentiment is introduced into user relationship recognition in the Fancircle community, and a text sentiment association recognition method based on the VSD is proposed. By calculating the violent sentiment value of user text, violent sentiment is annotated along the topic dimension of the text, and the partial-order relation between fuzzy concepts of violence under an effective confidence threshold is used to obtain the association relations.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 28 May 2024

Naurin Farooq Khan, Hajra Murtaza, Komal Malik, Muzammil Mahmood and Muhammad Aslam Asadi

This research aims to understand the smartphone security behavior using protection motivation theory (PMT) and tests the current PMT model employing statistical and predictive…

Abstract

Purpose

This research aims to understand smartphone security behavior using protection motivation theory (PMT) and tests the current PMT model through statistical and predictive analysis using machine learning (ML) algorithms.

Design/methodology/approach

This study employs a total of 241 questionnaire-based responses in a nonmandated security setting and uses a multimethod approach. The research model includes both security intention and behavior, making use of a valid smartphone security behavior scale. Structural equation modeling (SEM) was used as the explanatory analysis for understanding the relationships, and ML algorithms were employed to assess the predictive accuracy of the PMT model in an experimental evaluation.
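
The predictive step can be pictured with a deliberately minimal stand-in: PMT constructs as features, security intention as the label, and a 1-nearest-neighbour rule in place of the unspecified ML algorithms used in the paper. All feature values below are invented:

```python
# Sketch only: PMT constructs (response efficacy, self-efficacy) as
# features, security intention (1 = intends to secure, 0 = does not)
# as the label, classified with 1-nearest-neighbour.
def nn_predict(train, x):
    """1-NN: return the label of the closest training point (squared distance)."""
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

# (response_efficacy, self_efficacy) -> intention; values are made up.
train = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.3), 0), ((0.3, 0.1), 0)]
test = [((0.85, 0.9), 1), ((0.1, 0.2), 0)]
accuracy = sum(nn_predict(train, x) == y for x, y in test) / len(test)
```

The paper's reported 73% accuracy would come from this kind of held-out evaluation, run over the 241 real responses with the authors' chosen algorithms rather than this toy rule.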

Findings

The results revealed that the threat-appraisal element of the PMT did not influence the intention to secure smartphones, while response efficacy had a role in explaining smartphone security intention and behavior. The ML predictive analysis showed that the protection motivation elements were able to predict smartphone security intention and behavior with an accuracy of 73%.

Research limitations/implications

The findings imply that individuals' response efficacy should be improved through cybersecurity training programs in order to enhance protection motivation. Researchers can test other PMT models, including fear appeals, to improve the predictive accuracy.

Originality/value

This study is the first to use theory-driven SEM analysis together with data-driven ML analysis to bridge the gap between the theory and practice of smartphone security.

Details

Information Technology & People, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 14 July 2023

Bowen Zheng, Mudasir Hussain, Yang Yang, Albert P.C. Chan and Hung-Lin Chi

In the last decades, various building information modeling–life cycle assessment (BIM-LCA) integration approaches have been developed to assess the environmental impact of the…

Abstract

Purpose

In the last decades, various building information modeling–life cycle assessment (BIM-LCA) integration approaches have been developed to assess the environmental impact of the built asset. However, there is a lack of consensus on the optimal BIM-LCA integration approach that provides the most accurate and efficient assessment outcomes. To compare and determine their accuracy and efficiency, this study aimed to investigate four typical BIM-LCA integration solutions, namely, conventional, parametric modeling, plug-in and industry foundation classes (IFC)-based integration.

Design/methodology/approach

The four integration approaches were developed and applied using the same building project. A quantitative technique for evaluating the accuracy and efficiency of BIM-LCA integration solutions was used. Four indicators for assessing the performance of BIM-LCA integration were (1) validity of LCA results, (2) accuracy of bill-of-quantity (BOQ) extraction, (3) time for developing life cycle inventories (i.e. developing time) and (4) time for calculating LCA results (i.e. calculation time).
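
Indicator (1), the validity of LCA results, can be read as a relative error against a reference outcome. The sketch below uses invented embodied-carbon totals, with the IFC-based figure chosen to echo the roughly 1% error reported in the findings:

```python
def relative_error(result, reference):
    """Relative deviation of an approach's LCA result from the reference."""
    return abs(result - reference) / abs(reference)

# Hypothetical life-cycle results (kg CO2e) for the same building from
# each integration approach, against a reference result.
reference = 1000.0
approaches = {"conventional": 1002.0, "plug-in": 1015.0, "IFC-based": 990.0}
errors = {name: relative_error(v, reference) for name, v in approaches.items()}
```

The paper's other three indicators (BOQ accuracy, developing time, calculation time) would be tabulated alongside these errors to expose the accuracy/efficiency trade-off.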

Findings

The results show that the plug-in-based approach outperforms the others in developing and calculation time, while the conventional one achieves the highest accuracy in BOQ extraction and result validity. The parametric modeling approach outperforms the IFC-based method regarding BOQ extraction, developing time and calculation time. Despite this, the IFC-based approach produces LCA outcomes with approximately 1% error, proving its validity.

Originality/value

This paper forms one of the first studies that employ a quantitative and objective method to determine the performance of four typical BIM-LCA integration solutions and reveal the trade-offs between the accuracy and efficiency of the integration approaches. The findings provide practical references for LCA practitioners to select appropriate BIM-LCA integration approaches for evaluating the environmental impact of the built asset during the design phase.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 30 August 2023

Donghui Yang, Yan Wang, Zhaoyang Shi and Huimin Wang

Improving the diversity of recommendation information has become one of the latest research hotspots to solve information cocoons. Aiming to achieve both high accuracy and…

Abstract

Purpose

Improving the diversity of recommended information has become one of the latest research hotspots for countering information cocoons. This paper proposes and discusses a hybrid method that aims to achieve both high accuracy and high diversity in a recommender system.

Design/methodology/approach

This paper integrates the latent Dirichlet allocation (LDA) model and the locality-sensitive hashing (LSH) algorithm to design a topic recommendation system. To measure the effectiveness of the method, this paper builds three-level categories of journal paper abstracts from the Web of Science platform as experimental data.
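
The LSH side of this integration can be sketched with random-hyperplane hashing over LDA topic distributions: documents with similar topic mixes tend to receive equal or nearby bit signatures and so land in the same candidate bucket. The dimensions and vectors below are invented, and the LDA step itself is assumed to have already produced the topic distributions:

```python
# Random-hyperplane LSH over topic vectors: one sign bit per hyperplane.
import random

random.seed(0)
N_TOPICS, N_PLANES = 4, 8
planes = [[random.gauss(0, 1) for _ in range(N_TOPICS)] for _ in range(N_PLANES)]

def lsh_signature(topic_vec):
    """One bit per hyperplane: which side of the plane the vector falls on."""
    return tuple(int(sum(p * x for p, x in zip(plane, topic_vec)) >= 0)
                 for plane in planes)

doc_a = [0.7, 0.1, 0.1, 0.1]   # mostly topic 0
doc_b = [0.6, 0.2, 0.1, 0.1]   # similar mix -> likely a nearby signature
sig_a, sig_b = lsh_signature(doc_a), lsh_signature(doc_b)
```

Recommending from neighbouring buckets, rather than only the nearest items, is what lets the hybrid trade a little accuracy for diversity.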

Findings

(1) The results illustrate that the diversity of recommended items is significantly enhanced by leveraging the hashing function to overcome information cocoons. (2) By integrating the topic model and the hashing algorithm, diversity can be achieved without losing recommendation accuracy at suitably refined topic levels.

Originality/value

The hybrid recommendation algorithm developed in this paper can overcome the dilemma of high accuracy and low diversity. The method could ameliorate the recommendation in business and service industries to address the problems of information overload and information cocoons.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 28 May 2024

Guang-Zhi Zeng, Zheng-Wei Chen, Yi-Qing Ni and En-Ze Rui

Physics-informed neural networks (PINNs) have become a new tendency in flow simulation, because of their self-advantage of integrating both physical and monitored information of…

Abstract

Purpose

Physics-informed neural networks (PINNs) have become a new trend in flow simulation because of their inherent advantage of integrating both physical and monitored field information when solving the Navier–Stokes equations and their variants. In view of these strengths, this study aims to investigate the impact of the spatially embedded data distribution on the PINN-reconstructed flow field around a train in a crosswind environment.

Design/methodology/approach

PINN can integrate data residuals with physical residuals into the loss function to train its parameters, allowing it to approximate the solution of the governing equations. In addition, with the aid of labelled training data, PINN can also incorporate the real site information of the flow field in model training. In light of this, the PINN model is adopted to reconstruct a two-dimensional time-averaged flow field around a train under crosswinds in the spatial domain with the aid of sparse flow field data, and the prediction results are compared with the reference results obtained from numerical modelling.
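
In generic PINN notation (ours, not the paper's), the composite loss combines the two residual terms over $N_d$ labelled data points and $N_p$ collocation points, with a weighting factor $\lambda$:

```latex
\mathcal{L}(\theta) \;=\;
\underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}\bigl\|u_\theta(x_i)-u_i\bigr\|^2}_{\text{data residual}}
\;+\;
\underbrace{\frac{\lambda}{N_p}\sum_{j=1}^{N_p}\bigl\|\mathcal{N}\!\left[u_\theta\right](x_j)\bigr\|^2}_{\text{physics residual}}
```

where $u_\theta$ is the network approximation and $\mathcal{N}[\cdot]$ is the residual operator of the governing (Navier–Stokes) equations. The paper's question is, in effect, how the placement and density of the $x_i$ in the first term affect the reconstruction.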

Findings

The PINN predictions showed low discrepancy with the results obtained from numerical simulations. The results indicate that a threshold of the spatially embedded data density exists in the near-wall and far-wall areas on the train's leeward side, as well as in the area near the train surface: the PINN reconstruction accuracy degrades if the spatially embedded data density exceeds or falls below this threshold. The optimum arrangement of the spatially embedded data for reconstructing the flow field of the train in crosswinds is also obtained in this work.

Originality/value

In this work, a strategy of reconstructing the time-averaged flow field of the train under crosswind conditions is proposed based on the physics-informed data-driven method, which enhances the scope of neural network applications. In addition, for the flow field reconstruction, the effect of spatial embedded data arrangement in PINN is compared to improve its accuracy.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 15 March 2024

Nawar Boujelben, Manal Hadriche and Yosra Makni Fourati

The purpose of this study is to examine the interplay between integrated reporting quality (IRQ) and capital markets. More specifically, the authors test the impact of IRQ on…

Abstract

Purpose

The purpose of this study is to examine the interplay between integrated reporting quality (IRQ) and capital markets. More specifically, the authors test the impact of IRQ on stock liquidity, cost of capital and analyst forecast accuracy.

Design/methodology/approach

The sample consists of listed firms on the Johannesburg Stock Exchange in South Africa, covering the period from 2012 to 2020. The IRQ measure used in this study is based on data from Ernst and Young. To test the proposed hypotheses, the authors conducted a generalized least squares regression analysis.

Findings

The empirical results evince a positive relationship between IRQ and stock liquidity. However, the authors did not find a significant effect of IRQ on the cost of capital and financial analysts’ forecast accuracy. In robustness tests, it was shown that firms with a higher IRQ score exhibit higher liquidity and improved analyst forecast accuracy. Additional analysis indicates a negative association between IRQ and the cost of capital, as well as a positive association between IRQ and financial analyst forecast accuracy for firms with higher IRQ scores (TOP ten, Excellent, Good).

Originality/value

The study stands as one of the initial endeavors to investigate the impact of IRQ on the capital market. It provides valuable insights for managers and policymakers who are interested in enhancing disclosure practices within the financial market, and the findings help investors make informed investment decisions.

Details

Journal of Financial Reporting and Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-2517

Article
Publication date: 21 November 2023

Armin Mahmoodi, Leila Hashemi and Milad Jasemi

In this study, the central objective is to foresee stock market signals with the use of a proper structure to achieve the highest accuracy possible. For this purpose, three hybrid…

Abstract

Purpose

In this study, the central objective is to foresee stock market signals with the use of a proper structure to achieve the highest accuracy possible. For this purpose, three hybrid models have been developed for the stock markets, each combining a support vector machine (SVM) with one of the meta-heuristic algorithms of particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA). All the analyses are technical and are based on the Japanese candlestick model.

Design/methodology/approach

Based on the results achieved, the most suitable algorithm is chosen to anticipate sell and buy signals. Moreover, the authors compare the validation results of the models designed in this study with the basic models of three articles from past years. In the first model, SVM is combined with PSO: SVM serves as the classification agent, while PSO searches the problem-solving space precisely and quickly. In the second model, SVM and ICA are applied to stock market timing, with ICA used as an optimization agent for the SVM parameters. In the third model, SVM and GA are studied, where GA acts as an optimizer and feature-selection agent.
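
The GA-as-feature-selector idea in the third model can be sketched as follows. Each chromosome is a bitmask over candlestick features; the fitness would normally be the cross-validated accuracy of an SVM trained on the selected features, stubbed here with a toy scoring function (the "informative" features 0-4 are an invention of this sketch):

```python
# Minimal genetic algorithm for feature selection, with a stand-in fitness.
import random

random.seed(1)
N_FEATURES, POP, GENS = 10, 20, 30

def fitness(mask):
    # Stand-in for "train SVM on selected features, return accuracy":
    # reward the hypothetical informative features 0-4, penalize extras.
    return sum(mask[:5]) - 0.2 * sum(mask[5:])

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)        # elitist selection
        survivors = pop[:POP // 2]
        children = []
        for _ in range(POP - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_FEATURES)       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

PSO and ICA slot into the same loop structure, but optimize the SVM's continuous hyperparameters rather than a discrete feature mask.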

Findings

The results show that all the new models can predict accurately for only six days. Comparing the confusion-matrix results, the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA showed better performance than the other models in executing the implemented models.

Research limitations/implications

In this study, stock market data for the years 2013–2021 were analyzed; the long timeframe makes the input data analysis challenging, as the data must be adjusted for the changed conditions under which they were generated.

Originality/value

In this study, two methods have been developed within a candlestick model: raw-based and signal-based approaches, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

EuroMed Journal of Business, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1450-2194

Open Access
Article
Publication date: 8 December 2023

Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi

The proposed model has been aimed to predict stock market signals by designing an accurate model. In this sense, the stock market is analysed by the technical analysis of Japanese…

Abstract

Purpose

The proposed model aims to predict stock market signals by designing an accurate model. The stock market is analysed through the technical analysis of Japanese candlesticks, combining a support vector machine (SVM) with the following meta-heuristic algorithms: particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA).

Design/methodology/approach

Among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors compare their results with the same basic models of three past articles to validate the designed model. In the first model, SVM is combined with PSO, which searches the solution space precisely and at high speed. In the second model, SVM and ICA are examined for stock market timing, with ICA serving as an improver of the SVM parameters. Finally, in the third model, SVM and GA are studied, where GA acts as an optimizer and feature-selection agent.

Findings

The results indicate that the prediction accuracy of all the new models is high for only six days. With respect to the confusion-matrix results, the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA showed better performance than the other models in executing the implemented models.

Research limitations/implications

In this study, the long timespan of the data, covering the years 2013–2021, makes the input data analysis challenging, as the data must be adjusted with respect to changing conditions.

Originality/value

In this study, two methods have been developed within a candlestick model: raw-based and signal-based approaches, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

Journal of Capital Markets Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-4774

Open Access
Article
Publication date: 30 May 2024

Nadja Fugleberg Damtoft, Dennis van Liempd and Rainer Lueg

Researchers and practitioners have recently been interested in corporate sustainability performance (CSP). However, knowledge on measuring CSP is limited. Many CSP-measurements…

Abstract

Purpose

Researchers and practitioners have recently been interested in corporate sustainability performance (CSP). However, knowledge on measuring CSP is limited. Many CSP-measurements are eclectic, without guidance for contextual applications. This paper aims to develop a conceptual framework that categorizes, explains and evaluates measurements based on their accuracy and precision and provides a guideline for their context-specific application.

Design/methodology/approach

The authors conducted a systematic literature review of an initial sample of 1,415 papers.

Findings

The final sample of 74 papers suggested four measurement categories: isolated indicators, indicator frameworks, Sustainability Balanced Scorecards (SBSC) and Sustainability Performance Measurement Systems (SPMS). The analysis reveals that isolated indicators are inaccurate and imprecise, which limits their application to organizations with delimited, specific measurements of parts of CSP because of the risk of a GIGO effect (low-quality input will always produce low-quality output). CSP indicator frameworks are imprecise but accurate, making them applicable to organizations that handle a more significant amount of CSP data; they carry a risk of greensplashing, i.e. many indicators not connected to the industry, organization or strategy. In contrast, SBSCs are precise but inaccurate and are valuable for organizations that desire a comprehensive strategic management tool but have limited capacity to handle sustainability issues. They pose a risk of the streetlight effect, where organizations measure not the relevant indicators but what is easy to measure.

Originality/value

The ideal CSP-measurement was identified as SPMSs, which are both precise and accurate. SPMSs are useful for organizations with complex, comprehensive, connected and tailored indicators but are methodologically challenging.

Details

Journal of Global Responsibility, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2041-2568

Open Access
Article
Publication date: 22 August 2023

Mahesh Babu Purushothaman and Kasun Moolika Gedara

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program to correlate the computer vision (recorded and live videos using mobile and…

Abstract

Purpose

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program that correlates computer vision (recorded and live videos from mobile and embedded cameras) to aid manual-lifting human pose detection, analysis and training in the construction sector.

Design/methodology/approach

The research method combines a literature review with a pragmatic approach and lab validation of the acquired data. Adopting this practical approach, the authors developed the SVBM, an AI program that correlates computer vision from recorded and live videos captured by mobile and embedded cameras.

Findings

Results show that SVBM observes the relevant events without additional attachments to the human body and compares them with the standard axis to identify abnormal postures using mobile and other cameras. Angles of critical nodal points are projected through human pose detection, with body-part movement angles calculated by a novel software program and mobile application. The SVBM demonstrates its ability to capture and analyse data in real time and offline using previously recorded videos, and it is validated for program coding and results repeatability.
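
The joint-angle computation at the heart of this pipeline reduces to the angle between two limb vectors at a nodal keypoint. A minimal sketch, with keypoint names chosen for illustration (the paper's actual pose-detection pipeline is not shown):

```python
# Angle at a joint from three 2D pose keypoints, e.g. shoulder-elbow-wrist.
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    ang = math.degrees(math.atan2(v1[1], v1[0]) - math.atan2(v2[1], v2[0]))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

# A straight arm: shoulder, elbow and wrist collinear -> 180 degrees.
angle = joint_angle((0, 0), (1, 0), (2, 0))
```

Comparing such angles (and the derived neckline/torso line) against safe-posture thresholds is what allows abnormal lifting postures to be flagged without body-worn sensors.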

Research limitations/implications

A limitation of the literature review methodology is that it may not keep pace with the most up-to-date field knowledge; this is offset by restricting the review to the last two decades. The review may not have captured all published articles, because database access was restricted and the search was conducted only in English, and fruitful articles in less popular journals may have been omitted. These limitations are acknowledged. The critical limitation is that trust, privacy and psychological issues are not addressed in SVBM; however, the practical benefits of SVBM naturally offset this limitation.

Practical implications

The theoretical and practical implications include customised, individualistic prediction and the prevention of most posture-related hazardous behaviours before a critical injury happens. The theoretical implications include mimicking the human pose and enabling lab-based analysis without attaching sensors that would naturally alter working poses. SVBM would help researchers develop more accurate data and theoretical models that are close to actual conditions.

Social implications

By using SVBM, the possibility of early detection and prevention of musculoskeletal disorders is high; the social implications include the benefits of a healthier society and a health-conscious construction sector.

Originality/value

Human pose detection, especially joint angle calculation in a work environment, is crucial to the early detection of musculoskeletal disorders. Conventional digital technology-based methods to detect pose flaws focus on location information from wearables and laboratory-controlled motion sensors. For the first time, this paper presents novel computer vision (recorded and live videos using mobile and embedded cameras) and digital image-related deep learning methods that require no attachment to the human body for manual-handling pose detection and the analysis of angles, neckline and torso line in an actual construction work environment.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099
